---
author:
- 'Ricardo A. E. Mendes'
bibliography:
- 'ref.bib'
title: Extending tensors on polar manifolds
---
Introduction
============
Let $(M,g)$ be a Riemannian manifold and $G$ a Lie group acting on $M$ properly by isometries. Recall that, by definition (see [@PalaisTerng87], [@GroveZiller12]), this action is called *polar* if there exists an immersed submanifold $\Sigma\to M$ meeting all $G$-orbits orthogonally. Such a submanifold $\Sigma$ is called a *section*, and comes with a natural action by a discrete group of isometries $W=W(\Sigma)$, called its *generalized Weyl group*. Sections are always totally geodesic, and the immersion $\Sigma \to M$ induces an isometry $\Sigma /W \to M/G$, so in particular $M/G$ is a Riemannian orbifold.
Denote by $C^\infty(T^{k,l}M)^G$, respectively $C^\infty(T^{k,l}\Sigma)^{W(\Sigma)}$, the sets of smooth $(k,l)$-tensors on $M$, respectively $\Sigma$, which are invariant under $G$, respectively $W$. Our main result states that the natural restriction map $C^\infty(T^{k,l}M)^G\to C^\infty(T^{k,l}\Sigma)^{W(\Sigma)}$ is surjective:
\[mainthm\] Let $M$ be a polar $G$-manifold with immersed section $i:\Sigma\to M$, and $W(\Sigma)$ the generalized Weyl group associated to $\Sigma$. Define the pull-back (restriction) map $$\psi=i^* :C^\infty(T^{k,l}M)^G\to C^\infty(T^{k,l}\Sigma)^{W(\Sigma)}$$ by $$[\psi(\beta)](x)(v_1,\ldots, v_l)=P^{\otimes k}\left[\beta(i(x))\big((di)_x v_1, \ldots, (di)_x v_l\big)\right]$$ where $P:T_{i(x)}M\to T_x \Sigma$ is the orthogonal projection. Then $\psi$ is surjective.
In the case of functions, that is, $(k,l)=(0,0)$, the map $\psi$ above is an isomorphism. This is known as the Chevalley Restriction Theorem — see [@PalaisTerng87].
Note that Theorem \[mainthm\] applies to $(0,l)$-tensors with symmetry properties, such as symmetric $l$-tensors, exterior $l$-forms, etc. This can be phrased naturally in terms of Weyl’s construction (see [@FultonHarris] Lecture 6). Recall that Weyl’s construction associates to each partition $\lambda=(\lambda_1, \ldots, \lambda_k)$ of $l\in\mathbb{N}$ a functor $\mathbb{S}_\lambda$ of vector spaces, called its Schur functor. One recovers Sym$^l$ and $\Lambda^l$ as the Schur functors associated to $\lambda=(l)$ and $\lambda=(1,1, \ldots, 1)$, respectively.
\[symmetrizer\] Let $M$ be a Riemannian manifold with an isometric polar action by $G$. Let $\lambda=(\lambda_1, \ldots , \lambda_k)$ be a partition of $l\in\mathbb{N}$, and consider the associated Schur functor $\mathbb{S}_\lambda$. Then the (surjective) restriction map $\psi: C^\infty(T^{0,l}M)^G\to C^\infty(T^{0,l}\Sigma)^W$ induces a surjective map $$\psi_\lambda:C^\infty(\mathbb{S}_\lambda (T^*M))^G \to C^\infty(\mathbb{S}_\lambda (T^*\Sigma))^W$$
For context, consider two special cases of Corollary \[symmetrizer\]: exterior $l$-forms and symmetric $2$-tensors. In the case of exterior forms the conclusion of Corollary \[symmetrizer\] is implied by P. Michor’s Basic Forms Theorem — see [@Michor96] and [@Michor97]. In fact, Michor’s Theorem gives more precise information: it states that for a polar $G$-manifold $M$ with section $\Sigma$, every smooth $W(\Sigma)$-invariant $l$-form on $\Sigma$ can be extended *uniquely* to a smooth $G$-invariant $l$-form on $M$ which is *basic*, that is, which vanishes when contracted with vectors tangent to the $G$-orbits.
The case of symmetric $2$-tensors follows from [@Mendes11], which is again a sharper statement in the sense that a set of basic tensors is identified. This is used in the following extension result for Riemannian metrics:
\[metric\] Let $G$ act polarly on the Riemannian manifold $M$ with section $\Sigma$ and generalized Weyl group $W$. Consider the restriction map (which is surjective by Corollary \[symmetrizer\]): $$\psi=|_\Sigma: C^\infty(\mathrm{Sym}^2M)^G \to C^\infty(\mathrm{Sym}^2\Sigma)^W$$ For any Riemannian metric $\sigma\in C^\infty(\mathrm{Sym}^2\Sigma)^W$, there is a Riemannian metric $\tilde{\sigma}\in C^\infty(\mathrm{Sym}^2M)^G$ such that $\psi(\tilde{\sigma})=\sigma$, and with respect to which the $G$-action is polar with the same section $\Sigma$.
For both Theorem \[metric\] and Michor’s Basic Forms Theorem, the proof relies on polarization results in the Invariant Theory of finite reflection groups — see section \[polarizations\]. On the other hand, the main ingredient in the proof of Theorem \[mainthm\] is a multi-variable version of the Chevalley Restriction Theorem due to Tevelev — see section \[MVCRT\].
An application of Theorem \[metric\] is to give a partial answer to a natural question by K. Grove: Given a proper isometric action of $G$ on a Riemannian manifold $(M,g)$, describe the set of all metrics on $M/G$ which are induced by smooth $G$-invariant metrics $g_0$ on $M$. Theorem \[metric\] answers this question under the additional hypothesis that $M$ is a polar $G$-manifold. Namely, that set of metrics on $M/G=\Sigma/W$ coincides with the set of smooth orbifold metrics.
Another application of Theorem \[metric\] is an important step in the main reconstruction result in [@GroveZiller12]. This was in fact our main motivation for Theorem \[metric\].
The present paper is organized as follows.
In section \[MVCRT\] we state Tevelev’s multi-variable version of the Chevalley Restriction Theorem for isotropy representations of symmetric spaces (Theorem \[Tevelev\]), and generalize it to the class of polar representations (Corollary \[polarMVCRT\]).
Section \[sectionpolar\] is concerned with the proofs of Theorem \[mainthm\] and Corollary \[symmetrizer\].
In section \[polarizations\] we show how the algebraic results behind Michor’s Basic Forms Theorem [@Michor96; @Michor97] and Theorem \[metric\] (namely Solomon’s Theorem [@Solomon63] and Theorem \[hessian\]) are in fact results about polarizations in the Invariant Theory of finite reflection groups. We then show in detail how Theorem \[metric\] follows from Theorem \[hessian\].
The Appendix provides a proof of Theorem \[hessian\]. It is computer-assisted, and mostly reproduced from the author’s PhD dissertation [@Mendes11].
Acknowledgements: Part of this work was completed during my PhD, and I would like to thank my advisor W. Ziller for the long-term support. I would also like to thank A. Lytchak and J. Tevelev for useful communication.
Multi-variable Chevalley Restriction Theorem {#MVCRT}
============================================
Let $(G,K)$ be a symmetric pair, and consider the isotropy representation of $K$ on $V=T_K G/K$, also called an s-representation. This is polar, and any maximal abelian sub-algebra $\Sigma\subset V$ is a section. Its generalized Weyl group $W$ is also called the “baby Weyl group”. The classic Chevalley Restriction Theorem says that $$|_\Sigma : {\mathbb{R}}[V]^K\to {\mathbb{R}}[\Sigma]^W$$ is an isomorphism (see [@Warner], page 143).
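As a simple illustration (our example, not taken from the references): for the symmetric pair $(G,K)=(SO(3),SO(2))$, the isotropy representation is the standard action of $K=SO(2)$ on $V={\mathbb{R}}^2$. Every line through the origin, say $\Sigma={\mathbb{R}}\times\{0\}$, is a section, and $W=\{\pm \mathrm{Id}\}$ acts on $\Sigma$ by sign. Here the Chevalley Restriction Theorem reads $$|_\Sigma: {\mathbb{R}}[x,y]^{SO(2)}={\mathbb{R}}[x^2+y^2]\ \longrightarrow\ {\mathbb{R}}[x^2]={\mathbb{R}}[x]^{W},$$ an isomorphism sending the generator $x^2+y^2$ to $x^2$.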
Now consider the diagonal action of $K$ on $V^m$ (respectively $W$ on $\Sigma^m$), and the corresponding algebras of invariant ($m$-variable) polynomials ${\mathbb{R}}[V^m]^K$ (respectively ${\mathbb{R}}[\Sigma^m]^W$). In contrast with the single-variable case, the restriction map $|_\Sigma $ is not injective. On the other hand, surjectivity is due to Tevelev:
\[Tevelev\] In the notation above, the restriction map $|_\Sigma:{\mathbb{R}}[V^m]^K\to {\mathbb{R}}[\Sigma^m]^W $ is surjective.
Remarks: The proof of Theorem \[Tevelev\] relies on the Kumar-Mathieu Theorem, previously known as the PRV conjecture, see [@Kumar89] and [@Mathieu89]. Joseph [@Joseph97] proved the Theorem above in the special case of the adjoint action, using similar techniques. In [@Tevelev00] the Theorem above is stated only for $m=2$ factors, but on page 324 it is remarked that “Actually, this (and Joseph’s) Theorem also holds for any number of summands \[...\] ”.
We observe that Theorem \[Tevelev\] generalizes to the class of *polar* representations (see [@Dadok85] for a treatment of polar representations).
\[polarMVCRT\] Let $K\subset O(V)$ be a polar representation, with section $\Sigma$ and generalized Weyl group $W\subset O(\Sigma)$. Then the $m$-variable restriction is surjective: $$|_\Sigma :{\mathbb{R}}[V^m]^K\to {\mathbb{R}}[\Sigma^m]^W$$
Let $K_0$ be the connected component of $K$ which contains the identity. It is polar with the same section $\Sigma$. Let $W_0$ be its generalized Weyl group, so that $W_0\subset W$. From the classification of irreducible polar representations in [@Dadok85], it follows that the maximal subgroup $\tilde{K}\subset O(V)$, containing $K_0$, that is orbit-equivalent to $K_0$, defines an s-representation. (This fact has been given a classification-free proof in [@EschenburgHeintze99].) Note that $K_0$ and $\tilde{K}$ have the same sections and generalized Weyl groups.
Theorem \[Tevelev\] states that $$|_\Sigma:{\mathbb{R}}[V^m]^{\tilde{K}}\to{\mathbb{R}}[\Sigma^m]^{W_0}$$ is surjective. But since $\tilde{K}\supset K_0$, we have ${\mathbb{R}}[V^m]^{\tilde{K}}\subset {\mathbb{R}}[V^m]^{K_0}$, and so $$|_\Sigma:{\mathbb{R}}[V^m]^{K_0}\to{\mathbb{R}}[\Sigma^m]^{W_0}$$ is again surjective.
Finally, to show $|_\Sigma:{\mathbb{R}}[V^m]^{K}\to{\mathbb{R}}[\Sigma^m]^{W}$ is surjective, let $\beta\in{\mathbb{R}}[\Sigma^m]^{W}$. Then there is $\tilde{\beta}_0\in {\mathbb{R}}[V^m]^{K_0}$ which restricts to $\beta$. Define $$\tilde{\beta}=\frac{1}{|K/K_0|}\sum_{h\in K/K_0} h \tilde{\beta}_0$$ Since $\tilde{\beta}$ equals the average of $\tilde{\beta}_0$ over $K$, it is $K$-invariant. To show that $\tilde{\beta}|_\Sigma=\beta$, we note that each coset $hK_0\in K/K_0$ can be represented by some $h\in N(\Sigma)$. Indeed, for an arbitrary $h\in K$, $h\Sigma$ is a section for $K$, hence also for $K_0$. Since $K_0$ acts transitively on the sections, there is $h_0\in K_0$ such that $hh_0^{-1}\in N(\Sigma)$. Therefore $$\tilde{\beta}|_\Sigma= \frac{1}{|K/K_0|}\sum_{h\in K/K_0} (h \tilde{\beta}_0)|_\Sigma=\frac{1}{|K/K_0|}\sum \beta= \beta$$ because $\beta$ is $W$-invariant.
Note that the algebra of multi-variable polynomials ${\mathbb{R}}[V^m]$ is graded by $m$-tuples of natural numbers $(d_1,\ldots,d_m)$, and similarly for ${\mathbb{R}}[\Sigma^m]$. Consider the subspace generated by the polynomials of degree $(*,1,\ldots, 1)$. These can be identified with those tensor fields of type $(0,m-1)$ which have polynomial coefficients, that is, members of ${\mathbb{R}}[V, (V^*)^{m-1}]$, respectively ${\mathbb{R}}[\Sigma, (\Sigma^*)^{m-1}]$.
Since this grading is preserved by the restriction map $|_\Sigma$, Corollary \[polarMVCRT\] implies:
\[maincor\] Let $K\subset O(V)$ be a polar representation, with section $\Sigma$ and generalized Weyl group $W\subset O(\Sigma)$. Then the restriction map for polynomial-coefficient invariant $(0,l-1)$-tensors $$|_\Sigma :{\mathbb{R}}[V, (V^*)^{l-1}]^K\to {\mathbb{R}}[\Sigma, (\Sigma^*)^{l-1}]^W$$ is surjective.
Extending tensors {#sectionpolar}
=================
The goal of this section is to provide proofs of Theorem \[mainthm\] and Corollary \[symmetrizer\]. We start with two Lemmas that will be used in proving Theorem \[mainthm\].
\[polarrep\] Let $V$ be a polar $K$-representation with section $\Sigma$ and generalized Weyl group $W$. Then restriction to $\Sigma$ is a surjective map $$|_\Sigma :C^\infty(T^{0,l}V)^K \to C^\infty(T^{0,l}\Sigma)^W$$
The space of polynomial-coefficient $(0,l)$-tensors ${\mathbb{R}}[V,(V^*)^l]^K\subset C^\infty(T^{0,l}V)^K$ is generated, as an ${\mathbb{R}}[V]^K$-module, by finitely many homogeneous elements $\sigma_1, \ldots, \sigma_r$ (see [@Springer] Proposition 2.4.14).
Since ${\mathbb{R}}[V]^K={\mathbb{R}}[\Sigma]^W$, Corollary \[maincor\] implies that the restrictions $\sigma_1|_\Sigma, \ldots \sigma_r|_\Sigma$ generate ${\mathbb{R}}[\Sigma,(\Sigma^*)^l]^W$ as an ${\mathbb{R}}[\Sigma]^W$-module.
Then, by an argument involving the Malgrange Division Theorem and the fact that ${\mathbb{R}}[\Sigma,(\Sigma^*)^l]^W$ is dense in $C^\infty(T^{0,l}\Sigma)^W$ (see [@Field77] Lemma 3.1), we conclude that $\sigma_1|_\Sigma, \ldots \sigma_r|_\Sigma$ generate $C^\infty(\Sigma, (\Sigma^*)^l)^W=C^\infty(T^{0,l}\Sigma)^W$ as a $C^\infty(\Sigma)^W$-module. This implies that $|_\Sigma :C^\infty(T^{0,l}V)^K \to C^\infty(T^{0,l}\Sigma)^W $ is surjective.
The next Lemma describes the smooth $G$-invariant tensors on a tube $\mathcal{U}=G\times_K V$ in terms of smooth $K$-invariant tensors on the slice $V$.
\[tube\] Let $K\subset G$ be Lie groups with $K$ compact, and $V$ be a $K$-representation. Define $\mathcal{U}=G\times_K V$ to be the quotient of $G\times V$ by the free action of $K$ given by $k\cdot (g,v)= (g k^{-1}, kv)$, and identify $V$ with the subset of $\mathcal{U}$ which is the image of $\{1\}\times V\subset G\times V$ under the natural quotient projection $G\times V \to \mathcal{U}$.
Then there is a $K$-representation $H$ and an isomorphism $$C^\infty(T^{0,l} V)^K\times C^\infty (V,H)^K \to C^\infty(T^{0,l}\mathcal{U})^G$$ Under this identification the restriction map $$|_V :C^\infty(T^{0,l}\mathcal{U})^G \to C^\infty(T^{0,l} V)^K$$ corresponds to projection onto the first factor. In particular $|_V$ is onto.
To describe $H$, let $p\in \mathcal{U}$ be the image of $(1,0)\in G\times V$ in $\mathcal{U}$. Then $(V^*)^{\otimes l}$ is a $K$-invariant subspace of $(T^*_p\mathcal{U})^{\otimes l}$, and we define $H$ to be its $K$-invariant complement, so that $$(T^*_p\mathcal{U})^{\otimes l}= (V^*)^{\otimes l} \oplus H$$ as $K$-representations.
We define $\Psi: C^\infty(T^{0,l} V)^K\times C^\infty (V,H)^K \to C^\infty(T^{0,l}\mathcal{U})^G $ in the following way: Given $(\beta_1 ,\beta_2)\in C^\infty(T^{0,l} V)^K\times C^\infty (V,H)^K$, let $\tilde{\beta}:G\times V\to T^{0,l}\mathcal{U}$ be given by $$\tilde{\beta} (g,v)= g\cdot (\beta_1 (v) +\beta_2 (v))$$ Since $\tilde{\beta}$ is $K$-invariant, it descends to $\beta = \Psi(\beta_1, \beta_2) :\mathcal{U}\to T^{0,l}\mathcal{U} $.
The map $\beta$ is smooth because $\tilde{\beta}$ is smooth and the action of $K$ on $G\times V$ is free. Moreover $\beta$ is clearly a $G$-invariant cross-section of the bundle $T^{0,l}\mathcal{U}\to \mathcal{U}$, and $\beta |_V = \beta_1$.
Now the proof of Theorem \[mainthm\] essentially follows from Lemmas \[polarrep\], \[tube\], together with the Slice Theorem (see [@Bredon]) and partitions of unity:
First note that it is enough to consider $(0,l)$-tensors. Indeed, $\psi$ for $(k,l)$-tensors equals the composition of $\psi$ for $(0,k+l)$-tensors with the raising and lowering of indices (using the Riemannian metric on $M$) that transforms between $(k,l)$-tensors and $(0,k+l)$-tensors.
It is enough to prove surjectivity of $\psi$ locally around each orbit in $M$, because of the existence of $G$-invariant partitions of unity subject to any cover by $G$-invariant open sets in $M$.
So let $p\in M$ be an arbitrary point, with orbit $Gp$, isotropy $K=G_p$, and slice $V=(T_pGp)^\perp$. The Slice Theorem (see [@Bredon]) then says that for an open $G$-invariant tubular neighborhood $\mathcal{U}$ of the orbit $Gp$ there is a $G$-equivariant diffeomorphism $$E: G\times_K V\to \mathcal{U}$$ From now on we will identify $\mathcal{U}$ with $G\times_K V$ through $E$.
The slice representation of $K$ on $V$ is polar (see [@PalaisTerng87]). If $\Sigma\subset V$ is a section with generalized Weyl group $W(\Sigma)$, the quotients $\mathcal{U} /G$, $V/K$ and $\Sigma/W$ are isometric.
Since the inclusion $\Sigma\to\mathcal{U}$ factors as $\Sigma\to V\to \mathcal{U}$, the restriction map $\psi$ factors as $\psi=|_\Sigma^V\circ |_V^\mathcal{U}$, where $$|_\Sigma^V:C^\infty(T^{0,l}V)^K \to C^\infty(T^{0,l}\Sigma)^W \qquad |_V^\mathcal{U}: C^\infty(T^{0,l}\mathcal{U})^G\to C^\infty(T^{0,l}V)^K$$ Both these maps are surjective, by Lemmas \[polarrep\] and \[tube\]. Therefore $\psi$ is surjective.
Now we turn to Corollary \[symmetrizer\], about $(0,l)$-tensors with symmetry properties, such as exterior forms and symmetric tensors.
The Schur functor $\mathbb{S}_\lambda$ is defined in terms of a certain element $c_\lambda \in \mathbb{Z}S_l$ in the group ring $\mathbb{Z}S_l$, called the Young symmetrizer associated to $\lambda$ — see [@FultonHarris] Lecture 6. Indeed, given a vector space $V$, the group $S_l$ acts on $V^{\otimes l}$, and so $c_\lambda$ determines a linear map $V^{\otimes l} \to V^{\otimes l}$. The image of this map is defined to be $\mathbb{S}_\lambda (V)$.
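To make the construction concrete in the simplest case (a sketch of ours, not part of the paper's argument): for $l=2$ the group ring $\mathbb{Z}S_2$ contains two Young symmetrizers, $e+(12)$ and $e-(12)$, which act on a $(0,2)$-tensor by symmetrization and antisymmetrization, respectively:

```python
# The two Young symmetrizers in Z[S_2] act on a (0,2)-tensor,
# stored as a nested list T[i][j], as T +/- T^t.

def transpose(T):
    n = len(T)
    return [[T[j][i] for j in range(n)] for i in range(n)]

def c_sym(T):   # (e + (12)) . T  =  T + T^t, the symmetric part (x2)
    Tt, n = transpose(T), len(T)
    return [[T[i][j] + Tt[i][j] for j in range(n)] for i in range(n)]

def c_alt(T):   # (e - (12)) . T  =  T - T^t, the alternating part (x2)
    Tt, n = transpose(T), len(T)
    return [[T[i][j] - Tt[i][j] for j in range(n)] for i in range(n)]

T = [[1, 2], [5, 3]]
S, A = c_sym(T), c_alt(T)
assert S == transpose(S)                                   # image is symmetric
assert A == [[-x for x in row] for row in transpose(A)]    # image is alternating
```

For general $l$ a Young symmetrizer combines row and column symmetrizations, but the principle is the same: $c_\lambda$ acts linearly on $V^{\otimes l}$ and commutes with the diagonal group action on the tensor factors.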
Thus $C^\infty (\mathbb{S}_\lambda (T^*M))$ is simply the image of the natural map $$c_\lambda:C^\infty(T^{0,l}M)\to C^\infty(T^{0,l}M)$$ and similarly for $C^\infty (\mathbb{S}_\lambda (T^*M))^G$ (because the actions of $G$ and $S_l$ commute), and $C^\infty (\mathbb{S}_\lambda (T^*\Sigma))^W$.
Since the restriction map $\psi$ is $S_l$-equivariant and surjective, it takes the image of $$c_\lambda:C^\infty(T^{0,l}M)^G\to C^\infty(T^{0,l}M)^G$$ onto the image of $$c_\lambda:C^\infty(T^{0,l}\Sigma)^W\to C^\infty(T^{0,l}\Sigma)^W$$ completing the proof.
Polarizations and finite reflection groups {#polarizations}
==========================================
An alternative way of proving special cases of Theorem \[Tevelev\] is given by the polarization technique. This has the advantage of providing explicit lifts, which we exploit to give a proof of Theorem \[metric\].
We start by recalling the definition of polarizations (see [@Schwarz07] for a reference). Let $U$ be a Euclidean vector space, and $H\to O(U)$ a representation of the group $H$. Consider the diagonal action of $H$ on $m$ copies of $U$, and the corresponding algebra of invariant ($m$-variable) polynomials ${\mathbb{R}}[U^m]^H$. Identify ${\mathbb{R}}[U]^H$ with the elements of ${\mathbb{R}}[U^m]^H$ which depend only on the first variable.
The method of polarizations consists of generating multi-variable invariants from single-variable invariants. Indeed, assuming $f\in{\mathbb{R}}[U]^H$ is homogeneous of degree $d$, let $t_1, \ldots t_m$ be formal variables, and formally expand $$f(t_1v_1 +\ldots + t_mv_m)=\sum_{r_1+\ldots +r_m=d}t_1^{r_1}\cdots t_m^{r_m} f_{r_1, \ldots, r_m}(v_1, \ldots, v_m)$$ Then each $f_{r_1, \ldots, r_m}$ belongs to ${\mathbb{R}}[U^m]^H$, and is called a polarization of $f$.
An alternative but equivalent definition of polarizations is given in terms of *polarization operators* — see [@Wallach93]. These are differential operators $D_{ij}$ (for $1\leq i,j\leq m$) on ${\mathbb{R}}[U^m]^H$ defined by $$(D_{ij} f ) (u_1, \ldots u_m)= \left. \frac{d}{dt}\right|_{t=0} f(u_1, \ldots, u_j+tu_i, \ldots u_m)$$ Then one defines the subalgebra $\mathcal{P}^m\subset {\mathbb{R}}[U^m]^H$ of polarizations to be the smallest subalgebra of ${\mathbb{R}}[U^m]^H$ containing ${\mathbb{R}}[U]^H$ and stable under the operators $D_{ij}$.
For example, if $f\in{\mathbb{R}}[U]^H$, then the tensors $df=D_{2,1}f \in {\mathbb{R}}[U^2]^H$ and Hess$f=D_{2,1}(D_{3,1} f)\in{\mathbb{R}}[U^3]^H$ are polarizations. Similarly, if $f_1, \ldots f_p \in {\mathbb{R}}[U]^H$, then $df_1\otimes df_2\otimes \cdots \otimes df_p=(D_{2,1}f_1)\cdots(D_{p+1,1}f_p) $ is a polarization, and so is $df_1\wedge \cdots \wedge df_p$. (Here we are identifying tensor fields with multi-variable functions as in section \[MVCRT\].)
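The formal expansion defining polarizations can be checked mechanically in a toy case of ours: take $H=O(n)$ on $U={\mathbb{R}}^n$ and $f(v)=|v|^2$, so that $f(t_1v_1+t_2v_2)=t_1^2f(v_1)+2t_1t_2\langle v_1,v_2\rangle + t_2^2 f(v_2)$, and the polarization $f_{1,1}(v_1,v_2)=2\langle v_1,v_2\rangle$ is (twice) the invariant inner product. A minimal numeric sketch:

```python
def f(v):
    # squared norm: invariant under O(n), homogeneous of degree 2
    return sum(x * x for x in v)

def polarization_11(v1, v2):
    # coefficient of t1*t2 in f(t1*v1 + t2*v2), extracted by
    # evaluating at four sign patterns (exact, since deg f = 2)
    def fe(s1, s2):
        return f([s1 * a + s2 * b for a, b in zip(v1, v2)])
    return (fe(1, 1) - fe(1, -1) - fe(-1, 1) + fe(-1, -1)) / 4

v1, v2 = [1.0, 2.0, 3.0], [4.0, -1.0, 0.5]
# f_{1,1}(v1, v2) = 2 <v1, v2>
assert polarization_11(v1, v2) == 2 * sum(a * b for a, b in zip(v1, v2))
```

The same coefficient-extraction idea works for any degree $d$, using $d$-th order finite differences in the formal variables.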
Now consider the special case where $H=W_0$ is a finite group generated by reflections on $U=\Sigma$. If $W_0$ is irreducible of type $A$, $B$, or dihedral, then $\mathcal{P}^m={\mathbb{R}}[\Sigma^m]^{W_0}$ by [@Weyl], [@Hunziker97].
It was noted by Wallach [@Wallach93] that ${\mathbb{R}}[\Sigma^m]^{W_0}$ is *not* generated by polarizations for $W_0$ of type $D_n$ for $n>3$ and $m>1$. He proposed a definition of generalized polarizations, and showed that these do generate all multi-variable invariants for type $D$. Unfortunately Wallach’s generalized polarizations fail to generate all multi-variable invariants for $W_0$ of type $F_4$ (see [@Hunziker97]).
For $W_0$ of general type, even though $\mathcal{P}^m \neq {\mathbb{R}}[\Sigma^m]^{W_0}$, one can still identify geometrically interesting subspaces of ${\mathbb{R}}[\Sigma^m]^{W_0}$ which are contained in $\mathcal{P}^m$. For example, Solomon’s Theorem [@Solomon63] states that the subspace ${\mathbb{R}}[\Sigma, \Lambda^{m-1} \Sigma^*]^{W_0} \subset {\mathbb{R}}[\Sigma^m]^{W_0} $ of exterior $(m-1)$-forms is contained in $\mathcal{P}^m$. Another example is the main ingredient in the proof of Theorem \[metric\]:
\[hessian\] Let $W_0\subset O(\Sigma)$ be a finite group generated by reflections. Then every $W_0$-invariant symmetric $2$-tensor field on $\Sigma$ is a sum of terms of the form $a$Hess$(b)$, for $a,b\in {\mathbb{R}}[\Sigma]^{W_0}$.
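The rank-one case already illustrates the statement (our example): take $W_0=\{\pm 1\}$ acting on $\Sigma={\mathbb{R}}$, so that ${\mathbb{R}}[\Sigma]^{W_0}={\mathbb{R}}[x^2]$. Every $W_0$-invariant symmetric $2$-tensor field has the form $\sigma=g(x^2)\, dx\otimes dx$, and since $\mathrm{Hess}(x^2)=2\, dx\otimes dx$, $$\sigma=a\,\mathrm{Hess}(b), \qquad a=\tfrac{1}{2}\,g(x^2)\in{\mathbb{R}}[\Sigma]^{W_0},\quad b=x^2,$$ in agreement with the Theorem.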
For the convenience of the reader, we provide a proof of Theorem \[hessian\] above in the Appendix.
Now assume $K\subset O(V)$ is a polar representation of the compact group $K$ with section $\Sigma$, and generalized Weyl group $W$. Recall that the connected component of the identity $K_0$ is polar with the same section $\Sigma$, and denote by $W_0$ its generalized Weyl group. By [@Dadok85], $W_0$ is a finite group generated by reflections. Since the operators $D_{ij}$ commute with the restriction map $|_{\Sigma^m}: {\mathbb{R}}[V^m]^{K_0}\to {\mathbb{R}}[\Sigma^m]^{W_0}$, and the single-variable invariants coincide by the Chevalley Restriction Theorem, the image of $|_{\Sigma^m}$ must contain $\mathcal{P}^m$. In particular, this gives an alternative proof of Theorem \[Tevelev\] in the special case that $W_0$ is of classical type – see [@Hunziker97].
Similarly, Theorem \[hessian\] implies surjectivity of the restriction map for symmetric $2$-tensors. In fact, we have the sharper statement:
Let $K\subset O(V)$ be a polar representation of the compact group $K$, with section $\Sigma\subset V$ and generalized Weyl group $W$. Consider the restriction map for symmetric $2$-tensor fields $|_\Sigma : C^\infty(\mathrm{Sym}^2 V)^K\to C^\infty(\mathrm{Sym}^2 \Sigma)^W$.
This map is surjective. Moreover, given $\beta \in C^\infty(\mathrm{Sym}^2 \Sigma)^W$ there is $\tilde{\beta} \in C^\infty(\mathrm{Sym}^2 V)^K$ such that $\tilde{\beta}|_\Sigma=\beta$ and satisfying the following property:
For all $q\in V$, and $X,Y\in T_qV$ such that $X$ is vertical (that is, tangent to the $K$-orbit through $q$) and $Y$ is horizontal (that is, normal to the $K$-orbit through $q$), we have $\tilde{\beta}(X,Y)=0$.
Let $K_0$ be the connected component of the identity. It is polar with the same section $\Sigma$, and generalized Weyl group $W_0$. By [@Dadok85], $W_0$ is generated by reflections.
Let $\beta\in C^\infty(\mathrm{Sym}^2 \Sigma)^W$. By Theorem \[hessian\] together with [@Field77], Lemma 3.1, $\beta$ is of the form $\beta=\sum_i a_i \mathrm{Hess}(b_i)$, where $a_i, b_i\in C^\infty(\Sigma)^{W_0}$. By the Chevalley Restriction Theorem, $a_i,b_i$ extend uniquely to $\tilde{a}_i, \tilde{b}_i\in C^\infty (V)^{K_0}$.
Define $\tilde{\beta}_0=\sum_i \tilde{a}_i\mathrm{Hess}(\tilde{b}_i)$ and $$\tilde{\beta}=\frac{1}{|K/K_0|}\sum_{h\in K/K_0} h \tilde{\beta_0}$$ Then $\tilde{\beta}|_\Sigma=\beta$ by the same argument as in Corollary \[polarMVCRT\].
To show that $\tilde{\beta}$ satisfies the additional property in the statement of the Lemma, it is enough to do so for each Hess$(\tilde{b}_i)$. Changing the section $\Sigma$ if necessary, we may assume that $q\in\Sigma$ and $Y\in T_q\Sigma$. Extend the given $X,Y\in T_qV$ to parallel vector fields (in the Euclidean metric), also denoted by $X,Y$. Let $f=d\tilde{b}_i(X)$.
We claim that $f|_\Sigma$ is identically zero. Indeed, since $X(q)$ is vertical, it is orthogonal to $\Sigma$, and so $X(p)$ is orthogonal to $\Sigma$ for every $p\in\Sigma$. Thus, for regular $p\in\Sigma$, $X(p)$ is vertical. Since $\tilde{b}_i$ is constant on orbits, $f(p)=0$ for every regular $p\in\Sigma$, and hence on all of $\Sigma$ by continuity.
Therefore Hess$(\tilde{b}_i)(X,Y)= df(Y)=0$, because $Y$ is tangent to $\Sigma$ and $f$ vanishes along $\Sigma$.
The following Lemma is needed in the proof of Theorem \[metric\].
\[positive\] Let $V$ be a polar $K$-representation with section $\Sigma \subset V$ and generalized Weyl group $W$. Let $\tilde{\sigma} \in C^\infty(\mathrm{Sym}^2 V)^K$, and $\sigma=\tilde{\sigma}|_\Sigma$. Then $\sigma(0)$ is positive definite if and only if $\tilde{\sigma}(0)$ is positive definite.
Denote by $K_0$ the identity component of $K$. Recall that the action of $K_0$ is polar with the same section $\Sigma$. Denote by $W_0$ its generalized Weyl group. Consider a decomposition of $V$ into $K_0$-invariant subspaces $$V={\mathbb{R}}^m\oplus V_1\oplus\cdots\oplus V_k$$ where $K_0$ acts trivially on ${\mathbb{R}}^m$, and each $V_i$ is irreducible and non-trivial.
By Theorem 4 in [@Dadok85], each $V_i$ is a polar $K_0$-representation, with section $\Sigma_i=\Sigma\cap V_i$, and we have the decomposition into $W_0$-invariant subspaces $$\Sigma = {\mathbb{R}}^m\oplus \Sigma_1\oplus\cdots\oplus \Sigma_k$$ Moreover $W_0$ splits as a product $W_1\times\cdots\times W_k$ (see section 2.2 in [@Humphreys]), where $W_i$ is the generalized Weyl group associated to the section $\Sigma_i \subset V_i$, so that $\Sigma_i$ are pairwise inequivalent as $W_0$-representations. This implies that $V_i$ are pairwise inequivalent as $K_0$-representations.
Since the quotients $V_i/K_0$ and $\Sigma_i/W_0$ are isometric, irreducibility of $V_i$ as a $K_0$-representation implies irreducibility of $\Sigma_i$ as a $W_0$-representation. (Indeed, a general representation of a compact group $H$ on Euclidean space ${\mathbb{R}}^n$ is irreducible if and only if the quotient $S^{n-1}/H$ has diameter less than $\pi/2$.)
By Schur’s Lemma together with the assumption $\tilde{\sigma}|_\Sigma = \sigma$, $$\sigma(0)=A\oplus\lambda_1 \mathrm{Id}_{\Sigma_1}\oplus\cdots\oplus \lambda_k\mathrm{Id}_{\Sigma_k}$$ $$\tilde{\sigma}(0)=A\oplus\lambda_1 \mathrm{Id}_{V_1}\oplus\cdots\oplus \lambda_k\mathrm{Id}_{V_k}$$ where $A$ is a symmetric $m\times m$ matrix, and $\lambda_i \in{\mathbb{R}}$.
Therefore $\sigma(0) >0$ if and only if $\tilde{\sigma}(0) >0$.
Now we are ready to prove Theorem \[metric\]:
As in the proof of Theorem \[mainthm\], we use partitions of unity and the Slice Theorem to reduce to the case where $M$ is a tube $\mathcal{U}=G\times_K V$, and $V$ is a polar representation. Let $\Sigma \subset V$ be a section, with generalized Weyl group $W$, so that $M/G=V/K=\Sigma/W$.
Note that it suffices to extend the given Riemannian metric $\sigma \in C^\infty(\mathrm{Sym}^2\Sigma)^W$ to a $G$-invariant Riemannian metric on a possibly smaller tube $G\times_K V^\epsilon$ around the orbit $G/K$, for some $\epsilon >0$.
By Corollary \[symmetrizer\], $\sigma$ extends to $\beta_1\in C^\infty (\mathrm{Sym}^2 V)^K$. By Lemma \[positive\], $\beta_1(0)$ is positive-definite, and so by continuity, $\beta_1 >0$ on $V^\epsilon$ for some small $\epsilon>0$.
Choose any smooth, $K$-invariant and positive-definite $\beta_2 :V \to \mathrm{Sym}^2(T_KG/K)$. Then, by Lemma \[tube\], the pair $(\beta_1, \beta_2)$ defines $\tilde{\sigma}\in C^\infty(\mathrm{Sym}^2M)^G$, which is positive-definite on $G\times_K V^\epsilon$ and extends the given $\sigma$. By construction, $\Sigma$ is $\tilde{\sigma}$-orthogonal to $G$-orbits.
Appendix — Hessian Theorem for finite reflection groups
=======================================================
In this section we provide a proof of Theorem \[hessian\] for all finite reflection groups $W\subset O(\Sigma)$. Note that as far as the proof of Theorem \[metric\] is concerned, the only case of Theorem \[hessian\] needed is that of crystallographic reflection groups (see [@Humphreys] for a definition). Our proof includes the non-crystallographic case for the sake of completeness.
The structure of the proof is as follows. First we reduce to the case where $W$ is irreducible — see Lemma \[product\]. Then we point out that for $W$ irreducible of classical type, Theorem \[hessian\] follows from more general polarization results due to Weyl [@Weyl] and Hunziker [@Hunziker97]. Finally we tackle the case of the exceptional groups with the help of a computer.
Recall some facts about finite reflection groups: First, the algebra of invariants, ${\mathbb{R}}[\Sigma]^W$, is a free polynomial algebra with as many generators as the dimension of $\Sigma$. This is known as Chevalley’s Theorem — see [@Bourbaki] Chapter V. Such a set of homogeneous generators is called a set of *basic invariants*. Second, $\Sigma$ is reducible as a $W$-representation if and only if $\Sigma=\Sigma_1\times \Sigma_2$ and $W=W_1\times W_2$ for two reflection groups $W_k\subset O(\Sigma_k)$ — see section 2.2 in [@Humphreys]. Because of the latter, the following proposition reduces the proof of the Hessian Theorem to the irreducible case.
\[product\] Let $W_k\subseteq O(\Sigma_k)$, $k=1,2$ be two finite reflection groups in the Euclidean vector spaces $\Sigma_k$, and let $W=W_1\times W_2\subset O(\Sigma)=O(\Sigma_1\times \Sigma_2)$. Then the conclusion of the Hessian Theorem holds for $W\subset O(\Sigma)$ if and only if it holds for both $W_k\subseteq O(\Sigma_k)$, $k=1,2$.
Let $i_k:\Sigma_k\to \Sigma_1\times \Sigma_2$ and $p_k:\Sigma_1\times \Sigma_2\to \Sigma_k$ be the natural inclusions and projections. As a $W$-representation, $\mathrm{Sym}^2(\Sigma^*)$ decomposes as $$\mathrm{Sym}^2(\Sigma^*) =\mathrm{Sym}^2(\Sigma_1^*)\oplus \mathrm{Sym}^2(\Sigma_2^*)\oplus (\Sigma_1^*\otimes \Sigma_2^*)$$ Denote by $i_{11}$ and $i_{22}$ the natural inclusions of the first two summands. All these maps are $W$-equivariant.
Assume the conclusion of the Hessian Theorem holds for $W\subset O(\Sigma)$. Thus there are $Q_j\in{\mathbb{R}}[\Sigma]^W$ whose Hessians form a basis for ${\mathbb{R}}[\Sigma,\mathrm{Sym}^2(\Sigma^*)]^W$. Then the restrictions $Q_j|_{\Sigma_k}=i_k^*Q_j$ generate ${\mathbb{R}}[\Sigma_k,\mathrm{Sym}^2(\Sigma_k^*)]^{W_k}$ as an ${\mathbb{R}}[\Sigma_k]^{W_k}$-module.
Indeed, given $\sigma\in {\mathbb{R}}[\Sigma_k,\mathrm{Sym}^2(\Sigma_k^*)]^{W_k} $, define $$\tilde{\sigma}=i_{kk}\circ\sigma\circ p_k$$ Since $\tilde{\sigma}$ is $W$-equivariant, there are $a_j\in {\mathbb{R}}[\Sigma]^W$ such that $\tilde{\sigma}=\sum_j a_j\mathrm{Hess}(Q_j)$. Therefore $$\sigma=i_k^*(\tilde{\sigma})=\sum_j (a_j|_{\Sigma_k})\mathrm{Hess}(Q_j|_{\Sigma_k})$$
For the converse, assume the conclusion of the Hessian Theorem holds for $W_k\subset O(\Sigma_k)$. Let $\rho_j\in{\mathbb{R}}[\Sigma_1]^{W_1}$, $j=1,\ldots n_1$ and $\psi_j\in{\mathbb{R}}[\Sigma_2]^{W_2}$, $j=1,\ldots n_2$ be basic invariants on $\Sigma_1$ and $\Sigma_2$ respectively, and $Q_j\in{\mathbb{R}}[\Sigma_1]^{W_1}$, for $j=1,\ldots (n_1^2+n_1)/2$, $R_j\in{\mathbb{R}}[\Sigma_2]^{W_2}$, for $j=1,\ldots (n_2^2+n_2)/2$ be homogeneous invariants whose Hessians form a basis for the corresponding spaces of equivariant symmetric $2$-tensors.
Claim: The Hessians of the following set of $W=W_1\times W_2$-invariant polynomials on $\Sigma=\Sigma_1\times \Sigma_2$ form a basis for the space of equivariant symmetric $2$-tensors on $\Sigma$: $$\{Q_j\} \cup \{R_j\}\cup \{\rho_i\psi_j , \quad i=1\ldots n_1,\ j=1\ldots n_2\}$$
Indeed, ${\mathbb{R}}[\Sigma,\mathrm{Sym}^2(\Sigma^*)]^W$ decomposes as $${\mathbb{R}}[\Sigma,\mathrm{Sym}^2(\Sigma_1^*)]^W\oplus {\mathbb{R}}[\Sigma,\mathrm{Sym}^2(\Sigma_2^*)]^W\oplus{\mathbb{R}}[\Sigma,\Sigma_1^*\otimes \Sigma_2^*]^W$$ The first two pieces are freely generated over ${\mathbb{R}}[\Sigma]^W$ by Hess$Q_j$ and Hess$R_j$. The third piece can be rewritten as ${\mathbb{R}}[\Sigma,\Sigma_1^*\otimes \Sigma_2^*]^W={\mathbb{R}}[\Sigma_1,\Sigma_1^*]^{W_1}\otimes{\mathbb{R}}[\Sigma_2,\Sigma_2^*]^{W_2}$. By Solomon’s Theorem [@Solomon63], ${\mathbb{R}}[\Sigma_k,\Sigma_k^*]^{W_k}$ are freely generated by $d\rho_j$ and $d\psi_j$, so that ${\mathbb{R}}[\Sigma,\Sigma_1^*\otimes \Sigma_2^*]^W$ is freely generated by $d\rho_j\otimes d\psi_j$. To finish the proof of the Claim one uses the product rule $$\mathrm{Hess}(\rho_i\psi_j)= d\rho_i\otimes d\psi_j + \rho_i\mathrm{Hess}(\psi_j)+\psi_j\mathrm{Hess}(\rho_i)$$
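To spell out the last step (a routine verification we include for completeness): since $\rho_i$ depends only on the $\Sigma_1$-variables and $\psi_j$ only on the $\Sigma_2$-variables, the three blocks of $\mathrm{Hess}(\rho_i\psi_j)$ with respect to the splitting $\mathrm{Sym}^2(\Sigma^*) =\mathrm{Sym}^2(\Sigma_1^*)\oplus \mathrm{Sym}^2(\Sigma_2^*)\oplus (\Sigma_1^*\otimes \Sigma_2^*)$ can be computed directly: $$\mathrm{Hess}(\rho_i\psi_j)\big|_{\mathrm{Sym}^2(\Sigma_1^*)}=\psi_j\,\mathrm{Hess}(\rho_i),\qquad \mathrm{Hess}(\rho_i\psi_j)\big|_{\mathrm{Sym}^2(\Sigma_2^*)}=\rho_i\,\mathrm{Hess}(\psi_j),\qquad \mathrm{Hess}(\rho_i\psi_j)\big|_{\Sigma_1^*\otimes\Sigma_2^*}=d\rho_i\otimes d\psi_j,$$ which is exactly the content of the product rule above.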
Irreducible finite reflection groups are classified by type — see [@Humphreys]. For $W$ irreducible of type $A$, $B$ and dihedral, the statement of Theorem \[hessian\] follows from [@Weyl], [@Hunziker97], while for type $D$, it follows from [@Hunziker97], Theorem 3.1.
Finally we prove the Hessian Theorem for the six exceptional finite reflection groups $W\subset O(\Sigma)$ usually called by the names of their Dynkin diagrams: $H_3$, $H_4$, $F_4$, $E_6$, $E_7$ and $E_8$. Note that the subscript denotes the rank $n=\dim(\Sigma)$. In all cases our proof relies on calculations performed by a computer running GAP 3 (see [@GAP3]) using the package CHEVIE, which ultimately rely only on integer arithmetic. For the actual code that was used, see
http://www.nd.edu/\~rmendes/sym2.txt
Recall a way of describing $W\subset O(\Sigma)$ from its Cartan matrix $C=(C_{ij})$. $\Sigma$ has a basis $r_1,\ldots r_n$ of simple roots with corresponding co-roots $r^{\vee}_1,\ldots r^{\vee}_n$. This means that $W$ is generated by the reflections in the hyperplanes $\ker(r^{\vee}_i)$ given by: $$R_i :v \mapsto v-r^{\vee}_i (v) r_i \qquad i=1,\ldots n$$ Expressing $v\in \Sigma$ in the basis of simple roots $v=a_1r_1+\ldots a_nr_n$, we get $$R_i(v)=v-\left(\sum_j a_jr_i^{\vee}(r_j)\right)r_i$$ The coefficients $r_i^{\vee}(r_j)=C_{ij}$ form the Cartan matrix.
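This reconstruction of $W$ from its Cartan matrix is easy to carry out directly. The sketch below (in Python rather than the GAP 3/CHEVIE setup used by the actual program) builds the matrices of the simple reflections in the basis of simple roots: writing $v=\sum_j a_j r_j$, the reflection $R_i$ sends the coordinate vector $a$ to $a-\big(\sum_j C_{ij}a_j\big)e_i$, i.e. $R_i = I - e_i\,(\text{$i$-th row of }C)$.

```python
import numpy as np

def simple_reflections(C):
    # Simple reflections in root coordinates: R_i = I - e_i (i-th row of C).
    C = np.asarray(C, dtype=float)
    n = C.shape[0]
    refs = []
    for i in range(n):
        R = np.eye(n)
        R[i, :] -= C[i, :]          # row i becomes e_i - C_i; note R[i, i] = -1
        refs.append(R)
    return refs

# Sanity check on the type A2 Cartan matrix: each R_i is an involution,
# and the product of the two simple reflections has order 3.
R1, R2 = simple_reflections([[2, -1], [-1, 2]])
assert np.allclose(R1 @ R1, np.eye(2))
assert np.allclose(np.linalg.matrix_power(R1 @ R2, 3), np.eye(2))
```

The same construction applied to the matrices displayed below yields $H_3$, $H_4$ and $F_4$ as matrix groups.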
Here are the Cartan matrices for $H_3$, $H_4$ and $F_4$, where $\zeta = \exp(2\pi i/5)$:
$$H_3:\ \left( \begin{array}{ccc}
2 & \zeta^2+\zeta^3 & 0 \\
\zeta^2+\zeta^3 & 2 & -1 \\
0 & -1 & 2
\end{array}\right) ,\quad
H_4:\ \left( \begin{array}{cccc}
2 & \zeta^2+\zeta^3 & 0 &0 \\
\zeta^2+\zeta^3 & 2 & -1 & 0 \\
0 & -1 & 2 & -1 \\
0 & 0 & -1 & 2
\end{array}\right)$$ $$F_4:\ \left( \begin{array}{cccc}
\phantom{-}2 & -1 & \phantom{-}0 & \phantom{-}0 \\
-1 & \phantom{-}2 & -1 & \phantom{-}0 \\
\phantom{-}0 & -2 & \phantom{-}2 & -1 \\
\phantom{-}0 & \phantom{-}0 & -1 & \phantom{-}2
\end{array}\right)$$ For the Cartan matrices in type E, refer to the tables at the end of [@Bourbaki].
We start the proof of the Hessian Theorem by describing how the program computes the polynomial $$\frac{P_t({\mathbb{R}}[\Sigma,\mathrm{Sym}^2\Sigma^*]^W)}{P_t({\mathbb{R}}[\Sigma]^W)}$$ where $P_t(U)=\sum _{l=0}^\infty (\dim U_l) t^l$ denotes the Poincaré series of a graded vector space $U=\oplus _{l=0}^\infty U_l$.
We need to recall a few facts. Let $I$ be the ideal in ${\mathbb{R}}[\Sigma]$ generated by the homogeneous invariants of positive degree. The quotient ${\mathbb{R}}[\Sigma]/I$ is known to be isomorphic, as a $W$-representation, to the regular representation (see Theorem B in [@Chevalley55]), but it is also a graded vector space. Fixing an irreducible representation/character $\xi$, the Poincaré polynomial $\mathrm{FD}_\xi(t)$ of the $\xi$-isotypic subspace of ${\mathbb{R}}[\Sigma]/I$ is called the *fake degree* of $\xi$. Moreover ${\mathbb{R}}[\Sigma]$ is isomorphic to $({\mathbb{R}}[\Sigma]/I)\otimes{\mathbb{R}}[\Sigma]^W$. Thus the Poincaré series of the vector subspace of ${\mathbb{R}}[\Sigma]$ given by the direct sum of all irreducible subspaces isomorphic to $\xi$ equals $\mathrm{FD}_\xi(t) P_t({\mathbb{R}}[\Sigma]^W)$.
The way the program computes $P_t({\mathbb{R}}[\Sigma,\mathrm{Sym}^2\Sigma^*]^W)$ is as follows:
It first computes the character $\chi$ of Sym$^2\Sigma^*$, and decomposes it into a sum of irreducible characters, using character tables that come with CHEVIE. $$\chi=\sum_{\xi\text{ irreducible}} c_\xi \xi$$
It then uses a command in CHEVIE that returns the fake degrees of the irreducible characters $\xi$, and computes $$\sum_\xi c_\xi \mathrm{FD}_\xi(t)$$ Using Schur’s Lemma one sees that this equals $$\frac{P_t({\mathbb{R}}[\Sigma,\mathrm{Sym}^2\Sigma^*]^W)}{P_t({\mathbb{R}}[\Sigma]^W)}$$
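The same ratio can be cross-checked, for small groups where everything can be done exactly, via Molien's formula: $P_t({\mathbb{R}}[\Sigma,U]^W)=\frac{1}{|W|}\sum_{w}\chi_U(w)/\det(1-tw)$. The sketch below (Python/sympy, not part of the paper's program) does this for type $A_2$, where the expected answer is $1+t+t^2$, matching the Hessians of invariants of degrees $2$, $3$ and $4$.

```python
import sympy as sp

t = sp.symbols('t')

# Simple reflections of A2 (so W is S3) in the basis of simple roots.
R1 = sp.Matrix([[-1, 1], [0, 1]])
R2 = sp.Matrix([[1, 0], [1, -1]])

def closure(gens):
    # Enumerate the finite group generated by gens (integer matrices).
    elems = {tuple(sp.eye(2)): sp.eye(2)}
    frontier = list(elems.values())
    while frontier:
        new = []
        for a in frontier:
            for g in gens:
                b = a * g
                k = tuple(b)
                if k not in elems:
                    elems[k] = b
                    new.append(b)
        frontier = new
    return list(elems.values())

W = closure([R1, R2])
assert len(W) == 6

# Molien-type sums: numerator for Sym^2-covariants, denominator for invariants.
num = den = sp.Integer(0)
for w in W:
    d = (sp.eye(2) - t * w).det()
    chi = sp.Rational(1, 2) * (w.trace()**2 + (w * w).trace())  # char of Sym^2 at w
    num += chi / d
    den += 1 / d

ratio = sp.cancel(num / den)
assert sp.expand(ratio - (1 + t + t**2)) == 0
```

This reproduces, in an elementary case, exactly the quantity the CHEVIE computation returns via fake degrees.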
Here are the outputs: $$\begin{array}{l|l}
& P_t({\mathbb{R}}[\Sigma,\mathrm{Sym}^2\Sigma^*]^W) / P_t({\mathbb{R}}[\Sigma]^W)=\\
\hline\\
H_3 & t^{10}+t^8+t^6+t^4+t^2+1\\
H_4 & t^{38}+t^{30}+t^{28}+t^{22}+t^{20}+t^{18}+t^{12}+t^{10}+t^2+1\\
F_4 & t^{14}+t^{12}+2t^{10}+t^8+2t^6+t^4+t^2+1\\
E_6 & t^{16} + t^{15} + t^{14} + t^{13} + 2t^{12} + t^{11} +2t^{10} + 2t^9 + \\
&+2t^8 + t^7 + 2t^6 + t^5 + t^4 + t^3 + t^2 + 1\\
E_7 & t^{26} + t^{24} + 2t^{22} + 2t^{20} + 3t^{18} + 3t^{16} +
3t^{14} + 3t^{12} +\\
&+ 3t^{10} + 2t^8 + 2t^6 + t^4 + t^2 + 1\\
E_8 & t^{46} + t^{42} + t^{40} + t^{38} + 2t^{36} + 2t^{34} + t^{32} + 3t^{30} + 2t^{28} + 2t^{26} + 3t^{24} + \\
& +2t^{22} + 2t^{20} + 3t^{18} + t^{16} + 2t^{14} + 2t^{12} + t^{10} + t^8 + t^6 + t^2 + 1
\end{array}$$
Now we turn to the task of defining an explicit set of basic invariants $\rho_1,\ldots \rho_n\in {\mathbb{R}}[\Sigma]^W$. The degrees $d_i=\mathrm{deg}(\rho_i)$ are well known (see the tables at the end of [@Bourbaki]): $$\begin{array}{l|l}
&\text{degrees } d_1,\ldots d_n\\
\hline
H_3 & 2,6,10\\
H_4 & 2,12, 20, 30\\
F_4 & 2,6,8,12 \\
E_6 & 2,5,6,8,9,12\\
E_7 & 2,6,8,10,12,14,18\\
E_8 & 2,8,12,14,18,20,24,30
\end{array}$$
We choose for each group a regular vector $v\in \Sigma$ and identify it with the row vector of its coefficients in the basis of the simple roots $r_i$. We also take one non-zero $\lambda\in \Sigma^*$ with minimal $W$-orbit size, namely the one which in the basis $\{r^{\vee}_i\}$ of simple co-roots is identified with the row vector $$\lambda=(0,\ldots 0,1)\cdot C^{-1}$$
Then the program computes the $W$-orbit $\mathcal{O}$ of $\lambda$. Here are our choices of $v$ and the number of elements in the orbit $\mathcal{O}$: $$\begin{array}{l|l|l}
& v \text{ (in the basis } \{r_i\}) & |\mathcal{O}|\\
\hline
H_3 & (1,2,3) & 12\\
H_4 & (1,2,3,5) & 20\\
F_4 & (2,-3,5,7) & 24\\
E_6 & (2,-5,41,7,-9,110) & 27\\
E_7 & (2,-5,41,7,-9,110 ,-87) & 56\\
E_8 & (2,-5,41,7,-9,110 ,-87,11) & 240
\end{array}$$
Since $W$ permutes the linear polynomials in $\mathcal{O}$, for each natural number $m$ we get a $W$-invariant polynomial of degree $m$ $$\psi_m=\sum_{\lambda \in \mathcal{O}} \lambda^m$$
The invariants constructed this way are called the Chern classes associated to the orbit $\mathcal{O}$. See [@NeuselSmith] chapter 4.
The polynomials $\rho_i=\psi_{d_i}$, $i=1,\ldots n$, form a set of basic invariants, and $v$ is indeed a regular vector.
Let $J$ be the Jacobian matrix $$J= \left( \frac{\partial\rho_i}{\partial r^{\vee}_j} \right)_{i,j}=\left( \sum_{\lambda \in \mathcal{O}} d_i\lambda^{d_i-1}\frac{\partial\lambda}{\partial r^{\vee}_j}\right)_{i,j}$$ The program computes its determinant, evaluates it at the vector $v$, and checks that the value is non-zero. This proves both that $\rho_i$ are algebraically independent (see Proposition 3.10 in [@Humphreys]) and hence a set of basic invariants because they have the right degrees; and that $v$ is indeed a regular vector, that is, does not belong to any of the reflecting hyperplanes (see section 3.13 in [@Humphreys]).
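This check can be reproduced numerically for $W=H_3$ (the actual program works in GAP 3/CHEVIE over exact arithmetic; the Python sketch below uses floating point and $\zeta^2+\zeta^3=-(1+\sqrt5)/2$). In root coordinates the form $\lambda=(0,0,1)\cdot C^{-1}$ pairs with $v=\sum_j a_j r_j$ as $\lambda(v)=a_3$, so it is represented by the row vector $e_3$.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2                 # zeta^2 + zeta^3 = -phi
C = np.array([[2, -phi, 0], [-phi, 2, -1], [0, -1, 2]])

def closure(gens, decimals=8):
    # Enumerate the (finite) group generated by gens, hashing rounded entries;
    # the "+ 0.0" normalises -0.0 to +0.0 so keys are well defined.
    elems = {(np.round(np.eye(3), decimals) + 0.0).tobytes(): np.eye(3)}
    frontier = list(elems.values())
    while frontier:
        new = []
        for a in frontier:
            for g in gens:
                b = a @ g
                k = (np.round(b, decimals) + 0.0).tobytes()
                if k not in elems:
                    elems[k] = b
                    new.append(b)
        frontier = new
    return list(elems.values())

# Simple reflections R_i = I - e_i (i-th row of C), in root coordinates.
gens = [np.eye(3) - np.outer(np.eye(3)[i], C[i]) for i in range(3)]
W = closure(gens)
assert len(W) == 120                       # |H3| = 120

# The W-orbit of lambda (= the row vector e_3): 12 elements, as in the table.
orbit = {tuple(np.round(np.array([0.0, 0.0, 1.0]) @ R, 8)) for R in W}
assert len(orbit) == 12

degrees = (2, 6, 10)
v = np.array([1.0, 2.0, 3.0])              # the regular vector from the table
J = np.array([[sum(d * np.dot(mu, v)**(d - 1) * mu[j] for mu in orbit)
               for j in range(3)] for d in degrees])
assert abs(np.linalg.det(J)) > 1e-6        # v is regular, the rho_i independent
```

The Jacobian here is evaluated in root-basis coordinates; a linear change of coordinates only multiplies its determinant by a non-zero constant, so the non-vanishing test is unaffected.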
We point out that L. Flatto and M. Weiner studied the set of all $\lambda\in \Sigma^*$ that make the $\rho_i=\psi_{d_i}$ constructed above a set of basic invariants. They produce a distinguished set of basic invariants $J_1, \ldots J_n$, determined up to non-zero constants, such that $\lambda\in \Sigma^*$ gives rise to a set of basic invariants if and only if $J_i(\lambda)\neq 0$ for all $i$ — see [@FlattoWeiner69; @Flatto70] for more details.
\[exceptional\] Let $W\subset O(\Sigma)$ be one of the six exceptional finite reflection groups, and $\rho_1, \ldots \rho_n$ the set of basic invariants described above. Let $ T\subset \{\rho_i\} \cup \{\rho_i\rho_j\}$ be a subset with $n(n+1)/2$ elements such that $T$ contains $\{\rho_i\}$ and $$\sum_{Q\in T} t^{\deg(Q)-2} = \frac{P_t({\mathbb{R}}[\Sigma,\mathrm{Sym}^2\Sigma^*]^W)}{P_t({\mathbb{R}}[\Sigma]^W)}$$
There is at least one such $T$, and for each one, $ \{ \mathrm{Hess}(Q)\ |\ Q\in T\}$ is a basis for ${\mathbb{R}}[\Sigma,\mathrm{Sym}^2\Sigma^*]^W$ as a free module over ${\mathbb{R}}[\Sigma]^W$.
First the program finds a list of all subsets $T$ satisfying the conditions in the statement of the Theorem. The number of elements in this list (the number of choices for $T$) is, for each group:
$$\begin{array}{l|l|l|l|l|l|l}
& H_3 & H_4 & F_4 & E_6 & E_7 & E_8 \\
\hline
\text{choices }& 2 & 2 & 2 & 12 & 48 & 96\\
\end{array}$$
For each $T$, the program constructs a square matrix $M$ of size $n(n+1)/2$. The rows are in correspondence with the set $\mathcal{H}= \{ \mathrm{Hess}(Q)\ |\ Q\in T\}$ , and the columns with the set $\mathcal{P}$ of upper triangular positions of an $n\times n$ matrix. The entry of $M$ associated with $\mathrm{Hess}(Q)\in \mathcal{H}$ and a position $(a,b)\in\mathcal{P}$ is the $(a,b)$-entry of Hess$(Q)(v)$, that is, $$\frac{\partial^2 Q} { \partial r^{\vee}_a \partial r^{\vee}_b} (v)$$
Then it proceeds to compute the determinant of $M$ and checks that it is non-zero. This implies that $\mathcal{H}$ is linearly independent at $v$, hence over ${\mathbb{R}}[\Sigma]$, and in particular over ${\mathbb{R}}[\Sigma]^W$.
Therefore span$_{{\mathbb{R}}[\Sigma]^W}\mathcal{H}$ is a submodule of ${\mathbb{R}}[\Sigma,\mathrm{Sym}^2\Sigma^*]^W$ with the same Poincaré series, and so they must coincide.
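As an illustration, the determinant check can be redone numerically for $W=H_3$ (again in Python floating point rather than the exact GAP 3/CHEVIE arithmetic of the actual program, and in root-basis coordinates, which only rescales $\det M$ by a non-zero constant). The choice $T=\{\rho_1,\rho_2,\rho_3,\rho_1^2,\rho_1\rho_2,\rho_2^2\}$ has Hessian degrees $0,4,8,2,6,10$, matching the $H_3$ row of the table of ratios.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
C = np.array([[2, -phi, 0], [-phi, 2, -1], [0, -1, 2]])

def closure(gens, decimals=8):
    # Enumerate the finite group generated by gens ("+ 0.0" normalises -0.0).
    elems = {(np.round(np.eye(3), decimals) + 0.0).tobytes(): np.eye(3)}
    frontier = list(elems.values())
    while frontier:
        new = []
        for a in frontier:
            for g in gens:
                b = a @ g
                k = (np.round(b, decimals) + 0.0).tobytes()
                if k not in elems:
                    elems[k] = b
                    new.append(b)
        frontier = new
    return list(elems.values())

gens = [np.eye(3) - np.outer(np.eye(3)[i], C[i]) for i in range(3)]
W = closure(gens)
orbit = {tuple(np.round(np.array([0.0, 0.0, 1.0]) @ R, 8)) for R in W}
v = np.array([1.0, 2.0, 3.0])

def psi(d):
    # value, gradient and Hessian at v of the power sum psi_d over the orbit
    val = sum(np.dot(mu, v)**d for mu in orbit)
    grad = sum(d * np.dot(mu, v)**(d - 1) * np.array(mu) for mu in orbit)
    hess = sum(d * (d - 1) * np.dot(mu, v)**(d - 2) * np.outer(mu, mu) for mu in orbit)
    return val, grad, hess

def hess_product(f, g):
    # Hess(fg) = f Hess(g) + g Hess(f) + df dg^T + dg df^T
    return f[0] * g[2] + g[0] * f[2] + np.outer(f[1], g[1]) + np.outer(g[1], f[1])

r1, r2, r3 = psi(2), psi(6), psi(10)
hessians = [r1[2], r2[2], r3[2], hess_product(r1, r1),
            hess_product(r1, r2), hess_product(r2, r2)]
pos = [(a, b) for a in range(3) for b in range(a, 3)]   # upper triangular positions
M = np.array([[H[a, b] for (a, b) in pos] for H in hessians])
assert abs(np.linalg.det(M)) > 1e-6        # the six Hessians are independent at v
```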
---
abstract: 'We develop here a relatively simple description of dark energy based on the dynamics of a phantom scalar field non-minimally coupled to gravity, which in a limit corresponds to the cosmological constant. The dark energy equation of state, obtained directly from the dynamics of the model, turns out to be an oscillatory function of the scale factor. This parameterisation is compared to other possible dark energy parameterisations, among them the most popular one, linear in the scale factor. We use the Bayesian framework for model selection and make a comparison in the light of SN Ia, CMB shift parameter, BAO A parameter, observational H(z) and growth rate function data. We find that there is evidence to favour a parameterisation with oscillations over the [*a priori*]{} assumed linear one.'
author:
- Aleksandra Kurek
- Orest Hrycyna
- 'Marek Szyd[ł]{}owski'
bibliography:
- 'parEoS\_R1.bib'
title: From model dynamics to oscillating dark energy parameterisation
---
Introduction
============
The recent discovery of the acceleration of the Universe is one of the most significant discoveries of the last decade [@Riess:1998cb; @Perlmutter:1998np]. Observations of distant supernovae of type Ia [@Riess:1998cb; @Perlmutter:1998np] as well as of cosmic microwave background (CMB) fluctuations [@Bennett:2003bz; @Spergel:2006hy] and large scale structure (LSS) [@Tegmark:2006az] indicate that the Universe is undergoing an accelerating phase of expansion. These observations suggest that either the Universe is filled with dark energy of unknown form, violating the strong energy condition $\rho_{X} + 3p_{X} > 0$, or the dynamical equations governing gravity should be modified. A simple cosmological constant model of dark energy can serve the purpose of explaining the acceleration and is in good agreement with the astronomical data (supernovae of type Ia and other measurements). Although this model is favoured by the Bayesian framework of model selection [@Szydlowski:2006ay; @Szydlowski:2006pz; @Kurek:2007tb; @Kurek:2007gr], it faces the serious problem of fine tuning [@Padmanabhan:2002ji]. Therefore other alternatives [@Bludman:2006cg] have been proposed, which include an evolving scalar field. When one tries to accommodate a time-varying equation of state, the simplest parameterisation is the one which adds a linear dependence on the scale factor $a$. Other choices are motivated by the possibility of integrating the dark energy density in an exact form. In this context a class of simple oscillating dark energy equation of state coefficients appeared [@Linder:2005dw; @Xia:2004rw; @Barenboim:2004kz; @Zhao:2006mn; @Feng:2004ff]. It is interesting that these models may provide a way to unify the early inflation and the late time acceleration. Moreover, in these scenarios we obtain a possible way to solve the cosmic coincidence problem [@Jain:2007fa; @Griest:2002cu; @Dodelson:2001fq].
If we allow the dark energy density to vary in time (or redshift), there appears the problem of choosing or finding an appropriate parameterisation of the equation of state parameter $w_{X}(z)$. In the most popular approach $w_{X}(z)=p_{X}/\rho_{X}$ is postulated rather than derived, and such a dynamical dark energy parameterisation makes the model purely phenomenological, containing free parameters and functions. As a result we have a model which is difficult to constrain [@Crittenden:2007yy]. Another approach is to postulate a quintessence potential of the scalar field which has a motivation from fundamental physics (particle physics) and then to extract the equation of state directly from the true dynamics [@Hrycyna:2007mq; @Hrycyna:2007gd; @Kurek:2007bu; @Faraoni:2000vg]. In this approach we can expect that the parameterisation of the dark energy equation of state reflects some realistic underlying physical model. The most popular dynamical form of dark energy is the idea of quintessence. In this conception dark energy is described in terms of a scalar field $\phi$ minimally coupled to gravity with the potential $V(\phi)$. The scalar field, rolling down its potential, starts to dominate over the energy density of the standard matter [@Ratra:1987rm; @Wetterich:1987fm]. The oscillating scalar field as a quintessence model for dark energy has recently been proposed [@Dutta:2008px; @Johnson:2008se; @Gu:2007be]. The case of extended quintessence introduced by Amendola [@Amendola:1999dr] was also considered in our previous papers [@Hrycyna:2007mq; @Hrycyna:2007gd; @Kurek:2007bu], where we assumed a non-zero coupling constant. The possibility of violating the weak energy condition (a phantom scalar field) was admitted. In this scenario, instead of the standard minimally coupled scalar field, a phantom scalar field, non-minimally coupled to gravity, causes the accelerating phase of expansion of the Universe [@Hrycyna:2007mq; @Hrycyna:2007gd].
We found that in the generic case trajectories are approaching the de Sitter state after an infinite number of damping oscillations around the mysterious $w_{X}=-1$ value. Therefore the $\Lambda$CDM model appears as a global attractor in the phase space $(\psi,\psi')$ (where $\psi$ is a phantom scalar field and $'={\mathrm{d}}/{\mathrm{d}}\ln{a}$).
In this letter, we aim at testing and selecting the viability of different parameterisations of oscillating dark energy in the light of recent astronomical data. We focus our attention on the equation of state for a non-minimally coupled to gravity phantom scalar field with the potential in the simple quadratic form.
Class of kinessence models
==========================
This class of models is understood as a class of FRW models with standard dust matter and dark energy parameterised by redshift, i.e. $w_{X}=w_{X}(z)$ [@Ratra:1987rm; @Wetterich:1987fm]. For simplicity the flat FRW model ($k=0$) is assumed. Then the dynamics of the model is determined by the acceleration equation $$\frac{\ddot{a}}{a}=-\frac{1}{6}\Big(\rho_{\text{m}}+(1+3w_{X})\rho_{X}\Big)=
-\frac{1}{2}H^{2}\Big(\Omega_{\text{m}}+(1+3w_{X})\Omega_{X}\Big),
\label{eq:1}$$ where $a$ is the scale factor, a dot denotes differentiation with respect to the cosmological time, $\Omega_{\text{m}}$ and $\Omega_{X}$ are the density parameters for matter and dark energy $X$, respectively, and $H=(\ln{a})\dot{}$ is the Hubble parameter.
We assume that the standard matter, with energy density $\rho_{\text{m}}=\rho_{\text{m},0}a^{-3}$, is dust, and that the energy density of dark energy is given, from the conservation condition, by $\rho_{X}=\rho_{X,0}a^{-3}\exp{\Big[-3\int_{1}^{a}\frac{w_{X}(a')}{a'}\, {\mathrm{d}}a'\Big]}$.
The acceleration equation (\[eq:1\]) admits a first integral (the Friedmann first integral) in the form $$\bigg(\frac{H}{H_{0}}\bigg)^{2} = \Omega_{\text{m},0}(1+z)^{3} + \Omega_{X,0}f(z),
\label{eq:3}$$ where $H_{0}$ and $\Omega_{i,0}$ are parameters referring to the present epoch, and $z$ is the redshift related to the scale factor by the relation $1+z=a^{-1}$ (the present value of the scale factor $a_{0}=1$). The phenomenological properties of dark energy are described in terms of the function $f(z)$ such that $f(z)=\exp{\Big[3\int_{0}^{z}\frac{1+w_{X}(z')}{1+z'}\, {\mathrm{d}}z'\Big]}$.
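For a concrete illustration (not carried out in the text): for the Chevallier–Polarski–Linder form $w_X(z)=w_0+w_1 z/(1+z)$ quoted below, the integral defining $f(z)$ can be done in closed form, $f(z)=(1+z)^{3(1+w_0+w_1)}\exp\big(-3w_1 z/(1+z)\big)$. A quadrature sketch confirms this and gives the Hubble function of (\[eq:3\]) for any parameterisation.

```python
import numpy as np

def f_numeric(z, w, n=20001):
    # f(z) = exp( 3 int_0^z (1 + w(z'))/(1 + z') dz' ) by trapezoidal rule
    zp = np.linspace(0.0, z, n)
    integrand = 3.0 * (1.0 + w(zp)) / (1.0 + zp)
    h = zp[1] - zp[0]
    integral = h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return np.exp(integral)

# CPL parameterisation with illustrative (not fitted) values of w0, w1.
w0, w1 = -0.9, 0.3
w = lambda zz: w0 + w1 * zz / (1.0 + zz)

z_test = 1.5
closed = (1.0 + z_test)**(3.0 * (1.0 + w0 + w1)) * np.exp(-3.0 * w1 * z_test / (1.0 + z_test))
assert abs(f_numeric(z_test, w) - closed) / closed < 1e-5

# The dimensionless Hubble function of eq. (3) then follows for any Omega_m0:
E = lambda zz, Om: np.sqrt(Om * (1 + zz)**3 + (1 - Om) * f_numeric(zz, w))
```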
In the context of the accelerated expansion of the Universe most theoretical models of dark energy are based on scalar fields. This is a consequence of exploring an analogy with the inflationary theory of the primordial universe [@Abbott:1981rg]. However, a single canonical scalar field cannot explain the range $w<-1$ of the coefficient of the equation of state, which is preferred by the astronomical data [@Kurek:2007tb]. One possibility, which has received much attention, is to formally allow the scalar field to have negative kinetic energy, switching its sign in comparison with the canonical scalar field. There is some physical motivation for introducing such a phantom scalar field, arising from string/M theory and from supergravity [@MersiniHoughton:2001su]. Another possibility lies in the introduction of the coupling term $\xi R\psi^{2}$ between the scalar field and gravity (for a review and references see [@Faraoni:2006ik]). Such a theory offers scalar-tensor models of dark energy called extended quintessence.
If we assume that a source of gravity is the phantom scalar field $\psi$ with an arbitrary coupling constant $\xi$ then the dynamics is governed by the action $$S=\frac{1}{2}\int {\mathrm{d}}^{4}x \sqrt{-g}\Big(m_{p}^{2}R +
(g^{\mu\nu}\psi_{\mu}\psi_{\nu} + \xi R\psi^{2} - 2 V(\psi))\Big)
\label{eq:2}$$ where $m_{p}^{2}=(8\pi G)^{-1}$ and $V(\psi)$ is a scalar field potential.
The phantom cosmology with general potentials was studied by Faraoni [@Faraoni:2005gg], using the language of the qualitative analysis of differential equations, to obtain the late time attractors without specific assumptions on the shape of the potential function. There are many reasons why we should consider a non-zero coupling constant $\xi$. First, a non-zero $\xi$ is generated by quantum corrections even if it is absent in the classical action (see [@Faraoni:2000gx] and references therein). Another reason is that the non-minimal coupling is motivated by the renormalization of the Einstein–Klein–Gordon equation. Of course the value of the coupling constant should be fixed by physics alone, but in relativity any value of the parameter $\xi$ different from $1/6$ (conformal coupling) gives rise to a violation of the equivalence principle [@Szydlowski:2008zza].
In our paper [@Hrycyna:2007gd] we studied generic features of the evolutional paths of the flat FRW model with the phantom scalar field non-minimally coupled to gravity by using dynamical systems methods. We reduced the dynamics of the model to an autonomous dynamical system on the invariant submanifold $(\psi,\psi')$ (a prime denotes differentiation with respect to the natural logarithm of the scale factor) $$\begin{aligned}
\label{sys}
\psi' &= y \nonumber \\
y' &= -y- y^{2}(y+6\xi\psi)\frac{1-6\xi}{1+6\xi\psi^{2}(1-6\xi)} - \\
&-
\frac{1+(1-6\xi)y^{2}+6\xi(y+\psi)^{2}}{1+6\xi\psi^{2}(1-6\xi)}
\bigg[ \frac{2\psi(y+6\xi\psi) -
\big(1+6\xi\psi(y+\psi)\big)}{\psi}\bigg] \nonumber \end{aligned}$$ where $V(\psi) \propto \psi^2$ is assumed. We found that generically there is one asymptotic state, which corresponds to the critical point in the phase space $\psi_{0}=\pm1/\sqrt{6\xi}$, $\psi_{0}'=0$. This critical point is also the de Sitter state ($w_{X}=-1$). Note that, in contrast to standard phantom cosmology, the problem of the big rip singularity does not appear in this model, because the late time attractors in the phase space represent the de Sitter stage. There are two types of evolutional scenarios leading to this Lambda state, depending on the value of $\xi$:
- [the monotonic evolution toward the critical point of a node type for $0<\xi\le3/25$, (Fig. \[fig:1\]), in the special case $\xi=3/25$ we obtain a degenerate node;]{}
- [the damping oscillations around the critical point of a focus type for $3/25<\xi<1/3$, (Fig. \[fig:2\]).]{}
![The phase portrait represents generic behaviour of the system (\[sys\]) around the critical point of a stable node type.[]{data-label="fig:1"}](fig1.eps)
![The phase portrait represents the generic behaviour of the system (\[sys\]) around a focus type critical point.[]{data-label="fig:2"}](fig2.eps)
The effect of a non-minimal coupling can be treated as an effect of fictitious fluid with some effective coefficient of the equation of state given by $$w_{\mathrm{eff}} = \frac{2}{1+6\xi\psi^{2}(1-6\xi)}\bigg\{\frac{1}{2}[1+2\xi\psi^{2}(1-6\xi)]
-(1-2\xi)[1+(1-6\xi)\psi'^{2}] - 4\xi(1-3\xi)(\psi'+\psi)^{2}\bigg\}.$$ In both evolutional scenarios we can find linearised solutions of the dynamical system in the vicinity of the critical point, which, by the Hartman–Grobman theorem, are good approximations of the original system.
Finally one can compute linearised formulas for $w(z)$ around the corresponding critical point for both cases (see Appendix \[appa\]).
In what follows we concentrate on the special case of parameterisation (\[osc\]) with $\xi=1/6$, which corresponds to the conformally coupled phantom scalar field [@Szydlowski:2008zza]. We will confront it (using the Bayesian model selection method) with the most popular dark energy parameterisations of $w_X(z)$, which are presented in Table \[tab:1\].
-----------------------------------------------------------------------------------------------
case parameterisation
------ ----------------------------------------------------------------------------------------
(1) Chevallier-Polarski-Linder [@Chevallier:2000qy; @Linder:2004ng]
$w_X(z)=w_{0}+w_{1}\frac{z}{1+z}$
(2) purely oscillating dark energy
a\) $w_X(z)=w_{0}\cos{(\omega_{c}\ln{(1+z)})}$
b\) $w_X(z)=-1+w_{0}\sin{(\omega_{s}\ln{(1+z)})}$
(3) damping osc. DE
a\) $w_X(z)=w_{0}(1+z)^{3}\cos{(\omega_{c}\ln(1+z))}$
b\) $w_X(z)=-1+w_{0}(1+z)^{3}\sin{(\omega_{s}\ln{(1+z)})}$
(4) damping osc. DE parameterisation determined directly from
the dynamics of phantom scalar field model [@Hrycyna:2007gd]
a\) $\xi =\frac{1}{6}$:
$w_X(z)=-1-\frac{4}{3}(1+z)^{3/2}\Big((\cos{(\frac{\sqrt{7}}{2}\ln{(1+z)})} +
\frac{5\sqrt{7}}{7}\sin{(\frac{\sqrt{7}}{2}\ln{(1+z)})})x_{0}+$
$(\cos{(\frac{\sqrt{7}}{2}\ln{(1+z)})} +
\frac{\sqrt{7}}{7}\sin{(\frac{\sqrt{7}}{2}\ln{(1+z)})})y_{0}\Big)$
$-\frac{2}{3}(1+z)^3\Big((\cos{(\frac{\sqrt{7}}{2}\ln{(1+z)})} +
\frac{5\sqrt{7}}{7}\sin{(\frac{\sqrt{7}}{2}\ln{(1+z)})})x_{0}+$
$(\cos{(\frac{\sqrt{7}}{2}\ln{(1+z)})} +
\frac{\sqrt{7}}{7}\sin{(\frac{\sqrt{7}}{2}\ln{(1+z)})})y_{0}\Big)^2$
b\) $\xi =\frac{1}{6}$, $y_0= \alpha x_0$ :
$w_X(z)=-1-\frac{4}{3}(1+z)^{3/2}\Big((1+\alpha)\cos{(\frac{\sqrt{7}}{2}\ln{(1+z)})} +
\frac{\sqrt{7}}{7}(5+\alpha)\sin{(\frac{\sqrt{7}}{2}\ln{(1+z)})}\Big)x_{0}-$
$-\frac{2}{3}(1+z)^3\Big((1+\alpha)\cos{(\frac{\sqrt{7}}{2}\ln{(1+z)})} +
\frac{\sqrt{7}}{7}(5+\alpha)\sin{(\frac{\sqrt{7}}{2}\ln{(1+z)})}\Big)^{2}x_{0}^{2}$
-----------------------------------------------------------------------------------------------
: \[tab:1\]Different dark energy parameterisations in terms of $w_{X}(z)\equiv p_{X}/\rho_{X}$ – the coefficient of EoS.
Recently, it has been argued that models with oscillating dark energy are favoured over a model with a linear parameterisation of EoS [@Kurek:2007bu; @Jain:2007fa].
Results
=======
Bayesian method of model comparison
-----------------------------------
To find the best parameterisation of $w_X(z)$ we use the Bayesian method of model comparison [@Jeffreys:1961]. Here the best model ($M$) from the set of models under consideration is the one which has the greatest value of the probability in the light of the data ($D$) (posterior probability) $$P(M|D)=\frac{P(D|M)P(M)}{P(D)}.$$ $P(M)$ is the prior probability for model $M$, $P(D)$ is the normalisation constant and $P(D|M)$ is the model likelihood (also called the evidence), given by $P(D|M)=\int P(D|\bar{\theta},M)P(\bar{\theta}|M) d\bar{\theta}$, where $P(D|\bar{\theta},M)=\mathrm{L}(\bar{\theta})$ is the likelihood function for model $M$ and $P(\bar{\theta}|M)$ is the prior probability for the model parameters $\bar{\theta}$. It is convenient to consider the ratio of the posterior probabilities for the models which we want to compare, $\frac{P(M_1|D)}{P(M_2|D)}=\frac{P(D|M_1)}{P(D|M_2)}\frac{P(M_1)}{P(M_2)}$. If we have no prior information to favour one model over another ($P(M_1)=P(M_2)$), the posterior ratio reduces to the ratio of the model likelihoods, the so-called Bayes factor ($B_{12}$), whose values can be interpreted as the strength of the evidence in favour of model $M_1$ over model $M_2$ [@Trotta:2008qt]: $0< \ln B_{12} <1 $ – ‘inconclusive’; $1<\ln B_{12} <2.5$ – ‘weak’; $2.5<\ln B_{12} <5$ – ‘moderate’; $\ln B_{12} >5$ – ‘strong’.
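A toy illustration of the evidence integral (not the CosmoNest computation used in the paper): with the same likelihood, a needlessly wide prior is penalised, since the evidence averages the likelihood over the prior volume. All numbers below are illustrative assumptions.

```python
import numpy as np

def evidence(loglike, lo, hi, n=100001):
    # P(D|M) = int L(theta) p(theta|M) dtheta, with flat prior p = 1/(hi - lo)
    theta = np.linspace(lo, hi, n)
    L = np.exp(loglike(theta))
    integral = float(np.sum(0.5 * (L[1:] + L[:-1]) * np.diff(theta)))
    return integral / (hi - lo)

# One mock Gaussian likelihood, two flat priors of different width.
loglike = lambda th: -0.5 * (th - 0.2)**2 / 0.05**2

Z_narrow = evidence(loglike, -1.0, 1.0)
Z_wide = evidence(loglike, -10.0, 10.0)
lnB = np.log(Z_narrow / Z_wide)            # = ln 10 here: the Occam penalty

def interpret(lnB):
    # the Jeffreys-type scale quoted in the text
    if lnB < 1:
        return 'inconclusive'
    if lnB < 2.5:
        return 'weak'
    if lnB < 5:
        return 'moderate'
    return 'strong'

assert Z_narrow > Z_wide
assert interpret(lnB) == 'weak'
```

Since both priors contain essentially all the likelihood mass, the Bayes factor is just the ratio of the prior widths, $\ln B = \ln 10 \approx 2.30$.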
The values of the Bayesian evidence for the models with $w_X(z)$ defined in Table \[tab:1\] were obtained using a nested sampling algorithm [@Skilling], whose implementation for the cosmological case is available as a part of the CosmoMC code [@cosmo:1; @Lewis:2002ah], called CosmoNest [@cosmo:2; @Mukherjee:2005wg; @Mukherjee:2005tr; @Parkinson:2006ku]. We modified it for our purposes. We assume flat prior probabilities for the model parameters in the following intervals: $\Omega_{\text{m},0} \in [0,1]$ and $w_0 \in [-2,0]$, $w_1 \in [-3,3]$ (Model 1); $w_0 \in [-2,0]$, $\omega_c \in [0,2]$ (Model 2a and Model 3a); $w_0 \in [-2,2]$, $\omega_s \in [0,2]$ (Model 2b and Model 3b); $x_0 \in [-1,1], y_0 \in [-1, 1]$ (Model 4a); $x_0 \in [-1,1], \alpha\in[-3,0]$ (Model 4b). The values of the evidence were averaged over eight runs.
Analysis with SNIa, CMB R and BAO A data
----------------------------------------
To compare the models gathered in Table \[tab:1\] we use information coming from the sample of $N_1=192$ SNIa [@Davis:2007na], which consists of the ESSENCE sample [@WoodVasey:2007jb] and SNIa detected by HST [@Riess:2006fw]. After suitable calibration SNIa can be treated as standard candles, and tests of the assumed cosmology can be performed. In this case the likelihood function has the following form $$\mathcal{L}'_{\text{SN}}\propto \exp
\left[-\frac{1}{2}\left(\sum_{i=1}^{N_1}\frac{(\mu_{i}^{\text{theor}}-\mu_{i}^{\text{obs}})^{2}}{\sigma_{i}^{2}}\right)
\right],$$ where $\sigma_{i}$ is known, $\mu_{i}^{\text{obs}}=m_{i}-M$ ($m_{i}$–apparent magnitude, $M$–absolute magnitude of SNIa), $\mu_{i}^{\text{theor}}=5\log_{10}D_{Li} +
\mathcal{M}$, $\mathcal{M}=-5\log_{10}H_{0}+25$ and $D_{Li}=H_{0}d_{Li}$, where $d_{Li}$ is the luminosity distance, which (with the assumption that the Universe is spatially flat) is given by $d_{Li}=(1+z_{i})c\int_{0}^{z_{i}} \frac{dz'}{H(z')}$ and $H(z)$ is defined in equation (\[eq:3\]). After an analytical marginalisation over the nuisance parameter $\mathcal{M}$ one can obtain the likelihood function $\mathcal{L}_{\text{SN}}$ which does not depend on the parameter $H_0$.
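The marginalisation over $\mathcal{M}$ is a standard closed-form manipulation (not spelled out in the text): writing $\Delta_i$ for the residuals before the offset is applied, marginalising (or equivalently, up to an additive constant, minimising) over the additive constant gives $\chi^2_{\text{marg}} = A - B^2/C$ with $A=\sum\Delta_i^2/\sigma_i^2$, $B=\sum\Delta_i/\sigma_i^2$, $C=\sum 1/\sigma_i^2$. A sketch with mock residuals:

```python
import numpy as np

# Mock residuals standing in for mu_obs - 5 log10(D_L) (illustrative only).
rng = np.random.default_rng(0)
delta = rng.normal(0.1, 0.2, size=50)
sigma = np.full(50, 0.2)

A = np.sum((delta / sigma)**2)
B = np.sum(delta / sigma**2)
Cc = np.sum(1.0 / sigma**2)
chi2_marg = A - B**2 / Cc                  # offset-marginalised chi^2

# Brute-force cross-check: minimise chi^2 over a grid of additive offsets.
offsets = np.linspace(-1.0, 1.0, 20001)
chi2 = (((delta[None, :] - offsets[:, None]) / sigma)**2).sum(axis=1)
assert abs(chi2.min() - chi2_marg) < 1e-4
```

The parameter dependence of the marginalised likelihood is entirely through $A - B^2/C$, since $C$ does not depend on the cosmological parameters; this is why $\mathcal{L}_{\text{SN}}$ is independent of $H_0$.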
We also include information coming from the CMB data using measurement of the shift parameter ($R^{\text{obs}}=1.70 \pm 0.03$ for $z_{\text{dec}}=1089$) [@Spergel:2006hy; @Wang:2006ts], which is related to the first acoustic peak in the temperature power spectrum and is given by $R^{\text{theor}} =
\sqrt{\Omega_{\text{m},0}}\int_{0}^{z_{dec}}\frac{H_0}{H(z)}dz$. Here the likelihood function has the following form $$\mathcal{L}_{R} \propto \exp \left[- \frac
{(R^{\text{theor}}-R^{\text{obs}})^2}{2\sigma_{R}^2} \right].$$
As the third observational data we use the SDSS luminous red galaxies measurement of $A$ parameter ($A^{\text{obs}}=0.469 \pm 0.017$ for $z_{A}=0.35$) [@Eisenstein:2005su], which is related to the baryon acoustic oscillations (BAO) peak and defined in the following way $A^{\text{theor}}=\sqrt{\Omega_{\text{m},0}} \left (\frac{H(z_A)}{H_{0}}
\right ) ^{-\frac{1}{3}} \left [ \frac{1}{z_{A}} \int_{0}^{z_{A}}\frac
{H_0}{H(z)} dz\right]^{\frac{2}{3}}$. In this case the likelihood function has the following form $$\mathcal{L}_{A} \propto \exp \left[
-\frac{(A^{\text{theor}}-A^{\text{obs}})^2}{2\sigma_{A}^2}\right ].$$
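Both observables are one-dimensional integrals of $1/E(z)$ and are easy to evaluate. The sketch below (an illustration, not the paper's code) computes $R^{\text{theor}}$ and $A^{\text{theor}}$ for a flat $\Lambda$CDM model with the illustrative choice $\Omega_{\text{m},0}=0.27$; both land close to the quoted measurements $R=1.70\pm0.03$ and $A=0.469\pm0.017$.

```python
import numpy as np

Om = 0.27                                   # illustrative, not a fit from the paper
E = lambda z: np.sqrt(Om * (1 + z)**3 + 1 - Om)   # flat LCDM, w_X = -1

def integral_invE(z_max, n=200001):
    # int_0^{z_max} dz / E(z) by the trapezoidal rule
    z = np.linspace(0.0, z_max, n)
    y = 1.0 / E(z)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))

z_dec, z_A = 1089.0, 0.35
R = np.sqrt(Om) * integral_invE(z_dec)
A = np.sqrt(Om) * E(z_A)**(-1.0 / 3.0) * (integral_invE(z_A) / z_A)**(2.0 / 3.0)

assert 1.5 < R < 1.9
assert 0.40 < A < 0.53
```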
The final likelihood function used in analysis is given by $\mathcal{L}=\mathcal{L}_{\text{SN}}\mathcal{L}_{R}\mathcal{L}_{A}$.\
The results, i.e. values of $\ln B_{1i}$ together with their uncertainties, computed with respect to the model with linear in $a$ parameterisation of $w_X(z)$, are presented in the first column of Table \[tab:2\].
As we can conclude, there is weak evidence to favour the purely oscillating model (2b) over the model with the parameterisation linear in $a$. The comparison with models 2a, 3b and 4a is inconclusive, which means that the data set used in the analysis is not powerful enough to distinguish these models. Additional information, coming from a different or more accurate data set, is required. The value of the logarithm of the Bayes factor calculated with respect to model 4b is close to $-1$, which could indicate weak evidence in its favour, but more information is needed to make this conclusion robust. There is moderate evidence to favour model 1 over the model with damping oscillations (3a).
We can also check whether the damping term, i.e. $(1+z)^3$, is required by the data. Comparing model 2a with 3a, one concludes that there is moderate evidence to favour the purely oscillating parameterisation ($\ln B_{\text{2a3a}}=4.32$), while comparing 2b with 3b shows weak evidence in favour of the purely oscillating one ($\ln B_{\text{2b3b}}=2.13$).
The comparison of the best parameterisation from the set of models with pure oscillations (2b) with the best one among the models with damping oscillations (4b) does not give a conclusive answer: $\ln B_{\text{2b4b}}=1.06$.
Finally one can compare the models with the dynamical dark energy parameterisations of the equation of state gathered in Table \[tab:1\] with the simplest alternative, i.e. the $\Lambda$CDM model with $w=-1$. One can conclude that this model is still the best one in the light of the SNIa data, the CMB $R$ shift parameter and the BAO $A$ parameter. However, the comparison of the $\Lambda$CDM model with the purely oscillating model (2b) is inconclusive ($\ln B_{\Lambda\text{CDM,2b}}=1.05$).\
It is interesting that the model with pure oscillations (2b) is favoured by the data over the model with the parameterisation of $w(z)$ linear in $a$. The model with damping oscillations (4b) also fares well when compared with model 1. One can try to understand the reason for these conclusions. In Figure \[fig:3\] the functions $w(z)$ for the $\Lambda$CDM model, model 1, model 2b and model 4b are presented, calculated for the best fit values of the model parameters (obtained in the analysis with the SNIa, CMB R and BAO A data), while in Figure \[fig:4\] we present the corresponding distance modulus vs redshift relations (with the additional assumption that $H_0=72$ kms${}^{-1}$ Mpc${}^{-1}$).
![The functions $w(z)$ for the $\Lambda$CDM model, model with linear in $a$ parameterisation of $w(z)$ (1), model with purely oscillations (2b) and model with damping oscillations (4b), calculated for the best fit values of model parameters (SNIa+CMBR+BAOA data).[]{data-label="fig:3"}](fig3.eps)
![The distance modulus vs redshift relations for the $\Lambda$CDM model, model with linear in $a$ parameterisation of $w(z)$ (1), model with purely oscillations (2b) and model with damping oscillations (4b), calculated for the best fit values of model parameters (SNIa+CMBR+BAOA data) and with the assumption that $H_0=72$ kms${}^{-1}$ Mpc${}^{-1}$. The SNIa data set is also presented.[]{data-label="fig:4"}](fig4.eps)
As we can conclude, in spite of the prominent differences in the functions $w(z)$, the distance modulus relations are nearly identical for these models. One should keep in mind that the parameters were fitted using data based on the luminosity distance, in which case $w(z)$ is integrated twice.
Analysis with $H(z)$ data added
-------------------------------
It is interesting to consider the relation $H(z)$, as it depends on $w(z)$ through a single integral. Unfortunately, the available measurements of the Hubble function at different redshifts are few and inaccurate. However, the $H(z)$ data set could give us another insight into the problem considered.
One possibility to measure the Hubble parameter as a function of redshift is based on the differential ages $dt/dz$ of passively evolving luminous red galaxies (LRG), which are related to the Hubble function through $$H(z)=-\frac{1}{1+z} \frac{dz}{dt}.$$ Using the Gemini Deep Deep Survey and archival data, the authors of [@Simon:2004tf] obtained nine values of the Hubble parameter at different redshifts in the range $0.09<z<1.75$. Although this data set is small and has large uncertainties, we include it in our analysis.
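As an illustration (not the authors' pipeline, and with purely hypothetical numbers), the relation above can be applied to a pair of passively evolving galaxy samples whose ages are estimated at two nearby redshifts, approximating $dz/dt$ by a finite difference:

```python
# Sketch: estimating H(z) from differential galaxy ages via
# H(z) = -(1/(1+z)) dz/dt.  Ages in Gyr, H returned in km/s/Mpc.

GYR_TO_S = 3.156e16    # seconds per gigayear
MPC_TO_KM = 3.086e19   # kilometres per megaparsec

def hubble_from_ages(z_lo, z_hi, t_lo, t_hi):
    """H at the mean redshift of two LRG samples at (z_lo, z_hi)
    with ages (t_lo, t_hi) in Gyr; dz/dt is a finite difference.
    Note t decreases with z, so dz/dt < 0 and H comes out positive."""
    z_mean = 0.5 * (z_lo + z_hi)
    dz_dt = (z_hi - z_lo) / ((t_hi - t_lo) * GYR_TO_S)  # 1/s
    h_si = -dz_dt / (1.0 + z_mean)                      # 1/s
    return h_si * MPC_TO_KM                             # km/s/Mpc
```

For a pair separated by $\Delta z=0.1$ around $z\simeq 0.15$ with an age difference of about $1.2$ Gyr this yields $H\simeq 70$ km s${}^{-1}$ Mpc${}^{-1}$; in practice the uncertainty is dominated by the absolute age estimates.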
Another method to determine the Hubble function values at various redshifts is based on line of sight (LOS) baryon acoustic oscillation scale measurements. The scale of the BAO in the radial direction depends on $H(z)$. On the other hand, a precise measurement of this scale is provided by the CMB observations, so the comparison gives us the value of the Hubble parameter. Based on this method and using the SDSS DR6 luminous red galaxy data, the authors of [@Gaztanaga:2008xz] obtained the values of $H$ at three different redshifts. The uncertainties are much reduced when compared with the previous data set. We include those points in our analysis.
To complete our $H(z)$ data set we use the HST measurement of $H_0$ [@Freedman:2000cf].
We repeat the previous calculations with the additional $N_2=13$ Hubble function measurements. The corresponding likelihood function has the following form: $\mathcal{L}=\mathcal{L}_{SN}\mathcal{L}_{R}\mathcal{L}_{A}\mathcal{L}_{H}$, where $$\mathcal{L}_{H} \propto \exp \left[-\frac{1}{2} \sum_{i=1}^{N_2}\left(
\frac{(H^{\text{theor}}(z_i)-H_i^{\text{obs}})^2}{\sigma_{H i}^2}\right ) \right ].$$ The values of $\ln B_{1i}$ together with their uncertainties are gathered in the second column of Table \[tab:2\]. As one can conclude, the inclusion of the $H(z)$ data does not change our conclusions in most cases. There is still weak evidence in favour of the model with purely oscillations (2b) over the model with the linear in $a$ parameterisation of $w$. The evidence in favour of model 4b becomes slightly greater (weak evidence in favour of this model over model 1). The evidence against model 3a is even greater than in the previous calculations; we now find strong evidence against it.
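The Gaussian form of $\mathcal{L}_H$ amounts to an ordinary $\chi^2$ sum; a minimal sketch (the function name is ours):

```python
import numpy as np

def ln_likelihood_H(h_theor, h_obs, sigma_h):
    """ln L_H up to an additive constant:
    -(1/2) * sum_i ((H_theor(z_i) - H_obs_i) / sigma_Hi)^2."""
    r = (np.asarray(h_theor) - np.asarray(h_obs)) / np.asarray(sigma_h)
    return -0.5 * np.sum(r * r)
```

The Bayesian evidence $P(D|M)$ entering the Bayes factors additionally requires integrating this likelihood over the prior volume of the model parameters.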
The evidence in favour of the model with purely oscillations (2a) over the model with the damping term (3a) is strong ($\ln B_{2a3a}=6.2$), while the evidence in favour of model 2b over model 3b is moderate ($\ln B_{2b3b}=2.57$).
The comparison of the best model among the models with purely oscillations, i.e. 2b, with the best model from the set of models with a damping term, i.e. 4b, does not give a conclusive answer ($\ln B_{2b4b}=0.99$). The $\Lambda$CDM model is still the best one among the models considered; however, its comparison with model 2b is inconclusive ($\ln B_{\Lambda\text{CDM},2b}=0.97$).
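The qualitative labels used throughout ("weak", "moderate", "strong" evidence) follow a Jeffreys-type scale; the thresholds below are inferred from how the quoted $\ln B$ values are described in the text, so they are our reading rather than an official convention:

```python
def evidence_strength(ln_b):
    """Qualitative strength of evidence for |ln B| on a Jeffreys-type
    scale: < 1 inconclusive, 1-2.5 weak, 2.5-5 moderate, > 5 strong."""
    x = abs(ln_b)
    if x < 1.0:
        return "inconclusive"
    if x < 2.5:
        return "weak"
    if x < 5.0:
        return "moderate"
    return "strong"
```

With these thresholds, $0.97$ ($\Lambda$CDM vs 2b) is inconclusive, $2.57$ (2b vs 3b) is moderate and $6.2$ (2a vs 3a) is strong, matching the wording above.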
We present the relations $H(z)$ for model 1, model 2b, model 4b and the $\Lambda$CDM model in Figure \[fig:5\]. The Hubble functions were derived for the best fit model parameters in the analysis with SNIa, CMB R, BAO A and observational H(z) data.
![The $H(z)$ functions for the $\Lambda$CDM model, model with linear in $a$ parameterisation of $w(z)$ (1), model with purely oscillations (2b) and model with damping oscillations (4b), calculated for the best fit values of model parameters (SNIa+CMBR+BAOA+H data). The filled square, circle and filled circle points correspond to observational $H$ data from [@Simon:2004tf], [@Gaztanaga:2008xz] and [@Freedman:2000cf], respectively.[]{data-label="fig:5"}](fig5.eps)
As one can conclude, the relations $H(z)$ for the models considered are similar in the redshift range under consideration. More data of better quality are required. The most promising future $H$ data will come from the BAO measurements, since this method gives much more precise data points than the alternative method.
Analysis with growth rate function data added
---------------------------------------------
The conclusions stated before are based on geometrical dark energy probes. It is interesting to check how the inclusion of dynamical probes, related to the growth of structures, changes the results. We consider observations of the growth rate function $f$, which is related to the growth function $D$ by the formula $f\equiv d \ln D / d \ln a $. Its evolution in the framework of general relativity is described by the following equation $$\label{eq:f}
a\frac{df}{da}=-f^2-f\left(\frac{1}{2} -\frac{3}{2} (1-\Omega_{\text{m}}(a))w(a) \right) + \frac{3}{2}\Omega_{\text{m}}(a),$$ where $\Omega_{\text{m}}(a)=\frac{\Omega_{\text{m},0}a^{-3}}{H^2/H_0^2}$.
The values of the growth rate ($f^{\text{theor}}$) at various scale factors ($a$) for the models considered were obtained with the help of Eq. \[eq:f\], which was solved numerically with the initial condition $f(a \simeq 0)=1$.
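For the $\Lambda$CDM case ($w=-1$) the numerical solution of Eq. \[eq:f\] can be sketched as follows (a minimal illustration using SciPy, not the authors' code; the parameter values are ours):

```python
import numpy as np
from scipy.integrate import solve_ivp

def growth_rate_lcdm(omega_m0=0.27, a_init=1e-3):
    """Solve a df/da = -f^2 - f(1/2 - 3/2 (1 - Om(a)) w) + 3/2 Om(a)
    for LCDM (w = -1), with the matter-era initial condition f = 1."""
    a_grid = np.linspace(a_init, 1.0, 200)

    def omega_m(a):
        e2 = omega_m0 * a**-3 + (1.0 - omega_m0)  # H^2/H0^2 for LCDM
        return omega_m0 * a**-3 / e2

    def rhs(a, f):
        om = omega_m(a)
        w = -1.0
        return (-f**2 - f * (0.5 - 1.5 * (1.0 - om) * w) + 1.5 * om) / a

    sol = solve_ivp(rhs, (a_init, 1.0), [1.0], t_eval=a_grid, rtol=1e-8)
    return sol.t, sol.y[0]
```

At $a=1$ the solution reproduces the well-known approximation $f\simeq\Omega_{\text{m}}^{0.55}$ to better than a percent.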
The observational growth rate data ($f^{\text{obs}}$) can be obtained through measurements of the redshift distortion parameter $\beta$, which is observed through the anisotropic pattern of galactic redshifts on cluster scales. It is related to the growth rate function by the formula $\beta \equiv f / b$, where the so called bias parameter $b$ reflects the fact that the galaxy distribution does not perfectly trace the matter distribution in the Universe. Currently only a few measurements of $f$ are available (see Table \[tab:3\]). This data set is similar to the one presented in [@Wei:2008rv]. We do not consider the data points at $z=0.55$ and $z=1.4$, as the bias parameter was derived with the help of the value of $\beta$ in those cases. The measurement at $z=3$ was obtained with a different method, which does not rely on the $\beta$ and $b$ parameters; the value of $f$ is found from an analysis of Ly-$\alpha$ forest data.
z $\beta$ $b$ $f$ references
------ ----------------- ----------------- ----------------- ------------------------------------
0.15 0.49 $\pm 0.09$ 1.04 $\pm 0.11$ 0.51 $\pm 0.11$ [@Hawkins:2002sg], [@Verde:2001sf]
0.35 0.31 $\pm 0.04$ 2.25 $\pm 0.08$ 0.70 $\pm 0.18$ [@Tegmark:2006az]
0.77 0.70 $\pm 0.26$ 1.30 $\pm 0.10$ 0.91 $\pm 0.36$ [@Guzzo:2008ac]
3.00 - - 1.46 $\pm 0.29$ [@McDonald:2004xn]
: The values of distortion parameter $\beta$, bias parameter $b$ and corresponding growth rate function $f=\beta b$ which are used in the calculations.
\[tab:3\]
It should be kept in mind that this data set was obtained under the assumption of the $\Lambda$CDM model. Its inclusion in the analysis of other models could decrease the reliability of the results, which should therefore be treated with care.
The likelihood function used in this analysis has the form $\mathcal{L}=\mathcal{L}_{SN}\mathcal{L}_{R}\mathcal{L}_{A}\mathcal{L}_{H}\mathcal{L}_{f}$, where $$\mathcal{L}_{f} \propto \exp \left[-\frac{1}{2} \sum_{i=1}^{N_3} \left(
\frac{(f^{\text{theor}}(a_i)-f_i^{\text{obs}})^2}{\sigma_{fi}^2} \right) \right ],$$ with $N_3=4$. The values of $\ln B_{1i}$ and their uncertainties are gathered in the third column of Table \[tab:2\].
As one can conclude, the final conclusions do not change in any case; this data set is not informative enough to alter the results.
In Figure \[fig:6\] one can find a plot of the growth rate as a function of the scale factor for the $\Lambda$CDM model, model 1, model 2b and model 4b, calculated for the best fit values of model parameters (in the analysis with SNIa,CMB R, BAO A, H and f data).
![The $f(z)$ functions for the $\Lambda$CDM model, model with linear in $a$ parameterisation of $w(z)$ (1), model with purely oscillations (2b) and model with damping oscillations (4b), calculated for the best fit values of model parameters (SNIa+CMBR+BAOA+H+f data).[]{data-label="fig:6"}](fig6.eps)
The relation $f(a)$ for model 4b differs from the other relations. However, the data points have large uncertainties, which prevent this set from distinguishing between the models considered. This is in agreement with our previous conclusion.
MODEL SNIa+CMBR+BAOA SNIa+CMBR+BAOA+H SNIa+CMBR+BAOA+H+f
-------------- ------------------ ------------------ --------------------
1 0 0 0
2a 0.29 $\pm 0.22$ 0.33 $\pm 0.21$ 0.18 $\pm 0.22$
2b -2.08 $\pm 0.13$ -2.14 $\pm 0.20$ -2.24 $\pm 0.23$
3a 4.61 $\pm 0.18$ 6.53 $\pm 0.24$ 6.3 $\pm 0.23$
3b 0.05 $\pm 0.12$ 0.43 $\pm 0.23$ 0.34 $\pm 0.25$
4a 0.5 $\pm 0.13$ 0.31 $\pm 0.23$ 0.25 $\pm 0.24$
4b -1.02 $\pm 0.18$ -1.15 $\pm 0.23$ -1.21 $\pm 0.23$
$\Lambda$CDM -3.13 $\pm 0.16$ -3.11 $\pm 0.22$ -3.3 $\pm 0.23$
: The values of $\ln(B_{1i})=\ln P(D|M_1) - \ln P(D|M_i)$ calculated with respect to the model with linear in $a$ parameterisation of $w_X(z)$ for different data sets.
\[tab:2\]
Discussion
----------
In spite of the fact that the models with an oscillating relation for $w(z)$ (i.e. 2b and 4b) fare well when compared with the model in which $w(a)$ is a linear function of the scale factor, the oscillating behaviour is not seen in the $w(z)$ vs $z$ plots over the redshift interval considered. While the frequency parameter of model 4b is fixed by the theory, it appears as a free parameter in model 2b. The relation for $w(z)$ in model 4b is complicated; however, it can be rewritten as a sum of sine and cosine components, with amplitudes which depend on $z$ as well as on the model parameters, while the frequency parameters $w_s$ are equal to $\sqrt{7}/2$ or $\sqrt{7}$. If we consider the relation $| w_s \ln(1+z) |= 2 \pi$, we find that a full oscillation should span a redshift range of about $\Delta z \simeq 10 $. Unfortunately most of the data points used in the analysis are at $z < 2 $, so the oscillating behaviour is not seen; more observations at higher redshifts are needed. On the contrary, as was stated before, the frequency parameter of model 2b is a free one. The assumed prior range for this parameter (i.e. $w_s \in [0,2]$) corresponds to a period of oscillation of at least $\Delta z \simeq 22 $, again too large to be observed with the present data sets. It is interesting to consider a situation in which the oscillating behaviour could be detected. This can be done by assuming a different prior range for the frequency parameter. We repeat the calculations for model 2b with the assumption that $w_s \in [2, 4.5]$ (model 2b1), which corresponds to a period of oscillations of at least $\Delta z \simeq 3 $ (the redshift of the most distant data point, apart of course from the one at $z=1089$). The value of the logarithm of the Bayes factor, calculated with respect to model 2b, is $\ln B_{2b,2b1} = 4.1 \pm 0.17$. This means that the evidence against model 2b1 is moderate.
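The period estimates quoted above follow directly from $| w_s \ln(1+z) |= 2 \pi$; a one-line check (the function name is ours):

```python
import math

def full_period_redshift(w_s):
    """Redshift span over which an oscillation in ln(1+z) with frequency
    w_s completes one full period starting from z = 0:
    solve |w_s ln(1+z)| = 2*pi for z."""
    return math.exp(2.0 * math.pi / w_s) - 1.0
```

This gives $\Delta z\simeq 9.7$ for $w_s=\sqrt{7}$ (model 4b), $\Delta z\simeq 22$ for the lower prior edge $w_s=2$ (model 2b) and $\Delta z\simeq 3$ for $w_s=4.5$ (model 2b1), in agreement with the numbers quoted above.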
We can conclude that the available data sets prefer a model with a period of oscillations larger than could be detected at present.
Conclusions
===========
We use the Bayesian method of model selection to compare FRW models with different functional forms of dynamical dark energy (different parameterisations of the EoS). We examine two categories of parameterisations: those assumed [*a priori*]{} and those derived from the model dynamics. We show that two parameterisations are favoured over the most popular one, linear in the scale factor.
In particular we obtain the following results:
- the parameterisation with purely oscillations, i.e. 2b, is the best one among the parameterisations considered in this paper;
- there is weak evidence in favour of this parameterisation over the linear in $a$ parameterisation of the EoS (this conclusion is based on the SNIa, CMB R and BAO A data sets and does not change after the inclusion of the observational $H$ and $f$ data);
- the data sets used in the analysis prefer model 2b, in which the oscillating behaviour could not be detected at present;
- the comparison of model 4b with the linear in $a$ parameterisation of the EoS does not give a conclusive answer when it is based on the SNIa, CMB R and BAO A data, but after the inclusion of the $H$ and $f$ data we find weak evidence in favour of this model;
- the comparison of the $\Lambda$CDM model with the model with dark energy parameterised as 2b is inconclusive; a more accurate data set is required to distinguish those models;
- the damping term, i.e. $(1+z)^3$, which appears in parameterisations (3) is not supported by the data used in the analysis.
In studies of cosmological constraints on the form of dark energy, the most popular methodology is to test the viability of different parameterisations of the equation of state parameter. These are usually postulated in a priori forms, without connection to the true model dynamics. Our approach is different: we argue that if the model dynamics is closed, then the corresponding form of the dark energy parameterisation should follow from it. This is because we tested the FRW model with dark energy rather than the parameterisation $w(z)$ itself.
Linearised formulas for $w(z)$ {#appa}
=============================
Here we present linearised formulas for $w(z)$ around the critical point corresponding to the de Sitter state, for monotonic and oscillating evolution toward this point [@Hrycyna:2007gd]: $$w_{X}^{\mathrm{mon}} = \frac{-(1-3\xi) + f_{1}(\xi,a)a^{-3/2} +
f_{2}(\xi,a)a^{-3}}
{(1-3\xi) + 6\xi(1-6\xi)\psi_{0}(A a^{\alpha_{l}}+Ba^{-\alpha_{l}})a^{-3/2} +
3\xi(1-6\xi)(A a^{\alpha_{l}}+ B a^{-\alpha_{l}})^{2}a^{-3}},
\label{lin}$$ where $\psi_{0}^{2}=\frac{1}{6\xi}$, $\alpha_{l}=\frac{\sqrt{3}}{2}\sqrt{\frac{3-25\xi}{1-3\xi}}$, $A =\frac{1}{2}x_{0} +\sqrt{3}\sqrt{\frac{1-3\xi}{3-25\xi}}\Big(\frac{1}{2}x_{0}+\frac{1}{3}y_{0}\Big)$, $B=\frac{1}{2}x_{0}-\sqrt{3}\sqrt{\frac{1-3\xi}{3-25\xi}}\Big(\frac{1}{2}x_{0}+\frac{1}{3}y_{0}\Big)$, $x_{0}$ and $y_{0}$ are the initial conditions for $\psi$ and $\psi'$, respectively, and $f_{1}=2\xi\psi_{0}\Big(\big(3(1-4\xi)-4\alpha_{l}(1-3\xi)\big)Aa^{\alpha_{l}}+\big(3(1-4\xi)+4\alpha_{l}(1-3\xi)\big)Ba^{-\alpha_{l}}\Big)$, $f_{2}=\big(-\frac{3}{4}(3-4\xi)+15\xi(1-2\xi)\big)\Big(A a^{\alpha_{l}}+Ba^{-\alpha_{l}}\Big)^{2} + \alpha_{l}\big(3(1-4\xi)-8\xi(1-3\xi)\big)\Big(A^{2}a^{2\alpha_{l}} -B^{2}a^{-2\alpha_{l}}\Big) -\alpha_{l}^{2}(1-4\xi)\Big(Aa^{\alpha_{l}}-Ba^{-\alpha_{l}}\Big)^{2}$, and $$w_{X}^{\mathrm{osc}} =
\frac{-(1-3\xi)+g_{1}(\xi,a)a^{-3/2}+g_{2}(\xi,a)a^{-3}}
{(1-3\xi)+6\xi(1-6\xi)\psi_{0}h(\xi,a)a^{-3/2}
+3\xi(1-6\xi)h^{2}(\xi,a)a^{-3}
},
\label{osc}$$ where $h=x_{0}\cos{(\alpha_{\mathrm{osc}}\ln{a})}+\frac{3}{\alpha_{\mathrm{osc}}}\big(\tfrac{1}{2}x_{0}+\tfrac{1}{3}y_{0}\big)\sin{(\alpha_{\mathrm{osc}}\ln{a})}$, $g_{1}=2\xi\psi_{0}\Big((1-6\xi)h-4(1-3\xi)\big((x_{0}+y_{0})\cos{(\alpha_{\mathrm{osc}}\ln{a})}-\alpha_{\mathrm{osc}}x_{0}\sin{(\alpha_{\mathrm{osc}}\ln{a})}-\frac{3}{2\alpha_{\mathrm{osc}}}\big(\tfrac{1}{2}x_{0}+\tfrac{1}{3}y_{0}\big)\sin{(\alpha_{\mathrm{osc}}\ln{a})}\big)\Big)$, $g_{2}=\xi(1-6\xi)h^{2}-(1-2\xi)(1-6\xi)\big(y_{0}\cos{(\alpha_{\mathrm{osc}}\ln{a})}-\alpha_{\mathrm{osc}}x_{0}\sin{(\alpha_{\mathrm{osc}}\ln{a})}-\frac{9}{2\alpha_{\mathrm{osc}}}\big(\tfrac{1}{2}x_{0}+\tfrac{1}{3}y_{0}\big)\sin{(\alpha_{\mathrm{osc}}\ln{a})}\big)^{2}-4\xi(1-3\xi)\Big((x_{0}+y_{0})\cos{(\alpha_{\mathrm{osc}}\ln{a})}-\alpha_{\mathrm{osc}}x_{0}\sin{(\alpha_{\mathrm{osc}}\ln{a})}-\frac{3}{2\alpha_{\mathrm{osc}}}\big(\tfrac{1}{2}x_{0}+\tfrac{1}{3}y_{0}\big)\sin{(\alpha_{\mathrm{osc}}\ln{a})}\Big)^{2}$, where $\alpha_{\mathrm{osc}}=\frac{\sqrt{3}}{2}\sqrt{\frac{25\xi-3}{1-3\xi}}$ and $x_{0}$, $y_{0}$ and $\psi_{0}$ have their usual meaning.
Note that in neither case does a purely oscillating scenario exist.
This work has been supported by the Marie Curie Host Fellowships for the Transfer of Knowledge project COCOS (Contract No. MTKD-CT-2004-517186). The authors also acknowledge cooperation in the project PARTICLE PHYSICS AND COSMOLOGY: THE INTERFACE (Particles-Astrophysics-Cosmology Agreement for scientific collaboration in theoretical research).
---
abstract: 'We present a search for pair production of doubly-charged Higgs bosons in the processes $q\bar{q} \to \Hpp\Hmm$ decaying through $\Hpm \to \tau^{\pm}\tau^{\pm},\mu^{\pm}\tau^{\pm},\mu^{\pm}\mu^{\pm}$. The search is performed in $\ppbar$ collisions at a center-of-mass energy of $\sqrt{s}=1.96$ TeV using an integrated luminosity of up to $7.0$ fb$^{-1}$ collected by the D0 experiment at the Fermilab Tevatron Collider. The results are used to set $95\%$ C.L. limits on the pair production cross section of doubly-charged Higgs bosons and on their mass for different $\Hpm$ branching fractions. Models predicting different $\Hpm$ decays are investigated. Assuming $\BR(\Hpm\to\tau^{\pm}\tau^{\pm})=1$ yields an observed (expected) lower limit on the mass of a left-handed $\Hpm_L$ boson of 128 (116) GeV and assuming $\BR(\Hpm\to\mu^{\pm}\tau^{\pm})=1$ the corresponding limits are 144 (149) GeV. In a model with $\BR(\Hpm \to \tau^{\pm}\tau^{\pm})=\BR(\Hpm\to\mu^{\pm}\tau^{\pm})=\BR(\Hpm\to\mu^{\pm}\mu^{\pm})=1/3$, we obtain $M(\Hpm_L)>130~(138)$ GeV.'
title: 'Search for doubly-charged Higgs boson pair production in $\boldsymbol\ppbar$ collisions at $\boldsymbol{\sqrt{s}=1.96}$ TeV'
---
Doubly-charged Higgs bosons ($\Hpm$) appear in models with an extended Higgs sector such as the Little Higgs model [@bib-littleH], left-right symmetric models [@bib-LRsym], and in models with [*SU$(3)_c\times$SU$(3)_L\times$U$(1)_Y$*]{} (3-3-1) gauge symmetry [@bib-331].
The $\Hpm$ bosons could be pair-produced and observed at a hadron collider through the process $q\bar{q} \rightarrow Z/\gamma^* \rightarrow H^{++} H^{--}\to{\ell}^{+}{\ell'}^{+}{\ell}^{-}{\ell'}^{-}$ ($\ell,\ell'=e,\mu,\tau$). Single production of $\Hpm$ bosons through $W$ exchange, leading to $\Hpm H^{\mp}$ final states, is not considered in this Letter to reduce the model dependency of the results [@bib-single]. Some models favor a mass of the $\Hpm$ boson at the electroweak scale [@bib-mass]. The decay into like-charge lepton pairs violates lepton flavor number conservation. The decays $\Hpm\to\tau^{\pm}\tau^{\pm}$ are predicted to dominate in some scenarios, such as the 3-3-1 model of Ref. [@bib-scalar]. In a Higgs triplet model that is based on a seesaw neutrino mass mechanism, a normal hierarchy of neutrino masses leads to approximately equal branching fractions for $\Hpm$ boson decays to $\tau\tau$, $\mu\tau$, and $\mu\mu$, if the mass of the lightest neutrino is less than $10$ meV [@bib-seesaw]. In this Letter, we present the first comparison of data with this model and the first search for $\Hpm\to\tau^{\pm}\tau^{\pm}$ decays at a hadron collider.
In left-right symmetric models, right-handed states ($\HpmR$) appear in addition to left-handed states ($\HpmL$). They are characterized through their coupling to right-handed and left-handed fermions, respectively. The cross section for production of right-handed $\HppR\HmmR$ pairs is about a factor of 2 smaller than for $\HppL\HmmL$ because of the different coupling to the $Z$ boson [@bib-spira]. The mass limits for $\HpmR$ bosons therefore tend to be weaker than for $\HpmL$ bosons.
Searches for production of $\Hpm$ bosons have been performed previously at the CERN $e^+e^-$ Collider (LEP) [@bib-lep] and at the DESY $ep$ Collider (HERA) [@bib-hera]. Limits on the mass of the $\Hpm$ boson were obtained in the range of $95-100$ GeV, depending on the flavor of the final state leptons. The OPAL and H1 Collaborations searched for single $\Hpm$ production in the processes $e^+e^-\to e^{\mp}e^{\mp}\Hpm$ [@bib-opal] and $e^{\pm}p\to {\ell}^{\mp}\Hpm p$ [@bib-hera], and through the study of Bhabha scattering $e^+e^-\to e^+e^-$ [@bib-opal], constraining the $\Hpm$ boson’s Yukawa couplings $h_{ee}$ to electrons. Bounds on decays such as $\tau\to 3 \mu$ or $\mu\to e\gamma$ and the measured $(g-2)_\mu$ also constrain different $h_{\ell\ell'}$ [@bib-moha1]. At the Fermilab Tevatron Collider, the D0 and CDF Collaborations published limits for $\mu\mu$, $ee$, $e\tau$, and $\mu\tau$ final states in the range $M(\HpmL)>112-150$ GeV, assuming $100\%$ decays into the specified final state [@bib-d0hmm1; @bib-d0hmm2; @bib-cdf1; @bib-cdf2].
The results in this Letter are based on data collected with the D0 detector at the Fermilab Tevatron Collider and correspond to an integrated luminosity of up to $7.0$ fb$^{-1}$. The D0 detector [@d0det] comprises tracking detectors and calorimeters. Silicon microstrip detectors and a scintillating fiber tracker are used to reconstruct charged particle tracks within a $2$ T solenoid. The uranium and liquid-argon calorimeters used to measure particle energies consist of electromagnetic (EM) and hadronic sections. Muons are identified by combining tracks in the central tracker with patterns of hits in the muon spectrometer. Events are required to pass triggers that select at least one muon candidate.
  
All background processes are simulated using Monte Carlo (MC) event generators, except the multijet background, which is determined from data. The $W$+jet, $Z/\gamma^{*}\to {\ell^+\ell^-}$, and $t\bar{t}$ processes are generated using [alpgen]{} [@bib-alpgen] with showering and hadronization provided by [pythia]{} [@bib-pythia]. Diboson production ([*WW*]{}, [*WZ*]{}, and [*ZZ*]{}) and signal events are simulated using [pythia]{}. The signal samples for the model with equal branching ratios for the decays $\Hpm\to\tau^{\pm}\tau^{\pm}$, $\mu^{\pm}\mu^{\pm}$, and $\mu^{\pm}\tau^{\pm}$ are generated using Yukawa couplings $h_{\mu\tau}=h_{\tau\mu}=\sqrt{2}h_{\tau\tau}=\sqrt{2}h_{\mu\mu}$. The tau lepton decays are simulated with [tauola]{} [@bib-tauola], which includes a full treatment of the tau polarization. All MC samples are processed through a [geant]{} [@bib-geant] simulation of the detector. Data from random beam crossings are overlaid on MC events to account for detector noise and additional $\ppbar$ interactions. The simulated distributions are corrected for the dependence of the trigger efficiency in data on the instantaneous luminosity and for differences between data and simulation in the reconstruction efficiencies and in the distribution of the longitudinal coordinate of the interaction point along the beam direction. Next-to-leading order (NLO) quantum chromodynamics calculations of cross sections are used to normalize the signal and the background contribution of diboson processes, and next-to-NLO calculations are used for all other processes.
Two types of tau lepton decays into hadrons ($\tau_h$) are identified by their signatures: Type-1 tau candidates consist of a calorimeter cluster with one associated track and no subcluster in the EM section of the calorimeter. This signature corresponds mainly to $\tau^{\pm} \rightarrow \pi^{\pm} \nu$ decays. For type-2 tau candidates, an energy deposit in the EM calorimeter is required in addition to the type-1 signature, as expected for $\tau^{\pm} \rightarrow \pi^{\pm} \pi^{0} \nu$ decays. The outputs of neural networks, one for each tau type, designed to discriminate $\tau_h$ from jets, must satisfy [*NN*]{}$_{\tau}>0.75$ [@d0-z-tautau]. Their input variables are based on isolation variables for objects and on the spatial distribution of showers. The tau lepton energy is measured with the calorimeter.
We select events with at least one muon and at least two $\tau_h$ candidates. The muons must be isolated, both in the tracking detectors and in the calorimeters. Each event must have a reconstructed $\ppbar$ interaction vertex with a longitudinal component located within $60$ cm of the nominal center of the detector. The longitudinal coordinate $z_{\rm dca}$ of the distance of closest approach for each track is measured with respect to the nominal center of the detector. The differences between $z_{\rm dca}$ of the highest-$p_T$ muon and the two highest-$p_T$ $\tau_h$ (labeled $\tau_1$ and $\tau_2$), must be less than $2$ cm. The pseudorapidity [@eta] of the selected muons, $\tau_1$, and $\tau_2$ must be $|\eta^{\mu}| < 1.6$ and $|\eta^{\tau_{1,2}}| < 1.5$, respectively, and for additional $\tau_h$ candidates we require $|\eta^{\tau}| < 2$. The transverse momenta must be $p_T^{\mu}>15$ GeV and $p_T^{\tau_{1,2}}>12.5$ GeV. All selected $\tau_h$ candidates and muons are required to be separated by $\Delta {\cal R}_{\mu\tau} > 0.5$, where $\Delta{\cal R}=\sqrt{(\Delta\phi)^2+(\Delta\eta)^2}$ and $\phi$ is the azimuthal angle, and the two leading $\tau_h$ must be separated by $\Delta {\cal R}_{\tau_1\tau_2} > 0.7$. The sum of the charges of the highest-$p_T$ muon, $\tau_1$, and $\tau_2$ is required to be $Q=\sum_{i=\mu,\tau_1,\tau_2} q_i=\pm 1$ as expected for signal. After all selections, the main background is from diboson production and $Z\to\tau^+\tau^-$, where an additional jet mimics a lepton.
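The separation variable used above is the standard $\Delta {\cal R}$; a minimal sketch (ours), with the azimuthal difference wrapped into $[-\pi,\pi]$:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation Delta R = sqrt(dphi^2 + deta^2), with the
    azimuthal difference wrapped into [-pi, pi] so that two objects on
    either side of phi = 0 are still counted as close."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    deta = eta1 - eta2
    return math.hypot(dphi, deta)
```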
------------------------ ---------------- --------------------------- ---------------------------- --------------- ---------------
                          All              $N_{\mu}=1$, $N_{\tau}=2$,  $N_{\mu}=1$, $N_{\tau}=2$,   $N_{\tau}=3$    $N_{\mu}=2$
                                           $q_{\tau_1}=q_{\tau_2}$     $q_{\tau_1}=-q_{\tau_2}$
 Signal
 $\tau^{\pm}\tau^{\pm}$   $6.6 \pm 0.9$    $1.4 \pm 0.2$               $3.1 \pm 0.4$                $1.6\pm 0.2$    $0.4\pm 0.1$
 $\mu^{\pm}\tau^{\pm}$    $13.9 \pm 1.9$   $0.3 \pm 0.1$               $6.8 \pm 0.9$                $0.4\pm 0.1$    $6.3\pm 0.9$
 Equal $\BR$              $9.5 \pm 1.3$    $2.5 \pm 0.3$               $3.1 \pm 1.0$                $1.2\pm 0.2$    $2.6\pm 0.4$
 Background
 $Z\to\tau^+\tau^-$       $8.2 \pm 1.1$    $3.4 \pm 0.5$               $4.8 \pm 0.7$                $<0.1$          $<0.1$
 $Z\to \mu^+\mu^-$        $5.1 \pm 0.7$    $2.2 \pm 0.3$               $2.5 \pm 0.4$                $0.1 \pm 0.1$   $0.2 \pm 0.1$
 $Z\to e^+e^-$            $0.3 \pm 0.1$    $<0.1$                      $0.3 \pm 0.1$                $<0.1$          $<0.1$
 $W$ + jets               $2.9 \pm 0.4$    $1.1 \pm 0.2$               $1.8 \pm 0.3$                $<0.1$          $<0.1$
 $t\bar{t}$               $0.6 \pm 0.1$    $0.3 \pm 0.1$               $0.3 \pm 0.1$                $0.1 \pm 0.1$   $<0.1$
 Diboson                  $10.5 \pm 1.7$   $0.5 \pm 0.1$               $8.5 \pm 1.4$                $0.4 \pm 0.1$   $1.1\pm0.2$
 Multijet                 $<0.8$           $<0.2$                      $<0.5$                       $<0.1$          $<0.1$
 Background sum           $27.6 \pm 4.9$   $7.5 \pm 1.2$               $18.2 \pm 3.3$               $0.6 \pm 0.1$   $1.3 \pm 0.2$
 Data                     $22$             $5$                         $15$                         $0$             $2$
------------------------ ---------------- --------------------------- ---------------------------- --------------- ---------------
: Numbers of events in data, predicted background, and expected signal for $M(\HpmL)=120$ GeV, assuming the NLO calculation of the signal cross section for $\BR(\HpmL\to\tau^{\pm}\tau^{\pm})=1$, $\BR(\HpmL\to\mu^{\pm}\tau^{\pm})=1$, and $\BR(\HpmL\to\tau^{\pm}\tau^{\pm})=\BR(\HpmL\to\mu^{\pm}\mu^{\pm})
=\BR(\HpmL\to\mu^{\pm}\tau^{\pm})=1/3$. The numbers are shown for the four samples separately, together with their total uncertainties.
\[tab-events\]
We estimate the multijet background using three independent data samples and identical selections, except with the [*NN*]{}$_{\tau}$ requirements reversed, by requiring that either one or both $\tau_h$ candidates have [*NN*]{}$_{\tau}<0.75$. The simulated background is subtracted before the samples are used to determine the differential distributions and normalization of the multijet background in the signal region. A second method used to estimate the multijet background is based on the fact that events with $Q = \pm 1$ are signal-like, whereas events with $Q=\pm 3$ correspond largely to multijet background. To reduce the $W$+jets contribution in the sample with $Q=\pm 3$, the visible $W$ boson mass $M_W = \sqrt{2p^{\mu}\MET(1-\cos\phi)}$ is required to be $<50$ GeV, where $p^{\mu}$ is the muon momentum, $\MET$ the imbalance in transverse momentum measured in the calorimeter, and $\phi$ is the azimuthal angle between the muon and the direction of the $\MET$. The total rate of expected multijet background events following all selections is negligible ($<3\%$ of the total background). We also use the sample where both $\tau_h$ candidates have [*NN*]{}$_{\tau}<0.75$ to study the rate of jets that are falsely reconstructed as $\tau_h$ and we find this rate to be well modeled by the simulation.
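The visible $W$ boson mass used for the $Q=\pm 3$ control sample can be sketched as follows (an illustration, not the analysis code):

```python
import math

def visible_w_mass(p_mu, met, dphi):
    """Visible W mass M_W = sqrt(2 * p_mu * MET * (1 - cos(dphi))),
    where dphi is the azimuthal angle between the muon and the MET
    direction.  Events with M_W < 50 GeV are kept in the Q = +-3
    control sample to suppress the W+jets contribution."""
    return math.sqrt(2.0 * p_mu * met * (1.0 - math.cos(dphi)))
```

For a muon and $\MET$ of 40 GeV each, back to back in azimuth, this gives a $W$-like value of 80 GeV, well above the 50 GeV requirement.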
![(color online). Upper limit on the $\HpmL\HpmL$ pair production cross section for (a) $\BR(\HpmL\to\tau^{\pm}\tau^{\pm})=1$, (b) $\BR(\HpmL\to\mu^{\pm}\tau^{\pm})=1$, and (c) $\BR(\HpmL\to\tau^{\pm}\tau^{\pm})=\BR(\HpmL\to\mu^{\pm}\mu^{\pm})
=\BR(\HpmL\to\mu^{\pm}\tau^{\pm})=1/3$. The bands around the median expected limits correspond to regions of $\pm 1$ and $\pm 2$ standard deviation (s.d.), and the band around the predicted NLO cross section for signal corresponds to a theoretical uncertainty of $\pm 10\%$. []{data-label="fig-limits"}](fig4.eps "fig:"){width="42.00000%"}\
![(color online). Upper limit on the $\HpmL\HpmL$ pair production cross section for (a) $\BR(\HpmL\to\tau^{\pm}\tau^{\pm})=1$, (b) $\BR(\HpmL\to\mu^{\pm}\tau^{\pm})=1$, and (c) $\BR(\HpmL\to\tau^{\pm}\tau^{\pm})=\BR(\HpmL\to\mu^{\pm}\mu^{\pm})
=\BR(\HpmL\to\mu^{\pm}\tau^{\pm})=1/3$. The bands around the median expected limits correspond to regions of $\pm 1$ and $\pm 2$ standard deviation (s.d.), and the band around the predicted NLO cross section for signal corresponds to a theoretical uncertainty of $\pm 10\%$. []{data-label="fig-limits"}](fig5.eps "fig:"){width="42.00000%"}\
![(color online). Upper limit on the $\HpmL\HpmL$ pair production cross section for (a) $\BR(\HpmL\to\tau^{\pm}\tau^{\pm})=1$, (b) $\BR(\HpmL\to\mu^{\pm}\tau^{\pm})=1$, and (c) $\BR(\HpmL\to\tau^{\pm}\tau^{\pm})=\BR(\HpmL\to\mu^{\pm}\mu^{\pm})
=\BR(\HpmL\to\mu^{\pm}\tau^{\pm})=1/3$. The bands around the median expected limits correspond to regions of $\pm 1$ and $\pm 2$ standard deviation (s.d.), and the band around the predicted NLO cross section for signal corresponds to a theoretical uncertainty of $\pm 10\%$. []{data-label="fig-limits"}](fig6.eps "fig:"){width="42.00000%"}
To improve the discrimination of signal from background, the data are subdivided into four nonoverlapping samples, depending on the charges of the muon ($q_{\mu}$) and the $\tau_h$ candidates ($q_{\tau}$) and the number of muons ($N_{\mu}$) and $\tau_h$ ($N_{\tau}$) in the event. First, we define two samples for events with $N_{\mu}=1$ and $N_{\tau}= 2$. Because the two like-charge leptons are assumed to originate from a single $\Hpm$ decay, we consider separately events where both tau leptons have the same charge, $q_{\tau_1}=q_{\tau_2}$, and events with $\tau_1$ and $\tau_2$ of opposite charge, i.e., $q_{\tau_1}=-q_{\tau_2}$, which implies that one of the $\tau$ leptons and the muon have the same charge. The third sample is defined by $N_{\tau}=3$ and the fourth sample by $N_{\mu}= 2$, without any additional requirements on the charges.
The distributions of the invariant mass of the two leading tau candidates, $M(\tau_1,\tau_2)$, for the like and opposite-charge samples are shown in Figs. \[fig-kine\](a) and (b). The separation into samples with different fractions of signal and background events increases the sensitivity to signal, as the composition of the background is different, with the like-charge sample being dominated by background from $Z$+jets decays and the opposite-charge sample by background from diboson production. The diboson background is mainly due to [*WZ*]{} $\to\mu\nu e^+e^-$ events where the electrons are misidentified as tau leptons. In Fig. \[fig-kine\](c) we show the transverse momentum of the doubly-charged dilepton system, $p_T^H$, which corresponds to the reconstructed $\Hpm\to\ell^{\pm}\ell'^{\pm}$ decay, where $\ell^{\pm}\ell'^{\pm}=
(\mu^{\pm}\tau_1^{\pm},\mu^{\pm}\tau_2^{\pm},\tau_1^{\pm}\tau_2^{\pm})$ is the pairing of the two highest-$p_T$ $\tau_h$ and the highest-$p_T$ muon that have the same charges. Since $|Q|=1$, only one such pairing exists per event. The expected number of background and signal events for the four samples and the observed numbers of events in data are shown in Table \[tab-events\] with the statistical uncertainties of the MC samples and systematic uncertainties added in quadrature.
-------------------------------------------------------------- ---------- ---------- ---------- ----------
                                                                $\HpmL$               $\HpmR$
 Decay                                                          expected   observed   expected   observed
 ${\cal B}(\Hpm\to\tau^{\pm}\tau^{\pm})=1$                      $116$      $128$
 ${\cal B}(\Hpm\to\mu^{\pm}\tau^{\pm})=1$                       $149$      $144$      $119$      $113$
 Equal $\BR$ into
 $\tau^{\pm}\tau^{\pm},\mu^{\pm}\mu^{\pm},\mu^{\pm}\tau^{\pm}$  $130$      $138$
 ${\cal B}(\Hpm\to\mu^{\pm}\mu^{\pm})=1$                        $180$      $168$      $154$      $145$
-------------------------------------------------------------- ---------- ---------- ---------- ----------
: Expected and observed limits on $M(\Hpm)$ (in GeV) for left-handed and right-handed $\Hpm$ bosons. Only left-handed states exist in the model that assumes equality of branching fractions into $\tau\tau$, $\mu\tau$, and $\mu\mu$ final states. We only derive limits if the expected limit on $M(\Hpm)$ is $\ge 90$ GeV.
\[tab-limits\]
Since the data are well described by the background expectation, we determine limits on the $\Hpp\Hmm$ production cross section using a modified frequentist approach [@bib-wade]. A log-likelihood ratio (LLR) test statistic is formed using the Poisson probabilities for estimated background yields, the signal acceptance, and the observed number of events for different $\Hpm$ mass hypotheses. The confidence levels are derived by integrating the LLR distribution in pseudoexperiments using both the signal-plus-background ([*CL*]{}$_{s+b}$) and the background-only hypotheses ([*CL*]{}$_b$). The excluded production cross section is taken to be the cross section for which the confidence level for signal, [*CL*]{}$_s=$[*CL*]{}$_{s+b}/$[*CL*]{}$_b$, equals $0.05$. The $M(\tau_1,\tau_2)$ distribution is used to discriminate signal from background.
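For illustration only, the modified frequentist prescription described above can be sketched for a single counting bin. This is not the analysis code (which uses the full $M(\tau_1,\tau_2)$ distribution and systematic uncertainties); the function names and the toy Poisson model are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def llr(n, b, s):
    # LLR = -2 ln[L(n | s+b) / L(n | b)] for a single Poisson bin;
    # larger values are more background-like.
    return 2.0 * s - 2.0 * n * np.log(1.0 + s / b)

def cl_s(n_obs, b, s, n_pseudo=100_000):
    # Integrate the LLR distributions from pseudoexperiments under the
    # signal-plus-background and background-only hypotheses.
    llr_obs = llr(n_obs, b, s)
    cl_sb = np.mean(llr(rng.poisson(s + b, n_pseudo), b, s) >= llr_obs)
    cl_b = np.mean(llr(rng.poisson(b, n_pseudo), b, s) >= llr_obs)
    return cl_sb / max(cl_b, 1e-12)  # CL_s = CL_{s+b} / CL_b
```

A signal hypothesis is excluded at the $95\%$ C.L. when `cl_s(...)` falls below $0.05$.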
Systematic uncertainties on both background and signal, including their correlations, are taken into account. The theoretical uncertainties on the background cross sections for $Z/\gamma^{*}\to\ell^+\ell^-$, $W$+jets, $t\bar{t}$, and diboson production vary between $6\%$ and $10\%$. The uncertainty on the measured integrated luminosity is $6.1\%$ [@bib-lumi]. The systematic uncertainty on muon identification is $2.9\%$ per muon, and the uncertainty on the identification of $\tau_h$, including the uncertainty from applying a neural network to discriminate $\tau_h$ from jets, is $4\%$ for each type-1 and $7\%$ for each type-2 $\tau_h$ candidate. The trigger efficiency has a systematic uncertainty of $5\%$. The uncertainty on the signal acceptance from parton distribution functions is $4\%$.
In Fig. \[fig-limits\], the upper limits on the cross sections are compared to the NLO signal cross sections for $\HpmL\HpmL$ pair production [@bib-spira] for some of the branching ratios considered. The corresponding expected and observed limits are shown in Table \[tab-limits\].
The $\Hpm$ boson mass limits assuming $\BR(\Hpm\to\tau^{\pm}\tau^{\pm})+\BR(\Hpm\to\mu^{\pm}\mu^{\pm})=1$ are determined by combining signal samples generated with pure $4\tau$, $(2\tau/2\mu)$, and $4\mu$ final states with fractions $\BR^2$, $2\BR(1-\BR)$, and $(1-\BR)^2$, respectively, where $\BR\equiv\BR(\Hpm\to\tau^{\pm}\tau^{\pm})$. Here, we include in the limit setting the distribution of the invariant mass of the two highest-$p_T$ muons, including the systematic uncertainties and their correlations, from a search for $\Hpp\Hmm\to 4\mu$ decays performed by the D0 Collaboration in $1.1$ fb$^{-1}$ of integrated luminosity [@bib-d0hmm1]. The results are shown in Fig. \[fig-result\] for $\BR$ varying from $0$ to $100\%$ in steps of $10\%$. When performing this analysis, we found that the statistical uncertainties on the background simulations were overestimated in [@bib-d0hmm1]. A standard treatment of the uncertainties in the limit setting improves the mass limits for the $4\mu$ final state, as shown in Table \[tab-limits\].
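The combination above weights the three generated final states with binomial factors for the two pair-produced bosons. As a minimal sketch (function name and dictionary keys are ours, not the analysis code):

```python
def final_state_fractions(br_tt):
    # Weights of the generated 4tau, 2tau+2mu and 4mu signal samples when
    # B(H -> tau tau) = br_tt and B(H -> mu mu) = 1 - br_tt for each boson.
    return {
        "4tau":    br_tt ** 2,
        "2tau2mu": 2.0 * br_tt * (1.0 - br_tt),
        "4mu":     (1.0 - br_tt) ** 2,
    }
```

The three fractions sum to one for any branching fraction, as required for a consistent combination.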
![(color online). Expected and observed exclusion region at the $95\%$ C.L. in the plane of $\BR(\Hpm\to\tau^{\pm}\tau^{\pm})$ versus $M(\Hpm)$, assuming $\BR(\Hpm\to\tau^{\pm}\tau^{\pm})+\BR(\Hpm\to\mu^{\pm}\mu^{\pm})=1$, for (a) left-handed and (b) right-handed $\Hpm$ bosons. The band around the expected limit represents the uncertainty on the NLO calculation of the cross section for signal. []{data-label="fig-result"}](fig7.eps "fig:"){width="42.00000%"} ![(color online). Expected and observed exclusion region at the $95\%$ C.L. in the plane of $\BR(\Hpm\to\tau^{\pm}\tau^{\pm})$ versus $M(\Hpm)$, assuming $\BR(\Hpm\to\tau^{\pm}\tau^{\pm})+\BR(\Hpm\to\mu^{\pm}\mu^{\pm})=1$, for (a) left-handed and (b) right-handed $\Hpm$ bosons. The band around the expected limit represents the uncertainty on the NLO calculation of the cross section for signal. []{data-label="fig-result"}](fig8.eps "fig:"){width="42.00000%"}
In summary, we have performed the first search at a hadron collider for pair production of doubly-charged Higgs bosons decaying exclusively into tau leptons. We set an observed (expected) lower limit of $M(\HpmL) > 128~(116)$ GeV for a $100\%$ branching fraction of $\Hpm\to\tau^{\pm}\tau^{\pm}$, $M(\HpmL) > 144~(149)$ GeV for a $100\%$ branching fraction into $\mu\tau$, and $M(\HpmL) > 130~(138)$ GeV for a model with equal branching ratios into $\tau\tau$, $\mu\tau$, and $\mu\mu$. These are the most stringent limits on $\Hpm$ boson masses in these decay channels.
[99]{}
N. Arkani-Hamed [*et al.*]{}, J. High Energy Phys. [**08**]{}, 021 (2002).
R.N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. [**44**]{}, 912 (1980); R.N. Mohapatra and G. Senjanovic, Phys. Rev. D [**23**]{}, 165 (1981); J.F. Gunion [*et al.*]{}, Phys. Rev. D [**40**]{}, 1546 (1989); N.G. Deshpande [*et al.*]{}, Phys. Rev. D [**44**]{}, 837 (1991).
J.E. Cieza Montalvo [*et al.*]{}, Nucl. Phys. [**B756**]{}, 1 (2006); erratum-ibid. [**B796**]{}, 422 (2008).
A.G. Akeroyd and M. Aoki, Phys. Rev. D [**72**]{}, 035011 (2005).
C.S. Aulakh, A. Melfo, and G. Senjanovic, Phys. Rev. D [**57**]{}, 4174 (1998); Z. Chacko and R. N. Mohapatra, Phys. Rev. D [**58**]{}, 015003 (1998).
E. Ramirez Barreto, Y.A. Coutinho, and J. Sá Borges, Phys. Rev. D [**83**]{}, 075001 (2011).
M. Kadastik, M. Raidal, and L. Rebane, Phys. Rev. D [**77**]{}, 115023 (2008); A. Hektor [*et al.*]{}, Nucl. Phys. [**B787**]{}, 198 (2007).
M. Mühlleitner and M. Spira, Phys. Rev. D [**68**]{}, 117701 (2003) and private communications.
G. Abbiendi [*et al.*]{} (OPAL Collaboration), Phys. Lett. B [**526**]{}, 221 (2002); P. Achard [*et al.*]{} (L3 Collaboration), Phys. Lett. B [**576**]{}, 18 (2003); J. Abdallah [*et al.*]{} (DELPHI Collaboration), Phys. Lett. B [**552**]{}, 127 (2003).
A. Aktas [*et al.*]{} (H1 Collaboration), Phys. Lett. B [**638**]{}, 432 (2006).
G. Abbiendi [*et al.*]{} (OPAL Collaboration), Phys. Lett. B [**577**]{}, 93 (2003).
J.F. Gunion [*et al.*]{}, Phys. Rev. D [**40**]{}, 1546 (1989); R.N. Mohapatra, Phys. Rev. D [**46**]{}, 2990 (1992).
V.M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Rev. Lett. [**101**]{}, 071803 (2008).
V.M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Rev. Lett. [**93**]{}, 141801 (2004).
D. Acosta [*et al.*]{} (CDF Collaboration), Phys. Rev. Lett. [**93**]{}, 221802 (2004).
T. Aaltonen [*et al.*]{} (CDF Collaboration), Phys. Rev. Lett. [**101**]{}, 121801 (2008).
V.M. Abazov [*et al.*]{} (D0 Collaboration), Nucl. Instrum. Methods Phys. Res. A [**565**]{}, 463 (2006); M. Abolins [*et al.*]{}, Nucl. Instrum. Methods in Phys. Res. A [**584**]{}, 75 (2008); R. Angstadt [*et al.*]{}, Nucl. Instrum. Methods in Phys. Res. A [**622**]{}, 298 (2010).
M.L. Mangano [*et al.*]{}, J. High Energy Phys. [**07**]{}, 1 (2003); we use version 2.11.
T. Sjöstrand, S. Mrenna, and P. Skands, J. High Energy Phys. [**05**]{}, 026 (2006); we use version 6.323.
Z. Wąs, Nucl. Phys. B Proc. Suppl. [**98**]{}, 96 (2001); we use version 2.5.04.
R. Brun and F. Carminati, CERN Program Library Long Writeup W5013, 1993.
V.M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Rev. D [**71**]{}, 072004 (2005); erratum-ibid. D [**77**]{}, 039901 (2008).
The pseudorapidity is defined as $\eta = -\ln[\tan(\theta/2)]$, where $\theta$ is the polar angle with respect to the proton beam direction.
W. Fisher, FERMILAB-TM-2386-E (2006).
T. Andeen [*et al.*]{}, FERMILAB-TM-2365 (2007).
---
abstract: 'Quantum-limited amplifiers increase the amplitude of quantum signals at the price of introducing additional noise. Quantum purification protocols operate in the reverse way, reducing the noise while attenuating the signal. Here we investigate a scenario that interpolates between these two extremes. We search for the optimal physical process that generates $M$ approximate copies of a pure, amplified coherent state, starting from $N$ copies of a noisy coherent state with Gaussian modulation. We prove that the optimal deterministic processes are always Gaussian, whereas non-Gaussian operations provide probabilistic advantages in suitable parameter regimes. The optimal processes are experimentally feasible, both in the deterministic and in the probabilistic scenario. In view of this fact, we provide benchmarks that can be used to certify the experimental demonstration of quantum-enhanced amplification and purification of coherent states.'
author:
- 'Xiaobin Zhao$^{1,2}$ and Giulio Chiribella$^{1,2,3}$'
bibliography:
- 'GaussianAmplification.bib'
title: Quantum amplification and purification of noisy coherent states
---
Introduction
============
Coping with noise is a fundamental problem in quantum communication networks, where the quality of the communication is often affected by imperfections in the transmission line, by the presence of eavesdroppers, and by the use of non-ideal repeaters. Various techniques have been developed to fight noise: error correction codes focus on preventing noise [@shor-1st-error-corr; @QEC-meas; @err-corr-2015-rmp], while purification techniques can be used to distill cleaner resources from noisy systems [@qubit-purification; @duan-entpuri; @pan-entpuri; @fiurasek; @puri-exp-coherent-state; @puri-prob-coherent-state]. Purification techniques are crucial to quantum repeaters [@QR-QP1; @QR-QP2; @QR-QP3], where they can be used to enhance the quality of quantum communication at the price of a reduced rate.
Another fundamental primitive in quantum optics is the amplification of quantum signals [@caves1982amplification]. Here the task is to increase the amplitude of a weak signal, in order to make it more easily detectable. This task cannot be achieved perfectly, because a perfect amplification would violate fundamental quantum principles, such as Heisenberg’s uncertainty and the no-signalling principle. The price for amplification is an increased level of noise, which manifests itself in the form of increased fluctuations of the canonical quadratures. Still, the price may be worth paying when the original signal is so weak that it would be hard to detect otherwise. The amplification of quantum signals encoded in pure states has been studied in depth, both theoretically [@ralph-lund-nondeterministic; @namiki2011fundamental; @caves2012quantumlimits; @chiribellaxie; @caves2013quantumlimits; @caves2014noise; @namiki2015amplification; @caves2016models] and experimentally [@ferreyrol2010implementation; @heralded-amp; @usuga2010noise; @noiseless-amplifier; @kocsis2013heralded; @bruno2013complete]. Comparatively little is known, however, about the case of states that have been subject to noise *before* the amplification process. The goal of this paper is to identify the quantum processes that achieve the best amplification performance and to give criteria for assessing their experimental demonstration.
We will focus our attention on the amplification of noisy coherent states, generated from pure coherent states through the action of Gaussian additive noise. Pure coherent states have been successfully used in modeling a large number of physical systems [@coherentrmp], including the electromagnetic field, vibrational modes of solids, atomic ensembles, nuclear spins in a quantum dot, and Bose-Einstein condensates. They are important in continuous variable quantum protocols [@rmp1; @rmp2], such as quantum key distribution [@qkd1; @qkd2; @qkd3; @qkd4; @qkd5], cloning [@fiurasek; @braunstein2001clonecoh; @nonGau-opt; @andersen2005clonecoh; @namiki2006optimal], and quantum teleportation [@CV-tele]. In all these applications, the coherent state amplitude is Gaussian-modulated, meaning that the coherent state $|\alpha\> = \sum_n \, e^{-|\alpha|^2/2} \, \alpha^n |n\>/\sqrt{n!}$ is generated with probability $$\begin{aligned}
\label{prob} p_\lambda (d^2 \alpha) = \lambda e^{-\lambda |\alpha|^2} ~ \frac{ d^2 \alpha}\pi \, .\end{aligned}$$ Upon the action of the Gaussian additive noise, pure coherent states are transformed into displaced thermal states. Specifically, the input coherent state $|\alpha\>$ is transformed into the displaced thermal state $$\begin{aligned}
\rho_{\mu,\alpha} = \int \frac{d^2\beta}\pi \, \mu \, e^{-\mu |\beta|^2} \, |\alpha+ \beta\>\<\alpha+ \beta| \, .
\end{aligned}$$ In this paper we consider the scenario where $N$ input modes are independently prepared in the state $\rho_{\mu,\alpha}$, a common situation in several experiments, as copies of the same input coherent state can be generated with standard techniques of phase locking [@drever1983laser; @wiseman1993feedback; @wiseman-feedback-book]. Given the $N$ input modes, we will search for the physical process that produces $M$ output modes in the best possible approximation of the pure amplified state $|g \alpha\>^{\otimes M}$, where $g \ge 1$ is the amplifier’s gain. The general problem considered in this paper includes as special cases the problems of coherent state amplification ($M=N=1$, $g>1$, $\mu \to \infty$) [@caves1982amplification; @ralph-lund-nondeterministic; @namiki2011fundamental; @caves2012quantumlimits; @chiribellaxie; @caves2013quantumlimits; @caves2014noise; @namiki2015amplification; @caves2016models; @ferreyrol2010implementation; @heralded-amp; @usuga2010noise; @noiseless-amplifier; @kocsis2013heralded; @bruno2013complete], cloning ($M\ge N$, $g=1$, $\mu \to \infty$) [@cerf2000cloning; @braunstein2001clonecoh; @nonGau-opt; @andersen2005clonecoh; @namiki2006optimal], purification of noisy coherent states ($M\le N$, $g=1$, and $\mu<\infty$) [@puri-exp-coherent-state; @puri-prob-coherent-state]. The problem of joint amplification and purification is similar in spirit to the task of superbroadcasting (corresponding to $M \ge N$, $g=1$, $\mu< \infty$) [@superbroadcasting-2; @superbroadcasting-3], with the difference that superbroadcasting optimizes the fidelity of the individual output modes, whereas in our problem we will focus on the global fidelity, quantifying how much the output modes globally resemble $M$ copies of the target state $|g\alpha\>$.
We will consider both deterministic and probabilistic processes. In deterministic processes the output is generated with unit probability, while in probabilistic processes there is a non-zero chance of discarding the output. Probabilistic processes are interesting in that they can achieve enhanced performance in a variety of quantum tasks, including amplification [@ralph-lund-nondeterministic; @chiribellaxie; @caves2013quantumlimits; @namiki2015amplification; @caves2016models; @ferreyrol2010implementation; @heralded-amp; @usuga2010noise; @noiseless-amplifier; @kocsis2013heralded; @bruno2013complete], cloning [@fiurasek], and estimation [@fiurasek-probabilistic-estimation]. While the probability of success can sometimes be unrealistically small, probabilistic processes are conceptually important because they provide the ultimate limits of what is possible in quantum mechanics. The study of probabilistic protocols is even more relevant when it comes to evaluating realistic experiments on quantum amplification. In this scenario, the key question is whether the experiment conclusively demonstrates the use of quantum resources, such as entanglement or coherence. More specifically, one wants to know whether the results of the experiment could be simulated by measuring the input systems, performing a classical computation, and generating the output systems in a quantum state that depends on the outcome of the computation. Strategies of this kind are called [*measure-and-prepare (M\&P) protocols*]{}, [*entanglement-breaking channels*]{}, or also [*classical strategies*]{}. The performance of the best M\&P channel is the benchmark that needs to be surpassed in order to claim genuine quantum processing. In this scenario, probabilistic M\&P protocols provide the strictest criterion of quantumness, because they certify that the results of the experiment could not be simulated without quantum resources, even with arbitrarily small probability.
In this paper, we provide the complete solution to the problem of optimal amplification of noisy coherent states, identifying both the optimal quantum strategies and the corresponding quantum benchmarks. As an optimality criterion, we adopt the fidelity between the output and the ideal target of the amplification process. It is worth stressing that the target output is a pure state, and therefore the fidelity has a direct operational interpretation as the probability of passing a test set up by a verifier [@braunstein2000criteria; @yang2014certifying]. The highest fidelity achievable by any M&P protocol is called the [*classical fidelity threshold (CFT)*]{}. Classical fidelity thresholds have been extensively studied for processes involving pure states [@hammerer-prl-2005; @adesso-chiribella2008; @polzik-squeezingthelimit; @namiki2008fidelity; @bagan2009benchmark; @namiki2011simple; @chiribellaxie; @chiribellaadesso2014; @namiki2015quantum]. However, very little is known about the case where the states are mixed and the existing results are limited to the teleportation and storage of single-mode quantum states [@polzik-squeezingthelimit; @adesso-chiribella2008]. In this paper, we derive the quantum benchmarks for amplification-purification of displaced thermal states, along with the complete characterization of the optimal quantum strategies, both in the deterministic and in the probabilistic setting.
The paper is structured as follows: in Section \[sec:amplipuri\] we formulate the problem of joint amplification and purification of noisy coherent states, giving a reduction to a single-mode problem. When the single-mode problem involves amplification, the optimal deterministic strategy is presented in Section \[sec:optdet\], while the optimal probabilistic strategy is presented in Section \[sec:probopt\]. When the single-mode problem does not involve amplification, we show that deterministic and probabilistic strategies perform equally well (Section \[sec:puri\]). The quantum benchmarks for amplification and purification of noisy coherent states are provided in Section \[sec:benchmark\]. Finally, the conclusions are drawn in Section \[sec:conclusion\].
Formulation of the problem {#sec:amplipuri}
==========================
Here we consider the joint amplification and purification of noisy coherent states by means of deterministic processes. Our goal is to identify the best quantum channel that maps the input state $\rho_{\mu,\alpha}^{\otimes N}$ into the target state $|g\alpha\>^{\otimes M}$, where the coherent amplitude $\alpha$ is Gaussian-distributed.
Figure of merit
---------------
As a figure of merit, we adopt the [*global fidelity*]{} between the output of the channel and the target state. In formula, $$\begin{aligned}
\label{globalfid}
F^{\rm det}_{N\to M} (\alpha) = \<g\alpha |^{\otimes M}{\mathcal{C}}\left(\rho_{\mu,\alpha}^{\otimes N}\right) |g\alpha\>^{\otimes M} \, ,\end{aligned}$$ where ${\mathcal{C}}$ is a quantum channel (completely positive trace-preserving map), sending states on the input Hilbert space ${\mathcal{H}} _{in} = {\mathcal{H}}^{\otimes N}$ to the output Hilbert space ${\mathcal{H}} _{out } = {\mathcal{H}}^{\otimes M}$, ${\mathcal{H}}$ being the Hilbert space associated with each mode. The map ${\mathcal{C}}$ describes the input-output transformation occurring in the amplification process.
Operationally, the global fidelity is the probability of passing a test set up by a verifier who [*(i)*]{} knows the value of $\alpha$, and [*(ii)*]{} has access to the $M$ output modes. Alternative choices of figure of merit are the *single-copy fidelity* [@keyl1999optimal]—corresponding to the probability of passing a test where the verifier knows $\alpha$ but has access to a single output mode—or the trace distance [@nielsen2002quantum]—corresponding to the probability of passing a test where the verifier tries to distinguish between the channel output ${\mathcal{C}} \left( \rho_{\mu,\alpha}^{\otimes N} \right)$ and the target state $|g\alpha\>^{\otimes M}$. Note that the operational interpretation of the trace distance presupposes that the verifier knows the channel ${\mathcal{C}}$, in addition to the value of $\alpha$. In the context of this paper, the fidelity is a more convenient choice because it can be used to define benchmarks in the scenario where the channel ${\mathcal{C}}$ is unknown.
To identify the optimal quantum process, we will look for the channel ${\mathcal{C}}$ that maximizes the average fidelity $$\begin{aligned}
\label{Fmultimode}
F_{N\to M,g}^{\rm det}=\int \frac{d^2 \alpha} \pi \,\, p_{\lambda}(\alpha) ~\<g\alpha |^{\otimes M} \, {\mathcal{C}}\left(\rho_{\mu,\alpha}^{\otimes N}\right) \, |g\alpha\>^{\otimes M} \, .\end{aligned}$$ Due to the symmetry of the problem, the average fidelity includes as a special case the *worst-case fidelity*, which can be obtained in the limit $\lambda \to 0$.
When carrying out the optimization, we will make no assumptions on the channel ${\mathcal{C}}$. In particular, we will not assume that ${\mathcal{C}}$ is Gaussian or covariant. The only requirement—implicit in the fact that ${\mathcal{C}}$ is trace-preserving—is that the amplification process happens *deterministically*, meaning that an output is produced with unit probability.
Reduction to single-mode
------------------------
The first step towards the solution of the problem is the reduction to the single-mode scenario $M=N=1$. The reduction is implemented by invertible transformations on the input and the output: specifically, one has $$\begin{aligned}
|g \alpha \>^{\otimes M}
\quad \overset{U}{\longrightarrow} \quad & |g \sqrt M \alpha \>\otimes |0\>^{\otimes (M-1)} \\
\label{multimodemixed} \rho_{\mu, \alpha}^{\otimes N} \quad \overset{U \cdot U^\dag}{\longrightarrow} \quad & \rho_{\mu,\sqrt N \, \alpha} \otimes \rho_{\mu}^{\otimes {N-1}}
\, ,\end{aligned}$$ where $U$ is the Fourier transform of the modes, implementable through a network of beamsplitters [@reck1994experimental]. Through this mapping, the multimode problem is converted into a single-mode problem, with the substitutions $$\begin{aligned}
\label{substitutions}
g \to g' = g \sqrt{M/N} \qquad {\rm and} \qquad \lambda \to \lambda' = \lambda/N \, ,\end{aligned}$$ to be made in the target state for the process and in the probability distribution (\[prob\]), respectively. The multimode fidelity (\[Fmultimode\]) is then reduced to the single-mode fidelity $$\begin{aligned}
\label{Fsinglemode}
F_{g'}^{\rm det}=\int \frac{d^2 \alpha} \pi \,\, p_{\lambda'}(\alpha) ~\<g'\alpha | \, {\mathcal{C}}'\left(\rho_{\mu,\alpha}\right) \, |g'\alpha\> \, ,\end{aligned}$$ where ${\mathcal{C}}'$ is the quantum channel implementing the reduced, single-mode transformation.
At this point, the problem is to find the process that maximizes the single-mode fidelity (\[Fsinglemode\]). A convenient approach is the semidefinite programming method developed in Ref. [@chiribellaxie], which gives an explicit expression for the optimal fidelity, denoted by $F^{\rm det*}_{g'}$. Precisely, we have $$\begin{aligned}
\label{1}
F^{\rm det*}_{g'} = \inf_{\sigma} \left\| \int \frac{d^2 \alpha}{\pi} p_{\lambda'} (\alpha) |g' \alpha\>\< g' \alpha| \otimes \sigma^{- \frac12} \rho_{\mu,\overline \alpha} \sigma^{-\frac 12}\right\|_{\infty} \, ,\end{aligned}$$ where $\sigma$ is an arbitrary quantum state of the input mode and $\| A \|_{\infty} = \sup_{\| |\psi\> \|=1} \, \| A \, |\psi\>\|$ is the operator norm of the operator $A$.
Optimal deterministic processes: the $g'\ge 1$ case {#sec:optdet}
===================================================
We start from the case $g'\ge 1$, which we call the *amplification regime*. The name is motivated by the fact that, in the single-mode setting, the output states of the optimal quantum process are more mixed than the input states, meaning that amplification dominates over purification. The situation is more nuanced in the multimode setting, as we will see at the end of this section.
For $g'\ge 1$, explicit calculation of the optimal deterministic fidelity (\[1\]) gives the value $$\begin{aligned}
\label{det-opt}
F^{\rm det*}_{g'\ge 1 }&=\begin{cases}
\dfrac{N_{\rm C}+N_{\rm T}+1}{g'^2N_{\rm C}(N_{\rm T}+1)},& g'\ge \frac{N_{\rm C}+N_{\rm T}+1}{N_{\rm C}} \\
&\\
\frac{1}{(g'-1)^2N_{\rm C}+N_{\rm T}+1},& g'< \frac{N_{\rm C}+N_{\rm T}+1}{N_{\rm C}} .
\end{cases}\end{aligned}$$ where $N_{\rm C}= 1/{\lambda'} = N/\lambda$ is the expected number of photons in the signal and $N_{\rm T}=1/\mu$ is the expected number of photons added by thermal noise. The details of the derivation are presented in Appendices \[app:optimalfidelity\] and \[app:twomodesqueeze\].
The optimal fidelity can be attained with standard quantum optics techniques. For the single-mode problem, the optimal deterministic strategy is to couple the input mode with an ancillary mode, initially in the vacuum, so that the two modes undergo a two-mode squeezing transformation. Eventually, the ancillary mode is discarded. Mathematically, this sequence of operations is described by the quantum channel $$\begin{aligned}
\mathcal{C}_r(\rho)=\mathrm{Tr}_{B} \left[e^{r(a^{\dag}b^{\dag}-ab)}(\rho\otimes|0\rangle\langle0|)e^{-r(a^{\dag}b^{\dag}-ab)} \right] \, ,\end{aligned}$$ where $a$ and $b$ are the annihilation operators of the input mode and the ancillary mode, respectively, and ${\operatorname{Tr}}_B$ is the partial trace over the ancillary mode. In order to achieve the maximum fidelity the squeezing parameter $r$ must be tuned to satisfy the condition $$\begin{aligned}
\label{cosh}\cosh r =\frac{g' N_{\rm C}}{1+N_{\rm C}+N_{\rm T}}\end{aligned}$$ when $g'\geq (N_{\rm C}+N_{\rm T}+1)/N_{\rm C}$ and $r=0$ otherwise (see Appendix \[app:twomodesqueeze\]). Note that the choice $r=0$ corresponds to the identity channel: when the expected number of thermal photons is sufficiently large, the optimal strategy is just to leave the input state untouched. In the multimode scenario, this means that the best amplification setup for high-temperature states consists of a network of beamsplitters.
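For reference, Eqs. (\[det-opt\]) and (\[cosh\]) are straightforward to evaluate numerically. The sketch below (variable names are ours) implements the two branches of the optimal deterministic fidelity and the corresponding squeezing parameter:

```python
import math

def f_det_opt(gp, n_c, n_t):
    # Optimal deterministic fidelity, Eq. (det-opt), for single-mode gain
    # gp = g', signal photons n_c = N_C, thermal photons n_t = N_T.
    if gp >= (n_c + n_t + 1.0) / n_c:
        return (n_c + n_t + 1.0) / (gp**2 * n_c * (n_t + 1.0))
    return 1.0 / ((gp - 1.0)**2 * n_c + n_t + 1.0)

def squeezing_r(gp, n_c, n_t):
    # Two-mode squeezing parameter attaining the optimum, Eq. (cosh);
    # r = 0 below the threshold gain (N_C + N_T + 1)/N_C.
    c = gp * n_c / (1.0 + n_c + n_t)
    return math.acosh(c) if c > 1.0 else 0.0
```

The two branches of the fidelity match at the threshold gain, and the squeezing parameter vanishes continuously there.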
[**Remark 1 (Gaussianity vs non-Gaussianity).**]{} We have seen that the optimal deterministic amplification is achieved by Gaussian operations. It is worth stressing that Gaussianity was not assumed from the start, but came as a result of the optimization. This result may seem to be in contrast with the earlier work of Ref. [@nonGau-opt], which showed that Gaussian operations are suboptimal for the problem of cloning coherent states, corresponding to $N=1$, $M=2$, $g=1$, and $\mu \to \infty$. The reason for the discrepancy is that the authors of Ref. [@nonGau-opt] focused on the *local* fidelity—that is, the fidelity of each individual clone with the target state. By employing non-Gaussian operations, their protocol manages to produce clones that better resemble the target state, when examined individually. Still, when the two clones are examined jointly, the optimal cloning operation is Gaussian and coincides with the cloner proposed by Cerf, Ipe, and Rottenberg [@cerf2000cloning] (see also [@braunstein2001clonecoh]).
[**Remark 2 (amplification vs purification).**]{} For $g'\ge 1$, the output states of the optimal single-mode process are more mixed than the input states. Indeed, the number of thermal photons goes from $N_{\rm T}= 1/\mu$ in the input state to $N_{\rm T}' =\cosh^2 r \, N_{\rm T} + (\cosh^2 r -1)$ in the output state. Since $N_{\rm T}'$ is larger than $N_{\rm T}$, no purification takes place. Summarizing: for $g'\ge 1$, amplification dominates over purification in the single-mode setting. Let us look at the multimode scenario. There, the total number of thermal photons in the input (summing the contributions from all modes) is $ N_{\rm total}= N/\mu $. The total number of thermal photons in the output is $$\begin{aligned}
\nonumber N_{\rm total}' &=\cosh^2 r \, \frac {N_{\rm total}}{N} + (\cosh^2 r -1) \\
&= \left( \frac{g N_{\rm C}}{1+N_{\rm C}+N_{\rm total}/N}\right)^2 \frac { M \, }{N} \left( \frac{N_{\rm total}}N +1\right) - 1 \, ,
\end{aligned}$$ having used Eq. (\[cosh\]). The relation can also be expressed in terms of individual input and output modes as $$\begin{aligned}
\nonumber N_{\rm single}'
&= \left( \frac{g N_{\rm C}}{1+N_{\rm C}+N_{\rm single}}\right)^2 \frac { 1 \, }{N} \left( {N_{\rm single}} +1\right) - \frac 1M \, ,
\end{aligned}$$ with $N_{\rm single} = N_{\rm total}/N$ and $N_{\rm single}' = N_{\rm total}'/M$. The above equation quantifies the competition between amplification and purification for deterministic processes and for $ g \ge \sqrt {N/M}$.
Optimal probabilistic processes: the $g'\ge 1$ case {#sec:probopt}
===================================================
Non-deterministic processes are known to boost the performance of amplification [@ralph-lund-nondeterministic; @chiribellaxie], a mechanism that has been observed experimentally in the case of pure states [@heralded-amp]. Still, the case of mixed states has remained unexplored so far. To tackle the problem, we model a generic non-deterministic amplification process by a trace non-increasing completely positive map ${\mathcal{Q}}$, which describes the occurrence of a desired transformation heralded by a suitable measurement outcome. In this setting, the fidelity is given by $$F_{N\to M,g}^{\rm prob}=\frac{\int \frac {d^2 \alpha }\pi\,\, p_{\lambda} (\alpha)~ \<g\alpha |^{\otimes M}{\mathcal{Q}} \left(\rho^{\otimes N}_{\mu,\alpha} \right) |g\alpha\>^{\otimes M}}{\int \frac{d^2 \alpha } \pi\,\, p_{\lambda}(\alpha)~ {\operatorname{Tr}}\left[ {\mathcal{Q}}\left(\rho^{\otimes N}_{\mu,\alpha} \right) \right] } \, .$$ In general, the fidelity depends on the probability of the heralded outcome [@caves2013quantumlimits]. Here we will allow the probability to be arbitrarily small, thus giving the ultimate quantum fidelity achievable by arbitrary probabilistic processes.
The problem can be reduced to a single-mode problem, as illustrated in the deterministic case. Using the technique of [@fiurasek; @chiribellaxie], the ultimate fidelity for probabilistic amplification can be expressed as $$\begin{aligned}
\label{prob_opt_gen}
F^{{\rm prob}*}_{g' } = \left\| \int \frac{d^2 \alpha}{\pi} p_{{\lambda'}} (\alpha) |g' \alpha\>\< g' \alpha| \otimes \< \tilde \rho\>^{- \frac12} \rho_{\mu,\overline \alpha} \<\tilde \rho\>^{-\frac 12}\right\|_{\infty}\end{aligned}$$ where $\<\tilde \rho\>$ is the average state of the ensemble $\{\rho_{\mu,\overline \alpha} \, , p_{\lambda'} (\alpha)\}$.
By explicit calculation (Appendix \[app:optprob\]), we obtain the ultimate probabilistic fidelity $$\label{prob-opt}
F^{{\rm prob}*}_{g'\ge 1}=
\begin{cases}
\dfrac{N_{\rm C}+N_{\rm T}+1}{g'^2N_{\rm C}(N_{\rm T}+1)},& g'\ge\frac{\sqrt{(N_{\rm C}+N_{\rm T}+1)(N_{\rm C}+N_{\rm T})}}{N_{\rm C}}\\
& \\
\frac{N_{\rm C}+N_{\rm T}}{N_{\rm C}+N_{\rm T}+g'^2N_{\rm C}N_{\rm T}},& g' <\frac{\sqrt{(N_{\rm C}+N_{\rm T}+1)(N_{\rm C}+N_{\rm T})}}{N_{\rm C}} \, .
\end{cases}$$ Note that in the region $g' \ge (N_{\rm C}+N_{\rm T}+1)/N_{\rm C}$, there is no difference between the maximum deterministic fidelity and the maximum probabilistic fidelity in Eqs. (\[det-opt\]) and (\[prob-opt\]), respectively. This means that arbitrary probabilistic setups, regardless of their success probability, cannot do better than the best deterministic setup. A similar phenomenon occurs in the amplification of Gaussian-distributed coherent states [@chiribellaxie], where the advantage of probabilistic setups disappears after the amplification parameter exceeds a critical threshold.
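Eq. (\[prob-opt\]) can likewise be evaluated directly; the sketch below (variable names are ours) implements the two branches and the threshold gain:

```python
import math

def f_prob_opt(gp, n_c, n_t):
    # Ultimate probabilistic fidelity, Eq. (prob-opt), for single-mode
    # gain gp = g', with n_c = N_C and n_t = N_T.
    g_thr = math.sqrt((n_c + n_t + 1.0) * (n_c + n_t)) / n_c
    if gp >= g_thr:
        return (n_c + n_t + 1.0) / (gp**2 * n_c * (n_t + 1.0))
    return (n_c + n_t) / (n_c + n_t + gp**2 * n_c * n_t)
```

For pure inputs ($N_{\rm T}=0$) and gains below the threshold, the function returns $1$, reproducing the noiseless probabilistic amplifier; the two branches match continuously at the threshold.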
For values of $g' $ between $1$ and $ (N_{\rm C}+N_{\rm T}+1)/N_{\rm C}$ there is a gap between the performance of probabilistic and deterministic processes. For example, when the input states are pure $(N_{\rm T} = 0)$, we obtain $F^{{\rm prob}*}_{g'\ge 1} = 1$, in agreement with the existence of noiseless probabilistic amplifiers [@ralph-lund-nondeterministic; @chiribellaxie]. An interesting question is whether noiseless amplification is possible for mixed states $(N_{\rm T} > 0)$. Our result answers the question in the negative, showing that the probabilistic fidelity is never equal to 1, except in the trivial case where the input state is perfectly known ($N_{\rm C} = 0$).
The ultimate probabilistic fidelity can be achieved by a non-deterministic noiseless amplifier of the kind proposed by Ralph and Lund [@ralph-lund-nondeterministic; @heralded-amp]. Mathematically, the non-deterministic amplifier is described by the map $$\begin{aligned}
\label{Q}
Q_{K}(\rho)=Q_{K}\rho Q_{K}^{\dagger}\, , \qquad Q_{K} = y^{-K} \, \sum^{K}_{n=0}y^{n}|n\rangle\langle n| \, ,\end{aligned}$$ where $K$ is a large integer (ideally approaching infinity) and $y$ is a suitable parameter depending on the desired degree of amplification.
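The action of the filter $Q_K$ on a coherent state can be checked directly in a truncated Fock basis: since $Q_K|\alpha\rangle \propto \sum_n (y\alpha)^n/\sqrt{n!}\,|n\rangle$ up to the cutoff, the heralded output is (a truncation of) the amplified coherent state $|y\alpha\rangle$. A minimal numerical sketch (parameter values are illustrative):

```python
import numpy as np

def coherent(amp, dim):
    """Fock-basis coefficients of |amp>, truncated to `dim` levels."""
    c = np.zeros(dim, dtype=complex)
    c[0] = np.exp(-abs(amp)**2 / 2)
    for k in range(1, dim):
        c[k] = c[k - 1] * amp / np.sqrt(k)
    return c

K, y, alpha = 40, 1.4, 0.5
n = np.arange(K + 1)
q = y ** (n.astype(float) - K)       # diagonal of Q_K = y^{-K} sum_n y^n |n><n|
psi = q * coherent(alpha, K + 1)     # unnormalized heralded state Q_K |alpha>
p_succ = float(np.vdot(psi, psi).real)
psi /= np.sqrt(p_succ)
target = coherent(y * alpha, K + 1)
target /= np.linalg.norm(target)     # renormalize the truncated |y alpha>
fid = abs(np.vdot(target, psi))**2
```

Because $Q_K$ is diagonal with largest entry $1$ (thanks to the $y^{-K}$ prefactor), it is a valid measurement operator and the heralding probability stays below one.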
In order to approach the ultimate probabilistic fidelity (\[prob-opt\]), the amplification parameter $y$ has to be tuned as $$\begin{aligned}
y=\frac{g'N_{\rm C}}{N_{\rm C}+N_{\rm T}} \, ,\end{aligned}$$ for values of $ g' $ between $1$ and ${\sqrt{(N_{\rm C}+N_{\rm T}+1)(N_{\rm C}+N_{\rm T})}}/{N_{\rm C}} $, and as $$\begin{aligned}
y=\frac{N_{\rm C}+N_{\rm T}+1}{g'N_{\rm C}}
\end{aligned}$$ for values of $g'$ between ${\sqrt{(N_{\rm C}+N_{\rm T}+1)(N_{\rm C}+N_{\rm T})}}/{N_{\rm C}} $ and $ {(N_{\rm C}+N_{\rm T}+1)}/{N_{\rm C}}$. Choosing the above values, the fidelity of the non-deterministic amplifier (\[Q\]) becomes exponentially close to the optimal probabilistic fidelity in the large $K$ limit (cf. Appendix \[app:ralphflund\]).
[**Remark (amplification vs purification)**]{}. For $g'\ge 1$, the output states of the optimal process are more mixed than the input states, even in the probabilistic setting. Let us look at the region $1\le g'\le(N_{\rm C}+N_{\rm T}+1)/N_{\rm C}$, where the probabilistic advantages are more prominent. Here the expected number of thermal photons is $$\begin{aligned}
N'_{\rm T }= N_{\rm T} \, \, \frac{ y^2 \, \mu}{1+\mu-y^2} \, ,
\end{aligned}$$ which cannot be smaller than $N_{\rm T}$, since the amplification parameter $y$ is larger than or equal to $1$. In summary, no purification takes place at the single-mode level. Again, the situation is more nuanced in the multimode case, where the number of thermal photons is initially larger by a factor $N$. Explicitly, the total number of thermal photons, initially equal to $N_{\rm total}= N/\mu$, is finally equal to $$\begin{aligned}
N'_{\rm total }= \frac{ N_{\rm total} ~ M\, \mu g^2 N^2_{\rm C} }{ N^2 \left[ (1+\mu) \left(N_{\rm C}+\frac{N_{\rm total}}{N}\right)^2- (g N_{\rm C})^2 \frac M N \right]} \, .
\end{aligned}$$ Equivalently, the number of thermal photons per mode goes from $N_{\rm single}= 1/\mu$ to $$\begin{aligned}
N'_{\rm single }= \, \, \frac{ N_{\rm single}~ \mu \, g^2 \, N^2_{\rm C} }{ N \left[ (1+\mu) (N_{\rm C}+N_{\rm single})^2- (g N_{\rm C})^2 \frac MN \right]} \, .
\end{aligned}$$ According to the above equation, the values of the parameters determine whether purification can take place in conjunction with amplification.
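The single-mode claim, that no purification occurs for $y\ge 1$, follows from $y^2\mu \ge 1+\mu-y^2 \Leftrightarrow y^2(1+\mu)\ge 1+\mu$; a numerical spot check of the formula for $N'_{\rm T}$ (illustrative parameter values):

```python
import numpy as np

def nt_out(nt, mu, y):
    # expected output thermal photon number in the single-mode amplification regime
    return nt * y**2 * mu / (1 + mu - y**2)

for mu in (0.3, 1.0, 4.0):
    for y in np.linspace(1.0, 0.99 * np.sqrt(1 + mu), 25):
        assert nt_out(1.0, mu, y) >= 1.0 - 1e-12   # never purified for y >= 1
print(nt_out(1.0, 1.0, 1.0))   # equality at y = 1
```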
The $g' \le 1$ case: no advantage from probabilistic operations {#sec:puri}
===============================================================
Let us consider now the case where $g' \le 1$. In the single-mode picture, the task is to transform a mixed input state into a purer, albeit attenuated, output state. Quite surprisingly, we find that in this case there is no difference between the optimal performance of deterministic and probabilistic processes. Specifically, the exact calculation of the optimal fidelities yields the value $$\begin{aligned}
\label{puri}
F^{\rm det*}_{g'\le 1}=F^{{\rm prob}*}_{g'\le 1}=
\frac{N_{\rm C}+N_{\rm T}}{N_{\rm C}+N_{\rm T}+g'^2N_{\rm C}N_{\rm T}} \, .\end{aligned}$$ (see Appendix \[app:optimalfidelity\] for the details). Eq. (\[puri\]) tells us that there is no fidelity-probability tradeoff in the purification regime: the fidelity has the same value for every value of the success probability. Even if we postselect on extremely rare events, these events cannot increase the performance of our purification setup. This situation has to be contrasted with the case of amplification, where the reduction of the success probability is accompanied by an increase in fidelity. Note that the fidelity formula (\[puri\]) can be applied to the special case of purification of $N=2$ noisy coherent states, corresponding to $g=1$ and $g'=1/\sqrt 2$. In the limit of infinite modulation $N_{\rm C} \to \infty$, we retrieve the fidelity from the earlier work of Andersen [*et al.*]{} [@puri-exp-coherent-state].
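In the limit of large modulation, Eq. (\[puri\]) reduces to $1/(1+g'^2 N_{\rm T})$; we can at least check this limit of the formula itself numerically (values illustrative):

```python
import numpy as np

def f_puri(gp, nc, nt):
    # Eq. (puri): optimal fidelity for g' <= 1 (deterministic = probabilistic)
    return (nc + nt) / (nc + nt + gp**2 * nc * nt)

gp, nt = 1 / np.sqrt(2), 1.0
limit = 1 / (1 + gp**2 * nt)                 # N_C -> infinity limit
print(f_puri(gp, 1e8, nt), limit)
```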
The ultimate quantum fidelity of Eq.(\[puri\]) can be attained via the attenuation channel $$\begin{aligned}
{\mathcal{C}}_\theta (\rho) = {\operatorname{Tr}}_B \left[e^{i\theta(a^\dag b-b^\dag a)}(\rho\otimes |0\>\<0|) e^{-i\theta(a^\dag b-b^\dag a)}\right] \, ,\end{aligned}$$ where the angle $\theta$ has to be adjusted to satisfy the condition $$\begin{aligned}
\cos \theta=\frac{g'}{1+{N_{\rm T}}/{N_{\rm C}}} \, .\end{aligned}$$
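The channel ${\mathcal{C}}_\theta$ is a beamsplitter of transmissivity $\cos^2\theta$ with a vacuum ancilla, so it sends $|\alpha\rangle$ to $|\alpha\cos\theta\rangle$; this can be verified in a truncated Fock space (a numerical sketch, dimensions and amplitudes illustrative):

```python
import numpy as np

d = 25                                    # Fock truncation per mode
a = np.diag(np.sqrt(np.arange(1, d)), 1)  # annihilation operator

def coherent(amp, dim):
    c = np.zeros(dim, dtype=complex)
    c[0] = np.exp(-abs(amp)**2 / 2)
    for k in range(1, dim):
        c[k] = c[k - 1] * amp / np.sqrt(k)
    return c

theta, alpha = 0.6, 0.8
G = np.kron(a.conj().T, a) - np.kron(a, a.conj().T)  # a^dag b - b^dag a
H = 1j * G                                           # Hermitian generator
w, V = np.linalg.eigh(H)
U = (V * np.exp(-1j * theta * w)) @ V.conj().T       # exp(theta (a^dag b - b^dag a))
psi = U @ np.kron(coherent(alpha, d), coherent(0.0, d))
rhoA = psi.reshape(d, d) @ psi.reshape(d, d).conj().T  # trace out the ancilla
out = coherent(alpha * np.cos(theta), d)
fid = float(np.real(out.conj() @ rhoA @ out))
```

The reduced state of the signal mode overlaps the attenuated coherent state $|\alpha\cos\theta\rangle$ up to truncation error.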
By definition, no amplification takes place at the single-mode level for $g'< 1$. The situation is different at the multimode level: for $g < N/M$ it is still possible to have amplification ($ g>1$), provided that $N$ is larger than $M$. In this setting, some of the input modes are sacrificed, in order to allow for the joint amplification and purification of the output modes. Note that postselection and other probabilistic operations do not contribute to the tradeoff: the best way to jointly amplify and purify noisy coherent states is deterministic.
Quantum benchmark {#sec:benchmark}
=================
We identified the optimal setups for the joint amplification and purification of noisy coherent states, both in the deterministic and in the probabilistic scenario. The results obtained so far are appealing because the optimal quantum processes can be achieved using present-day technology. Still, real experiments are typically subject to imperfections and, as a result, the optimal performance may not be exactly attained. The question is how to certify that the experiment could not be reproduced with classical (a.k.a. M&P) strategies, by just estimating the input state and, based on the outcome, preparing the output state. In order to rule out this possibility, it is important to know the value of the classical fidelity threshold (CFT), which provides the benchmark that has to be passed in order to claim a successful implementation of quantum amplification.
We will start from the CFT corresponding to probabilistic M&P strategies, where one is allowed to discard unfavourable measurement outcomes. By considering strategies with arbitrary probability of success, we will obtain the most stringent benchmark one can choose. In the single-mode scenario, the probabilistic CFT can be computed using the method of [@chiribellaxie], as $$\begin{aligned}
F^{c}_{g'} = \left\| \int \frac{\d^2 \alpha}{\pi} p_{{\lambda'}} (\alpha) |g' \alpha\>\< g' \alpha| \otimes \< \tilde \rho\>^{- \frac12} \rho_{\mu,\overline \alpha} \<\tilde \rho\>^{-\frac 12}\right\|_{\times}\end{aligned}$$ where $\<\tilde \rho\>$ is the average state of the ensemble $\{\rho_{\mu,\overline \alpha} \, , p_{\lambda'} (\alpha)\}$ and $$\begin{aligned}
\| A\|_\times = \sup_{ \| |\varphi \> \| = \| |\psi \> \|= \| | \varphi'\>\| = \| |\psi'\> \| = 1} \, | \<\varphi| \<\psi| A |\varphi'\> |\psi'\> |
\end{aligned}$$ is the injective cross norm.
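For a product operator $X\otimes Y$ the injective cross norm factorizes as $\|X\|_\infty\|Y\|_\infty$, which gives a small test case for a numerical evaluation by alternating maximization over product vectors (a local-search sketch, real matrices for simplicity):

```python
import numpy as np

def cross_norm(A, m, n, iters=100, seed=0):
    """Injective cross norm of A on R^m (x) R^n via alternating SVD steps.
    This is a local maximization; for well-behaved A it finds the global value."""
    rng = np.random.default_rng(seed)
    phi, psi = rng.random(m), rng.random(n)
    phi /= np.linalg.norm(phi); psi /= np.linalg.norm(psi)
    val = 0.0
    for _ in range(iters):
        W = (A @ np.kron(phi, psi)).reshape(m, n)
        U, s, Vt = np.linalg.svd(W)
        u, v = U[:, 0], Vt[0]           # best "bra" product vector
        W = (A.T @ np.kron(u, v)).reshape(m, n)
        U, s, Vt = np.linalg.svd(W)
        phi, psi = U[:, 0], Vt[0]       # best "ket" product vector
        val = s[0]                      # current value of the objective
    return val

X, Y = np.diag([2.0, 1.0]), np.diag([3.0, 1.0])
val = cross_norm(np.kron(X, Y), 2, 2)   # expected: ||X|| * ||Y|| = 6
```

Each SVD step maximizes the overlap with the best product vector on one side, so the objective is nondecreasing and converges.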
By evaluating the norm (Appendix \[app:CFT\]), we find the probabilistic CFT $$\begin{aligned}
\label{cft explic n}
&F_{g'}^{c}=\dfrac{N_{\rm C}+\widetilde N_{\rm T}}{N_{\rm C}+\widetilde N_{\rm T}+N_{\rm C} \widetilde N_{\rm T}g'^{2}} \, ,\end{aligned}$$ with $\widetilde N_{\rm T}=N_{\rm T}+1$. Note that the CFT is always lower than the quantum limits established for both deterministic and probabilistic quantum processes. This means that, in principle, there is always a way to certify the quantumness of a realistic setup for amplification.
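A coarse numerical scan (illustrative grid) confirms the strict gap between the benchmark and the probabilistic quantum fidelity of Eq. (\[prob-opt\]):

```python
import numpy as np

def f_prob(gp, nc, nt):
    thr = np.sqrt((nc + nt + 1) * (nc + nt)) / nc
    if gp >= thr:
        return (nc + nt + 1) / (gp**2 * nc * (nt + 1))
    return (nc + nt) / (nc + nt + gp**2 * nc * nt)

def f_cft(gp, nc, nt):
    ntt = nt + 1                         # \tilde N_T = N_T + 1
    return (nc + ntt) / (nc + ntt + nc * ntt * gp**2)

gap = min(f_prob(g, nc, nt) - f_cft(g, nc, nt)
          for nc in (0.5, 2.0, 10.0)
          for nt in (0.0, 0.5, 3.0)
          for g in np.linspace(1.0, 5.0, 17))
print(gap)   # strictly positive on the whole grid
```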
Quite remarkably, we find that the probabilistic CFT can be achieved deterministically by performing the heterodyne measurement with POVM elements $P(\beta) = |\beta\>\<\beta|$ and conditionally re-preparing the pure coherent state $|z\beta\>$, with $z=g'N_{\rm C}/(N_{\rm C}+N_{\rm T}+1)$ (see Appendix \[app:CFT\]). In other words, postselection is completely useless when searching for a classical strategy for amplification and purification of noisy coherent states: no matter how small the probability of success is, probabilistic M&P channels cannot do better than the optimal deterministic channel.
Mathematically, this procedure is described by the entanglement breaking channel $$\tilde{Q}(\rho)=\int\dfrac{d^{2}\beta}{\pi}\langle\beta|\rho|\beta\rangle |z \beta\rangle\langle z \beta| \, .$$
Conclusions {#sec:conclusion}
===========
In this paper we investigated a general scenario that interpolates between the tasks of amplification and purification. We identified the optimal physical process that produces the best approximation of a pure and amplified coherent state from multiple copies of a Gaussian-distributed noisy coherent state. We carried out an *ab initio* optimization both for deterministic processes and for probabilistic processes, showing how to implement the optimal processes using existing techniques in quantum optics. Specifically, the optimal deterministic process can be implemented using a network of beamsplitters and a two-mode squeezing operation, while the optimal probabilistic process uses the nondeterministic amplifier by Ralph and Lund [@ralph-lund-nondeterministic], again, combined with a network of beamsplitters.
We proved that probabilistic operations outperform their deterministic counterparts in a certain region of the parameter space. However, there is also a parameter region where probabilistic operations offer no advantage, irrespective of the success probability. In fact, there even exists a region where the best amplification scheme is a passive optical network consisting only of beamsplitters.
Since all the optimal protocols identified in our work are experimentally feasible, it is important to have criteria to witness quantum advantages over classical amplification techniques. In the paper we provided rigorous benchmarks that can be used to establish such advantages. Remarkably, the value of the benchmark is independent of the required probability of success: classical deterministic strategies and classical strategies using postselection perform equally well. It is also worth noting that the value of the benchmark is strictly smaller than the optimal quantum fidelity for every value of the parameters. This result establishes that the joint amplification and purification of noisy coherent states is a genuinely quantum task. It is our hope that this work will stimulate the realization of new experiments and the progress in the implementation of optimized optical setups that approach the ultimate quantum limit.
[**Acknowledgements.**]{} This work is supported by the Hong Kong Research Grant Council through Grant No. 17326616, by National Science Foundation of China through Grant No. 11675136, by the HKU Seed Funding for Basic Research, and by the Canadian Institute for Advanced Research.
Proof that the r.h.s. of Eq. (\[det-opt\]) is an upper bound to the deterministic fidelity {#app:optimalfidelity}
==========================================================================================
Our goal is to evaluate the operator norm of the operator $$\begin{aligned}
\Gamma_\sigma = \int \frac{\d^2 \alpha}{\pi} p_{\lambda'} (\alpha) |g' \alpha\>\< g' \alpha| \otimes \sigma^{- \frac12} \rho_{\mu,\overline \alpha} \sigma^{-\frac 12}\end{aligned}$$ and to minimize the norm over the state $\sigma$. As an ansatz, we choose $\sigma$ to be a thermal state, of the form $$\begin{aligned}
& \sigma_\kappa=\int \frac{d^2 \alpha}{\pi} \kappa e^{-\kappa |\alpha|^2}|\alpha\>\<\alpha|\, , \qquad \kappa\ge 0 \, .\end{aligned}$$
This choice gives us an upper bound of the optimal fidelity, as $$\begin{aligned}
\label{upperbound}
F^{\rm det*}_{g'}&=\|\Gamma_{\sigma}\|_\infty\le\| \Gamma_{ \sigma_\kappa}\|_\infty \, .\end{aligned}$$ Now, the operator norm is given by $\| \Gamma_{ \sigma_\kappa}\|_\infty=\lim_{p\to \infty}({\operatorname{Tr}}| \Gamma_{\sigma_\kappa}|^p)^{1/p}$. Calculating the trace we obtain $$\begin{aligned}
\label{b1}
& {\operatorname{Tr}}\left[ \Gamma_{\sigma_\kappa}^p\right]\\
&\nonumber = \left[ \frac{\mu {\lambda'}(\kappa+1)}{\kappa}\right]^p \, \int \frac{\d^{2p} {\boldsymbol{\alpha}}}{\pi^p} \int \frac{\d^{2p} {\boldsymbol{\beta}}}{\pi^p} \, e^{ - ({\boldsymbol{\alpha}} \oplus {\boldsymbol{\beta}})^\dag M_p ({\boldsymbol{\alpha}} \oplus {\boldsymbol{\beta}})} \, ,\end{aligned}$$ where ${\boldsymbol{\alpha}} \in {\mathbb{C}}^p$ and $ {\boldsymbol{\beta}} \in {\mathbb{C}}^p$ are the column vectors defined by ${\boldsymbol{\alpha}} = (\alpha_1,\dots, \alpha_p)^T$ and ${\boldsymbol{\beta}} = (\beta_1,\dots, \beta_p)^T$, respectively, ${\boldsymbol{\alpha}} \oplus {\boldsymbol{\beta}} \in{\mathbb{C}}^{2p}$ is the column vector ${\boldsymbol{\alpha}} \oplus {\boldsymbol{\beta}} = (\alpha_1,\dots, \alpha_p , \beta_1,\dots, \beta_p)^T$, and $M_p$ is the $2p\times 2p$ matrix defined by $M_p = \begin{pmatrix} A & B \\
B & C \end{pmatrix}$ with $$\begin{aligned}
A_{ij} =& ({\lambda'} +1+g'^2) \, \delta_{ij} - g'^2 \delta_{ j, (i+1) \mod p} \\& - (\kappa+1) \, \delta_{ j, (i-1)\mod p} \\
B_{ij} =& \delta_{ij} - (\kappa+1) \, \delta_{ j, (i-1) \mod p} \\
C_{ij} =& (\mu+ 1) \delta_{ij} - (\kappa+1) \, \delta_{ j, (i-1) \mod p} \, .
\end{aligned}$$ Note that $A, B$, and $C$ are circulant matrices and, therefore, they are diagonal in the Fourier basis. Hence, the matrix $M_p$ can be rewritten as $M_p = U M_p' U^\dag$, with $M_p' = \begin{pmatrix} A' & B' \\ B' & C' \end{pmatrix} $, where $A', B',$ and $C'$ are diagonal matrices and $U$ is the block diagonal matrix $U = \begin{pmatrix} F & {\boldsymbol{0}} \\ {\boldsymbol{0}} & F \end{pmatrix}$, $F$ being the Fourier transform. Finally, the matrix $M_p'$ can be expressed as $M_p' = U_\pi \widetilde M_p U_{\pi}^\dag $, where $U_\pi$ is a permutation matrix and $\widetilde M_p$ is a direct sum of two-by-two matrices, which in turn can be diagonalized. As a result, we obtain the relation $$\begin{aligned}
\nonumber {\operatorname{Tr}}\left[ \Gamma_{\sigma_\kappa}^p\right] & = \left[ \frac{\mu {\lambda'}(\kappa+1)}{\kappa}\right]^p \, \frac 1 { \det \widetilde M_p} \\
\label{aaa} & = \left[ \frac{\mu {\lambda'}(\kappa+1)}{\kappa}\right]^p \, \frac 1 { \det M_p} \, ,
\end{aligned}$$ where we used the fact that $\widetilde M_p$ and $M_p$ are unitarily equivalent and therefore have the same determinant. Using Eq. (\[aaa\]), we can compute the norm of $\Gamma_{\sigma_\kappa}$ as $$\begin{aligned}
\label{b3}
\| \Gamma_{\sigma_\kappa} \|_{\infty} & = \frac{\mu {\lambda'}(\kappa+1)}{\kappa} \, \frac 1 { \lim_{p\to \infty} \,\left(\det M_p \right)^{1/p}} \, .\end{aligned}$$ The determinant of $M_p$ can be computed with the relations $\det M_p =\det ( AC-B^2)$ and $$\begin{aligned}
(AC-B^2)_{ij} &= a \, \delta_{ij} - b \, \delta_{ j, (i+1) \mod p} - c \, \delta_{ j, (i-1) \mod p} \, .
\end{aligned}$$ with $$\begin{aligned}
a &= \mu+{\lambda'} +\mu{\lambda'} +g'^2(\mu+\kappa+2) \\
b & = g'^2(\mu+1) \\
c & = (g'^2+\mu+{\lambda'})(\kappa+1) \, .
\end{aligned}$$ Since $AC-B^2$ is a circulant matrix, its eigenvalues are given by ${\lambda'}_{p,n} = a - b\, \omega_p^{-n} - c\, \omega_p^{n} $, where $\omega_p = \exp[ 2\pi i/p]$. Hence, we have $$\begin{aligned}
\lim_{p\rightarrow\infty} \, \ln \left( \det M_p \right )^{1/p} & =\lim_{p\rightarrow\infty} \, \frac{1}{p} \, \sum^{p-1}_{n=0} \, \ln {\lambda'}_{p,n}\\
& =\lim_{p\rightarrow\infty} \, \frac{1}{p}\sum^{p-1}_{n=0} \, \ln(a -b \, \omega_p^{-n} - c \, \omega_p^{n})\\
&=\int^{2\pi}_0 \, \frac{d\theta}{2\pi} \, \ln[b(y_+-e^{-i\theta})(1-y_-e^{i\theta})] \, ,
\end{aligned}$$ with $y_\pm=\frac{a\pm \sqrt{a^2-4bc}}{2b}$.
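The two facts used here, the circulant spectrum and the value $\ln(b\,y_+)$ of the limiting integral, can be checked numerically (sample coefficients $a=3$, $b=c=1$, for which $y_+>1>y_->0$):

```python
import numpy as np

p, a, b, c = 200, 3.0, 1.0, 1.0                  # sample values with a^2 > 4bc
I = np.eye(p)
M = a * I - b * np.roll(I, -1, axis=0) - c * np.roll(I, 1, axis=0)
# cyclic tridiagonal => eigenvalues a - b w^-n - c w^n, w = exp(2 pi i / p)
w = np.exp(2j * np.pi * np.arange(p) / p)
lam = a - b / w - c * w
sign, logdet = np.linalg.slogdet(M)
y_plus = (a + np.sqrt(a**2 - 4 * b * c)) / (2 * b)
limit = np.log(b * y_plus)                       # value of the contour integral
print(logdet / p, limit)
```

For a periodic (analytic) integrand, the Riemann sum converges exponentially fast, so even moderate $p$ reproduces the integral essentially to machine precision.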
Now, choosing $\kappa \leq \frac{{\lambda'}\mu}{{\lambda'}+\mu}$ we can satisfy the conditions $y_+\geq1$ and $y_-\leq1$. Under these conditions, the above integral can be evaluated, giving the value $\ln(b y_+)$. Hence, the upper bound (\[upperbound\]) becomes $$\begin{aligned}
\label{eq:a5}
F^{\rm det*}_{g'\ge 1}&\le\frac{\mu{\lambda'}(\kappa+1)}{\kappa}\frac{2}{a+\sqrt{a^2-4bc}} \, .\end{aligned}$$
By minimizing over $\kappa$, we obtain the optimal upper bound. The optimal value of $\kappa$ is $$\begin{aligned}
\kappa^*= \left\{
\begin{array}{ll} \frac{{\lambda'}\mu}{{\lambda'}+\mu} \qquad & g'\ge ({\lambda'}+\mu+{\lambda'}\mu)/\mu \\
& \\
\frac{\mu(g'-1)^2+{\lambda'}(\mu+1)}{g'(g'+\mu)} \qquad & g'< ({\lambda'}+\mu+{\lambda'}\mu)/\mu \, .
\end{array}
\right.
\end{aligned}$$ Inserting the above values in Eq.(\[eq:a5\]) we obtain the r.h.s. of Eq. (\[det-opt\]).
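The expression for $\kappa^*$ can be cross-checked by a brute-force scan of the bound (\[eq:a5\]) over the admissible range $\kappa\le{\lambda'}\mu/({\lambda'}+\mu)$; a sketch with sample values $\lambda'=1$, $\mu=2$, $g'=2$ (second branch):

```python
import numpy as np

lp, mu, gp = 1.0, 2.0, 2.0     # lambda', mu, g' (here g' < (lp+mu+lp*mu)/mu)

def bound(kappa):
    # r.h.s. of Eq. (eq:a5) as a function of kappa
    a = mu + lp + mu * lp + gp**2 * (mu + kappa + 2)
    b = gp**2 * (mu + 1)
    c = (gp**2 + mu + lp) * (kappa + 1)
    disc = a**2 - 4 * b * c
    if disc < 0:
        return np.inf
    return mu * lp * (kappa + 1) / kappa * 2 / (a + np.sqrt(disc))

kappa_star = (mu * (gp - 1)**2 + lp * (mu + 1)) / (gp * (gp + mu))  # = 0.625 here
grid = np.arange(0.01, mu * lp / (mu + lp), 1e-3)
vals = [bound(k) for k in grid]
k_best = float(grid[int(np.argmin(vals))])
print(k_best, kappa_star)
```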
Proof of Eq. (\[prob-opt\]) {#app:optprob}
============================
The evaluation of the probabilistic fidelity follows the same steps used in the previous section. The only difference is that we have to fix the state $\sigma$ to $\sigma = \<\tilde \rho \>$, the (conjugate of the) average state of the input ensemble.
The state $\<\tilde \rho \> $ is a thermal state of the form $\sigma_{\kappa'}$ with $\kappa'=\frac{{\lambda'}\mu}{{\lambda'}+\mu}$. By substituting the value of $\kappa'$ into Eq. (\[b3\]) and using Eq. (\[prob\_opt\_gen\]), we get the ultimate fidelity achievable by arbitrary probabilistic processes, in the form of Eq. (\[prob-opt\]).
Optimality of two-mode squeezing {#app:twomodesqueeze}
================================
For the parametric amplifier channel $$\begin{aligned}
{\mathcal{C}}_r(\rho)=\mathrm{Tr}_{B}\left[e^{r(a^{\dag}b^{\dag}-ab)} \, (\rho\otimes|0\rangle\langle0| ) \, e^{-r(a^{\dag}b^{\dag}-ab)}\right] \, ,\end{aligned}$$ the fidelity is $$\begin{aligned}
F^{r}_{g'\ge 1}=\dfrac{{\lambda'}\mu}{{\lambda'}(\mu+1)\cosh^{2}r+\mu(g'-\cosh r)^{2}} \, .\end{aligned}$$
By optimizing over $r$, we get the maximum value advertised in Eq.(\[det-opt\]). The maximum is attained by choosing $r$ such that $\cosh r =g' N_{\rm C}/(1+N_{\rm C}+N_{\rm T})$, in the case $g'\geq (N_{\rm C}+N_{\rm T}+1)/N_{\rm C}$, and by choosing $\cosh r=1$ otherwise.
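The stated optimum follows by minimizing the denominator of $F^r$ over $u=\cosh r\ge 1$; a brute-force check (sample values $\lambda'=\mu=1$, i.e. $N_{\rm C}=N_{\rm T}=1$, so the threshold is $3$):

```python
import numpy as np

def fid_r(u, lp, mu, gp):
    # fidelity of the parametric amplifier with u = cosh(r) >= 1
    return lp * mu / (lp * (mu + 1) * u**2 + mu * (gp - u)**2)

lp = mu = 1.0
u = np.linspace(1.0, 6.0, 50001)
best = {gp: float(u[int(np.argmax(fid_r(u, lp, mu, gp)))]) for gp in (4.0, 2.0)}
print(best)   # g'=4: u* = g' N_C/(1+N_C+N_T) = 4/3;  g'=2: clamped at u = 1
```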
Optimality of the noiseless nondeterministic amplifier {#app:ralphflund}
=====================================================================
The noiseless amplifier is described by the quantum operation $Q_{K}(\rho)=Q_{K}\rho Q_{K}^{\dagger}$ with $Q_{K}\propto\sum^{K}_{n=0}y^{n}|n\rangle\langle n|$. Its fidelity is given by
$$\begin{split}
&F_{g'\ge 1,K}^{\rm prob}=\dfrac{\int\dfrac{d^{2}\alpha}{\pi}{\lambda'} e^{-{\lambda'}|\alpha|^{2}}\langle g' \alpha |Q_{K}\rho_{\alpha,x}Q_{K}^{\dagger}|g'\alpha\rangle}{\int\dfrac{d^{2}\beta}{\pi}{\lambda'} e^{-{\lambda'}|\beta|^{2}}\mathrm{Tr}[\rho_{\beta,x}Q_{K}^{\dagger}Q_{K}]}\\
&=\dfrac{\int\dfrac{d^{2}\alpha}{\pi}\int\dfrac{d^{2}\gamma}{\pi} e^{-\mu|\gamma|^{2}} e^{-{\lambda'}|\alpha|^{2}}e^{-(1-y^{2})|\gamma+\alpha|^{2}}|\langle g' \alpha|P_{K}|y(\gamma+\alpha)\rangle|^{2}}{\int\dfrac{d^{2}\beta}{\pi}\int\dfrac{d^{2}\gamma}{\pi} e^{-\mu|\gamma|^{2}} e^{-{\lambda'}|\beta|^{2}}e^{-(1-y^{2})|\gamma+\beta|^{2}}\langle y(\gamma+\beta)|P_{K}|y(\gamma+\beta)\rangle}\\
& \geq C(y)\int\dfrac{d^{2}\alpha}{\pi}\int\dfrac{d^{2}\gamma}{\pi} e^{-\mu|\gamma|^{2}} e^{-{\lambda'}|\alpha|^{2}}e^{-(1-y^{2})|\gamma+\alpha|^{2}}[|\langle g' \alpha|y(\gamma+\alpha)\rangle|^{2}-2|\langle g' \alpha |(I-P_{K})|y(\gamma+\alpha)\rangle|]\\
&\geq \dfrac{({\lambda'}+\mu)(1-y^{2})+{\lambda'}\mu }{{\lambda'}+\mu+{\lambda'}\mu+g'^2-y^2g'^2+\mu g'^2-2yg'\mu}-2\sqrt{\mathbb{E}[\langle g' \alpha |(I-P_{K})|g'\alpha\rangle]\mathbb{E}[\langle y(\gamma+ \alpha )|(I-P_{K})|y(\gamma+\alpha)\rangle]}
\end{split}$$
where $\mathbb{E}(f)$ denotes the expectation value of the function $f$ over the Gaussian distribution $p(\alpha,\gamma)=C(y)e^{-\mu|\gamma|^{2}}e^{-{\lambda'}|\alpha|^{2}}e^{-(1-y^{2})|\gamma+\alpha|^{2}}$, with $C(y)=({\lambda'}+\mu)(1-y^{2})+{\lambda'}\mu$.
The expectation values can be computed explicitly as $$\begin{aligned}
&\mathbb{E}[\langle g' \alpha |(I-P_{K})|g'\alpha\rangle]
\\&=\left[\dfrac{g'^{2}}{g'^{2}+{\lambda'}+1-y^{2}-\dfrac{(1-y^{2})^{2}}{\mu+1-y^{2}}}\right]^{K+1}\\\end{aligned}$$ and $$\begin{aligned}
&\mathbb{E}[\langle y(\gamma+ \alpha) |(I-P_{K})|y(\gamma+\alpha)\rangle]
\\&=\left[\dfrac{y^{2}}{1+{\lambda'} -\dfrac{{\lambda'}^{2}}{{\lambda'}+\mu}}\right]^{K+1} \, .\end{aligned}$$ Note that both expectation values vanish exponentially fast in the large $K$ limit.
Now, we tune the amplification parameter $y$ in order to attain the maximum fidelity:
1. for $g'$ between $1$ and $\sqrt{\frac{{\lambda'}^2}{\mu}+{\lambda'}+(1+\frac{\lambda'}\mu)^2}$, we set $y=\frac{g'\mu}{{\lambda'}+\mu}$, obtaining fidelity $F^{\rm prob}_{K}\rightarrow\dfrac{{\lambda'} +\mu}{{\lambda'} +\mu+ g'^{2}}$ in the limit $K\to\infty$.
2. For $g'$ between $ \sqrt{\frac{{\lambda'}^2}{\mu}+{\lambda'}+(1+\frac{\lambda'}\mu)^2}$ and $\frac{{\lambda'}}{\mu}+{\lambda'}+1 $, we set $y=\frac{{\lambda'}+\mu+{\lambda'}\mu}{g'\mu}$, obtaining fidelity $F_{K}^{\rm prob}\rightarrow\dfrac{{\lambda'}+\mu+{\lambda'}\mu}{g'^{2}(\mu+1)}$ in the limit $K\rightarrow\infty$.
Since the limit values coincide with the optimal probabilistic fidelities of Eq. (\[prob-opt\]), we conclude that the noiseless amplifier, for suitable values of the amplification parameter, is optimal for the probabilistic amplification and purification of noisy coherent states.
Evaluation of the quantum benchmark {#app:CFT}
===================================
The CFT for joint amplification and purification can be upper bounded as $$\begin{split}\label{cft-prob-gen}
F_{g'}^{prob,c}&=\|\Gamma_{\sigma}\|_{\times}=\|\Gamma_{\sigma}^{\mathrm{T}_{2}}\|_{\times}\\&\le \|\Gamma_{\sigma}^{\mathrm{T}_{2}}\|_{\infty}\\
&=c_1\left \| \int\dfrac{d^{2}\alpha}{\pi}
D(c_2 \alpha)x'^{a^\dag a}D^\dag(c_2 \alpha)\otimes|g'\alpha\>\<g'\alpha|\right\|_\infty\\
&=\dfrac{{\lambda'}+\mu+{\lambda'}\mu}{{\lambda'}+\mu+{\lambda'}\mu+g'^{2}(\mu+1)} \, ,
\end{split}$$ where $T_2$ is the partial transpose over the Hilbert space of the input state and $c_1$, $c_2$ and $x'$ are: $$\begin{aligned}
&c_1=\frac{{\lambda'}\mu+{\lambda'}+\mu}{\mu+1}\\
&c_2=\frac{\sqrt{({\lambda'}\mu+{\lambda'}+\mu)({\lambda'}+\mu)}}{\mu}\\
&x'=\frac{{\lambda'}\mu+{\lambda'}+\mu}{({\lambda'}+\mu)(\mu+1)} \, .\end{aligned}$$
Here we show that the upper bound can be achieved by the deterministic measure-and-prepare channel $$\widetilde{C}(\rho)=\int\dfrac{d^{2}\beta}{\pi}\langle\beta|\rho|\beta\rangle |z \beta\rangle\langle z \beta| \qquad z=\frac{g'\mu}{{\lambda'}+\mu+{\lambda'}\mu} \, .$$
Explicitly, the average fidelity is $$\begin{aligned}
F^{{\rm M\&P}}_{g'}&\nonumber=\int\frac {d^2\alpha}{\pi}\dfrac{d^{2}\beta}{\pi}{\lambda'} e^{-{\lambda'}|\alpha|^2}\langle\beta|\rho_{\mu,\alpha}|\beta\rangle|\<g'\alpha |z \beta\rangle|^2\\
&=\nonumber \dfrac{{\lambda'}\mu}{({\lambda'}\mu+{\lambda'}+\mu)z^{2}-2\mu g'z+\mu({\lambda'}+g'^{2})} \\
&=\dfrac{{\lambda'}+\mu+{\lambda'}\mu}{{\lambda'}+\mu+{\lambda'}\mu+g'^{2}(\mu+1)} \, .\end{aligned}$$ Note that this value coincides with the upper bound (\[cft-prob-gen\]). Hence, we conclude that the CFT for amplification is $$F_{g'}^{\rm det,c} = F_{g'}^{prob,c}=\dfrac{{\lambda'}+\mu+{\lambda'}\mu}{{\lambda'}+\mu+{\lambda'}\mu+g'^{2}(\mu+1)}\, .$$ Eq. (\[cft explic n\]) follows by expressing the above equation in terms of the expected photon numbers $N_{\rm C}$ and $\widetilde N_{\rm T}$.
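As a final consistency check, the benchmark in the $(\lambda',\mu)$ variables coincides with Eq. (\[cft explic n\]) under the identifications $N_{\rm C}=1/\lambda'$ and $N_{\rm T}=1/\mu$ (an assumption consistent with all the formulas above); a numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
for lp, mu, gp in rng.uniform(0.1, 5.0, size=(50, 3)):
    s = lp + mu + lp * mu
    f1 = s / (s + gp**2 * (mu + 1))            # benchmark in (lambda', mu) form
    nc, ntt = 1 / lp, 1 + 1 / mu               # N_C and \tilde N_T = N_T + 1
    f2 = (nc + ntt) / (nc + ntt + nc * ntt * gp**2)
    assert abs(f1 - f2) < 1e-9                 # the two forms agree
print("consistent")
```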
---
abstract: 'First, second and third nearest neighbor mixing potentials for FePt alloys were calculated from first principles using a Connolly-Williams approach. Using the mixing potentials obtained in this manner, the dependence of equilibrium L1$_0$ ordering on temperature was studied for bulk and for a spherical nanoparticle with 3.5nm diameter at equiatomic composition by use of Monte Carlo simulation and the analytical ring approximation. The calculated order-disorder temperature for bulk (1495-1514 K) was in relatively good agreement (4% error) with the experimental value (1572K). For nanoparticles of finite size, the (long range) order parameter changed continuously from unity to zero with increasing temperature. Rather than a discontinuity indicative of a phase transition, we obtained an inflection point in the order as a function of temperature. This inflection point occurred at a temperature that was below the bulk phase transition temperature and that decreased as the particle size decreased. Our calculations predict that 3.5nm diameter particles in configurational equilibrium at 600$^{\circ}$C (a typical annealing temperature for promoting L1$_0$ ordering) have an L1$_0$ order parameter of 0.83 (compared to a maximum possible value equal to unity). According to our investigations, the experimental absence of (relatively) high L1$_0$ order in 3.5nm diameter nanoparticles annealed at 600$^{\circ}$C or below is primarily a problem of *kinetics* rather than equilibrium.'
author:
- 'R.V. Chepulskii$^{1,2,*}$, J. Velev$^{1}$ and W.H. Butler$^{1}$'
date: 'Received 7 September 2004; accepted 10 October 2004 by J. Appl. Phys.'
title: 'Monte Carlo simulation of equilibrium L1$_0$ ordering in FePt nanoparticles'
---
Introduction
============
Self-assembled, monodispersed FePt nanoparticles are being intensively investigated for possible future application as an ultra-high density magnetic storage medium. In order to be useful as a storage medium, these particles, because of their extremely small volume, $V$, must have sufficiently high magnetic anisotropy, $K_u$, to withstand thermal fluctuations of the direction of magnetization. This requires values of the thermal stability factor, $(K_u V)/(k_\texttt{B} T)$, of approximately 50. The particles are usually produced by a “hot soap” process that yields a disordered fcc solid solution alloy (e.g. Ref. ). Such particles are not useful for information storage because they are superparamagnetic at room temperature due to their low magnetic anisotropy.
Typically, the particles are annealed at a temperature $T\simeq600^\circ$C in order to induce an ordered L1$_0$ phase[@Takahashi03; @Sint]. The layered L1$_0$ phase[@L10] is known from studies of bulk alloys to have an extremely high magnetic anisotropy ($K_u\cong7\times10^7$ erg/cm$^3$). This value of magnetic anisotropy would provide a sufficiently large thermal stability factor to make 3.5nm diameter particles viable for information storage.
Unfortunately, it appears to be difficult to achieve a high degree of long range atomic order in FePt *nanoparticles* with $\lesssim$4nm diameter by annealing at $T\lesssim$600$^{\circ}$C (e.g. Ref.). One can consider two possible reasons for the fact that it has not been possible to obtain well ordered small particles. First, the observed order may be low because the particle is *not* in its equilibrium state due to the slow kinetics at low temperatures. Alternatively, the *equilibrium* order itself may be low even at relatively low temperatures because of the small size of nanoparticles. The latter explanation was suggested in Ref. . There, the order-disorder phase transition temperature was estimated to decrease with decreasing particle size. For particle sizes less than 1.5 nm in diameter, the phase transition temperature was found to be below the typical annealing temperature $T\simeq600^{\circ}$C. Therefore, particles of diameter less than 1.5 nm were predicted to have no long range order in their equilibrium state at 600$^{\circ}$C. This explanation is in qualitative agreement with experiment. The difference between the experimental (4nm) and theoretical (1.5nm) critical size for disappearance of L1$_0$ order at 600$^{\circ}$C was attributed to the neglect of nanoparticle surface effects.
From our point of view, however, the results obtained in Ref.  require verification because of the limitations of the theoretical models used in that study. Namely, the interatomic potentials in alloys usually are much more complicated and longer-ranged than the nearest neighbor Lennard-Jones model that was used. In addition, the order-disorder phase transition temperature was estimated in Ref.  by comparing the free energies of completely ordered and completely disordered states; whereas in reality, the ordered state approaches the phase transition point (with increasing temperature) while not being completely ordered. Also, the disordered state would be expected to approach the phase transition (with decreasing temperature), not with a completely random atomic distribution but with an atomic distribution that has substantial short range order. Moreover, it is known[@NoFinitePhTr] that there is no formal phase transition in a finite system.
In the present paper we utilize first principles calculations (VASP code[@VASP]) together with the Connolly-Williams[@CW] method and Monte Carlo (MC) simulations (utilizing the Metropolis algorithm[@Metropolis]) to study the temperature dependence of equilibrium L1$_0$ order in a spherical FePt nanoparticle with 3.5nm diameter and equiatomic composition ($c=0.5$).
Results
=======
We consider an Fe-Pt alloy in the framework of the commonly used two-component lattice gas model. In such a model[@Lee52], two types of atoms are distributed over the sites of a rigid crystal lattice. The atoms are allowed to be situated only at the crystal lattice sites and each site can be occupied by only one atom. The atoms interact through the lattice potentials (so-called mixing potentials) and can exchange their positions according to Gibbs statistics.
We used the Connolly-Williams[@CW] method to calculate the mixing potentials. Within this method, the energies of several periodic atomic distributions (i.e. long-range ordered structures called superstructures; for example L1$_0$) are calculated by first principles methods. Then the mixing potentials are determined by the best fit to those energies. We considered twenty-three linearly independent Fe-Pt superstructures of the same equiatomic composition $c=0.5$. First principles calculations were performed within the local-density approximation to density-functional theory, using the VASP program package.[@VASP] All superstructures were totally relaxed including shape and volume relaxation of the unit cell and individual displacements of atoms within the unit cell. An $8\times8\times8$ mesh of $k$-points in the full Brillouin zone was employed.
The L1$_0$ superstructure was included in our first set of first principles calculations. In this case, after atom position relaxation, we obtained $3.848$Å and $3.771$Å for $a$ and $c$ lattice parameters of the corresponding tetragonal lattice, respectively. For comparison the experimental values are $3.847$Å and $3.715$Å.[@acExper] In addition, our calculated results showed the L1$_0$ ferromagnetic superstructure to be more stable (i.e. has lower energy) than the antiferromagnetic one in accordance with experiment. We believe that this good correspondence between theoretical and experimental results confirms the adequacy of our VASP first principles calculations.
By applying the Connolly-Williams method, we obtained 0.08769 eV, -0.03946 eV and 0.01585 eV for the first, second and third nearest neighbor pair mixing potentials, respectively. The average accuracy with which we fit the energy of the twenty-three superstructures within the Connolly-Williams method was 1.14% per structure.
To verify the calculated values of mixing potential, we calculated the phase transition temperature in the *bulk* FePt alloy using these values. As a result we obtained 1495 K and 1514 K within the analytical ring approximation[@Ring] and MC simulation, respectively. The close correspondence of these values to the experimental[@FePt-L10-bulk] one of 1572 K (4% error), demonstrates the adequacy of the calculated mixing potential.
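To illustrate the Monte Carlo machinery, here is a heavily simplified Kawasaki-exchange Metropolis sketch on a small periodic fcc cell using the three mixing potentials above. It assumes the Ising-like convention $E=\frac12\sum_{i\ne j}V_{ij}s_is_j$ with $s=\pm1$ (positive potential favoring unlike neighbors); the production simulations used far larger cells ($N=216000$ atoms), so this only demonstrates that the L1$_0$ state is stable against exchanges at a typical annealing temperature:

```python
import numpy as np

rng = np.random.default_rng(1)
kB = 8.617e-5                                  # Boltzmann constant, eV/K
V = {2: 0.08769, 4: -0.03946, 6: 0.01585}      # 1st/2nd/3rd-shell potentials (eV),
                                               # keyed by squared distance in
                                               # doubled fcc coordinates
L = 8                                          # cube edge, periodic boundaries

sites = [(i, j, k) for i in range(L) for j in range(L) for k in range(L)
         if (i + j + k) % 2 == 0]              # fcc sublattice
idx = {s: n for n, s in enumerate(sites)}
offs = [(x, y, z) for x in range(-2, 3) for y in range(-2, 3) for z in range(-2, 3)
        if x * x + y * y + z * z in V]
neigh = [[(idx[((i + x) % L, (j + y) % L, (k + z) % L)], V[x * x + y * y + z * z])
          for (x, y, z) in offs] for (i, j, k) in sites]

s = np.array([1 if k % 2 == 0 else -1 for (_, _, k) in sites])  # perfect L1_0

def field(n):                                  # local field sum_m V_nm s_m
    return sum(v * s[m] for m, v in neigh[n])

def pair(n, m):
    return next((v for q, v in neigh[n] if q == m), 0.0)

def eta_z():                                   # L1_0 order parameter along z
    fe = [(s[n] + 1) / 2 for n in range(len(sites))]
    even = np.mean([fe[n] for n, (_, _, k) in enumerate(sites) if k % 2 == 0])
    odd = np.mean([fe[n] for n, (_, _, k) in enumerate(sites) if k % 2 == 1])
    return abs(even - odd)

def sweep(T):                                  # Kawasaki (exchange) Metropolis
    beta = 1.0 / (kB * T)
    for _ in range(len(sites)):
        n, m = rng.integers(len(sites), size=2)
        if s[n] == s[m]:
            continue
        dE = (s[m] - s[n]) * (field(n) - field(m)) - 4.0 * pair(n, m)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[n], s[m] = s[m], s[n]

for _ in range(20):
    sweep(873.0)                               # anneal at 600 C
print(eta_z())
```

The exchange move conserves the composition, and at 873 K the energy cost of disordering a perfect L1$_0$ cell (a few eV per exchange with these potentials) makes acceptances exponentially rare, so the order parameter stays at unity over this short run.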
To investigate long range order in spherical nanoparticles, we used the calculated mixing potentials in MC simulations to determine the temperature dependence of the equilibrium L1$_0$ order parameter $\eta$ in the case of spherical FePt nanoparticles with 3.5nm diameter and equiatomic composition $c=0.5$. The results are presented in Fig. \[Fig\].
![The temperature dependence of the FePt equilibrium L1$_0$ order parameter $\eta$ for bulk (“bulk”) and for a spherical nanoparticle with 3.5nm diameter (“sphere”) at equiatomic composition $c=0.5$. The two bulk results were obtained by Monte Carlo simulation (“MC”) for a parallelepiped sample containing $N=216000$ atoms and within the analytical ring approximation[@Ring] (“ring”). In the simulations, the starting configuration at each temperature was the completely ordered one. Free and periodic boundary conditions were applied in the cases of the spherical nanoparticle and the parallelepiped, respectively. For the nanoparticle at 378$^{\circ}$C, 528$^{\circ}$C and 600$^{\circ}$C, the error bars correspond to the dispersion of $\eta$ due to thermodynamic fluctuations.[]{data-label="Fig"}](fig_1)
We define the equilibrium L1$_0$ order parameter $\eta$ as the statistical average of the maximum value among three absolute values of “directional” order parameters $\eta_x,\eta_y,\eta_z$: $$\label{LRO-def}
\eta=\left\langle\max\{\left|\eta_x\right|,\left|\eta_y\right|,\left|\eta_z\right|\}\right\rangle_{\texttt{MC}},$$ where $\eta_i\quad(i=x,y,z)$ is defined as the difference between the Fe atom concentrations at odd and even crystal planes perpendicular to the $i$-th direction, and $\left\langle\ldots\right\rangle_{\texttt{MC}}$ is the statistical average over the MC steps. We chose this definition of $\eta$ because the $x$, $y$, and $z$ directions of L1$_0$ order are equivalent by symmetry. In addition, changing the sign of $\eta_i$, which corresponds to exchanging the Fe and Pt atoms, produces a structure that at $c=0.5$ is equivalent by symmetry to the original one. During the MC simulation, we observed fluctuations that cause transformations between these equivalent states (i.e. fluctuations in the sign and direction of $\eta$)[@APB], in addition to the usual statistical fluctuations within one such state. The L1$_0$ order parameter $\eta$ defined in Eq. (\[LRO-def\]) takes the fluctuation-induced transformations between the equivalent states into account[@SepAver].
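The per-configuration part of this definition can be sketched in a few lines. The sketch below uses a simple-cubic occupancy array as a stand-in for the f.c.c. lattice (a simplification for illustration) and omits the average over MC steps:

```python
import numpy as np

def eta_L10(occ):
    """L1_0 order parameter of a single configuration.

    occ: 3-d array of 0/1 site occupancies (1 = Fe) on a simple cubic grid used
    here as a stand-in for the f.c.c. lattice.  Returns
    max(|eta_x|, |eta_y|, |eta_z|), where eta_i is the difference between the
    Fe concentrations on odd and even planes normal to axis i.  The full order
    parameter of Eq. (LRO-def) is the MC-step average of this quantity.
    """
    etas = []
    for axis in range(3):
        planes = np.moveaxis(occ, axis, 0)
        etas.append(abs(planes[1::2].mean() - planes[0::2].mean()))
    return max(etas)

# Perfect L1_0 order along z: alternating all-Fe / all-Pt (001) planes.
L = 8
occ = np.zeros((L, L, L), dtype=int)
occ[:, :, 1::2] = 1
print(eta_L10(occ))  # -> 1.0

# A random (disordered) configuration gives a small eta that vanishes on average.
occ_dis = np.random.default_rng(0).integers(0, 2, size=(L, L, L))
print(eta_L10(occ_dis) < 0.3)  # -> True
```

Taking the maximum over the three directions before averaging is what keeps the fluctuation-induced jumps between symmetry-equivalent ordered states from cancelling the measured order.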
Conclusions and discussion
==========================
From Fig. \[Fig\] one may conclude the following. The ring approximation (which corresponds to bulk, i.e. an infinite sample) clearly shows a phase transition at which the order parameter $\eta$ drops to zero. Strictly speaking, in both finite-size samples considered here (sphere and parallelepiped) there is no phase transition, in accordance with a general theorem[@NoFinitePhTr]. The order parameter $\eta$ changes continuously from unity to zero with increasing temperature, and instead of a phase transition we obtain an inflection point in the $\eta(T)$ curve. In the case of the parallelepiped with $216000$ atoms, the behavior near the inflection point closely resembles a phase transition.[@MC-PhTr]
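Locating such an inflection point from sampled $\eta(T)$ data is straightforward; the sketch below uses an invented smooth curve standing in for the MC points:

```python
import numpy as np

# Hypothetical sampled eta(T) curve standing in for the MC data points: for a
# finite sample eta(T) falls continuously, and the inflection (steepest-descent)
# point serves as a finite-size estimate of the bulk transition temperature.
T = np.linspace(800.0, 2000.0, 241)      # K, grid step 5 K
Tc_true, width = 1500.0, 80.0            # invented curve parameters
eta = 0.5 * (1.0 - np.tanh((T - Tc_true) / width))

# Discrete derivative; the inflection point is where the slope is most negative.
slope = np.gradient(eta, T)
T_inflect = T[np.argmin(slope)]
print(T_inflect)  # -> 1500.0
```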
Our calculations predict that 3.5nm diameter particles in configurational equilibrium at 600$^{\circ}$C would have an order parameter $\eta=0.83$ (compared to a maximum possible value of unity). Therefore, annealing at 600$^{\circ}$C will not yield perfect order for 3.5nm diameter particles: approximately 17% of the atoms will be on the wrong sublattices, even in equilibrium. The dispersion of $\eta$ due to thermodynamic fluctuations is comparatively small (2.5%) near the annealing temperature $T=600^{\circ}$C.
According to our investigations, the experimental absence of (relatively) high order in nanoparticles below 600$^{\circ}$C is primarily a *kinetic* problem rather than an equilibrium one. It should be noted that, to rapidly obtain the correct equilibrium state, we used simplified kinetics in our MC simulation[@Equilibr]: we allowed *any* two randomly chosen atoms to exchange their positions *without* an additional diffusion barrier. In a real alloy, the main mechanism of atomic diffusion is much slower, because it consists of atoms exchanging positions with their nearest-neighbor vacancies through energy barriers. Moreover, at each temperature we started the simulation from the completely ordered state, whereas actual nanoparticles are initially prepared in a disordered state, and the transformation from the disordered to the ordered state may be much slower than the reverse one, especially at low temperatures. Nevertheless, even with our simplified kinetics, we observed a slowing-down problem in approaching the equilibrium ordered state at low temperatures. In real nanoparticles this problem must be much worse. Kinetic acceleration methods such as irradiation and/or the addition of other types of atoms[@harrell] may be useful in accelerating the formation of long range order.
In our study we used mixing potentials obtained for *infinite* bulk alloys and used *free* boundary conditions to simulate the equilibrium configuration of finite size particles. The presence of the surface will change the atomic potentials in the near-surface region in comparison with bulk potentials. Analytical estimation of such surface effects is not straightforward and will be done elsewhere[@FullPaper]. In reality, the problem of the effect of the surface on the interatomic exchange potentials is even more complicated because the nanoparticles of most current interest are likely to have unknown atoms and molecules attached to their surfaces.
Acknowledgments
===============
This research was supported by the Defense Advanced Research Projects Agency through ONR contract N00014-02-01-0590 and by National Science Foundation MRSEC Grant No. DMR0213985. The authors thank Prof. J.W. Harrell for stimulating discussions.
Electronic address: r\_chepulskii@yahoo.com.
S. Sun, C.B. Murray, D. Weller, L. Folks, and A. Moser, Science **287**, 1989 (2000).
Y.K. Takahashi, T. Ohkubo, M. Ohnuma, and K. Hono, J. Appl. Phys. **93**, 7166 (2003).
Annealing at $\sim$600$^{\circ}$C results in sintering of nanoparticles into larger agglomerates which is not desirable for high density magnetic recording.
L1$_0$ is an f.c.c. tetragonal superstructure, in which atoms of two types form layers occupying alternating (001) or (010) or (100) planes of the original f.c.c. lattice.
O.G. Mouritsen, *Computer Studies of Phase Transitions and Critical Phenomena* (Springer-Verlag, Berlin, 1984), Sec. 2.2.8.
G. Kresse and J. Furthmüller, Comput. Mater. Sci. **6**, 15 (1996); Phys. Rev. B **54**, 11169 (1996).
J.W. Connolly and A.R. Williams, Phys. Rev. B **27**, 5169 (1983).
N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, and E. Teller, J. Chem. Phys. **21**, 1087 (1953).
T.D. Lee and C.N. Yang, Phys. Rev. **87**, 410 (1952).
JCPDS-International Centre for Diffraction Data, 1999.
R.V. Chepulskii, Solid State Commun. **115**, 497 (2000); Phys. Rev. B **69**, 134431 (2004).
H. Okamoto, in *Binary Phase Diagrams* (ASM International, Cleveland, OH, 1996).
We did not observe anti-phase domains; in Ref. it will be shown that they are not favored energetically.
R.V. Chepulskii and W.H. Butler, to be published.
Because of the above-discussed symmetry equivalence, we obtain $\left \langle
\eta_i\right\rangle_{\texttt{MC}}=0$ for any $i=x,y,z$ at *any* temperature, when the statistical average is taken over a sufficiently large number of MC steps.
Often the inflection point is used to approximate the bulk phase transition point in MC simulations of finite-size samples.
All of the results presented in this paper correspond to equilibrium states and are, therefore, independent of the particular kinetic pathways that lead to these states.
S. Kang, D.E. Nikles, and J.W. Harrell, Nano Letters **2**, 1033 (2002).
---
abstract: |
A gravity theory is developed with the metric ${\hat g}_{\mu\nu}=
{g}_{\mu\nu}+B\partial_\mu\phi\partial_\nu\phi$. In the present universe the additional contribution from the scalar field in the metric ${\hat g}_{\mu\nu}$ can generate an acceleration in the expansion of the universe, without negative pressure and with a zero cosmological constant. In this theory, gravitational waves will propagate at a different speed from non-gravitational waves. It is suggested that gravitational wave experiments could test this observational signature.
author:
- |
M. A. Clayton$^\dagger$ and J. W. Moffat${ }^*$\
\
${^\dagger}$*Department of Physics, Virginia Commonwealth University,*\
*Richmond, Virginia 23284-2000, USA*\
\
${}^*$*Department of Physics, University of Toronto,*\
*Toronto, Ontario M5S 1A7, Canada*
title: '**Scalar-Tensor Gravity Theory For Dynamical Light Velocity**'
---
Introduction
============
Recent observations of apparent luminosities of Type Ia supernovae (SNe-Ia) at moderate redshift ($z\sim 0.6$) indicate that the universe is expanding at an accelerated rate [@Perlmutter]. If this observational evidence is correct, then the implications for cosmology are remarkable. Attempts to explain this phenomenon include “quintessence” [@Steinhardt], the cosmological constant [@Starobinsky], a domain wall dominated universe [@Spergel] and non-perturbative vacuum contributions to the effective action of a massive scalar field [@Parker]. In general it would appear that the cosmological fluid is dominated in the present universe by an exotic energy density, which has a negative pressure and which did not play an important role in the early universe. The fine tuning problem related to the cosmological constant is well-known [@Starobinsky; @Weinberg] and amounts to a fine tuning of order $10^{100}$ between the early universe inflationary phase and the present universe. For the quintessence model described by a slowly rolling scalar field, the potential has to be extremely flat, so that it cannot roll to its true minimum in the present universe. Characterizing the flatness by a mass $m$ requires that the mass be extraordinarily light, $m\sim 10^{-33}$ eV. A mass of the same size is implied by the non-perturbative vacuum driven mechanism.
All standard gravity theories, including Einstein’s general relativity (GR), are normally required to satisfy the positive energy conditions for matter in the present universe, since over extended times this is perceived as a reasonable physical requirement of a gravity theory. For the vacuum energy, $p=-\rho$, which leads to a cosmological constant $\Lambda$. It is believed that symmetry principles exist in particle physics which would force $\Lambda$ to be zero, but there have been no convincing arguments found to support this. How can we then describe the apparent speeding up of the expansion of the universe in a gravity theory without violating the positivity conditions on the density and pressure? Can we construct a self-consistent gravity theory which can explain the data without violating the positivity conditions and with a zero cosmological constant? In the following, we shall develop such a gravity theory based on a metric given by $$\label{bimetric}
{\hat g}_{\mu\nu}=A[\phi]g_{\mu\nu}+B[\phi]\partial_\mu\phi\partial_\nu\phi,$$ where $\phi$ is a scalar field and $\partial_\mu\phi=\partial\phi/\partial
x^{\mu}$. The inverse metrics ${\hat g}^{\mu\nu}$ and $g^{\mu\nu}$ satisfy $${\hat g}^{\mu\alpha}{\hat g}_{\nu\alpha}={\delta^\mu}_\nu,\quad g^{\mu\alpha}
g_{\nu\alpha}={\delta^\mu}_\nu.$$ When $A[\phi]=1$ and $B[\phi]=0$, we retrieve conventional GR. As an immediate simplification, we assume that $A[\phi]=1$ and $B[\phi]=B={\rm
constant}$, which serves to eliminate contributions to the field equations for $\phi$ with more complicated dependence on derivatives of the source tensor. The assumption that $A[\phi]=1$ is motivated by the fact that this essentially conformal factor has been exhaustively studied in the Brans-Dicke scenario [@Dicke; @Bertolami], and we are interested in novel effects.
In a previous work [@Vector], a bimetric gravity theory was constructed based on a metric similar to (\[bimetric\]) but in which the second term was described by a vector field $\psi_{\mu}$. This model provided a dynamical mechanism for a superluminary theory in which light travels faster in the early universe, thereby resolving problems in cosmology [@Vector; @Superlum1; @Albrecht; @Superlum2].
In the gravity theory presented in the following, we shall find that there is an extra contribution to the gravity component in the equations of motion for the expansion factor in cosmology. This contribution can lead to an acceleration of the expansion of the universe. The equation of state satisfies the standard positivity conditions for the density and pressure. A fit to the type Ia supernovae (SNe-Ia) data is obtained.
The Action and Field Equations
==============================
The model that we consider here is identical in spirit to that which appeared in an earlier publication [@Vector], with the important difference that the coupling is through a scalar field which, given the predominance of scalar fields in cosmological models [@KT] and as effective models of more fundamental theories such as string theory, is more in line with current research in the field. Here we will constrain ourselves to the simplest of the class of such models so that we focus on novel effects and avoid confusing the issue with Brans-Dicke-like and dilaton couplings. These are easily included.
The model consists of three parts, represented by the three separate contributions to the action: $$S=S_{\rm grav}+S_{\phi}+S_{\rm M}.$$ The standard general relativity contribution is: $$S_{\rm grav}=-\frac{1}{\kappa}\int d^4x\sqrt{-g}(R[g]-2\Lambda),$$ where $\kappa=16\pi G/c^4$, $\Lambda$ is the cosmological constant, and we employ a metric with a ($+$$-$$-$$-$) signature. We also have a contribution from a minimally-coupled scalar field: $$S_{\rm \phi}=-\frac{1}{\kappa}\int d^4x\sqrt{-g}
\Bigl[\frac{1}{2}g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi-V(\phi)\Bigr],$$ where the scalar field $\phi$ has been chosen to be dimensionless. The stress-energy tensor of the scalar field is of the standard form: $$\label{phiT}
T_\phi^{\mu\nu}=\frac{1}{\kappa}\Bigl[g^{\mu\alpha}g^{\nu\beta}\partial_\alpha\phi
\partial_\beta\phi-g^{\mu\nu}\Bigl(\frac{1}{2}g^{\alpha\beta}
\partial_\alpha\phi\partial_\beta\phi-V(\phi)\Bigr)\Bigr],$$ and results from the variation of the scalar field action with respect to $g_{\mu\nu}$.
Where our model departs from conventional model-building wisdom is in the coupling of the gravitational field to matter. Instead of constructing the matter action using the metric $g_{\mu\nu}$, we use the combination (\[bimetric\]), resulting in the identification of $\hat{g}_{\mu\nu}$ as the physical metric that provides the arena on which matter fields interact. The matter action, $S_{\mathrm{M}}[\psi^I] =
S_{\mathrm{M}}[\hat{g},\psi^I]$, where $\psi^I$ represents all the matter fields in spacetime, is one of the standard forms, and therefore the energy-momentum tensor derived from it is given by $$\label{eq:matterEM}
\frac{\delta S_{\mathrm{M}}}{\delta
\hat{g}_{\mu\nu}}=-\frac{1}{2}\sqrt{-\hat{g}}\hat{T}^{\mu\nu}.$$ It satisfies the conservation laws $$\label{matterconservation}
\hat{\nabla}_\nu\Bigl[\sqrt{-\hat{g}}\hat{T}^{\mu\nu}\Bigr]=0,$$ as a consequence of the matter field equations only. Note that it is the covariant derivative $\hat{\nabla}_\mu$ which is compatible with the metric $\hat{g}_{\mu\nu}$ which appears, and *not* $\nabla_\mu$ which is defined to be compatible with $g_{\mu\nu}$. We have included the factor $\sqrt{-\hat{g}}$ in (\[matterconservation\]) since this will be a convenient starting point to derive the consistency of the Bianchi identities with the field equations.
As an explicit example, if the matter model consisted of a Maxwell one-form field, then we would have: $$\label{eq:Maxwell Action} %
S_M=-\int d^4x\sqrt{-\hat{g}}\Bigl[\frac{1}{4}
\hat{g}^{\mu\nu}\hat{g}^{\alpha\beta}
F_{\mu\alpha}F_{\nu\beta}\Bigr],$$ where $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$. Note that we have assumed that it is the density $\sqrt{-\hat{g}}$ that appears in the action, which implies that the energy-momentum tensor $$\hat{T}^{\alpha\beta}=\hat{F}^{\alpha\mu}{{}\hat{F}^\beta}_\mu-\frac{1}{4}\hat{F}^2\hat{g}^{\alpha\beta},$$ where for example $\hat{F}^{\alpha\beta}=\hat{g}^{\alpha\mu}\hat{g}^{\beta\nu}F_{\mu\nu}$, satisfies (\[matterconservation\]) if $A_\mu$ satisfies the field equations $\hat{\nabla}_\beta\hat{F}^{\alpha\beta}=0$.
Since it is only $\hat{g}_{\mu\nu}$ which is ‘visible’ to matter fields, it is a reasonable assumption (and in line with the identical assumption implicitly made in general relativity) that material test bodies will follow geodesics of $\hat{g}_{\mu\nu}$: $$\frac{d{\hat u}^\alpha}{d\lambda}+{\hat\Gamma}^\alpha_{\mu\nu}{\hat u}^\mu
{\hat u}^\nu=0,$$ where $\lambda$ is an affine parameter and the tangent vector ${\hat
u}^\mu$ is normalized as: $\hat{g}_{\mu\nu}{\hat u}^\mu{\hat u}^\nu = 0$, $\hat{g}_{\mu\nu}{\hat u}^\mu {\hat u}^\nu = c^2$ for null and time-like geodesics, respectively.
Because all matter fields will couple to $\hat{g}_{\mu\nu}$ in the same manner, the Weak Equivalence Principle is not violated. We could easily introduce weak equivalence principle violating terms into the theory through Yukawa couplings between the scalar field $\phi$ and the matter fields. This we leave for future consideration. Because one can always work in a locally defined frame with $\hat{g}_{\mu\nu}\approx
\eta_{\mu\nu}$, where $\eta_{\mu\nu}$ is the Minkowski flat spacetime metric tensor, the field equations for the matter fields can take on their special relativistic form and the Einstein Equivalence Principle is not violated, either. However, if one considers expanding the matter and gravitational fields in some region of spacetime where $\hat{g}_{\mu\nu}\approx
\eta_{\mu\nu}$, then the perturbation equations for $\hat{g}_{\mu\nu}$ and $\phi$ will in general *not* take on their special relativistic form, and therefore the Strong Equivalence Principle is expected to be violated.
One key feature of our model is that there are no instabilities induced by this coupling, in the sense of higher-order derivatives or couplings to unhealthy gauge modes. To see this we derive the field equations.
From the variation of the metric (\[bimetric\]) we get $$\delta{\hat g}_{\mu\nu}=\delta g_{\mu\nu}
+2B\partial_{(\mu}\phi\partial_{\nu)}\delta\phi,$$ and using the definition (\[eq:matterEM\]), we obtain the field equations $$\label{fieldeq1}
G^{\mu\nu}=
\frac{\kappa}{2}(T_\phi^{\mu\nu}+s{\hat T}^{\mu\nu}),$$ $$\label{fieldeq2} %
\nabla^2\phi+V'[\phi]-\kappa s B{\hat T}^{\mu\nu}
{\hat\nabla}_\mu {\hat\nabla}_\nu\phi =0,$$ where $G^{\mu\nu}=R^{\mu\nu}-\frac{1}{2}g^{\mu\nu}R$, $s
=\sqrt{-{\hat g}}/\sqrt{-g}$ and $\nabla^2\phi=g^{\mu\nu}\nabla_\mu\nabla_\nu\phi$.
As a check that we have indeed produced sensible field equations, we can show that the Bianchi identities on the curvature of $g_{\mu\nu}$ are compatible with the field equations. From the definition (\[bimetric\]) we can determine the inverses $${\hat g}^{\mu\nu}=g^{\mu\nu}
-\frac{B}{I}\nabla^\mu\phi\nabla^\nu\phi,$$ and $$g^{\mu\nu}={\hat g}^{\mu\nu}+\frac{B
}{K}{\hat\nabla}^\mu\phi{\hat\nabla}^\nu\phi,$$ where $$I=1+Bg^{\mu\nu}\partial_\mu\phi\partial_\nu\phi,\quad K=1-B {\hat
g}^{\mu\nu}\partial_\mu\phi\partial_\nu\phi,$$ from which it follows that $IK=1$, and we have defined $\nabla^\mu\phi=g^{\mu\nu}\partial_\nu\phi$ and noted that $$\label{eq:unhat}
{\hat\nabla}^\mu\phi={\hat g}^{\mu\nu}\partial_\nu\phi %
= K\nabla^\mu\phi.$$ Using these in the definition of the metric compatible connection coefficients $\hat{\Gamma}^\alpha_{\mu\nu}$, we find the equivalent forms $$\label{gammahat} %
{\hat\Gamma}^\alpha_{\mu\nu}-\Gamma^\alpha_{\mu\nu}
=\frac{B}{I}\nabla^\alpha\phi\nabla_\mu\nabla_\nu\phi %
=\frac{B}{K}\hat{\nabla}^\alpha\phi\hat{\nabla}_\mu\hat{\nabla}_\nu\phi.$$
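The closed-form inverses and the relation $IK=1$ are purely algebraic consequences of ${\hat g}_{\mu\nu}$ being a rank-one update of $g_{\mu\nu}$ (the Sherman-Morrison formula), and are easy to confirm numerically. The sketch below uses a generic invertible symmetric matrix in place of the Lorentzian metric, since the identities do not depend on the signature:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random nondegenerate symmetric "metric" g and a random gradient d(phi);
# B is the constant from the text.  All numerical values are illustrative.
A = rng.normal(size=(4, 4))
g = A + A.T + 10.0 * np.eye(4)     # symmetric and safely invertible
dphi = rng.normal(size=4)
B = 0.3

g_inv = np.linalg.inv(g)
ghat = g + B * np.outer(dphi, dphi)

# Closed-form inverse from the text:
#   ghat^{mu nu} = g^{mu nu} - (B/I) grad^mu(phi) grad^nu(phi),
# with I = 1 + B g^{mu nu} d_mu(phi) d_nu(phi).
grad = g_inv @ dphi
I = 1.0 + B * dphi @ g_inv @ dphi
ghat_inv = g_inv - (B / I) * np.outer(grad, grad)
assert np.allclose(ghat_inv @ ghat, np.eye(4))

# The scalars I and K = 1 - B ghat^{mu nu} d_mu(phi) d_nu(phi) satisfy I K = 1.
K = 1.0 - B * dphi @ ghat_inv @ dphi
assert np.isclose(I * K, 1.0)
```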
The matter energy-momentum conservation laws (\[matterconservation\]) can be re-written as $${\hat\nabla}_\nu \Bigl[\sqrt{-{\hat g}}{\hat T}^{\mu\nu}\Bigr]=
\nabla_\nu\Bigl[\sqrt{-{\hat g}}{\hat
T}^{\mu\nu}\Bigr]+({\hat\Gamma}^\mu_{\alpha\beta}
-\Gamma^\mu_{\alpha\beta})\sqrt{-{\hat g}}{\hat
T}^{\alpha\beta}=0,$$ which using (\[gammahat\]), become $$\label{eq:DT1} %
{\hat\nabla}_\nu \Bigl[\sqrt{-{\hat g}}{\hat T}^{\mu\nu}\Bigr]= %
\nabla_\nu\Bigl[\sqrt{-{\hat g}}{\hat T}^{\mu\nu}\Bigr] %
+\frac{B}{K}\hat{\nabla}^\mu\phi \sqrt{-{\hat g}}{\hat
T}^{\alpha\beta}\hat{\nabla}_\alpha\hat{\nabla}_\beta\phi=0.$$ From the $\nabla_\nu$-derivative of (\[phiT\]) we find $$\label{eq:DT2} %
\nabla_\nu[ \sqrt{-g}
T_\phi^{\mu\nu}]=\frac{1}{\kappa}\sqrt{-g}(\nabla^2\phi+V'[\phi])\nabla^\mu\phi %
=\sqrt{-\hat{g}}\frac{B}{K}\hat{\nabla}^\mu\phi\hat{T}^{\alpha\beta}\hat{\nabla}_\alpha\hat{\nabla}_\beta\phi,$$ where we have used (\[eq:unhat\]) and the scalar field equations (\[fieldeq2\]). Finally, taking the $\nabla_\nu$-derivative of ($\sqrt{-g}$ times) the field equations (\[fieldeq1\]) we find $$\nabla_\nu[\sqrt{-g} G^{\mu\nu}] = %
\frac{\kappa}{2}\Bigl(\nabla_\nu [\sqrt{-g} T^{\mu\nu}_\phi] %
+\nabla_\nu [s\sqrt{-g} \hat{T}^{\mu\nu}]\Bigr),$$ the right hand side of which vanishes using (\[eq:DT1\]) and (\[eq:DT2\]). We have therefore shown that (\[matterconservation\]) is sufficient to guarantee that the field equations (\[fieldeq1\]) are consistent with the Bianchi identities.
Although we have introduced two metrics into the action, the field equation for $\phi$ (equation (\[fieldeq2\])) is forcing upon us another null cone. In order to identify it, we use (\[gammahat\]) to find $$\hat{\nabla}_\mu\hat{\nabla}_\nu\phi=\frac{1}{I}\nabla_\mu\nabla_\nu\phi,$$ which gives the equivalent form of (\[fieldeq2\]): $$\label{equiv1} \biggl(g^{\mu\nu}- s \kappa\frac{B}{I} {\hat
T}^{\mu\nu}\biggr)\nabla_\mu\nabla_\nu\phi+V'[\phi]=0.$$ Therefore, there are three light cones in our gravitational theory: the light cone $ds^2=g_{\mu\nu}dx^\mu dx^\nu=0$, which describes the null surfaces of gravitational propagation; $d{\hat
s}^2=0$, which describes the propagation of ordinary matter; and $d{\bar s}^2=0$, determined from the metric $$\label{metric3}
{\bar g}^{\mu\nu}=g^{\mu\nu}
- s \kappa \frac{B}{I}{\hat T}^{\mu\nu},$$ which describes the propagation of the scalar field $\phi$ obtained from the principal part of (\[equiv1\]). There are, of course, expressions for these in terms of the metric $\hat{g}_{\mu\nu}$, but they will not be required in their general form here.
We stress that the field equations do not “suffer” from any higher-order derivative instabilities, and the causality ‘violation’ is expressed in this class of models as multiple light cones dynamically determined indirectly by the matter fields. Thus it can be said that it is a reasonable model with a “variable speed of light”.
Cosmological Model
==================
We shall assume that spacetime and the scalar field $\phi$ are homogeneous and isotropic. We will also begin by writing the metric $g_{\mu\nu}$ in comoving form, which leads us to the standard Friedmann-Robertson-Walker (FRW) metric: $$\label{metric1}
ds^2
=c^2dt^2-R^2(t)\biggl[\frac{dr^2}{1-kr^2}+r^2d\theta^2+r^2\sin^2\theta
d\phi^2\biggr],$$ where we employ a dimensionless radial coordinate $r$ and $k=0,\pm 1$ for the flat, closed and hyperbolic spatial topologies, respectively. Let us write the metric as $$\label{eq:FRWcomoving}
ds^2=c^2dt^2-R^2(t)\gamma_{ij}dx^idx^j.$$ For the metric ${\hat g}_{\mu\nu}$ we obtain $$\label{metric2} d{\hat s}^2
=I(t)c^2dt^2-R^2(t)\gamma_{ij}dx^idx^j,$$ where $$I(t)=1+\frac{B}{c^2}{\dot\phi}^2(t),$$ and ${\dot\phi}=d\phi/dt$. We have that $s=\sqrt{I}$ and that $I >
0$ to guarantee that the metric $\hat{g}_{\mu\nu}$ is real and non-degenerate. From (\[metric3\]), we find $$\label{FRWmetric3}
d{\bar s}^2
=\biggl(1-\frac{c^2\kappa
B}{I^{3/2}}\rho_M\biggr)c^2dt^2 - \biggl(1+\frac{\kappa
B}{I^{1/2}}p_M\biggr)R^2(t) \gamma_{ij}dx^idx^j,$$ and so we have that $I^{3/2}>c^2\kappa B\rho_M$ for this metric to be non-degenerate.
Let us assume a perfect fluid model for the energy-momentum tensor $${\hat T}^{\mu\nu}=(\rho_M+\frac{p_M}{c^2}){\hat u}^\mu{\hat
u}^\nu -p_M{\hat g}^{\mu\nu},$$ where ${\hat u}^\mu=dx^\mu/d{\hat s}$ is normalized to ${\hat g}_{\mu\nu}
{\hat u}^\mu{\hat u}^\nu=c^2$, so that the only non-vanishing component is ${\hat u}^0=c/\sqrt{{\hat g}_{00}}=c\sqrt{{\hat g}^{00}}$. We obtain $${\hat T}^{00}=\frac{\rho_M}{I},\quad {\hat T}^{0i}=0,\quad {\hat
T}^{ij}=\frac{p_M}{R^2}\gamma^{ij},$$ and from (\[matterconservation\]) the matter conservation equation: $$\label{conserve}
{\dot\rho_M}+3\biggl(\rho_M+\frac{p_M}{c^2}\biggr)\frac{{\dot R}}{R}=0.$$
The field equations (\[fieldeq1\]) become $$\label{equation1}
\biggl(\frac{{\dot R}}{R}\biggr)^2+\frac{c^2k}{R^2}
=\frac{1}{3}c^2\Lambda+\frac{1}{6}\biggl(\frac{1}{2}{\dot\phi}^2
+c^2V[\phi]\biggr)+\frac{\kappa c^4}{6}\frac{\rho_M}{\sqrt{I}},$$ $$\label{equation2} \biggl(\frac{{\dot R}}{R}\biggr)^2+2\frac{{\ddot
R}}{R}+\frac{c^2k}{R^2}
=c^2\Lambda-\frac{1}{2}\biggl(\frac{1}{2}{\dot\phi}^2-c^2V[\phi]\biggr)
-\frac{\kappa c^2}{2}{\sqrt{I}}p_M,$$ and the scalar field equation (\[fieldeq2\]) reduces to $$\label{ddotphi}
\frac{1}{c^2}\biggl(1-\frac{c^2\kappa B}{I^{3/2}}\rho_M\biggr){\ddot\phi}
+\frac{3{\dot R}}{c^2R}{\dot\phi}\biggl(1+\frac{\kappa B}
{\sqrt{I}}p_M\biggr)+V'[\phi]=0.$$
The field equations (\[equation1\]-\[ddotphi\]) are written in a comoving frame for the gravitational metric $g_{\mu\nu}$. In order to make a more obvious connection with standard cosmology, we shall transform our field equations by using the time coordinate defined by $$\tau=\int\sqrt{I}dt,$$ which puts the matter metric ${\hat g}_{\mu\nu}$ into comoving coordinate form. In this new frame, the field equations become $$\label{coeq1}
\biggl(\frac{{\dot R}}{R}\biggr)^2+\frac{c^2kK}{R^2}
=\frac{1}{3}c^2\Lambda K+\frac{1}{6}
\biggl(\frac{1}{2}{\dot\phi}^2+c^2KV[\phi]\biggr)
+\frac{1}{6}\kappa c^4K^{3/2}\rho_M,$$ $$\label{coeq2}
\biggl(\frac{{\dot R}}{R}\biggr)^2+2\frac{{\ddot
R}}{R}+\frac{c^2kK}{R^2} =\frac{{\dot K}{\dot
R}}{KR}+c^2\Lambda K-\frac{1}{2}\biggl(\frac{1}{2}{\dot\phi}^2
-c^2KV[\phi]\biggr)-\frac{1}{2}\kappa c^2\sqrt{K}p_M,$$ and $$\label{scalarco}
\frac{1}{c^2}\biggl(1-\kappa c^2B K^{3/2}\rho_M\biggr){\ddot\phi}
+\frac{3K{\dot R}}{c^2R}{\dot\phi}\biggl(1+ \kappa
B\sqrt{K}p_M\biggr)+K^2V'[\phi]=0.$$
Making the definitions $$\label{scalar} \rho_\phi=\frac{1}{\kappa
c^2}\biggl(\frac{1}{2c^2}{\dot\phi}^2+KV[\phi]\biggr),\quad
p_\phi=\frac{1}{\kappa}\biggl(\frac{1}{2c^2}{\dot\phi}^2-KV[\phi]\biggr),$$ we see that $K=1-\kappa c^2 B(\rho_\phi+p_\phi/c^2)$, and that $\rho_B:=1/(\kappa c^2 B)$ gives the energy scale at which $K$ deviates significantly from $1$, and therefore significant deviations from a standard general relativity plus matter model are to be expected.
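For completeness, the identity for $K$ quoted above follows in one line from the definitions (\[scalar\]): $$\rho_\phi+\frac{p_\phi}{c^2}
=\frac{1}{\kappa c^2}\Bigl(\frac{{\dot\phi}^2}{2c^2}+KV[\phi]\Bigr)
+\frac{1}{\kappa c^2}\Bigl(\frac{{\dot\phi}^2}{2c^2}-KV[\phi]\Bigr)
=\frac{{\dot\phi}^2}{\kappa c^4},$$ so that $1-\kappa c^2B(\rho_\phi+p_\phi/c^2)=1-B{\dot\phi}^2/c^2$, which is precisely $K$ in the comoving frame of ${\hat g}_{\mu\nu}$.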
Defining: $\rho_\Lambda=2\Lambda/(c^2\kappa)$, $p_\Lambda=-2\Lambda/\kappa$, $\Omega_k=kc^2/(R^2H^2)$, $\Omega_\Lambda=c^4\kappa\rho_\Lambda/(6H^2)$, $\Omega_\phi=c^4\kappa\rho_\phi/(6H^2)$ and $\Omega_M=c^4\kappa\rho_M/(6H^2)$, we can write the Friedmann equation (\[coeq1\]) in the “sum-rule” form: $$\label{Friedmann}
1+K\Omega_k=K\Omega_\Lambda+\Omega_\phi+K^{3/2}\Omega_M.$$ Although this form is similar to that which appears in standard cosmological models, the factor $K$ and $\Omega_\phi$ depend on the scalar field and therefore this is *not* just a simple sum of individual energy contributions. From (\[coeq2\]) we can derive an expression for the deceleration parameter: $$q=-\frac{\ddot{R}}{H^2R} %
=-\frac{\dot{K}}{2HK} +\frac{1}{2}(1+K\Omega_k)
+\frac{c^2\kappa}{4H^2}(p_\phi+K^{1/2}p_M+Kp_\Lambda).$$ The first term can be evaluated by using the scalar field equation (\[scalarco\]) to give $$\frac{\dot{K}}{2HK}=\frac{3(1-K)(1+\kappa B
K^{1/2}p_M)+H^{-1}B\dot{\phi}KV^\prime[\phi]}{1-\kappa c^2 B
K^{3/2}\rho_M}.$$ Note that the denominator is positive-definite wherever the metric (\[FRWmetric3\]) has the correct signature, the first term in the numerator is positive-definite whenever the metric (\[metric1\]) has the correct signature, and the second term in the numerator is positive provided that the universe is expanding $H>0$ and $\partial_t V[\phi]>0$.
Parameterization of the Cosmological Model
==========================================
We shall now consider the present universe modeled by a spatially flat FRW spacetime containing non-relativistic matter and the scalar field $\phi$. We shall assume the standard equation of state with negligible matter pressure $p_M\approx 0$, so that from the conservation equation (\[conserve\]) we get $$\frac{\rho_M}{\rho_{0\,M}}=\biggl(\frac{R_0}{R}\biggr)^3,$$ where $\rho_{0\,M}=\rho_M(t_0)$, $R_0=R(t_0)$ denote the present values of the density and cosmic scale factor with $t_0$ denoting the present value of time $t$. Taking $k=\Lambda=0$, Eq. (\[Friedmann\]) becomes $$\label{Friedmannred}
1=\Omega_{0\,\phi}+K_0^{3/2}\Omega_{0\,M}.$$ We can obtain a fit to the present data by essentially following the method of Perlmutter et al. [@Perlmutter] and choosing for a spatially flat universe the best fit value: $\Omega_{0\,M}=0.28$ and the parameterization: $$\Omega_{0\,\phi}=1-0.28K_0^{3/2}.$$
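As a quick illustration of this parameterization (only $\Omega_{0\,M}=0.28$ is taken from the text; the sample values of $K_0$ are invented):

```python
# Evaluate the flat-universe parameterization Omega_phi0 = 1 - Omega_M0 * K0**1.5
# from the reduced Friedmann sum rule, for illustrative values of K0.
Omega_M0 = 0.28

def omega_phi0(K0):
    return 1.0 - Omega_M0 * K0 ** 1.5

print(omega_phi0(1.0))  # standard-GR limit K0 = 1 gives 0.72
print(omega_phi0(0.5))  # the scalar-field share grows as K0 decreases
```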
The present value of the deceleration parameter can be written $$q_0=-\frac{\dot{K}_0}{2H_0K_0} +\frac{1}{2}
+\frac{c^2\kappa}{4H_0^2}p_{0\,\phi}.$$ We see that in our gravity theory, we can achieve a negative deceleration parameter $q_0$, if the first term on the right-hand side dominates. We stress that the cosmic acceleration with ${\ddot R}(t) > 0$ is [*caused by the dynamics of the scalar field*]{} $\phi$, which acts as a gravitational field component, and the matter pressure $p_M$ can be negligible and the cosmological constant $\Lambda$ can be zero. Galaxy formation at earlier times can be obtained for small and positive $q$, if the solution for $\phi$ and the pressure $p_\phi$ are appropriately chosen. As has been demonstrated elsewhere, the standard horizon and flatness problems can also be resolved by superluminary models [@Vector; @Superlum1; @Albrecht; @Superlum2]. In the present theory, an early universe inflationary expansion phase can be obtained by a suitable choice of initial conditions for $\phi$ and $V(\phi)$. A solution that accomplishes this has been found and will be presented in a forthcoming article.
Experimental Signature for Gravitational Waves
==============================================
Let us define $${\bar c}=c\sqrt{K}=c(1-\frac{B}{c^2}{\dot\phi}^2)^{1/2},\quad
{\bar G}=GK^{3/2}=G(1-\frac{B}{c^2}{\dot\phi}^2)^{3/2},$$ and $$\bar{\Lambda}=\Lambda+\frac{1}{2} %
\Bigl(\frac{1}{2\bar{c}^2}\dot{\phi}^2+V[\phi]\Bigr),$$ so that we can rewrite (\[coeq1\]) as $$\label{eq:VPC}
H^2+\frac{{\bar c}^2k}{R^2} =\frac{1}{3}{\bar c}^2\bar{\Lambda}
+\frac{8\pi {\bar G}}{3}\rho_M.$$ This has the form of the Friedmann equation in Einstein gravity with an “effective” velocity of light ${\bar c}$, gravitational constant ${\bar G}$ and cosmological constant $\bar{\Lambda}$. We have therefore mapped our model to a particular case of the models considered in [@Albrecht]. It is interesting to note that in this particular model $\bar{G}/\bar{c}^3=G/c^3=\textrm{constant}$.
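The invariance of $\bar{G}/\bar{c}^3$ noted above is immediate from the definitions and easy to confirm numerically:

```python
import numpy as np

# cbar = c*sqrt(K) and Gbar = G*K**1.5 vary with the scalar field through K, but
# the combination Gbar / cbar**3 = G / c**3 is K-independent.
c, G = 2.998e8, 6.674e-11        # SI values, for illustration only
for K in np.linspace(0.1, 1.0, 10):
    cbar = c * np.sqrt(K)
    Gbar = G * K ** 1.5
    assert np.isclose(Gbar / cbar**3, G / c**3)
print("Gbar/cbar^3 is constant in K")
```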
These effective time varying constants are merely definitions that allow us to write the Friedmann equation in the standard form. Although $\bar{G}$ may be interpreted as an effective gravitational coupling to matter, we must be careful how we interpret $\bar{c}$ as a time varying speed of light. In fact, in this frame matter field perturbations will propagate at a speed determined by the line element $d\hat{s}^2=c^2
dt^2-R^2(t)\gamma_{ij}dx^idx^j$, whereas metric perturbations will propagate at a speed determined by $ds^2=\bar{c}^2
dt^2-R^2(t)\gamma_{ij}dx^idx^j$; in both cases $R(t)$ is the solution of (\[eq:VPC\]). This means that matter fields will behave as in conventional models, but the gravitational field cannot be understood in terms of a minimal coupling to these fields. Note that this rules out effects like those considered in [@Barrow+Magueijo:1999].
We can now predict that there will be a time lag $\Delta t$ between gravitational waves travelling along the null surface of the gravitational light cone $ds^2=0$ and photons and other matter particles travelling through and on the null surface of the matter light cone $d{\hat s}^2=0$. This could be the basis of an important observational signature for our gravity theory, for, with $K\approx 0$ in the early universe, the time lag $\Delta t$ could be a measurable quantity in gravitational wave experiments. Moreover, the effective gravitational constant ${\bar G}$ would by the same reasoning be smaller in the early universe than its presently measured value $G$.
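A rough sketch of the size of this lag, assuming a constant $K$ over a fixed propagation distance $D$ and ignoring cosmological expansion (both strong simplifications; the numbers are illustrative only):

```python
import math

def time_lag(D, c, K):
    """Arrival-time lag between a gravitational wave (speed c*sqrt(K)) and a
    photon (speed c) over a fixed distance D; expansion effects are ignored."""
    c_bar = c * math.sqrt(K)
    return D / c_bar - D / c

c = 3.0e8
D = 3.0e22                       # roughly 1 Mpc, illustrative
lag = time_lag(D, c, K=0.99)

assert lag > 0.0                 # gravitational waves arrive later when K < 1
# The lag grows as K decreases towards zero in the early universe:
assert time_lag(D, c, K=0.90) > time_lag(D, c, K=0.99)
```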
Conclusions
===========
We have developed a gravity theory based on a bimetric structure. When $B=0$, we regain standard GR. There are three light cones associated with the metrics $g_{\mu\nu}$, ${\hat g}_{\mu\nu}$ and ${\bar g}_{\mu\nu}$. All matter fields except the scalar field $\phi$ propagate in the geometry described by ${\hat g}_{\mu\nu}$ and material test particles and photons will propagate along geodesics determined by this geometry and obey the equivalence principle. In future work the properties and predictions of static spherically symmetric solutions for gravitational phenomena will be investigated. The theory could have significance for the problem of singularities and the physical properties of black holes. There is an interesting experimental signature in our gravity theory, due to the slowing down of gravitational waves emitted in the early universe which could be a detectable physical phenomenon in gravitational wave experiments.
The cosmological model is in agreement with the magnitude-redshift relation of Type Ia supernovae for $\Omega_{0\,M}=0.28$. In a recent paper, Caldwell [@Caldwell] considered the possibility that future observational constraints and more accurate supernovae data may require that quintessence models may need significantly more negative values for the pressure in the equation of state. In our model this would correspond to an increase in the size of $\Omega_{0\,\phi}$ keeping the density positive and $p_M\approx 0$.
[**Acknowledgments**]{} J. W. Moffat was supported by the Natural Sciences and Engineering Research Council of Canada.
[100]{}
S. Perlmutter et al., Nature [**391**]{}, 51 (1998); Ap. J. [**517**]{}, 565 (1999); A. Riess et al., Astron. Journ. [**117**]{}, 207 (1998); B. Schmidt et al., Ap. J. [**507**]{}, 46 (1998); P. Garnavich et al., Ap. J. [**509**]{}, 74 (1998).

See the review, N. A. Bahcall, J. P. Ostriker, S. Perlmutter and J. P. Steinhardt, Science [**284**]{}, 1481 (1999), astro-ph/9906463, and references therein.

See the review, V. Sahni and A. Starobinski, to appear in Int. J. Mod. Phys. D, astro-ph/9904398, and references therein.

R. A. Battye, M. Bucher and D. Spergel, astro-ph/9908047.

L. Parker and A. Raval, gr-qc/9908031 (1999); astro-ph/9908069.

S. Weinberg, Rev. Mod. Phys. [**61**]{}, 1 (1989); S. M. Carroll, W. H. Press and E. L. Turner, Ann. Rev. Astron. Astrophys. [**30**]{}, 499 (1992).

C. H. Brans and R. H. Dicke, Phys. Rev. [**124**]{}, 925 (1961).

O. Bertolami and P. J. Martins, to appear in Phys. Rev. D, gr-qc/9910056. This paper contains references to other Brans-Dicke models and their consequences for cosmic acceleration.

M. A. Clayton and J. W. Moffat, Phys. Letts. B[**460**]{}, 263 (1999), astro-ph/9812481. For a related reference, see: I. T. Drummond, gr-qc/9908058.

J. W. Moffat, Int. J. Mod. Phys. D [**2**]{}, 351 (1993); astro-ph/9811390.

A. Albrecht and J. Magueijo, Phys. Rev. D [**59**]{}, 043516 (1999); J. D. Barrow, Phys. Rev. D [**59**]{}, 043515 (1999). For more recent references, see: T. Harko and M. K. Mak, Class. and Quantum Grav. [**16**]{}, 1 (1999); A. Albrecht, to appear in proceedings of COSMO98, astro-ph/9904379; J. Magueijo and K. Baskerville, to appear in millenium issue of Phil. Trans. of the Royal Society, astro-ph/9905393.

E. W. Kolb and M. S. Turner, *The Early Universe*, Addison-Wesley Pub. Co., California, 1990.

R. R. Caldwell, astro-ph/9908168.

J. D. Barrow and J. Magueijo, astro-ph/9907354; J. K. Webb et al., Phys. Rev. Lett. [**82**]{}, 884 (1999), astro-ph/9803165.
---
abstract: 'It is shown that the ‘arrow of time’ operator, $\hat{M}_F$, recently suggested by Strauss et al., in arXiv:0802.2448v1 \[quant-ph\], is simply related to the sign of the canonical ‘time’ observable, $T$ (apparently first introduced by Holevo). In particular, the monotonic decrease of $\langle \hat{M}_F\rangle$ corresponds to the fact that $\langle \,{\rm sgn~}T\rangle$ increases monotonically with time. This relationship also provides a physical interpretation of the property $\hat{M}_F\leq \hat{1}$. Some further properties and generalisations are pointed out, including to almost-periodic systems.'
author:
- 'Michael J. W. Hall'
title: 'Comment on “An Arrow of Time Operator for Standard Quantum Mechanics” (a sign of the time!)'
---
Introduction
============
Strauss et al. have recently given an interesting example of a ‘Lyapunov operator’ applicable to a wide class of quantum systems [@strauss], i.e., an operator which has a monotonically decreasing expectation value for all initial states. In particular, for any system with a Hamiltonian operator $\hat{H}$ with eigenstates of the form $$\label{hform}
\hat{H}|E,j\rangle = E |E,j\rangle,~~~~~E\in [0,\infty),~~~j=1,2,\dots,d$$ for some fixed $d$ that is independent of $E$, Strauss et al. define the operator $$\label{mf}
\hat{M}_F := \frac{i}{2\pi}\sum_j \int_0^\infty dE \int_0^\infty dE'\, \frac{|E,j\rangle\,\langle E',j|}{ E-E' +i0^+} ,$$ and demonstrate that for any state $|\psi_t\rangle$ the expectation value $\langle \hat{M}_F\rangle_{\psi_t}$ is monotonic decreasing with time [@strauss]. Strauss et al. further give some numerical examples, and determine the eigenstates of $\hat{M}_F$.
The physical origin of $\hat{M}_F$ above is not particularly obvious, and earlier work of Strauss only provides a rather mathematical motivation, related to a Hardy space representation of the Schrödinger equation [@s1; @s2]. Here it will be shown that the operator $\hat{M}_F$ is in fact closely related to the canonical time observable, $T$, apparently first introduced by Holevo [@holevo]. In particular, one has the general relation $$\label{sign}
\langle \hat{M}_F \rangle \equiv \frac{1}{2}\left( 1- \langle \,{\rm sgn}~ T \rangle \right).$$ Hence, [*the Lyapunov operator is closely related to the sign of the canonical time observable*]{}, providing a simple physical interpretation for the former.
It is important to note that the canonical time observable $T$ is a probability operator measure (POM) [@holevo], and hence is described by a set of positive operators which sum to the identity operator [@holevo; @nielsen], i.e., $$T \equiv \{ \hat{T}_t \}, ~~~~~\hat{T}_t\geq0,~~~~~~\int_{-\infty}^\infty dt\,\hat{T}_t = \hat{1} ,$$ with the probability density for a measurement of $T$ to give result $t$ for state $|\psi\rangle$ given by $$\label{pt}
p_T(t|\psi) = \langle \psi|\hat{T}_t|\psi\rangle .$$ Such POM observables are well known to be essential for describing all possible measurements that may be made on a given quantum system, and may always be represented in terms of measurement of a Hermitian operator on an ‘apparatus’ system which has interacted with the system [@holevo; @nielsen]. The main advantage of the POM formalism is that one does not have to describe such apparatus systems explicitly, when considering the possible measurements on a given system, which is particularly useful when determining optimal measurements for extracting information in various scenarios.
In the next section the origin and basic properties of the canonical time observable $T$ are reviewed. In particular, $T$ is the optimal observable for covariantly estimating time translations, i.e., it is the optimal ‘clock’ observable for the system. The relation (\[sign\]) between $\hat{M}_F$ and $T$ is demonstrated in section III, and the corresponding physical interpretation of $\hat{M}_F$ is discussed. Some further properties and possible generalisations are discussed in Sec. IV.
Canonical time observable
=========================
For a quantum system with energy eigenstates as per equation (\[hform\]), define the corresponding ‘time’ kets $$|t,j\rangle := (2\pi)^{-1/2} \int_0^\infty dE\, e^{-iEt} |E,j\rangle~~~~~~~j=1,2,\dots,d.$$ Note that natural units with $\hbar=1$ have been adopted, in keeping with [@strauss]. The corresponding canonical time observable $T$ is then defined as the POM observable $\{\hat{T}_t\}$, with $$\label{tt}
\hat{T}_t := \sum_j |t,j\rangle\,\langle t,j| .$$ It is easily checked that $$\int_{-\infty}^\infty dt\,\hat{T}_t = \sum_j \int_0^{\infty} dE\,|E,j\rangle\,\langle E,j| = \hat{1} ,$$ as required. This ‘canonical’ time observable appears to have first been considered in some detail by Holevo, primarily for the case of a free particle [@holevo].
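The completeness property can be illustrated numerically in a discretised analogue: for a uniformly spaced spectrum $E_n = n$ (with $\hbar = 1$ and degeneracy $d = 1$), the ‘time’ kets live on a period of $2\pi$ and the POM elements integrate to the identity. A minimal sketch, not part of the original analysis:

```python
import numpy as np

N = 8                                    # number of energy levels (illustrative)
E = np.arange(N)                         # uniformly spaced spectrum E_n = n
ts = np.linspace(0.0, 2 * np.pi, 2001)
dt = ts[1] - ts[0]

# Accumulate the resolution of the identity: integral of |t><t| dt over one period.
resolution = np.zeros((N, N), dtype=complex)
for t in ts[:-1]:
    ket = np.exp(-1j * E * t) / np.sqrt(2 * np.pi)   # |t> in the energy basis
    resolution += np.outer(ket, ket.conj()) * dt

# The POM elements sum to the identity operator, as required.
assert np.allclose(resolution, np.eye(N), atol=1e-3)
```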
It is worth noting some properties of $T$ here, to indicate why it is a natural time observable to consider at all. First, observe that if $E$ and $t$ were replaced by momentum and position coordinates $p$ and $x$ in the above definition of $|t,j\rangle$, with the range of integration extended over the whole real line, then one would obtain the usual Fourier relation between conjugate position and momentum kets. Hence, by analogy, $T$ can be said to be conjugate to the energy observable $\hat{H}$. Indeed, the (truncated) Fourier relation between $|E,j\rangle$ and $|t,j\rangle$ immediately implies the entropic uncertainty relation $$\label{ent} L_H L_T \geq \pi e\hbar ,$$ precisely as for the case of position and momentum observables [@bbm], where the ‘ensemble length’ $L_A$ is the natural geometric measure of the spread of observable $A$, given by the exponential of the entropy of $A$ [@ensemble]. Hence, the energy and time uncertainties cannot simultaneously be arbitrarily small.
Second, it follows immediately from Eqs. (\[pt\]) and (\[tt\]) that $$\label{cov}
p_T(t'|\psi_t) = p_T(t'-t|\psi_0) ,$$ i.e., the probability distribution simply translates under time evolution of the system. This time-covariance property is of course expected of any good ‘clock’ observable [@holevo]. Note in particular it implies that $$\label{clock}
\langle T\rangle_{t} := \int_{-\infty}^{\infty}dt'\,t'\,p_T(t'|\psi_t) = \langle T\rangle_{0} + t .$$
Third, Holevo has shown that the canonical time observable provides the best estimate of an unknown time shift of the system, for a particular figure of merit [@holevo]. It may further be shown that $T$ is optimal in the sense that measurement of any [*other*]{} time-covariant observable is equivalent to first subjecting the system to some ‘noise’ process, and then making a measurement of $T$ - see, for example, the analogous property in Ref. [@hallqcm] for optical phase.
Finally, it should be remarked that the ‘time’ kets $|t,j\rangle$ are [*not*]{} mutually orthogonal, due to the semiboundedness of the energy spectrum. Hence, $T$ cannot correspond to some Hermitian operator on the Hilbert space of the system. Indeed, the truncated Fourier transform defining $|t,j\rangle$ implies, via the Paley-Wiener theorem, that $p_T(t|\psi)$ cannot vanish on any non-zero finite interval [@paley], and so the canonical time distribution is always ‘fuzzy’, with support over the entire real axis. Of course, for each real function $f(t)$, the corresponding average value of $f(T)$ follows from (\[pt\]) as $$\label{ft}
\langle f(T)\rangle_\psi = \int_{-\infty}^{\infty}dt\,f(t)\,p_T(t|\psi) = \int_{-\infty}^{\infty}dt\,f(t)\, \langle \psi|\hat{T}_t|\psi\rangle,$$ and hence one can define a corresponding Hermitian operator $$\label{ftop} \widehat{f(T)} := \int_{-\infty}^\infty dt\, f(t)\hat{T}_t$$ satisfying $$\langle f(T)\rangle_\psi = \langle\psi |\widehat{f(T)}|\psi\rangle$$ for all states $\psi$. In general, however, these operators are not simply related algebraically - for example, one does not have $\widehat{(T^2)}=(\widehat{T})^2$. Hence, it is the POM $T$ which is of fundamental significance, rather than any particular Hermitian operator $\widehat{f(T)}$. This has a bearing on the interpretation of the Lyapunov operator in Eq. (\[mf\]), which corresponds to a particular choice of $f(t)$.
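The absence of an algebraic relation between these operators can be exhibited numerically in a discretised analogue, assuming a uniformly spaced spectrum $E_n = n$ (period $2\pi$, $\hbar = 1$, $d = 1$): building $\widehat{T}$ and $\widehat{(T^2)}$ by quadrature shows that $\widehat{(T^2)} \neq (\widehat{T})^2$. This is only an illustrative sketch:

```python
import numpy as np

N = 8
E = np.arange(N)
ts = np.linspace(0.0, 2 * np.pi, 4001)
dt = ts[1] - ts[0]

# Build f(T)-hat = integral of f(t) |t><t| dt for f(t) = t and f(t) = t^2.
T_hat = np.zeros((N, N), dtype=complex)
T2_hat = np.zeros((N, N), dtype=complex)
for t in ts[:-1]:
    ket = np.exp(-1j * E * t) / np.sqrt(2 * np.pi)
    proj = np.outer(ket, ket.conj()) * dt
    T_hat += t * proj
    T2_hat += t**2 * proj

assert np.allclose(T_hat, T_hat.conj().T, atol=1e-6)      # T-hat is Hermitian
assert not np.allclose(T2_hat, T_hat @ T_hat, atol=1e-2)  # (T^2)-hat != (T-hat)^2
```

The mismatch between $\widehat{(T^2)}$ and $(\widehat{T})^2$ reflects the fact that the $|t,j\rangle$ kets are not orthogonal, so the POM is fundamental rather than any single Hermitian operator.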
‘Arrow of time’ vs ‘canonical’ time
===================================
Consider now the observable corresponding to the [*sign*]{} of the canonical time observable, where ${\rm sgn~}t$ is defined to be $-1$, $0$ and $+1$ for $t<0$, $t=0$ and $t>0$ respectively. By definition this observable can only have measured values in $\{-1, 0, 1\}$, and hence one must have $$\label{bound} -1 \leq \langle\,{\rm sgn~}T \rangle_\psi \leq 1 .$$ Moreover, recalling from Eq. (\[cov\]) that the probability distribution of $T$ moves to the ‘right’ as $t$ increases, one expects that the sign of $T$ must increase monotonically on average, just as $T$ itself does as per Eq. (\[clock\]). Indeed, noting that, trivially, ${\rm sgn~}(t'+t)\geq {\rm sgn~}t'$ for $t\geq 0$, one has from Eq. (\[cov\]) that $$\label{mono}
\langle\,{\rm sgn~}T \rangle_{\psi_t} = \int dt' \,{\rm sgn~}t'\, p_T(t'-t|\psi_0) = \int dt' \,{\rm sgn~}(t'+t)\, p_T(t'|\psi_0) \geq \langle \,{\rm sgn~}T \rangle_{\psi_0} .$$ In fact, strict inequality holds for $t>0$, since $p_T(t|\psi)$ cannot vanish on any non-zero finite interval as noted in the previous section.
It is seen from Eqs. (\[clock\]) and (\[mono\]) that both $T$ and ${\rm sgn~}T$ have monotonic increasing expectation values. It then follows immediately that, for example, $-T$ and $-{\rm sgn~}T$ both have monotonic [*decreasing*]{} expectation values for all initial states. Hence, in particular, the corresponding Hermitian operators $-\widehat{T}$ and $-\widehat{{\rm sgn~}T}$ defined via Eq. (\[ftop\]) are ‘Lyapunov’ operators in the sense of Strauss et al. [@strauss]. Clearly there are many more such operators, but what is of interest here is the connection between $T$ and the particular Lyapunov operator $\hat{M}_F$ in Eq. (\[mf\]).
In particular, from Eqs. (\[tt\]) and (\[ftop\]), and using a standard table of Fourier transforms, one has the explicit expression $$\begin{aligned}
\widehat{{\rm sgn~}T } & = & (2\pi)^{-1}\sum_j \int dE\int dE' |E,j\rangle\langle E',j| \int dt\,({\rm sgn~}t)\,e^{-i(E-E')t}\\
& = & (2\pi)^{-1} \sum_j \int dE\int dE' |E,j\rangle\langle E',j|~~ {\rm P.V.}\left[ -2i(E-E')^{-1}\right] \\
& = & 1 - 2 \hat{M}_F ,\end{aligned}$$ where P.V. denotes the principal value, and the last line follows via the definition of $\hat{M}_F$ in Eq. (\[mf\]). One hence obtains the desired simple relation in Eq. (\[sign\]) between $\hat{M}_F$ and ${\rm sgn~}T$.
Note that the property $0\leq \hat{M}_F\leq 1$, and the monotonic decrease of the expectation value of $\hat{M}_F$ with time, both proved by Strauss et al. [@strauss], follow immediately from Eqs. (\[sign\]), (\[bound\]) and (\[mono\]). They are seen to correspond to (i) the property $|\,{\rm sgn~}t|\leq 1$ and (ii) the monotonicity of ${\rm sgn~}t$. Further, the [*strict*]{} monotonic decrease of $\langle \hat{M}_F \rangle$ follows from the property that the canonical time distribution has support over the whole real axis (up to a set of measure zero). However, the eigenfunctions of $\hat{M}_F$, determined by Strauss et al. [@strauss], are seen not to have any particular fundamental significance - they are merely eigenfunctions of an operator corresponding to a particular function of the canonical time observable, where different eigenfunctions would be obtained by choosing a different function of $T$.
Discussion
==========
It has been shown that the particular Lyapunov operator $\hat{M}_F$, investigated by Strauss et al. [@strauss], has a simple relationship to the sign of the canonical time observable $T$, as per Eq. (\[sign\]), and that its main properties can be easily obtained from general properties of $T$. Further, many other Lyapunov operators can be constructed from $T$ - in particular, the operator $\widehat{g(T)}=\int dt\,g(t)\,\hat{T}_t$, for any monotonic decreasing function $g(t)$.
It is worth noting that in practice it is simplest to determine $\langle\hat{M}_F\rangle$, for a given state $|\psi\rangle$, by first calculating $$p_T(t|\psi_0) = \sum_j \left| \langle t,j|\psi_0\rangle \right|^2 ,$$ and then using Eqs. (\[sign\]) and (\[cov\]) to calculate $$\label{simp}
\langle \hat{M}_F\rangle_{\psi_t} = \int_{-\infty}^0 dt'\,p_T(t'-t|\psi_0)= \int_{-\infty}^{-t} dt'\,p_T(t'|\psi_0) .$$ Note it follows that $\langle \hat{M}_F\rangle_{\psi_t}$ is simply the cumulative probability distribution of the canonical time observable, evaluated at $-t$, thus providing a physical interpretation for Eq. (7) in Ref. [@strauss].
For example, for an initially stationary free particle of mass $m$ in one dimension, with Gaussian momentum representation $$\psi_0(p) = e^{-p^2/(4\sigma^2)}$$ up to a normalisation factor, one finds the Gaussian integral $$\langle t,\pm|\psi_0\rangle = \int_0^\infty dp\,p^{1/2} e^{ip^2t/(2m)} e^{-p^2/(4\sigma^2)}$$ up to a normalisation factor (where $p=\pm (2mE)^{1/2}$), thus yielding a distribution of the form $$p_T(t|\psi_0) = N \left[ t^2 + (m/2\sigma^2)^2\right]^{-3/2}$$ for the canonical time observable, for some normalisation constant $N$. One therefore obtains from Eq. (\[simp\]), restoring general units, the explicit result $$\langle \hat{M}_F\rangle_{\psi_t}= 1/2 - (1/2)\,t\, [t^2 + (\hbar m/2\sigma^2)^2]^{-1/2} .$$
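The closed-form result can be checked by direct numerical integration of $p_T$ (with $\hbar = 1$ and an illustrative value of $a = m/2\sigma^2$); the integration routine below is a simple sketch, not part of the original derivation:

```python
import numpy as np

# Free-particle example: p_T(t) = N [t^2 + a^2]^(-3/2), a = m/(2 sigma^2).
a = 1.7                               # illustrative value of m/(2 sigma^2)
N_const = a**2 / 2.0                  # normalisation constant: p_T integrates to 1
p_T = lambda t: N_const * (t**2 + a**2) ** -1.5

def trap(y, x):
    """Simple trapezoidal rule (version-independent alternative to np.trapz)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def cdf(x, n=200001):
    """Integral of p_T from -infinity to x, via the substitution t = a*tan(u)."""
    u = np.linspace(-np.pi / 2 + 1e-9, np.arctan(x / a), n)
    t = a * np.tan(u)
    return trap(p_T(t) * a / np.cos(u) ** 2, u)

assert abs(cdf(1e6) - 1.0) < 1e-6     # p_T is correctly normalised

# <M_F>_t = cdf(-t) reproduces the closed form 1/2 - (1/2) t [t^2 + a^2]^(-1/2):
for t in (-2.0, 0.0, 0.5, 3.0):
    closed_form = 0.5 - 0.5 * t / np.sqrt(t**2 + a**2)
    assert abs(cdf(-t) - closed_form) < 1e-6

# <M_F> decreases strictly monotonically with t, as a Lyapunov average should.
vals = [cdf(-t) for t in np.linspace(-3.0, 3.0, 13)]
assert all(x > y for x, y in zip(vals, vals[1:]))
```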
Note that the case of a free particle does not quite satisfy the assumed spectral condition in Eq. (\[hform\]), as the degeneracy breaks down at $E=0$. While not affecting the above results, this property leads, for example, to a divergence in the variance of the canonical time observable when the wavefunction has a non-zero component corresponding to $E=0$ [@holevo], which can be seen to occur for the free-particle example above. Hence, no Heisenberg type uncertainty relation can be written down in this case. In contrast, the ensemble lengths $L_H$ and $L_T$ in the time-energy uncertainty relation (\[ent\]) are perfectly well-defined for the above example [@ensemble], and can be explicitly calculated using standard tables of integrals (allowing evaluation of how close their product is to the lower bound $\pi e\hbar$).
It is of interest to consider how the canonical time observable may be generalised to quantum systems with energy spectra different to that of Eq. (\[hform\]). This is quite simple for an evenly spaced discrete energy spectrum, such as a harmonic oscillator, where essentially the Fourier transform defining the ‘time’ kets $|t,j\rangle$ is replaced by a discrete Fourier transform, yielding a periodic canonical time observable [@holevo; @helstrom; @hallqcm]. This replacement also applies to any energy spectrum that is a subset of an evenly spaced discrete set. More generally, for an arbitrary continuous or uniform discrete energy spectrum, with a possibly continuous degeneracy which may depend on $E$, one may formally extend the energy spectrum to the form $\{|E,j\rangle\}$ with $j$ ranging over some sufficiently large measurable set $J$, with $|E,j\rangle=0$ for some values of $j$ if the degeneracy is not uniform, and define the corresponding canonical time observable $T\equiv \{\hat{T}_t\}$ as per Eq. (\[tt\]), where the $|t,j\rangle$ are defined as before (with integration replaced by summation for discrete energy spectra).
The cases of mixed continuous and discrete energy spectra, and of discrete non-uniform energy spectra, appear to be more difficult - such systems simply may not make good ‘clocks’ (certainly this would be the case for chaotic classical systems). Note, however, that if the state of the system only has support on the continuous portion of the spectrum, or on a uniformly spaced subset of the discrete portion, then a time observable may be defined as above on the corresponding restricted Hilbert space.
Finally, it is of interest to note, assuming uniform degeneracies for convenience, that for a general discrete energy spectrum $\{ |E_k\rangle\} $ one may be able to at least speak of ‘clocks’ relative to a certain resolution. In particular, one can certainly always define a POM $T(\tau):=\{ \hat{T}_t(\tau);\hat{P}_\tau\}$, with $t$ taking values in $[0,\tau)$, by $$\hat{T}_t(\tau):= \sum_j \hat{N}_\tau^{-1/2} |t,j\rangle\langle t,j| \hat{N}_\tau^{-1/2},$$ where $|t,j\rangle$ is defined as in Sec. II (with integration replaced by summation), $\hat{N}_\tau$ is the positive operator $$\hat{N}_\tau := \sum_j \int_{0}^\tau dt\,|t,j\rangle\langle t,j| ,$$ $\hat{N}_\tau^{-1/2}$ is defined to be $0$ when acting outside the support of $\hat{N}_\tau$, and $\hat{P}_\tau$ is the projection operator onto the zero eigenspace of $\hat{N}_\tau$. The limit $\tau\rightarrow\infty$ is well defined when the energy spectrum is uniformly spaced, yielding a periodic time observable. More generally, the evolution will be almost-periodic, and for a given resolution $\epsilon$ there will be a period $\tau_\epsilon$ for which the system state will evolve arbitrarily close to its (arbitrary) initial state, to within a distance defined by $\epsilon$. Hence, it appears that one may define a time observable for such systems relative to a given resolution parameter $\epsilon$. This would be of interest for further investigation.
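A minimal numerical sketch of this construction, for an illustrative non-degenerate spectrum with incommensurate levels (so that $\hat{N}_\tau$ is full rank and the projector $\hat{P}_\tau$ vanishes):

```python
import numpy as np

# Illustrative non-uniformly spaced, non-degenerate spectrum (hbar = 1).
E = np.array([0.0, 1.0, np.sqrt(2.0), np.e])
tau = 10.0
ts = np.linspace(0.0, tau, 20001)[:-1]
dt = tau / 20000

# Rows of 'kets' are the time kets |t_i> in the energy basis, <E_n|t> = e^{-i E_n t}/sqrt(2 pi).
kets = np.exp(-1j * np.outer(ts, E)) / np.sqrt(2 * np.pi)
N_tau = kets.T @ kets.conj() * dt          # N_tau = integral of |t><t| dt over [0, tau)

w, U = np.linalg.eigh(N_tau)
assert w.min() > 1e-12                     # full rank here, so P_tau = 0
N_inv_sqrt = U @ np.diag(w ** -0.5) @ U.conj().T

# The renormalised elements T_t(tau) = N^(-1/2)|t><t|N^(-1/2) integrate to the identity.
resolution = N_inv_sqrt @ N_tau @ N_inv_sqrt
assert np.allclose(resolution, np.eye(len(E)), atol=1e-8)
```

By construction the renormalisation guarantees a properly normalised POM on $[0,\tau)$ whenever $\hat{N}_\tau$ is invertible; the interesting physics lies in how the resolution degrades as the spectrum departs from uniform spacing.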
[99]{}
Y. Strauss, S. Silman, S. Machnes and L.P. Horwitz, Eprint arXiv:0802.2448v1 (2008).

Y. Strauss, Eprint arXiv:0706.0268 \[math-ph\] (2007).

Y. Strauss, Eprint arXiv:0710.3604 \[math-ph\] (2007).

A.S. Holevo, [*Probabilistic and Statistical Aspects of Quantum Theory*]{} (North-Holland, Amsterdam, 1982).

M.A. Nielsen and I.L. Chuang, [*Quantum Computation and Quantum Information*]{} (Cambridge University Press, Cambridge, U.K., 2000).

I. Bialynicki-Birula and J. Mycielski, Commun. Math. Phys. [**44**]{} 129 (1975).

M.J.W. Hall, Phys. Rev. A [**59**]{} 2602 (1999); Eprint arXiv:physics/9903045.

M.J.W. Hall, J. Mod. Opt. [**40**]{} 809 (1993).

A. Papoulis, [*Signal Analysis*]{} (McGraw-Hill, New York, 1977).

C.W. Helstrom, Int. J. Theoret. Phys. [**11**]{} 357 (1974); J.H. Shapiro and S.R. Shephard, Phys. Rev. A [**43**]{} 3795 (1991).
---
author:
- Pietro Faccioli
- Carlos Lourenço
date: 'Received: 27 July 2018 / Accepted: 3 September 2018'
title: |
The fate of quarkonia in heavy-ion collisions at LHC energies:\
a unified description of the sequential suppression patterns
---
Introduction
============
The theory of strong interactions, quantum chromodynamics (QCD), predicts the existence of a deconfined system of quarks and gluons (quark gluon plasma, QGP), formed when the QCD medium reaches a sufficiently-high temperature. To produce and study this extremely hot state of matter, experiments collide heavy nuclei at the highest possible energies and look for significant modifications in the rates and distributions of the produced particles, with respect to baseline properties measured in proton-proton collisions. One of the proposed signatures of QGP formation is that quarkonium bound states should be produced less and less frequently, as the binding potential between the constituent heavy quark and antiquark is screened by the colour-charge distribution of the surrounding quarks and gluons [@bib:MS]. The distinctive feature of this effect is its “sequentiality”: the suppression of the production of different quarkonium states should happen progressively, as the temperature of the medium increases, following a hierarchy in binding energy [@bib:DPS; @bib:KKS].
Obtaining convincing evidence of this sequential mechanism is a challenge to experiments. Only a small number of the many quarkonium states, in their variety of flavour compositions, masses, binding energies, sizes, angular momenta and spins, have been observed in nucleus-nucleus collisions. For several of them, even the baseline proton-proton production rates and kinematics are poorly known. Furthermore, even for the most copiously produced and best-known states, [${\rm J}/\psi$]{}and [$\Upsilon{\rm (1S)}$]{}, it is not well known how many of them are directly produced. The (large) fraction resulting from decays of heavier states must be precisely evaluated and accounted for, since it is the “mother” particle, with its specific properties, that undergoes the interaction with the QCD medium. Moreover, measurements of different states are not always directly comparable, because of differences in the kinematic phase space windows covered by the detectors or because of inconsistent choices in the binning of the published results. On the theory side, the interpretation of the data in terms of the “signal” sequential suppression effect is obfuscated by a variety of (hypothetical) “background” medium effects, such as quarkonium formation from initially-uncorrelated quarks and antiquarks, break-up interactions with other particles, energy loss in the nuclear matter, modifications of parton distributions inside the nuclei, etc.
We present a data-driven model that considers quarkonium suppression from proton-proton to nucleus-nucleus collisions through a minimal modification of the “universal” (state-independent) patterns recently observed in pp data [@bib:EPJCscaling]. It is based on a simple and single empirical hypothesis: the mechanism of nuclear modification depends *only* on the quarkonium binding energy, with no distinction between the charmonium and bottomonium families, nor between states of different masses and spins. The model is used to fit the nuclear modification factors, $R_{\rm AA}$, measured by CMS at $\sqrt{s} = 5.02$ TeV, in bins of collision centrality defined using the number of participant nucleons, $N_\mathrm{part}$. The result of this global fit, using the most detailed and precise measurements currently available, is that a simple hierarchy in binding energy can explain the observed quarkonium suppression patterns. In other words, the presently available data provide a clear signature of the sequential suppression conjecture, according to which the more strongly-bound states are progressively suppressed as the temperature of the medium exceeds certain thresholds.
Quarkonium suppression patterns {#sec:model}
===============================
![Direct production cross sections of quarkonia in pp collisions (normalized to the extrapolated cross section of a state of mass $2m_Q$), shown as a function of $E_{\mathrm{b}}$ at 7 and 13TeV.[]{data-label="fig:ppEbindingScaling"}](Figure1.pdf){width="0.72\linewidth"}
At the current level of experimental precision, the [$p_{\rm T}$]{}-differential charmonium and bottomonium production cross sections measured in 7 and 13TeV pp collisions at mid-rapidity [@bib:ATLASYnS; @bib:ATLASpsi2S; @bib:ATLASchic; @bib:CMSjpsi; @bib:CMSYnS; @bib:BPH15005] are well reproduced by a simple parametrization reflecting a universal (state-independent) energy-momentum scaling [@bib:EPJCscaling]. In this description, the *shape* of the mass-rescaled transverse momentum ($p_{\rm T}/M$) distribution is independent of the quarkonium state, while its *normalization* (at any chosen $p_{\rm T}/M$ value) shows a clear correlation with the binding energy, calculated as the difference between the open-flavour threshold and the quarkonium mass, $E_{\mathrm{b}} = 2M(D^0) - M(\psi{\rm (nS)})$ or $2M(B^0) - M(\Upsilon{\rm (nS)})$. The observed correlation, shown in Fig. \[fig:ppEbindingScaling\], is seemingly identical for the charmonium and bottomonium families, and for the two collision energies.
The linear correlation seen in the log-log representation of Fig. \[fig:ppEbindingScaling\] suggests that we can faithfully parametrize the $E_{\mathrm{b}}$ dependence of the direct-production cross sections using a power-law function: $$f_{\mathrm{pp}}^{\psi/\Upsilon}(E_{\mathrm{b}}) \equiv
\left( \frac{\sigma^{\mathrm{dir}}(\psi/\Upsilon)}{\sigma(2m_Q)} \right)_{\mathrm{pp}}
= \left( \frac{E_{\mathrm{b}}}{E_0}\right)^{\delta} \quad .
\label{eq:ppEbindingScaling}$$
Here, $\sigma(2m_Q)$ is the extrapolation (at fixed $p_{\rm T}/M$) of the cross section to twice the relevant heavy quark mass, computed from the mass of the lightest quarkonium state: $2m_c = M(\eta_c)$ and $2m_b = M(\eta_b)$ [@bib:PDG]. One single exponent parameter $\delta$ is used for both quarkonium families, so as to minimize the number of free parameters in the model, especially in view of the uncertainties of some experimental measurements. Independent fits at the two collision energies give the values $\delta = 0.63 \pm 0.02$ at 7TeV and $0.63 \pm 0.04$ at 13TeV [@bib:EPJCscaling]. The equation defines a universal “bound-state transition function”, $f_{\mathrm{pp}}^{\psi/\Upsilon}(E_{\mathrm{b}})$, proportional to the probability that the [$Q\overline{Q}$]{}pre-resonance evolves to a given $\psi/\Upsilon$ state. The transition process involves long-distance interactions between the quark and the antiquark, for which no theory calculations exist.
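The power-law fit amounts to a straight-line fit in log-log space. A sketch using synthetic points generated from the power law itself, to show how $\delta$ and $E_0$ are extracted (the $E_{\mathrm{b}}$ values and $E_0$ below are illustrative, not the measured cross sections):

```python
import numpy as np

# Synthetic (E_b, sigma-ratio) points lying exactly on the power law; illustrative only.
E0, delta_true = 0.2, 0.63
E_b = np.array([0.05, 0.20, 0.65, 0.80, 1.10])       # GeV, illustrative values
sigma_ratio = (E_b / E0) ** delta_true

# log(sigma) = delta * log(E_b) - delta * log(E0): fit a line in log-log space.
slope, intercept = np.polyfit(np.log(E_b), np.log(sigma_ratio), 1)

assert abs(slope - delta_true) < 1e-10               # slope recovers delta
assert abs(np.exp(-intercept / slope) - E0) < 1e-6   # intercept recovers E0
```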
Given the current lack of cross section data, we assume that the direct production of $\chi_c$ and $\chi_b$ is described by an analogous bound-state transition function, with identical dependence on the binding energy (same $\delta$ value as for the S-wave states), but an independent $E_0$ value (reflecting the different angular momentum and wave-function shape). Complementing this long-distance scaling with the short-distance production ratio $\sigma(2m_b) / \sigma(2m_c) = (m_b / m_c)^{-6.63 \pm 0.08}$ [@bib:EPJCscaling], we obtain a complete parametrization of the direct production cross sections for all states of the charmonium and bottomonium families. Together with the relevant feed-down branching fractions [@bib:PDG], this provides a full picture of inclusive quarkonium production in pp collisions, including the detailed contributions of the feed-down decays from heavier to lighter states, as reported in Ref. [@bib:EPJCscaling]. This data-driven model is used in the present study as a baseline for the interpretation of the Pb-Pb data.
Our hypothesis on how the pp baseline is modified in Pb-Pb collisions is guided by the experimental observation that the [$\psi{\rm (2S)}$]{}and [${\rm J}/\psi$]{}exhibit very different suppression patterns in Pb-Pb collisions as a function of collision centrality [@bib:CMS_RAA_psi], as shown in Fig. \[fig:RAA\_DRs\_Data\]-top. The $R_{AA}$ of the [$\psi{\rm (2S)}$]{}shows a significant departure from unity already in the most peripheral bin probed by the experiments, corresponding to an average number of colliding nucleons of $N_\mathrm{part} = 22$, and then seems to be almost independent of $N_\mathrm{part}$ up to the most central Pb-Pb collisions. Instead, the $R_{AA}$ of the [${\rm J}/\psi$]{}shows a more gradual decrease from peripheral to central collisions, being relatively close to unity in the most peripheral bins. We can also see in Fig. \[fig:RAA\_DRs\_Data\]-top that the [$\Upsilon{\rm (1S)}$]{}and [$\Upsilon{\rm (2S)}$]{}suppression patterns [@bib:CMS_RAA_upsilon] are very similar to those of the [${\rm J}/\psi$]{}and [$\psi{\rm (2S)}$]{}, respectively.
![Top: Nuclear modification factor as a function of centrality for the (inclusive) [${\rm J}/\psi$]{}, [$\psi{\rm (2S)}$]{}, [$\Upsilon{\rm (1S)}$]{}, [$\Upsilon{\rm (2S)}$]{}and [$\Upsilon{\rm (3S)}$]{}quarkonia, as measured by CMS comparing pp and Pb-Pb data at 5.02TeV [@bib:CMS_RAA_psi; @bib:CMS_RAA_upsilon]. Bottom: Corresponding double ratio of the 2S and 1S nuclear modification factors.[]{data-label="fig:RAA_DRs_Data"}](Figure2.pdf){width="0.75\linewidth"}
The different suppression patterns of the 2S and 1S states can also be appreciated through the double suppression ratios ($R_{AA}(\mathrm{2S}) / R_{AA}(\mathrm{1S})$) measured by CMS, as shown in Fig. \[fig:RAA\_DRs\_Data\]-bottom for the two quarkonium families. The charmonium double ratio is significantly smaller than unity already in the most peripheral collisions, confirming that the [$\psi{\rm (2S)}$]{}is strongly suppressed even in the most “pp-like" nuclear collisions. The [${\rm J}/\psi$]{}and [$\psi{\rm (2S)}$]{}suppression patterns reported by ATLAS [@bib:ATLAS_RAA] show similar features.
The apparent fragility of the [$\psi{\rm (2S)}$]{}can be attributed to its binding energy, 44MeV, very small with respect to both its mass and the open charm mass threshold, $2M(D^0)$. A fluctuation of around 1% in the invariant mass of the pre-resonance [$Q\overline{Q}$]{}or in the threshold energy above which open charm production becomes possible is sufficient to inhibit the formation of this weakly-bound quarkonium state. This concept can be formalized through a minimal modification of the pp production baseline, in which the short-distance partonic production of the [$Q\overline{Q}$]{}state is assumed to remain unchanged, while the long-distance bound-state transition function (Eq. \[eq:ppEbindingScaling\]) becomes $$f_{\mathrm{PbPb}}^{\psi/\Upsilon}(E_{\mathrm{b}} , \epsilon) \equiv
\left( \frac{\sigma^{\mathrm{dir}}(\psi/\Upsilon)} {\sigma(2m_Q)} \right)_{\mathrm{PbPb}} =
\left( \frac{E_{\mathrm{b}}-\epsilon}{E_0} \right)^{\delta}
\label{eq:AAEbindingScaling}$$ for $E_{\mathrm{b}} - \epsilon > 0$ and vanishes for $E_{\mathrm{b}} - \epsilon < 0$. Here, $\epsilon$ represents a shift in the difference between the di-meson threshold energy and the [$Q\overline{Q}$]{}mass. The magnitude of $\epsilon$ measures the strength of the observable nuclear suppression effects: as $\epsilon$ increases it becomes progressively less probable to *form* the bound state and once $\epsilon$ exceeds $E_{\mathrm{b}}$ the [$Q\overline{Q}$]{}pair never binds into a quarkonium state.
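As a minimal numerical sketch of Eq. \[eq:AAEbindingScaling\] (the function name and the $E_0$ normalization are our own conventions; only $\delta = 0.63$ is taken from the text, and $E_0$ cancels in any suppression ratio):

```python
def f_pbpb(E_b, eps, delta=0.63, E0=1.0):
    """Pb-Pb bound-state transition function: the pp power law evaluated
    at the shifted binding energy E_b - eps, vanishing once the shift
    exceeds E_b (the QQbar pair never binds). Energies in GeV."""
    if E_b <= eps:
        return 0.0
    return ((E_b - eps) / E0) ** delta
```

For instance, a shift of 100 MeV is already enough to remove the [$\psi{\rm (2S)}$]{}($E_\mathrm{b} = 44$ MeV) entirely, while leaving the [$\Upsilon{\rm (1S)}$]{}($E_\mathrm{b} = 1099$ MeV) almost untouched.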
This empirical parametrization implicitly reflects different possible physics effects. For example, multiple scattering effects may increase on average the relative momentum and invariant mass of the unbound quark and antiquark [@bib:JWQiu_suppression], pushing such pairs towards or beyond the di-meson threshold. Alternatively, or simultaneously, a screening of the attractive interaction between the quark and the antiquark may disfavour the formation of a bound state, tending to separate the two objects and ultimately leading to two independent hadronizations. Both examples can be described in this model, assuming $\epsilon > 0$.
We indicate with $\langle \epsilon \rangle$ and $\sigma_\epsilon$ the average and width of the $\epsilon$ distribution characterizing a given experimental condition, mainly defined by the collision energy and the centrality-distribution of the events. Correspondingly, we define the event-averaged bound-state transition function $$F_{\mathrm{PbPb}}^{\psi/\Upsilon}( E_{\mathrm{b}}, \langle \epsilon \rangle, \sigma_\epsilon) = \frac{\int_{0}^{ E_{\mathrm{b}}} [(E_{\mathrm{b}}-\epsilon)/E_0]^{\delta} \; G(\epsilon; \langle \epsilon \rangle, \sigma_\epsilon) \; {\ensuremath{{\rm d}}\xspace}\epsilon}{\int_{0}^{E_{\mathrm{b}}} G(\epsilon; \langle \epsilon \rangle, \sigma_\epsilon) \; {\ensuremath{{\rm d}}\xspace}\epsilon} \quad,
\label{eq:transfunctionPbPb_avg}$$ where $\epsilon$ is distributed following a function $G$, assumed, for simplicity, to be Gaussian.
The resulting nuclear suppression ratio for *direct* quarkonium production is calculated in this model as the ratio between the long-distance bound-state transition functions of the Pb-Pb and pp cases: $$R_{AA}^{\mathrm{dir}}( E_{\mathrm{b}}, \langle \epsilon \rangle, \sigma_\epsilon) = F_{\mathrm{PbPb}}^{\psi/\Upsilon}( E_{\mathrm{b}}, \langle \epsilon \rangle, \sigma_\epsilon) \; / \; f_{\mathrm{pp}}^{\psi/\Upsilon}(E_{\mathrm{b}}) \quad .
\label{eq:RAA_dir}$$
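The two equations above can be sketched numerically as follows (a sketch of ours, not the authors' code: the trapezoidal discretization and function names are assumptions; $\delta = 0.63$ and $\sigma_\epsilon = 30$ MeV are values used in the text, and $E_0$ cancels in the ratio):

```python
import math

def F_pbpb(E_b, eps_avg, sigma_eps, delta=0.63, E0=1.0, n=4000):
    """Event-averaged Pb-Pb transition function: the shifted power law
    weighted by a Gaussian epsilon distribution, both numerator and
    denominator integrated on [0, E_b] with a trapezoidal rule."""
    num = den = 0.0
    h = E_b / n
    for i in range(n + 1):
        eps = i * h
        w = 0.5 if i in (0, n) else 1.0
        g = math.exp(-0.5 * ((eps - eps_avg) / sigma_eps) ** 2)
        num += w * ((E_b - eps) / E0) ** delta * g
        den += w * g
    return num / den

def R_AA_dir(E_b, eps_avg, sigma_eps, delta=0.63, E0=1.0):
    """Direct-production suppression: Pb-Pb over pp long-distance
    bound-state transition functions. Energies in GeV."""
    return F_pbpb(E_b, eps_avg, sigma_eps, delta, E0) / (E_b / E0) ** delta
```

With a small shift the ratio stays close to unity for a strongly bound state, while a shift comparable to $E_\mathrm{b}$ drives it onto the strong-suppression plateau discussed below.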
In principle, the energy-shift effect, and therefore $\langle \epsilon \rangle$ and $\sigma_\epsilon$, may depend on the identity of the quarkonium state. However, in line with the seemingly universal properties of quarkonium production in pp collisions, we will work under the hypothesis that the suppression, too, can be parametrized with a “universal” $\epsilon$ distribution, identical for all quarkonia. Throughout the following discussion this will remain our central hypothesis, which we want to test using the [${\rm J}/\psi$]{}, [$\psi{\rm (2S)}$]{}, [$\Upsilon{\rm (1S)}$]{}and [$\Upsilon{\rm (2S)}$]{}measurements.
![Top: Calculated nuclear suppression factor as a function of the quarkonium $E_\mathrm{b}$ for direct (curves) and inclusive (markers) production of S- and P-wave states. Results are shown for $\sigma_\epsilon = 30$MeV and for $\langle \epsilon \rangle = 20$, 250, 550 and 1000MeV, respectively shown in green, brown, red and violet. Bottom: Individual contributions (Eq. \[eq:RAA\_dir\]) of the long-distance bound-state transition functions in pp ($1/f_{\mathrm{pp}}^{\psi/\Upsilon}$, scaled by 0.1) and Pb-Pb ($F_{\mathrm{PbPb}}^{\psi/\Upsilon}$, for the same $\sigma_\epsilon$ and $\langle \epsilon \rangle$ values).[]{data-label="fig:RAA_computation"}](Figure3.pdf){width="0.85\linewidth"}
While $R_{AA}^{\mathrm{dir}}$ is defined continuously for any value of $E_\mathrm{b}$ (including values not corresponding to physical bound states), the nuclear suppression ratio for inclusive quarkonium production depends on the feed-down contributions specific to each observable state, therefore becoming a discrete set of points. We model the observable suppression for the quarkonium state $\psi_k$ as $$\label{eq:RAA_prompt}
\begin{split}
& R_{AA}^{\mathrm{inc}}(\psi_k, \langle \epsilon \rangle, \sigma_\epsilon) = \\
& \frac{
\sum_{j} R_{AA}^{\mathrm{dir}}[E_{\mathrm{b}}(\psi_j),\langle \epsilon \rangle, \sigma_\epsilon] \;
\sigma_{\mathrm{pp}}^{\mathrm{dir}}(\psi_j) \; {\mathcal B}(\psi_j \to \psi_k) }
{\sum_{j} \sigma_{\mathrm{pp}}^{\mathrm{dir}}(\psi_j) \; {\mathcal B}(\psi_j \to \psi_k) } \, ,
\end{split}$$ where, according to the hypothesis that the observed suppression is driven by a state-independent energy-shift effect, $\langle \epsilon \rangle$ and $\sigma_\epsilon$ do not depend on $j$ and $k$. Naturally, ${\mathcal B}(\psi_j \to \psi_k) = 0$ if $m(\psi_k) > m(\psi_j)$ and ${\mathcal B}(\psi_j \to \psi_j) = 1$.
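The feed-down average of Eq. \[eq:RAA\_prompt\] can be illustrated with a short sketch. The cross sections below are invented placeholders (the real ones come from the global parametrization cited in the text), and the structure of the weighted average is the point being shown; the branching fractions are approximate PDG-level values for the charmonium feed-down into the [${\rm J}/\psi$]{}:

```python
def R_AA_inc(target, R_dir, sigma_dir, branching):
    """Inclusive suppression of `target`: the feed-down-weighted average
    of the direct suppressions of all states decaying into it.
    R_dir[j]: direct R_AA of state j; sigma_dir[j]: direct pp cross
    section; branching[(j, target)]: branching fraction j -> target
    (taken as 1 for j == target, 0 if absent)."""
    num = den = 0.0
    for j, sigma in sigma_dir.items():
        B = 1.0 if j == target else branching.get((j, target), 0.0)
        num += R_dir[j] * sigma * B
        den += sigma * B
    return num / den

# Hypothetical direct suppressions and placeholder cross sections:
R_dir = {"J/psi": 0.5, "psi(2S)": 0.1, "chi_c1": 0.2, "chi_c2": 0.15}
sigma = {"J/psi": 1.0, "psi(2S)": 0.5, "chi_c1": 0.6, "chi_c2": 0.6}
B = {("psi(2S)", "J/psi"): 0.61, ("chi_c1", "J/psi"): 0.34,
     ("chi_c2", "J/psi"): 0.19}
```

Because the feed-down states are more weakly bound (hence more suppressed) than the [${\rm J}/\psi$]{}itself, the inclusive value comes out below the direct one, which is the pattern visible in the figures.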
For $\sigma_{\mathrm{pp}}^{\mathrm{dir}}(\psi_j)$ we use the full set of direct production cross sections determined, as mentioned above, in the global parametrization of mid-rapidity 7TeV data [@bib:EPJCscaling]. The choice of a specific energy for the pp reference does not affect $R_{AA}^{\mathrm{inc}}$ as long as the cross section *ratios* (as effectively appearing in Eq. \[eq:RAA\_prompt\]) do not depend on the pp collision energy, a hypothesis fully consistent with the scaling properties discussed in Ref. [@bib:EPJCscaling].
The $E_{\mathrm{b}}$ dependence of $R_{AA}^{\mathrm{dir}}$ is shown in Fig. \[fig:RAA\_computation\]-top as four continuous curves, corresponding to different $\langle \epsilon \rangle$ values, while the corresponding $R_{AA}^{\mathrm{inc}}$ discrete values, accounting for the state-specific feed-down contributions, are shown as sets of points, placed at the binding energies of the physical quarkonium states, reported in Table \[tab:EB\]. In this preliminary illustration we have not shown the effect of the uncertainties affecting $\delta$, the branching ratios, and the cross sections. The $R_{AA}^{\mathrm{dir}}$ continuous curves show a strong suppression up to $E_{\mathrm{b}} \simeq \langle \epsilon \rangle$, followed by a power-law increase (determined by $\delta$). This shape reflects the behavior of the Pb-Pb bound-state transition function, modelled in Eqs. \[eq:AAEbindingScaling\] and \[eq:transfunctionPbPb\_avg\], and shown in Fig. \[fig:RAA\_computation\]-bottom.
[cccc]{} Quarkonium & $E_\mathrm{b}$\[MeV\] & Quarkonium & $E_\mathrm{b}$\[MeV\]\
$\chi_{b2}$(3P) & 36 & $\chi_{c0}$ & 315\
[$\psi{\rm (2S)}$]{}& 44 & $\chi_{b0}$(2P) & 326\
$\chi_{b1}$(3P) & 47 & $\Upsilon$(2S) & 536\
$\chi_{b0}$(3P) & 62 & [${\rm J}/\psi$]{}& 633\
$\chi_{c2}$ & 174 & $\chi_{b2}$(1P) & 647\
$\Upsilon$(3S) & 204 & $\chi_{b1}$(1P) & 666\
$\chi_{c1}$ & 219 & $\chi_{b0}$(1P) & 700\
$\chi_{b2}$(2P) & 290 & $\Upsilon$(1S) & 1099\
$\chi_{b1}$(2P) & 304 & &\
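For quick numerical use, the binding energies of Table \[tab:EB\] can be collected in a small lookup (a convenience of ours; values in MeV):

```python
E_B_MEV = {
    "chi_b2(3P)": 36,  "psi(2S)": 44,     "chi_b1(3P)": 47,
    "chi_b0(3P)": 62,  "chi_c2": 174,     "Y(3S)": 204,
    "chi_c1": 219,     "chi_b2(2P)": 290, "chi_b1(2P)": 304,
    "chi_c0": 315,     "chi_b0(2P)": 326, "Y(2S)": 536,
    "J/psi": 633,      "chi_b2(1P)": 647, "chi_b1(1P)": 666,
    "chi_b0(1P)": 700, "Y(1S)": 1099,
}

# The psi(2S) and the chi_b(3P) triplet sit far below all other states,
# so they are the first to be removed by a growing energy shift.
weakest = sorted(E_B_MEV, key=E_B_MEV.get)[:4]
```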
![Comparison between the measured values [@bib:CMS_RAA_psi; @bib:CMS_RAA_upsilon; @bib:ATLAS_RAA], integrated over the probed centrality range, and the results computed for $\langle \epsilon \rangle = 450$, 500, 550 and 600MeV, respectively shown in green, brown, red and violet.[]{data-label="fig:RAA_examples"}](Figure4.pdf){width="0.85\linewidth"}
As illustrated in Fig. \[fig:RAA\_examples\], the model reproduces the observed hierarchy of centrality-integrated suppression values, with a universal $\langle \epsilon \rangle$ around 550MeV. The importance of accounting for the feed-down contributions is clearly seen in the [${\rm J}/\psi$]{}and [$\Upsilon{\rm (1S)}$]{}cases, where $R_{AA}^{\mathrm{inc}}$ and $R_{AA}^{\mathrm{dir}}$ are particularly different. The ATLAS [${\rm J}/\psi$]{}measurement does not include the most peripheral Pb-Pb collisions, which might explain its slightly smaller $R_{AA}$ value relative to the CMS measurement. While the $\Upsilon$(nS) data points are very well reproduced by the computation made with $\langle \epsilon \rangle = 550$MeV, the measured [${\rm J}/\psi$]{}and [$\psi{\rm (2S)}$]{}$R_{AA}$ are higher than the computed values. This might indicate that charmonium production in Pb-Pb collisions at LHC energies includes an extra contribution beyond those at work in pp collisions. One option is the binding of uncorrelated charm quarks and antiquarks (produced in different nucleon-nucleon interactions), made possible by the very large number of charm quarks produced in these collisions [@bib:Thews; @bib:PBM] and seemingly observed at low [$p_{\rm T}$]{}by ALICE [@bib:ALICE]. This contribution should be significantly reduced in the [$p_{\rm T}$]{}region probed by the CMS and ATLAS data, ${\ensuremath{p_{\rm T}}\xspace}> 6.5$ and 9GeV, respectively, but it could well be that the residual contamination has a visible effect.
It is worth highlighting that the maximum-suppression plateau in the region $E_{\mathrm{b}} < \langle \epsilon \rangle$ becomes wider for larger values of $\langle \epsilon \rangle$, but $R_{AA}$ never vanishes. This effect is qualitatively determined by the non-zero width $\sigma_\epsilon$ of the energy-shift distribution. The use of a distribution for $\epsilon$ (instead of a fixed average value) roughly simulates a realistic mixture of physical events where, for example, quarkonia produced in the scattering of nucleons in the nuclear halos (small $\epsilon$) can survive even in the most central collisions (large $\langle \epsilon \rangle$). The modelling of this “tail” effect and, therefore, of the shape of the plateau, reflects the shape of the $\epsilon$ distribution (here simply assumed to be a symmetric Gaussian). Future measurements of very small suppression factors close to the centre of the plateau (for example, the one of the [$\Upsilon{\rm (3S)}$]{}state, for which only upper limits exist so far) will probe different shape hypotheses. We also note that the increase of $R_{AA}$ as $E_{\mathrm{b}} \to 0$, determining the prediction that the [$\psi{\rm (2S)}$]{}is *less* suppressed than the [$\Upsilon{\rm (3S)}$]{}, is actually a “pp effect”, caused by the presence of $f_{\mathrm{pp}} \propto E_{\mathrm{b}}^\delta$ in the denominator of $R_{AA}$ (Eqs. \[eq:ppEbindingScaling\] and \[eq:RAA\_dir\]), as illustrated in the bottom panel of Fig. \[fig:RAA\_computation\].
Global fit of the $R_{AA}$ data {#sec:fit}
===============================
Having introduced and motivated our model in the previous section, we will now move to a more quantitative analysis of the experimental data. As mentioned before, the CMS and ATLAS Collaborations have reported quarkonium suppression measurements using pp and Pb-Pb collisions at 5.02TeV, in comparable experimental conditions, for four different states, [${\rm J}/\psi$]{}, [$\psi{\rm (2S)}$]{}, [$\Upsilon{\rm (1S)}$]{}and [$\Upsilon{\rm (2S)}$]{}, complemented by upper limits for the [$\Upsilon{\rm (3S)}$]{} [@bib:CMS_RAA_psi; @bib:CMS_RAA_upsilon; @bib:ATLAS_RAA]. We performed a global analysis of 37 $R_{AA}$ values for the [${\rm J}/\psi$]{}, [$\psi{\rm (2S)}$]{}, [$\Upsilon{\rm (1S)}$]{}and [$\Upsilon{\rm (2S)}$]{}states, measured in several $N_\mathrm{part}$ bins, testing the hypothesis of a universal mechanism, the intensity of which is measured by an average shift $\langle \epsilon \rangle$ of the binding energy $E_\mathrm{b}$, common to all states and depending on $N_\mathrm{part}$.
Preliminary fits for individual $N_\mathrm{part}$ bins show a definite correlation between $\langle \epsilon \rangle$ and the logarithm of $N_\mathrm{part}$, while no significant dependence of $\sigma_\epsilon$ on $N_\mathrm{part}$ is seen. Therefore, the global fit of all data is performed assuming a linear dependence of $\langle \epsilon \rangle$ on $\ln(N_\mathrm{part})$. The two coefficients of this dependence and the ($N_\mathrm{part}$-independent) $\sigma_\epsilon$ are the parameters of the fit. While there is, a priori, no reason to assume that $\sigma_\epsilon$ is independent of $N_\mathrm{part}$, the accuracy of the presently available measurements justifies approximating it by a constant, which provides a very good description of the data and makes the global fit more robust than if we had included more free parameters. Other functional forms can be considered once more precise and detailed data become available.
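The assumed centrality dependence can be sketched as follows. The coefficients `a` and `b` below are illustrative placeholders of ours, not the fit results; they are chosen only so that $\langle \epsilon \rangle(400)$ lands near the $\sim$566 MeV value quoted later in the text:

```python
import math

def eps_avg_mev(n_part, a=-33.0, b=100.0):
    """Average binding-energy shift, assumed linear in ln(N_part).
    a, b are placeholder coefficients (MeV), not the fitted values."""
    return a + b * math.log(n_part)
```

With this form the shift grows slowly in the central region, so data points distributed uniformly in $N_\mathrm{part}$ mostly sample a regime of nearly constant $\langle \epsilon \rangle$.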
Two global uncertainties, common to all ATLAS or CMS data points, and four uncertainties correlating the CMS points of each quarkonium state are taken into account by introducing corresponding constrained (nuisance) parameters in the fit. Further nuisance parameters are $\delta = 0.63 \pm 0.04$, parametrizing the power-law dependence on $E_\mathrm{b}$, and several constrained factors used to model the uncertainties in the direct production cross sections and feed-down branching ratios entering Eq. \[eq:RAA\_prompt\]. The latter uncertainties have a negligible impact on the results.
![$R_{AA}^{\mathrm{inc}}$ curves resulting from the global fit, with bands representing the 68, 95 and 99.7% confidence level intervals, compared to the corresponding charmonium (top) and bottomonium (middle) measurements by CMS (closed markers) and ATLAS (open markers), as a function of $N_\mathrm{part}$. The corresponding [$\psi{\rm (2S)}$]{}-to-[${\rm J}/\psi$]{}double ratios (not included as fit constraints) are shown in the bottom panel.[]{data-label="fig:RAA_results_vs_Npart"}](Figure5.pdf){width="0.8\linewidth"}
Figure \[fig:RAA\_results\_vs\_Npart\] shows how the fit results (coloured bands) compare to the measurements, as a function of $N_\mathrm{part}$. The fit has a high quality, with a total $\chi^2$ of 40 for 34 degrees of freedom, and a 22% probability that a higher $\chi^2$ value would be obtained if the data points were statistical fluctuations around perfectly modelled central values. The measured (inclusive) [$\psi{\rm (2S)}$]{}-to-[${\rm J}/\psi$]{}charmonium suppression double ratio, not included in the fit, is well reproduced in its seemingly nonexistent $N_\mathrm{part}$ dependence. A strong deviation from this approximately flat behaviour should become observable through more precise and finely binned measurements in the low-$N_\mathrm{part}$ region.
![Fitted average energy shift $\langle \epsilon \rangle$ as a function of $N_\mathrm{part}$.[]{data-label="fig:results_vs_Npart"}](Figure6.pdf){width="0.8\linewidth"}
Figure \[fig:results\_vs\_Npart\] shows that the fitted $\langle \epsilon \rangle$ parameter grows logarithmically with $N_\mathrm{part}$, reaching $566\pm 15$MeV for $N_\mathrm{part}=400$. The fitted value of $\sigma_\epsilon$ is $30\pm 5$MeV.
{width="0.76\linewidth"}
The results of the analysis for the suppression factors are presented in Fig. \[fig:results\_vs\_Ebinding\], both at the level of the direct production ($R_{AA}^{\mathrm{dir}}$), as continuous grey bands, and after including the effect of the feed-down contributions ($R_{AA}^{\mathrm{inc}}$), as discrete coloured bands, one for each of the five S-wave states. The results are shown as a function of the binding energy and in 11 bins of $N_\mathrm{part}$. The sequence of panels shows the evolution, with increasing $N_\mathrm{part}$, of the $E_\mathrm{b}$ dependence of $R_{AA}^{\mathrm{dir}}$, with its characteristic strong-suppression plateau. The data points are compared to the corresponding $R_{AA}^{\mathrm{inc}}$ predictions, illustrating how the model, with its state-independent formulation, is able to simultaneously reproduce the measurements reported for the several different states, throughout the probed spectrum of collision centrality.
The analysis of the suppression patterns measured by CMS for the four 1S and 2S states in 2.76TeV Pb-Pb collisions [@bib:CMS_RAA_psi_276; @bib:CMS_RAA_upsilon_276] leads to completely analogous results, with slightly lower (and less precisely determined) $\langle \epsilon \rangle$ values.
Discussion
==========
The question addressed in this paper is the following: is there a clear sequential pattern in the correlation between the measured nuclear suppressions and the binding energies of the quarkonium states?
At first sight, there is no evidence of a sequential hierarchy of suppressions: the data indicate that the [$\Upsilon{\rm (2S)}$]{}is as strongly suppressed as the [$\psi{\rm (2S)}$]{}, at least in the most central collisions, despite its $\sim$12 times higher binding energy. Another relevant observation is that the relative suppression of the [$\psi{\rm (2S)}$]{}with respect to the [${\rm J}/\psi$]{}is seemingly independent of the collision centrality in the range covered by the existing measurements ($N_\mathrm{part} > 20$). If the nuclear suppression factors were defined with respect to the most peripheral of the (available) Pb-Pb data ($R_{CP}$), instead of the pp data ($R_{AA}$), one might wrongly infer that the [$\psi{\rm (2S)}$]{}and [${\rm J}/\psi$]{}are equally suppressed.
Our model of how quarkonium production is modified in nucleus-nucleus collisions assumes that, indeed, the nuclear suppression depends sequentially on the binding energy of the quarkonium states. The starting point is the significant correlation patterns observed [@bib:EPJCscaling] in the precise and detailed 7 and 13TeV pp data [@bib:ATLASYnS; @bib:ATLASpsi2S; @bib:ATLASchic; @bib:CMSjpsi; @bib:CMSYnS; @bib:BPH15005]: S-wave quarkonium production in pp collisions can be parametrized assuming that the transition probability from the pre-resonant [$Q\overline{Q}$]{}state to the physically observable bound state is simply proportional to a power law in the binding energy, $P({\ensuremath{Q\overline{Q}}\xspace}\to \mathcal{Q})_\mathrm{pp} \propto E_\mathrm{b}^\delta$, equal for all states. We use this parametrization, extended to the P-wave states, as a reference description of pp quarkonium production, including the detailed structure of the indirect production via feed-down decays. The nuclear suppression effect is then modelled by a minimal modification of the pp reference formula, introducing a threshold mechanism parametrizable with a “penalty” applied to the binding energy: $P({\ensuremath{Q\overline{Q}}\xspace}\to \mathcal{Q})_{AA} \propto (E_\mathrm{b}-\epsilon)^\delta$, with $P = 0$ for $E_\mathrm{b} < \epsilon$, where $\epsilon$ depends on collision energy and centrality but is *identical for all $c\bar{c}$ and $b\bar{b}$ states*.
A global fit to ATLAS and CMS data at 5.02TeV shows that this hypothesis describes quantitatively the observed suppression patterns for the different final states, accounting for the similarity of the [$\Upsilon{\rm (2S)}$]{}and [$\psi{\rm (2S)}$]{}suppressions in central collisions and for the apparent constancy of the relative suppression of the [$\psi{\rm (2S)}$]{}with respect to the [${\rm J}/\psi$]{}. This latter effect is actually explained by the fact that the average binding-energy penalty $\langle \epsilon \rangle$, a “thermometer” of the global nuclear modification effect, increases *logarithmically* with the number of participants $N_\mathrm{part}$, while the data points, distributed uniformly in $N_\mathrm{part}$, mostly populate an asymptotic maximum-suppression region. By adopting a binning in $\ln(N_\mathrm{part})$, future measurements may provide a more complete view of the suppression effect, since the domain of the most peripheral collisions is where the most characteristic $N_\mathrm{part}$ dependence should be observed.
Summary
=======
Despite the lack of experimental information on both the production (in pp collisions) and the suppression (in Pb-Pb collisions) of all the P-wave states, with their crucial and mostly unknown feed-down contributions to the (inclusive) production of the S-wave states, the available measurements already provide a very good starting point for a detailed investigation of how the nuclear effects depend on the [$Q\overline{Q}$]{}binding energy, over the maximally wide physical range $E_\mathrm{b} \simeq 50$–1100MeV.
The conclusion of our study is that the measurements provide evidence of a sequential nuclear suppression, which increasingly penalizes the production of the more weakly bound states, as foreseen [@bib:DPS; @bib:KKS] when the mechanism at play is a screening of the binding forces inside the quark gluon plasma. To a first approximation, no additional nuclear effects are needed to describe the measured suppression patterns, integrated in the [$p_{\rm T}$]{}and $|y|$ domains probed by the CMS and ATLAS data. Such effects might become visible in more detailed multi-dimensional analyses, also including the dependences of the suppression rates on [$p_{\rm T}$]{}and $y$, to be performed once the experimental data become more precise and/or cover an extended phase-space region. For instance, including accurate measurements of all S-wave quarkonia in the forward rapidity region covered by ALICE and LHCb should allow us to disentangle the impact of the nuclear effects on the parton distribution functions.
It is particularly remarkable that a single binding-energy scaling pattern seamlessly reproduces the state dependence of the quarkonium yield in pp collisions and also, with the additional “penalty” effect induced by the medium, in nucleus-nucleus collisions. This result consolidates the success and relevance of the simple, factorizable and universal description of quarkonium production discussed in Refs. [@bib:EPJCscaling; @bib:simplePatterns], further urging investigations on its consistency with non-relativistic QCD [@bib:NRQCD] and on the fundamental origin of such unexpected simplicity, as recently argued in Ref. [@bib:chic].
Acknowledgement: The proton-proton baseline model that we use as a starting point for the study described in this paper was developed in collaboration with M. Araújo and J. Seixas.
[99]{}
T. Matsui and H. Satz, Phys. Lett. **B178** (1986) 416.
S. Digal, P. Petreczky, and H. Satz, Phys. Rev. **D64** (2001) 094015.
F. Karsch, D. Kharzeev and H. Satz, Phys. Lett. **B637** (2006) 75.
P. Faccioli *et al.*, Eur. Phys. J. **C78** (2018) 118.
G. Aad *et al.* (ATLAS Coll.), Phys. Rev. **D87** (2013) 052004.
G. Aad *et al.* (ATLAS Coll.), JHEP **09** (2014) 079.
G. Aad *et al.* (ATLAS Coll.), JHEP **07** (2014) 154.
V. Khachatryan *et al.* (CMS Coll.), Phys. Rev. Lett. **114** (2015) 191802.
V. Khachatryan *et al.* (CMS Coll.), Phys. Lett. **B749** (2015) 14.
A.M. Sirunyan *et al.* (CMS Coll.), Phys. Lett. **B780** (2018) 251.
C. Patrignani *et al.* (Particle Data Group), Chin. Phys. **C40** (2016) 100001.
A.M. Sirunyan *et al.* (CMS Coll.), Eur. Phys. J. **C78** (2018) 509.
A.M. Sirunyan *et al.* (CMS Coll.), arXiv:1805.09215 (sub. to Phys. Lett. B)
ATLAS Coll., ATLAS-CONF-2016-109.
J.-W. Qiu, J.P. Vary, X.-F. Zhang, Phys. Rev. Lett. **88** (2002) 232301.
R.L. Thews, M. Schroedter and J. Rafelski, Phys. Rev. **C63** (2001) 054905.
A. Andronic *et al.*, Phys. Lett. **B652** (2007) 259.
B. Abelev *et al.* (ALICE Coll.), Phys. Lett. **B734** (2014) 314.
V. Khachatryan *et al.* (CMS Coll.), Eur. Phys. J. **C77** (2017) 252.
V. Khachatryan *et al.* (CMS Coll.), Phys. Lett. **B770** (2017) 357.
P. Faccioli *et al.*, Phys. Lett. **B773** (2017) 476.
G. Bodwin, E. Braaten, and P. Lepage, Phys. Rev. **D51** (1995) 1125 \[Erratum: Phys. Rev. **D55** (1997) 5853\].
P. Faccioli *et al.*, Eur. Phys. J. **C78** (2018) 268.
---
abstract: 'Monoaxial chiral magnets can form a noncollinear twisted spin structure called the chiral helimagnetic state. We study magnetic properties of such a chiral helimagnetic state, with emphasis on the effect of itinerant electrons. Modeling a monoaxial chiral helimagnet by a one-dimensional Kondo lattice model with the Dzyaloshinskii–Moriya interaction, we perform a variational calculation to elucidate the stable spin configuration in the ground state. We obtain a chiral helimagnetic state as a candidate for the ground state, whose helical pitch is modulated by the model parameters: the Kondo coupling, the Dzyaloshinskii–Moriya interaction, and electron filling.'
address: 'Department of Applied Physics, The University of Tokyo, Hongo, Bunkyo, Tokyo 113-8656, Japan'
author:
- 'Shun Okumura, Yasuyuki Kato, and Yukitoshi Motome'
title: 'Chiral helimagnetic state in a Kondo lattice model with the Dzyaloshinskii–Moriya interaction'
---
chiral magnet, Kondo lattice model, Dzyaloshinskii–Moriya interaction, variational calculation
Introduction {#sec:introduction}
============
Chirality in the lattice structure plays an important role in magnetism through the spin–orbit coupling which couples the orbital motion of electrons and the spin degree of freedom. It often leads to noncollinear and noncoplanar spin textures, such as a chiral helimagnetic (CHM) state [@Dzyaloshinskii1964; @Kataoka1981; @Togawa2016] and a skyrmion crystal [@Skyrme1962; @Bogdanov1989; @Muhlbauer2009]. Such peculiar spin textures have attracted attention as they may result in unusual magnetoelectric phenomena, e.g., the topological Hall effect [@Ohgushi2000; @Lee2009; @Nueubauer2009] and the spin Hall effect [@Taguchi2009].
An archetypal example of the CHM state is found in CrNb$_3$S$_6$, which is a monoaxial chiral magnet with space group of $P6_{3}22$. At low temperatures, the compound exhibits a CHM order at zero magnetic field, while it turns into a chiral soliton lattice (CSL) in a magnetic field applied perpendicular to the chiral axis [@Moriya1982; @Miyadai1983]. The CHM and CSL states were observed by using Lorentz microscopy with a transmission electron microscope and small-angle electron diffraction [@Togawa2012]. Theoretically, since the pioneering work by Dzyaloshinskii [@Dzyaloshinskii1964; @Dzyaloshinskii1965], the CHM and CSL states have been studied for decades, although most of those studies were limited to localized spin systems, omitting the degree of freedom of itinerant electrons [@Kishine2015; @Shinozaki2016; @Nishikawa2016; @Laliena2016]. Recently, the authors studied this problem by explicitly taking into account the coupling to itinerant electrons [@Okumura2017]. Monte Carlo simulations for an extended Kondo lattice model with the Dzyaloshinskii–Moriya (DM) interaction [@Dzyaloshinskii1958; @Moriya1960] successfully explained a correlation between the twist of CSL and the electrical conduction.
In this paper, we report our theoretical study for the ground state of the extended Kondo lattice model whose finite-temperature properties were studied by the Monte Carlo simulations. Focusing on the zero-field state, we obtain the stable magnetic configuration in the ground state by a variational calculation. We find that the model exhibits a CHM state whose helical pitch depends on the model parameters: the Kondo coupling, the DM interaction, and electron filling. Our results elucidate how the CHM state is stabilized by the competition between the DM interaction and an effective exchange interaction mediated by itinerant electrons.
The organization of this paper is as follows. In Section \[sec:model\_and\_method\], we introduce a ferromagnetic Kondo lattice model with the DM interaction and the method of variational calculations. The results for the optimized magnetic structures are shown in Section \[sec:result\]. Section \[sec:summary\] is devoted to the summary.
Model and method {#sec:model_and_method}
================
Following the previous study [@Okumura2017], we consider a ferromagnetic Kondo lattice model with the DM interaction between the localized spins in one dimension. The Hamiltonian is given by $$\begin{aligned}
H =& -t\sum_{l,\nu}(c^{\dagger}_{l\nu}c^{\;}_{l+1\nu}+\mathrm{h.c.})-\mu\sum_{l,\nu}c^{\dagger}_{l\nu}c^{\;}_{l\nu}\nonumber\\
&-J\sum_{l,\nu,\rho}c^{\dagger}_{l\nu}{\boldsymbol \sigma}_{\nu\rho}c^{\;}_{l\rho}\cdot{\mathbf S}_{l}-{\mathbf D}\cdot\sum_{l}{\mathbf S}_{l}\times{\mathbf S}_{l+1},
\label{eq:H}\end{aligned}$$ where $c_{l\nu}(c^{\dagger}_{l\nu})$ is an annihilation (creation) operator for a $\nu$-spin electron at site $l$ on the one-dimensional chain ($\nu = \uparrow$ or $\downarrow$), $\mu$ is the chemical potential, and ${\mathbf S}_{l}=(S_l^x,S_l^y,S_l^z)$ is a three-component vector with normalized length $|{\mathbf S}_{l}|=1$. We assume the periodic boundary condition. The first term describes the kinetic energy of itinerant electrons; $t$ is a transfer integral between the nearest-neighbor sites. The third term is for the onsite coupling between the itinerant electrons and localized moments; $J$ is a positive coupling constant and $\boldsymbol{\sigma} = (\sigma^x,\sigma^y,\sigma^z)$ are the Pauli matrices. The last term represents the DM interaction with the DM vector ${\mathbf D}=D\hat{z}$, where $D>0$ and $\hat{z}$ is a unit vector along the chain direction. In this study, we focus on the case in the absence of a magnetic field.
We study the ground state of the model in equation (\[eq:H\]) by a variational calculation. As the variational ground state, we assume a helical spin configuration represented by $$\begin{aligned}
{\mathbf S}_{l}=(\cos{Ql},\sin{Ql},0),\end{aligned}$$ where $Q$ is the wave number related with the helical pitch $L$ as $L = 2\pi/Q$ (we set the lattice constant as the length unit). For the spin configuration, we can calculate the energy dispersion of itinerant electrons as $$\begin{aligned}
\varepsilon(k) = &-2t\cos{k}\cos{\frac{Q}{2}}-\mu\nonumber\\
&\pm\sqrt{t^2(1-\cos{2k})(1-\cos{Q})+J^2}.
\label{eq:ep}\end{aligned}$$ Then, we can compute the total energy of the system by $$\begin{aligned}
E = \sum_{-k_F \leq k < k_F}\varepsilon(k)-DN\sin{Q},
\label{eq:E}\end{aligned}$$ where $k_F$ is the Fermi wave number \[$\varepsilon(k_F)=\mu$\] and $N$ is the number of sites. In the variational calculations, we optimize $E$ by varying $Q$ while tuning the chemical potential $\mu$ to set the electron filling $n$ at a particular value. The electron filling $n$ is defined by the average number of electrons per site: $n = \frac1N \sum_{-k_F \leq k < k_F}$, which varies from $0$ to $2$. The optimal value of $Q$, which we denote as $Q^*$, defines the pitch for the most stable helical spin configuration. We set the energy unit $t=1$ and take $N = 10^{4}$ in the following calculations.
Result {#sec:result}
======
![Contour plot of the optimal wave number of the helical state, $Q^{*}/2\pi$, as a function of $J$ and $D$. $Q^{*}/2\pi$ corresponds to the inverse of the optimal helical pitch. We set the electron filling at quarter filling $n = 0.5$.[]{data-label="f1"}](Fig1.eps){width="\columnwidth"}
Figure \[f1\] shows the result for the optimal wave number $Q^{*}$ as a function of $J$ and $D$ at quarter filling $n=0.5$. When $D=0$, an infinitesimal $J$ induces a helimagnetic order (without chirality) with the optimal wave number $Q^*=\pi/2$. This is due to the Ruderman–Kittel–Kasuya–Yosida interaction [@Ruderman1954; @Kasuya1956; @Yosida1957] dictated by twice the Fermi wave number, $2k_F = \pi/2$, at quarter filling $n=0.5$. Upon increasing $J$ at $D=0$, $Q^*$ becomes smaller (namely, the helical pitch becomes longer), and $Q^*$ vanishes for $J\gtrsim 1.6$, as shown in Fig. \[f1\]. This indicates that the lowest-energy state is given by a simple ferromagnetic order without any twist for $J\gtrsim 1.6$ at $D=0$. The ferromagnetic state is stabilized by the effective ferromagnetic interaction mediated by the kinetic motion of itinerant electrons, called the double-exchange interaction [@Zener1951; @Anderson1955].
When $D$ is turned on, $Q^*$ increases as $D$ increases (except for $J=0$). As shown in Fig. \[f1\], however, the increase of $Q^*$ becomes slower for larger $J$. This is because the effective magnetic interaction mediated by itinerant electrons becomes stronger for a larger $J$. In the limit of $D\rightarrow\infty$, the system prefers the helical order with $\pi/2$ rotation of spins between neighboring sites for any value of $J$, and hence, $Q^*$ converges to $\pi/2$.
![$D$ dependence of the optimal wave number $Q^{*}/2\pi$ for several values of $n$. We take $J = 2$. []{data-label="f2"}](Fig2.eps){width="\columnwidth"}
We also investigate the $n$ dependence of the optimal wave number $Q^{*}$. Figure \[f2\] displays $Q^{*}/2\pi$ as a function of $D$ at several electron fillings $n$ for $J=2$. At low filling, for instance at $n=0.1$, $Q^*$ is zero at $D=0$ (ferromagnetic state) and grows gradually as $D$ increases (CHM state). The growth rate is suppressed as $n$ increases, because the effective ferromagnetic interaction is enhanced as the kinetic energy of the itinerant electrons increases. Above $n\simeq 0.25$, however, the growth rate becomes more rapid again as $n$ increases. We confirmed by Monte Carlo simulations at low temperatures that this nonmonotonic behavior of the growth rate of $Q^*$ with respect to $D$ correlates with the stability of the ferromagnetism at $D=0$ (not shown here). With a further increase of $n$, $Q^*$ becomes nonzero at $D=0$ above $n\simeq0.56$. This is due to a phase separation between the ferromagnetic state at $n \simeq 0.56$ and the antiferromagnetic state at half filling $n=1$ [@Yunoki1998].
Summary {#sec:summary}
=======
To summarize, we have studied the ground state of the one-dimensional Kondo lattice model with the DM interaction by a variational calculation. We showed that the competition between the DM interaction and the effective magnetic interaction induced by the kinetic motion of itinerant electrons stabilizes the CHM state. We clarified how the helical pitch depends on the model parameters, i.e., the Kondo coupling $J$, the DM interaction $D$, and the electron filling $n$. In the previous study [@Okumura2017], the authors performed Monte Carlo simulations with the parameters chosen to set the helical pitch $L = 2\pi/Q^* = 10$ at zero magnetic field. From Fig. \[f2\], one finds that this is achieved, e.g., by taking $J=2$, $D=0.035$, and $n=0.5$, which are indeed the parameters used in that study [@Okumura2017].
While we have focused on the magnetic state at zero magnetic field in the present study, the variational calculation can be extended to the ground state in an applied magnetic field. When a magnetic field is applied perpendicular to the chiral axis, a peculiar chiral soliton lattice (CSL) state is expected in this system, as mentioned in Sec. \[sec:introduction\]. It would be interesting to extend our variational study to investigate the optimal magnetic state as the magnetic field is varied. In particular, it is an intriguing issue to clarify the role of itinerant electrons in the CSL state by comparison with previous results for models with localized spins only.
[00]{}
I. E. Dzyaloshinskii, JETP [**19**]{}, 960 (1964).
M. Kataoka and O. Nakanishi, J. Phys. Soc. Jpn. [**50**]{}, 3888 (1981).
For a review, see Y. Togawa, Y. Kousaka, K. Inoue, and J. Kishine, J. Phys. Soc. Jpn. [**85**]{}, 112001 (2016).
T. H. R. Skyrme, Nucl. Phys. [**31**]{}, 556 (1962).
A. N. Bogdanov and D. A. Yablonskii, JETP [**68**]{}, 101 (1989).
S. Mühlbauer, B. Binz, F. Jonietz, C. Pfleiderer, A. Rosch, A. Neubauer, R. Georgii, and P. Böni, Science [**323**]{}, 915 (2009).
K. Ohgushi, S. Murakami, and N. Nagaosa, Phys. Rev. B [**62**]{}, R6065 (2000).
M. Lee, W. Kang, Y. Onose, Y. Tokura, and N. P. Ong, Phys. Rev. Lett. [**102**]{}, 186601 (2009).
A. Neubauer, C. Pfleiderer, B. Binz, A. Rosch, R. Ritz, P. G. Niklowitz, and P. Böni, Phys. Rev. Lett. [**102**]{}, 186602 (2009).
K. Taguchi and G. Tatara, Phys. Rev. B [**79**]{}, 054423 (2009).
T. Moriya and T. Miyadai, Solid State Commun. [**42**]{}, 209 (1982).
T. Miyadai, K. Kikuchi, H. Kondo, S. Sakka, M. Arai, and Y. Ishikawa, J. Phys. Soc. Jpn. [**52**]{}, 1394 (1983).
Y. Togawa, T. Koyama, T. Takayanagi, S. Mori, Y. Kousaka, J. Akimitsu, S. Nishihara, K. Inoue, A. S. Ovchinnikov, and J. Kishine, Phys. Rev. Lett. [**108**]{}, 107202 (2012).
I. E. Dzyaloshinskii, JETP [**20**]{}, 223 (1965).
J. Kishine and A. S. Ovchinnikov, Solid State Phys. [**66**]{}, 1 (2015).
M. Shinozaki, S. Hoshino, Y. Masaki, J. Kishine, and Y. Kato, J. Phys. Soc. Jpn. [**85**]{}, 074710 (2016).
Y. Nishikawa and K. Hukushima, Phys. Rev. B [**94**]{}, 064428 (2016).
V. Laliena, J. Campo, and Y. Kousaka, Phys. Rev. B [**94**]{}, 094439 (2016).
S. Okumura, Y. Kato, and Y. Motome, J. Phys. Soc. Jpn. [**86**]{}, 063701 (2017).
I. Dzyaloshinsky, J. Phys. Chem. Solids [**4**]{}, 241 (1958).
T. Moriya, Phys. Rev. [**120**]{}, 91 (1960).
M. A. Ruderman and C. Kittel, Phys. Rev. [**96**]{}, 99 (1954).
T. Kasuya, Prog. Theor. Phys. [**16**]{}, 45 (1956).
K. Yosida, Phys. Rev. [**106**]{}, 893 (1957).
C. Zener, Phys. Rev. [**82**]{}, 403 (1951).
P. W. Anderson and H. Hasegawa, Phys. Rev. [**100**]{}, 675 (1955).
S. Yunoki, J. Hu, A. L. Malvezzi, A. Moreo, N. Furukawa, and E. Dagotto, Phys. Rev. Lett. [**80**]{}, 845 (1998).
---
abstract: 'We analyze an epidemiological model to evaluate the effectiveness of multiple means of control in malaria-endemic areas. The mathematical model consists of a system of ordinary differential equations based on a multicompartment representation of the host and vector populations. The model takes into account the multiple resting-questing stages undergone by adult female mosquitoes during the period in which they function as disease vectors. We compute the basic reproduction number $\mathcal R_0$, and show that if $\mathcal R_0<1$, the disease-free equilibrium (DFE) is globally asymptotically stable (GAS) on the non-negative orthant. If $\mathcal R_0>1$, the system admits a unique endemic equilibrium (EE) that is GAS. We perform a sensitivity analysis of the dependence of $\mathcal R_0$ and the EE on parameters related to control measures, such as killing effectiveness and bite prevention. Finally, we discuss the implications for a comprehensive, cost-effective strategy for malaria control.'
author:
- |
J. C. Kamgang$^{1, \dag, }$[^1][^2] [^3] , C. P. Thron$^{2}$\
\
$^1$ Department of Mathematics and Computer Sciences,\
ENSAI – University of N’Gaoundéré, P. O. Box 455 N’Gaoundéré (Cameroon)\
$^2$ Department of Sciences and Mathematics,\
Texas A&M University – Central Texas, 76549 USA
title: 'Analysis of Malaria Control Measures’ Effectiveness Using Multi-Stage Vector Model'
---
[*Keywords:*]{} Epidemiological Model, Malaria, Basic Reproduction Number, Lyapunov function, Global Asymptotic Stability, Control Strategies, Sensitivity analysis.
[*2000 MSC:*]{} 34C60, 34D20, 34D23, 92D30
Introduction {#sec.Intro}
============
Malaria is a vector-borne infectious disease that is widespread in tropical regions, including parts of America, Asia and much of Africa. Humans contract malaria following effective bites of infectious female Anopheles mosquitoes during blood feeding, when parasites contained in the mosquito's saliva infiltrate the host's bloodstream. Among these parasites, [*Plasmodium falciparum*]{} is the predominant cause of malaria mortality in Africa. The chain of transmission can be broken through the use of insecticides and anti-malarial drugs, as well as other control strategies.
Malaria accounted for more than 207 million infections and over 627,000 deaths globally in 2012 [@wmr2013]. About 90% of these fatalities occurred in sub-Saharan Africa [@DCollCZim; @wmr2013]. Despite intensive social and medical research and numerous programs to combat malaria, the incidence of malaria across the African continent remains high.
Control measures that have been used against malaria include:
- Outdoor application of larvicides (chemical or biological) [@floore2006mosquito; @walker2007contributions];
- Breeding habitat reduction (e.g. draining standing water) [@keiser2005reducing; @walker2007contributions; @yohannes2005can];
- Outdoor vector control (mosquito fogging, attractive toxic sugar bait (ATSB)) [@zhu2017outdoor];
- Indoor residual spraying (IRS) [@pluess2010indoor; @sharp2007seven];
- Bed nets, including insecticide-treated bed nets (ITN), long-lasting insecticidal nets (LLIN), and untreated bed nets [@world2015world];
- Repellents, including topical repellents, mosquito coils, etc. [@lawrance2004mosquito; @maia2015mosquito];
- Rapid diagnosis and treatment (RDT) [@awoleye2016improving; @shillcutt2008cost];
- Preventative drugs: seasonal malaria chemoprevention (SMC) and intermittent preventative treatment (IPT) [@wilson2011systematic].
Numerous empirical studies have been conducted to assess the cost effectiveness of these different methods [@akhavan1999cost; @utzinger2001efficacy; @worrall2011large]. The authors of [@morel2005cost] examined the cost effectiveness of ITN, IRS, IPT, case management with various drugs, and various combinations of these measures as applied to malaria control in sub-Saharan Africa. For 60 alternative strategies involving these measures, costs (in 2000 international dollars) per disability-adjusted life year (DALY) averted were estimated. The most cost-effective intervention found was the use of artemisinin-based combination treatment (ACT) alone. However, this option achieved only a relatively low number of DALYs averted. The study found that the best way to increase DALYs averted was to introduce other measures: first ITN and IPT, and subsequently IRS to achieve the maximum DALYs averted.
A comprehensive (as of 2010) review of studies on cost effectiveness of ITN, IRS, and IPT interventions may be found in [@white2011costs]. Costs per death averted and per DALY averted are also given, as are costs per treatment. In general results show that costs are highly situation-dependent. Estimates for costs of protection per individual per year are given for numerous studies employing ITN, IRS, or IPT: results are summarized in Table \[tab.tabvd4\].
*Control measure* Mean (Standard Deviation) Median
------------------------------------------- --------------------------- -------- -- --
  Indoor residual spraying (IRS)              6.3 (3.4)                   6.7
Insecticide-treated bed nets (ITN) 2.9 (2.2) 2.2
Intermittent preventative treatment (IPT) 4.3 (5.7) 2.545
: Cost per person per year of protection (2009 US\$) across all studies[]{data-label="tab.tabvd4"}
Reference [@white2011costs] does not consider the impacts of different interventions on overall malaria prevalence. For example, there is evidence that use of ITNs decreases the vector population, which may reduce malaria rates even among non-users [@bayoh2010anopheles; @hawley2003community].
Besides financial costs, significant environmental costs may be incurred by control measures, particularly those that involve insecticides [@keiser2005reducing]. Some insecticides also pose health risks to humans [@ehiri2004mass]. Extensive use of insecticides also tends to increase resistance among vectors, which decreases the insecticides' effectiveness [@menze2016multiple; @ranson2011pyrethroid]. Similarly, the use of preventative drugs tends to produce resistant parasites. Some sources recommend an integrated approach that incorporates several different control strategies [@beier2008integrated; @russell2011increased].
In the field of mathematical epidemiology, numerous models have been proposed with the purpose of understanding various aspects of the disease. The foundational model of Sir Ronald Ross, originally proposed in 1911 [@Ross1911] and extended by Macdonald in 1957 [@Macdonald78], serves as the basis for many mathematical investigations of the epidemiology of malaria. A prominent example is the model of Ngwa and Shu [@NgwaMCM00], which introduces Susceptible ($\mathrm{S}$), Exposed ($\mathrm{E}$), and Infectious ($\mathrm{I}$) classes for both humans and mosquitoes, plus an additional Immune class ($\mathrm{R}$) for humans. This model is extended in the Ph.D. theses of Chitnis [@Chit_08] and Zongo [@Zongo09], both of which also provide comprehensive reviews of the state of the art. Chitnis introduces immigration into the host population, which is a significant effect since hosts migrating from a naive (disease-free) region to a region with high endemicity are especially susceptible to infection. Chitnis also performed a sensitivity analysis of model parameters, and identified the mosquito biting rate and the recovery rate as the two most important parameters to target in controlling malaria [@chitnis2008determining]. Chitnis' conclusion was that the use of insecticide-treated bed nets, coupled with rapid medical treatment of new cases of infection, is the best strategy to combat malaria transmission. Zongo further extends the model by dividing the human population into non-immune and semi-immune subpopulations, which are modeled using $(\mathrm{SEIS})$ and $(\mathrm{SEIRS})$ model types, respectively. In this paper we include all of the above effects, and extend the model by dividing the human population into groups according to the method(s) they use to protect themselves against mosquito bites (as in [@jckam201411]).
This extension improves the applicability of the model because it represents the actual situation in many endemic areas, particularly in poor countries.
Malaria is highly seasonal [@9129525; @10697865]: the highest endemicity typically occurs during rainy seasons, when mosquito density is high due to high humidity and the presence of standing water where mosquitoes can breed. During this period, even people with predispositional immunity to malaria infection are at risk of attaining the critical level of malaria parasites in their bloodstream that could make them sick. In our model, we consider conditions characteristic of a rainy season in a region of high malaria endemicity: typically, such conditions last for a period of three to six months. This paper improves on the model in [@jckam201411] by including the effects of death, birth and migration on each host subpopulation. This inclusion is justified, since malaria is a major cause of death in endemic areas. As in [@jckam201411], we omit Exposed and Removed classes for hosts: the durations of the Exposed and Removed states can be assumed to be negligible due to the high density of anopheles mosquitoes on the one hand, and rapid detection and treatment of infectious individuals on the other. Results for more sophisticated models that include Exposed and/or Removed states are reserved for forthcoming papers.
The paper is organized as follows. Section \[sec.Bdnmdel\] describes our model and gives the corresponding system of differential equations. Section \[dfestabana\] establishes the well-posedness of the model by demonstrating invariance of the set of nonnegative states, as well as boundedness properties of the solution. The equilibria of the system are calculated, and a threshold condition for the stability of the disease-free equilibrium (DFE), based on the basic reproduction number $\mathcal R_0$, is derived. Section \[sec.analysis\] analyzes the stability of the equilibria: Section \[subsec.dfestabanan\] demonstrates the global asymptotic stability (GAS) of the DFE when $\mathcal R_0\leq 1$, and Section \[sec:eeqstana\] establishes the GAS of the endemic equilibrium (EE) when $\mathcal R_0>1$. Sections \[sec.discuss\] and \[sec.conclusion\] provide discussion and conclusions. Finally, the Appendix contains detailed proofs and computations required by the analysis.
Model description and mathematical specification {#sec.Bdnmdel}
================================================
The model assumes an area populated by human hosts and female mosquitoes (disease vectors) under conditions of elevated endemicity of malaria. Mosquitoes in the model are assumed to be anthropophilic, biting only humans: this assumption reflects the situations in which malaria poses the biggest danger [@besansky2004no]. Both the human and mosquito populations are homogeneously mixed, so no spatial effects are present. In the following subsections, we provide a detailed description of the population structure and dynamics of hosts and vectors.
Host population structure and dynamics {#subsec.Assumption2}
--------------------------------------
The human population is divided into $n+1$ groups, indexed by $0,1, \cdots, n$. Group $0$ consists of humans who use no prevention, while the other $n$ groups correspond to users who take various preventative measures such as bed nets (untreated or treated with insecticides of various degrees of toxicity), repellents, prophylactic drugs, indoor insecticides, and so on. At any given time, we let $H_i~(i=0,\,\cdots,\,n)$ denote the size of the $i^{th}$ group, so that the total host population is $H = {{\displaystyle\sum}_{i=0}^n} H_i$.
The dynamics of the $i^{th}$ host group ($i=0,\,\cdots,\,n$) is described by a SIS-based compartment model as shown in Figure \[fig:figMulticomAppli1\]. The incidence of infection for humans in the $i^{th}$ group is given by $am_i{I_q}/{H}$, where $a$ is the average number of bites per mosquito per unit time (the entomological inoculation rate); $I_q$ is the number of Infectious mosquitoes; $m_i$ is the infectivity of mosquitoes relative to the human of the $i^{th}$ group, which is the probability that a bite by an infectious mosquito on a susceptible human of the $i^{th}$ group will transfer infection to the human. The transition rate from Infectious to Susceptible state within the $i^{th}$ group is $\gamma_i$. The force of migration into the $i^{th}$ group is $\Lambda_i$. The incoming $\tilde \nu_i$ and outgoing $\nu_i$ rates in the $i^{th}$ group include the effects of birth and death rates respectively, as well as the effects of hosts moving from one group to another.
Mosquito population structure and dynamics {#subsec.Assumption}
------------------------------------------
The population of disease vectors (adult female anopheles mosquitoes) is characterized by several classes, where each mosquito’s class membership is determined by its own history of past and present activity. Newly-emerged adult mosquitoes initially enter the Susceptible class: the rate of entry (that is, the recruitment rate) is $\Gamma$. Adult mosquitoes alternate between two activities: [*questing*]{} (that is, seeking a host to bite for a blood meal) and [*resting*]{} (to lay down eggs, or to digest a blood meal). In [@jckam201411] it was assumed that all susceptible mosquitoes are in the questing state—the current model improves on this by introducing an additional compartment for susceptible mosquitoes in the resting state that have successfully obtained blood meal(s) and are not yet infected.
At any given instant $t$, questing mosquitoes are equally likely to attempt to feed on any human, regardless of his/her protection method. Thus for any attempted blood meal, the time-dependent probability that the human host belongs to the $i^{th}$ group is $b_i(t)\equiv{H_i(t)}/{H(t)}$. During a blood meal attempt involving a human in the $i^{th}$ group, the mosquito is killed with probability $k_i$, and successfully feeds and enters the resting state with probability $f_i$. Letting $a$ denote the average number of bites per mosquito per unit time (the [*entomological inoculation rate*]{}), it follows that at any given instant $t$, the incidence rate of successful blood meals is $\varpi(t) \equiv {\displaystyle\sum}_{i=0}^nab_i(t)f_i$, while the additive death rate caused by the questing activity of mosquitoes is $d(t)\equiv{\displaystyle\sum}_{i=0}^nab_i(t)k_i$. If we let $I_i$ and $c_i$ denote respectively the number of infectious humans in group $i$ and the probability that a bite of a mosquito on an infectious human in group $i$ will infect the mosquito, then the incidence rate for mosquitoes becoming infected is $\varphi(t) \equiv {{\displaystyle\sum}_{i=0}^n} ac_if_i{I_i(t)}/{H(t)}$.
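The rates $\varpi$, $d$, and $\varphi$ defined above are weighted averages over the host groups. A minimal numerical sketch (all parameter values hypothetical, for illustration only, with two host groups: group 0 unprotected, group 1 protected):

```python
import numpy as np

a = 0.3                            # bite-attempt rate per mosquito
H = np.array([600.0, 400.0])       # group sizes H_i
f = np.array([0.9, 0.4])           # feeding-success probabilities f_i
k = np.array([0.05, 0.3])          # kill probabilities k_i
c = np.array([0.5, 0.5])           # host-to-vector infectivities c_i
I = np.array([120.0, 30.0])        # infectious hosts I_i per group

b = H / H.sum()                           # encounter probabilities b_i
varpi = a * np.sum(b * f)                 # successful-blood-meal rate
d = a * np.sum(b * k)                     # added questing death rate
varphi = a * np.sum(c * f * I) / H.sum()  # vector infection incidence
```

Note how stronger protection (larger $k_1$, smaller $f_1$) simultaneously lowers $\varpi$ and raises $d$, which is the trade-off the model is built to capture.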
Susceptible questing mosquitoes that become infected enter the first exposed resting class ($\mathrm{E}_r^{(\text 1)}$), while those which have not experienced successful infection stay uninfected and enter the susceptible resting class ($\mathrm{S}_r$). Following initial infection, the mosquito must remain alive for a certain period before becoming infectious (this period is called the [*extrinsic incubation period*]{} in the biological and medical literature [@010047862]). During this period, the mosquito goes through a number of resting/questing cycles. In our model, we suppose that a mosquito becomes infectious after a fixed number $\ell$ of resting/questing cycles following initial infection. These successive resting/questing cycles are modeled as a sequence of $2\ell$ Exposed states, denoted by $ \mathrm{E}^{(\text 1)}_q,\mathrm{E}^{(\text 2)}_r, \cdots, \mathrm{E}^{(\ell)}_q,\mathrm{E}^{(\ell+ 1)}_r$. If a mosquito survives through all of these states, it then enters the Infectious class, which is further divided into questing and resting subclasses ($\mathrm{I}_q$ and $\mathrm{I}_r$, respectively). Once a mosquito enters the Infectious class, it remains there for the rest of its life, alternating between questing and resting states.
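Since $f_q$ and $f_r$ (Table \[tab.tabvd2\]) are the survival proportions for questing and resting mosquitoes, the chain of Exposed states above implies that a newly infected mosquito reaches the Infectious class only if it survives $\ell+1$ resting and $\ell$ questing stages. A one-line sketch of this survival probability (our own illustration, treating $f_q$ as constant, which holds only while the host composition is fixed):

```python
def p_reach_infectious(f_q, f_r, ell):
    # Survive the ell+1 resting and ell questing Exposed stages between
    # initial infection (entry into E_r^(1)) and entry into I_q.
    return f_r ** (ell + 1) * f_q ** ell
```

This makes explicit why a longer extrinsic incubation period (larger $\ell$) sharply reduces the fraction of infected mosquitoes that ever become infectious.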
The overall dynamics of the mosquito population is depicted in the multicompartment diagram in Figure \[fig:figMulticomAppli2\]: The fundamental model parameters are summarized in Table \[tab.tabvd\], while derived parameters are in Table \[tab.tabvd2\]. Some of the entries in Table \[tab.tabvd2\] are time-dependent, and are used to simplify the notation in our model and the derived system of differential equations.
Param. Description
--------------- -----------------------------------------------------------------------------------------------------------
Rate parameters that characterize the mosquito population
$a$ Rate of bite attempts for questing vectors
$\delta$ Rate at which resting vectors move to the questing state
$\Gamma$ Recruitment rate of vectors
$\mu$ Natural death rate of vectors
Parameters that characterize the mosquito population’s interaction with the $i^{\textrm{th}}$ host group
$c_i$ Probability that a vector that bites an infected host of the $i^{th}$ host group and survives is infected
$f_i$ Probability that a vector attempting to bite the $i^{th}$ host group survives and obtains a blood meal
$k_i$ Probability that a vector attempting to bite $i^{th}$ host group is killed
Parameters that characterize the $i^{th}$ host group
$m_i$ Probability that a host in group $i$ is infected due to a bite attempt of an infectious vector
$\Lambda_i$ Migration rate (hosts $/$ time)
$\gamma_i$ Transition rate from Infectious to Susceptible state
  $\nu_i$         Outgoing rate from the $i^{th}$ host group
  $\tilde\nu_i$   Incoming rate into the $i^{th}$ host group
: Fundamental model parameters[]{data-label="tab.tabvd"}
[p[1.2cm]{}p[1.7cm]{}p[13.5cm]{}]{} Param. & Formula & Description\
\
$b_i$ & $\frac{H_i}{H}$ &Proportion of hosts in group $i$ at a given time\
$d$ & ${\displaystyle\sum}_{i=0}^nab_ik_i$ &Death rate of vectors due to questing activity\
$\bar{c}_i$ & $1 - c_i$ &Probability that a vector which successfully bites infectious host of the $i^{th}$ group fails to get infected\
$f_q$ & $\dfrac{\varpi}{\hat\mu+\varpi}$ & Questing frequency of mosquitoes (i.e. questing mosquito survival proportion)\
$f_r$ & $\dfrac{\delta}{\mu+\delta}$ & Resting frequency of mosquitoes (i.e. resting mosquito survival proportion)\
$r_i$ & $1 - f_i$ &Repelling effectiveness of measures used for $i^{th}$ host group\
$\hat{\mu}$ & $\mu + d$ & Death rate of questing vectors\
$\varphi$ & ${{\displaystyle\sum}_{i=0}^n} ac_if_i\frac{I_i}{H}$ & Incidence rate of infection for questing susceptible vectors\
$\bar{\varphi}$ & ${{\displaystyle\sum}_{i=0}^n} ab_ic_if_i$ & Maximum incidence rate of infection for questing susceptible vectors\
$\varpi$ & ${\displaystyle\sum}_{i=0}^nab_if_i$ & Incidence rate of successful blood meal for questing vectors\
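The derived quantities in the table satisfy simple identities that are used repeatedly later in the paper, namely $\hat\mu+\varpi=\varpi/f_q$ and $\mu+\delta=\delta/f_r$. A minimal numerical check (rate values hypothetical, for illustration only):

```python
# Hypothetical rate values, for illustration only.
mu, delta = 0.05, 0.5      # natural death rate; resting-to-questing rate
varpi, d = 0.22, 0.04      # successful-meal rate; questing death rate
mu_hat = mu + d            # total death rate of questing vectors

f_q = varpi / (mu_hat + varpi)   # questing survival proportion
f_r = delta / (mu + delta)       # resting survival proportion
```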
Model equations {#subsec.Assumption3}
---------------
The system of ordinary differential equations that characterizes the model is given as follows: $$\left\{
\begin{aligned}
\dot S_i &~~=~~\Lambda_i+\tilde{\nu}_i H_i - \left(\nu_i+a\,m_i \frac{I_q}{H}\right)S_i +\gamma_i I_i & i = 0,\;1,\;\cdots,\; n \\
\dot S_q &~~=~~\Gamma- (\hat\mu+\varpi) S_q + \delta S_r \\
\dot S_r &~~=~~ (\varpi-\varphi) S_q-(\mu+\delta)S_r \\
\dot E^{(1)}_r &~~=~~\varphi S_q-(\mu+\delta) E^{(1)}_r \\
\dot E^{(j)}_q &~~=~~\delta E^{(j)}_r - (\hat\mu+\varpi)E^{(j)}_q & j = 1,\;2,\;\cdots,\; \ell \\
\dot E^{(j+1)}_r &~~=~~\varpi E^{(j)}_q-(\mu+\delta) E^{(j+1)}_r & j = 1,\;2,\;\cdots,\; \ell \\
\dot I_i &~~=~~a\,m_i \frac{I_q}{H}S_i -\left(\gamma_i+\nu_i\right) I_i & i = 0,\;1,\;\cdots,\; n \\
\dot I_q &~~=~~\delta E^{(\ell+ 1)}_r - (\hat\mu+\varpi) I_q +\delta I_r \\
\dot I_r &~~=~~\varpi I_q - (\mu+\delta)I_r
\end{aligned}
\right.
\label{eq:eqbednet_}$$
The system \[eq:eqbednet\_\] together with initial conditions completely specifies the evolution of the multicompartment system shown in Figures \[fig:figMulticomAppli1\] and \[fig:figMulticomAppli2\].
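The system can be transcribed directly into code. The sketch below is our own illustration (the parameter names and state layout are ours, and the values used for testing are hypothetical); it implements the right-hand side of system \[eq:eqbednet\_\], with $H_i=S_i+I_i$ and the state-dependent rates $\varpi$, $\hat\mu$, and $\varphi$ recomputed from the current host composition:

```python
import numpy as np

def rhs(x, p):
    """Right-hand side of the multicompartment system.

    State layout: S_0..S_n, S_q, S_r, E_r^(1)..E_r^(ell+1),
    E_q^(1)..E_q^(ell), I_0..I_n, I_q, I_r.
    """
    n1, ell = len(p["Lam"]), p["ell"]
    S = x[:n1]
    Sq, Sr = x[n1], x[n1 + 1]
    Er = x[n1 + 2 : n1 + 3 + ell]                # ell+1 resting Exposed
    Eq = x[n1 + 3 + ell : n1 + 3 + 2 * ell]      # ell questing Exposed
    off = n1 + 3 + 2 * ell
    I = x[off : off + n1]
    Iq, Ir = x[off + n1], x[off + n1 + 1]

    H = np.sum(S + I)                            # hosts are S or I only
    b = (S + I) / H                              # group proportions b_i
    varpi = p["a"] * np.sum(b * p["f"])          # successful-meal rate
    mu_hat = p["mu"] + p["a"] * np.sum(b * p["k"])
    varphi = p["a"] * np.sum(p["c"] * p["f"] * I) / H

    dS = (p["Lam"] + p["nut"] * (S + I)
          - (p["nu"] + p["a"] * p["m"] * Iq / H) * S + p["gam"] * I)
    dI = p["a"] * p["m"] * (Iq / H) * S - (p["gam"] + p["nu"]) * I
    dSq = p["Gam"] - (mu_hat + varpi) * Sq + p["delta"] * Sr
    dSr = (varpi - varphi) * Sq - (p["mu"] + p["delta"]) * Sr
    dEr = np.empty(ell + 1)
    dEq = np.empty(ell)
    dEr[0] = varphi * Sq - (p["mu"] + p["delta"]) * Er[0]
    for j in range(ell):
        dEq[j] = p["delta"] * Er[j] - (mu_hat + varpi) * Eq[j]
        dEr[j + 1] = varpi * Eq[j] - (p["mu"] + p["delta"]) * Er[j + 1]
    dIq = p["delta"] * Er[ell] - (mu_hat + varpi) * Iq + p["delta"] * Ir
    dIr = varpi * Iq - (p["mu"] + p["delta"]) * Ir
    return np.concatenate([dS, [dSq, dSr], dEr, dEq, dI, [dIq, dIr]])
```

A useful sanity check is that the disease-free state (infected compartments zero, susceptible compartments at their steady-state values) makes this right-hand side vanish.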
Well-posedness, dissipativity and equilibria of the system {#dfestabana}
==========================================================
In this section we demonstrate well-posedness of the model by demonstrating invariance of the set of nonnegative states, as well as boundedness properties of the solution. We also calculate the equilibria of the system, whose stability properties will be examined in the following section.
Positive invariance of the nonnegative cone in state space {#subsec.pinn}
----------------------------------------------------------
The system can be rewritten in matrix form as $$\label{eq:eqmodel}\dot{\mathbf x}=\mathbf A(\mathbf x)\mathbf x + \mathbf b
\Leftrightarrow \left\{\begin{array}{ccc}\dot{\mathbf x}_S & = & \mathbf A_S(\mathbf x)\mathbf x_S + \mathbf A_{S,\,I}(\mathbf x)\mathbf x_I + \mathbf b_S\\
\dot{\mathbf x}_I & = & \;\;\;\;\;\;\mathbf A_I(\mathbf x)\mathbf x_I
\end{array}\right.
\Leftrightarrow \left\{\begin{array}{ccr}\dot{\mathbf x}_S & = & \mathbf A_S(\mathbf x)\, \left(\mathbf x_S - \mathbf x^*_S\right) + \hat{\mathbf A}_{S,\,I}(\mathbf x)\,\mathbf x_I \\
\dot{\mathbf x}_I & = & \;\;\;\;\;\;\mathbf A_I(\mathbf x)\,\mathbf x_I
\end{array}\right. ,$$ where $$\label{eq:eqmodel1}
\mathbf A(\mathbf x)=\begin{pmatrix}
\mathbf A_S(\mathbf x) &\mathbf A_{S,\,I}(\mathbf x)\cr \mathbf 0&\mathbf A_I(\mathbf x)
\end{pmatrix};~~~~ \mathbf b = \left(\mathbf b_S;~\mathbf 0\right)~\text{where}~\mathbf b_{S}=\left(\Lambda_0;~\cdots;~\Lambda_n;~\Gamma;~0\right);~~~~
\mathbf x^*_S \equiv -\mathbf A_S(\mathbf x^*)^{-1} \mathbf b_S.$$ $\mathbf x^*_S$ is a vector whose components are the components of the vector $\mathbf x_S$ of Eq. \[eq:eqmodel\] at the disease-free equilibrium; its computation is carried out in Proposition \[prop:stdst0\].
Equation \[eq:eqmodel\] is defined for values of the state variable $\mathbf x=(\mathbf x_S;\;\mathbf x_I)$ lying in the nonnegative cone of ${\mathbb{R}}^u$ ($u=2n+ 2\ell +7$), which we denote as ${\mathbb{R}}^u_+$. Here $\mathbf x_S$ and $\mathbf x_I$ represent respectively the naive and non-naive components of the system state: explicitly, $$\label{eq:naive_and_non}
\mathbf x_S\equiv \left((S_i)_{0\leq i\leq
n};~~S_q;\;S_r\right) ; \qquad
\mathbf x_I\equiv \left(\;(E_r^{(j)};~~E_q^{(j)})_{1\leq j\leq \ell};~~E_r^{(\ell+ 1)};~~(I_i)_{0\leq i\leq
n};~~I_q;~~I_r\right).$$ This notation is consistent with [@KamSal07], and some results from this previous reference are used in our analysis.
The matrix $\mathbf A_S(\mathbf x) ={\mathrm{diag}}\left(\mathbf A_{S_h}(\mathbf x),\;\mathbf A_{S_v}(\mathbf x)\right)$ with $$\label{eq:A_S_v}
\mathbf A_{S_h}(\mathbf x) = -{\mathrm{diag}}\left(\nu_i-\tilde\nu_i+a\,m_i \frac{I_q}{H}\right)_{0\leq i\leq
n}\hbox{ and }\mathbf A_{S_v}(\mathbf x) = \left(\begin{array}{cc} -(\hat\mu+\varpi) & \delta \\ \varpi-\varphi & -(\mu+\delta) \end{array}\right),$$ $\mathbf A_{S,\,I}(\mathbf x)$ is the $(n+3)\times (n+2(\ell+ 2))$ matrix whose components are all zero except for the entries in the row of $S_i$ and the column of $I_i$ (the $(2\ell +2+i)^{th}$ column, $0\leq i\leq n$), which are given by $\tilde\nu_i+\gamma_i$. The matrix $\mathbf A_I(\mathbf x)$ may be written in block form as $$\mathbf A_I(\mathbf x) = \left(\begin{array}{cc} \mathbf A_{I_E}(\mathbf x)
& \mathbf A_{I_{I,\,E}}(\mathbf x) \\ \mathbf A_{I_{E,\,I}}(\mathbf x) & \mathbf
A_{I_I}(\mathbf x) \end{array}\right),
\label{eq:mati}$$ where the four matrix blocks may be described as follows:
First, the $(2\ell +1)\times(2\ell +1)$ matrix $\mathbf A_{I_E}(\mathbf x)$ expresses the interaction between the exposed components of the system. It is a 2-banded matrix whose diagonal and subdiagonal elements are given by the vectors $\mathbf d_0$ and $\mathbf d_{-1}$ respectively, defined by $$\label{eq.eqdiagelts}\mathbf d_0 = \big(\underbrace{-(\mu+\delta),\; -(\hat\mu+\varpi),\;\cdots,\;-(\mu+\delta),\;-(\hat\mu+\varpi)}_{2\ell\ \text{components}},\;-(\mu+\delta)\big);\qquad \mathbf d_{-1} = \big(\underbrace{\delta,\;\varpi,\;\cdots,\;\delta,\;\varpi}_{2\ell\ \text{components}}\big).$$
Next, the $(2\ell +1)\times (n+3)$ matrix $$\mathbf A_{I_{I,\,E}}(\mathbf x)=a\frac{S_q}{H}\left(\begin{array}{ccccc}\,c_0f_0 &\cdots & c_nf_n&0&0\\0 & \cdots&0&0&0\\\vdots&\cdots&\vdots&\vdots&\vdots\\0 & \cdots&0&0&0\end{array}\right)$$ gives the dependence of the exposed components $E_r^{(j)}\; (j=1,\cdots,\; \ell+ 1)$, $E_q^{(j)},\;\; (j=1,\cdots,\; \ell)$ on the infectious components $I_i (i=0,\;\cdots,\; n)$, $I_r$ and $I_q$.
Next, the $(n+3)\times (2\ell +1)$ matrix $ \mathbf A_{I_{E,\,I}}(\mathbf x)$ gives the dependence of infectious components on exposed components. All entries are zero except the $(n+2,\; 2\ell +1)$ entry, which is equal to $\delta$ reflecting the transition rate of vectors from state $E^{(\ell+ 1)}_r$ to state $I_q$.
Lastly, the $(n+3)\times (n+3)$ matrix $ \mathbf A_{I_I}(\mathbf x)$ may be written in block form as $\mathbf A_{I_I}(\mathbf x)=\left(\begin{array}{cc}
\mathbf A_{I_{I_h}} & \mathbf A_{I_{I_{v,\,h}}} \\ \mathbf 0 & \mathbf A_{I_{I_v}}
\end{array}\right)$, with $$\begin{aligned}
\mathbf A_{I_{I_h}} = -\mathrm{diag} \left(
\nu_i+\gamma_i\right)_{0\leq i\leq n};~~~~
\mathbf A_{I_{I_v}} = \left(\begin{array}{cc}-\left(\hat\mu+\varpi\right) &
\delta\\\varpi & - \left( \mu+\delta\right)\end{array}\right);~~~~
\mathbf A_{I_{I_v,\,h}}&=\frac{a}{H}\left(\begin{array}{cc}
S_0\,m_0 & 0 \\
\vdots & \vdots \\
S_{n}\,m_{n} & 0
\end{array}
\right).\end{aligned}$$ For a given $\mathbf
x\in\mathbb
R^u_+$, the matrices $\mathbf A_S(\mathbf x)$, $\mathbf A_I(\mathbf x)$ and $\mathbf A(\mathbf x)$ are Metzler matrices (see Appendix \[appx:defs\]), and the vector $\mathbf b \in \mathbb R_+^u$.
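The Metzler property asserted here (all off-diagonal entries nonnegative) is easy to check numerically. Below is a minimal helper of our own, applied to the block $\mathbf A_{S_v}$ with hypothetical rates; note that the off-diagonal entry $\varpi-\varphi$ is indeed nonnegative, since $\varphi\le\varpi$ ($c_i\le 1$ and $I_i\le H_i$):

```python
import numpy as np

def is_metzler(A):
    # A square matrix is Metzler iff every off-diagonal entry is >= 0.
    A = np.asarray(A, dtype=float)
    off_diag = A - np.diag(np.diag(A))
    return bool(np.all(off_diag >= 0.0))

# A_{S_v} with illustrative rates: mu_hat + varpi = 0.31, delta = 0.5,
# varpi - varphi = 0.2, mu + delta = 0.55.
A_Sv = np.array([[-0.31, 0.5],
                 [0.2, -0.55]])
```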
The following proposition establishes that system \[eq:eqmodel\] is epidemiologically well-posed.
\[prop:invrnce\] The nonnegative cone $\mathbb R^u_+$ is positively invariant for the system \[eq:eqmodel\].
*Proof:* The proof is similar to the standard proof that systems determined by Metzler matrices leave the nonnegative cone invariant. One checks directly that if $\mathbf x$ is on the boundary of $\mathbb R_+^u$, then every component of $\dot{\mathbf x}$ corresponding to a vanishing component of $\mathbf x$ is nonnegative; hence the trajectories never leave $\mathbb R_+^u$. [$\Box$ ]{}
Disease-free equilibrium (DFE) of the system
--------------------------------------------
The system admits two steady states. The trivial steady state, the DFE, is established in Proposition \[prop:stdst0\] below, while the nontrivial steady state will be established in Proposition \[prop:stdst1\] after some necessary preliminaries. Before characterizing the DFE, we first introduce some useful notation. The [*questing frequency*]{} $f_q$ and the [*resting frequency*]{} $f_r$ are defined respectively as: $$f_q\equiv \frac{\varpi}{\hat{\mu}+\varpi}; \qquad f_r\equiv \frac{\delta}{\mu+\delta}.$$ $f_q$ may be interpreted as the proportion of questing mosquitoes that pass on to the resting state, while $f_r$ is conversely the proportion of resting mosquitoes that pass on to the questing state. In [@jckam201411] these parameters are constants of the model, but in the current model $f_q$ depends on the system state. The value of $f_q$ at the DFE is denoted by $f^*_q$. Throughout the paper we shall frequently make use of the replacements: $$\label{eq:frfq}
\hat\mu+\varpi=\frac{\varpi}{f_q}; \qquad \mu+\delta=
\frac{\delta}{f_r}.$$ This new notation enables us to give a simple expression for the DFE and to shorten expressions in many other computations throughout this paper.
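The replacement identities can be spot-checked numerically; the rate values below are arbitrary placeholders, chosen only to exercise the formulas:

```python
# Arbitrary (non-calibrated) rate values.
mu_hat, varpi = 0.3, 0.7   # questing-stage exit rates
mu, delta = 0.1, 0.5       # resting-stage exit rates

f_q = varpi / (mu_hat + varpi)   # questing frequency
f_r = delta / (mu + delta)       # resting frequency

# The replacements used throughout the paper:
assert abs((mu_hat + varpi) - varpi / f_q) < 1e-12
assert abs((mu + delta) - delta / f_r) < 1e-12
```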
\[prop:stdst0\] The system admits a trivial equilibrium (the disease-free equilibrium (DFE)) given by $\mathbf x^*=\left(\mathbf x_S^*;\;\mathbf x_I^*\right)\in\mathbb R^u_+$, with $\mathbf x_I^*=\mathbf 0\in\mathbb R^{u-n-3}$; $\mathbf x_S^* =\left(\mathbf x_{S_h}^*;\;\mathbf x_{S_v}^*\right)$, where $$\label{eq:eqexpdfe}\mathbf x_{S_h}^* = \left(\frac{\Lambda_0}{\tilde \nu_0-\nu_0};\,\frac{\Lambda_1}{\tilde \nu_1-\nu_1};\,\cdots;\,\frac{\Lambda_n}{\tilde \nu_n-\nu_n}\right)\;\;\hbox{ and }\;\; \mathbf x_{S_v}^* = (S_q^*,S_r^*) =\left( \frac{f^*_q\Gamma}{\varpi^*(1-f^*_qf_r)};\;\frac{f_rf^*_q\Gamma}{\delta(1-f^*_qf_r)}\right).$$
[*Proof* ]{}: The DFE corresponds to a state $\mathbf x^*$ in which all components representing non-naive classes are equal to zero: that is, $\mathbf x^* = \left(\mathbf x^*_S;\;\mathbf x^*_I\right)$ with $\mathbf x^*_I\equiv 0$. The steady-state equation for the system with the constraint $\mathbf x_I\equiv0$ is $$\label{eq:Asys}
\mathbf
A_S(\mathbf x_S\,;\mathbf 0)\,.\,\mathbf x_S + \mathbf b_S=\mathbf 0\Leftrightarrow \left\{\begin{array}{l}
\mathbf A_{S_h}(\mathbf x_S\,;\mathbf 0)\,.\,\mathbf x_{S_h} + \mathbf b_{S_h}=\mathbf 0\cr \mathbf A_{S_v}(\mathbf x_S\,;\mathbf 0)\,.\,\mathbf x_{S_v} + \mathbf b_{S_v}=\mathbf 0
\end{array}\right. .$$ This system may be solved in two stages, since the subsystem $\mathbf A_{S_h}(\mathbf x_S\,;\mathbf 0)\,.\,\mathbf x_{S_h} + \mathbf b_{S_h}=\mathbf 0$ is uncoupled. The solution of this subsystem is $\mathbf x^*_{S_h} = \left(\frac{\Lambda_0}{\tilde \nu_0-\nu_0};\,\frac{\Lambda_1}{\tilde \nu_1-\nu_1};\,\cdots;\,\frac{\Lambda_n}{\tilde \nu_n-\nu_n}\right)$. Using this solution in system , we obtain the equation $$\mathbf A_{S_v}(\mathbf x_{S_h}\,;\mathbf x_{S_v}\,;\mathbf 0)\,.\,\mathbf x_{S_v} + \mathbf b_{S_v}=\mathbf 0
\implies
\mathbf x^*_{S_v}=-{\mathbf A_{S_v}(\mathbf x^*_{S_h}\,;\mathbf x_{S_v}\,;\mathbf 0)}^{-1}\,.\, \mathbf b_{S_v}.$$ Using expression for $\mathbf A_{S_v}$ (with $\varphi=0$ at DFE), and recalling that $\mathbf b_{S_v} = (\Gamma ; 0)$, we find the solution $$\label{eq:xsv}
\mathbf x^*_{S_v} = (S_q^*,S_r^*)= \left(\dfrac{\Gamma(\mu + \delta)}{(\hat\mu+\varpi)(\mu + \delta) - \varpi \delta};\;\dfrac{\Gamma\varpi}{(\hat\mu+\varpi)(\mu + \delta) - \varpi \delta}\right) = \left(\dfrac{f^*_q\Gamma}{\varpi^*(1-f^*_qf_r)};\;\dfrac{f_rf^*_q\Gamma}{\delta(1-f^*_qf_r)}\right),$$ where we have used the replacements to obtain the final equality in .
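As a sanity check on this computation, the closed-form DFE mosquito components can be compared against a direct linear solve of $\mathbf A_{S_v}\,\mathbf x + \mathbf b_{S_v} = \mathbf 0$. All parameter values in the sketch below are arbitrary placeholders:

```python
import numpy as np

# Arbitrary placeholder rates (not calibrated to any data set).
mu_hat, varpi, mu, delta, Gamma = 0.3, 0.7, 0.1, 0.5, 100.0

# Susceptible-mosquito block at the DFE (varphi = 0):
#   S_q' = Gamma - (mu_hat + varpi) S_q + delta S_r
#   S_r' = varpi S_q - (mu + delta) S_r
A_Sv = np.array([[-(mu_hat + varpi), delta],
                 [varpi, -(mu + delta)]])
b_Sv = np.array([Gamma, 0.0])
x_direct = np.linalg.solve(A_Sv, -b_Sv)       # -A^{-1} b

# Closed form in terms of the questing/resting frequencies.
f_q = varpi / (mu_hat + varpi)
f_r = delta / (mu + delta)
S_q_star = f_q * Gamma / (varpi * (1 - f_q * f_r))
S_r_star = f_r * f_q * Gamma / (delta * (1 - f_q * f_r))

assert np.allclose(x_direct, [S_q_star, S_r_star])
```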
As a corollary we have
\[prop:stbsysred\] The system $$\label{eq.sysred}
\dot{\mathbf x} = \mathbf A_S(\mathbf x^*)\,.\,\left(\mathbf x-\mathbf x^*_S\right)$$ is GAS at $\mathbf x^*_S$ on $\mathbb R_+^{n+3}$.
The proof is straightforward, based on Proposition \[prop:blockdecomposition\].
Boundedness and dissipativity of the trajectories
-------------------------------------------------
We have the following proposition.
\[prop:dissip\] The simplex $$\Omega =\left\{\mathbf x\in\mathbb R^u_+\;\left|\;\left(S_q\leq S_q^* \right)\;\wedge \;\left(S_r\leq S_r^*\right)\;\wedge\;\left(H_i= H^*_i,\;\; 0\leq i\leq
n \right) \wedge\; \left( M_I\leq\frac{\bar \varphi}{\mu}S_q^* \right) \; \right.
\right\}\label{eq.simplex}$$ is a compact forward-invariant and absorbing set for the system , where $$M_I\equiv {{\displaystyle\sum}_{j=1}^\ell} \left(E_q^{(j)} + E_r^{(j)}\right) + E_r^{(\ell+ 1)} + I_q + I_r~;~~~~\bar \varphi\equiv a{{\displaystyle\sum}_{i=0}^n}b^*_ic_if_i,$$ and where $(S_q^*, S_r^*)$ are the DFE components for naive questing and resting mosquitoes respectively (given in ).
Note that $M_I$ is the overall population of non-naive mosquitoes; $\bar\varphi$ is the maximum incidence rate of infection for questing susceptible mosquitoes; and $b_i^* = H_i^*/H^*$.
The proof is given in Appendix \[ssec.supdissip\]. As a result of Proposition \[prop:dissip\], we may limit our study to the simplex specified in eq. .
Computation of the threshold condition {#sec.algo}
--------------------------------------
The following proposition gives a formula for the basic reproduction number $\mathcal R_0$, and shows that the condition $\mathcal R_0<1$ is a necessary and sufficient condition for local stability of the DFE.
\[prop:basicrepn\] The basic reproduction number $\mathcal R_0$ of the system is $$\label{eq:R0} \mathcal
R_0 \equiv \frac{(f^*_qf_r)^{\ell+ 1}}{(1-f^*_qf_r)^2}\dfrac{f^*_q}{{\varpi^*}^2}\dfrac{\Gamma}{H^*}a^2\sum_{i=0}^n\frac{b^*_ic_if_im_i}{\gamma_i+\nu_i},$$ where $f^*_q$ and $f_r$ are the questing and resting frequencies respectively of mosquitoes at the DFE, $H^*\equiv {{\displaystyle\sum}_{i=0}^n}\Lambda_i/({\nu_i-\tilde \nu_i})\equiv {{\displaystyle\sum}_{i=0}^n}H^*_i$ is the total host population at the DFE, and $b^*_i\equiv {H^*_i}/{H^*}$ is the proportion of hosts in group $i$ at the DFE. Then $\mathcal R_0<1$ is a necessary and sufficient condition for local stability of the DFE.
The proof of Proposition \[prop:basicrepn\] is given in Appendix \[ssec.supprrprepn\].
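To make the formula concrete, here is a small numerical sketch evaluating the expression for $\mathcal R_0$ for a single host group ($n=0$); every parameter value is an arbitrary placeholder, chosen only to exercise the formula:

```python
# Arbitrary placeholder values for a single host group (n = 0).
ell = 2                       # number of latency stages
f_q_star, f_r = 0.7, 0.8      # questing/resting frequencies at the DFE
varpi_star = 0.5              # questing exit rate at the DFE
Gamma, H_star = 50.0, 1000.0  # mosquito recruitment, total host population
a = 0.3                       # contact-rate parameter
b0, c0, f0, m0 = 1.0, 0.5, 0.9, 1.0
gamma0, nu0 = 0.05, 0.01

fqfr = f_q_star * f_r
R0 = ((fqfr ** (ell + 1) / (1 - fqfr) ** 2)
      * (f_q_star / varpi_star ** 2)
      * (Gamma / H_star)
      * a ** 2 * b0 * c0 * f0 * m0 / (gamma0 + nu0))
```

With these placeholder values $\mathcal R_0 \approx 0.086 < 1$, so the DFE would be locally stable.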
Endemic equilibrium (EE) of the System
---------------------------------------
For our system, there is a unique endemic equilibrium that is specified by the following proposition.
\[prop:stdst1\] System admits a unique endemic equilibrium (EE) $\mathbf x^\star\in\left(\mathbb R_{>0}\right)^u$ with components given by
$$\label{eq:eqexpMinfctstateb}
\begin{array}{l}
S_q^\star=\frac{f^*_q}{\varpi^*(1-f^*_qf_r)}\Gamma - \frac{I_q^\star}{(f^*_qf_r)^\ell};\qquad S_r^\star=\frac{\delta f^*_q}{f_r(1-f^*_qf_r)}\Gamma - \frac{\varpi^*}{\delta}\frac{(1-(f^*_qf_r)^{\ell+ 1}+f^*_qf_r)I_q^\star}{ f^*_q}; \\
E_q^{(j)\star}=\dfrac{1-f^*_qf_r }{(f^*_qf_r)^{\ell+ 1-j}}I_q^\star;
\qquad E_r^{(j)\star}=\frac{\varpi^* }{\delta }\dfrac{1-f^*_qf_r}{ f^*_q(f^*_qf_r)^{\ell+ 1-j}} I_q^\star\qquad(1\leq j\leq \ell);
\\
I_r^\star=\frac{\varpi^* }{\delta }f_rI_q^\star; \qquad E_r^{(\ell+ 1)\star}=\frac{\varpi^* }{\delta }\frac{1-f^*_qf_r}{f^*_q} I_q^\star;\\
S_i^\star=\dfrac{(\nu_i+\gamma_i) H^* }{ (\nu_i + \gamma_i) H^* +a\,m_i I_q^\star} H^*_i; \qquad I_i^\star = \dfrac{a\,m_iI_q^\star }{(\nu_i+\gamma_i) H^* +a\,m_i I_q^\star} H^*_i ~~ (0\leq i\leq n),
\end{array}$$
where $I_q^\star\in~\big]0,\;\bar{I}_q^\star\big[$ is the unique finite root of the equation $$\label{eq:eqeqIQ}{\displaystyle\sum}_{i=0}^n \frac{a^2\,b^*_im_ic_if_i}{(\nu_i+\gamma_i)H^*+am_ix}
=\dfrac{{\varpi^*}^2(1-f^*_qf_r)^2 }{f^*_q(f^*_qf_r)^{\ell+ 1} \Gamma-\varpi^* f^*_qf_r (1-f^*_qf_r)x},$$ where $$\label{eq:Iq-bar}
\bar{I}_q^\star \equiv \frac{\Gamma}{\varpi^*}\frac{f^*_q(f^*_qf_r)^{\ell+ 1}}{1-f^*_qf_r}.$$
The proof of Proposition \[prop:stdst1\] is given in Appendix \[subsec.sptopropee\].
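For concreteness, $I_q^\star$ can be located numerically by bisection on $\big]0,\;\bar I_q^\star\big[$, since the left side of the root equation is decreasing in $x$ while the right side is positive and increasing there. The sketch below uses a single host group ($n=0$) and arbitrary placeholder values chosen so that $\mathcal R_0>1$:

```python
# Arbitrary single-host placeholder values with R0 > 1 (not calibrated).
ell = 2
f_q, f_r = 0.7, 0.8
varpi = 0.5
Gamma, H = 50.0, 1000.0
a, b0, c0, f0, m0 = 1.2, 1.0, 0.5, 0.9, 1.0
gamma0, nu0 = 0.05, 0.01

Iq_bar = (Gamma / varpi) * f_q * (f_q * f_r) ** (ell + 1) / (1 - f_q * f_r)

def F(x):
    """Left side minus right side of the root equation for I_q-star."""
    lhs = a**2 * b0 * m0 * c0 * f0 / ((nu0 + gamma0) * H + a * m0 * x)
    rhs = (varpi**2 * (1 - f_q * f_r) ** 2
           / (f_q * (f_q * f_r) ** (ell + 1) * Gamma
              - varpi * f_q * f_r * (1 - f_q * f_r) * x))
    return lhs - rhs

lo, hi = 1e-9, Iq_bar - 1e-9     # F(lo) > 0 and F(hi) < 0 for these values
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) > 0 else (lo, mid)
Iq_star = 0.5 * (lo + hi)
```

At $x=0$ the ratio of the two sides equals $\mathcal R_0$, which is consistent with a root in the open interval existing exactly when $\mathcal R_0>1$.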
*\[rmk.rplctm\] According to , the dynamics of the mosquito population (expressed in the parameters $f_r, f_q$, and $\ell$) as well as the protection means used by the population (expressed in the parameter $\varpi$) strongly influence the location of the EE. This justifies our initial supposition that mosquito dynamics and host protection means are important practical factors in determining the prevalence of infection.*
Stability of system equilibria {#sec.analysis}
==============================
In this section we analyze the stability of the system equilibria given in Propositions \[prop:stdst0\] and \[prop:stdst1\].
Stability analysis of the disease free equilibrium (DFE) {#subsec.dfestabanan}
--------------------------------------------------------
We have the following result about the global stability of the disease free equilibrium:
\[thm.defstab\] When $\mathcal R_0 \leq 1$, the DFE is GAS in $\mathbb R_+^u$.
[*Proof* ]{}: Our proof relies on Theorem 4.3 of [@KamSal07], which establishes global asymptotic stability (GAS) for epidemiological systems that can be expressed in the matrix form . This theorem is restated as Theorem \[thm:kamsal\] in the Appendix; for the proof, the reader may consult [@KamSal07]. To complete the proof, we need only establish that the five conditions ([**h**]{}1–[**h**]{}5) required in Theorem \[thm:kamsal\] are satisfied for the system when $\mathcal R_0\leq 1$:
1. This condition is satisfied for the system as a result of Proposition \[prop:dissip\].
2. We note first that $n_S = n+3$, and the canonical projection of $\Omega$ on $\mathbb R_+^{n+3}$ is $\mathbb I = \{\mathbf x^*_{S_h}\}\times\left [0,\;\;S^*_q\right]\times \left [0,\;\;S^*_r\right]$. The system reduced to this subvariety is given in , and this system is GAS at its equilibrium ($\mathbf x^*_S$) as a result of Proposition \[prop:stbsysred\].
3. We consider first the case $\ell=1$ and $n=1$. In this case, the matrix $\mathbf A_I(\mathbf x)$ in the system is $$\mathbf A_I(\mathbf x) = \left(\begin{array}{ccccccc}-(\mu+\delta)&0&0&ac_0f_0\frac{S_q}{H} &ac_1f_1\frac{S_q}{H}&0&0\\\delta&-(\hat\mu+\varpi)&0&0&0&0&0 \\0&\varpi&-(\mu+\delta)&0&0&0&0\\ 0&0&0&-(\nu_0+\gamma_0)&0&am_0\frac{S_0}{H}&0 \\ 0&0&0&0&-(\nu_1+\gamma_1)&am_1\frac{S_1}{H}&0 \\ 0&0&\delta&0&0&-(\hat\mu+\varpi) &\delta \\ 0&0&0&0&0&\varpi&-(\mu+\delta)\end{array}\right).$$ In this case, the two properties required for condition ($\mathbf h3$) follow immediately: the off-diagonal terms of the matrix $\mathbf A_I(\mathbf x)$ are nonnegative, and the matrix is irreducible, as can be seen from the associated directed graph $G(\mathbf A_I(\mathbf x))$ in Figure \[graph\]. For general $\ell$ and $n$, the proof of ([**h**]{}3) is similar.
![Graph associated with the matrix $\mathbf A_I(\mathbf x)$[]{data-label="graph"}](graph.eps)
4. Defining $\overline{\mathbf A}_I \equiv \mathbf A_I(\mathbf x^*)$, we have $\mathbf A_I(\mathbf x)\leq\overline{\mathbf A}_I$ $\forall \,\mathbf x\in\Omega$, and $\mathbf x^*\in\left(\mathbb R_+^{n+3}\times\{\mathbf 0\}\right)\cap\Omega$. Thus the upper bound of $\mathfrak M$ is attained at the DFE, which is a point on the boundary of $\Omega$, and condition ($\mathbf h 4$) is satisfied.
5. We first observe that $\overline{\mathbf A}_I$ is the block of the Jacobian matrix of the system corresponding to the infected variables, evaluated at the DFE. As noted in [@KamSal07], the condition that all eigenvalues of $\overline{\mathbf A}_I$ have negative real parts, which is equivalent to the condition that $\overline{\mathbf A}_I$ is a stable Metzler matrix, is also equivalent to the condition $\mathcal R_0\leq1$. This fact is developed in the proof of Proposition \[prop:basicrepn\] (see Appendix), where we compute the value of $\mathcal R_0$ by establishing necessary and sufficient conditions for the stability of the Metzler matrix $\overline{\mathbf A}_I$.
Since the five conditions for Theorem 4.3 of [@KamSal07] are satisfied, the theorem follows. [$\Box$ ]{}
Stability analysis of the endemic equilibrium (EE) {#sec:eeqstana}
--------------------------------------------------
In this section we analyze the behavior of the system under the condition $\mathcal R_0>1$. From Proposition \[prop:basicrepn\] it follows that the DFE is not stable in this case. As stated in Proposition \[prop:stdst1\], the system also admits a unique nontrivial biologically feasible equilibrium (or endemic equilibrium (EE)). It remains to address the stability of the EE, which determines the behavior of the system when the disease persists. Our main result in this regard is the following theorem.
\[thm:thmstabee\] When $\mathcal R_0>1$, the EE $\mathbf x^\star$ of the system defined in is GAS on $\left(\mathbb R_{>0}\right)^u$.
\[rmq:rmqglstb\]*The above theorem implies that the EE is in fact GAS on the whole nonnegative cone $\mathbb R^u_+$, since the positive cone $\left(\mathbb R_{>0}\right)^u$ is absorbing for the system .*
[*Proof* ]{}: Considering the system when $\mathcal R_0 > 1$, there is a unique endemic equilibrium $\mathbf x^\star$ with respective components given as in eq. . Let the function $V_{ee}$ be defined on $\left(\mathbb R_{>0}\right)^u$ as follows:
$$\label{eq:eqliapvee}
\begin{array}{rcl}V_{ee}(\mathbf x) & = & (S_q-S^\star_q\ln S_q)+(S_r-S^\star_r\ln S_r)+\, \sigma_r^{(1)}(E^{(1)}_r-E^{(1)\star}_r\ln E^{(1)}_r)\\&& +\,{\displaystyle\sum}_{j=1}^\ell\left(\sigma_q^{(j)}(E^{(j)}_q-E^{(j)\star}_q\ln E^{(j)}_q)+\sigma_r^{(j+1)}(E^{(j+1)}_r-E^{(j+1)\star}_r\ln E^{(j+1)}_r)\right)+\,\tau_q(I_q-I_q^{\star}\ln \,I_q)\\&& +\tau_r(I_r-I_r^{\star}\ln \,I_r)+\,{\displaystyle\sum}_{i=0}^{n}\upsilon_i\left((S_i-S_i^{\star}\ln \,S_i)+(I_i-I_i^{\star}\ln \,I_i)\right),
\end{array}$$
where $\sigma^{(j)}_r = (f^*_qf_r)^{1-j}$, for $j=1,\;2,\;\cdots,\;\ell+ 1$, $\sigma^{(j)}_q = (f^*_qf_r)^{1-j}/f_r$, for $j=1,\;2,\;\cdots,\;\ell$, $\tau_q = (f^*_qf_r)^{-\ell} /f_r$, $\tau_r =(f^*_qf_r)^{-\ell} $, $\upsilon_i=a\frac{S_q^\star}{H^*}\frac{f_i}{\nu_i - \tilde \nu_i}$ for $i=0,\;1,\;\cdots,\;n$ (the motivation for these coefficients, and the derivation of expression below for the derivative, are both provided in Appendix \[sec.supplyap\]). $V_{ee}(\mathbf x)$ is a $\mathcal C^{\infty}$ positive definite function defined on $\left(\mathbb R_{>0}\right)^u$, whose derivative along the trajectories of the system is given by:
$$\label{eq:eqliapeeder2}
\begin{array}{rcl}
{\frac{dV_{ee}}{d\,t}}(\mathbf x(t)) & = & \hat\mu S^\star_q\left(2-\frac{S_q}{S^\star_q}-\frac{S^\star_q}{S_q}\right) + \delta S^\star_r \left( \frac{S^\star_q}{S_q}+\frac{S_r}{S_r^\star}-\frac{S^\star_q}{S_q}\frac{S_r}{S_r^\star}-1\right) +\, I_q^\star\frac{\varpi \sigma_r^{(1)}}{(f_qf_r)^{\ell}} \left(2 - \frac{I_r}{I_r^\star}\frac{I_q^\star}{I_q}- \frac{I_q}{I_q^\star}\frac{I^\star_r}{I_r}\right)\\
&&+\,{{\displaystyle\sum}_{i=0}^n}\upsilon_i\Bigg[ \hat\nu_i\Big[S_i^\star \Big(4-\frac{S^\star_i}{S_i}-\frac{S^\star_q}{S_q}-\frac{S_i}{S^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r}-\frac{S_r}{S^\star_r}\Big)\\&& +\,c_i I^\star_i\Big(2\ell +5 -\frac{S^\star_i}{S_i}-\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\frac{I_q}{I^\star_q} -\frac{S^\star_q}{S_q} - \frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{E^{(1)\star}_r}{E^{(1)}_r}- {{\displaystyle\sum}_{j=1}^{\ell}}\frac{E^{(j)}_q}{E^{(j)\star}_q}\frac{E^{(j+1)\star}_r}{E^{(j+1)}_r}-\, {{\displaystyle\sum}_{j=1}^{\ell}}\frac{ E^{(j)}_r}{ E^{(j)\star}_r}\frac{E^{(j)\star}_q}{E^{(j)}_q} -\frac{E^{(\ell+ 1)}_r}{E^{(\ell+ 1)\star}_r}\frac{I^\star_q}{I_q}\Big)
\\&&+\,\bar c_iI^\star_i\Big(4+\frac{I_q}{I^\star_q}-\frac{S^\star_i}{S_i}-\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\frac{I_q}{I^\star_q}-\frac{S^\star_q}{S_q}-\frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r}-\,\frac{S_r}{S^\star_r}\Big)\Big] + \hat\gamma_iI^\star_i\left(1 + \frac{I_q}{I^\star_q} - \frac{I_i}{I^\star_i} \frac{S^\star_i}{S_i} - \frac{I_q}{I^\star_q} \frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\right) \Bigg]\\
&=& f_1(\mathbf x)+{{\displaystyle\sum}_{i=0}^n}\upsilon_i G_{1_i}(\mathbf x),
\end{array}$$
where for each $i$, $\bar c_i$ is the complementary probability of $c_i$ (i.e. $\bar c_i+c_i=1$), $ \hat \nu_i = \nu_i - \tilde \nu_i$, $\hat{\gamma_i}\equiv\tilde\nu_i+\gamma_i$; $$f_1(\mathbf x) = \hat\mu S^\star_q\left(2-\frac{S_q}{S^\star_q}-\frac{S^\star_q}{S_q}\right) + \delta S^\star_r \left( \frac{S^\star_q}{S_q}+\frac{S_r}{S_r^\star}-\frac{S^\star_q}{S_q}\frac{S_r}{S_r^\star}-1\right) +\, I_q^\star\frac{\varpi \sigma_r^{(1)}}{(f_qf_r)^{\ell}} \left(2 - \frac{I_r}{I_r^\star}\frac{I_q^\star}{I_q}- \frac{I_q}{I_q^\star}\frac{I^\star_r}{I_r}\right);$$ and $$\label{eq:G1exp1}
G_{1_i}(\mathbf x) = g_{1_i}(\mathbf x)+h_{1_i}(\mathbf x)+p_{1_i}(\mathbf x),$$ where $$\begin{aligned}
{g_1}_i(\mathbf x)= &c_i\hat\nu_i I^\star_i\left(2\ell +5 -\frac{S^\star_i}{S_i}-\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\frac{I_q}{I^\star_q} -\frac{S^\star_q}{S_q} -\; \frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{E^{(1)\star}_r}{E^{(1)}_r}- {{\displaystyle\sum}_{j=1}^{\ell}} \frac{E^{(j)}_q}{E^{(j)\star}_q} \frac{E^{(j+1)\star}_r}{E^{(j+1)}_r}-\, {{\displaystyle\sum}_{j=1}^{\ell}}\frac{ E^{(j)}_r}{ E^{(j)\star}_r}\frac{E^{(j)\star}_q}{E^{(j)}_q} -\frac{E^{(\ell+ 1)}_r}{E^{(\ell+ 1)\star}_r}\frac{I^\star_q}{I_q}\right)\\
&+\hat\nu_i\,S_i^\star \left(4-\frac{S^\star_i}{S_i}-\frac{S^\star_q}{S_q}-\frac{S_i}{S^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r}-\frac{S_r}{S^\star_r}\right),\\
{h_1}_i(\mathbf x)=& \bar c_i\hat\nu_iI^\star_i \left(4+\frac{I_q}{I^\star_q} -\frac{S^\star_i}{S_i} -\frac{S_i}{S^\star_i} \frac{I^\star_i}{I_i}\frac{I_q}{I^\star_q}-\frac{S^\star_q}{S_q}-\frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r}-\,\frac{S_r}{S^\star_r}\right),\\
{p_1}_i(\mathbf x)=&\hat\gamma_iI^\star_i\left(1 + \frac{I_q}{I^\star_q} - \frac{I_i}{I^\star_i} \frac{S^\star_i}{S_i} - \frac{I_q}{I^\star_q} \frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\right).\end{aligned}$$
For each $i$, by adding and subtracting $\nu_iI^\star_i \left( 1-\frac{I^\star_i}{I_i} \frac{S_i}{S^\star_i}\right)$ the function ${G_1}_i(\mathbf x)$ may be rewritten as $$\label{eq:eqliapeeder20}
\begin{array}{rcl}
{G_1}_i(\mathbf x) & = & \hat\nu_i\Big[S_i^\star \Big(4-\frac{S^\star_i}{S_i}-\frac{S^\star_q}{S_q}-\frac{S_i}{S^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r}-\frac{S_r}{S^\star_r}\Big)\\&& +\,c_i I^\star_i\Big(2\ell +6 -\frac{S^\star_i}{S_i} -\frac{S_i}{S^\star_i} \frac{I^\star_i}{I_i} - \frac{I_q}{I^\star_q} -\frac{S^\star_q}{S_q} - \frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{E^{(1)\star}_r}{E^{(1)}_r}- {{\displaystyle\sum}_{j=1}^{\ell}}\frac{E^{(j)}_q}{E^{(j)\star}_q}\frac{E^{(j+1)\star}_r}{E^{(j+1)}_r}-\, {{\displaystyle\sum}_{j=1}^{\ell}}\frac{ E^{(j)}_r}{ E^{(j)\star}_r}\frac{E^{(j)\star}_q}{E^{(j)}_q} -\frac{E^{(\ell+ 1)}_r}{E^{(\ell+ 1)\star}_r}\frac{I^\star_q}{I_q}\Big)
\\&&+\,\bar c_iI^\star_i\Big(5-\frac{S^\star_i}{S_i}-\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}-\frac{S^\star_q}{S_q}-\frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r}-\,\frac{S_r}{S^\star_r}\Big)\Big] +\, \bar\gamma_iI^\star_i\Big(\frac{I_q}{I^\star_q} + \frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i} - 1 -\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\frac{I_q}{I^\star_q}\Big) \\&& + \hat\gamma_iI^\star_i\left(2 - \frac{I_i}{I^\star_i} \frac{S^\star_i}{S_i} - \frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\right) \\
&=& \tilde{{g_1}}_i(\mathbf x)+\tilde{{p_1}}_i(\mathbf x),
\end{array}$$ with $\bar{\gamma_i}\equiv\nu_i+\gamma_i$ , $$\begin{aligned}
\tilde{{g_1}}_i(\mathbf x)= &I^\star_i\Bigg[\hat\nu_i \Bigg[c_i\left(2\ell +6 -\frac{S^\star_i}{S_i} - \frac{I_q}{I^\star_q} -\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i} -\frac{S^\star_q}{S_q} -\; \frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{E^{(1)\star}_r}{E^{(1)}_r}- {{\displaystyle\sum}_{j=1}^{\ell}} \frac{E^{(j)}_q}{E^{(j)\star}_q} \frac{E^{(j+1)\star}_r}{E^{(j+1)}_r}-\, {{\displaystyle\sum}_{j=1}^{\ell}}\frac{ E^{(j)}_r}{ E^{(j)\star}_r}\frac{E^{(j)\star}_q}{E^{(j)}_q} -\frac{E^{(\ell+ 1)}_r}{E^{(\ell+ 1)\star}_r}\frac{I^\star_q}{I_q}\right)\\
& + \,\bar c_i\left(5+\frac{I_q}{I^\star_q}-\frac{S^\star_i}{S_i}-\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}-\frac{S^\star_q}{S_q}-\frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r}-\,\frac{S_r}{S^\star_r}\right)\Bigg] + \hat\gamma_i\left(2 - \frac{I_i}{I^\star_i} \frac{S^\star_i}{S_i} - \frac{S_i}{S^\star_i} \frac{I^\star_i}{I_i}\right) \Bigg] \\& +\,\hat\nu_i\,S_i^\star \left( 4-\frac{S^\star_i}{S_i} - \frac{S^\star_q}{S_q}-\frac{S_i}{S^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r}-\frac{S_r}{S^\star_r}\right),\\
\tilde{{p_1}}_i(\mathbf x)=& \bar\gamma_iI^\star_i\left(\frac{I_q}{I^\star_q} + \frac{I_i}{I^\star_i} \frac{S^\star_i}{S_i} -1 - \frac{I_q}{I^\star_q} \frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\right).\end{aligned}$$
Alternatively, using the identity $\delta S_r^\star = (\varpi^\star-\varphi^\star)S_q^\star-\mu S_r^\star$ we may write $$\begin{array}{rcl}
{\dfrac{dV_{ee}}{d\,t}}(\mathbf
x(t)) & = & \hat\mu S^\star_q\Big(2-\frac{S_q}{S^\star_q}-\frac{S^\star_q}{S_q}\Big) + \mu S^\star_r \Big(1 + \frac{S^\star_q}{S_q}\frac{S_r}{S_r^\star} - \frac{S^\star_q}{S_q}-\frac{S_r}{S_r^\star}\Big) +\, I_q^\star\frac{\varpi \sigma_r^{(1)}}{(f_qf_r)^{\ell}} \Big(2 - \frac{I_r}{I_r^\star}\frac{I_q^\star}{I_q}- \frac{I_q}{I_q^\star}\frac{I^\star_r}{I_r}\Big)\\
&&+\,{{\displaystyle\sum}_{i=0}^n}\upsilon_i\Bigg[\hat\nu_i\Big[S_i^\star \Big(3-\frac{S^\star_i}{S_i}-\frac{S_i}{S^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r}-\frac{S^\star_q}{S_q}\frac{S_r}{S^\star_r}\Big)\\
&& +\,c_i I^\star_i\Big(2\ell +5 -\frac{S^\star_i}{S_i}-\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\frac{I_q}{I^\star_q} -\frac{S^\star_q}{S_q} -\; \frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{E^{(1)\star}_r}{E^{(1)}_r}- {{\displaystyle\sum}_{j=1}^{\ell}}\frac{E^{(j)}_q}{E^{(j)\star}_q}\frac{E^{(j+1)\star}_r}{E^{(j+1)}_r}-\, {{\displaystyle\sum}_{j=1}^{\ell}}\frac{ E^{(j)}_r}{ E^{(j)\star}_r}\frac{E^{(j)\star}_q}{E^{(j)}_q} -\frac{E^{(\ell+ 1)}_r}{E^{(\ell+ 1)\star}_r}\frac{I^\star_q}{I_q}\Big)
\\&&+\,\bar c_iI^\star_i\Big(3+\frac{I_q}{I^\star_q}-\frac{S^\star_i}{S_i}-\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\frac{I_q}{I^\star_q}-\frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r}-\,\frac{S^\star_q}{S_q}\frac{S_r}{S^\star_r}\Big)\Big] + \hat\gamma_iI^\star_i\left(1 + \frac{I_q}{I^\star_q} - \frac{I_i}{I^\star_i} \frac{S^\star_i}{S_i} - \frac{I_q}{I^\star_q} \frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\right)\Bigg]\\
&=& f_2(\mathbf x)+{{\displaystyle\sum}_{i=0}^n}\upsilon_i G_{2_i},
\end{array}
\label{eq:eqliapeeder21}$$ with $$\begin{aligned}
f_2(\mathbf x) = &\hat\mu S^\star_q\left(2-\frac{S_q}{S^\star_q}-\frac{S^\star_q}{S_q}\right) + \mu S^\star_r \left(1 +\frac{S^\star_q}{S_q}\frac{S_r}{S_r^\star}- \frac{S^\star_q}{S_q}-\frac{S_r}{S_r^\star}\right) +\, I_q^\star\frac{\varpi \sigma_r^{(1)}}{(f_qf_r)^{\ell}} \left(2 - \frac{I_r}{I_r^\star}\frac{I_q^\star}{I_q}- \frac{I_q}{I_q^\star}\frac{I^\star_r}{I_r}\right); \end{aligned}$$ and $$\label{eq:G2exp2}
{G_2}_i = {g_2}_i(\mathbf x)+{h_2}_i(\mathbf x)+{p_1}_i(\mathbf x),$$ where $$\begin{aligned}
{g_2}_i(\mathbf x) =&c_i\hat\nu_i I^\star_i\Big(2\ell +5 -\frac{S^\star_i}{S_i}-\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\frac{I_q}{I^\star_q} -\frac{S^\star_q}{S_q} -\; \frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{E^{(1)\star}_r}{E^{(1)}_r}- {{\displaystyle\sum}_{j=1}^{\ell}}\frac{E^{(j)}_q}{E^{(j)\star}_q}\frac{E^{(j+1)\star}_r}{E^{(j+1)}_r}-\, {{\displaystyle\sum}_{j=1}^{\ell}}\frac{ E^{(j)}_r}{ E^{(j)\star}_r}\frac{E^{(j)\star}_q}{E^{(j)}_q} -\frac{E^{(\ell+ 1)}_r}{E^{(\ell+ 1)\star}_r}\frac{I^\star_q}{I_q}\Big)\\
&+\hat\nu_i\,S_i^\star \left(3-\frac{S^\star_i}{S_i}-\frac{S_i}{S^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r}-\frac{S_r}{S^\star_r}\frac{S^\star_q}{S_q}\right); \\
{h_2}_i(\mathbf x)=&\bar c_i\hat\nu_iI^\star_i\left(3+\frac{I_q}{I^\star_q}-\frac{S^\star_i}{S_i}-\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\frac{I_q}{I^\star_q}-\frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r}-\,\frac{S_r}{S^\star_r}\frac{S^\star_q}{S_q}\right).\end{aligned}$$
For each $i$, by adding and subtracting $\nu_iI^\star_i \left( 1-\frac{I^\star_i}{I_i} \frac{S_i}{S^\star_i}\right)$ the function ${G_2}_i$ may be rewritten as $$\label{eq:eqliapeeder23}
\begin{array}{rcl}
{G_2}_i(\mathbf x) & = & \hat\nu_i\Big[S_i^\star \Big(3-\frac{S^\star_i}{S_i}-\frac{S_i}{S^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r} - \frac{S^\star_q}{S_q} \frac{S_r}{S^\star_r}\Big)\\&& +\,c_i I^\star_i\Big(2\ell +6 -\frac{S^\star_i}{S_i} -\frac{S_i}{S^\star_i} \frac{I^\star_i}{I_i} - \frac{I_q}{I^\star_q} -\frac{S^\star_q}{S_q} - \frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{E^{(1)\star}_r}{E^{(1)}_r}- {{\displaystyle\sum}_{j=1}^{\ell}}\frac{E^{(j)}_q}{E^{(j)\star}_q}\frac{E^{(j+1)\star}_r}{E^{(j+1)}_r}-\, {{\displaystyle\sum}_{j=1}^{\ell}}\frac{ E^{(j)}_r}{ E^{(j)\star}_r}\frac{E^{(j)\star}_q}{E^{(j)}_q} -\frac{E^{(\ell+ 1)}_r}{E^{(\ell+ 1)\star}_r}\frac{I^\star_q}{I_q}\Big)
\\&&+\,\bar c_iI^\star_i\Big(4-\frac{S^\star_i}{S_i}-\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}-\frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r}-\,\frac{S^\star_q}{S_q}\frac{S_r}{S^\star_r}\Big)\Big] +\, \bar\gamma_iI^\star_i\Big(\frac{I_q}{I^\star_q} + \frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i} - 1 -\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\frac{I_q}{I^\star_q}\Big) \\&& + \hat\gamma_iI^\star_i\left(2 - \frac{I_i}{I^\star_i} \frac{S^\star_i}{S_i} - \frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}\right) \\
&=& \tilde{{g_2}}_i(\mathbf x)+\tilde{{p_1}}_i(\mathbf x),
\end{array}$$ with $\bar{\gamma_i}\equiv\nu_i+\gamma_i$ , and $$\begin{aligned}
\tilde{{g_2}}_i(\mathbf x)= &I^\star_i\Bigg[\hat\nu_i \Bigg[c_i\left(2\ell +6 -\frac{S^\star_i}{S_i} - \frac{I_q}{I^\star_q} -\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i} -\frac{S^\star_q}{S_q} -\; \frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{E^{(1)\star}_r}{E^{(1)}_r}- {{\displaystyle\sum}_{j=1}^{\ell}} \frac{E^{(j)}_q}{E^{(j)\star}_q} \frac{E^{(j+1)\star}_r}{E^{(j+1)}_r}-\, {{\displaystyle\sum}_{j=1}^{\ell}}\frac{ E^{(j)}_r}{ E^{(j)\star}_r}\frac{E^{(j)\star}_q}{E^{(j)}_q} -\frac{E^{(\ell+ 1)}_r}{E^{(\ell+ 1)\star}_r}\frac{I^\star_q}{I_q}\right)\\
& + \,\bar c_i\left(4+\frac{I_q}{I^\star_q}-\frac{S^\star_i}{S_i}-\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}-\frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q}\frac{S^\star_r}{S_r}-\,\frac{S^\star_q}{S_q}\frac{S_r}{S^\star_r}\right)\Bigg] + \hat\gamma_i\left(2 - \frac{I_i}{I^\star_i} \frac{S^\star_i}{S_i} - \frac{S_i}{S^\star_i} \frac{I^\star_i}{I_i}\right) \Bigg] \\& +\,\hat\nu_i\,S_i^\star \left( 3-\frac{S^\star_i}{S_i} -\frac{S_i}{S^\star_i} \frac{S_q}{S^\star_q} \frac{S^\star_r}{S_r}- \frac{S^\star_q}{S_q} \frac{S_r}{S^\star_r} \right).\end{aligned}$$ We split $\left(\mathbb R_{>0}\right)^u$ into two overlapping subsets, $$\Omega_1=\left\{\mathbf x\in\left(\mathbb R_{>0}\right)^u\;|\;\frac{S^\star_q}{S_q}+\frac{S_r}{S_r^\star}-\frac{S^\star_q}{S_q}\frac{S_r}{S_r^\star}-1\leq 0\right\}\;\hbox{ and }\;\Omega_2 = \left\{\mathbf x\in\left(\mathbb R_{>0}\right)^u\;|\;\frac{S^\star_q}{S_q}+\frac{S_r}{S_r^\star}-\frac{S^\star_q}{S_q}\frac{S_r}{S_r^\star}-1\geq 0\right\},$$ and we shall consider ${\frac{dV_{ee}}{d\,t}}(\mathbf x(t))$ on $\Omega_1$ and $\Omega_2$ as given in and , respectively.
On the entire set $\left(\mathbb R_{>0}\right)^u$ we have for each $i$, ${g_k}_i(\mathbf x)\leq0$ and ${\tilde {g_k}}_i(\mathbf x)\leq0$ by Corollary \[cor.agmi\] of the arithmetic–geometric means inequality (Lemma \[lem.agmi\]) for $k\in\{1,~2\}$. The same corollary implies $f_k(\mathbf x)\leq0$ on $\Omega_k$ for $k\in\{1,~2\}$. On the entire set $\left(\mathbb R_{>0}\right)^u$ and for each $i$, when $(I_i-I^\star_i)(I_q-I^\star_q)\geq0$ we may show that $\tilde{{p_1}}_i(\mathbf x)\leq0$ by applying Corollary \[cor.agmi0\] to the ratios $\frac{I^\star_q}{I_q}$ and $\frac{I_q}{I^\star_q}$ with respective associated weights $\frac{I_q}{I^\star_q}$ and $\frac{S_i}{S^\star_i}\frac{I^\star_i}{I_i}$. On the other hand, when $(I_i-I^\star_i)(I_q-I^\star_q)\leq0$ we may show ${h_1}_i(\mathbf x)\leq0$, ${h_2}_i(\mathbf x)\leq0$ and ${p_1}_i(\mathbf x)\leq 0$ by applying the same corollary to the following data (given in pairs as (number, weight)) $\left(\frac{S_i^\star}{S_i},~1\right)$, $\left(\frac{I_i^\star}{I_i}\frac{S_i}{S^\star_i},~ \frac{I_q}{I^\star_q}\right)$, $\left(\frac{S^\star_q}{S_q},~ 1\right)$, $\left(\frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q} \frac{S^\star_r}{S_r},~ 1 \right)$, $\left(\frac{S_r}{S^\star_r},~1\right)$ for ${h_1}_i$; $\left(\frac{S_i^\star}{S_i},~1\right)$, $\left(\frac{I_i^\star}{I_i}\frac{S_i}{S^\star_i},~ \frac{I_q}{I^\star_q}\right)$, $\left(\frac{I_i}{I^\star_i}\frac{S_q}{S^\star_q} \frac{S^\star_r}{S_r},~1\right)$, $\left(\frac{S^\star_q}{S_q}\frac{S_r}{S^\star_r},~ 1\right)$ for ${h_2}_i$; and $\left(\frac{S_i}{S^\star_i} \frac{I^\star_i}{I_i},~\frac{I_q}{I^\star_q}\right)$, $\left(\frac{S^\star_i}{S_i}\frac{I_i}{I^\star_i},~1\right)$ for ${p_1}_i$.
In view of expressions , for $G_{1_i}$ and expressions , for $G_{2_i}$, the results of the previous paragraph show that $G_{k_i} \le 0$ on the entire set $(\mathbb R_{>0})^u$ for $k \in \{1,2\}$. Since we have also shown that $f_k(\mathbf x)\leq0$ on $\Omega_k$ for $k\in\{1,~2\}$, in view of expressions and for ${\frac{dV_{ee}}{d\,t}}(\mathbf x(t))$ we may conclude that ${\frac{dV_{ee}}{d\,t}}(\mathbf x(t)) \le 0$ for all $\mathbf x(t) \in (\mathbb R_{>0})^u$.
In order to determine the subset of $(\mathbb R_{>0})^u$ where $\frac{dV_{ee}}{d\,t}(\mathbf x(t))= 0$, we make use of to conclude that $$\frac{dV_{ee}}{d\,t}(\mathbf x(t))= 0\;\Leftrightarrow(f_1(\mathbf x)=0)\wedge ({g_1}_i(\mathbf x)=0,\;i=0,\;\cdots,\;n)\wedge ({h_1}_i(\mathbf x)=0,\;i=0,\;\cdots,\;n)\wedge ({p_1}_i(\mathbf x)=0,\;i=0,\;\cdots,\;n).$$ By Lemma \[lem.agmi\], $f_1(\mathbf x)=0$ if and only if $S_q=S^\star_q\wedge I_qI^\star_r=I^\star_qI_r$. Given that $f_1(\mathbf x)=0$, Lemma \[lem.agmi\] also implies that ${h_1}_i(\mathbf x)=0$ if and only if $(S_r=S^\star_r)\wedge (S_i=S^\star_i) \wedge (I_i=I^\star_i) \wedge (I_q=I^\star_q)$, and thus $I_r=I^\star_r$. Finally, assuming $f_1(\mathbf x)=0$ and ${h_1}_i(\mathbf x)=0$, Lemma \[lem.agmi\] also gives $${g_1}_i(\mathbf x)=0\;\;\hbox{ if and only if }\;\;1=\frac{E^{(1)\star}_r}{E^{(1)}_r} = \frac{ E^{(1)}_r}{ E^{(1)\star}_r}\frac{E^{(1)\star}_q}{E^{(1)}_q}= \frac{E^{(1)}_q}{E^{(1)\star}_q}\frac{E^{(2)\star}_r}{E^{(2)}_r}=\cdots = \frac{ E^{(\ell)}_r}{ E^{(\ell)\star}_r}\frac{E^{(\ell)\star}_q}{E^{(\ell)}_q}= \frac{ E^{(\ell+ 1)\star}_r}{ E^{(\ell+ 1)}_r}\frac{E^{(\ell)}_q}{E^{(\ell)\star}_q} =\frac{E^{(\ell+ 1)}_r}{E^{(\ell+ 1)\star}_r},$$ which implies $E^{(j)}_r=E^{(j)\star}_r, \;j=1,\;\cdots,\;\ell+ 1$ and $E^{(j)}_q=E^{(j)\star}_q, \;j=1,\;\cdots,\;\ell$. Thus, $\frac{dV_{ee}}{d\,t}(\mathbf x(t))= 0$ if and only if $\mathbf x = \mathbf x^\star$.
From the above discussion we may conclude that $V_{ee}$ is a strict Lyapunov function for the system on $(\mathbb R_{>0})^u$. The LaSalle invariance principle implies the global asymptotic stability of the EE $\mathbf x^\star$ of the system on the set $(\mathbb R_{>0})^u$ [@Bhatia70; @Las68; @MR0481301; @MR0594977].[$\Box$ ]{}
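The sign computations in this proof rely repeatedly on a corollary of the arithmetic–geometric mean inequality: if $y_1,\ldots,y_k>0$ and $y_1\cdots y_k=1$, then $k-(y_1+\cdots+y_k)\leq 0$, with equality iff every $y_j=1$. A randomized numerical spot-check:

```python
import random

# Spot-check: k minus the sum of k positive numbers with product 1 is <= 0.
random.seed(0)
for _ in range(1000):
    y = [random.uniform(0.1, 10.0) for _ in range(4)]
    y.append(1.0 / (y[0] * y[1] * y[2] * y[3]))  # force the product to be 1
    assert 5 - sum(y) <= 1e-12
```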
Sensitivity analysis of $\mathcal R_0$ and endemic equilibrium infection levels {#sec:secsam}
===============================================================================
The parameters of greatest interest in evaluating strategies for malaria control are the reproduction number $\mathcal R_0$ and the EE host infection levels $I_i^\star$. Ideally, one would hope to achieve a reproduction number $\mathcal R_0 < 1$, which would imply the eventual elimination of malaria. In the short term, a more immediate objective is to reduce infection levels as much as possible.
Various practical control strategies will impact different model parameters. Control strategies are listed in Table \[tab.tabvd3\], along with affected model parameters.
--------------------------------------------------------------------------- ---------- ------- ------- ------- ------- ------------
*Control method* $\Gamma$ $\mu$ $f_i$ $k_i$ $m_i$ $\gamma_i$
Outdoor spraying with larvicides
Breeding habitat reduction (e.g. draining standing water)
Outdoor vector control
Indoor residual spraying
Insecticide-treated bed nets (ITN), Long-lasting insecticidal nets (LLIN)
Untreated bed nets
Repellents (topical repellents, mosquito coils)
Preventative drugs
Rapid diagnosis and treatment (RDT)
--------------------------------------------------------------------------- ---------- ------- ------- ------- ------- ------------
: Control strategies and the model parameters they affect[]{data-label="tab.tabvd3"}
In order to quantify the effects of the parameters in Table \[tab.tabvd3\] on the two critical performance measures $\mathcal R_0$ and $I_i^\star$ we will make use of sensitivity indices, which measure the percentage change in a dependent variable in response to an incremental percentage change in a system parameter. When the variable is a differentiable function of the parameter, the sensitivity index may be formally defined as follows:
The normalized forward sensitivity index of a variable $u$ that depends differentiably on a parameter $p$ is defined as: $$\label{eq:seind}
\varUpsilon_p^u := \frac{\partial u}{\partial p}\times\frac{p}{u}.$$
Our analysis is facilitated by the following observations, which are easily proved using basic calculus:
- Sensitivity of additive terms: $\varUpsilon_p^{\sum_i u_i} =\dfrac{\sum_i \varUpsilon_p^{u_i} u_i}{\sum_i u_i}$;
- Additive sensitivity of multiplicative terms: $\varUpsilon_p^{uv} = \varUpsilon_p^{u} + \varUpsilon_p^{v}$;
- Negative sensitivity of reciprocal terms: $\varUpsilon_p^{u^{-1}} = -\varUpsilon_p^{u}$;
- Multiplicative sensitivity of compositions: if $u = u(x)$ and $x = x(p)$, then $\varUpsilon_p^{u} = \varUpsilon_x^{u}\varUpsilon_p^{x}$.
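These rules are easy to confirm numerically with central differences. In the sketch below the functions $u$, $v$, $x$ are arbitrary illustrative choices, not model quantities:

```python
import math

def sens(u, p, h=1e-6):
    """Normalized forward sensitivity index (du/dp) * (p/u), approximated
    with a central difference of relative step h."""
    du = (u(p * (1 + h)) - u(p * (1 - h))) / (2 * p * h)
    return du * p / u(p)

p = 0.7
u = lambda x: 3.0 * x ** 2 + 1.0     # illustrative dependent variable
v = lambda x: math.exp(-x)           # a second illustrative variable

# Multiplicative terms: the index of u*v is the sum of the indices.
assert abs(sens(lambda x: u(x) * v(x), p) - (sens(u, p) + sens(v, p))) < 1e-5

# Reciprocal terms: the index of 1/u is minus the index of u.
assert abs(sens(lambda x: 1.0 / u(x), p) + sens(u, p)) < 1e-5

# Compositions: for w = u(x(p)) the index factors through x.
x = lambda q: q ** 3 + 0.5
assert abs(sens(lambda q: u(x(q)), p) - sens(u, x(p)) * sens(x, p)) < 1e-5
```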
Sensitivities of $\mathcal R_0$ with respect to controllable parameters {#subsec:secsamnpkc}
-----------------------------------------------------------------------
To facilitate the calculation of sensitivities, we introduce some intermediate parameters: $$\begin{aligned}
\rho^\star &\equiv \dfrac{\hat{\mu}^\star}{\varpi^\star};\\
\widehat{\mathcal R}_0 &\equiv \frac{(f^*_qf_r)^{\ell+ 1}}{(1-f^*_qf_r)^2}\dfrac{f^*_q}{{\varpi^*}^2} = f_r^{\ell+ 1}(1 + \rho^\star)^{-\ell} \left( 1-f_r + \rho^\star \right)^{-2} (\varpi^\star)^{-2};\\
u_i &\equiv \frac{c_i f_i m_i}{\gamma_i+\nu_i};\\
U &\equiv \sum_{i=1}^n b_i^* u_i.\end{aligned}$$ The parameter $\rho^\star$ is interpretable as the failure/success ratio of questing mosquitoes: that is, the fraction of mosquitoes at each questing stage that fail to feed and survive, divided by the fraction that succeed and pass on to the next resting stage. Note also that $U$ is the weighted average of the $u_i$ values, where the weights are the EE population proportions $b_i^*$. In terms of these parameters, we may rewrite $\mathcal R_0$ as: $$\begin{aligned}
\label{eq:R0_rho}
\mathcal R_0 &= \frac{a^2\Gamma}{H^*} \widehat{\mathcal R}_0 U.\end{aligned}$$ We begin with the sensitivities of $\mathcal R_0$ on $k_i$ and $f_i$. First we calculate the sensitivities with respect to the intermediate parameters $\hat{\mu}^*$ and $\varpi^*$, which depend on $k_i$ and $f_i$ respectively. Using the additive property of sensitivities mentioned above, it follows directly that $$\varUpsilon_{\rho^\star}^{\mathcal R_0} =\varUpsilon_{\rho^\star}^{\widehat{\mathcal R}_0} = \frac{ -\ell \rho^\star}{1+\rho^\star} - 2\frac{ \rho^\star}{1 - f_r+\rho^\star},$$ which may be rewritten as $$\varUpsilon_{\rho^\star}^{\mathcal R_0}=\varUpsilon_{\rho^\star}^{\widehat{\mathcal R}_0} = \frac{-\ell \rho^\star}{1+\rho^\star} - 2 + \frac{2t}{1+\rho^\star},$$ where $$t \equiv \frac{1-f_r + \rho^* - f_r \rho^*}{1-f_r + \rho^*} = \frac{1-f_r}{1-f_q^*f_r}.$$ Note that $0\le t \le 1$: $t \approx 1$ when $f_r \approx 0$ (low resting survival rate) or when $f_q^* \approx 1$ (high feeding success); and $t \approx 0$ when $f_r \approx 1$ (high resting survival rate).
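A small numerical sweep (the grid values are arbitrary) confirms that the two forms of $t$ agree, that $0 \le t \le 1$, and the limiting behavior just described:

```python
def t_value(f_r, rho):
    """t computed from its first (defining) form."""
    return (1 - f_r + rho - f_r * rho) / (1 - f_r + rho)

# Confirm the identity t = (1 - f_r)/(1 - f_q* f_r) with f_q* = 1/(1 + rho),
# and the bound 0 <= t <= 1, over a grid of (f_r, rho) values.
for f_r in (0.01, 0.25, 0.5, 0.75, 0.99):
    for rho in (0.01, 0.1, 1.0, 10.0):
        fq = 1.0 / (1.0 + rho)
        t = t_value(f_r, rho)
        assert abs(t - (1 - f_r) / (1 - fq * f_r)) < 1e-12
        assert 0.0 <= t <= 1.0

# Limiting behavior described in the text:
assert t_value(0.001, 1.0) > 0.99    # f_r ~ 0  (low resting survival): t ~ 1
assert t_value(0.999, 1.0) < 0.01    # f_r ~ 1  (high resting survival): t ~ 0
```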
Using $\varUpsilon_{\hat{\mu}^\star}^{\rho^\star} = 1$, $\varUpsilon_{\varpi^\star}^{\rho^\star} = -1$, $f_q^\star = (1 + \rho^\star)^{-1}$ and the multiplicative property of sensitivities, we have $$\begin{aligned}
\varUpsilon_{\hat{\mu}^\star}^{\mathcal R_0} &= \varUpsilon_{\hat{\mu}^\star}^{\widehat{\mathcal R}_0} = \varUpsilon_{\rho^\star}^{\widehat{\mathcal R}_0}\varUpsilon_{\hat{\mu}^\star}^{\rho^\star} = - \ell (1-f_q^\star) - 2 + 2tf_q^\star = - \ell -2 + f_q^\star(\ell+ 2t),\\
\varUpsilon_{\varpi^\star}^{\mathcal R_0} &=\varUpsilon_{\varpi^\star}^{\widehat{\mathcal R}_0} = -\varUpsilon_{\hat{\mu}^\star}^{\widehat{\mathcal R}_0} - 2 = \ell (1-f_q^\star) - 2tf_q^\star.\end{aligned}$$
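These closed-form indices can be cross-checked against numerical differentiation of $\widehat{\mathcal R}_0$. In the sketch below, the values $f_r = 0.6$, $\ell = 4$, $\hat{\mu}^\star = 0.3$, $\varpi^\star = 0.8$ are arbitrary illustrative choices:

```python
def R0_hat(mu_hat, varpi, f_r=0.6, ell=4):
    """Closed form of R0-hat as a function of mu-hat* and varpi*,
    with rho* = mu-hat*/varpi* and f_r, ell held fixed."""
    rho = mu_hat / varpi
    return (f_r ** (ell + 1) * (1 + rho) ** (-ell)
            * (1 - f_r + rho) ** (-2) * varpi ** (-2))

def sens_num(u, p, h=1e-6):
    """Normalized sensitivity index by central difference."""
    return (u(p * (1 + h)) - u(p * (1 - h))) / (2 * p * h) * p / u(p)

mu_hat, varpi, f_r, ell = 0.3, 0.8, 0.6, 4   # illustrative values
rho = mu_hat / varpi
fq = 1 / (1 + rho)
t = (1 - f_r) / (1 - fq * f_r)

S_mu    = -ell - 2 + fq * (ell + 2 * t)      # closed form w.r.t. mu-hat*
S_varpi = ell * (1 - fq) - 2 * t * fq        # closed form w.r.t. varpi*

assert abs(S_mu - sens_num(lambda x: R0_hat(x, varpi), mu_hat)) < 1e-5
assert abs(S_varpi - sens_num(lambda x: R0_hat(mu_hat, x), varpi)) < 1e-5
```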
The sensitivity $\varUpsilon_{\hat{\mu}^\star}^{\mathcal R_0}$ is negative, as expected: if the kill rate of questing mosquitoes is increased, we would expect $\mathcal R_0$ to decrease. Surprisingly, when $f_q^\star \approx 1$ (high questing success rate), so that $t \approx 1$, it is possible for the sensitivity of $\mathcal R_0$ on $\varpi^*$ to be negative: in other words, decreasing the success rate of questing mosquitoes can actually increase the reproduction number! In the usual case, however, the equation for $\varUpsilon_{\varpi^\star}^{\mathcal R_0}$ gives a positive value, as expected.
We are now ready to obtain sensitivities with respect to $k_i$ and $f_i$. According to the definitions in Table \[tab.tabvd2\], the parameter $\hat{\mu}^*$ depends only on $k_i$, while the parameter $\varpi^\star$ depends only on $f_i$. We have $$\varUpsilon_{k_i}^{\hat{\mu}^\star} = \frac{a b_i^*k_i}{\hat{\mu}^*} ~~ \text{and}~~ \varUpsilon_{f_i}^{\varpi^\star} = \frac{a b_i^*f_i}{\varpi^*} = \frac{ b_i^*f_i}{\sum_i b_i^*f_i}.$$ This leads finally to the following expressions for $\varUpsilon_{k_i}^{\mathcal R_0}$ and for $\varUpsilon_{f_i}^{\mathcal R_0}$:
$$\begin{aligned}
\varUpsilon_{k_i}^{\mathcal R_0} = \varUpsilon_{\hat{\mu}^\star}^{\mathcal R_0}\varUpsilon_{k_i}^{\hat{\mu}^\star} &=
b_i^*\left(\frac{ a k_i }{\hat{\mu}^*}\right)\left( - (\ell+ 2)(1-f_q^\star)- 2(1-t)f_q^\star\right),\label{eq:k_iSens}\\
\varUpsilon_{f_i}^{\mathcal R_0} = \varUpsilon_{\varpi^\star}^{\mathcal R_0}\varUpsilon_{f_i}^{\varpi^\star}
+ \frac{ b_i^\star u_i}{U} &= b_i^* \left(\left(\frac{ f_i}{\sum_i b_i^* f_i}\right)\left(\ell (1-f_q^\star) - 2tf_q^\star\right) + \frac{ u_i}{U} \right) \quad( 0 \le t \le 1)\label{eq:f_iSens}\end{aligned}$$
The sensitivities of $\mathcal R_0$ on $k_i$ and $ f_i$ given in - are proportional to $b_i^*$. This reflects the fact that the impact of these parameters is directly proportional to the size of the host group that they affect. But although sensitivities are larger for larger host groups, presumably the effort required to change $k_i$ and $f_i$ will also be larger, since a larger population is involved.
The presence of the multiplicative factor $\ell$ (number of vector stages) in the sensitivities $\varUpsilon_{k_i}^{\mathcal R_0}$ and $\varUpsilon_{f_i}^{\mathcal R_0}$ is noteworthy. The parameters $k_i$ and $f_i$ affect the vector population during each stage, which leads to an $\ell$-fold reinforcement of the parameters’ impact on $\mathcal R_0$. Since $0 \le t \le 1$, terms in these expressions that involve $t$ or $1-t$ have a mitigated effect. The factor $1 - f_q^*$, which appears in both sensitivities, indicates that the effects of $k_i$ and $f_i$ on $\mathcal R_0$ are suppressed if a large proportion of questing mosquitoes survive to the next resting stage.
The sensitivities $\varUpsilon_{k_i}^{\mathcal R_0}$ and $\varUpsilon_{f_i}^{\mathcal R_0}$ are proportional to $k_i$ and $f_i$ respectively (note that $u_i$ is proportional to $f_i$). In the case of $\varUpsilon_{f_i}^{\mathcal R_0}$, this means that we should expect diminishing returns from a control strategy that targets $f_i$. On the other hand, $\varUpsilon_{k_i}^{\mathcal R_0}$ increases as $k_i$ increases towards 1 (although in practice further increases in kill rate are likely to become more difficult to achieve as the kill rate approaches 1, due to diminishing returns).
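As a quick algebraic sanity check (trial values only), the factor $-(\ell+2)(1-f_q^\star)-2(1-t)f_q^\star$ appearing in the expression for $\varUpsilon_{k_i}^{\mathcal R_0}$ is just a regrouping of $\varUpsilon_{\hat{\mu}^\star}^{\mathcal R_0} = -\ell-2+f_q^\star(\ell+2t)$:

```python
# Sweep arbitrary trial values of ell, f_q*, and t and check the regrouping.
for ell in (1, 2, 5):
    for fq in (0.2, 0.5, 0.9):
        for t in (0.0, 0.3, 1.0):
            lhs = -ell - 2 + fq * (ell + 2 * t)
            rhs = -(ell + 2) * (1 - fq) - 2 * (1 - t) * fq
            assert abs(lhs - rhs) < 1e-12
```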
Next we compute the sensitivity of $\mathcal R_0$ with respect to $\mu$. Both $f_r$ and $\hat{\mu}^*$ depend on $\mu$, so we will need sensitivities with respect to $f_r$. We obtain $$\begin{aligned}
\varUpsilon_{\mu}^{\mathcal R_0} &= -\left(\ell+1 +\frac{2f_r}{1-f_r +\rho^\star}\right) \frac{\mu}{\mu+\delta}
- (\ell +2 - f_q^\star(\ell+ 2t))\frac{\mu}{\mu+d} \nonumber\\
& = -(1 - f_r)\left(\ell+1 +\frac{2f_q^*f_r}{1-f_q^*f_r}\right)
- \frac{\mu}{\hat{\mu}^\star} \left((\ell + 2)(1 - f_q^\star) + 2f_q^\star(1-t)\right), \label{eq:muSens}\end{aligned}$$ which is always negative, as expected. Once again we see a multiplicative factor $\ell$, which reflects the influence of $\mu$ throughout all questing stages. The terms that are proportional to $1-f_r$ and $1-f_q^*$ are reduced when resting (resp. questing) mosquito survival rates are high. Note that the factor $\mu / \hat{\mu}^*$ is the ratio of resting to questing death rates, and is always less than 1.
The remaining controllable parameters $\Gamma,m_i, \gamma_i$ are much easier to deal with, because only a single term in the expression for $\mathcal R_0$ depends on each of these parameters. The calculation of the corresponding sensitivity indices is straightforward, using the additive sensitivity rule: $$\label{eq:otherSens}
\varUpsilon_{\Gamma}^{\mathcal R_0} = 1;\qquad
\varUpsilon_{m_i}^{\mathcal R_0} =\frac{b_i^* u_i}{U} ;\qquad
\varUpsilon_{\gamma_i}^{\mathcal R_0} = -\frac{b_i^* u_i}{U} \left(\frac{\gamma_i}{\gamma_i+\nu_i}\right).$$ These expressions imply that $\sum_i \varUpsilon_{m_i}^{\mathcal R_0} = 1$ and $-1 < \sum_i \varUpsilon_{\gamma_i}^{\mathcal R_0} < 0$. In practical situations, the death rate $\nu_i$ is much less than the cure rate $\gamma_i$, which implies that the sensitivities of $\mathcal R_0$ on $m_i$ and $\gamma_i$ are roughly equal but opposite in sign. Both are proportional to $\frac{b_i^* u_i}{U}$, which indicates that measures targeting $m_i$ or $\gamma_i$ applied to groups for which $u_i$ is relatively large (compared to the population average $U$) will have a relatively greater effect on $\mathcal R_0$. Since $u_i$ is proportional to $m_i$ and to $(\gamma_i + \nu_i)^{-1}$, it follows that control measures that target $m_i$ and $\gamma_i$ will produce diminishing returns.
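The expressions for $\varUpsilon_{m_i}^{\mathcal R_0}$ and $\varUpsilon_{\gamma_i}^{\mathcal R_0}$ above are easy to validate against finite differences. In the sketch below every parameter value is an arbitrary illustration (not a fitted estimate), and $\mathcal R_0$ is treated as proportional to $U$, which is all that matters for indices with respect to $m_i$ and $\gamma_i$:

```python
# Three-group toy configuration; every number is an arbitrary illustration.
b  = [0.5, 0.3, 0.2]             # EE host-group proportions b_i* (sum to 1)
c  = [0.9, 0.8, 0.7]             # transmission probabilities c_i
f  = [0.4, 0.5, 0.6]             # biting success frequencies f_i
m  = [0.3, 0.2, 0.5]             # infection parameters m_i
g  = [0.05, 0.07, 0.10]          # cure rates gamma_i
nu = [0.001, 0.002, 0.001]       # death rates nu_i

def U_of(mvals, gvals):
    """U = sum_i b_i* u_i with u_i = c_i f_i m_i / (gamma_i + nu_i)."""
    return sum(bi * ci * fi * mi / (gi + ni)
               for bi, ci, fi, mi, gi, ni in zip(b, c, f, mvals, gvals, nu))

def sens_num(u, p, h=1e-6):
    """Normalized sensitivity index by central difference."""
    return (u(p * (1 + h)) - u(p * (1 - h))) / (2 * p * h) * p / u(p)

u_i = [ci * fi * mi / (gi + ni)
       for ci, fi, mi, gi, ni in zip(c, f, m, g, nu)]
U = U_of(m, g)

# Indices with respect to m_i: should equal b_i* u_i / U, and sum to 1.
S_m = []
for i in range(3):
    def R0_mi(x, i=i):
        mm = list(m); mm[i] = x
        return U_of(mm, g)
    S_m.append(sens_num(R0_mi, m[i]))
    assert abs(S_m[i] - b[i] * u_i[i] / U) < 1e-6

assert abs(sum(S_m) - 1.0) < 1e-6

# Indices with respect to gamma_i: should equal -(b_i* u_i / U) g_i/(g_i+nu_i).
for i in range(3):
    def R0_gi(x, i=i):
        gg = list(g); gg[i] = x
        return U_of(m, gg)
    S_g = sens_num(R0_gi, g[i])
    assert abs(S_g + b[i] * u_i[i] / U * g[i] / (g[i] + nu[i])) < 1e-6
```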
Sensitivities of $I_q^\star$ and $I_j^\star$ with respect to controllable parameters {#subsec:secsamiqc}
------------------------------------------------------------------------------------
In cases where it is not feasible to reduce $\mathcal{R}_0$ below 1, the target of a malaria control strategy should be to reduce $I_j^*$ as much as possible for as many host groups $j$ as possible, particularly those host groups for which infection is most dangerous, such as pregnant women and infants. As shown above, the infection levels $I_j^*$ may be conveniently expressed in terms of $I_q^\star$. Thus we first calculate the sensitivities of $I_q^*$ with respect to controllable parameters.
After some calculations, we find the sensitivity of $I_q^*$ with respect to $\Gamma$: $$\varUpsilon_\Gamma^{I_q^\star} = \left( \frac{a I_q^*}{Z} \sum_{i=1}^n \frac{b_i^\star z_i^2}{c_if_i } +
(1-t) \tilde{t} \left(1 - \frac{a I_q^*}{Z} \sum_{i=1}^n \frac{b_i^\star z_i^2}{c_if_i } \right)
\right)^{-1},$$ where
$$z_i \equiv \frac{m_ic_if_i}{(\nu_i+\gamma_i)H^*+am_iI_q^*}; \qquad Z \equiv \sum_{i=1}^n b_i^*z_i ; \qquad \tilde{t} \equiv \frac{I_q^\star}{ \Gamma \widehat{\mathcal R}_0 \hat{\mu}^\star}=\frac{I_q^\star}{\bar{I}_q^\star}\frac{f_q^\star(1-f_q^\star f_r)}{1-f_q^\star},$$
where $0 \le \tilde{t} \le 2$ (note $1 \le \frac{1-f_q^\star f_r}{1-f_q^\star}\le 2$ since $f_r \ge f_q^*$). In terms of $\varUpsilon_\Gamma^{I_q^\star}$, the sensitivities with respect to $p_i \in \{f_i,k_i,m_i,\gamma_i\}$ may be found, after extended calculations, as: $$\label{eq:sensIq_wrt_pj}
\varUpsilon_{p_i}^{I_q^\star} =
\varUpsilon_{\Gamma}^{I_q^\star} \left(\varUpsilon_{p_i}^{\widehat{\mathcal R}_0}
+ t\varUpsilon_{p_i}^{t}\tilde{t} + \varUpsilon_{p_i}^{\hat{\mu}^\star}(1-t)\tilde{t}
+ \frac{b_i^*z_i}{Z} \varUpsilon_{p_i}^{z_i}
\left(1 - (1-t)\tilde{t} \right)
\right).$$ Specializing to $p_i = k_i,f_i,m_i, \gamma_i$ in turn gives: $$\begin{aligned}
\varUpsilon_{k_i}^{I_q^\star} &=
b_i^* \varUpsilon_{\Gamma}^{I_q^\star} \left(\frac{a k_i }{\hat{\mu}^*}\right)
\left( -(\ell+ 2)(1-f_q^\star)- 2(1-t)f_q^\star
- t \tilde{t} \left(\frac{f_rf_q^*(1-f_q^*)}{1-f_rf_q^*}\right) + (1-t)\tilde{t}
\right);\label{eq:sensIqstar_on_ki}\\
\varUpsilon_{f_i}^{I_q^\star} &=
b_i^* \varUpsilon_{\Gamma}^{I_q^\star} \left(\,\, \frac{af_i}{\varpi^*}\left(\ell (1-f_q^\star) - 2tf_q^\star
+ t\left(\frac{f_rf_q^*(1-f_q^*)}{1-f_q^*f_r}\right) \right)\tilde{t}
+ \frac{z_i}{Z}
\left(1 - (1-t)\tilde{t} \right)
\right);\label{eq:sensIqstar_on_fi}\\
\varUpsilon_{m_i}^{I_q^*} &= b_i^*\varUpsilon_{\Gamma}^{I_q^*} \left(\frac{z_i}{Z}\right) \left( 1 + \frac{a f_i I_q^*}{(\nu_i+\gamma_i)H^*+am_iI_q^*} \right) (1 - (1-t)\tilde{t} ); \label{eq:sensIqstar_on_mi}\\
\varUpsilon_{\gamma_i}^{I_q^*} &= -b_i^* \varUpsilon_{\Gamma}^{I_q^*}\left(\frac{z_i}{Z}\right) \left( \frac{ H^*\gamma_i }{(\nu_i+\gamma_i)H^*+am_iI_q^*}\right) (1 - (1-t)\tilde{t} ).\label{eq:sensIqstar_on_gammai}\end{aligned}$$ Similar calculations give the sensitivity of $I_q^*$ with respect to $\mu$: $$\begin{aligned}
\varUpsilon_\mu^{I_q^\star} &= \varUpsilon_\Gamma^{I_q^*} \left( \varUpsilon_\mu^{\mathcal R_0}+(1-t)\tilde{t}\frac{\mu }{\hat{\mu}^\star} \right).\end{aligned}$$
The expression for $I_j^*$ in terms of $I_q^*$ can then be applied to find the sensitivities of $I_j^*$ for host group $j$: $$\begin{aligned}
\varUpsilon_{k_i}^{I_j^\star} &= \varUpsilon_{k_i}^{I_q^*}\left(1 - \frac{I_j^*}{H^*}\right) \\
\varUpsilon_{f_i}^{I_j^\star} &= \varUpsilon_{f_i}^{I_q^*}\left(1 - \frac{I_j^*}{H^*}\right) \\
\varUpsilon_{\gamma_i}^{I_j^\star} &= \varUpsilon_{\gamma_i}^{I_q^*}\left(1 - \frac{I_j^*}{H^*}\right) - \delta_{ij} \frac{\gamma_i I_i^*}{am_i I_q^*} \\
\varUpsilon_{m_i}^{I_j^\star} &= (\delta_{ij}+\varUpsilon_{m_i}^{I_q^*})\left(1 - \frac{I_j^*}{H^*}\right) \\
\varUpsilon_{\mu}^{I_j^\star} &= \varUpsilon_{\mu}^{I_q^*}\left(1 - \frac{I_j^*}{H^*}\right) \end{aligned}$$
In general, we would expect that the infected population of any particular group is a relatively small proportion of the total population, so that $\left(1 - \frac{I_j^*}{H^*}\right) \approx 1$. This in turn implies that the sensitivities of $I_j^*$ with respect to the variables $k_i, f_i$ and $\mu$ are almost identical to the corresponding sensitivities of $I_q^*$. This reflects the fact that $k_i, f_i$ and $\mu$ have only an indirect effect on $I_j^*$, via their influence on the variable $I_q^*$. On the other hand, the treatment rate $\gamma_j$ and infection rate $m_j$ have direct effects on group $j$, as expected.
From the expressions above we may see that the variable $z_i$ plays a similar role in the sensitivities of $I_q^*$ and $I_j^*$ as the variable $u_i$ plays in the sensitivities of $\mathcal R_0$. Groups $i$ with larger $z_i$ values will exhibit larger sensitivities with respect to $f_i, m_i,$ and $\gamma_i$.
Discussion {#sec.discuss}
==========
We have developed and rigorously analyzed a model of the dynamics of malaria transmission within a system comprising populations of vectors and human hosts, where hosts are subdivided into several groups depending on the way they usually protect themselves against mosquito bites. The model includes essential parameters ($k_i, f_i, m_i, \mu,$ and $\gamma_i$) that are targeted by various realistic control strategies. These values determine the model’s prediction of the basic reproduction number $\mathcal R_0$, and we have established that the DFE of the model is GAS provided that $\mathcal R_0\leq 1$. We have also shown that there is a unique EE when $\mathcal R_0 > 1$, and characterized the level of endemicity in that case. The level of endemicity among the human population groups is largely determined by the size of the infected questing vector population at the endemic equilibrium, denoted $I_q^\star$. We do not have an explicit expression for $I_q^\star$, but we have computed an upper bound. This upper bound is a decreasing function of the extrinsic incubation period, which is proportional to the number of questing-resting cycles $\ell$.
The sensitivities derived in the previous sections have relatively complicated expressions. However, they simplify considerably in certain limiting cases: for example, if $1-f_r \ll 1-f_q^*$ then $t \approx 0$, and many terms in the sensitivities disappear. Even in the general case, notwithstanding the complexity of the expressions, we may still extract some useful information. Most sensitivities are proportional to the controllable parameter they depend on, which implies diminishing returns as control measures drive the parameter value down. This argues for a comprehensive strategy that targets several parameters, rather than focusing on just one. Table \[tab.tabvd3\] shows that some control measures target multiple parameters: in particular, ITNs influence three different control parameters. The model suggests that ameliorative effects from multiple parameters will be multiplied, which would seem to point to ITNs as the most effective option. However, the model does not take into account the fact that a significant fraction of ITN owners do not use their bed nets, for various reasons [@atieli2011insecticide; @russell2015determinants]. Naturally this compliance issue must be addressed to achieve the full potential benefit of ITNs.
We have also seen that, as far as different groups are concerned, those with large $u_i$ values (where $u_i = \frac{c_i f_i m_i}{\gamma_i + \nu_i}$) should be targeted to produce the greatest impact on $\mathcal R_0$, while those with large $z_i$ values (where $z_i = \frac{c_i f_i m_i}{(\gamma_i + \nu_i)H^* + am_iI_q^*}$) should be targeted to produce the greatest overall impact on $I_q^*$ (and hence on the overall $I_j^*$ levels for all groups). Finally, we note the multiplier effect of the number of vector stages on several sensitivities, including those with respect to the vector kill rate ($k_i$), biting success frequency ($f_i$), and overall death rate ($\mu$). This argues favorably for strategies that target these parameters (such as bed nets, treated or untreated, IRS, and outdoor vector control) over strategies that reduce the infection probability or increase the recovery rate of bitten humans (like IPT and RDT). Although untreated bed nets are not as effective as ITNs (roughly half as effective, since they only affect $f_i$ and $m_i$ but not $k_i$), they still exhibit this multiplier effect, and the fact that they have no associated environmental and health risks may make them an attractive option.
Conclusion and perspective {#sec.conclusion}
==========================
The model presented in this paper represents an improvement over previous models that do not divide the human population into groups, or take into account multiple vector stages. The model does not take geographical effects into account, and in particular does not consider migration between geographically separated populations which may have different levels of prevalence. The effects of population displacements are very significant in practice, and research is ongoing to take them into account.
[10]{}
Dariush Akhavan, Philip Musgrove, Alexandre Abrantes, and Renato d’A Gusm[ã]{}o. Cost-effective malaria control in Brazil: cost-effectiveness of a malaria control program in the Amazon basin of Brazil, 1988–1996. , 49(10):1385–1399, 1999.
Harrysone E Atieli, Guofa Zhou, Yaw Afrane, Ming-Chieh Lee, Isaac Mwanzo, Andrew K Githeko, and Guiyun Yan. Insecticide-treated net (ITN) ownership, usage, and malaria transmission in the highlands of western Kenya. , 4(1):113, 2011.
Olatunji Joshua Awoleye and Chris Thron. Improving access to malaria rapid diagnostic test in Niger State, Nigeria: an assessment of implementation up to 2013. , 2016, 2016.
A. D. Barbour. Macdonald’s model and the transmission of bilharzia. , 72(1):6–15, 1978.
M Nabie Bayoh, Derrick K Mathias, Maurice R Odiere, Francis M Mutuku, Luna Kamau, John E Gimnig, John M Vulule, William A Hawley, Mary J Hamel, and Edward D Walker. Anopheles gambiae: historical population decline associated with regional distribution of insecticide-treated bed nets in western Nyanza Province, Kenya. , 9(1):62, 2010.
John C Beier, Joseph Keating, John I Githure, Michael B Macdonald, Daniel E Impoinvil, and Robert J Novak. Integrated vector management for malaria control. , 7(1):S4, 2008.
A Berman and R. J. Plemmons. , volume 9 of [*Classics in Applied Mathematics*]{}. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1994. Revised reprint of the 1979 original.
Nora J Besansky, Catherine A Hill, and Carlo Costantini. No accounting for taste: host preference in malaria vectors. , 20(6):249–251, 2004.
N. P. Bhatia and G. P. Szeg[ö]{}. . Springer-Verlag, 1970.
P Carnevale and R Vincent. . IRD, 2009.
N. Chitnis. . PhD thesis, University of Arizona, 2005.
Nakul Chitnis, James M Hyman, and Jim M Cushing. Determining important parameters in the spread of malaria through the sensitivity analysis of a mathematical model. , 70(5):1272, 2008.
John E Ehiri and Ebere C Anyanwu. Mass use of insecticide-treated bednets in malaria endemic poor countries: public health concerns and remedies. , 25(1):9–22, 2004.
Thomas G Floore. Mosquito larval control practices: past and present. , 22(3):527–533, 2006.
D Fontenille, L Lochouarn, N Diagne, C Sokhna, J J Lemasson, M Diatta, L Konate, F Faye, C Rogier, and J F Trape. High annual and seasonal variations in malaria transmission by anophelines and vector species composition in [D]{}ielmo, a holoendemic area in [S]{}enegal. , 56:247–53, 1997.
D. Gollin and C. Zimmermann. Malaria: Disease impacts and long-run income differences. IZA Discussion Papers 2997, Institution for the Study of Labor (IZA), August 2007.
H. Guo, M.Y. Li, and Z. Shuai. , 14(3):259–284, 2006.
H. Guo, M.Y. Li, and Z. Shuai. A graph-theoretic approach to the method of global [L]{}yapunov functions. , 2008.
William A Hawley, Penelope A Phillips-Howard, Feiko O ter Kuile, Dianne J Terlouw, John M Vulule, Maurice Ombok, Bernard L Nahlen, John E Gimnig, Simon K Kariuki, Margarette S Kolczak, et al. Community-wide effects of permethrin-treated bed nets on child mortality and malaria morbidity in western Kenya. , 68(4\_suppl):121–127, 2003.
J. A. Jacquez and C. P. Simon. Qualitative theory of compartmental systems. , 35(1):43–79, 1993.
J. C. Kamgang, V. C. Kamla, and S. Y. Tchoumi. Modeling the dynamics of malaria transmission with bed net protection perspective. , 5(19):3156–3205, 11 2014.
J. C. Kamgang and G. Sallet. Computation of threshold conditions for epidemiological models and global stability of the disease free equilibrium. , 213(1):1–12, 2008.
Jennifer Keiser, Burton H Singer, and J[ü]{}rg Utzinger. Reducing the burden of malaria in different eco-epidemiological settings with environmental management: a systematic review. , 5(11):695–708, 2005.
A. Korobeinikov. , 14(6):697–699, 2001.
A. Korobeinikov. Lyapunov functions and global properties for [SEIR]{} and [SEIS]{} models. , 21:75–83, 2004.
A. Korobeinikov and P. K. Maini. A [L]{}yapunov function and global properties for [SIR]{} and [SEIR]{} epidemiological models with nonlinear incidence. , 1(1):57–60, 2004.
A. Korobeinikov and G. C. Wake. , 15(8):955–960, 2002.
J. P. LaSalle. Stability theory for ordinary differential equations. , 41:57–65, 1968.
J. P. LaSalle. . Society for Industrial and Applied Mathematics, Philadelphia, Pa., 1976. With an appendix: “Limiting equations and stability of nonautonomous ordinary differential equations” by Z. Artstein, Regional Conference Series in Applied Mathematics.
J. P. LaSalle. Stability theory and invariance principles. In [*Dynamical systems (Proc. Internat. Sympos., Brown Univ., Providence, R.I., 1974), Vol. I*]{}, pages 211–222. Academic Press, New York, 1976.
Clare E Lawrance and Ashley M Croft. Do mosquito coils prevent malaria? a systematic review of trials. , 11(2):92–96, 2004.
J. Li, D. Blakeley, and R. J. Smith. The failure of $R_0$. , May 2011.
D. G. Luenberger. John Wiley & Sons Ltd., 1979.
Z. Ma, J. Liu, and J. Li. Stability analysis for differential infectivity epidemic models. , 4(5):841–856, 2003.
Marta F Maia, Merav Kliner, Martha Richardson, Christian Lengeler, and Sarah J Moore. Mosquito repellents for malaria prevention. , 4(CD011595), 2015.
C. C. McCluskey. Lyapunov functions for tuberculosis models with fast and slow progression. , to appear, 2006.
C. C. McCluskey. Global stability of a class of mass action systems allowing for latency in tuberculosis. , 2007.
C.C. McCluskey. , 181(1):1–16, 2003.
C.C. McCluskey. , 409:100–110, 2005.
C.C. McCluskey and P. van den Driessche. , 16(1):139–166, 2004.
Benjamin D Menze, Jacob M Riveron, Sulaiman S Ibrahim, Helen Irving, Christophe Antonio-Nkondjio, Parfait H Awono-Ambene, and Charles S Wondji. Multiple insecticide resistance in the malaria vector Anopheles funestus from northern Cameroon is mediated by metabolic resistance alongside potential target site insensitivity mutations. , 11(10):e0163261, 2016.
Chantal M Morel, Jeremy A Lauer, and David B Evans. Cost effectiveness analysis of strategies to combat malaria in developing countries. , 331(7528):1299, 2005.
A. G. Ngwa and W.S. Shu. . , 32:747–763, 2000.
World Health Organization. . World Health Organization, 2015.
Bianca Pluess, Frank C Tanser, Christian Lengeler, and Brian L Sharp. Indoor residual spraying for preventing malaria. , 4(4), 2010.
Hilary Ranson, Raphael N’Guessan, Jonathan Lines, Nicolas Moiroux, Zinga Nkuni, and Vincent Corbel. Pyrethroid resistance in African anopheline mosquitoes: what are the implications for malaria control? , 27(2):91–98, 2011.
C Rogier, A Tall, N Diagne, D Fontenille, A Spiegel, and J F Trape. clinical malaria: lessons from longitudinal studies in [S]{}enegal. , 41(1-3):255–9, 2000.
R. Ross. . John Murray, 1911.
Cheryl L Russell, Adamu Sallau, Emmanuel Emukah, Patricia M Graves, Gregory S Noland, Jeremiah M Ngondi, Masayo Ozaki, Lawrence Nwankwo, Emmanuel Miri, Deborah A McFarland, et al. Determinants of bed net use in southeast Nigeria following mass distribution of LLINs: implications for social behavior change interventions. , 10(10):e0139447, 2015.
Tanya L Russell, Nicodem J Govella, Salum Azizi, Christopher J Drakeley, S Patrick Kachur, and Gerry F Killeen. Increased proportions of outdoor feeding among residual malaria vector populations following increased use of insecticide-treated nets in rural Tanzania. , 10(1):80, 2011.
Brian L Sharp, Immo Kleinschmidt, Elisabeth Streat, Rajendra Maharaj, Karen I Barnes, David N Durrheim, Frances C Ridl, Natasha Morris, Ishen Seocharan, Simon Kunene, et al. Seven years of regional malaria control collaboration—Mozambique, South Africa, and Swaziland. , 76(1):42–47, 2007.
Samuel Shillcutt, Chantal Morel, Catherine Goodman, Paul Coleman, David Bell, Christopher JM Whitty, and A Mills. Cost-effectiveness of malaria diagnostic methods in sub-Saharan Africa in an era of combination therapy. , 86:101–110, 2008.
J. J. Tewa, J. L. Dimi, and S. Bowong. Lyapunov functions for a dengue disease transmission model. , 39(2):936–941, 2009.
J.J. Tewa, R. Fokouop, Mewoli, and S. Bowong. Mathematical analysis of a general class of ordinary differential equations coming from within-hosts models of malaria with immune effectors. , 218(14):7347–7361, March 2012.
J[ü]{}rg Utzinger, Yesim Tozan, and Burton H Singer. Efficacy and cost-effectiveness of environmental management for malaria control. , 6(9):677–687, 2001.
P. van den Driessche and J. Watmough. Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission. , 180:29–48, 2002.
K Walker and M Lynch. Contributions of anopheles larval control to malaria suppression in tropical Africa: review of achievements and potential. , 21(1):2–21, 2007.
Michael T White, Lesong Conteh, Richard Cibulskis, and Azra C Ghani. Costs and cost-effectiveness of malaria control interventions-a systematic review. , 10(1):337, 2011.
WHO. World malaria report 2013. Technical report, WHO, Dec 2013.
Anne L Wilson et al. A systematic review and meta-analysis of the efficacy and safety of intermittent preventive treatment of malaria in children (IPTc). , 6(2):e16976, 2011.
Eve Worrall and Ulrike Fillinger. Large-scale use of mosquito larval source management for malaria control in africa: a cost analysis. , 10(1):338, 2011.
Mekonnen Yohannes, Mituku Haile, Tedros A Ghebreyesus, Karen H Witten, Asefaw Getachew, Peter Byass, and Steve W Lindsay. Can source reduction of mosquito larval habitat reduce malaria transmission in Tigray, Ethiopia? , 10(12):1274–1285, 2005.
Lin Zhu, G[ü]{}nter C M[ü]{}ller, John M Marshall, Kristopher L Arheart, Whitney A Qualls, WayWay M Hlaing, Yosef Schlein, Sekou F Traore, Seydou Doumbia, and John C Beier. Is outdoor vector control needed for malaria elimination? an individual-based modelling study. , 16(1):266, 2017.
P. Zongo. . PhD thesis, Universite de Ouagadougou, 2009.
[^1]: UMI 209 IRD/UPMC UMMISCO, Bondy, Projet MASAIE INRIA Grand Est, France
[^2]: LIRIMA – GRIMCAPE, Cameroun
[^3]: Corresponding author Jean Claude Kamgang email: jckamgang@gmail.com
[Laboratoire Leibniz-IMAG, 46 Avenue Félix Viallet,\
38031 Grenoble Cedex, France.\
(frederic.maffray@imag.fr, nicolas.trotignon@imag.fr)]{}
February 10, 2005
> We consider the class ${\cal A}$ of graphs that contain no odd hole, no antihole of length at least $5$, and no “prism” (a graph consisting of two disjoint triangles with three disjoint paths between them) and the class ${\cal A}'$ of graphs that contain no odd hole, no antihole of length at least $5$, and no odd prism (prism whose three paths are odd). These two classes were introduced by Everett and Reed and are relevant to the study of perfect graphs. We give polynomial-time recognition algorithms for these two classes. We proved previously that every graph $G\in{\cal
> A}$ is “perfectly contractile”, as conjectured by Everett and Reed \[see the chapter “Even pairs” in the book [*Perfect Graphs*]{}, J.L. Ramírez-Alfonsín and B.A. Reed, eds., Wiley Interscience, 2001\]. The analogous conjecture concerning graphs in ${\cal A}'$ is still open.
Introduction
============
A graph $G$ is *perfect* if every induced subgraph $G'$ of $G$ satisfies $\chi(G')=\omega(G')$, where $\chi(G')$ is the chromatic number of $G'$ and $\omega(G')$ is the maximum clique size in $G'$. Berge [[@ber60; @ber61; @ber85]]{} introduced perfect graphs and conjectured that *a graph is perfect if and only if it does not contain as an induced subgraph an odd hole or an odd antihole* (the Strong Perfect Graph Conjecture), where a *hole* is a chordless cycle with at least four vertices and an *antihole* is the complement of a hole. We follow the tradition of calling *Berge graph* any graph that contains no odd hole and no odd antihole. The Strong Perfect Graph Conjecture was the object of much research (see the book [@ramree01]), until it was finally proved by Chudnovsky, Robertson, Seymour and Thomas [@CRST2002]: *Every Berge graph is perfect*. Moreover, Chudnovsky, Cornuéjols, Liu, Seymour and Vušković [@CCLSV2002; @CLV2002; @CS2002] gave polynomial-time algorithms to decide if a graph is Berge.
Despite those breakthroughs, some conjectures about Berge graphs remain open. An *even pair* in a graph $G$ is a pair of non-adjacent vertices such that every chordless path between them has even length (number of edges). Given two vertices $x,y$ in a graph $G$, the operation of *contracting* them means removing $x$ and $y$ and adding one vertex with edges to every vertex of $G\setminus
\{x,y\}$ that is adjacent in $G$ to at least one of $x,y$; we denote by $G/xy$ the graph that results from this operation. Fonlupt and Uhry [@fonuhr82] proved that *if $G$ is a perfect graph and $\{x,y\}$ is an even pair in $G$, then the graph $G/xy$ is perfect and has the same chromatic number as $G$*. In particular, given a $\chi(G/xy)$-coloring $c$ of the vertices of $G/xy$, one can easily obtain a $\chi(G)$-coloring of the vertices of $G$ as follows: keep the color for every vertex different from $x,y$; assign to $x$ and $y$ the color assigned by $c$ to the contracted vertex. This idea could be the basis for a conceptually simple coloring algorithm for Berge graphs: as long as the graph has an even pair, contract any such pair; when there is no even pair find a coloring $c$ of the contracted graph and, applying the procedure above repeatedly, derive from $c$ a coloring of the original graph. The polynomial-time algorithm for recognizing Berge graphs mentioned at the end of the preceding paragraph can be used to detect an even pair in a Berge graph $G$; indeed, two non-adjacent vertices $a,b$ form an even pair in $G$ if and only if the graph obtained by adding a vertex adjacent only to $a$ and $b$ is Berge. The problem of deciding if a graph contains an even pair is NP-hard in general graphs [@bien]. Given a Berge graph $G$, one can try to color its vertices by keeping contracting even pairs until none can be found. Then some questions arise: what are the Berge graphs with no even pair? What are, on the contrary, the graphs for which a sequence of even-pair contractions leads to graphs that are easy to color?
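As a minimal illustrative sketch (our own, not from the literature), the contraction $G/xy$ is straightforward to implement on graphs stored as adjacency dictionaries; contracting the even pair formed by two opposite vertices of a hole of length 4 yields a path, whose chromatic number is still 2:

```python
def contract(G, x, y):
    """Contract non-adjacent vertices x and y of G (adjacency dict of sets):
    remove x and y, and add one vertex adjacent to every former neighbour
    of x or y."""
    z = (x, y)                                   # label for the new vertex
    H = {v: {w for w in N if w not in (x, y)}
         for v, N in G.items() if v not in (x, y)}
    H[z] = (G[x] | G[y]) - {x, y}
    for v in H[z]:
        H[v].add(z)
    return H

# C4 (a hole of length 4): the two non-adjacent vertex pairs are even pairs.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
H = contract(C4, 0, 2)
# The result is the path 1 - (0,2) - 3: still perfect, chi unchanged (= 2).
assert H[(0, 2)] == {1, 3}
assert H[1] == {(0, 2)} and H[3] == {(0, 2)}
```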
As a first step towards getting a better grasp on these questions, Bertschi [@ber90] proposed the following definitions. A graph $G$ is *even-contractile* if either $G$ is a clique or there exists a sequence $G_0, \ldots, G_k$ of graphs such that $G=G_0$, for $i=0,
\ldots, k-1$ the graph $G_i$ has an even pair $\{x_i, y_i\}$ such that $G_{i+1}=G_i/x_iy_i$, and $G_k$ is a clique. A graph $G$ is *perfectly contractile* if every induced subgraph of $G$ is even-contractile. Perfectly contractile graphs include many classical families of perfect graphs, such as Meyniel graphs, weakly chordal graphs, perfectly orderable graphs, see [@epsbook]. Everett and Reed proposed a conjecture aiming at a characterization of perfectly contractile graphs. To understand it, one more definition is needed: say that a graph is a *prism* if it consists of two vertex-disjoint triangles (cliques of size $3$) $\{a_1, a_2, a_3\}$, $\{b_1, b_2, b_3\}$, with three vertex-disjoint paths $P_1, P_2, P_3$ between them, such that for $i=1, 2, 3$ path $P_i$ is from $a_i$ to $b_i$, and with no other edge than those in the two triangles and in the three paths. We may also say that the three paths $P_1, P_2, P_3$ *form* the prism. Say that a prism is odd (or even) if all three paths have odd length (respectively all have even length). See Figure \[fig:prisms\].
[Figure \[fig:prisms\]: two odd prisms and an even prism.]
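As a sanity check of this definition, the following sketch (our own helper, with graphs given as dictionaries of adjacency sets) verifies that three pairwise vertex-disjoint paths, given as vertex lists, form a prism, and reports its parity:

```python
def forms_prism(adj, P1, P2, P3):
    """Test whether chordless, pairwise vertex-disjoint paths P1, P2, P3
    (vertex lists, each of length >= 1) form a prism: their endpoint
    triples induce triangles and there are no other edges between the
    paths.  Returns 'odd', 'even', 'mixed', or None if not a prism."""
    paths = [P1, P2, P3]
    a = [P[0] for P in paths]
    b = [P[-1] for P in paths]
    # the two triples of endpoints must induce triangles
    for u, v in [(a[0], a[1]), (a[0], a[2]), (a[1], a[2]),
                 (b[0], b[1]), (b[0], b[2]), (b[1], b[2])]:
        if v not in adj[u]:
            return None
    # each path must be chordless: exactly the consecutive pairs are edges
    for P in paths:
        for i in range(len(P)):
            for j in range(i + 1, len(P)):
                if (P[j] in adj[P[i]]) != (j == i + 1):
                    return None
    # between two paths, the only edges are the two triangle edges
    for i in range(3):
        for j in range(i + 1, 3):
            for u in paths[i]:
                for v in paths[j]:
                    if v in adj[u] and {u, v} not in ({a[i], a[j]}, {b[i], b[j]}):
                        return None
    lengths = [len(P) - 1 for P in paths]
    if all(l % 2 == 1 for l in lengths):
        return 'odd'
    if all(l % 2 == 0 for l in lengths):
        return 'even'
    return 'mixed'   # a prism with mixed parities contains an odd hole
```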
Define two classes $\cal A$, ${\cal A}'$ of graphs as follows:
- ${\cal A}$ is the class of graphs that do not contain odd holes, antiholes of length at least $5$, or prisms.
- ${\cal A}'$ is the class of graphs that do not contain odd holes, antiholes of length at least $5$, or odd prisms.
Clearly ${\cal A}\subset {\cal A}'$.
\[conj:pc\] A graph is perfectly contractile if and only if it is in class ${\cal
A}'$.
The *if* part of this conjecture remains open. The *only if* part is not hard to establish, but it requires some careful checking; this was done formally in [@linmafree97]. A weaker form of this conjecture was also proposed by Everett and Reed; that statement is now a theorem:
\[thm:maftro\] If $G$ is a graph in class ${\cal A}$ and $G$ is not a clique, then $G$ has an even pair whose contraction yields a graph in ${\cal A}$ (and so $G$ is perfectly contractile).
The preceding conjecture and theorem suggest that it may be interesting to recognize the classes $\cal A$ and ${\cal A}'$ in polynomial time; this is the aim of this manuscript.
In order to decide if a graph is in class $\cal A$, it would suffice to decide separately if it is Berge, if it has an antihole of length at least $5$, and if it contains a prism. The first question, deciding if a graph is Berge, is now settled [@CCLSV2002; @CS2002; @CLV2002]. In Section \[sec:berge\] we will find it convenient for our purposes to give a summary of the polynomial-time algorithm from [@CCLSV2002; @CS2002] that solves this problem. The second question is not hard: to decide if a graph $G$ contains a hole of length at least $5$, it suffices to test, for every chordless path $a$-$b$-$c$, whether $a$ and $c$ are in the same connected component of the subgraph of $G$ obtained by removing the vertices of $N(a)\cap
N(c)$ and those of $N(b)\setminus\{a, c\}$. This takes time $O(|V(G)|^5)$. To decide if a graph contains an antihole of length at least $5$, we need only apply this algorithm on its complementary graph. However, the third question, to decide if a graph contains a prism, turns out to be NP-complete; this is established in Section \[sec:npc\] below. Likewise, we will see that it is NP-complete to decide if a graph contains an odd prism. Thus we cannot solve the recognition problem for class $\cal A$ (or for class ${\cal A}'$) in the fashion that is suggested at the beginning of this paragraph. Instead, we will adapt the Berge graph recognition algorithm to our purpose. This is done in Sections \[sec:pyrpri\]–\[sec:aprime\].
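The test just described can be sketched as follows, again with graphs given as dictionaries of adjacency sets (`has_long_hole` and `complement` are our names, chosen for illustration):

```python
from collections import deque
from itertools import combinations

def has_long_hole(adj):
    """Decide if the graph contains a hole of length at least 5: for
    every chordless path a-b-c, delete N(a) & N(c) and N(b) - {a, c},
    and test whether a and c remain in the same component."""
    for b in adj:
        for a, c in combinations(adj[b], 2):
            if c in adj[a]:
                continue                      # a-b-c must be chordless
            banned = (adj[a] & adj[c]) | (adj[b] - {a, c})
            seen, queue = {a}, deque([a])     # BFS from a avoiding `banned`
            while queue:
                for w in adj[queue.popleft()] - banned - seen:
                    seen.add(w)
                    queue.append(w)
            if c in seen:
                return True
    return False

def complement(adj):
    V = set(adj)
    return {v: V - adj[v] - {v} for v in adj}

def has_long_antihole(adj):
    """Antiholes of length at least 5 are holes of the complement."""
    return has_long_hole(complement(adj))
```

Any common neighbour of $a$ and $c$ is deleted, so a surviving path from $a$ to $c$ has length at least $3$ and closes, with $a$-$b$-$c$, a hole of length at least $5$.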
Recognizing Berge graphs {#sec:berge}
========================
We give here a brief outline of the Berge graph recognition algorithm which follows from [@CCLSV2002] and [@CS2002]. Given a graph $G$ and a hole $C$ in $G$, say that a vertex $x\in V(G)\setminus V(C)$ is a *major neighbour* of $C$ if the set $N(x)\cap V(C)$ is not included in a $3$-vertex subpath of $C$. Say that set $X\subseteq
V(G)$ is a *cleaner* for the hole $C$ if $X$ contains all the major neighbours of $C$ and $X\cap V(C)$ is included in a $3$-vertex subpath of $C$. The algorithm is based on the results summarized in the following theorem.
1. There exist five types of configurations (graphs), types T1, …, T5, such that, for $i=1, \ldots, 5$, we have: (a) if a graph $G$ contains a configuration of type T$i$ then $G$ is not a Berge graph, and (b) there exists a polynomial time algorithm A$i$ that decides if a graph contains a configuration of type T$i$.
2. There is a polynomial-time algorithm which, given a graph $G$ that does not contain a configuration of type T$i$ ($i=1, \ldots, 5$), returns a family $\cal F$ of $|V(G)|^5$ subsets of $V(G)$ such that for any shortest odd hole $C$ of $G$, some member of $\cal F$ is a cleaner for $C$.
3. There is a polynomial-time algorithm which, given a graph $G$ that does not contain a configuration of type T$i$ ($i=1, \ldots, 5$) and the family $\cal F$ produced by step 2, decides if $G$ contains an odd hole (and if it does, returns a shortest odd hole of $G$).
We will not give the definition of all five types of configurations, but we recall from [@CCLSV2002; @CS2002] that, for $i=1, \ldots, 5$, the complexity of algorithm A$i$ is respectively $O(|V(G)|^5)$, $O(|V(G)|^6)$, $O(|V(G)|^6)$, $O(|V(G)|^6)$, $O(|V(G)|^9)$. We need to dwell on the configuration of type T$5$, which is called a *pyramid* in [@CS2002]. A pyramid is a graph that consists of three pairwise adjacent vertices $b_1, b_2, b_3$ (called the triangle vertices of the pyramid), a fourth vertex $a$ (called the apex of the pyramid), and three chordless paths $P_1, P_2, P_3$ such that:
- For $i=1, 2, 3$, path $P_i$ is between $a$ and $b_i$;
- For $1\le i<j\le 3$, $V(P_i)\cap V(P_j)=\{a\}$ and $b_ib_j$ is the only edge between $V(P_i)\setminus \{a\}$ and $V(P_j)\setminus \{a\}$;
- $a$ is adjacent to at most one of $b_1, b_2, b_3$.
We may say that the three paths $P_1, P_2, P_3$ *form* a pyramid. It is easy to see that a pyramid contains an odd hole (since two of the paths $P_1, P_2, P_3$ have the same parity, the union of their vertex sets induces an odd hole); so Berge graphs do not contain pyramids.
The pyramid-testing algorithm from [@CS2002] is the slowest algorithm in Step 1 of the Berge graph recognition algorithm. The algorithm of Step 2 has complexity $O(|V(G)|^6)$ [@CCLSV2002], and the algorithm of Step 3 has complexity $O(|V(G)|^9)$ [@CS2002]. Testing if a graph $G$ is Berge can be done by running the algorithms described in the previous theorem on $G$ and on its complementary graph $\overline{G}$. Thus the total complexity is $O(|V(G)|^9)$.
Recognizing pyramids and prisms {#sec:pyrpri}
===============================
We present a polynomial-time algorithm that decides if a graph contains a pyramid or a prism; it has the same flavor as the pyramid-testing algorithm from [@CS2002].
If a graph contains a pyramid or a prism, it contains a pyramid or a prism that is *smallest* in the sense that there is no pyramid or prism induced by strictly fewer vertices. Smallest pyramids or prisms have properties that make them easier to handle. These properties are expressed in the next two lemmas.
\[lem:kpyr\] Let $G$ be a graph. Let $K$ be a smallest pyramid or prism in $G$. Suppose that $K$ is a pyramid, formed by paths $P_1, P_2, P_3$, with triangle $\{b_1, b_2, b_3\}$ and apex $a$. Let $R_1$ be a shortest path from $b_1$ to $a$ whose interior vertices are not adjacent to $b_2$ or $b_3$. Then the subgraph induced by $V(R_1)\cup V(P_2)\cup
V(P_3)$ is a smallest pyramid or prism in $G$.
*Proof.* Note that $|V(R_1)|\le |V(P_1)|$ since $P_1$ is a path from $b_1$ to $a$ whose interior vertices are not adjacent to $b_2$ or $b_3$. Let $P$ be the path induced by $(V(P_2) \setminus \{b_2\})\cup
(V(P_3) \setminus \{b_3\})$. If no vertex of $R_1\setminus \{a\}$ has any neighbour in $P\setminus\{a\}$, then $R_1, P_2, P_3$ form a pyramid in $G$, and its number of vertices is not larger than $|V(K)|$, so the lemma holds. So we may assume that some vertex $c$ of $R_1\setminus \{a\}$ has a neighbour in $P\setminus\{a\}$, and we choose $c$ closest to $b_1$ along $R_1$. Recall that $c$ is not adjacent to $b_2$ or $b_3$, by the definition of $R_1$. For $j=2, 3$, let $b'_j$ be the neighbour of $b_j$ along $P_j$ (so $b'_2, b'_3$ are the ends of $P$) and let $c_j$ be the neighbour of $c$ closest to $b'_j$ along $P$.
Suppose $c_2 = c_3$. We have $c_3 \neq a$ since $c$ has a neighbour in $P\setminus \{a\}$. Then the three chordless paths $c_2$-$c$-$R_1$-$b_1$, $c_2$-$P$-$b_2$, $c_2$-$P$-$b_3$ form a pyramid with triangle $\{b_1, b_2, b_3\}$ and apex $c_2$; this pyramid is strictly smaller than $K$, because it is included in $(V(R_1)\setminus
\{a\})\cup V(P_2)\cup V(P_3)$, a contradiction. So $c_2\neq c_3$. If $c_2, c_3$ are not adjacent, then the three chordless paths $c$-$R_1$-$b_1$, $c$-$c_2$-$P$-$b_2$, $c$-$c_3$-$P$-$b_3$ form a pyramid with triangle $\{b_1, b_2, b_3\}$ and apex $c$; again this pyramid has strictly fewer vertices than $K$, a contradiction. So $c_2, c_3$ are adjacent. Then the three chordless paths $c$-$R_1$-$b_1$, $c_2$-$P$-$b_2$ and $c_3$-$P$-$b_3$ form a prism $K'$, with triangles $\{b_1, b_2, b_3\}$ and $\{c, c_2, c_3\}$. If $a\notin \{c_2, c_3\}$ then $K'$ is smaller than $K$, a contradiction. So $a\in \{c_2, c_3\}$ and the prism $K'$ has the same size as $K$, so the lemma holds. $\Box$
\[lem:kpri\] Let $G$ be a graph. Let $K$ be a smallest pyramid or prism in $G$. Suppose that $K$ is a prism, formed by paths $P_1, P_2, P_3$, with triangles $\{a_1, a_2, a_3\}$ and $\{b_1, b_2, b_3\}$, so that, for $i=1, 2, 3$, path $P_i$ is from $a_i$ to $b_i$. Then:
- If $R_1$ is any shortest path from $a_1$ to $b_1$ whose interior vertices are not adjacent to $b_2$ or $b_3$, then $R_1, P_2, P_3$ form a prism of size $|V(K)|$ in $G$, with triangles $\{a_1, a_2, a_3\}$ and $\{b_1, b_2, b_3\}$.
- If $R_2$ is any shortest path from $a_1$ to $b_2$ whose interior vertices are not adjacent to $b_1$ or $b_3$, then either the three paths $P_1, R_2\setminus a_1, P_3$ form a smallest prism in $G$, or the three paths $P_1, R_2, P_3+a_1$ form a pyramid of size $|V(K)|$ in $G$, with triangle $\{b_1, b_2, b_3\}$ and apex $a_1$.
*Proof.* Let us prove the first item of the lemma. Note that $|V(R_1)|\le |V(P_1)|$ since $P_1$ is a path from $a_1$ to $b_1$ whose interior vertices are not adjacent to $b_2$ or $b_3$. Let $P$ be the path induced by $(V(P_2) \setminus \{b_2\}) \cup (V(P_3) \setminus
\{b_3\})$. If no interior vertex of $R_1$ is adjacent to any vertex of $V(P)$, then the three paths $R_1, P_2, P_3$ form a prism in $G$ whose size is not larger than the size of $K$, so it must be a smallest prism and the lemma holds. So we may assume that there is an interior vertex $c$ of $R_1$ that has a neighbour in $V(P)$ and we choose $c$ closest to $b_1$ along $R_1$. For $j=2, 3$, let $b'_j$ be the neighbour of $b_j$ along $P_j$ (so $b'_2, b'_3$ are the ends of $P$) and let $c_j$ be the neighbour of $c$ closest to $b'_j$ along $P$.
Suppose $c_2 = c_3$. Then the three paths $c_2$-$c$-$R_1$-$b_1$, $c_2$-$P$-$b_2$, $c_2$-$P$-$b_3$ form a pyramid with triangle $\{b_1,
b_2, b_3\}$ and apex $c_2$; this pyramid is strictly smaller than $K$ (since $|V(R_1\setminus\{a_1\})|< |V(P_1)|$), a contradiction. So $c_2\neq c_3$. If $c_2, c_3$ are adjacent, then the three paths $c$-$R_1$-$b_1$, $c_2$-$P$-$b_2$, $c_3$-$P$-$b_3$ form a prism, with triangles $\{b_1, b_2, b_3\}$ and $\{c, c_2, c_3\}$, that is strictly smaller than $K$, a contradiction. So $c_2, c_3$ are not adjacent. But then the three paths $c$-$R_1$-$b_1$, $c$-$c_2$-$P$-$b_2$, $c$-$c_3$-$P$-$b_3$ form a pyramid with triangle $\{b_1, b_2, b_3\}$, apex $c$, and this pyramid is strictly smaller than $K$, a contradiction. So the first item is proved.
Now we prove the second item of the lemma. Note that $|V(R_2)|\le
|V(P_2)|+1$ since $P_2+a_1$ is a path from $a_1$ to $b_2$ whose interior vertices are not adjacent to $b_2$ or $b_3$. Let $P$ be the path induced by $(V(P_1)\setminus\{b_1\})\cup (V(P_3) \setminus
\{b_3\})$. If no interior vertex of $R_2$ has any neighbour in $V(P\setminus a_1)$ then $P_1, R_2, P_3+a_1$ form a pyramid, which is not larger than $K$; so it is a smallest pyramid and the lemma holds. Now assume that some interior vertex of $R_2$ has a neighbour in $V(P\setminus a_1)$, and choose the vertex $c$ that has this property and is closest to $b_2$. For $i=1, 3$, let $b'_i$ be the neighbour of $b_i$ along $P_i$ (so $b'_1, b'_3$ are the ends of $P$) and let $c_i$ be the neighbour of $c$ along $P$ that is closest to $b'_i$.
Suppose $c_1 = c_3$. Then $c_1 \neq a_1$ since $c$ has a neighbour in $V(P\setminus a_1)$. Then the three paths $c_1$-$c$-$R_2$-$b_2$, $c_1$-$P$-$b_1$, $c_1$-$P$-$b_3$ form a pyramid with triangle $\{b_1,
b_2, b_3\}$ and apex $c_1$. This pyramid is strictly smaller than $K$, a contradiction. So $c_1\neq c_3$. If $c_1, c_3$ are not adjacent, then the three paths $c$-$R_2$-$b_2$, $c$-$c_1$-$P$-$b_1$, $c$-$c_3$-$P$-$b_3$ form a pyramid with triangle $\{b_1, b_2, b_3\}$ and apex $c$; this pyramid has size strictly smaller than $K$, a contradiction. So $c_1, c_3$ are adjacent. Then the three paths $c$-$R_2$-$b_2$, $c_1$-$P$-$b_1$, $c_3$-$P$-$b_3$ form a prism $K'$, with triangles $\{b_1, b_2, b_3\}$ and $\{c, c_1, c_3\}$. If $a_1\notin \{c_1, c_3\}$ then this prism is strictly smaller than $K$, a contradiction. So $a_1\in \{c_1, c_3\}$ and $K'$ has the same size as $K$, and the lemma holds. This completes the proof of the lemma. $\Box$
On the basis of the preceding lemmas we can present an algorithm for testing if a graph contains a pyramid or a prism.
[Detection of a pyramid or prism]{} \[alg:pyrpri\]
*Input:* A graph $G$.
*Output:* An induced pyramid or prism of $G$, if $G$ contains any; else the negative answer “$G$ contains no pyramid and no prism.”
*Method:* For every quadruple $a, b_1, b_2, b_3$ of vertices of $G$ such that $b_1, b_2, b_3$ are pairwise adjacent and $a$ is adjacent to at most one of them, do: Compute a shortest path $P_1$ from $a$ to $b_1$ whose interior vertices are not adjacent to $b_2,
b_3$, if such a path exists. Compute paths $P_2$ and $P_3$ similarly. If the three paths $P_1, P_2, P_3$ exist, and if $V(P_1)\cup V(P_2)\cup V(P_3)$ induces a pyramid or a prism, then return this subgraph of $G$, and stop.
If no quadruple has produced a pyramid or a prism, return the negative answer.
*Complexity:* $O(|V(G)|^6)$.
*Proof of correctness.* If $G$ contains no pyramid and no prism then clearly the algorithm will return the negative answer. Conversely, suppose that $G$ contains a pyramid or a prism. Let $K$ be a smallest pyramid or prism. Let $b_1, b_2, b_3$ be the vertices of a triangle of $K$, and let $a$ be such that if $K$ is a pyramid then $a$ is its apex and if $K$ is a prism then $a$ is a vertex of the other triangle of $K$. When our algorithm considers the quadruple $a,
b_1, b_2, b_3$, it will find paths $P_1, P_2, P_3$ since some paths in $K$ do have the required properties. Then, three applications of Lemmas \[lem:kpyr\] and \[lem:kpri\] imply that $P_1, P_2, P_3$ do form a pyramid or a prism of $G$. So the algorithm will detect this subgraph.
*Complexity analysis:* Testing all quadruples takes time $O(|V(G)|^4)$. For each quadruple, finding the three paths takes time $O(|V(G)|^2)$ and checking that the corresponding subgraph is a pyramid or prism takes time $O(|V(G)|)$. Thus the overall complexity is $O(|V(G)|^6)$. $\Box$
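The basic primitive in this algorithm (and in Lemmas \[lem:kpyr\] and \[lem:kpri\]) is a shortest path whose interior vertices avoid the neighbourhoods of prescribed vertices. This is an ordinary breadth-first search on a restricted vertex set; a minimal sketch, with our own helper name and the usual adjacency-set representation:

```python
from collections import deque

def restricted_shortest_path(adj, s, t, forbidden):
    """Shortest path from s to t whose interior vertices are neither
    in `forbidden` nor adjacent to a vertex of `forbidden`; returns the
    path as a vertex list, or None if no such path exists."""
    bad = set(forbidden)
    for f in forbidden:
        bad |= adj[f]                  # exclude the closed neighbourhoods
    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:                     # rebuild the path from the parents
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for w in adj[u]:
            if w not in parent and (w == t or w not in bad):
                parent[w] = u
                queue.append(w)
    return None
```

For instance, the path $P_1$ of Algorithm \[alg:pyrpri\] would be `restricted_shortest_path(adj, a, b1, {b2, b3})`; the two ends are exempt from the adjacency restriction, as in the text.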
We now show how the same detection can be performed a little faster with a simple trick.
\[lem:3sorties\] Let $H$ be a connected graph and let $V_1, V_2, V_3$ be non-empty subsets of $V(H)$. Then $H$ has an induced subgraph $F$ such that either:
1. \[fp\] $F$ is a chordless path such that, up to a permutation of $V_1, V_2, V_3$, one end of $F$ is in $V_1$, the other is in $V_3$, some vertex of $F$ is in $V_2$ and no interior vertex of $F$ is in $V_1\cup V_3$;
2. \[fc\] $F$ consists of three chordless paths $F_1, F_2, F_3$ of length at least $1$ such that: for $i=1, 2, 3$, $F_i$ is from $f$ to $v_i$ and $v_i\in V_i$; for $1\le i<j\le 3$, $V(F_i)\cap
V(F_j)=\{f\}$ and there is no edge between $F_i\setminus f$ and $F_j\setminus f$; and $F\setminus\{v_1, v_2, v_3\}$ contains no vertex of $V_1\cup V_2\cup V_3$;
3. \[ft\] $F$ consists of three vertex-disjoint chordless paths $F_1, F_2, F_3$ (possibly of length $0$) such that: for $i=1, 2, 3$, $F_i$ is from $w_i$ to $v_i$ and $v_i\in V_i$; vertices $w_1, w_2,
w_3$ are pairwise adjacent; for $1\le i<j\le 3$ there is no edge between $F_i$ and $F_j$ other than $w_iw_j$; and $F\setminus\{v_1,
v_2, v_3\}$ contains no vertex of $V_1\cup V_2\cup V_3$.
*Proof.* Let $P$ be a shortest path in $H$ such that $P$ has one end in $V_1$ and the other in $V_3$; let $v_1\in V_1, v_3\in V_3$ be the ends of $P$. Thus no interior vertex of $P$ is in $V_1\cup V_3$. If $P$ contains a vertex of $V_2$ then we have outcome \[fp\] of the lemma with $F=P$. Therefore let us assume that $P$ contains no vertex of $V_2$. Let $Q$ be a shortest path such that one end $v_2$ of $Q$ is in $V_2$ and the other end $v$ of $Q$ has a neighbour on $P$. Let $w, x$ be the neighbours of $v$ on $P$ that are closest to $v_1$ and $v_3$ respectively. Note that $Q\setminus v_2$ contains no vertex of $V_2$ by the definition of $Q$. If $Q$ contains vertices of both $V_1, V_3$ then some subpath $F$ of $Q$ contains vertices of each of $V_1, V_2, V_3$ and is minimal with this property, and so $F$ satisfies outcome \[fp\] of the lemma. If $Q$ contains vertices of $V_1$ and not of $V_3$, then $v_3$-$P$-$x$-$v$-$Q$-$v_2$ is a path $F$ that satisfies outcome \[fp\]. A similar outcome happens if $Q$ contains vertices of $V_3$ and not of $V_1$. So we may assume that $Q$ contains no vertex of $V_1\cup V_3$.
Suppose $w=x$. If $x\in\{v_1, v_3\}$, we have outcome \[fp\] with $F= P+Q$. If $x\notin\{v_1, v_3\}$, the three paths $x$-$P$-$v_1$, $x$-$v$-$Q$-$v_2$, $x$-$P$-$v_3$ form a subgraph $F$ that satisfies outcome \[fc\]. Now suppose that $w,x$ are different and not adjacent. If $v=v_2$, then $v_1$-$P$-$w$-$v_2$-$x$-$P$-$v_3$ is a path $F$ that satisfies outcome \[fp\]. If $v\not=v_2$, the three paths $v$-$w$-$P$-$v_1$, $v$-$Q$-$v_2$, $v$-$x$-$P$-$v_3$ form a subgraph $F$ that satisfies outcome \[fc\]. Finally, suppose that $w,x$ are different and adjacent. Then the three paths $w$-$P$-$v_1$, $v$-$Q$-$v_2$, $x$-$P$-$v_3$ form a subgraph $F$ that satisfies the properties of outcome \[ft\]. This completes the proof of the lemma. $\Box$
Now we can give an algorithm:
[Detection of a pyramid or prism]{} \[alg:pyrpri2\]
*Input:* A graph $G$.
*Output:* The positive answer “$G$ contains a pyramid or a prism” if it does; else the negative answer “$G$ contains no pyramid and no prism.”
*Method:* For every triple $b_1, b_2, b_3$ of vertices of $G$ such that $b_1, b_2, b_3$ are pairwise adjacent, do:\
Step 1. Compute the set $X_1$ of those vertices of $V(G)$ that are adjacent to $b_1$ and not adjacent to $b_2$ or $b_3$, and the similar sets $X_2,
X_3$, and compute the set $X$ of those vertices of $V(G)$ that are not adjacent to any of $b_1, b_2, b_3$. If some vertex of any $X_i$ has a neighbour in each of the other two $X_j$’s, return the positive answer and stop. Else:\
Step 2. Compute the connected components of $X$ in $G$.\
Step 3. For each component $H$ of $X$, and for $i=1, 2, 3$, if some vertex of $H$ has a neighbour in $X_i$ then mark $H$ with label $i$. If any component $H$ of $X$ gets the three labels $1, 2, 3$, return the positive answer and stop.\
If no triple yields the positive answer, return the negative answer.
*Complexity:* $O(|V(G)|^5)$.
*Proof of correctness.* Suppose that $G$ contains a pyramid or a prism $K$. Let $b_1, b_2, b_3$ be the vertices of a triangle of $K$, and for $i=1, 2, 3$ let $c_i$ be the neighbour of $b_i$ in $K
\setminus\{b_1, b_2, b_3\}$. The algorithm will place the three vertices $c_1, c_2, c_3$ in the sets $X_1, X_2, X_3$ respectively, one vertex in each set. If $K$ has only six vertices, the algorithm will find that one of the $c_i$’s is adjacent to the other two, so it will return the positive answer at the end of Step 1. If $K$ has at least seven vertices, then the algorithm will place the vertices of $K'=K\setminus\{b_1, b_2, b_3, c_1, c_2, c_3\}$ in $X$; at Step 2 these vertices will all be in one component of $X$ since $K'$ is connected, and at Step 3 this component will get the three labels $1,
2, 3$ since $K'$ contains a neighbour of $c_i$ for each $i=1, 2, 3$, so the algorithm will return the positive answer.
Conversely, suppose that the algorithm returns the positive answer when it is examining a triple $\{b_1, b_2, b_3\}$ that induces a triangle of $G$. If this is at the end of Step 1, this means that, up to a permutation of $\{1, 2, 3\}$, the algorithm has found a vertex $c_1\in X_1$ that has a neighbour $c_2\in X_2$ and a neighbour $c_3\in
X_3$. Then the six vertices $b_1, b_2, b_3, c_1, c_2, c_3$ induce a pyramid if $c_2, c_3$ are not adjacent or a prism if $c_2, c_3$ are adjacent; so the positive answer is correct. Now suppose that the positive answer is returned at the end of step 3. This means that some component $H$ of $X$ gets the three labels $1, 2, 3$. So, for each $i=1, 2, 3$, the set $V_i$ of vertices of $H$ that have a neighbour in $X_i$ is not empty. We can apply Lemma \[lem:3sorties\] to $H$, with the same notation, and we consider the subgraph $F$ of $H$ described in the lemma, which leads to the following three cases. In each case we will see that $G$ contains a prism or a pyramid.
[*Outcome \[fp\] of Lemma \[lem:3sorties\]: $F$ is a chordless path such that, up to a permutation of $V_1, V_2, V_3$, one end of $F$ is a vertex $v_1\in V_1$, the other is a vertex $v_3\in V_3$, no interior vertex of $F$ is in $V_1\cup V_3$, and $F$ has a vertex of $V_2$.*]{} There exists a neighbour $c_1$ of $v_1$ in $X_1$, a neighbour $c_3$ of $v_3$ in $X_3$, and a vertex $c_2$ of $X_2$ that has a neighbour in $F$. Note that there is at most one edge among $c_1,
c_2, c_3$, for otherwise we would have stopped at Step 1. Let $x,y$ be the neighbours of $c_2$ along $F$ that are closest respectively to $v_1$ and $v_3$. If $c_1, c_2$ are adjacent and $y\not=v_1$ then $c_2$-$c_1$-$b_1$, $c_2$-$b_2$, $c_2$-$y$-$F$-$v_3$-$c_3$-$b_3$ form a pyramid, while if $c_1, c_2$ are adjacent and $y=v_1$ then $c_1$-$b_1$, $c_2$-$b_2$, $v_1$-$F$-$v_3$-$c_3$-$b_3$ form a prism. So suppose $c_2$ is not adjacent to $c_1$ and likewise not to $c_3$. If $c_1, c_3$ are adjacent and $v_1=v_3$ then $c_1$-$b_1$, $v_1$-$c_2$-$b_2$, $c_3$-$b_3$ form a prism. If $c_1, c_3$ are adjacent and $v_1\not=v_3$ then either $x\neq v_3$ or $y\neq v_1$, so let us assume up to symmetry that $x\neq v_3$; then $c_1$-$b_1$, $c_1$-$v_1$-$F$-$x$-$c_2$-$b_2$, $c_1$-$c_3$-$b_3$ form a pyramid. So suppose $c_1, c_3$ are not adjacent. If $x=y$ then $x$-$F$-$v_1$-$c_1$-$b_1$, $x$-$c_2$-$b_2$, $x$-$F$-$v_3$-$c_3$-$b_3$ form a pyramid. If $x,y$ are different and not adjacent, then $c_2$-$x$-$F$-$v_1$-$c_1$-$b_1$, $c_2$-$b_2$, $c_2$-$y$-$F$-$v_3$-$c_3$-$b_3$ form a pyramid. If $x,y$ are different and adjacent, then $x$-$F$-$v_1$-$c_1$-$b_1$, $c_2$-$b_2$, $y$-$F$-$v_3$-$c_3$-$b_3$ form a prism.
[*Outcome \[fc\] of Lemma \[lem:3sorties\], with the same notation.*]{} For $i=1, 2, 3$, there exists a neighbour $c_i$ of $v_i$ in $X_i$. Since the vertices $v_1, v_2, v_3$ are pairwise different, for each $i=1, 2, 3$, vertex $c_i$ has no other neighbour in $F$ than $v_i$. If $c_1, c_2$ are adjacent, then $c_1$-$b_1$, $c_1$-$c_2$-$b_2$, $c_1$-$v_1$-$F_1$-$f$-$F_3$-$v_3$-$c_3$-$b_3$ form a pyramid. So suppose, by symmetry, that $c_1, c_2, c_3$ are pairwise not adjacent. Then for $i=1, 2, 3$ the paths $f$-$F_i$-$v_i$-$c_i$-$b_i$ form a pyramid.
[*Outcome \[ft\] of Lemma \[lem:3sorties\], with the same notation.*]{} For $i=1, 2, 3$, there exists a neighbour $c_i$ of $v_i$ in $X_i$. Since the vertices $v_1, v_2, v_3$ are pairwise different, for each $i=1, 2, 3$ vertex $c_i$ has no other neighbour in $F$ than $v_i$. If $c_1, c_2$ are adjacent, then $c_1$-$b_1$, $c_1$-$c_2$-$b_2$, $c_1$-$v_1$-$F_1$-$w_1$-$w_3$-$F_3$-$v_3$-$c_3$-$b_3$ form a pyramid. So suppose, by symmetry, that $c_1, c_2, c_3$ are pairwise non-adjacent. Then for $i=1, 2, 3$, the paths $w_i$-$F_i$-$v_i$-$c_i$-$b_i$ form a prism. So in either case $G$ contains a pyramid or a prism, and the proof of correctness is complete.
*Complexity analysis:* Finding all triples takes time $O(|V(G)|^3)$. For each triple, computing the sets $X_1, X_2, X_3, X$ takes time $O(|V(G)|)$, and performing the test of Step 1 takes time $O(|V(G)|^2)$. Finding the components of $X$ takes time $O(|V(G)|^2)$. Marking the components can be done as follows: for each edge $uv$ of $G$, if $u$ is in a component $H$ of $X$ and $v$ is in some $X_i$ then mark $H$ with label $i$; so this takes time $O(|V(G)|^2)$. Thus the overall complexity is $O(|V(G)|^5)$. $\Box$
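The yes/no version of Steps 1-3 can be sketched as follows (adjacency-set representation, helper name ours; to actually return the pyramid or prism one would follow the constructions in the proof of correctness, which we omit):

```python
from itertools import combinations

def contains_pyramid_or_prism(adj):
    """Yes/no version of Algorithm [alg:pyrpri2]: for each triangle
    {b1, b2, b3}, build X1, X2, X3 and X as in Step 1, then test the
    Step 1 condition and the component labels of Steps 2-3."""
    V = set(adj)
    for b1, b2, b3 in combinations(V, 3):
        if b2 not in adj[b1] or b3 not in adj[b1] or b3 not in adj[b2]:
            continue                    # not a triangle
        tri = {b1, b2, b3}
        Xs = [{v for v in adj[bi] - tri if not (adj[v] & (tri - {bi}))}
              for bi in (b1, b2, b3)]
        X = {v for v in V - tri if not (adj[v] & tri)}
        # Step 1: some vertex of an Xi with neighbours in both other Xj's
        for i in range(3):
            j, k = sorted({0, 1, 2} - {i})
            if any(adj[v] & Xs[j] and adj[v] & Xs[k] for v in Xs[i]):
                return True
        # Steps 2-3: a component of X attached to all of X1, X2, X3
        seen = set()
        for start in X:
            if start in seen:
                continue
            comp, stack = {start}, [start]
            while stack:                # depth-first search inside X
                for w in adj[stack.pop()] & X:
                    if w not in comp:
                        comp.add(w)
                        stack.append(w)
            seen |= comp
            if all(any(adj[v] & Xs[i] for v in comp) for i in range(3)):
                return True
    return False
```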
We observe that the above two algorithms are faster than the algorithm from [@CS2002] for finding a pyramid.
Recognition of graphs in class ${\cal A}$ {#sec:classa}
=========================================
We can now present the algorithm for recognizing graphs in the class $\cal A$.
[Recognition of graphs in class $\cal A$]{} \[alg:classa\]
*Input:* A graph $G$.
*Output:* The positive answer “$G$ is in class $\cal A$” if it is; else the negative answer “$G$ is not in class $\cal A$”.
*Method:*\
Step 1. Test whether $G$ contains no antihole of length at least $5$ as explained at the end of the introduction.\
Step 2. Test whether $G$ has no pyramid or prism using Algorithm \[alg:pyrpri2\] above.\
Step 3. Test whether $G$ is Berge using the algorithm from the preceding section.
*Complexity:* $O(|V(G)|^9)$.
The correctness of the algorithm is immediate from the correctness of the algorithms it refers to and from the fact that Berge graphs contain no pyramid. The complexity is dominated by the last step of the Berge recognition algorithm, which is $O(|V(G)|^9)$. Note that the other step of complexity $O(|V(G)|^9)$ in the Berge recognition algorithm (deciding if the input graph contains a pyramid) can be replaced by Step 2. Additionally, we can remark that it is not necessary to test for the existence of configurations of types T1, …, T4 when we call the Berge recognition algorithm, because (this is not very hard to prove) any such configuration contains an antihole of length at least $5$, so it is already excluded by Step 1. But this does not bring the overall complexity down from $O(|V(G)|^9)$.
The algorithm for recognizing graphs in class $\cal A$ can also be used to color graphs in class $\cal A$. Recall that Theorem \[thm:maftro\] states that: [*If a graph $G$ is in class $\cal A$ and is not a clique, it admits a pair of vertices whose contraction yields a graph in class $\cal A$.*]{} Therefore we could enumerate all pairs of non-adjacent vertices of $G$ and test whether their contraction produces a graph in class $\cal A$; Theorem \[thm:maftro\] ensures that at least one pair will work. We can then iterate this procedure until the contractions turn the graph into a clique. Since each vertex of the clique is the result of contracting a stable set of $G$, a coloring of this clique corresponds to an optimal coloring of $G$. In terms of complexity, we may need to check $O(|V(G)|^2)$ pairs at each contraction step, and there may be $O(|V(G)|)$ steps. So we end up with complexity $O(|V(G)|^{12})$. This is not as good as the direct method from [@maftro02], which has complexity $O(|V(G)|^6)$.
Even prisms {#sec:evenprisms}
===========
In this section we show how to decide in polynomial time whether a graph that contains no odd hole contains an even prism. Let $K$ be an even prism, formed by paths $P_1, P_2, P_3$, with triangles $\{a_1, a_2,
a_3\}$ and $\{b_1, b_2, b_3\}$ so that for $1\le i\le 3$ path $P_i$ is from $a_i$ to $b_i$. Let $m_i$ be the middle vertex of path $P_i$. We say that the $9$-tuple $(a_1, a_2, a_3, b_1, b_2, b_3, m_1, m_2,
m_3)$ is the *frame* of $K$. When we talk about a smallest prism, the size is measured by the number of vertices.
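If the three paths of an even prism are given as vertex lists from $a_i$ to $b_i$, extracting the frame is immediate (illustrative helper of ours):

```python
def frame(P1, P2, P3):
    """Frame of an even prism given by its three paths: the two
    triangles' endpoint triples followed by the three middle vertices."""
    paths = (P1, P2, P3)
    assert all((len(P) - 1) % 2 == 0 for P in paths), "paths must have even length"
    a = tuple(P[0] for P in paths)               # one triangle
    b = tuple(P[-1] for P in paths)              # the other triangle
    m = tuple(P[(len(P) - 1) // 2] for P in paths)  # middle vertices
    return a + b + m
```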
\[lem:epri\] Let $G$ be a graph that contains no odd hole and contains an even prism, and let $K$ be a smallest even prism in $G$. Let $K$ be formed by paths $P_1, P_2, P_3$ and have frame $(a_1, a_2, a_3, b_1, b_2,
b_3, m_1, m_2, m_3)$, with $a_i, m_i, b_i\in V(P_i)$ ($1\le i\le 3$). Let $R$ be any path of $G$ whose ends are $a_1, m_1$, whose interior vertices are not adjacent to $a_2, a_3, b_2$ or $b_3$, and which is shortest with these properties. Then $a_1$-$R$-$m_1$-$P_1$-$b_1$ is a chordless path $R_1$ and $R_1, P_2, P_3$ form a smallest even prism in $G$.
*Proof.* Let $k$ be the length (number of edges) of path $P_1$; so $k$ is even. Note that $|E(R)|\le k/2$ since the path $a_1$-$P_1$-$m_1$ satisfies the properties required for $R$. Call $Q$ the chordless path induced by $(V(P_2)\cup V(P_3)) \setminus \{a_2,
a_3\}$ and call $a'_2, a'_3$ the ends of $Q$ so that for $j=2, 3$ vertex $a'_j$ is adjacent to $a_j$.
Suppose that no interior vertex of $R$ has any neighbour in $Q$. Let $R_1$ be a shortest path from $a_1$ to $b_1$ contained in $a_1$-$R$-$m_1$-$P_1$-$b_1$. So $|E(R_1)|\le k$ and $R_1, P_2, P_3$ form a prism $K'$ with $|V(K')|\le |V(K)|$. Since $G$ contains no odd hole, $R_1$ has even length (else $V(R_1)\cup V(P_2)$ would induce an odd hole), so $K'$ is an even prism. Thus $K'$ is a smallest even prism, and we have equality in the above inequalities; in particular $R_1$ is equal to $a_1$-$R$-$m_1$-$P_1$-$b_1$ and the lemma holds.
We may now assume that some vertex $c$ of $R$ has a neighbour in $Q$, and we choose $c$ closest to $m_1$ along $R$. Let $S$ be a chordless path from $c$ to $b_1$ contained in $c$-$R$-$m_1$-$P_1$-$b_1$. We have $|E(S)|< k$ since $|E(R)|\le k/2$ and $c\neq a_1$. By the choice of $c$ no vertex of $S\setminus b_1$ has a neighbour in $P_2$ or $P_3$. Let $x,y$ be the neighbours of $c$ along $Q$ that are closest respectively to $a'_2$ and to $a'_3$. If $x=y$ then $V(S)\cup V(P_2)
\cup V(P_3)$ induces a pyramid with triangle $\{b_1, b_2, b_3\}$ and apex $x$, so $G$ contains an odd hole, a contradiction. Thus $x\neq
y$. If $x,y$ are not adjacent then $V(S) \cup V(P_2)\cup V(P_3)$ contains a pyramid with triangle $\{b_1, b_2, b_3\}$ and apex $c$, a contradiction. So $x,y$ are different and adjacent and, up to symmetry, we may assume that they lie in the interior of $P_2$. Now $V(S)\cup V(P_2)\cup V(P_3)$ induces a prism $K'$, with triangles $\{b_1, b_2, b_3\}$ and $\{c, x, y\}$, and $|V(K')|< |V(K)|$ since $|E(S)|< k$. Thus $K'$ is an odd prism (it cannot be even, since $K$ is a smallest even prism), which means that $y$-$P_2$-$b_2$ is an odd path, and so $a_2$-$P_2$-$x$ is an even path. Let $R'$ be a chordless path from $c$ to $a_1$ contained in $c$-$R$-$m_1$-$P_1$-$a_1$. We have $|E(R')|< k$ since $|E(R)|\le
k/2$. By the choice of $c$ no vertex of $R'\setminus a_1$ has a neighbour in $P_2$ or $P_3$. Then $R'$ has even length for otherwise $V(R')\cup V(a_2$-$P_2$-$x)$ induces an odd hole. Now $V(R')\cup
V(P_2)\cup V(P_3)$ induces a prism $K''$ with triangles $\{a_1, a_2,
a_3\}$ and $\{c, x, y\}$, and $K''$ is an even prism, and we have $|V(K'')|< |V(K)|$ since $|E(R')|< k$. This is a contradiction, which completes the proof. $\Box$
Now we can give an algorithm:
[Detection of an even prism in a graph that contains no odd hole]{} \[alg:epri\]
*Input:* A graph $G$ that contains no odd hole.
*Output:* An induced even prism of $G$ if $G$ contains any; else the negative answer “$G$ does not contain an even prism.”
*Method:* For every $9$-tuple $(a_1, a_2, a_3, b_1, b_2, b_3,
m_1, m_2, m_3)$ of vertices of $G$ such that $\{a_1, a_2, a_3\}$ and $\{b_1, b_2, b_3\}$ induce triangles, do:\
For $i=1, 2, 3$, compute the set $F_i$ of those vertices that are not adjacent to any of $a_{i+1},
a_{i+2}, b_{i+1}, b_{i+2}$ (with indices modulo $3$); look for a shortest path $R_i$ from $a_i$ to $m_i$ whose interior vertices are in $F_i$, and look for a shortest path $S_i$ from $m_i$ to $b_i$ whose interior vertices are in $F_i$. If the six paths $R_1, R_2, R_3, S_1,
S_2, S_3$ exist and their vertices induce an even prism, then return this prism and stop.\
If no $9$-tuple yields an even prism, return the negative answer.
*Complexity:* $O(|V(G)|^{11})$.
*Proof of correctness.* If the algorithm returns an even prism then clearly $G$ contains this prism. So suppose conversely that $G$ contains an even prism. Let $K$ be a smallest even prism, and let vertices $a_1, a_2, a_3,$ $b_1, b_2, b_3,$ $m_1, m_2, m_3$ be the frame of $K$. When the algorithm considers this $9$-tuple, it will find paths $R_1, R_2, R_3, S_1, S_2, S_3$ since some paths in $K$ do have the required properties. Then, six applications of Lemma \[lem:epri\] imply that the vertices of these six paths do induce an even prism of $G$. So the algorithm will detect this subgraph.
Complexity analysis: Testing all $9$-tuples takes time $O(|V(G)|^9)$. For each $9$-tuple, finding the six paths takes time $O(|V(G)|^2)$ and checking that the corresponding subgraph is an even prism takes time $O(|V(G)|)$. Thus the overall complexity is $O(|V(G)|^{11})$. $\Box$
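To make the search concrete, here is a brute-force sketch in Python (the graph encoding, a dict of neighbour sets, and all identifiers are ours, not from the paper). It enumerates the two triangles, computes the sets $F_i$, and glues restricted shortest paths $R_i$ and $S_i$ at a candidate midpoint; for readability it tries midpoints per index instead of fixing a full $9$-tuple, and makes no attempt at the stated complexity bound.

```python
from collections import deque
from itertools import combinations, permutations

def restricted_path(adj, s, t, allowed):
    """Shortest path from s to t whose interior vertices lie in `allowed` (BFS)."""
    if s == t:
        return [s]
    prev = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w in prev:
                continue
            prev[w] = u
            if w == t:
                path = [t]
                while prev[path[-1]] is not None:
                    path.append(prev[path[-1]])
                return path[::-1]
            if w in allowed:          # only allowed vertices may be interior
                queue.append(w)
    return None

def is_prism(adj, paths):
    """Three vertex-disjoint paths whose ends form two triangles induce a
    prism iff the only edges besides the path edges are the 6 triangle edges."""
    verts = [v for p in paths for v in p]
    if len(set(verts)) != len(verts):
        return False
    for ends in ([p[0] for p in paths], [p[-1] for p in paths]):
        for u, w in combinations(ends, 2):
            if w not in adj[u]:
                return False
    vs = set(verts)
    m = sum(1 for u in vs for w in adj[u] if w in vs) // 2
    return m == sum(len(p) - 1 for p in paths) + 6

def find_even_prism(adj):
    V = list(adj)
    triangles = [t for t in permutations(V, 3)
                 if t[1] in adj[t[0]] and t[2] in adj[t[1]] and t[0] in adj[t[2]]]
    for a in triangles:
        for b in triangles:
            if set(a) & set(b):
                continue
            paths = []
            for i in range(3):
                banned = {a[(i + 1) % 3], a[(i + 2) % 3],
                          b[(i + 1) % 3], b[(i + 2) % 3]}
                Fi = {v for v in V if v not in banned and not adj[v] & banned}
                glued = None
                for m in V:           # candidate midpoint m_i
                    R = restricted_path(adj, a[i], m, Fi)
                    S = restricted_path(adj, m, b[i], Fi)
                    if R and S and len(set(R) & set(S)) == 1:
                        glued = R + S[1:]
                        break
                if glued is None:
                    break
                paths.append(glued)
            if (len(paths) == 3 and is_prism(adj, paths)
                    and all((len(p) - 1) % 2 == 0 for p in paths)):
                return paths          # three even paths: an even prism
    return None
```

On the smallest even prism (two triangles joined by three paths of length $2$) this returns the three paths; on a graph with no triangle it returns `None`.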
Line-graphs of subdivisions of $K_4$ {#sec:lgbsk4}
====================================
The *line-graph* of a graph $R$ is the graph whose vertices are the edges of $R$ and where two vertices are adjacent if the corresponding edges of $R$ have a common endvertex. *Subdividing* an edge $xy$ in a graph means replacing it by a path of length at least two. A *subdivision* of a graph $R$ is any graph obtained by repeatedly subdividing edges. Berge graphs that do not contain the line-graph of a bipartite subdivision of $K_4$ play an important role in the proof of the Strong Perfect Graph Theorem [@CRST2002]. Thus recognizing them may be of interest in its own right. Moreover, solving this question is also useful in the recognition of graphs in the class ${\cal A}'$ (see Section \[sec:aprime\]). Again it turns out that deciding whether a graph contains the line-graph of a subdivision of $K_4$ is NP-complete in general, see Section \[sec:npc\].
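These definitions are easy to make concrete. The following sketch (plain Python; the encoding as a dict of neighbour sets and all identifiers are ours, not from the paper) computes the line-graph of a graph and subdivides a single edge, which is the operation used throughout this section.

```python
# Sketch (our notation): graphs as {vertex: set of neighbours}.

def line_graph(adj):
    """One vertex per edge of `adj`; two edge-vertices are adjacent iff
    the corresponding edges share an endvertex."""
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    lg = {e: set() for e in edges}
    for e in edges:
        for f in edges:
            if e != f and e & f:      # common endvertex
                lg[e].add(f)
    return lg

def subdivide(adj, u, v, k):
    """Replace the edge uv by a path of length k >= 2 through new vertices."""
    assert k >= 2 and v in adj[u]
    adj[u].discard(v)
    adj[v].discard(u)
    path = [u] + [(u, v, i) for i in range(k - 1)] + [v]
    for x, y in zip(path, path[1:]):
        adj.setdefault(x, set()).add(y)
        adj.setdefault(y, set()).add(x)
    return adj

# Example: K_4, then a proper subdivision (one edge subdivided once).
K4 = {i: {j for j in range(4) if j != i} for i in range(4)}
R = subdivide({v: set(ns) for v, ns in K4.items()}, 0, 1, 2)
F = line_graph(R)
```

For instance, $L(K_4)$ has six vertices, each of degree four, and the proper subdivision above has a line-graph on seven vertices.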
![[Line-graph of a subdivision of $K_4$]{}[]{data-label="fig:FLK4"}](fig.reco.1)
We will first deal with subdivisions of $K_4$ that are not necessarily bipartite, but are not too trivial in the following sense: say that a subdivision of $K_4$ is *proper* if at least one edge of the $K_4$ is subdivided. It is easy to see that a subdivision of $K_4$ is proper if and only if its line-graph has a vertex that lies in only one triangle. If $F$ is the line-graph of a proper subdivision $R$ of $K_4$, let us denote by $a, b, c, d$ the four vertices of the $K_4$, i.e., the vertices of degree $3$ in $R$. Then, for each $x\in\{a, b, c, d\}$, the three edges of $R$ incident to $x$ form a triangle in $F$, which will be labelled $T_x$ and called a *basic* triangle of $F$. ($F$ may have as many as two more, non-basic, triangles.) In $F$ there are six paths, one for each pair $x, y$ of distinct vertices of the $K_4$; this path joins a vertex of $T_x$ to a vertex of $T_y$ and is labelled $R_{xy}$ accordingly. Note that $R_{xy}= R_{yx}$, and the six paths are pairwise vertex-disjoint. Some of these paths may have length $0$. In the basic triangle $T_x$, we denote by $v_{xy}$ the vertex that is the end of the path $R_{xy}$. Thus $F$ has paths $R_{ab}$, $R_{ac}$, $R_{ad}$, $R_{bc}$, $R_{bd}$, $R_{cd}$, and the vertices of the basic triangles of $F$ are $v_{ab}$, $v_{ac}$, $v_{ad}$, $v_{ba}$, $v_{bc}$, $v_{bd}$, $v_{ca}$, $v_{cb}$, $v_{cd}$, $v_{da}$, $v_{db}$ and $v_{dc}$. The subgraph $F$ has no edges other than those in the four basic triangles and those in the six paths.
For each of the six paths $R_{xy}$ of $F$, we call $m_{xy}$ one vertex that is roughly in the middle of $R_{xy}$, so that if $\alpha$ denotes the length of $v_{xy}$-$R_{xy}$-$m_{xy}$ and $\beta$ denotes the length of $m_{xy}$-$R_{xy}$-$v_{yx}$, then $\alpha-\beta\in\{-1, 0,
1\}$. Paths $R_{xy}$ are called the *rungs* of $F$; vertices $v_{xy}$ are called the *corners* of $F$; and the $18$-tuple $(v_{ab}, v_{ac}, \dots, v_{cd}, m_{ab}, \dots, m_{cd})$ is called a *frame* of $F$.
\[lem:lgpsk4\] Let $G$ be a graph that contains no pyramid. Let $F$ be an induced subgraph of $G$ that is the line-graph of a proper subdivision of $K_4$ and has smallest size with this property, and let $(v_{ab},
v_{ac}, \dots, v_{cd}, m_{ab}, \dots, m_{cd})$ be a frame of $F$. Let $P$ be a path from $v_{ab}$ to $m_{ab}$ such that the interior vertices of $P$ are not adjacent to any corner of $F$ other than $v_{ab}$ and $P$ is a shortest path with these properties. Then $(V(F) \setminus V(R_{ab})) \cup V(P)$ induces the line-graph of a proper subdivision of $K_4$ of smallest size.
![$F$ and $P$ for the proof of Lemma \[lem:lgpsk4\][]{data-label="fig:LK4"}](fig.reco.3)
*Proof.* Put $F' = F \setminus R_{ab}$. If $v_{ab}, m_{ab}$ are equal or adjacent, then $P=v_{ab}$-$R_{ab}$-$m_{ab}$ and the conclusion is immediate. So we may assume that $v_{ab}, m_{ab}$ are distinct and not adjacent, which also implies $m_{ab} \neq v_{ba}$.
\[clm:pnn\] If the interior vertices of $P$ have no neighbour in $F'$ then the lemma holds.
*Proof.* Let $u$ be the vertex of $v_{ab}$-$P$-$m_{ab}$ that has neighbours in $m_{ab}$-$R_{ab}$-$v_{ba}$ and is closest to $v_{ab}$. Let $u'$ be the neighbour of $u$ in $m_{ab}$-$R_{ab}$-$v_{ba}$ closest to $v_{ba}$. Then $v_{ab}$-$P$-$u$-$u'$-$R_{ab}$-$v_{ba}$ is a chordless path $R$, and $V(F')\cup V(R)$ induces the line-graph of a proper subdivision of $K_4$. So this subgraph has size at least the size of $F$, which is possible only if $u=m_{ab}$, and in this case $V(F')\cup V(R)$ induces the line-graph of a proper subdivision of $K_4$ of smallest size, so the lemma holds. $\Box$
Now we may assume that some interior vertex $c_1$ of $P$ has neighbours in $F'$, and we choose $c_1$ closest to $v_{ab}$ along $P$. Also some interior vertex $d_1$ of $P$ has neighbours in $F'$ and is chosen closest to $m_{ab}$ along $P$. Let us show that this leads to a contradiction. One may look at Figure \[fig:LK4\].
\[clm:nc1nd1\] \
1. The set $N(c_1) \cap V(F')$ consists of the two ends of an edge of $F'$.\
2. The set $N(d_1) \cap V(F')$ consists of the two ends of an edge of $F'$.
*Proof.* Call $H$ the hole induced by $V(R_{ac}) \cup V(R_{bc})
\cup V(R_{bd}) \cup V(R_{ad})$.
First suppose that $c_1$ has no neighbour on $H$. So $c_1$ has neighbours in the interior of $R_{cd}$. Let $c_2, c_3$ be the neighbours of $c_1$ respectively closest to $v_{cd}$ and to $v_{dc}$ along $R_{cd}$. If $c_2=c_3$, the three paths $c_2$-$c_1$-$P$-$v_{ab}$, $c_2$-$R_{cd}$-$v_{cd}$-$v_{ca}$-$R_{ca}$-$v_{ac}$, $c_2$-$R_{cd}$-$v_{dc}$-$v_{da}$-$R_{ad}$-$v_{ad}$ form a pyramid with triangle $\{ v_{ab}, v_{ac}, v_{ad}\}$ and apex $c_2$, a contradiction. If $c_2, c_3$ are distinct and not adjacent, the three paths $c_1$-$P$-$v_{ab}$, $c_1$-$c_2$-$R_{cd}$-$v_{cd}$-$v_{ca}$-$R_{ca}$-$v_{ac}$, $c_1$-$c_3$-$R_{cd}$-$v_{dc}$-$v_{da}$-$R_{ad}$-$v_{ad}$ form a pyramid with triangle $\{ v_{ab}, v_{ac}, v_{ad}\}$, and apex $c_1$, a contradiction. If $c_2, c_3$ are adjacent, we have item 1 of the claim.
Now suppose that $c_1$ has neighbours on $H$. Define two chordless subpaths of $H$: $H_{ac} = H\setminus v_{ad}$ and $H_{ad} = H\setminus
v_{ac}$. Let $c_2$ be the neighbour of $c_1$ on $H_{ac}$ closest to $v_{ac}$, and let $c_3$ be the neighbour of $c_1$ on $H_{ad}$ closest to $v_{ad}$. If $c_2=c_3$ then $V(H)\cup V(c_1$-$P$-$v_{ab})$ induces a pyramid with triangle $\{ v_{ab}, v_{ac}, v_{ad}\}$ and apex $c_2$, a contradiction. So $c_2 \neq c_3$. If $c_2, c_3$ are not adjacent then the three paths $c_1$-$P$-$v_{ab}$, $c_1$-$c_2$-$H_{ac}$-$v_{ac}$, $c_1$-$c_3$-$H_{ad}$-$v_{ad}$ form a pyramid with triangle $\{ v_{ab}, v_{ac}, v_{ad}\}$ and apex $c_1$, a contradiction. So $c_2, c_3$ are adjacent and are the only neighbours of $c_1$ on $H$. Up to a symmetry, and by the definition of $P$, we may assume that $c_2, c_3$ are in the interior of $R_{ac}$ or $R_{bc}$. If $c_1$ has no neighbour on $R_{cd}$ then conclusion 1 holds. So suppose that $c_1$ has a neighbour $c_4$ on $R_{cd}$, with $c_4$ closest to $v_{dc}$. Then the three paths $c_1$-$P$-$v_{ab}$, $c_1$-$c_4$-$R_{cd}$-$v_{dc}$-$v_{da}$-$R_{da}$-$v_{ad}$, $c_1$-$c_2$-$H_{ac}$-$v_{ac}$ form a pyramid with triangle $\{ v_{ab}, v_{ac}, v_{ad}\}$ and apex $c_1$, a contradiction. This completes the proof of item 1.
The proof of item 2 is similar, with the following adjustment: whenever path $c_1$-$P$-$v_{ab}$ was used for item 1, we can use for item 2 a chordless path from $d_1$ to $v_{ba}$ contained in $d_1$-$P$-$m_{ab}$-$R_{ab}$-$v_{ba}$. This completes the proof of the claim. $\Box$
\[clm:j\] If $J$ is the line-graph of a subdivision of $K_4$ with $V(J)\subseteq
V(F') \cup V(P)$ and $c_1$ is a corner of $J$, then $J$ is the line-graph of a proper subdivision of $K_4$.
*Proof.* This claim follows immediately from the fact that $c_1$ belongs to exactly one triangle of $J$. $\Box$
In view of Claim \[clm:nc1nd1\], let $c_2, c_3$ be the two neighbours of $c_1$ in $F'$ and $d_2, d_3$ be the two neighbours of $d_1$ in $F'$, with $c_2c_3, d_2d_3\in E(G)$.
\[clm:c2c3\] We may assume that $c_2, c_3$ lie in $R_{ac}$ and $d_2, d_3$ in $R_{cb}$ or $R_{bd}$.
*Proof.* Recall from the definition of $P$ that $c_2, c_3, d_2,
d_3$ cannot be corners of $F$. If $c_2 c_3$ is an edge of $R_{cd}$, then $V(v_{ab}$-$P$-$c_1) \cup V(R_{ac}) \cup V(R_{ad}) \cup
V(R_{cd})$ induces the line-graph of a subdivision of $K_4$, which is proper by Claim \[clm:j\] and is strictly smaller than $F$, a contradiction. If $c_2 c_3$ is an edge of $R_{bc}$, then $V(v_{ab}$-$P$-$c_1) \cup V(F')$ induces the line-graph of a subdivision of $K_4$, which is proper by Claim \[clm:j\] and is strictly smaller than $F$, a contradiction. So $c_2 c_3$ is an edge of $R_{ac}$ or $R_{ad}$. Similarly we may assume that $d_2 d_3$ is an edge of $R_{bc}$ or $R_{bd}$. Then by symmetry the claim holds. $\Box$
We may assume that $v_{ac}, c_2, c_3, v_{ca}, d_2, d_3, v_{ad}$ appear in this order along $H$.
\[clm:c1d1\] Vertices $c_1, d_1$ are distinct and not adjacent.
*Proof.* By Claims \[clm:nc1nd1\] and \[clm:c2c3\], we know that $c_1, d_1$ are distinct. If they are adjacent, the set $V(F')\cup \{c_1, d_1\}$ induces the line-graph of a subdivision of $K_4$, which is proper by Claim \[clm:j\] and is strictly smaller than $F$, a contradiction. $\Box$
Let $e_1$ be the vertex of $c_1$-$P$-$v_{ab}$ that has a neighbour $e_2$ in the interior of $m_{ab}$-$R_{ab}$-$v_{ab}$ and is closest to $c_1$. Let $e_4$ be the vertex of $d_1$-$P$-$m_{ab}$ that has a neighbour $e_3$ in the interior of $m_{ab}$-$R_{ab}$-$v_{ab}$, and is closest to $d_1$. Given $e_1,
e_4$, take $e_2, e_3$ as close to each other as possible along $R_{ab}$.
\[clm:e1\] $e_1 \neq v_{ab}$.
*Proof.* For suppose $e_1=v_{ab}$. Then the three paths $v_{ab}$-$P$-$c_1$, $v_{ab}$-$v_{ac}$-$R_{ac}$-$c_2$, $v_{ab}$-$R_{ab}$-$e_3$-$e_4$-$P$-$d_1$-$d_2$-$H_{ac}$-$c_3$ form a pyramid with triangle $\{c_1, c_2, c_3\}$ and apex $v_{ab}$, a contradiction. $\Box$
At this point we have obtained that $c_1$-$P$-$e_1$-$e_2$-$R_{ab}$-$e_3$-$e_4$-$P$-$d_1$ is a chordless path $R$ whose interior vertices have no neighbour in $F'$. Moreover the subgraph $F_R$ induced by $V(F')\cup V(R)$ is the line graph of a subdivision of $K_4$, and it is proper by Claim \[clm:j\].
\[clm:fr\] $|V(F_R)|<|V(F)|$.
*Proof.* We need only show that the total length of the rungs of $F_R$ is strictly smaller than the total length of the rungs of $F$. Let $\alpha$ be the length of $v_{ab}$-$R_{ab}$-$m_{ab}$, let $\beta$ be the length of $v_{ba}$-$R_{ab}$-$m_{ab}$, and let $\delta$ be the number of those edges of $F'$ that belong to the rungs of $F$.
The total length $l$ of the rungs of $F$ is equal to $\alpha + \beta +
\delta = 2 \alpha - \varepsilon + \delta$, with $\varepsilon = \alpha
- \beta \in \{-1, 0,1\}$ by the definition of $m_{ab}$.
The total length $l_R$ of the rungs of $F_R$ is at most $\delta
+2\alpha -3 $, and it is equal to this value only in the following case: $e_4=m_{ab}$, there is only one vertex of $R_{ab}$ between $c_1$ and $d_1$, $e_1 v_{ab}\in E(G)$, $e_2 v_{ab}\in E(G)$, and the paths $P$ and $v_{ab}$-$R_{ab}$-$m_{ab}$ have the same length. Indeed in this case the length of the rung of $F_R$ whose ends are $c_1, d_1$ is equal to $2\alpha -3$.
Thus in either case we have $l_R < l$ and the claim holds. $\Box$
Now the preceding claim leads to a contradiction, which proves the lemma. $\Box$
Lemma \[lem:lgpsk4\] is the basis of an algorithm for deciding if a graph contains a pyramid or the line-graph of a proper subdivision of $K_4$.
[Detection of a line-graph of a proper subdivision of $K_4$ in a graph that contains no pyramid]{} \[alg:pyrlgpsk4\]
*Input:* A graph $G$ that contains no pyramid.
*Output:* An induced subgraph of $G$ that is the line-graph of a proper subdivision of $K_4$ (if $G$ contains any); else the negative answer “$G$ does not contain the line-graph of a proper subdivision of $K_4$”.
*Method:* For every $18$-tuple of vertices $(v_{ab}$, $v_{ac}$, $\dots$, $v_{cd}$, $m_{ab}$, $\dots$, $m_{cd})$, do the following:
For each $i,j\in\{a, b, c, d\}$ with $i\neq j$, find a shortest path $S_{ij}$ from $v_{ij}$ to $m_{ij}$ whose interior vertices are not adjacent to any corner of the $18$-tuple other than $v_{ij}$;
If the subgraph induced by the union of the twelve paths $S_{ij}$ ($i,j\in\{a,b,c,d\}$, $i\neq j$) is the line-graph of a proper subdivision of $K_4$, return this subgraph and stop.
If no $18$-tuple has produced such a subgraph, return the negative answer.
*Complexity:* $O(|V(G)|^{20})$.
*Proof of correctness.* When the algorithm returns the line-graph of a proper subdivision of $K_4$, clearly this answer is correct.
Conversely, suppose that $G$ contains the line-graph of a proper subdivision of $K_4$. Then $G$ has an induced subgraph $F$ that is the line-graph of a proper subdivision of $K_4$ and has minimal size.
At some step the algorithm will consider an $18$-tuple $(v_{ab}$, $v_{ac}$, $\dots$, $v_{cd}$, $m_{ab}$, $\dots$, $m_{cd})$ which is a frame of $F$. The algorithm will find the paths $S_{ij}$ since the corresponding paths of $F$ do have the required properties. With twelve applications of Lemma \[lem:lgpsk4\], it follows that the subgraph formed by these twelve paths is the line-graph of a proper subdivision of $K_4$ and is actually a smallest such subgraph. So the algorithm will detect this subgraph.
Complexity analysis: There are $O(|V(G)|^{18})$ frames to test. For each such subset, finding the shortest paths $S_{ij}$ takes time $O(|V(G)|^2)$, and checking that the subgraph they form is the line-graph of a proper subdivision of $K_4$ takes time $O(|V(G)|)$. Thus the algorithm finishes in time $O(|V(G)|^{20})$. $\Box$
Let us now focus on finding line-graphs of *bipartite* subdivisions of $K_4$.
\[lem:fodd\] Let $R$ be a subdivision of $K_4$ and $F$ be the line-graph of $R$. Then either $R=K_4$, or $F$ contains an odd hole, or $R$ is a bipartite subdivision of $K_4$.
*Proof.* Suppose $R \neq K_4$. Call $a, b, c, d$ the four vertices of the $K_4$ of which $R$ is a subdivision (i.e., the vertices of degree $3$ in $R$), and for $i,j \in \{a, b, c, d\}$ with $i\neq j$, call $C_{ij}$ the subdivision of edge $ij$. Suppose that $F$ contains no odd hole and $R$ is not bipartite. Then $R$ contains an odd cycle $Z$. This cycle must be a triangle, for otherwise $L(R)$ contains an odd hole, a contradiction. So we may assume up to symmetry that $a, b, c$ induce a triangle. Since $R\neq K_4$, we may assume that $C_{ad}$ has length at least $2$. But then one of $E(C_{ad}) \cup \{ca\} \cup E(C_{cd})$ or $E(C_{ad}) \cup \{ab, bc\} \cup E(C_{cd})$ is the edge set of an odd cycle of $R$, of length at least $5$, so $L(R)$ contains an odd hole, a contradiction. $\Box$
Now we can devise an algorithm that decides if a graph with no odd hole contains the line-graph of a bipartite subdivision of $K_4$. This algorithm is simply Algorithm \[alg:pyrlgpsk4\] applied to graphs that contain no odd hole, by the preceding lemma.
Recognition of graphs in class ${\cal A}'$ {#sec:aprime}
==========================================
To decide if a graph is in class ${\cal A}'$, it suffices to decide separately if it is Berge, if it has an antihole of length at least $5$, and if it contains an odd prism. But again it turns out that this third question—deciding if a graph contains an odd prism—is NP-complete (see Section \[sec:npc\]). However, we can decide in polynomial time if a graph with no odd hole contains an odd prism. For this purpose the next lemmas will be useful.
\[lem:lgpsoddprism\] Let $F$ be the line-graph of a bipartite subdivision of $K_4$. Then $F$ contains an odd prism.
*Proof.* Let $R$ be a bipartite subdivision of $K_4$ such that $F$ is the line-graph of $R$, and let $a,b,c,d$ be the four vertices of degree $3$ in $R$. We may suppose without loss of generality that $a, b$ lie on the same side of the bipartition of $R$. Thus edge $ab$ is subdivided to a path $R_{ab}$ of even length, with the usual notation. Now it is easy to see that $F\setminus V(R_{cd})$ is an odd prism. $\Box$
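The parity argument can be checked mechanically. In the line-graph, an edge of the $K_4$ subdivided into a path of length $\ell$ yields a rung with $\ell-1$ edges, so the three paths of $F\setminus V(R_{cd})$ have the lengths computed below; the exhaustive check (our notation, not from the paper, lengths up to $4$) confirms that all three are odd whenever the subdivision is bipartite with $a, b$ on the same side.

```python
# Parity bookkeeping (our notation): L[xy] is the length of the path that
# replaces edge xy in the subdivision R.

from itertools import product

EDGES = ('ab', 'ac', 'ad', 'bc', 'bd', 'cd')

def prism_path_lengths(L):
    return (L['ab'] - 1,                  # rung R_ab
            L['ac'] + L['bc'] - 1,        # R_ac, edge of T_c, R_cb
            L['ad'] + L['bd'] - 1)        # R_ad, edge of T_d, R_db

def check_all(max_len=4):
    """Exhaustively verify: whenever R is bipartite with a, b on the same
    side, all three path lengths are odd (an odd prism)."""
    for sc, sd in product((0, 1), repeat=2):
        side = {'a': 0, 'b': 0, 'c': sc, 'd': sd}
        for lens in product(range(1, max_len + 1), repeat=6):
            L = dict(zip(EDGES, lens))
            # bipartite iff each path's parity matches that of its endpoints
            if any(L[e] % 2 != (side[e[0]] + side[e[1]]) % 2 for e in EDGES):
                continue
            if not all(l % 2 == 1 for l in prism_path_lengths(L)):
                return False
    return True
```

For the bipartite subdivision in which only the edges $ab$ and $cd$ are subdivided once, all three paths have length $1$, i.e., the remaining prism is $K_4$'s line-graph minus the rung, an odd prism.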
![A graph with six odd prisms[]{data-label="fig:6LK4"}](fig.reco.2)
Before we present an algorithm for recognizing graphs in class ${\cal
A}'$, we can remark that the technique which worked well for detecting even prisms tends to fail for odd prisms. The graph featured in Figure \[fig:6LK4\] illustrates this problem. This graph $G$ is the line-graph of a bipartite graph, so it is a Berge graph. For any two grey triangles, there exists one (and only one) odd prism that contains these two triangles. Moreover, the paths $P_1, P_2, P_3$ form an odd prism of $G$ of minimal size. Yet, replacing $P_1$ (or the path $a_1$-$P_1$-$m_1$) by a shortest path with the same ends does not produce an odd prism. Thus an algorithm similar to the even prism testing algorithm presented above may work incorrectly. We note however that in this example the graph $G$ contains the line-graph of a proper subdivision of $K_4$ (the subgraph obtained by forgetting the black vertices). The next lemma shows that this remark holds in general.
\[lem:oddprism\] Let $G$ be a graph that contains no odd hole and no line-graph of a proper subdivision of $K_4$. Let $H$ be a prism in $G$, formed by triangles $\{a_1, a_2, a_3\}$ and $\{b_1, b_2, b_3\}$ and three paths $P_1, P_2, P_3$, where $P_i$ goes from $a_i$ to $b_i$. Let $P$ be any chordless path from $a_1$ to $b_1$ whose interior vertices are not adjacent to $a_2$, $a_3$, $b_2$, $b_3$. Then the three paths $P, P_2,
P_3$ form a prism of $G$ of the same parity as $H$.
*Proof.* If the interior vertices of $P$ have no neighbour on $P_2\cup P_3$ then the lemma holds. So suppose that some interior vertex $c_1$ of $P$ has neighbours on $P_2\cup P_3$, and choose $c_1$ closest to $a_1$ along $P$. Define paths $H_2 = P_2 + P_3 \setminus
\{b_3\}$ and $H_3 = P_2 + P_3 \setminus \{b_2\}$. For $i=2, 3$, let $c_i$ be the neighbour of $c_1$ closest to $b_i$ along $H_i$.
If $c_2 = c_3$, then the three paths $c_2$-$c_1$-$P$-$a_1$, $c_2$-$P_2$-$a_2$, $c_2$-$P_2$-$b_2$-$b_3$-$P_3$-$a_3$ form a pyramid with triangle $\{a_1, a_2, a_3\}$ and apex $c_2$, a contradiction. So $c_2\neq c_3$. If $c_2, c_3$ are not adjacent, then the three paths $c_1$-$P$-$a_1$, $c_1$-$c_2$-$P_2$-$a_2$, $c_1$-$c_3$-$P_2$-$b_2$-$b_3$-$P_3$-$a_3$ form a pyramid with triangle $\{a_1, a_2, a_3\}$ and apex $c_1$, a contradiction. So $c_2, c_3$ are adjacent. Up to symmetry, $c_2 c_3$ is an edge of $P_2$. If $c_1, b_1$ are adjacent, then the three paths $c_1$-$b_1$, $c_1$-$c_3$-$P_2$-$b_2$, $c_1$-$P$-$a_1$-$a_3$-$P_3$-$b_3$ form a pyramid with triangle $\{b_1, b_2, b_3\}$ and apex $c_1$, a contradiction. So we may assume that $c_1, b_1$ are not adjacent. Let $a'_1$ be the neighbour of $a_1$ in $P$. Let $d_1$ be the vertex of $a'_1$-$P$-$c_1$ that has neighbours in $P_1$ and is closest to $c_1$. Let $d_2, d_3$ be the neighbours of $d_1$ along $P_1$ that are closest to $a_1$ and $b_1$ respectively.
If $d_2 = d_3$, then the three paths $d_2$-$d_1$-$P$-$c_1$, $d_2$-$P_1$-$a_1$-$a_2$-$P_2$-$c_2$, $d_2$-$P_1$-$b_1$-$b_2$-$P_2$-$c_3$ form a pyramid with triangle $\{c_1, c_2, c_3\}$ and apex $d_2$, a contradiction. So $d_2\neq
d_3$. If $d_2, d_3$ are not adjacent, then the three paths $d_1$-$d_2$-$P_1$-$a_1$, $d_1$-$d_3$-$P_1$-$b_1$-$b_3$-$P_3$-$a_3$, $d_1$-$P$-$c_1$-$c_2$-$P_2$-$a_2$ form a pyramid with triangle $\{a_1,
a_2, a_3\}$ and apex $d_1$, a contradiction. So $d_2, d_3$ are adjacent. Then the four triangles $\{a_1, a_2, a_3\}$, $\{b_1, b_2,
b_3\}$, $\{c_1, c_2, c_3\}$, $\{d_1, d_2, d_3\}$ and the six paths $P_3$, $a_2$-$P_2$-$c_2$, $a_1$-$P_1$-$d_2$, $b_2$-$P_2$-$c_3$, $b_1$-$P_1$-$d_3$, $c_1$-$P$-$d_1$ form the line-graph of a subdivision of $K_4$, and it is not the line-graph of $K_4$ since $a_3\neq b_3$; so $G$ contains the line-graph of a proper subdivision of $K_4$, a contradiction. $\Box$
Now we can present an algorithm that decides if a graph with no odd hole contains an odd prism.
[Detection of an odd prism in a graph that contains no odd hole]{} \[alg:opri\]
*Input:* A graph $G$ that contains no odd hole.\
*Output:* An odd prism induced in $G$, if $G$ contains any, else the negative answer “$G$ contains no odd prism”.
*Method:* Using Algorithm \[alg:pyrlgpsk4\], test whether $G$ contains the line-graph of a proper subdivision of $K_4$. If $G$ contains such a subgraph $F$, for each of the six rungs $R$ of $F$, test if $F\setminus V(R)$ is an odd prism, and if it is, return this odd prism. If Algorithm \[alg:pyrlgpsk4\] answers that $G$ does not contain the line-graph of a proper subdivision of $K_4$, then for every $6$-tuple $(a_1, a_2, a_3, b_1, b_2, b_3)$ do:
For $i=1, 2, 3$ compute a shortest path $P_i$ from $a_i$ to $b_i$ whose interior vertices are not adjacent to $a_{i+1}$, $a_{i+2}$, $b_{i+1}$ and $b_{i+2}$ (subscripts are understood modulo $3$). If paths $P_1, P_2, P_3$ exist and form an odd prism, return this odd prism and stop.
If no $6$-tuple has produced an odd prism, return the negative answer.
*Complexity:* $O(|V(G)|^{20})$.
*Proof of correctness.* If $G$ contains the line-graph of a proper subdivision of $K_4$, this will be detected by Algorithm \[alg:pyrlgpsk4\]. If $G$ contains no odd hole and no odd prism, then Lemmas \[lem:fodd\] and \[lem:lgpsoddprism\] ensure that $G$ cannot contain the line-graph of a proper subdivision of $K_4$. So the algorithm will return the correct answer.
Now suppose that $G$ does not contain the line graph of a proper subdivision of $K_4$ and $G$ contains an odd prism, with triangles $\{a_1, a_2, a_3\}$ and $\{b_1, b_2, b_3\}$. Then in some step the algorithm will consider these six vertices, and it will find paths $P_i$ since the corresponding paths of the prism have the required properties. By three applications of Lemma \[lem:oddprism\], we obtain that $P_1, P_2, P_3$ form an odd prism, and so the algorithm will detect it.
*Complexity analysis:* The complexity is clearly determined by its costliest step, which is Algorithm \[alg:pyrlgpsk4\]. $\Box$
Now deciding if a graph is in class ${\cal A}'$ can be done as follows: test if $G$ contains an antihole of length at least $5$ as explained earlier; test if $G$ is Berge using the algorithm from Section \[sec:berge\]; then use Algorithm \[alg:opri\] to test if $G$ contains no odd prism. The complexity is the same as that of Algorithm \[alg:opri\].
We note that if Conjecture \[conj:pc\] is true then the algorithm for recognizing graphs in class ${\cal A}'$ can be used to optimally color the vertices of any graph $G\in {\cal A}'$ (even if a proof of Conjecture \[conj:pc\] is not algorithmic); this can be done similarly to the remark made at the end of Section \[sec:classa\], as follows. Enumerate all pairs of non-adjacent vertices of $G$ and test whether their contraction produces a graph in class ${\cal A}$; the assumed validity of Conjecture \[conj:pc\] ensures that at least one pair will work. Then iterate this procedure until the contractions turn the graph into a clique. In terms of complexity, since we may need to check $O(|V(G)|^2)$ pairs at each contraction step, and there may be $O(|V(G)|)$ steps, we end up with total complexity $O(|V(G)|^{23})$; thus it is desirable to find a proof of Conjecture \[conj:pc\] that produces an algorithm with lower complexity.
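The contraction procedure can be sketched as follows (Python; all names are ours, not from the paper). The membership test is passed in as a black box `in_class`, which we do not implement here, and the sketch simply assumes, as the conjectured property would guarantee, that an admissible pair exists whenever the graph is not a clique.

```python
# Sketch (our names): graphs as {vertex: set of neighbours}.

def contract(adj, u, v):
    """Contract the non-adjacent vertices u, v into the new vertex (u, v)."""
    assert v not in adj[u]
    w = (u, v)
    new = {x: {y for y in ns if y not in (u, v)}
           for x, ns in adj.items() if x not in (u, v)}
    new[w] = (adj[u] | adj[v]) - {u, v}
    for y in new[w]:
        new[y].add(w)
    return new

def color_by_contraction(adj, in_class):
    """Contract admissible pairs until a clique remains, then use one color
    per clique vertex.  Assumes original vertex names are not tuples."""
    while True:
        pair = next(((u, v) for u in adj for v in adj
                     if u != v and v not in adj[u]
                     and in_class(contract(adj, u, v))), None)
        if pair is None:          # clique reached (or no admissible pair)
            break
        adj = contract(adj, *pair)
    def leaves(x):                # original vertices merged into x
        return [x] if not isinstance(x, tuple) else leaves(x[0]) + leaves(x[1])
    return {v: i for i, x in enumerate(adj) for v in leaves(x)}
```

Each clique vertex of the final graph is a stable set of the original graph (a vertex is merged with another only when it is non-adjacent to all of it), so the returned map is a proper coloring; optimality rests on the contractions preserving the chromatic number, which is what the conjecture would provide.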
NP-complete problems {#sec:npc}
====================
In this section we show that the following problems are NP-complete:
- Decide if a graph contains a prism.
- Decide if a graph contains an even prism.
- Decide if a graph contains an odd prism.
- Decide if a graph contains the line-graph of a proper subdivision of $K_4$.
- Decide if a graph contains the line-graph of a bipartite subdivision of $K_4$.
We have seen in the preceding sections that all these problems are polynomial when the input is restricted to the class of graphs that contain no odd hole.
The above NP-completeness results can all be derived from the following theorem. Let us call problem $\Pi$ the decision problem whose input is a triangle-free graph $G$ and two non-adjacent vertices $a,b$ of $G$ of degree $2$ and whose question is: “Does $G$ have a hole that contains both $a,b$?” Bienstock [@bien] mentions that this problem is NP-complete in general (i.e., not restricted to triangle-free graphs). We adapt his proof here for triangle-free graphs.
\[thm:pinpc\] Problem $\Pi$ is NP-complete.
*Proof.* Let us give a polynomial reduction from the problem [$3$-Satisfiability]{} of Boolean functions to problem $\Pi$. Recall that a Boolean function with $n$ variables is a mapping $f$ from $\{0,
1\}^n$ to $\{0, 1\}$. A Boolean vector $\xi\in\{0, 1\}^n$ is a *truth assignment* for $f$ if $f(\xi)=1$. For any Boolean variable $x$ on $\{0, 1\}$, we write $\overline{x}:=1-x$, and each of $x, \overline{x}$ is called a *literal*. An instance of [$3$-Satisfiability]{} is a Boolean function $f$ given as a product of clauses, each clause being the Boolean sum $\vee$ of three literals; the question is whether $f$ admits a truth assignment. The NP-completeness of [$3$-Satisfiability]{} is a fundamental result in complexity theory, see [@garjoh79].
Let $f$ be an instance of [$3$-Satisfiability]{}, consisting of $m$ clauses $C_1, \ldots, C_m$ on $n$ variables $x_1, \ldots, x_n$. Let us build a graph $G_f$ with two specialized vertices $a,b$, such that there will be a hole containing both $a,b$ in $G_f$ if and only if there exists a truth assignment for $f$.
For each variable $x_i$ ($i=1, \ldots, n$), make a graph $G(x_i)$ with eight vertices $a_i, b_i, t_i, f_i, a'_i, b'_i, t'_i, f'_i,$ and ten edges $a_it_i, a_if_i, b_it_i, b_if_i$ (so that $\{a_i, b_i, t_i,
f_i\}$ induces a hole), $a'_it'_i, a'_if'_i, b'_it'_i, b'_if'_i$ (so that $\{a'_i, b'_i, t'_i, f'_i\}$ induces a hole) and $t_if'_i,
t'_if_i$. See Figure \[fig:gxi\].
![Graph $G(x_i)$[]{data-label="fig:gxi"}](fig.reco.4)
![Graph $G(C_j)$[]{data-label="fig:gcj"}](fig.reco.5)
![The two edges added to $G_f$ in the case $u_j^p=x_i$[]{data-label="fig:gf"}](fig.reco.6)
![Graph $G_f$[]{data-label="fig:gf2"}](fig.reco.7)
For each clause $C_j$ ($j=1, \ldots, m$), with $C_j=u_j^1\vee
u_j^2\vee u_j^3$, where each $u_j^p$ ($p=1, 2, 3$) is a literal from $\{x_1, \ldots, x_n, \overline{x}_1, \ldots, \overline{x}_n\}$, make a graph $G(C_j)$ with five vertices $c_j, d_j, v_j^1, v_j^2, v_j^3$ and six edges so that each of $c_j, d_j$ is adjacent to each of $v_j^1,
v_j^2, v_j^3$. See Figure \[fig:gcj\]. For $p=1, 2, 3$, if $u_j^p=x_i$ then add two edges $v_j^pf_i, v_j^pf'_i$, while if $u_j^p=\overline{x}_i$ then add two edges $v_j^pt_i, v_j^pt'_i$.
The graph $G_f$ is obtained from the disjoint union of the $G(x_i)$’s and the $G(C_j)$’s as follows. For $i=1, \ldots, n-1$, add edges $b_ia_{i+1}$ and $b'_ia'_{i+1}$. Add an edge $b'_nc_1$. For $j=1,
\ldots, m-1$, add an edge $d_jc_{j+1}$. Introduce the two specialized vertices $a,b$ and add edges $aa_1, aa'_1$ and $bd_m, bb_n$. See Figures \[fig:gf\] and \[fig:gf2\]. Clearly the size of $G_f$ is polynomial (actually linear) in the size $n+m$ of $f$. Moreover, it is easy to see that $G_f$ contains no triangle, and that $a,b$ are non-adjacent and both have degree $2$.
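The construction of $G_f$ is mechanical enough to write down. In the sketch below (our encoding, not from the paper) a clause is a triple of nonzero integers, $i$ standing for $x_i$ and $-i$ for $\overline{x}_i$, and the two specialized vertices are named `'A'` and `'B'`; the stated properties of $G_f$ can then be verified directly.

```python
# Sketch (our encoding): graphs as {vertex: set of neighbours}; the vertex
# t'_i, for example, is encoded as the pair ("t'", i).

def build_Gf(n, clauses):
    adj = {}
    def add(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # variable gadgets G(x_i): two 4-holes linked by t_i f'_i and t'_i f_i
    for i in range(1, n + 1):
        for p in ('', "'"):
            a, b, t, f = [(s + p, i) for s in 'abtf']
            add(a, t); add(a, f); add(b, t); add(b, f)
        add(('t', i), ("f'", i)); add(("t'", i), ('f', i))
    # clause gadgets G(C_j) plus the literal edges
    for j, clause in enumerate(clauses, 1):
        for p, lit in enumerate(clause, 1):
            v = ('v', j, p)
            add(('c', j), v); add(('d', j), v)
            i = abs(lit)
            if lit > 0:   # u_j^p = x_i : join v_j^p to f_i and f'_i
                add(v, ('f', i)); add(v, ("f'", i))
            else:         # u_j^p = not x_i : join v_j^p to t_i and t'_i
                add(v, ('t', i)); add(v, ("t'", i))
    # chain the gadgets and attach the specialized vertices
    for i in range(1, n):
        add(('b', i), ('a', i + 1)); add(("b'", i), ("a'", i + 1))
    add(("b'", n), ('c', 1))
    for j in range(1, len(clauses)):
        add(('d', j), ('c', j + 1))
    add('A', ('a', 1)); add('A', ("a'", 1))
    add('B', ('d', len(clauses))); add('B', ('b', n))
    return adj
```

On any instance one can check that the resulting graph is triangle-free, has $8n+5m+2$ vertices, and that `'A'`, `'B'` are non-adjacent vertices of degree $2$.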
Suppose that $f$ admits a truth assignment $\xi\in\{0, 1\}^n$. We build a hole in $G_f$ by selecting vertices as follows. Select $a,b$. For $i=1, \ldots, n$, select $a_i, b_i, a'_i, b'_i$; moreover, if $\xi_i=1$ select $t_i, t'_i$, while if $\xi_i=0$ select $f_i, f'_i$. For $j=1, \ldots, m$, since $\xi$ is a truth assignment for $f$, at least one of the three literals of $C_j$ is equal to $1$, say $u_j^p=1$ for some $p\in\{1, 2, 3\}$. Then select $c_j, d_j$ and $v_j^p$. Now it is a routine matter to check that the selected vertices induce a cycle $Z$ that contains $a,b$, and that $Z$ is chordless, so it is a hole. The main point is that there is no chord in $Z$ between some subgraph $G(C_j)$ and some subgraph $G(x_i)$, for that would be either an edge $t_iv_j^p$ (or $t'_iv_j^p$) with $u_j^p=\overline{x}_i$ and $\xi_i=1$, or, symmetrically, an edge $f_iv_j^p$ (or $f'_iv_j^p$) with $u_j^p=x_i$ and $\xi_i=0$, in either case a contradiction to the way the vertices of $Z$ were selected.
Conversely, suppose that $G_f$ admits a hole $Z$ that contains $a,b$. Clearly $Z$ contains $a_1, a'_1$ since these are the only neighbours of $a$ in $G_f$.
\[clm:zgxi\] For $i=1, \ldots, n$, $Z$ contains exactly six vertices of $G(x_i)$: four of them are $a_i, a'_i, b_i, b'_i$, and the other two are either $t_i,
t'_i$ or $f_i, f'_i$.
*Proof.* First we prove the claim for $i=1$. Since $a, a_1$ are in $Z$ and $a_1$ has only three neighbours $a, t_1, f_1$, exactly one of $t_1, f_1$ is in $Z$. Likewise exactly one of $t'_1, f'_1$ is in $Z$. If $t_1, f'_1$ are in $Z$ then the vertices $a, a_1, a'_1, t_1, f'_1$ are all in $Z$ and they induce a hole that does not contain $b$, a contradiction. Likewise we do not have both $t'_1, f_1$ in $Z$. Therefore, up to symmetry we may assume that $t_1, t'_1$ are in $Z$ and $f_1, f'_1$ are not. If a vertex $v_j^p$ of some $G(C_j)$ ($1\le j\le m$, $1\le p\le 3$) is in $Z$ and is adjacent to $t_1$ then, since this $v_j^p$ is also adjacent to $t'_1$, we see that the vertices $a, a_1, a'_1, t_1, t'_1, v_j^p$ are all in $Z$ and induce a hole that does not contain $b$, a contradiction. Thus the neighbour of $t_1$ in $Z\setminus a_1$ is not in any $G(C_j)$ ($1\le j\le m$), so that neighbour is $b_1$. Likewise $b'_1$ is in $Z$. So the claim holds for $i=1$. Since $b_1$ is in $Z$ and exactly one of $t_1, f_1$ is in $Z$, and $b_1$ has degree $3$ in $G_f$, we obtain that $a_2$ is in $Z$, and similarly $a'_2$ is in $Z$. Now the proof of the claim for $i=2$ is essentially the same as for $i=1$, and by induction the claim holds up to $i=n$. $\Box$
\[clm:zgcj\] For $j=1, \ldots, m$, $Z$ contains $c_j, d_j$ and exactly one of $v_j^1, v_j^2, v_j^3$.
*Proof.* First we prove this claim for $j=1$. By Claim \[clm:zgxi\], $b'_n$ is in $Z$ and exactly one of $t'_n, f'_n$ is in $Z$, so (since $b'_n$ has degree $3$ in $G_f$) $c_1$ is in $Z$. Consequently exactly one of $v_1^1, v_1^2, v_1^3$ is in $Z$, say $v_1^1$. The neighbour of $v_1^1$ in $Z\setminus c_1$ cannot be a vertex of some $G(x_i)$ ($1\le i\le n$), for that would be either $t_i$ (or $f_i$) and thus, by Claim \[clm:zgxi\], $t'_i$ (or $f'_i$) would be a third neighbour of $v_1^1$ in $Z$, a contradiction. Thus the other neighbour of $v_1^1$ in $Z$ is $d_1$, and the claim holds for $j=1$. Since $d_1$ has degree $4$ in $G_f$ and exactly one of $v_1^1, v_1^2, v_1^3$ is in $Z$, it follows that its fourth neighbour $c_2$ is in $Z$. Now the proof of the claim for $j=2$ is the same as for $j=1$, and by induction the claim holds up to $j=m$. $\Box$
We can now define a Boolean vector $\xi$ as follows. For $i=1, \ldots, n$, if $Z$ contains $t_i, t'_i$ set $\xi_i = 1$; if $Z$ contains $f_i, f'_i$ set $\xi_i = 0$. By Claim \[clm:zgxi\] this is consistent. Consider any clause $C_j$ ($1\le j\le m$). By Claim \[clm:zgcj\] and up to symmetry we may assume that $v_j^1$ is in $Z$. If $u_j^1 = x_i$ for some $i\in\{1, \ldots, n\}$, then the construction of $G_f$ implies that $f_i, f'_i$ are not in $Z$, so $t_i, t'_i$ are in $Z$, so $\xi_i=1$, so clause $C_j$ is satisfied by $x_i$. If $u_j^1 = \overline{x}_i$ for some $i\in\{1, \ldots, n\}$, then the construction of $G_f$ implies that $t_i, t'_i$ are not in $Z$, so $f_i, f'_i$ are in $Z$, so $\xi_i=0$, so clause $C_j$ is satisfied by $\overline{x}_i$. Thus $\xi$ is a truth assignment for $f$. This completes the proof of the theorem. $\Box$
Now we can prove the main result of this section.
\[thm:prismsnpc\] The following problems are NP-complete:
1. Decide if a graph contains a prism.
2. Decide if a graph contains an odd prism.
3. Decide if a graph contains an even prism.
4. Decide if a graph contains the line-graph of a proper subdivision of $K_4$.
5. Decide if a graph contains the line-graph of a bipartite subdivision of $K_4$.
*Proof.* For each of these five problems we show a reduction from problem $\Pi$ to this problem. So let $(G, a, b)$ be any instance of problem $\Pi$, where $G$ is a triangle-free graph and $a, b$ are non-adjacent vertices of $G$ of degree $2$. Let us call $a', a''$ the two neighbours of $a$ and $b', b''$ the two neighbours of $b$ in $G$.
*Reduction to Problem 1:* Starting from $G$, build a graph $G'$ as follows (see Figure \[fig:reco.8\]): replace vertex $a$ by five vertices $a_1, a_2, a_3, a_4, a_5$ with five edges $a_1 a_2$, $a_1
a_3$, $a_2 a_3$, $a_2 a_4$, $a_3 a_5$, and put edges $a_4 a'$ and $a_5
a''$. Do the same with $b$, with five vertices named $b_1, \ldots,
b_5$ instead of $a_1, \ldots, a_5$ and with the analogous edges. Add an edge $a_1 b_1$. Since $G$ has no triangle, $G'$ has exactly two triangles $\{a_1, a_2, a_3\}$ and $\{b_1, b_2, b_3\}$. Moreover we see that $G'$ contains a prism if and only if $G$ contains a hole that contains $a$ and $b$. So every instance of $\Pi$ can be reduced polynomially to an instance of Problem 1, which proves that Problem 1 is NP-complete.
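For concreteness, the gadget construction above can be sketched in a few lines. This is only an illustrative helper (plain adjacency sets, with hypothetical vertex names such as `"01"` standing for $a_1$ when $a$ is the vertex `0`), not code from the paper:

```python
def build_G_prime(adj, a, b):
    """Replace vertices a and b of a triangle-free graph G by the
    five-vertex triangle gadgets of the Problem 1 reduction, and
    join the two gadgets with the edge a1-b1.

    adj: dict mapping each vertex to a set of neighbours;
    a, b: non-adjacent vertices of degree 2 in G.
    """
    def add_edge(g, u, v):
        g.setdefault(u, set()).add(v)
        g.setdefault(v, set()).add(u)

    # copy G without a and b
    g = {u: set(nb) for u, nb in adj.items() if u not in (a, b)}
    for u in g:
        g[u] -= {a, b}

    for x, (p, q) in ((a, sorted(adj[a])), (b, sorted(adj[b]))):
        v = [f"{x}{i}" for i in range(1, 6)]   # gadget vertices x1..x5
        # triangle x1 x2 x3 plus edges x2 x4 and x3 x5
        for e in ((v[0], v[1]), (v[0], v[2]), (v[1], v[2]),
                  (v[1], v[3]), (v[2], v[4])):
            add_edge(g, *e)
        add_edge(g, v[3], p)   # x4 to the first neighbour (a' or b')
        add_edge(g, v[4], q)   # x5 to the second neighbour (a'' or b'')
    add_edge(g, f"{a}1", f"{b}1")  # the edge a1 b1
    return g
```

Running it on a $6$-cycle with $a$ and $b$ antipodal produces a graph whose only triangles are the two gadget triangles, as the reduction requires.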
![Problem 1: $G$ and $G'$[]{data-label="fig:reco.8"}](fig.reco.8)
*Reduction to Problem 2:* Starting from $G$, build the same graph $G'$ as above. Then build eight graphs $G_{i,j,k}$ ($i, j, k \in
\{0,1\}$) as follows: if $i=1$, subdivide the edge $a_2 a_4$ into a path of length $2$; else do not subdivide it. Likewise, subdivide the edge $a_3 a_5$ if and only if $j=1$; and subdivide the edge $a_1 b_1$ if and only if $k=1$. Now $G$ contains a hole that contains $a$ and $b$ if and only if at least one of the eight graphs $G_{i,j,k}$ contains an odd prism. So every instance of $\Pi$ can be reduced polynomially to eight instances of Problem 2.
*Reduction to Problem 3:* Starting from $G$, build the eight graphs $G_{i,j,k}$ as above. Then $G$ contains a hole that contains $a$ and $b$ if and only if at least one of the eight graphs $G_{i,j,k}$ contains an even prism. So every instance of $\Pi$ can be reduced polynomially to eight instances of Problem 3.
![Problem 4: $G$ and $G''$[]{data-label="fig:reco.9"}](fig.reco.9)
*Reduction to Problem 4:* Starting from $G$, build a graph $G''$ as follows (see Figure \[fig:reco.9\]): remove vertices $a$ and $b$ and add twelve vertices $v_{ab}$, $v_{ac}$, $v_{ad}$, $v_{ba}$, $v_{bc}$, $v_{bd}$, $v_{ca}$, $v_{cb}$, $v_{cd}$, $v_{da}$, $v_{db}$, $v_{dc}$. Add edges such that each of $\{v_{ab}, v_{ac}, v_{ad}\}$, $\{ v_{ba}, v_{bc}, v_{bd}\}$, $\{ v_{ca}, v_{cb}, v_{cd}\}$ and $\{v_{da}, v_{db}, v_{dc}\}$ is a triangle. Add edges $v_{ab}
v_{ba}$, $v_{dc} v_{cd}$, $v_{bd} v_{db}$, $v_{bc} v_{cb}$, $v_{ad}
a'$, $v_{ac} a''$, $v_{da} b'$, $v_{ca} b''$. The graph $G''$ contains exactly four triangles, and $G$ contains a hole through $a$ and $b$ if and only if $G''$ contains the line-graph of a proper subdivision of $K_4$. So every instance of $\Pi$ can be reduced polynomially to an instance of Problem 4.
*Reduction to Problem 5:* Starting from $G''$, make four graphs $G''_{i,j}$ ($i, j\in \{0,1\}$) as follows: if $i=1$ subdivide the edge $v_{ad} a'$ into a path of length $2$, else do not subdivide it. Subdivide likewise the edge $v_{ac} a''$ if and only if $j=1$. Now $G$ contains a hole through $a$ and $b$ if and only if one of the four graphs $G''_{i,j}$ contains the line-graph of a bipartite subdivision of $K_4$. So every instance of $\Pi$ can be reduced polynomially to four instances of Problem 5. This completes the proof of the theorem. $\Box$
Conclusion
==========
We summarize the complexity results mentioned in this paper in the following table, whose columns correspond to the class of graphs taken as instances and whose rows correspond to the subgraph that we look for. The symbol $n$ refers to the number of vertices of the input graph; an entry $n^k$ means solvable in time $O(n^k)$, $0$ means trivial, NPC means NP-complete, and a question mark means unsolved.
\begin{tabular}{l|c|c|c}
 & General graphs & Graphs with no pyramid & Graphs with no odd hole \\
\hline
Pyramid or prism & $n^5$ & $n^5$ & $n^5$ \\
Pyramid & $n^9$ [@CS2002] & $0$ & $0$ \\
Prism & NPC & $n^5$ & $n^5$ \\
LGPS$K_4$ & NPC & $n^{20}$ & $n^{20}$ \\
LGBS$K_4$ & NPC & ? & $n^{20}$ \\
Odd prism & NPC & ? & $n^{20}$ \\
Even prism & NPC & ? & $n^{11}$
\end{tabular}
C. Berge. Les problèmes de coloration en théorie des graphes. [*Publ. Inst. Stat. Univ. Paris*]{} 9 (1960), 123–160.
C. Berge. Färbung von Graphen, deren sämtliche bzw. deren ungerade Kreise starr sind (Zusammenfassung). [*Wiss. Z. Martin Luther Univ. Math.-Natur. Reihe*]{} (Halle-Wittenberg) 10 (1961), 114–115.
C. Berge. [*Graphs*]{}. North-Holland, Amsterdam/New York, 1985.
M.E. Bertschi, Perfectly contractile graphs. [*J. Comb. Th. B*]{} [50]{} (1990), 222–230.
D. Bienstock. On the complexity of testing for even holes and induced odd paths. *Disc. Math.* 90 (1991), 85–92. Corrigendum in *Disc. Math.* 102 (1992), 109.
M. Chudnovsky, G. Cornuéjols, X. Liu, P. Seymour, K. Vušković. Cleaning for Bergeness. Manuscript, 2002.
M. Chudnovsky, N. Robertson, P. Seymour, R. Thomas. The strong perfect graph theorem. Manuscript, Princeton Univ., 2002.
M. Chudnovsky, P. Seymour. Recognizing Berge graphs. Manuscript, Princeton Univ., 2002.
G. Cornuéjols, X. Liu, K. Vušković. A polynomial algorithm for recognizing perfect graphs. Manuscript, Carnegie-Mellon Univ., 2002.
H. Everett, C.M.H. de Figueiredo, C. Linhares Sales, F. Maffray, O. Porto, B.A. Reed. Even pairs. In [@ramree01], 67–92.
J. Fonlupt, J.P. Uhry. Transformations which preserve perfectness and $h$-perfectness of graphs. [*Ann. Disc. Math.*]{} [16]{} (1982), 83–85.
M.R. Garey, D.S. Johnson. [*Computers and Intractability: A Guide to the Theory of NP-Completeness.*]{} W.H. Freeman, San Francisco, 1979.
C. Linhares Sales, F. Maffray, B.A. Reed. On planar perfectly contractile graphs. [*Graphs and Combin.*]{} [13]{} (1997), 167–187.
F. Maffray, N. Trotignon. A class of perfectly contractile graphs. Research report 67, Laboratoire Leibniz, Grenoble, France, http://www-leibniz.imag.fr/LesCahiers. Submitted for publication.
J.L. Ramírez-Alfonsín, B.A. Reed. [*Perfect Graphs*]{}. Wiley Interscience, 2001.
B.A. Reed. Problem session on parity problems (Public communication). [*DIMACS Workshop on Perfect Graphs*]{}, Princeton University, New Jersey, 1993.
---
abstract: 'Stellar systems consisting of multiple stars tend to undergo tidal interactions when the separations between the stars are short. While tidal phenomena have been extensively studied, a certain tidal effect exclusive to hierarchical triples (triples in which one component star has a much wider orbit than the others) has hardly received any attention, mainly due to its complexity and consequent resistance to being modelled. This tidal effect is the tidal perturbation of the tertiary by the inner binary, which in turn depletes orbital energy from the inner binary, causing the inner binary separation to shrink. In this paper, we develop a fully numerical simulation of these “tertiary tides” by modifying established tidal models. We also provide general insight as to how close a hierarchical triple needs to be in order for such an effect to take place, and demonstrate that our simulations can effectively retrieve the orbital evolution for such systems. We conclude that tertiary tides are a significant factor in the evolution of close hierarchical triples, and strongly influence at least $\sim1\%$ of all multiple star systems.'
author:
- |
Yan Gao$^{1,2}$[^1], Alexandre C.M. Correia$^{3,4,5}$, Peter P. Eggleton$^{6}$ and Zhanwen Han$^{1,2}$\
$^{1}$Yunnan Observatories, Chinese Academy of Sciences, Kunming 650011, China\
$^{2}$Key Laboratory for the Structure and Evolution of Celestial Objects, Chinese Academy of Sciences, Kunming 650011, China\
$^{3}$Department of Physics, University of Coimbra, 3004-516 Coimbra, Portugal\
$^{4}$CIDMA, Department of Physics, University of Aveiro, 3810-193 Aveiro, Portugal\
$^{5}$ASD, IMCCE, Paris Observatory, PSL University, 77 Av. Denfert-Rochereau, 75014 Paris, France\
$^{6}$Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94551, USA
date: Accepted for publication in MNRAS
title: Numerical Modelling of Tertiary Tides
---
celestial mechanics, (stars:) binaries (including multiple): close, stars: evolution
Introduction
============
Stars in close multiple systems are subject to tidal forces, which play a pivotal role in shaping their futures. The intrinsic mechanism behind these tidal forces is that, for every celestial body within a multiple system, the motion of the other bodies subjects it to a changing gravitational field, inducing internal motion within it, which in turn affects the gravitational field emanating from it, thereby influencing the rest of the system as a whole. In conjunction with dissipative processes, tidal forces facilitate, among many other effects, the migration of angular momentum from one part of the system to another. Due to the importance of the various roles of these forces, their nature has been investigated extensively in previous studies.
Despite the relative simplicity of the concept, clarity has yet to be achieved as to exactly how tidal forces ought to be modelled. Some researchers favour a treatment based on equilibrium tides (usually referred to as the “equilibrium tide model”), while others advocate a treatment that approximates the celestial body receiving the tidal force as an oscillator with many different oscillation modes, each one absorbing energy in its own way (known as the “dynamical tide model”). It has been pointed out that the two models may be complementary [e.g. @1998ApJ...499..853E], with each model being optimized for a special set of cases, but even so, it is still unclear where the line should be drawn when dealing with specific systems.
Yet however great the controversy may be when it comes to modelling tidal processes, there is a general consensus regarding the macroscopic effects of tidal forces; that they tend to synchronise the rotations and orbits of all the bodies involved, circularise orbits by causing a decay in their ellipticities, and convert certain portions of the kinetic and potential energies of the bodies involved into heat, which can then be radiated away. For instance, for a 2-body system in a Keplerian orbit under tidal effects, given time, the system must ultimately evolve into a circular orbit, with the orbital angular velocity being equal to the respective rotational angular velocities of each body, regardless of how eccentric their initial orbit may be or how much their initial angular velocities may differ.
Of all the tidal effects to which close multiple systems are exposed, only three remain relevant for the orbital evolution of a hierarchical triple system (consisting of an inner binary of masses $m_{\rm 1}$ and $m_{\rm 2}$, as well as an outer tertiary of mass $m_{\rm 3}$). The first is the tidal locking between $m_{\rm 1}$ and $m_{\rm 2}$, which is no different from 2-body tidal effects in general, and has historically been the subject of intense study. The second is the tidal locking of $m_{\rm 3}$ to the inner binary, which will eventually synchronize the rotation and the orbit of $m_{\rm 3}$ [e.g. @2016CeMDA.126..189C]. The third and final effect is the dumping of energy from $m_{\rm 1}$ and $m_{\rm 2}$ to $m_{\rm 3}$, which $m_{\rm 1}$ and $m_{\rm 2}$ achieve by tidally distorting $m_{\rm 3}$ as illustrated in the cartoon depiction in Fig. \[Fig1\] (see also Animated Figure 1).
As portrayed in Fig. \[Fig1\], $m_{\rm 3}$ receives the greatest amount of tidal force from $m_{\rm 1}$ and $m_{\rm 2}$ when all 3 bodies are aligned (left panel), and receives the least when they are orthogonal (right panel). This change in received tidal force translates into a change in the degree of tidal distortion (elongation in the direction of $m_{\rm 1}$ and $m_{\rm 2}$) that $m_{\rm 3}$ undergoes. Consequently, if the internal tidal friction in $m_{\rm 3}$ is strong enough to (at least partly) brake the resultant internal motion, this leads to (at least part of) the energy carried in the tidal distortion difference being converted to heat. At whatever rate this process generates heat, it must essentially be fuelled by the orbital energy within the orbit of $m_{\rm 1}$ and $m_{\rm 2}$, which is the driving motion behind the tidal distortion of $m_{\rm 3}$. Therefore, this effect also serves to drive the inner binary separation to be smaller. Here, it should be noted that these tidal effects will not end in tidal locking, as is often the case with 2-body tidal effects, since no rotation of $m_{\rm 3}$ can decrease the difference in self-gravitational potential energy in the transition between the left and right panels of Fig. \[Fig1\].
![Illustration of how the inner binary affects the third body when tertiary tides become significant (see also the animated figure attached to this paper). The state when all three bodies are aligned, as depicted in the left panel, is defined as state $\alpha$, and the state in which the three bodies are at the vertices of an isosceles triangle, as depicted in the right panel, is defined as state $\beta$. The solid lines display the shape of the tertiary at equilibrium tidal distortion, while the dotted lines represent the same shape in the other state for comparison. The tidal distortion of the tertiary is greatly exaggerated. \[Fig1\]](Fig1.eps)
Of the three effects mentioned above, the first two have already been extensively investigated, as is evident from the literature. Very little attention, however, has been paid to the third. Admittedly, this is not entirely without good reason; in a vast majority of cases, $m_{\rm 3}$ is much smaller than its Roche Lobe, and the tidal distortion it undergoes is consequently insignificant. However, whenever the condition comes to pass that $m_{\rm 3}$ is more or less the same size as its Roche Lobe, this third effect becomes interesting for one simple reason: as mentioned above, this third effect can never be mitigated by tidal locking, and therefore can theoretically form an endless drain of the orbital energy of $m_{\rm 1}$ and $m_{\rm 2}$. Furthermore, we shall show that, unlike any other merger-contributing mechanism investigated so far, [*the greater the inner binary separation, the greater this energy drain per unit time will be*]{}. In other words, this effect is rare in that it preferentially allows large binary separations to decrease. So far, triple systems, in which $m_{\rm 3}$ is close enough for this third effect to have been prominent in the past, have occasionally been observed [e.g. @2011Sci...332..216D]. Speculation has also arisen that the inner binaries have been driven closer together due to 3-body tidal effects, which are not inconsistent with observed properties of these triples [e.g. @2013MNRAS.429.2425F]. However, there is not, to the knowledge of the authors, as of yet any work that provides a simulation which can recover the exact details of this third effect, and therefore the way in which the orbits of triple systems under its influence evolve is not well understood. We seek to remedy this.
For the rest of this paper, we shall refer to this third effect by the names “tertiary tides" or “TTs" for short. In what follows in this paper, we describe our model and its numerical implementation in §2, and present the results of our calculations for some specific systems in §3. Finally, our conclusions regarding the influence of tertiary tides in general, as well as the limitations of our work, are provided in §4, along with an extrapolation of what work could be done in the future.
Treatment of Tides and Tidal Lags
=================================
To reliably simulate a close triple system undergoing TTs, we adopt a 2-stage simulation based on 8th-order Runge-Kutta methods (hereafter RK8), modelling the orbital motion of three bodies in a hierarchical triple configuration. We treat the bodies constituting the inner binary $m_{\rm 1}$ and $m_{\rm 2}$ as point masses, whereas the third body $m_{\rm 3}$ is modelled as a body with a gravitational field varying with time, in order to account for its tidal distortion. In the first stage, we calculate the amount of energy extracted from the inner orbit per unit time, in which TTs are taken into account by means of a modified version of classical 2-body tidal models. For the second stage, we adopt a viscoelastic tidal model, calibrating an unknown parameter $\tau$ in this model by varying the parameter until the energy extraction rates of the two models match. This provides detailed positions and velocities of all 3 bodies, as well as the rotation and deformation of $m_{\rm 3}$, as a function of time. From these positions and velocities, orbital parameters (such as semimajor axes and periods) of both the inner and outer orbits can be retrieved. The details of each of these two stages are presented below.
Stage 1 Simulations {#s1ss1}
-------------------
We consider the special case of a hierarchical triple, consisting of a double point mass inner binary ($m_{\rm 1}$, $m_{\rm 2}$) in a circular orbit, and a coplanar third body ($m_{\rm 3}$), also in a circular orbit around the center of mass (COM) of the inner binary (Jacobi coordinates). For simplicity, we assume that $m_{\rm 1}=m_{\rm 2}$, and that $m_{\rm 3}$ is tidally locked to the inner binary’s COM - in such a system, we do not consider rotational effects. Had it been the case that internal dissipation within $m_{\rm 3}$ were very efficient, the rate at which orbital energy is dumped from the inner binary to $m_{\rm 3}$ can be shown to be $${\frac{{\rm d}E}{{\rm d}t}}{\sim}{\frac{135}{4}}\frac{Gm^{2}R_{\rm 3}^{5}a_{\rm 1}^{2}}{a_{\rm 2}^{8}}{\frac{4}{P_{\rm in}}},
\label{delta_p_homo}$$ via a set of trivial calculations (see appendix \[app\_other\] for details). Here, $m=m_{\rm 1}=m_{\rm 2}$, $R_{\rm 3}$ is the radius of $m_{\rm 3}$, $a_{\rm 1}$ and $a_{\rm 2}$ are the semi-major axes of the inner and outer orbits respectively, and $P_{\rm in}$ is the inner orbital period. However, since dissipation efficiency might be very low for this process (and we indeed find it to be so in our work), we need a much more detailed set of simulations, detailed below.
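As a quick sanity check, the efficient-dissipation estimate of Eq. \[delta\_p\_homo\] can be evaluated directly. The sketch below (SI units, with the inner period obtained from Kepler's third law for total inner mass $2m$) reproduces only this upper estimate, not the detailed simulations:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def dEdt_upper(m, R3, a1, a2):
    """Upper estimate of the tertiary-tide energy drain,
    valid only if dissipation inside m3 were fully efficient.
    m = m1 = m2 [kg]; R3 [m]; a1, a2 [m].  Returns watts."""
    # inner orbital period from Kepler's third law (total mass 2m)
    P_in = 2 * math.pi * math.sqrt(a1**3 / (G * 2 * m))
    return (135 / 4) * G * m**2 * R3**5 * a1**2 / a2**8 * 4 / P_in
```

The steep $a_{\rm 2}^{-8}$ dependence is visible immediately: doubling the outer separation reduces the drain by a factor of $2^{8}$.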
At each moment, $m_{\rm 3}$ has a proclivity to assume the distortion corresponding to the equilibrium tide of that particular moment. This equilibrium distortion at each moment leads to a gravitational field that can be approximately expressed by (see appendix \[delayed\_to\_visco\]) $$V\left(r,{\psi}\right)=-\frac{Gm_{\rm 3}}{r} \left[1+k_{\rm 2} \, \zeta(\phi)\left(\frac{R_{\rm 3}}{r}\right)^{2}P_{\rm 2}(\cos{\psi}) \right],
\label{phi}$$ where $k_{\rm 2}$ is the Love number for $m_{\rm 3}$ (for polytropic stars with $n=1.5$, we use $k_{\rm 2}=0.2$ as an approximation, following @2017MNRAS.472.4965Y), $r$ is the distance measured from the center of $m_{\rm 3}$, the angle $\psi$ is defined to be zero in the direction of the tertiary bulge maximum, and ${\zeta}$ is the tidal distortion parameter, $$\zeta (\phi) = \left[\frac{P_{\rm 2}(\cos{\psi_1})}{(r_{\rm 1}/a_{\rm 2})^3}+\frac{P_{\rm 2}(\cos{\psi_2})}{(r_{\rm 2}/a_{\rm 2})^3}\right]
\frac{m}{m_{\rm 3}}\left(\frac{R_{\rm 3}}{a_{\rm 2}}\right)^3
, \label{zetaphi}$$ where $r_{\rm 1}$ and $r_{\rm 2}$ are the distances from $m_{\rm 1}$ to $m_{\rm 3}$ and $m_{\rm 2}$ to $m_{\rm 3}$, respectively, ${\phi}$ is the angle between $\overrightarrow{m_{\rm 1}m_{\rm 2}}$ and $\overrightarrow{m_{\rm 3}C}$ ($C$ being the COM of the inner binary), and $\psi_1$ and $\psi_2$ are the $\psi$ values for $m_{\rm 1}$ and $m_{\rm 2}$ respectively. For circular orbits and $\alpha = a_1/a_2$, $$\begin{split}
\left(\frac{r_{\rm 1}}{a_{\rm 2}}\right)^3&=\left(1+\alpha{\cos\phi}+\frac{1}{4}\alpha^2\right)^{3/2} \ ,\\
\left(\frac{r_{\rm 2}}{a_{\rm 2}}\right)^3&=\left(1-\alpha{\cos\phi}+\frac{1}{4}\alpha^2\right)^{3/2} \ .
\end{split}$$ Since internal dissipation is not instantaneous, the tertiary never achieves this equilibrium. Instead, it assumes some tidal distortion equivalent to its equilibrium state a certain amount of time $t_{\rm lag}$ ago, where $t_{\rm lag}$ is usually termed the tidal lag time.
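For reference, the equilibrium value of $\zeta$ in Eq. \[zetaphi\] can be computed from the geometry alone. In the sketch below the bulge maximum is taken along the line from $m_{\rm 3}$ to the inner COM, a good approximation when $a_{\rm 1}\ll a_{\rm 2}$; this is an illustrative assumption, not a prescription from the paper:

```python
import math

def P2(c):
    """Second Legendre polynomial."""
    return 0.5 * (3 * c * c - 1)

def zeta(phi, alpha, m_over_m3, R3_over_a2):
    """Equilibrium tidal distortion parameter zeta(phi).
    phi: angle between the inner-binary axis and the m3-COM line;
    alpha = a1/a2; the bulge axis is taken towards the inner COM."""
    # positions of m1, m2 relative to m3, in units of a2 (COM on x-axis)
    half = 0.5 * alpha
    m1 = (1 + half * math.cos(phi), half * math.sin(phi))
    m2 = (1 - half * math.cos(phi), -half * math.sin(phi))
    z = 0.0
    for x, y in (m1, m2):
        r = math.hypot(x, y)       # r_i / a_2
        cos_psi = x / r            # angle measured from the bulge axis
        z += P2(cos_psi) / r**3
    return z * m_over_m3 * R3_over_a2**3
```

One can check that $\zeta$ is largest in the aligned state $\alpha$ ($\phi=0$ or $\pi$) and smallest in the orthogonal state $\beta$ ($\phi=\pi/2$), consistent with Fig. \[Fig1\].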
But how much time is $t_{\rm lag}$? To answer this question, we draw an analogy from the $t_{\rm lag}$ in binary tidal locking. In a binary with component stars $A$ and $B$ (totally different from and irrelevant to the triple system mentioned above), where star $A$ is an extended object which is not rotating, and star $B$ is a point mass orbiting star $A$ in a circular orbit at an angular velocity of $\omega$, binary tidal locking occurs as follows. The existence of star $B$ is supposed to distort star $A$ at every epoch in a way such that star $A$ is elongated in the direction of star $B$, forming two bulges on its surface, one pointing towards and the other away from star $B$, if equilibrium tides are assumed. However, since star $A$ undergoes internal friction due to viscous processes, the orientation of those bulges always lags behind the orientation corresponding to equilibrium tide, or in other words, star $A$ is constantly in a state of distortion corresponding to its tidal equilibrium a certain amount of time $t_{\rm lag}$ ago. Thus, the bulges are always aligned towards a position that star $B$ was the same amount of time $t_{\rm lag}$ ago, corresponding to an angle $\lambda$ away from $B$, thereby introducing a torque on the orbit of star $B$, decreasing its orbital angular momentum. Conversely, the rotational angular momentum of star $A$ must increase due to conservation of angular momentum throughout the system.
It has been shown that $t_{\rm lag}$ can be expressed as $$t_{\rm lag}=\frac{P}{2{\pi}}{\lambda},
\label{tlag}$$ where $P$ is the orbital period of the binary, and $\lambda$ is the tidal lag angle, which can be expressed as $${\lambda}={\omega}t_{\rm dyn}^{2}\left(\frac{1}{t_{\rm diss}}\right)
\label{lag_angle}$$ where $t_{\rm dyn}$ is the dynamical timescale of star $A$, $\omega$ is the orbital angular velocity described above, and $t_{\rm diss}$ is the typical dissipation timescale of star $A$. According to its definition, $t_{\rm dyn}$ is simply $$t_{\rm dyn}=\sqrt{\frac{R^3}{GM}}.
\label{tdyn}$$ The value of $t_{\rm diss}$, however, is a somewhat more complicated issue, and here we focus only on the aspects of its calculation immediately relevant to this paper. For a star with a convective envelope (which is the case for both low-mass main sequence stars and red giants), turbulent convection dominates the dissipation process for equilibrium tides [e.g. @2005ASPC..333....4Z], and $t_{\rm diss}$ is simply the convective timescale $t_{\rm conv}$ when the tidal period (the period of variation of the tidal forcefield, which is also the orbital period in a 2-body scenario) is longer than $t_{\rm conv}$: $$t_{\rm diss}=t_{\rm conv}=\left(\frac{MR^2}{L}\right)^{1/3}.
\label{tconv}$$ where $M$, $R$ and $L$ are the mass, radius and luminosity of star $A$, respectively. However, it is important to note that the above equation is only valid when the tidal forcefield changes very slowly, giving the perturbed body ample time to dissipate energy, as is the case when the tidal period is longer than $t_{\rm conv}$. When the tidal period is shorter than $t_{\rm conv}$, the perturbed body doesn’t have sufficient time to dissipate this energy before the tidal forcefield reverts back to its former state, in which case a phenomenon called “fast tides" starts to come into effect. When this happens, $t_{\rm diss}$ ought to be calculated via either $$t_{\rm diss}=\left(\frac{t_{\rm conv}}{P}\right)t_{\rm conv},
\label{tdiss1}$$ or $$t_{\rm diss}=\left(\frac{t_{\rm conv}}{P}\right)^{2}t_{\rm conv},
\label{tdiss2}$$ or perhaps $$t_{\rm diss}=\left(\frac{t_{\rm conv}}{P}\right)^{5/3}t_{\rm conv},
\label{tdiss3}$$ according to @2005ASPC..333....4Z, @1977ApJ...211..934G, and @1997ApJ...486..403G, respectively. It is not currently known which, if any at all, of these treatments approximates fast tides well, but recent results seem to favour the first prescription for stellar interiors [@2007ApJ...655.1166P; @2009ApJ...704..930P], and hence we will use this prescription for our following calculations.
Having found a way to calculate $t_{\rm lag}$ for binary tides, we return to our previous triple system with $m_{\rm 1}$, $m_{\rm 2}$, and $m_{\rm 3}$. The method above can be converted into a calculation for $t_{\rm lag}$ in TTs as shown below.
The tidal distortion of $m_{\rm 3}$ experienced during TTs depicted in Fig. \[Fig1\] is a combination of separate tidal distortions by $m_{\rm 1}$ and $m_{\rm 2}$ (see Fig. \[Fig2\] for an illustration of a single component). Since $m_{\rm 1}=m_{\rm 2}$, and considering the general symmetry of the inner binary, the $t_{\rm lag}$ of the distortion caused by $m_{\rm 1}$ must be equal to that caused by $m_{\rm 2}$. Hence, we only need to calculate the value of $t_{\rm lag}$ due to either $m_{\rm 1}$ or $m_{\rm 2}$ in order to retrieve the $t_{\rm lag}$ for the tidal distortion of $m_{\rm 3}$.
![Dissection of tertiary tides - how the tertiary is tidally distorted in reaction to the tidal forcing from one component of the inner binary alone. Here, the effects of the other component of the inner binary have been eliminated, but its companion is assumed to travel in the same orbit as before. The solid lines display the shape of the tertiary at equilibrium tidal distortion, while the dotted lines represent its shape when no tidal forces are applied. The dash-dotted lines indicate the direction of elongation of the equilibrium tide distortion, which is invariably in the direction of the perturbing body. The time given in the upper left corner of each panel is given in units of inner binary orbital period. It can be seen that the net result is an oscillatory rotational effect, but this rotation is largely cancelled out by the effects of the other inner binary component when full tertiary tides are considered. The tidal distortion of the tertiary is greatly exaggerated. \[Fig2\]](Fig2.eps)
How, then, should this $t_{\rm lag}$ be calculated, and how long is it for the triple system in question? To answer this, we revert to the calculation of the tidal lag time in a binary system, as presented in Eqs. \[tlag\] and \[lag\_angle\]. For our TTs, the tertiary lag time should also be calculable with these equations, albeit with minor modifications as to what physical quantities each of the variables correspond to in a triple system under TTs. The dynamical timescale $t_{\rm dyn}$ is indisputably the dynamical timescale of the tertiary, but for the other variables, namely $P$, $\omega$, and $t_{\rm diss}$, it may not be so obvious.
To find what value to substitute for $P$, one must discern exactly what $P$ is in Eq. \[tlag\]. Considering that $\lambda$ is simply the tidal lag angle, and hence whatever remains is merely a conversion factor from lag angle to lag time, it becomes clear that $P$ is more related to how the perturbing body is moving relative to the perturbed body than it is to the intrinsic period of the acting tidal force. In other words, it would not matter at any particular moment if the perturbing body were travelling in a circular orbit, or in a straight line tangential to that circular orbit, and had happened to be at the point of intersection at that particular moment - both scenarios would result in the same $t_{\rm lag}$, had the lag angle been the same. In fact, Eqs. \[tlag\] and \[lag\_angle\] could be more accurately expressed as $$t_{\rm lag}=\frac{d}{v}\left({\omega}t_{\rm dyn}^{2}\frac{1}{t_{\rm diss}}\right).
\label{tlag_alt}$$ where $d$ is the distance between the perturbed body and its companion, and $v$ is the relative velocity between the two bodies. To extrapolate this to tertiary tides, it may be beneficial to imagine the moment when the inner binaries are in state $\alpha$ of Fig. 1 (left panel). Assuming that the tertiary is not rotating relative to the inner binary, and considering that $a_{\rm 2}{\gg}a_{\rm 1}$, one can see that, at this moment, $d$ should be substituted by $a_{\rm 2}$, and $v$ by the inner orbital velocity (the velocity at which $m_{\rm 1}$ and $m_{\rm 2}$ move relative to their COM), hereby denoted as $v_{\rm in}$. At moments when the triple system is not in state $\alpha$, the calculation of what values to substitute will be more problematic (possibly starting with the second term in Equation 12 of @1998MNRAS.300..292K), and is beyond the scope of this paper. For the purposes of this study, we assume that $t_{\rm lag}$ is the same at all epochs, and hence we use $d=a_{\rm 2}$, $v=v_{\rm in}$ for all epochs, which is equivalent to $$P=\frac{2{\pi}a_{\rm 2}}{v_{\rm in}}.
\label{perdef}$$
The $\omega$ in Eq. \[lag\_angle\] is a measure of the lack of synchronism between the rotation of the perturbed body and its companion’s orbit, and is therefore a function of the periodical variation of the tidal force acting upon the perturbed body. Thus, for tertiary tides, we set $${\omega}=2{\pi}\left({\frac{1}{P_{\rm in}}}-{\frac{1}{P_{\rm rot}}}\right)=2{\pi}\left({\frac{1}{P_{\rm in}}}-{\frac{1}{P_{\rm out}}}\right){\approx}\frac{2{\pi}}{P_{\rm in}},
\label{omegadef}$$ where $P_{\rm in}$ is the orbital period of the inner binary, $P_{\rm rot}$ is the rotational period of $m_{\rm 3}$, which we assume in our Stage 1 simulations to be equal to the outer orbital period $P_{\rm out}$ due to tidal locking.
As for the dissipation timescale $t_{\rm diss}$, since the tidal period is equal to $P_{\rm in}$, which is consistently greater than the convective timescale $t_{\rm conv}$ for all cases of TTs we consider, $t_{\rm diss}$ needs to be calculated via a prescription for fast tides, for which we use Eq. \[tdiss1\], and therefore $$t_{\rm diss}=\frac{t_{\rm conv}^{2}}{P_{\rm in}},
\label{tdissdef}$$ where $t_{\rm conv}$ can be calculated, as with any normal star, via Eq. \[tconv\] above.
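Putting Eqs. \[tlag\] through \[tconv\] together with the substitutions of Eqs. \[perdef\], \[omegadef\] and \[tdissdef\], the tertiary lag time reduces to a one-line computation. The following sketch (SI units throughout) simply chains those formulas:

```python
import math

G = 6.674e-11  # gravitational constant, SI units

def t_lag_tertiary(M3, R3, L3, a2, P_in, v_in):
    """Tertiary tidal lag time for TTs.
    M3, R3, L3: mass, radius, luminosity of the tertiary;
    a2: outer semi-major axis; P_in: inner orbital period;
    v_in: inner orbital velocity."""
    t_dyn = math.sqrt(R3**3 / (G * M3))       # dynamical timescale
    t_conv = (M3 * R3**2 / L3) ** (1 / 3)     # convective timescale
    t_diss = t_conv**2 / P_in                 # fast-tides prescription
    omega = 2 * math.pi / P_in                # tidal forcing frequency
    lam = omega * t_dyn**2 / t_diss           # tidal lag angle
    P = 2 * math.pi * a2 / v_in               # effective period
    return P / (2 * math.pi) * lam            # lag time
```

Note that $P_{\rm in}$ cancels between $\omega$ and the fast-tides $t_{\rm diss}$, leaving $t_{\rm lag}=2\pi a_{\rm 2}t_{\rm dyn}^{2}/(v_{\rm in}t_{\rm conv}^{2})$, so the lag time scales inversely with the inner orbital velocity.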
Having ascertained the value of $t_{\rm lag}$, we then proceed with three sets of simulations to calculate the rate at which TTs extract orbital energy.
In the first set, we run a simulation where $m_{\rm 1}$, $m_{\rm 2}$ and $m_{\rm 3}$ are all treated as point masses by solving the following set of equations using 8th-order Runge-Kutta:
$$\frac{{\rm d}{\bm v}_{\rm i}}{{\rm d}t}=\frac{Gm_{\rm j}}{r_{\rm ij}^{\rm 3}}({\bm R}_{\rm j}-{\bm R}_{\rm i})+\frac{Gm_{\rm k}}{r_{\rm ik}^{\rm 3}}({\bm R}_{\rm k}-{\bm R}_{\rm i})$$
where boldface denotes vectors, ${\bm R}_{\rm i}$ is the position vector of $m_{\rm i}$, and $(i,j,k)$ runs over $(1,2,3)$, $(2,1,3)$ and $(3,1,2)$. This is done to check that the triple system is dynamically stable without the effects of tidal forces, and also serves to establish a baseline for the errors incurred during the simulations.
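A minimal version of this first set of simulations might look as follows. For brevity the sketch uses a classical 4th-order Runge-Kutta step rather than the 8th-order pair used in the actual simulations, and planar motion in units with $G=1$:

```python
import numpy as np

G = 1.0  # gravitational units with G = 1

def accelerations(r, m):
    """Pairwise point-mass gravitational accelerations (planar).
    r: (n, 2) array of positions; m: (n,) array of masses."""
    a = np.zeros_like(r)
    for i in range(len(m)):
        for j in range(len(m)):
            if i != j:
                d = r[j] - r[i]
                a[i] += G * m[j] * d / np.linalg.norm(d)**3
    return a

def rk4_step(r, v, m, dt):
    """One classical Runge-Kutta step for positions and velocities."""
    k1v = accelerations(r, m);                k1r = v
    k2v = accelerations(r + 0.5 * dt * k1r, m); k2r = v + 0.5 * dt * k1v
    k3v = accelerations(r + 0.5 * dt * k2r, m); k3r = v + 0.5 * dt * k2v
    k4v = accelerations(r + dt * k3r, m);       k4r = v + dt * k3v
    r2 = r + dt / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
    v2 = v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return r2, v2
```

Conservation of total energy along the integration provides exactly the kind of error baseline described above: for a circular two-body test orbit the energy drift over thousands of steps stays many orders of magnitude below the orbital energy.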
In the second set, we run a 3-body simulation as before, but modulate the gravitational field of $m_{\rm 3}$ according to Eq. \[phi\] with a giant tertiary (large radius), and a tidal lag. The lag is implemented by letting ${\zeta}$ at each timestep be what its equilibrium value would have been $t_{\rm lag}$ ago. In other words, $$\begin{split}
\frac{{\rm d}{\bm v}_{\rm 1}}{{\rm d}t}&=\frac{Gm_{\rm 2}}{r_{\rm 12}^{\rm 3}}({\bm R}_{\rm 2}-{\bm R}_{\rm 1})-\frac{V\left(r_{\rm 13},{\psi}_{\rm 1}\right)}{r_{\rm 13}^{\rm 2}}({\bm R}_{\rm 3}-{\bm R}_{\rm 1}),\\
\frac{{\rm d}{\bm v}_{\rm 2}}{{\rm d}t}&=\frac{Gm_{\rm 1}}{r_{\rm 21}^{\rm 3}}({\bm R}_{\rm 1}-{\bm R}_{\rm 2})-\frac{V\left(r_{\rm 23},{\psi}_{\rm 2}\right)}{r_{\rm 23}^{\rm 2}}({\bm R}_{\rm 3}-{\bm R}_{\rm 2}),\\
\frac{{\rm d}{\bm v}_{\rm 3}}{{\rm d}t}&=\frac{Gm_{\rm 1}}{r_{\rm 31}^{\rm 3}}({\bm R}_{\rm 1}-{\bm R}_{\rm 3})+\frac{Gm_{\rm 2}}{r_{\rm 32}^{\rm 3}}({\bm R}_{\rm 2}-{\bm R}_{\rm 3}).
\end{split}$$ where the gravitational potential function $V\left(r,{\psi}\right)$ is given by Eq. \[phi\], and the value of $\zeta$ in the function is taken to be its equilibrium value a time $t_{\rm lag}$ earlier. This is done to check that the triple system remains dynamically stable after TTs are applied, and that the distortions of $m_{\rm 3}$ will not disintegrate the system before TTs come into effect. In principle, the energy extraction rate could be found using this method, but the errors induced by our approximations exceed the benchmark set by our first set of simulations, so a third set of simulations is needed to find this extraction rate.
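One way to implement "the equilibrium value $t_{\rm lag}$ ago" is to record the equilibrium quantity at each timestep and interpolate the stored history at $t-t_{\rm lag}$. The class below is a minimal sketch of that bookkeeping; the name `LaggedValue` and the fallback to the earliest stored value before a full lag has elapsed are our own choices, not taken from the original implementation:

```python
import numpy as np

class LaggedValue:
    """Store a time-stamped history of an equilibrium quantity and
    evaluate it a fixed lag t_lag in the past (linear interpolation).

    Before t_lag has elapsed, np.interp falls back to the earliest value."""

    def __init__(self, t_lag):
        self.t_lag = t_lag
        self.times, self.values = [], []

    def record(self, t, value):
        # Called once per timestep with the instantaneous equilibrium value.
        self.times.append(t)
        self.values.append(value)

    def lagged(self, t):
        # The value the equilibrium had at time t - t_lag.
        return float(np.interp(t - self.t_lag, self.times, self.values))

# Toy usage: an equilibrium zeta oscillating with the inner orbit.
zeta_hist = LaggedValue(t_lag=0.1)
for t in np.linspace(0.0, 2.0, 201):
    zeta_hist.record(t, np.sin(2.0 * np.pi * t))
# The zeta used in the force law at t = 1.0 is the equilibrium value at t = 0.9:
zeta_now = zeta_hist.lagged(1.0)
```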
In the third set of simulations, we model the inner binary orbit only, superimposing a varying gravitational field centred at a distance $a_{\rm 2}$ from the COM of the inner binary. This field is equivalent to the effect of an $m_{\rm 3}$ tidally distorted by the orbiting inner binary, minus that of an $m_{\rm 3}$ tidally distorted by a point mass of $m_{\rm 1}+m_{\rm 2}$. The tidal lags are dealt with as before. The effective equations for this set of simulations are $$\begin{split}
\frac{{\rm d}{\bm v}_{\rm 1}}{{\rm d}t}&=\frac{Gm_{\rm 2}}{r_{\rm 12}^{\rm 3}}({\bm R}_{\rm 2}-{\bm R}_{\rm 1})+k_{\rm 2}\left({\zeta}\left({\phi}\right)-{\zeta}_{\rm eq}\right)\\
&~~~~\left(\frac{R_{\rm 3}}{a_{\rm 2}}\right)^{2}P_{\rm 2}(\cos{\psi_{\rm 1}})\frac{Gm_{\rm 3}}{r_{\rm 13}^{\rm 3}}({\bm R}_{\rm 3}-{\bm R}_{\rm 1}),\\
\frac{{\rm d}{\bm v}_{\rm 2}}{{\rm d}t}&=\frac{Gm_{\rm 1}}{r_{\rm 21}^{\rm 3}}({\bm R}_{\rm 1}-{\bm R}_{\rm 2})+k_{\rm 2}\left({\zeta}\left({\phi}\right)-{\zeta}_{\rm eq}\right)\\
&~~~~\left(\frac{R_{\rm 3}}{a_{\rm 2}}\right)^{2}P_{\rm 2}(\cos{\psi_{\rm 2}})\frac{Gm_{\rm 3}}{r_{\rm 23}^{\rm 3}}({\bm R}_{\rm 3}-{\bm R}_{\rm 2}),\\
{\bm v}_{\rm 3}&=\frac{{\bm v}_{\rm 1}+{\bm v}_{\rm 2}}{2}.
\end{split}$$ Here, $${\zeta}_{\rm eq}=\frac{(m_{\rm 1}+m_{\rm 2})}{m_{\rm 3}}\left(\frac{R_{\rm 3}}{a_{\rm 2}}\right)^3$$ is the $\zeta$ value for an $m_{\rm 3}$ perturbed by a point mass of $m_{\rm 1}+m_{\rm 2}$ at the COM of the inner binary, and tidal lags are applied via $\zeta\left({\phi}\right)$ as in the second set. This excludes all effects other than TTs, and yields the rate of energy extraction, which we then use to calibrate $\tau$ in our Stage 2 simulations (see below), by varying $\tau$ until the energy extraction rate matches that given by this model.
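The calibration of $\tau$ described here is a one-dimensional matching problem and can be handled by a standard root finder. In the sketch below, `extraction_rate` is a hypothetical stand-in (with a Maxwell-like $\tau\omega/(1+(\tau\omega)^2)$ shape) for the rate that would in practice be measured by re-running the simulations at each trial $\tau$; the target value is likewise illustrative:

```python
from scipy.optimize import brentq

def extraction_rate(tau, omega=2.0):
    """Hypothetical stand-in for the measured TT energy-extraction rate at a
    trial tau.  A Maxwell-type response peaks at tau*omega = 1; the real
    dependence must come from the N-body runs, not this formula."""
    return tau * omega / (1.0 + (tau * omega) ** 2)

# Rate inferred from the Stage 1 simulations (illustrative number):
target = 0.3

# Solve extraction_rate(tau) = target on the rising (low-tau) branch.
tau_cal = brentq(lambda tau: extraction_rate(tau) - target, 1e-4, 0.5)
```

In practice each evaluation of the mismatch involves a full simulation, so a bracketing method with few function calls is preferable to a fine grid scan.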
Stage 2 Simulations {#s1ss2}
-------------------
While our Stage 1 simulations can provide the rate at which TTs extract energy from the inner binary, some details of the process (such as the rotation of the tertiary) are lost in the approximations. For a more complete picture of how a hierarchical triple behaves under TTs, we resort to the following model. Again, we consider the previous hierarchical coplanar triple system consisting of three stars with masses $m_{\rm 1}$, $m_{\rm 2}$ and $m_{\rm 3}$. As before, $m_{\rm 1}$ and $m_{\rm 2}$ are treated as point masses, while the tertiary is treated as an oblate ellipsoid with mean radius $R_3$ and gravity field coefficients $J_2$, $C_{22}$ and $S_{22}$, expressed in the body reference frame (${\bm I}$,${\bm J}$,${\bm K}$), where ${\bm K}$ is the axis of maximal inertia. We furthermore assume that the spin axis of the tertiary, with rotation rate $\Omega$, is also along ${\bm K}$, and that ${\bm K}$ is orthogonal to the orbital plane (which corresponds to zero obliquity). The gravitational potential of the tertiary is then given by [e.g. @2013ApJ...767..128C]: $$\begin{split}
V ({\bm r}) =& - \frac{G m_3}{r} - \frac{G m_3 R_{\rm 3}^2 J_2}{2 r^3} \\
& - \frac{3 G m_3 R_{\rm 3}^2}{r^3} \big( C_{22} \cos {2 \gamma} - S_{22} \sin {2 \gamma} \big),
\label{c1}
\end{split}$$ where $$\cos {2 \gamma} = ({\bm I} \cdot {\bm {\hat r}})^2 - ({\bm J} \cdot {\bm {\hat r}})^2
\quad \mathrm{and} \quad
\sin {2 \gamma} = - 2 ({\bm I} \cdot {\bm {\hat r}}) ({\bm J} \cdot {\bm {\hat r}}) \ ,$$ where ${\bm r}$ is a generic position with respect to the center of the tertiary, and ${\bm {\hat r}} = {\bm r} / r $ is the unit vector. We neglect terms in $(R_{\rm 3}/r)^3$ (quadrupolar approximation). We can also express ${\gamma} = \theta - f$, where $\theta$ is the rotation angle, and ${f}$ is the true longitude. The total potential energy of the system is thus given by $$U ({\bm r}_1,{\bm r}_2) = - \frac{G m_1 m_2}{| {\bm r}_2-{\bm r}_1 |} + m_1 V({\bm r}_1) + m_2 V({\bm r}_2) \ , \label{c3}$$ where ${\bm r}_i = {\bm R}_i - {\bm R}_3$, and ${\bm R}_i$ is the position of the star with mass $m_i$ in an inertial frame. Note that the quantities $a_1$ and $a_2$ (Jacobi coordinates) are not the norms of ${\bm r}_1$ and ${\bm r}_2$, which are astrocentric coordinates (see appendix \[delayed\_to\_visco\] for more details).
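A direct transcription of Eq. \[c1\], together with the $\cos 2\gamma$ and $\sin 2\gamma$ projections above, might look as follows; the unit system ($G=4\pi^2$ in AU, $M_\odot$, yr) and the sample inputs are arbitrary choices for illustration:

```python
import numpy as np

G = 4.0 * np.pi ** 2  # AU^3 M_sun^-1 yr^-2 (illustrative unit choice)

def tertiary_potential(r_vec, m3, R3, J2, C22, S22, I, Jv):
    """Quadrupole-order potential of the distorted tertiary, Eq. [c1].

    r_vec is a position relative to the tertiary's centre; I and Jv are the
    unit vectors of the body frame spanning the orbital plane."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    cos2g = (I @ rhat) ** 2 - (Jv @ rhat) ** 2   # cos(2*gamma)
    sin2g = -2.0 * (I @ rhat) * (Jv @ rhat)      # sin(2*gamma)
    return (-G * m3 / r
            - G * m3 * R3 ** 2 * J2 / (2.0 * r ** 3)
            - 3.0 * G * m3 * R3 ** 2 / r ** 3 * (C22 * cos2g - S22 * sin2g))

# With J2 = C22 = S22 = 0 this reduces to the point-mass potential:
I_ax, J_ax = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
V_pm = tertiary_potential(np.array([0.5, 0.3, 0.0]), 1.6, 0.01,
                          0.0, 0.0, 0.0, I_ax, J_ax)
```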
The equations of motion governing the orbital evolution of the system in an inertial frame are given by: $$\frac{d^2 {\bm R}_i}{dt^2} = - \frac{1}{m_i} \frac{\partial U}{\partial {\bm R}_i} = - \frac{1}{m_i} \frac{\partial U}{\partial {\bm r}_i} \ , \label{161223b}$$ $$\frac{d^2 {\bm R}_3}{dt^2} = - \frac{1}{m_3} \frac{\partial U}{\partial {\bm R}_3} = \frac{1}{m_3} \left( \frac{\partial U}{\partial {\bm r}_1} + \frac{\partial U}{\partial {\bm r}_2} \right) \ , \label{161223c}$$ with $$\begin{aligned}
\frac{\partial U}{\partial {\bm r}_i} \!&=&\! (-1)^i \frac{G m_1 m_2}{|{\bm r}_2-{\bm r}_1|^3} ({\bm r}_2-{\bm r}_1)
+ \frac{G m_i m_3}{r_i^3} {\bm r}_i \nonumber \\ &&
+ \frac{3 G m_i m_3 R_{\rm 3}^2}{2 r_i^5} \left[ J_2 + 6 \big( C_{22} \cos {2 \gamma}_i - S_{22} \sin {2 \gamma}_i \big) \right] {\bm r}_i \nonumber \\ &&
- \frac{6 G m_i m_3 R_{\rm 3}^2}{r_i^5} \big( C_{22} \sin {2 \gamma}_i + S_{22} \cos {2 \gamma}_i \big) {\bm K} \times {\bm r}_i \ .
\label{161223d} \end{aligned}$$
In an astrocentric frame they simply become $$\frac{d^2 {\bm r}_i}{dt^2} = - \left( \frac{1}{m_i} + \frac{1}{m_3} \right) \frac{\partial U}{\partial {\bm r}_i} - \frac{1}{m_3} \frac{\partial U}{\partial {\bm r}_j} \ , \label{161223e}$$ where $i = 1,2$ and $j=3-i$.
The torque acting to modify the rotation of the tertiary is given by $$I_{\rm 3} \frac{d {\Omega}}{dt} = \left( {\bm r}_1 \times \frac{\partial U}{\partial {\bm r}_1} + {\bm r}_2 \times \frac{\partial U}{\partial {\bm r}_2} \right) \cdot {\bm K} \ , \label{161223f}$$ for a tertiary of constant radius, where $I_{\rm 3}$ is the principal moment of inertia of $m_{\rm 3}$ along the axis ${\bm K}$. We hence obtain for the rotation angle $\dot \theta = {\Omega}$: $$\begin{aligned}
\frac{d^2 \theta}{dt^2} \!&=&\! -\frac{6Gm_1m_3R_{\rm 3}^2}{I_{\rm 3}r_1^3} \left[ C_{22}\sin2{\gamma}_1 + S_{22}\cos2{\gamma}_1\right] \nonumber \\ &&
-\frac{6Gm_2m_3R_{\rm 3}^2}{I_{\rm 3}r_2^3} \left[ C_{22}\sin2{\gamma}_2 + S_{22}\cos2{\gamma}_2\right] \ .
\label{161223g} \end{aligned}$$ For a tertiary with varying radius, the above equation becomes $$\begin{aligned}
\frac{d^2 \theta}{dt^2} \!&=&\! -2{\Omega}\frac{\dot R_{\rm 3}}{R_{\rm 3}}-\frac{6Gm_1m_3R_{\rm 3}^2}{I_{\rm 3}r_1^3} \left[ C_{22}\sin2{\gamma}_1 + S_{22}\cos2{\gamma}_1\right] \nonumber \\ &&
-\frac{6Gm_2m_3R_{\rm 3}^2}{I_{\rm 3}r_2^3} \left[ C_{22}\sin2{\gamma}_2 + S_{22}\cos2{\gamma}_2\right] \ .\end{aligned}$$ When this is the case, we find a discrete $R_{\rm 3}=R_{\rm 3}(t)$ via stellar evolution codes, and use cubic spline interpolation to determine both $R_{\rm 3}$ and ${\rm d}R_{\rm 3}/{\rm d}t$ at each epoch.
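With SciPy, for instance, the interpolated radius and its time derivative come from a single spline object; the radius track below is a placeholder, not an actual stellar-evolution output:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Discrete radius track from a stellar-evolution code (illustrative numbers):
# time in Myr, radius in R_sun.
t_grid = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
R_grid = np.array([10.0, 25.0, 60.0, 110.0, 140.0])

R3_of_t = CubicSpline(t_grid, R_grid)
dR3_dt = R3_of_t.derivative()   # spline for dR3/dt, from the same fit

# Both R3 and dR3/dt at an arbitrary epoch, as needed in the spin equation:
R3, Rdot = R3_of_t(2.5), dR3_dt(2.5)
```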
The tertiary is deformed under the action of its own rotation and of the tides. Therefore, the gravity field coefficients can change with time as the shape of the tertiary continuously adapts to the equilibrium figure. According to the Maxwell viscoelastic rheology, the deformation law for these coefficients is given by: $$\begin{aligned}
\label{max1}
&&J_2+\tau\dot{J}_2 = J_2^{r} + J_2^t \ ,\nonumber\\
&&C_{22}+\tau\dot{C}_{22} = C_{22}^t \ ,\\
&&S_{22}+\tau\dot{S}_{22} = S_{22}^t \ ,\nonumber\end{aligned}$$ where $$\label{j2r}
J_2^{r} = k_2 \frac{{\Omega}^2R_{\rm 3}^3}{3Gm_3}$$ is the rotational deformation, and $$\begin{aligned}
\label{max2}
&&J_2^t=k_2 \frac{m_1}{2m_3}\left(\frac{R_{\rm 3}}{r_1}\right)^3 + k_2 \frac{m_2}{2m_3}\left(\frac{R_{\rm 3}}{r_2}\right)^3 \ ,\\
&&C_{22}^t=\frac{k_2}{4}\frac{m_1}{m_3}\left(\frac{R_{\rm 3}}{r_1}\right)^3\cos2{\gamma}_1 + \frac{k_2}{4}\frac{m_2}{m_3}\left(\frac{R_{\rm 3}}{r_2}\right)^3\cos2{\gamma}_2 \ , \nonumber \\
&&S_{22}^t=-\frac{k_2}{4}\frac{m_1}{m_3}\left(\frac{R_{\rm 3}}{r_1}\right)^3\sin2{\gamma}_1 -\frac{k_2}{4}\frac{m_2}{m_3}\left(\frac{R_{\rm 3}}{r_2}\right)^3\sin2{\gamma}_2 \nonumber \ ,\end{aligned}$$ are the tidal equilibrium values for the gravity coefficients, and $\tau$ is the relaxation time of the tertiary in response to deformation. Usually, $\tau = \tau_v + \tau_e$, where $\tau_v$ and $\tau_e$ are the viscous (or fluid) and Maxwell (or elastic) relaxation times, respectively. However, for simplicity, we take $\tau_e=0$, since this term does not contribute to the tidal dissipation. This $\tau$ is the previously mentioned unknown parameter calibrated using our Stage 1 simulations. For an evolving tertiary, $\tau$ admittedly changes with time, but its variation is not pronounced enough to warrant treating it as a variable for the purposes of the simulations presented in this paper.
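Each line of Eq. \[max1\] has the form $X+\tau\dot X=X_{\rm eq}$, i.e. exponential relaxation toward the instantaneous equilibrium value. A minimal integration step, exact under the (sketch-only) assumption that the forcing is held constant across the step, is:

```python
import math

def relax_step(current, equilibrium, tau, dt):
    """One step of the Maxwell deformation law, Eq. [max1], rewritten as
    dX/dt = (X_eq - X)/tau and integrated exactly over dt, assuming the
    equilibrium forcing X_eq is constant across the step."""
    f = math.exp(-dt / tau)
    return equilibrium + (current - equilibrium) * f

# Toy usage: C22 relaxing toward a fixed tidal equilibrium value.
tau, dt = 0.5, 0.01
C22, C22_eq = 0.0, 1.0e-4
for _ in range(1000):   # 10 relaxation times elapsed
    C22 = relax_step(C22, C22_eq, tau, dt)
```

In the actual integration the equilibrium values $J_2^r+J_2^t$, $C_{22}^t$ and $S_{22}^t$ are recomputed from the instantaneous positions at every step, so the relaxation target moves with the orbit.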
Examples of Systems Undergoing TTs
==================================
To showcase the effects of TTs, as well as the capabilities of our simulations, we run two sets of simulations: one for a purely hypothetical system consisting of two WDs and a MS star, with orbital parameters designed to maximise TTs, and the other for an observed multiple star system, namely HD97131. This section provides the details of these systems, as well as our results.
Hypothetical Scenario {#s2ss1}
---------------------
Here, we consider a purely hypothetical hierarchical triple, in which the inner binary consists of a pair of tidally locked white dwarfs (WDs), and the tertiary is a MS star, tidally locked to the inner binary’s COM. The masses are given as $m_{\rm 1}=m_{\rm 2}=0.8M_{\odot}$, $m_{\rm 3}=1.6M_{\odot}$, and the orbital semimajor axes as $a_{\rm 1}=0.2$AU, $a_{\rm 2}=2$AU. The orbits are set to be coplanar and prograde, and all orbits are given to be circular. In this system, the WDs can readily be approximated as point masses, thus forming a ripe testing ground for tertiary tides. It should be noted that its circular and coplanar orbits also preclude Lidov-Kozai Resonance from this system.
There are two main reasons why we choose such a system for our demonstration. The first is that this system is realistic: with an inner orbit of 25.8 days and an outer orbit of 577.4 days, it has orbital periods similar to those of triple systems that have actually been observed. In fact, extensive studies have found many triple systems with inner and outer orbital periods close to and straddling these values (see Figure 3 of those studies). The second is that, were all three bodies point masses, this system would be stable according to conventional wisdom. Adopting established stability methods and criteria, we check this by following the dynamical evolution of the system over 4000 outer orbits using RK8 and examining the trajectories. The orbits are found to be stable, which is expected, given that the system falls within well-established stable zones.
Using our two-stage simulation method, we find that the effect of TTs is negligible when $m_{\rm 3}$ is still an MS star. This is expected, since the radius of a $1.6M_{\odot}$ MS star is relatively small (about 1$R_{\odot}$), whereas tidal phenomena typically require radii on the order of the Roche Lobes of the systems involved. However, MS stars evolve into red giants later in their lifetimes, and red giants have much larger radii. Using well-established stellar evolution algorithms [@1973MNRAS.163..279E; @1995MNRAS.274..964P; @2011ApJS..192....3P], we find that a $1.6M_{\odot}$ star stays in the red giant phase for many Myrs, during which its radius expands to more than 140 solar radii. This radius is close to, but just short of, its Roche Lobe, and therefore we need not consider the effects of Roche Lobe overflow.
![Orbital evolution of a hierarchical triple with $m_{\rm 1}=m_{\rm 2}=0.8M_{\odot}$, $m_{\rm 3}=1.6M_{\odot}$, $a_{\rm 1}=0.2$AU, $a_{\rm 2}=2$AU, $e_{\rm 1}=e_{\rm 2}=0$, and a constant tertiary radius of 100 solar radii. The inner binary orbit shrinks significantly within just a few Myrs due to TTs alone, while other orbital parameters also undergo some evolution. The rotational velocity of $m_{\rm 3}$ deviates from being perfectly tidally locked with the inner binary by a small amount, due to reasons explained in the text. \[Fig3\]](Fig3.eps)
Again adopting our two-stage simulation method, and assuming a constant radius of 100 solar radii for $m_{\rm 3}$, we retrieve a $\tau$ of 0.534 years (see Table \[sim\_params\]), and find that the inner binary orbit shrinks significantly within just a few Myrs due to TTs alone (Fig. \[Fig3\]). Throughout the inner binary orbital shrinkage, angular momentum from the inner orbit is transferred to the outer orbit, and $a_{\rm 2}$ marginally increases as a result, though not enough to shut down further shrinkage of $a_{\rm 1}$ due to TTs.
Parameter Hypothetical Scenario HD97131
------------------- ----------------------- ---------
$a_1$/AU 0.2 0.0373
$a_2$/AU 2.0 0.7955
$e_1$ 0 0
$e_2$ 0 0.191
$m_1$/$M_{\odot}$ 0.8 1.29
$m_2$/$M_{\odot}$ 0.8 0.90
$m_3$/$M_{\odot}$ 1.6 1.50
$\tau$/years 0.534 0.019
: Initial parameters for our second-stage simulations in the tertiary RGB phase for both our hypothetical scenario and HD97131.[]{data-label="sim_params"}
We also find that, after $m_{\rm 3}$ becomes a red giant, its rotation is never exactly locked to the inner binary, although the deviation is small. This is probably because, for a perfectly locked $m_{\rm 3}$, the mass elements of $m_{\rm 3}$ closer to the inner binary tend to move in the same direction as the nearer inner binary component, thereby inducing a rotation that deviates from perfect tidal locking. While this deviation is unlikely to be of much physical significance in our model, calculations pertaining to tidal effects in close triple systems, performed with models of dynamical oscillation modes under the assumption of perfect tidal locking, may require special attention in this regard.
![Prominence of tertiary tides in $a_{\rm 1}$-$a_{\rm 2}$ space for our hypothetical hierarchical triple system with $m_{\rm 1}=m_{\rm 2}=0.8M_{\odot}$, $m_{\rm 3}=1.6M_{\odot}$, all orbits being coplanar and circular. The black crosses indicate the region in which $a_{\rm 2}/a_{\rm 1}$ is too small, and the system is dynamically unstable; the blue triangles cover the region in which $a_{\rm 2}$ is too large, and TTs have no noticeable effect; the red circles represent areas where $m_{\rm 3}$ would fill its Roche lobe, in which Roche lobe overflow will compete with TTs for dominance. Only in the region with the filled red pentagons are TTs the exclusive dominating factor in merging the binary. \[Fig4\]](Fig4.eps)
But what if the inner or outer orbital separations in the triple system were larger or smaller? After all, realistic triple systems have a great range of values for $a_{\rm 1}$ and $a_{\rm 2}$. To check the separation dependence of TTs, we conduct a grid of first-stage simulations in $a_{\rm 1}$-$a_{\rm 2}$ space for the same $m_{\rm 1}=m_{\rm 2}=0.8M_{\odot}$, $m_{\rm 3}=1.6M_{\odot}$ system, and check how fast TTs can remove orbital energy from the inner binary for each set of ($a_{\rm 1}$, $a_{\rm 2}$). The conclusion is that, for different sets of ($a_{\rm 1}$, $a_{\rm 2}$), one of four different scenarios is possible: (i) if $a_{\rm 2}/a_{\rm 1}$ is too small, the system is dynamically unstable, and the orbits will evolve unpredictably whether TTs are considered or not; (ii) if $a_{\rm 2}$ is too large, TTs will have no noticeable effect; (iii) if $a_{\rm 2}$ is too small, $m_{\rm 3}$ will fill its Roche Lobe at some point during its evolution. While this does not invalidate the influence of TTs (TTs can lead to very significant orbital shrinkage of the inner binary before Roche Lobe overflow even begins, as demonstrated later in this section), it does lead to complications as to which effect dominates the evolution of the binary thereafter, which are beyond the scope of this paper; (iv) only in a triangular region bordered by these three regions are TTs the exclusive dominating factor. We plot these four regions in $a_{\rm 1}$-$a_{\rm 2}$ space for our $m_{\rm 1}=m_{\rm 2}=0.8M_{\odot}$, $m_{\rm 3}=1.6M_{\odot}$ system (Fig. \[Fig4\]). It can be seen that it is only in some of the closest hierarchical triples that TTs play a dominant role.
HD97131 {#s2ss2}
-------
How would TTs influence a realistic hierarchical triple system? To answer this question, we turn to the real-world hierarchical triple HD97131. HD97131 is a coplanar triple system [@2003AJ....125..825T] with $m_{\rm 3}$ being an MS star of spectral type F0. The inner orbit (between $m_{\rm 1}$ and $m_{\rm 2}$) is circular, while the outer orbit (which we assume to be prograde) has an eccentricity of $e_{\rm 2}=0.191$. The other relevant parameters are $m_{\rm 1}=1.29M_{\odot}$, $m_{\rm 2}=0.90M_{\odot}$, $m_{\rm 3}=1.50M_{\odot}$, $a_{\rm 1}=0.0373$AU, and $a_{\rm 2}=0.7955$AU. At such a small $a_{\rm 2}$, the Roche Lobe for $m_{\rm 3}$ is small, only 57.1 solar radii [@1983ApJ...268..368E]. Thus, $m_{\rm 3}$ will inevitably fill its Roche Lobe during its red giant phase. However, since the effects of Roche Lobe mass transfer do not become significant until after its onset, this fact will not affect our analysis of TTs, which will shrink $a_{\rm 1}$ long before this happens. Unlike our previous simulation, we account for the radius evolution of $m_{\rm 3}$ by calculating the radius as a function of time using the aforementioned stellar evolution codes [@1973MNRAS.163..279E; @1995MNRAS.274..964P; @2011ApJS..192....3P], and performing a cubic spline interpolation on the results. For the following simulations, we use the final 4.8 Myrs of the radius evolution of $m_{\rm 3}$, up to 1 Myr after Roche Lobe overflow. This means that our simulation starts with the initial orbital parameters, but with $m_{\rm 3}$ already well into its RGB phase, filling its Roche Lobe at $t=3.8$ Myrs.
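The quoted Roche lobe size follows from the fitting formula of @1983ApJ...268..368E. A sketch of that evaluation is below; the result is close to, though not exactly, 57.1 solar radii, since the quoted value depends on the precise convention adopted for the orbital separation:

```python
import math

def roche_lobe_radius(a, m_star, m_companion):
    """Eggleton (1983) fit:
    R_L / a = 0.49 q^(2/3) / (0.6 q^(2/3) + ln(1 + q^(1/3))),
    with q = m_star / m_companion."""
    q13 = (m_star / m_companion) ** (1.0 / 3.0)
    return a * 0.49 * q13 ** 2 / (0.6 * q13 ** 2 + math.log(1.0 + q13))

# HD97131's tertiary against the combined inner-binary mass:
AU_IN_RSUN = 215.032
RL = roche_lobe_radius(0.7955 * AU_IN_RSUN, 1.50, 1.29 + 0.90)
```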
To simulate HD97131, we use our two-stage method described in §2. However, since many of the assumptions underlying our first-stage simulations break down for systems with $m_{\rm 1}{\neq}m_{\rm 2}$ and $e_{\rm 2}{\neq}0$, we make the following modifications to our methods. For our first-stage simulations, we set the masses of both inner binary components to $\frac{m_{\rm 1}+m_{\rm 2}}{2}$, and place the tertiary in a circular orbit with a semimajor axis of $a_{\rm 2}(1-e_{\rm 2}^{2})$. The justification for the latter is that, assuming conservation of angular momentum, the semimajor axis of the outer orbit will evolve to that particular value if the orbit were to be tidally circularised; our results show that this is indeed the case. With these modifications, our original assumptions hold, and the first-stage simulations can be conducted. While this amounts to a first-stage simulation of a system somewhat different from the actual HD97131, the difference is not important, as the first-stage simulations are only used to calibrate the value of $\tau$ for our second-stage simulations, which recover the exact details of the orbital evolution. We find a $\tau$ of 0.019 years. For the second-stage simulations, we use the orbital parameters of the actual HD97131 (as documented in Table \[sim\_params\]).
![Projected orbital evolution for HD97131, after its tertiary becomes a red giant. It can be seen that the inner orbital separation $a_{\rm 1}$ decreases significantly due to TTs, while the evolution of the outer orbital separation $a_{\rm 2}$ is negligible. The evolution of the orbital eccentricities is evident, as is that of $\Omega$, the rotation of $m_{\rm 3}$. Note that the initial $e_{\rm 2}=0.191$ vanishes in just a few thousand years, and is consequently not visible in this plot. $\Omega$ evolves to deviate from the tidally locked value as expected, due to reasons explained in the paper. \[Fig5\]](Fig5.eps)
Tracing the orbital evolution of HD97131 during the red giant phase of $m_{\rm 3}$, we find that the outer orbit is rapidly circularised, reducing $e_{\rm 2}$ to less than $0.01$ in just a few thousand years. This is expected [e.g. @2016CeMDA.126..189C], and will therefore warrant no further attention here. Thereafter, the inner binary orbit shrinks as witnessed in our previous simulations, with some minor evolution of the other orbital parameters (Fig. \[Fig5\]). It should be noted that the GW merging timescale of the inner binary is brought down to less than half its original value during this process [see @1964PhRv..136.1224P]. The slight deviation from exact tidal locking in the rotation of $m_{\rm 3}$ is again seen.
Discussion
==========
Our results unequivocally show that TTs have a profound impact on very close hierarchical triples. While this admittedly translates into a negligible effect for stellar populations in general, our understanding of certain exotic systems may be flawed if TTs are neglected altogether.
For starters, gravitational wave mergers [e.g. @2016PhRvL.116f1102A] require very close massive objects as progenitors. It is also well known that multiplicity is enhanced in stellar objects of such masses [e.g. @2012Sci...337..444S], and that GW mergers can arise from multiple interactions in globular clusters [e.g. @2016ApJ...824L...8R]. In both of these cases, TTs will be much more prevalent than in any general stellar population, though it is difficult to be certain by how much.
Of the many possible sources [e.g. @1973ApJ...186.1007W; @2004MNRAS.350.1301H; @1984ApJ...277..355W; @2013ApJ...770L...8P] of Type Ia supernovae (SNe Ia for short), one proposed progenitor system involving three-body interactions has received a certain degree of attention in recent years (@2011ApJ...741...82T,@2013ApJ...766...64S, see also @2007ApJ...669.1298F and @2015MNRAS.454L..61D), despite the fact that it is unlikely to be one of the main sources of SN Ia production [@2013MNRAS.430.2262H]. In these systems, the existence of a tertiary drives a WD binary into a merger or collision, by means of Lidov-Kozai oscillations. While Lidov-Kozai oscillations are less diminished by large values of $a_{\rm 2}$ than are TTs, it is conceivable that such systems preferably have small values of $a_{\rm 2}$, and therefore at least some of these systems must be susceptible to TTs. Furthermore, Lidov-Kozai oscillations are only an issue when mutual inclinations between the inner and outer orbits are high ($\sim40$ degrees or more), whereas TTs work for both coplanar and highly inclined systems. Thus, SN Ia production rates from such progenitor systems will be underestimated, should TTs be left unaccounted for. Another analogous issue is the enrichment in the high-mass end of WD mass functions [e.g. @2015MNRAS.452.1637R], which cannot be explained by WD mergers via gravitational waves alone. While TTs are unlikely to have contributed significantly to this enrichment, they could potentially amplify the rate of WD mergers if WD binaries are found to have a greater degree of multiplicity than previously thought.
Last but not least, there have been attempts to explain observational phenomena with models of binary mergers occurring inside the envelopes of giant stars [e.g. @2017MNRAS.471.3456H]. Should such studies ever reach the point where a detailed simulation of a progenitor system is required, TTs must be considered, as any binary must undergo a phase of non-negligible TTs before it can end up inside the envelope of a giant star.
In summary, TTs should play a pivotal role in the orbital evolution of certain systems. This role is all the more pronounced given that a smaller $a_{\rm 1}$ can further exacerbate other mechanisms that drive the inner binary closer together (e.g. gravitational waves). The only limiting factor on their general influence on 3-body evolution is the fraction of systems that will experience significant TTs; as of yet, observational evidence of how frequently they occur is not available. An examination of observed triple systems seems to imply that only a very small fraction (${\sim}1\%$) will undergo significant TTs in the future. However, TTs and other effects that act in close triples tend to destroy their host systems, leaving them as binaries, and observational biases may limit the number of very close triples seen; it is therefore hard to say what fraction of triples would be influenced by future TTs at the time of their birth. Perhaps the best way to ascertain this would be to collect samples of hierarchical triples in which the tertiaries have already evolved beyond their red giant phases, and to compare their $a_{\rm 1}$ values against those of triples with less advanced tertiaries; such observations of post-red-giant tertiaries, however, are currently rare. Opportunities to directly observe TTs in action may present themselves from time to time, judging from the existence of systems such as HD181068 [@2011Sci...332..216D] and KOI-126 [@2011ApJ...740L..25F]. Theoretical modelling, by means of adding TTs to existing triple star evolution codes [e.g. @2017AAS...22932605T], may also shed further light on this little-understood phenomenon; such endeavours will be the subject of a future paper.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank a number of colleagues, including but not limited to Sverre Aarseth, Robert Izzard, Simon Jefferey, Philipp Podsiadlowski, Bo Wang, You Wu, Cong Yu, and Jilin Zhou, for discussion and encouragement.
This work was partly supported by the Natural Science Foundation of China (Grant Nos. 11521303, 11733008, 11390374), the Science and Technology Innovation Talent Programme of Yunnan Province (Grant No. 2017HC018), the Chinese Academy of Sciences (Grant No. KJZD-EW-M06-01), CIDMA strategic project (UID/MAT/04106/2013), ENGAGE SKA (POCI-01-0145-FEDER-022217), and PHOBOS (POCI-01-0145-FEDER-029932), funded by COMPETE 2020 and FCT, Portugal.
Abbott B. P., et al., 2016, PhRvL, 116, 061102
Correia A. C. M., Rodr[í]{}guez A., 2013, ApJ, 767, 128
Correia A. C. M., Bou[é]{} G., Laskar J., Rodr[í]{}guez A., 2014, A&A, 571, A50
Correia A. C. M., Bou[é]{} G., Laskar J., 2016, CeMDA, 126, 189
Counselman C. C., III, 1973, ApJ, 180, 307
Cuntz M., 2014, ApJ, 780, 14
Derekas A., et al., 2011, Sci, 332, 216
Dong S., Katz B., Kushnir D., Prieto J. L., 2015, MNRAS, 454, L61
Eggleton P. P., 1973, MNRAS, 163, 279
Eggleton P. P., 1983, ApJ, 268, 368
Eggleton P. P., Kiseleva L. G., Hut P., 1998, ApJ, 499, 853
Fabrycky D., Tremaine S., 2007, ApJ, 669, 1298
Feiden G. A., Chaboyer B., Dotter A., 2011, ApJ, 740, L25
Fuller J., Derekas A., Borkovits T., Huber D., Bedding T. R., Kiss L. L., 2013, MNRAS, 429, 2425
Goldreich P., Keeley D. A., 1977, ApJ, 211, 934
Goodman J., Oh S. P., 1997, ApJ, 486, 403
Hamers A. S., Pols O. R., Claeys J. S. W., Nelemans G., 2013, MNRAS, 430, 2262
Han Z., Podsiadlowski P., 2004, MNRAS, 350, 1301
Hillel S., Schreier R., Soker N., 2017, MNRAS, 471, 3456
Hut P., 1981, A&A, 99, 126
Kiseleva L. G., Eggleton P. P., Mikkola S., 1998, MNRAS, 300, 292
Kozai Y., 1962, AJ, 67, 591
Kumar P., Goodman J., 1996, ApJ, 466, 946
Lidov M. L., 1962, P&SS, 9, 719
Mardling R. A., 1995, ApJ, 450, 722
Mardling R. A., 1995, ApJ, 450, 732
Mathis S., Auclair-Desrotour P., Guenel M., Gallet F., Le Poncin-Lafitte C., 2016, A&A, 592, A33
Musielak Z. E., Cuntz M., Marshall E. A., Stuit T. D., 2005, A&A, 434, 355
Ogilvie G. I., 2014, ARA&A, 52, 171
Pakmor R., Kromer M., Taubenberger S., Springel V., 2013, ApJ, 770, L8
Paxton B., Bildsten L., Dotter A., Herwig F., Lesaffre P., Timmes F., 2011, ApJS, 192, 3
Penev K., Sasselov D., Robinson F., Demarque P., 2007, ApJ, 655, 1166
Penev K., Sasselov D., Robinson F., Demarque P., 2009, ApJ, 704, 930
Peters P. C., 1964, PhRv, 136, 1224
Pols O. R., Tout C. A., Eggleton P. P., Han Z., 1995, MNRAS, 274, 964
Press W. H., Teukolsky S. A., 1977, ApJ, 213, 183
Ragazzo C., Ruiz L. S., 2017, CeMDA, 128, 19
Rebassa-Mansergas A., Rybicka M., Liu X.-W., Han Z., Garc[í]{}a-Berro E., 2015, MNRAS, 452, 1637
Rodriguez C. L., Haster C.-J., Chatterjee S., Kalogera V., Rasio F. A., 2016, ApJ, 824, L8
Sana H., et al., 2012, Sci, 337, 444
Seidov Z. F., Skvirsky P. I., 2000, arXiv:astro-ph/0003064
Shappee B. J., Thompson T. A., 2013, ApJ, 766, 64
Thompson T. A., 2011, ApJ, 741, 82
Tokovinin A. A., 1997, A&AS, 124, 75
Tokovinin A., 2010, yCat, 73890925
Toonen S., Hamers A., Portegies Zwart S., 2017, AAS, 229, 326.05
Torres G., Guenther E. W., Marschall L. A., Neuh[ä]{}user R., Latham D. W., Stefanik R. P., 2003, AJ, 125, 825
Tuchman Y., Sack N., Barkat Z., 1978, ApJ, 219, 183
Webbink R. F., 1984, ApJ, 277, 355
Whelan J., Iben I., Jr., 1973, ApJ, 186, 1007
Yip K. L. S., Leung P. T., 2017, MNRAS, 472, 4965
Zahn J.-P., 1977, A&A, 57, 383
Zahn J.-P., 2005, ASPC, 333, 4
Theoretical Calculations of TTs under Ideal Conditions {#app_other}
======================================================
For a hierarchical triple undergoing TTs, if the internal dissipation of $m_{\rm 3}$ were infinitely efficient (which it is not), all the energy stored in the difference in the self-gravitational potential energy of $m_{\rm 3}$ between states $\alpha$ and $\beta$ (see Fig. \[Fig1\]) would be effectively dissipated. The amount of energy extracted from the inner binary orbit between $m_{\rm 1}$ and $m_{\rm 2}$, during the time it takes for the system to evolve from state $\alpha$ to state $\beta$, must then equal the self-gravitational potential energy difference of $m_{\rm 3}$. As the system evolves back and forth twice between states $\alpha$ and $\beta$ for every inner orbit, the rate of energy extraction from the inner orbit must equal four times this energy difference per inner orbital period.
How, then, should one calculate the difference in self gravitational potential energy between the third body at states $\alpha$ and $\beta$? A spherical, perfectly homogeneous elastic body under the influence of a tidal force will assume the geometric shape of an ellipsoid. The self-gravitational potential energy of a homogeneous triaxial ellipsoid can be calculated from the equations given in @2000astro.ph..3064S, repeated below: $$\begin{split}
E_{\rm P}&={\frac{3}{10}}GM^{2}{\int_{0}^{+\infty}}{\frac{{\rm d}s}{Q_{\rm s}}},\\
Q_{\rm s}&=\sqrt{({a_{\rm x}}^{2}+s)({a_{\rm y}}^{2}+s)({a_{\rm z}}^{2}+s)}.
\end{split}
\label{q1}$$ Here, $E_{\rm P}$ is the self-gravitational potential energy, $a_{\rm x}$, $a_{\rm y}$, and $a_{\rm z}$ are the semi-axes of the ellipsoid along the $x$, $y$ and $z$ axes, respectively, and $M$ is the mass of the body. Thus, the potential energy difference between states $\alpha$ and $\beta$ is simply $${\Delta}E_{\rm P}={\frac{3}{10}}GM^{2}{\int_{0}^{+\infty}}({\frac{1}{Q_{\rm \alpha}}}-{\frac{1}{Q_{\rm \beta}}}){\rm d}s.
\label{q2}$$ However, this treatment is difficult to adapt to an inhomogeneous ellipsoid, which we will need to address later in this section. We therefore adopt a different approach, as follows.
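Before moving on, Eq. \[q1\] can be sanity-checked numerically (a sketch of our own, not part of the original treatment; the function names `E_P` and `Q_s` are ours): setting $a_{\rm x}=a_{\rm y}=a_{\rm z}=a$ should recover the familiar magnitude ${\frac{3}{5}}GM^{2}/a$ for a homogeneous sphere.

```python
import math

def Q_s(s, ax, ay, az):
    # Q_s from Eq. (q1)
    return math.sqrt((ax**2 + s) * (ay**2 + s) * (az**2 + s))

def E_P(ax, ay, az, G=1.0, M=1.0, n=4000):
    # (3/10) G M^2 * integral_0^inf ds / Q_s.  The substitution
    # s = (1 - v^2)/v^2 maps [0, inf) onto (0, 1] and removes the slow
    # s^(-3/2) tail, leaving a smooth integrand for the midpoint rule.
    def g(v):
        s = (1.0 - v * v) / (v * v)
        return 2.0 / (v**3 * Q_s(s, ax, ay, az))
    h = 1.0 / n
    return 0.3 * G * M * M * sum(g((i + 0.5) * h) for i in range(n)) * h

# homogeneous sphere of radius a = 1: the integral equals 2/a, so E_P = 3/5
print(E_P(1.0, 1.0, 1.0))  # ≈ 0.6
```

For a sphere the transformed integrand is constant, so the quadrature is essentially exact; for a genuine triaxial ellipsoid the same routine converges rapidly.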
Since the geometrical distortion is small in our case, the ellipsoid resulting from the aforementioned homogeneous spherical body is still roughly spherical in shape. Adopting a spherical coordinate system ($r$,$\psi$,$\xi$) where $\psi=0$ along the direction pointing towards the COM of the inner binary, the difference of the self-gravitational potential energy, between the initial sphere and that of the ellipsoid resulting from tidal influence, is simply the potential energy difference due to the change in radius at every ($\psi$,$\xi$), integrated over the surface of the sphere. Further assuming that the density difference of the body before and after applying the tidal force is negligible, the gravitational potential energy difference at ($\psi$,$\xi$) is equal to $(|{\Delta}R(\psi,\xi)|{\rho}g){\times}({\frac{1}{2}}|{\Delta}R(\psi,\xi)|)$, and the integral of this over the entire surface of the sphere is $$\begin{split}
E_{\rm P,ell}-E_{\rm P,sph}&=\int_{0}^{2\pi}\int_{0}^{\pi}(|{\Delta}R(\psi,\xi)|{\rho}g)\\
&({\frac{1}{2}}|{\Delta}R(\psi,\xi)|)R_{\rm 3}^{2}\sin{\psi}{\rm d}{\psi}{\rm d}{\xi}.
\end{split}
\label{deltap1}$$ Here, $E_{\rm P,ell}$ and $E_{\rm P,sph}$ are the potential energies of the body when it is a homogeneous ellipsoid and a homogeneous sphere, respectively; $|{\Delta}R(\psi,\xi)|$ is the absolute value of the change in radius at ($\psi$,$\xi$), $(|{\Delta}R(\psi,\xi)|{\rho}g)$ is the weight per unit area of the mass displaced at ($\psi$,$\xi$) due to the change in radius, $({\frac{1}{2}}|{\Delta}R(\psi,\xi)|)$ is the displacement of the centre of mass of the displaced mass, $R_{\rm 3}$ is the radius of the original sphere, and $R_{3}^{2}\sin{\psi}$ is the Jacobian determinant for spherical integration. The general expression for ${\Delta}R(\psi,\xi)$ can be derived from $$\begin{split}
{\Delta}R(\psi,\xi)&=R(\psi,\xi)-R_{\rm 3},\\
R(\psi,\xi)&=R_{\rm 3}[1+\frac{5}{3}k_{\rm 2}{\zeta}P_{\rm 2}(\cos{\psi})],
\end{split}
\label{deltar1}$$ where $k_{\rm 2}$ is the Love number, which is equal to $\frac{3}{2}$ for a homogeneous fluid body, ${\zeta}$ is a parameter reflecting the magnitude of the tidal effects, the value of which we will deal with later in this section, and $P_{\rm 2}(\cos{\psi})$ is a Legendre polynomial, equal to ${\frac{1}{2}}(3{\cos}^2{\psi}-1)$. Since all stars are fluid bodies, we set $k_{\rm 2}=\frac{3}{2}$ for a homogeneous body, and since $R({\psi},{\xi})$ does not explicitly contain $\xi$, Eq. \[deltar1\] thus becomes $$\begin{split}
{\Delta}R({\psi},{\xi})&={\Delta}R(\psi)\\
&={\frac{5}{4}}R_{\rm 3}{\zeta}(3{\cos}^2{\psi}-1),
\end{split}
\label{delta_R}$$ and, by extension, Eq. \[deltap1\] can be calculated, by substituting the expressions for ${\rho}$ and $g$, as well as Eq. \[delta\_R\], to be $$\begin{split}
E_{\rm P,ell}-E_{\rm P,sph}&=\int_{0}^{2\pi}\int_{0}^{\pi}{\frac{1}{2}}{\rho}g({\Delta}R(\psi))^2{R_{3}}^{2}\sin{\psi}{\rm d}{\psi}{\rm d}{\xi}\\
&=\int_{0}^{2\pi}\int_{0}^{\pi}{\frac{1}{2}}(\frac{m_{\rm 3}}{\frac{4}{3}{\pi}{R_{3}}^{3}})(\frac{Gm_{\rm 3}}{{R_{3}}^{2}})\\
&~~~~({\frac{5}{4}}R_{3}{\zeta}(3{\cos}^2{\psi}-1))^2{R_{3}}^{2}\sin{\psi}{\rm d}{\psi}{\rm d}{\xi}\\
&={\frac{75}{128{\pi}}}{\frac{Gm_{\rm 3}^2{\zeta}^2}{R_{3}}}\int_{0}^{2\pi}\int_{0}^{\pi}(3{\cos}^2{\psi}-1)^2\\
&~~~~\sin{\psi}{\rm d}{\psi}{\rm d}{\xi}.
\end{split}$$ Since $$\begin{split}
&~~~~{\int}(3{\cos}^{2}x-1)^2{\sin}x{\rm d}x\\
&=-\frac{9}{5}{\cos}^{5}x+2{\cos}^{3}x-{\cos}x,
\end{split}$$ the previous equation becomes $$\begin{split}
E_{\rm P,ell}-E_{\rm P,sph}&={\frac{75}{128{\pi}}}{\frac{Gm_{\rm 3}^2{\zeta}^2}{R_{3}}}\int_{0}^{2\pi}\int_{0}^{\pi}(3{\cos}^2{\psi}-1)^2\\
&~~~~\sin{\psi}{\rm d}{\psi}{\rm d}{\xi}\\
&={\frac{75}{128{\pi}}}{\frac{Gm_{\rm 3}^2{\zeta}^2}{R_{3}}}({2\pi})\frac{8}{5}\\
&={\frac{15}{8}}{\frac{Gm_{\rm 3}^2{\zeta}^2}{R_{3}}},
\end{split}
\label{deltap2}$$ where $G$ is the gravitational constant.
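The angular integral and the resulting prefactor in Eq. \[deltap2\] are easily confirmed numerically (a check of our own; `simpson` is a generic composite Simpson's rule):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

# angular integral over psi: matches the antiderivative evaluation, 8/5
I_psi = simpson(lambda p: (3.0 * math.cos(p)**2 - 1.0)**2 * math.sin(p),
                0.0, math.pi)

# 75/(128 pi), times 2 pi from the trivial xi integral, times I_psi gives 15/8
prefac = 75.0 / (128.0 * math.pi) * (2.0 * math.pi) * I_psi
print(I_psi, prefac)  # ≈ 1.6, 1.875
```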
For a two-body tide, where one object experiences tidal force from the other, ${\zeta}$ is given by $${\zeta}=\frac{m_{\rm per}}{M}\left(\frac{R_{\rm 0}}{a}\right)^3
\label{zeta1}$$ where $m_{\rm per}$ is the mass of the perturbing body, $M$ is the mass of the perturbed body, $R_{\rm 0}$ is the radius of the perturbed body in the absence of tidal forces, and $a$ is the distance between the two bodies. It follows that, in our situation, when $m=m_{\rm 1}=m_{\rm 2}$, for states $\alpha$ and $\beta$ (see Figure \[Fig1\] for definition), $$\begin{split}
{\zeta}_{\rm \alpha}&=\left(\frac{a_{\rm 2}^3}{(a_{\rm 2}+\frac{1}{2}a_{\rm 1})^3}+\frac{a_{\rm 2}^3}{(a_{\rm 2}-\frac{1}{2}a_{\rm 1})^3}\right)\frac{m}{m_{\rm 3}}\left(\frac{R_{\rm 3}}{a_{\rm 2}}\right)^3\\
{\zeta}_{\rm \beta}&=\left[3\left(\frac{a_{\rm 2}}{\sqrt{a_{\rm 2}^2+(\frac{1}{2}a_{\rm 1})^2}}\right)^5-\left(\frac{a_{\rm 2}}{\sqrt{a_{\rm 2}^2+(\frac{1}{2}a_{\rm 1})^2}}\right)^3\right]\frac{m}{m_{\rm 3}}\left(\frac{R_{\rm 3}}{a_{\rm 2}}\right)^3,
\end{split}
\label{modzeta1}$$ where $a_{\rm 1}$, $a_{\rm 2}$, $m_{\rm 1}$, $m$ and $m_{\rm 3}$ have already been defined in the text. Setting $u=({a_{\rm 1}}/{a_{\rm 2}})^2$, the above equations are strictly equivalent to $$\begin{split}
{\zeta}_{\rm \alpha}&=\frac{m}{m_{\rm 3}}\left(\frac{R_{\rm 3}}{a_{\rm 2}}\right)^3\frac{2+\frac{3}{2}u}{\left( 1-\frac{1}{4}u\right)^3}\\
{\zeta}_{\rm \beta}&=\frac{m}{m_{\rm 3}}\left(\frac{R_{\rm 3}}{a_{\rm 2}}\right)^3\left[3\left(\frac{1}{1+\frac{1}{4}u}\right)^{5/2}-\left(\frac{1}{1+\frac{1}{4}u}\right)^{3/2}\right].
\end{split}$$ When $a_{\rm 1}\ll a_{\rm 2}$, it follows that $u$ is small, and therefore terms of order $u^2$ and higher can be omitted: $$\begin{split}
\frac{2+\frac{3}{2}u}{\left( 1-\frac{1}{4}u\right)^3}&=\left(2+\frac{3}{2}u\right)\left(1+\left(\frac{3}{4}u-\frac{3}{16}u^2+\frac{1}{64}u^3\right)...\right)\\
&{\sim}\left(2+\frac{3}{2}u\right)\left(1+\frac{3}{4}u\right)\\
&{\sim}\left(2+3u\right)\\
\left(\frac{1}{1+\frac{1}{4}u}\right)^{3/2}&=\left(1-\frac{1}{4}u+\frac{1}{16}u^2...\right)^{3/2}\\
&{\sim}\left(1-\frac{1}{4}u\right)^{3/2}\\
&=1^{3/2}-(3/2)\frac{1}{4}u+\frac{1}{2}(3/4)\frac{1}{16}u^2...\\
&{\sim}\left(1-\frac{3}{8}u\right)\\
\left(\frac{1}{1+\frac{1}{4}u}\right)^{5/2}&{\sim}\left(1-\frac{1}{4}u\right)^{5/2}\\
&{\sim}\left(1-\frac{5}{8}u\right).
\end{split}$$ where “...” in each case denotes the higher-order terms of the Taylor expansion. Substituting these approximations, we arrive at $$\begin{split}
{\zeta}_{\rm \alpha}&{\sim}\frac{2m}{m_{\rm 3}}\left(\frac{R_{\rm 3}}{a_{\rm 2}}\right)^3\left(1+\frac{3}{2}\frac{a_{\rm 1}^2}{a_{\rm 2}^2}\right)\\
{\zeta}_{\rm \beta}&{\sim}\frac{2m}{m_{\rm 3}}\left(\frac{R_{\rm 3}}{a_{\rm 2}}\right)^3\left(1-\frac{3}{4}\frac{a_{\rm 1}^2}{a_{\rm 2}^2}\right).
\end{split}
\label{modzeta2}$$ Since ${\frac{5}{2}}{\zeta}R_{\rm 3}$ is the magnitude of the displacement at the surface of the third body at the points nearest to and furthest from the binary system, and ${\zeta}{\ll}1$, the third body remains approximately spherical despite the applied tidal forces.
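The first-order expansions in Eq. \[modzeta2\] can be checked against the exact expressions above (our own numerical sketch; the residuals should scale as $u^{2}$, with coefficients ${\frac{15}{8}}$ and ${\frac{45}{64}}$ respectively):

```python
# exact zeta_alpha, zeta_beta with the common factor (m/m3)(R3/a2)^3 divided out
def za_exact(u):
    return (2.0 + 1.5 * u) / (1.0 - 0.25 * u)**3

def zb_exact(u):
    return 3.0 * (1.0 + 0.25 * u)**-2.5 - (1.0 + 0.25 * u)**-1.5

for u in (1e-2, 1e-3, 1e-4):
    za_approx = 2.0 * (1.0 + 1.5 * u)   # Eq. (modzeta2), state alpha
    zb_approx = 2.0 * (1.0 - 0.75 * u)  # Eq. (modzeta2), state beta
    assert abs(za_exact(u) - za_approx) < 3.0 * u**2
    assert abs(zb_exact(u) - zb_approx) < 2.0 * u**2
```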
Thus, combining Eqs. \[deltap2\] and \[modzeta2\], and comparing with Eq. \[q2\], we arrive at $$\begin{split}
{\Delta}E_{\rm P}&=E_{\rm P,ell,{\alpha}}-E_{\rm P,ell,{\beta}}\\
&=(E_{\rm P,ell,{\alpha}}-E_{\rm P,sph})-(E_{\rm P,ell,{\beta}}-E_{\rm P,sph})\\
&{\sim}{\frac{135}{4}}\frac{Gm^{2}R_{\rm 3}^{5}a_{\rm 1}^{2}}{a_{\rm 2}^{8}},
\end{split}
\label{delta_p_homo}$$ which is the difference in self-gravitational potential energy for the third body at equilibrium tide between states $\alpha$ and $\beta$ for a homogeneous body. Note that, interestingly, it is independent of the mass (or density) of the third body, and is a function of its radius only. This somewhat counterintuitive result follows from our assumption that all tidal distortions are small: a smaller mass results in a larger geometric distortion, and below a certain mass threshold (when ${\zeta}{\sim}1$) the small-distortion assumption simply ceases to hold.
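Eq. \[delta\_p\_homo\] can be cross-checked by combining Eq. \[deltap2\] with the exact $\zeta$ expressions (a numerical sketch of our own; the parameter values are arbitrary but respect $a_{\rm 1}\ll a_{\rm 2}$ and $\zeta\ll 1$):

```python
# arbitrary test parameters (chosen only so that a1 << a2 and zeta << 1)
G, m, m3, R3, a1, a2 = 1.0, 1.0, 1.0, 0.01, 0.02, 1.0
u = (a1 / a2)**2
A = (m / m3) * (R3 / a2)**3  # common factor of zeta_alpha and zeta_beta

zeta_a = A * (2.0 + 1.5 * u) / (1.0 - 0.25 * u)**3
zeta_b = A * (3.0 * (1.0 + 0.25 * u)**-2.5 - (1.0 + 0.25 * u)**-1.5)

# Eq. (deltap2) evaluated at both states, versus the closed form Eq. (delta_p_homo)
dE_exact = 15.0 / 8.0 * G * m3**2 * (zeta_a**2 - zeta_b**2) / R3
dE_approx = 135.0 / 4.0 * G * m**2 * R3**5 * a1**2 / a2**8

assert abs(dE_exact / dE_approx - 1.0) < 1e-2
```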
It should be noted that, in the derivations above, the third body is assumed to be homogeneous. When the mass of the third body is not homogeneously, but only spherically symmetrically, distributed, as is the case for many models of celestial bodies, the equation corresponding to Eq. \[deltap1\] is $$\begin{split}
E_{\rm P,ell}-E_{\rm P,sph}&=\int_{0}^{2\pi}\int_{0}^{\pi}\int_{0}^{R_{\rm 3}}|{\Delta}R(\psi,r)|\left({\rho}(r)-{\rho}(r+{\rm d}r)\right)\\
&~~~~~~g(r)({\frac{1}{2}}|{\Delta}R(\psi,r)|)r^{2}\sin{\psi}{\rm d}{\psi}{\rm d}{\xi}\\
\end{split}
\label{nonhomo1}$$ where ${\Delta}R(\psi,r)$ is the vertical displacement of a point mass at ($\psi,r$), and the differential ${\rm d}r$, seemingly missing at the end of the expression, is contained implicitly in the ${\rho}(r)-{\rho}(r+{\rm d}r)$ factor. For example, for a body composed of an extremely compact central core and an outer envelope, with the core accounting for 60% of its mass and the remaining mass being distributed in the envelope according to ${\rho}(r)=kr^{-1.5}$ ($k$ is a constant), $$\begin{split}
E_{\rm P,ell}-E_{\rm P,sph}&=\int_{0}^{2\pi}\int_{0}^{\pi}\int_{0}^{R_{\rm 3}}|{\Delta}R(\psi,r)|\left({\rho}(r)-{\rho}(r+{\rm d}r)\right)\\
&~~~~~~g(r)({\frac{1}{2}}|{\Delta}R(\psi,r)|)r^{2}\sin{\psi}{\rm d}{\psi}{\rm d}{\xi}\\
&=\int_{0}^{2\pi}\int_{0}^{\pi}\int_{R_{\rm core}}^{R_{\rm 3}}\frac{1}{2}{\Delta}R^{2}(\psi,r)\\
&~~~~\left({\rho}(r)-{\rho}(r+{\rm d}r)\right)g(r)r^{2}\sin{\psi}{\rm d}{\psi}{\rm d}{\xi}\\
&~~~~+\int_{0}^{2\pi}\int_{0}^{\pi}\int_{0}^{R_{\rm core}}\frac{1}{2}{\Delta}R^{2}(\psi,r)\\
&~~~~\left({\rho}(r)-{\rho}(r+{\rm d}r)\right)g(r)r^{2}\sin{\psi}{\rm d}{\psi}{\rm d}{\xi}.
\end{split}$$ For a very small core, in the limit where $R_{\rm core}$ goes to zero, the second term vanishes. To evaluate the first term, one should note that $$\begin{split}
{\Delta}R(\psi,r)&=\frac{5}{4}r{\zeta}(r)(3{\cos}^2{\psi}-1),\\
{\zeta}(r)&={\zeta}\frac{m_{\rm 3}}{m_{\rm 3}(<r)}\left(\frac{r}{R_3}\right)^3,\\
g(r)&=G\frac{m_{\rm 3}(<r)}{r^2}
\end{split}$$ where ${\zeta}(r)$ is the value of ${\zeta}$ corresponding to a radius of $r$ instead of the surface (i.e. $r=R_3$) of the perturbed body, and $m_{\rm 3}(<r)$ is the total mass included within a sphere of radius $r$ centred at the centre of the perturbed body. It should also be noted that $$\begin{split}
\left({\rho}(r)-{\rho}(r+{\rm d}r)\right)&=kr^{-1.5}-k(r+{\rm d}r)^{-1.5}\\
&=kr^{-1.5}-kr^{-1.5}\left(1+\frac{{\rm d}r}{r}\right)^{-1.5}\\
&=kr^{-1.5}-kr^{-1.5}\left(1-\frac{3}{2}\frac{{\rm d}r}{r}\right)\\
&=\frac{3}{2}kr^{-2.5}{\rm d}r,
\end{split}$$ and consequently $$\begin{split}
E_{\rm P,ell}-E_{\rm P,sph}&=\int_{0}^{2\pi}\int_{0}^{\pi}\int_{0}^{R_{\rm 3}}|{\Delta}R(\psi,r)|\left({\rho}(r)-{\rho}(r+{\rm d}r)\right)\\
&~~~~~~g(r)({\frac{1}{2}}|{\Delta}R(\psi,r)|)r^{2}\sin{\psi}{\rm d}{\psi}{\rm d}{\xi}\\
&=\lim_{R_{\rm core} \to 0}\int_{0}^{2\pi}\int_{0}^{\pi}\int_{R_{\rm core}}^{R_{\rm 3}}\frac{1}{2}{\Delta}R^{2}(\psi,r)\\
&~~~~\left({\rho}(r)-{\rho}(r+{\rm d}r)\right)g(r)r^{2}\sin{\psi}{\rm d}{\psi}{\rm d}{\xi}\\
&=\lim_{R_{\rm core} \to 0}\int_{0}^{2\pi}\int_{0}^{\pi}\int_{R_{\rm core}}^{R_{\rm 3}}\frac{1}{2}\frac{25}{16}r^2{\zeta}^2\left(\frac{m_{\rm 3}}{m_{\rm 3}(<r)}\right)^2\\
&~~~~\left(\frac{r}{R_3}\right)^6(3{\cos}^2{\psi}-1)^2\left(\frac{3}{2}kr^{-2.5}{\rm d}r\right)G\frac{m_{\rm 3}(<r)}{r^2}\\
&~~~~r^2\sin{\psi}{\rm d}{\psi}{\rm d}{\xi}\\
&=Gk\frac{75}{64}\lim_{R_{\rm core} \to 0}\int_{0}^{2\pi}\int_{0}^{\pi}\int_{R_{\rm core}}^{R_{\rm 3}}r^{-0.5}{\zeta}^2{m_{\rm 3}^2}\frac{1}{m_{\rm 3}(<r)}\\
&~~~~\left(\frac{r}{R_3}\right)^6(3{\cos}^2{\psi}-1)^2\sin{\psi}{\rm d}r{\rm d}{\psi}{\rm d}{\xi}\\
&=Gk\frac{75}{64}{\times}\left(\int_{0}^{2\pi}{\rm d}{\xi}\right){\times}\left(\int_{0}^{\pi}(3{\cos}^2{\psi}-1)^2\sin{\psi}{\rm d}{\psi}\right)\\
&~~~~{\times}\left(\frac{{\zeta}^2m_{\rm 3}^2}{R_3^6}\right){\times}\left(\lim_{R_{\rm core} \to 0}\int_{R_{\rm core}}^{R_{\rm 3}}\frac{r^{5.5}}{m_{\rm 3}(<r)}{\rm d}r\right)\\
&=Gk\frac{75}{64}{\times}\left(2\pi\right){\times}\left(\frac{8}{5}\right){\times}\left(\frac{{\zeta}^2m_{\rm 3}^2}{R_3^6}\right)\\
&~~~~{\times}\left(\lim_{R_{\rm core} \to 0}\int_{R_{\rm core}}^{R_{\rm 3}}\frac{r^{5.5}}{m_{\rm 3}(<r)}{\rm d}r\right)\\
&=\frac{15{\pi}}{4}k{\frac{Gm_{\rm 3}^{2}{\zeta}^2}{R_{\rm 3}^{6}}}\lim_{R_{\rm core} \to 0}\int_{R_{\rm core}}^{R_{\rm 3}}\frac{r^{5.5}}{m_{\rm 3}(<r)}{\rm d}r.
\end{split}
\label{nonhomo2}$$ To proceed from here, we must ascertain the value of $m_{\rm 3}(<r)$ as a function of $r$, which is $$\begin{split}
m_{\rm 3}(<r)&=m_{\rm 3,core}+m_{\rm 3,envelope}(<r)\\
&=m_{\rm 3,core}+\int_{0}^{2\pi}\int_{0}^{\pi}\int_{R_{\rm core}}^{r}{\rho}(\tilde{r})\tilde{r}^2\sin{\psi}{\rm d}\tilde{r}{\rm d}{\psi}{\rm d}{\xi},
\end{split}$$ which, under the small $R_{\rm core}$ limit, can be calculated to be $$\begin{split}
m_{\rm 3}(<r)&=m_{\rm 3,core}+\lim_{R_{\rm core} \to 0}\int_{0}^{2\pi}\int_{0}^{\pi}\int_{R_{\rm core}}^{r}\\
&~~~~~~{\rho}(\tilde{r})\tilde{r}^2\sin{\psi}{\rm d}\tilde{r}{\rm d}{\psi}{\rm d}{\xi}\\
&=\frac{3}{5}m_{\rm 3}+\lim_{R_{\rm core} \to 0}\int_{0}^{2\pi}\int_{0}^{\pi}\int_{R_{\rm core}}^{r}\\
&~~~~~~\left(k\tilde{r}^{-1.5}\right)\tilde{r}^2\sin{\psi}{\rm d}\tilde{r}{\rm d}{\psi}{\rm d}{\xi}\\
&=\frac{3}{5}m_{\rm 3}+4{\pi}k\lim_{R_{\rm core} \to 0}\int_{R_{\rm core}}^{r}\tilde{r}^{0.5}{\rm d}\tilde{r}\\
&=\frac{3}{5}m_{\rm 3}+\frac{8\pi}{3}kr^{1.5}.
\end{split}
\label{nonhomo3}$$ Substituting Eq. \[nonhomo3\] back into Eq. \[nonhomo2\], $$E_{\rm P,ell}-E_{\rm P,sph}=\frac{15{\pi}}{4}k{\frac{Gm_{\rm 3}^{2}{\zeta}^2}{R_{\rm 3}^{6}}}\int_{0}^{R_{\rm 3}}\frac{r^{5.5}}{(3/5)m_{\rm 3}+(8\pi/3)kr^{1.5}}{\rm d}r.$$ To calculate this integral, we carry out a numerical integration as follows. Setting $G=1$, $m_{\rm 3}=1$, $\zeta=1$, and $R_3=1$, whereupon $k=\frac{3}{20\pi}$, $$\begin{split}
E_{\rm P,ell}-E_{\rm P,sph}&=\frac{15{\pi}}{4}k{\frac{Gm_{\rm 3}^{2}{\zeta}^2}{R_{\rm 3}^{6}}}\int_{0}^{R_{\rm 3}}\frac{r^{5.5}}{(3/5)m_{\rm 3}+(8\pi/3)kr^{1.5}}{\rm d}r\\
&=\frac{15{\pi}}{4}\frac{3}{20{\pi}}\int_{0}^{1}\frac{r^{5.5}}{(3/5)+(2/5)r^{1.5}}{\rm d}r\\
&=0.5625{\times}\int_{0}^{1}\frac{x^{5.5}}{0.6+0.4x^{1.5}}{\rm d}x\\
&=0.0940,
\end{split}$$ and hence $$\begin{split}
E_{\rm P,ell}-E_{\rm P,sph}&=\frac{15{\pi}}{4}k{\frac{Gm_{\rm 3}^{2}{\zeta}^2}{R_{\rm 3}^{6}}}\int_{0}^{R_{\rm 3}}\frac{r^{5.5}}{\frac{3}{5}m_{\rm 3}+\frac{8\pi}{3}kr^{1.5}}{\rm d}r\\
&\sim\frac{1}{20}\left({\frac{15}{8}}{\frac{Gm_{\rm 3}^2{\zeta}^2}{R_{\rm 3}}}\right).
\end{split}
\label{delta_p_layered}$$
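The quadrature quoted above is straightforward to reproduce (our own script; any standard integrator gives the same value):

```python
def simpson(f, a, b, n=20000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

# dimensionless form with G = m3 = zeta = R3 = 1 and k = 3/(20 pi)
val = 0.5625 * simpson(lambda x: x**5.5 / (0.6 + 0.4 * x**1.5), 0.0, 1.0)

assert abs(val - 0.0940) < 2e-4               # the quoted 0.0940
assert abs(val / (15.0 / 8.0) - 0.05) < 1e-3  # about 1/20 of the homogeneous result
```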
The density distribution used in the example above is typical of a red giant. Admittedly, real red giant internal density distributions are much more complicated [e.g. @1978ApJ...219..183T], but since the final result is not very sensitive to the power-law index of $r$, this is presumably not too bad an approximation. The net effect is that using a realistic density distribution for $m_{\rm 3}$ decreases the self-gravitational potential energy difference by about an order of magnitude. Again, as with a homogeneous body, the total mass of the receiving body is irrelevant to the result as long as ${\zeta}{\ll}1$, since the $m_{\rm 3}^{2}$ term is cancelled out by the ${\zeta}^2$ term.
It should be noted that these calculations establish only a very generous upper limit on the energy extraction rate of TTs, and should only be regarded as an order-of-magnitude estimate of how close a triple system needs to be for TTs to be non-negligible; for an exact calculation, please refer to our Stage 1 simulations in §2.
Tidal potential for TTs {#delayed_to_visco}
=======================
The gravitational potential of the tertiary is given by expression (\[c1\]). Assuming that the shape of the tertiary only departs from a perfect sphere due to the tides raised by $m_1$ and $m_2$, the gravity field coefficients are solely given by the equilibrium tide contribution, i.e., $J_2 = J_2^t$, $C_{22} = C_{22}^t$, and $S_{22} = S_{22}^t$ (Eq.(\[max2\])). The gravitational potential for coplanar orbits is thus given by $$\label{gravP}
V({\bm r}) = - \frac{G m_3}{r} + V_1({\bm r}) + V_2({\bm r}) \ ,$$ where $V_i ({\bm r})$ is the partial contribution of the mass $m_i$ $$\begin{split}
V_i ({\bm r})
&=
- \frac{G m_3 R_{\rm 3}^2}{2 r^3}\left[k_2 \frac{m_i}{2m_3}\left(\frac{R_{\rm 3}}{r_i}\right)^3\right]\\
&~~~~ - \frac{3 G m_3 R_{\rm 3}^2}{r^3} \left[\frac{k_2}{4}\frac{m_i}{m_3}\left(\frac{R_{\rm 3}}{r_i}\right)^3\cos2{\gamma}_i\right] \cos {2 \gamma}\\
&~~~~ + \frac{3 G m_3 R_{\rm 3}^2}{r^3} \left[-\frac{k_2}{4}\frac{m_i}{m_3}\left(\frac{R_{\rm 3}}{r_i}\right)^3\sin2{\gamma}_i\right] \sin {2 \gamma}\\
&= - \frac{G m_3}{r}\left[k_2 \, \zeta_i \left(\frac{R_{\rm 3}}{r}\right)^2 P_{\rm 2}(\cos({\gamma}_i-\gamma)) \right] ,
\end{split}
\label{appB1}$$ with $$\label{rphii}
\zeta_i =\frac{m_i}{m_3}\left(\frac{R_{\rm 3}}{r_i}\right)^3 \ .$$ For ${\bm r}$ in the orbital plane, we additionally have $$\label{cospsii}
\cos ({\psi_i}-\psi) = \cos ({\gamma}_i-\gamma) \ ,$$ and we can rewrite the gravitational potential (\[gravP\]) as $$V({\bm r})=-\frac{Gm_{\rm 3}}{r} \left[1+k_{\rm 2} \, \zeta \left(\frac{R_{\rm 3}}{r}\right)^{2}P_{\rm 2}(\cos{\psi}) \right].
\label{appBphi}$$ For a single perturber, for instance $m_1$, we have $\zeta = \zeta_1$ and $\psi_1 = 0$, so expression (\[appBphi\]) gives the usual tidal potential (since $m_2=0 \Rightarrow \zeta_2=0$). For two perturbers, $\zeta$ depends on the relative position of these perturbers with respect to the tertiary, and so we have $$\zeta \, P_{\rm 2}(\cos{\psi}) = \zeta_1 P_{\rm 2}(\cos({\psi_1}-\psi)) + \zeta_2 P_{\rm 2}(\cos({\psi_2}-\psi)) \ .$$
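The collapse of the three bracketed terms of Eq. \[appB1\] into a single $P_{\rm 2}$ term uses the identity ${\frac{1}{4}}+{\frac{3}{4}}\cos{2\theta}=P_{\rm 2}(\cos{\theta})$, easily verified numerically (a sketch of our own; the sampled angles are arbitrary):

```python
import math

def P2(c):
    # second Legendre polynomial
    return 0.5 * (3.0 * c * c - 1.0)

for k in range(200):
    theta = 0.01 * math.pi * k  # arbitrary sample of angles
    lhs = 0.25 + 0.75 * math.cos(2.0 * theta)
    assert abs(lhs - P2(math.cos(theta))) < 1e-12
```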
To simplify things, we can approximate $m_{\rm 3}$ as an ellipsoid with a single bulge constantly pointed towards the inner binary COM. In other words, we assume that the respective tidal bulges raised by $m_{\rm 1}$ and $m_{\rm 2}$ coalesce to form a single set of bulges, equal to that raised by a point mass at the COM of the inner binary. In this scenario, evaluating $\zeta$ at any single value of $\psi$ suffices to determine the $\zeta$ that characterises the deformation of the entire $m_{\rm 3}$. Therefore, setting $\psi=0$, and noting that $m=m_{\rm 1}=m_{\rm 2}$, we have $$\begin{split}
\zeta &= \zeta_1 P_{\rm 2}(\cos{\psi_1}) + \zeta_2 P_{\rm 2}(\cos{\psi_2})\\
&=\frac{m}{m_3}\left(\frac{R_{\rm 3}}{r_1}\right)^3P_{\rm 2}(\cos{\psi_1}) + \frac{m}{m_3}\left(\frac{R_{\rm 3}}{r_2}\right)^3P_{\rm 2}(\cos{\psi_2})\\
&=\left[\frac{P_{\rm 2}(\cos{\psi_1})}{(r_{\rm 1}/a_{\rm 2})^3}+\frac{P_{\rm 2}(\cos{\psi_2})}{(r_{\rm 2}/a_{\rm 2})^3}\right]\frac{m}{m_{\rm 3}}\left(\frac{R_{\rm 3}}{a_{\rm 2}}\right)^3.
\end{split}$$ which is exactly Eq. \[zetaphi\].
\[lastpage\]
[^1]: E-mail: ygbcyy@ynao.ac.cn
---
abstract: 'Let $X$ be a smooth projective curve of genus $g$ and $L$ be a degree $l$ line bundle on $X$ with $l\geq 2g-1$. Denote the stack of rank two Higgs bundles on $X$ with value in $L$ by ${\mathcal{H}iggs}$ and the semistable part by ${\mathcal{H}iggs}_{ss}$. Let $H$ be the Hitchin base. In this paper we will construct the Poincaré sheaf $\mathcal{P}$ on ${\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}_{ss}$ which is a maximal Cohen-Macaulay sheaf and flat over ${\mathcal{H}iggs}_{ss}$. In particular this includes the locus of nonreduced spectral curves. The present work generalizes the construction of the Poincaré sheaf in $[1]$, $[3]$ and $[13]$.'
author:
- Mao Li
title: Construction of the Poincaré sheaf on the stack of rank $2$ Higgs bundles
---
Introduction
============
Poincaré sheaf
--------------
Let $C$ be a smooth projective curve and $J$ be the Jacobian of $C$. Then it is well known that there is a Poincaré line bundle $\mathcal{P}$ on $J\times J$ which is the universal family of topologically trivial line bundles on $J$ (see $[14]$). When $C$ is an integral planar curve, the Jacobian $J$ is no longer projective, but we can consider the compactified Jacobian ${\overline{J}}$ ($[7]$, $[8]$) which parameterizes torsion free rank $1$ sheaves on $C$. In this case there exists a Poincaré line bundle $\mathcal{P}$ on $J\times{\overline{J}}$ ($[2]$) defined in the following way. Consider $$\begin{CD}
C\times J\times{\overline{J}}\\
@V{\pi}VV\\
J\times{\overline{J}}\\
\end{CD}$$ Then: $$\label{Poincare line bundle}
\mathcal{P}={\textrm{det}}(R\pi_{*}(L\otimes F))\otimes {\textrm{det}}(R\pi_{*}O_{C})\otimes {\textrm{det}}(R\pi_{*}(L))^{-1}\otimes {\textrm{det}}(R\pi_{*}(F))^{-1}$$ where $F$ and $L$ are the universal sheaves on $C\times J$ and $C\times{\overline{J}}$. It is interesting to ask whether we can extend $\mathcal{P}$ to ${\overline{J}}\times{\overline{J}}$. For curves with double singularities, this has been answered in $[15]$, and the generalization to all integral planar curves is obtained in $[1]$ (similar results have also been obtained by Margarida Melo, Antonio Rapagnetta and Filippo Viviani in $[3]$, where they work with moduli spaces instead of stacks):
There exists a maximal Cohen-Macaulay sheaf $\mathcal{P}$ on ${\overline{J}}\times{\overline{J}}$ such that the restriction of $\mathcal{P}$ to $J\times{\overline{J}}\cup{\overline{J}}\times J$ is the Poincaré line bundle given by formula \[Poincare line bundle\]. Moreover, $\mathcal{P}$ is flat over both ${\overline{J}}$ factors.
We shall call $\mathcal{P}$ the Poincaré sheaf. In fact, even though the theorem is stated only for integral curves, the argument presented in $[1]$ also works for reduced planar curves.\
For the construction of the Poincaré line bundle on $J\times{\overline{J}}\cup{\overline{J}}\times J$, we do not need to assume that $C$ is reduced. Similarly, Lemma \[equivariance property\] also holds for Poincaré line bundles of nonreduced planar curves.
One of the main motivations for studying compactified Jacobians is that they are fibers of the Hitchin fibration. Let $X$ be a smooth projective curve, $L$ a line bundle on $X$. Denote the stack of rank $n$ Higgs bundles with value in $L$ by ${\mathcal{H}iggs}$. Let $H$ be the Hitchin base which parameterizes spectral curves. We have the Hitchin fibration ${\mathcal{H}iggs}\xrightarrow{h} H$. It is well-known that a Higgs bundle on $X$ can be naturally viewed as a torsion-free rank $1$ sheaf on the spectral curve $C$ (see $[16]$). Moreover, let $H_{r}$ be the open subscheme of $H$ corresponding to reduced spectral curves. Then it is well-known that the fibers of $h$ over $H_{r}$ can be identified with the compactified Jacobians of the spectral curves. Let ${\mathcal{H}iggs}^{reg}$ denote the open substack of ${\mathcal{H}iggs}$ corresponding to line bundles on spectral curves. Then formula \[Poincare line bundle\] defines a Poincaré line bundle $\mathcal{P}$ on $${\mathcal{H}iggs}^{reg}\times_{H}{\mathcal{H}iggs}\cup{\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}^{reg}$$ Moreover, denote the open substack of Higgs bundles with generically regular Higgs field by $\widetilde{{\mathcal{H}iggs}}$ (Definition \[generically regular Higgs field\]). As we will see later in Proposition \[definition of Q\], the construction in $[1]$ actually provides a maximal Cohen-Macaulay sheaf $\mathcal{P}$ on $$\widetilde{{\mathcal{H}iggs}}\times_{H}{\mathcal{H}iggs}\cup{\mathcal{H}iggs}\times_{H}\widetilde{{\mathcal{H}iggs}}$$ which extends the Poincaré line bundle on $${\mathcal{H}iggs}^{reg}\times_{H}{\mathcal{H}iggs}\cup{\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}^{reg}$$ Moreover, it is shown in $[1]$ that $\mathcal{P}$ induces an autoequivalence of the derived category. This establishes the Langlands duality for Hitchin systems for $GL(n)$ over the locus of integral spectral curves (see $[1]$ for discussions about its relations with automorphic sheaves).
Hence it is a very interesting question whether we can extend the maximal Cohen-Macaulay sheaf above to ${\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}$. The main issue here is how to extend $\mathcal{P}$ to the locus of nonreduced spectral curves. In this paper we provide a partial answer in the case of rank $2$ Higgs bundles on a projective smooth curve $X$. Namely, let ${\mathcal{H}iggs}_{ss}$ be the open substack of semistable Higgs bundles. We are going to construct a maximal Cohen-Macaulay sheaf on ${\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}_{ss}$ such that it is an extension of the Poincaré line bundle.
Main result and the Organization of the paper
---------------------------------------------
Let $X$ be a smooth projective curve of genus $g$, $L$ a line bundle on $X$ of degree $l\geq 2g-1$. Let ${\mathcal{H}iggs}$ be the stack of rank two Higgs bundles on $X$ with value in $L$, and $H$ be the Hitchin base. Let ${\mathcal{H}iggs}_{ss}$ denote the open substack of semistable Higgs bundles, and let ${\mathcal{H}iggs}^{reg}$ denote the open substack of ${\mathcal{H}iggs}$ corresponding to Higgs bundles that are line bundles on their spectral curves. The main theorem of the paper is the following:
\[the main theorem\] There exists a maximal Cohen-Macaulay sheaf $\mathcal{P}$ on ${\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}_{ss}$ which extends the Poincaré line bundle on ${\mathcal{H}iggs}^{reg}\times_{H}{\mathcal{H}iggs}_{ss}$. Moreover, $\mathcal{P}$ is flat over ${\mathcal{H}iggs}_{ss}$.
By Proposition \[uniqueness of CM sheaf\], such an extension of the Poincaré line bundle on ${\mathcal{H}iggs}^{reg}\times_{H}{\mathcal{H}iggs}_{ss}$ is unique if it exists. Let $\widetilde{{\mathcal{H}iggs}}$ be the open substack of ${\mathcal{H}iggs}$ corresponding to Higgs bundles with generically regular Higgs field (see Definition \[generically regular Higgs field\]). The construction in $[1]$ provides the Poincaré sheaf on $\widetilde{{\mathcal{H}iggs}}\times_{H}{\mathcal{H}iggs}_{ss}$. The problem is that for Higgs bundles with nonreduced spectral curves, the Higgs field may not be generically regular, hence the construction in $[1]$ does not extend to ${\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}_{ss}$. In the previous work $[13]$ we constructed the Poincaré sheaf for the stack of Higgs bundles on ${\mathbf{P}^{1}}$. In this paper we generalize the construction to higher genus curves.
The rest of the paper is organized as follows. In $1.3$ we shall review the construction in $[1]$ and adapt it to our setting. In $1.4$ we sketch the main idea of the construction, which is the fundamental diagram $1.2$ below. We will discuss its motivation and also the relation with our previous work for ${\mathbf{P}^{1}}$. In section $2$ we gather some preliminary results that will be used in the paper. Most of these are similar to the previous paper; the new feature is the vanishing result in Lemma \[vanishing of pushforward\]. In section $3$ we first review the geometry of the stack of Higgs bundles. In subsection $3.3$ we come to the main construction, which involves the stack $\mathcal{Y}$, a closed substack $\mathcal{Z}$ of $\mathcal{Y}$ and the blowup of $\mathcal{Y}$ along $\mathcal{Z}$, which is denoted by $\mathcal{B}$. The main result is the fundamental diagram established in Proposition \[resolve the rational map\], which roughly speaking says the following (see the discussions in subsection $1.4$ and subsection $3.3$ for more detail): There exists a natural morphism $$\mathcal{Y}\backslash\mathcal{Z}\rightarrow {\textrm{Hilb}_{S}}^{d}$$ and it extends to a morphism $\mathcal{B}\rightarrow {\textrm{Hilb}_{S}}^{d}$, hence we have the following diagram: $$\label{first appearence of the diagram}
\xymatrix{
& \mathcal{B} \ar[d]_{\pi} \ar[r]^{p} & {\textrm{Hilb}_{S}}^{d}\\
\mathcal{Z} \ar[r] & \mathcal{Y} & \\
}$$ In section $4$ we use the fundamental diagram above to establish Lemma \[the main lemma\], and in the last section we use it to finish the construction of the Poincaré sheaf.
Review of the construction of the Poincaré sheaf
------------------------------------------------
This subsection is essentially the same as subsection $1.3$ of the previous work $[13]$; we include it here for the reader’s convenience. We will review the construction of the Poincaré sheaf in $[1]$ and $[3]$, and adapt it to our setup. Let $C$ be a reduced planar curve embedded into a smooth surface $C\hookrightarrow S$. It is well known that ${\textrm{Hilb}_{C}}^{d}$ is a complete intersection in ${\textrm{Hilb}_{S}}^{d}$ of codimension $d$. Let $\overline{J}$ be the stack of torsion free rank $1$ sheaves on $C$. The Poincaré sheaf on ${\overline{J}}\times\overline{J}$ can be constructed as follows. Fix an ample line bundle $N$ on $C$. We have a natural Abel-Jacobian map: $${\textrm{Hilb}_{C}}^{d}\xrightarrow{\alpha} \overline{J} \qquad D\rightarrow I_{D}^{\vee}\otimes N^{-m}$$ Let $U_{d}$ be the open subscheme of ${\textrm{Hilb}_{C}}^{d}$ given by the condition $H^{1}(I_{D}^{\vee})=0$. Then the restriction of $\alpha$ to $U_{d}$ is smooth, and the union of the images of all $U_{d}$ covers $\overline{J}$. So we only need to construct the Poincaré sheaf on $U_{d}\times \overline{J}$ and show it descends to $\overline{J}\times \overline{J}$. Let $F$ be the universal sheaf on $C\times \overline{J}$. The Hilbert scheme of the surface is denoted by ${\textrm{Hilb}_{S}}^{d}$. It is well known that ${\textrm{Hilb}_{S}}^{d}$ is smooth. Let ${\textrm{Flag}_{S}}^{d}$ be the flag Hilbert scheme of $S$, which parameterizes length $d$ subschemes together with a complete flag: $$\emptyset=D_{0}\subseteq D_{1}\subseteq\cdots\subseteq D_{d}=D$$ Consider the following diagram: $$\begin{CD}
{\textrm{Hilb}_{S}}\times \overline{J}@<{\psi\times id}<<\widetilde{{\textrm{Hilb}_{S}}}\times \overline{J}@>{\sigma_{d}\times id}>>S^{d}\times \overline{J}@<{l^{d}\times id}<<C^{d}\times \overline{J}\\
@V{p_{1}}VV\\
{\textrm{Hilb}_{S}}\end{CD}$$ where $\widetilde{{\textrm{Hilb}_{S}}}$ stands for the isospectral Hilbert scheme of $S$ (see $[1]$ Proposition $3.7$ or $[17]$ for the definition). It is known that $\psi$ is finite flat. Moreover, let ${\textrm{Hilb}_{S}}^{' d}$ be the open subscheme of ${\textrm{Hilb}_{S}}^{d}$ parameterizing subschemes that can be embedded into smooth curves. Then we have: $$\widetilde{{\textrm{Hilb}_{S}}^{d}}|_{{\textrm{Hilb}_{S}}^{' d}}\simeq{\textrm{Flag}_{S}}^{' d}={\textrm{Flag}_{S}}^{d}|_{{\textrm{Hilb}_{S}}^{' d}}$$ Let $\mathcal{D}\xrightarrow{\pi} {\textrm{Hilb}_{S}}$ be the universal finite subscheme over ${\textrm{Hilb}_{S}}$ and set $\mathcal{A}=\pi_{*}O_{D}$, then we define: $$Q=((\psi\times id)_{*}(\sigma_{d}\times id)^{*}(l^{d}\times id)_{*}F^{\boxtimes d})^{sign}\otimes p_{1}^{*}det(\mathcal{A})^{-1}$$ where the upper index “sign” stands for the space of anti-invariants with respect to the natural action of the symmetric group. Then it is proved in $[1]$ that $Q$ is supported on ${\textrm{Hilb}_{C}}$ and it is a maximal Cohen-Macaulay sheaf. Moreover, if we restrict $Q$ to $U_{d}$, then it descends down to ${\overline{J}}\times {\overline{J}}$. (In $[1]$ the statement is proved only for integral curves, but the same argument works for any reduced planar curve. The construction also works for families of planar curves.) Let ${\textrm{Hilb}_{S}}^{'}\subseteq {\textrm{Hilb}_{S}}$ be the open subscheme parameterizing subschemes $D$ such that $D$ can be embedded into a smooth curve. Then we have a simpler description of the restriction of $\widetilde{{\textrm{Hilb}_{S}}}$ to ${\textrm{Hilb}_{S}}^{'}$ thanks to the following proposition ($[1]$ Proposition $3.7$):
\[rewrite using flag\] Let ${\textrm{Flag}_{S}}^{d}$ be the flag Hilbert scheme of $S$. Then we have $$\widetilde{{\textrm{Hilb}_{S}}^{d}}|_{{\textrm{Hilb}_{S}}^{' d}}\simeq {\textrm{Flag}_{S}}^{' d}={\textrm{Flag}_{S}}^{d}|_{{\textrm{Hilb}_{S}}^{' d}}$$
Hence over ${\textrm{Hilb}_{S}}^{'}$ we have ${\textrm{Flag}_{S}}^{'}\simeq \widetilde{{\textrm{Hilb}_{S}}}$, and the construction can be written in terms of ${\textrm{Flag}_{S}}^{'}$.\
We shall adapt the construction above to our setting. Namely, let ${\mathcal{H}iggs}$ be the stack of rank $2$ Higgs bundles on $X$ with values in $L$, and let $x_{0}$ be a point on $X$. We work with the family of spectral curves over $H$: $$\xymatrix{
\mathcal{C} \ar[rr] \ar[rd] & & S\times H \ar[ld]\\
& H\\
}$$ Let $N$ be the line bundle on $\mathcal{C}$ which is the pullback of $O(x_{0})$ on $X$. Let $\widetilde{{\mathcal{H}iggs}}$ be the stack of Higgs bundles with generically regular Higgs field. Then we still have an Abel-Jacobi map: $$\textrm{Hilb}^{d}_{\mathcal{C}|H}\xrightarrow{\alpha}\widetilde{{\mathcal{H}iggs}}:\qquad D\rightarrow I_{D}^{\vee}\otimes N^{-m}$$ Moreover, if we let $U_{d}$ be the open subscheme of $\textrm{Hilb}^{d}_{\mathcal{C}|H}$ consisting of those $D$ with $H^{1}(\check{I}_{D})=0$, then $\alpha$ is smooth on $U_{d}$, and as $d$ varies the images of the $U_{d}$ cover $\widetilde{{\mathcal{H}iggs}}$.\
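For orientation, here is a heuristic sketch of the codimension count for the non-reduced locus of spectral curves that is used below; it assumes characteristic zero and the standard description of the rank $2$ Hitchin base $H=H^{0}(X,L)\oplus H^{0}(X,L^{2})$, with $\deg L=l\geq 2g-1$.

```latex
% A spectral curve corresponds to a characteristic polynomial
%   y^{2} - a y + b, \qquad a\in H^{0}(X,L),\ b\in H^{0}(X,L^{2}),
% and by Riemann-Roch (since l > 2g-2):
\dim H = h^{0}(L) + h^{0}(L^{2}) = (l+1-g) + (2l+1-g)
% The spectral curve is non-reduced exactly when y^{2}-ay+b is a square,
% i.e. b = a^{2}/4, so H_{nr}\cong H^{0}(X,L) and
\operatorname{codim}(H_{nr},H) = h^{0}(L^{2}) = 2l+1-g
```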
Consider the diagram: $$\begin{CD}
{\textrm{Hilb}_{S}}\times{\mathcal{H}iggs}@<{\psi\times id}<<\widetilde{{\textrm{Hilb}_{S}}}\times{\mathcal{H}iggs}@>{\sigma_{d}\times id}>>S^{d}\times{\mathcal{H}iggs}@<{l^{d}\times id}<<\mathcal{C}^{d}\times_{H}{\mathcal{H}iggs}\\
@V{p_{1}}VV\\
{\textrm{Hilb}_{S}}\end{CD}$$ where $\mathcal{C}^{d}$ is the $d$-fold fiber product of $\mathcal{C}$ over $H$: $$\mathcal{C}^{d}=\mathcal{C}\times_{H}\cdots\times_{H}\mathcal{C}$$ Similarly, set: $$\label{Q}
Q=((\psi\times id)_{*}(\sigma_{d}\times id)^{*}(l^{d}\times id)_{*}F^{\boxtimes d})^{sign}\otimes p_{1}^{*}det(\mathcal{A})^{-1}$$ Then by essentially the same argument, we obtain a Cohen-Macaulay sheaf $Q$ of codimension $d$ on ${\textrm{Hilb}_{S}}^{d}\times{\mathcal{H}iggs}$. Denote by $H_{r}$ the open subscheme of $H$ corresponding to reduced spectral curves, and by $H_{nr}$ its complement. Over $H_{r}$, the sheaf $Q$ is supported on $\textrm{Hilb}^{d}_{\mathcal{C}|H_{r}}\times_{H_{r}}{\mathcal{H}iggs}|_{H_{r}}$. It is not hard to check that $H_{nr}$ has codimension $2l+1-g$ in $H$. Also, since ${\mathcal{H}iggs}$ is flat over $H$, the complement of ${\textrm{Hilb}_{S}}^{d}\times{\mathcal{H}iggs}|_{H_{r}}$ also has codimension $2l+1-g$. From the construction in $[1]$ it is not hard to check that the codimension of $$\textrm{Supp}(Q)\cap{\textrm{Hilb}_{S}}^{d}\times{\mathcal{H}iggs}|_{H_{nr}}$$ is at least $d+2l+1-g$. Since $Q$ is Cohen-Macaulay of codimension $d$, the support of $Q$ is of pure dimension, without embedded components, and has codimension $d$ in ${\textrm{Hilb}_{S}}^{d}\times{\mathcal{H}iggs}$ (Proposition \[codimension of supp of CM sheaves\]). We conclude that $\textrm{Hilb}^{d}_{\mathcal{C}|H_{r}}\times_{H_{r}}{\mathcal{H}iggs}|_{H_{r}}$ is a dense open subset of $\textrm{Supp}(Q)$ whose complement has codimension at least $2l+1-g$. Hence $Q$ is supported on $\textrm{Hilb}^{d}_{\mathcal{C}|H}\times_{H}{\mathcal{H}iggs}$ over the entire $H$. In conclusion, we have:
\[definition of Q\] Let $Q$ be the sheaf on ${\textrm{Hilb}_{S}}^{d}\times{\mathcal{H}iggs}$ given by formula \[Q\]. Then:
$Q$ is a maximal Cohen-Macaulay sheaf of codimension $d$ on ${\textrm{Hilb}_{S}}^{d}\times{\mathcal{H}iggs}$ supported on $\textrm{Hilb}^{d}_{\mathcal{C}|H}\times_{H}{\mathcal{H}iggs}$.
If we consider its restriction to $U_{d}\times_{H}{\mathcal{H}iggs}$ (recall that $U_{d}$ is the open subscheme of $\textrm{Hilb}^{d}_{\mathcal{C}|H}$ consisting of those $D$ with $H^{1}(\check{I}_{D})=0$), then it descends to $\widetilde{{\mathcal{H}iggs}}\times_{H}{\mathcal{H}iggs}$ and agrees with the Poincaré line bundle on ${\mathcal{H}iggs}^{reg}\times_{H}{\mathcal{H}iggs}$.
Using the fact that the complement of ${\mathcal{H}iggs}^{reg}$ has codimension at least $2$, we conclude from Proposition \[uniqueness of CM sheaf\] that the maximal Cohen-Macaulay sheaves on $\widetilde{{\mathcal{H}iggs}}\times_{H}{\mathcal{H}iggs}$ and ${\mathcal{H}iggs}\times_{H}\widetilde{{\mathcal{H}iggs}}$ constructed from the previous Proposition glue together; hence we have the following:
\[CM sheaf constructed in 1\] We have a maximal Cohen-Macaulay sheaf on $$\widetilde{{\mathcal{H}iggs}}\times_{H}{\mathcal{H}iggs}\cup{\mathcal{H}iggs}\times_{H}\widetilde{{\mathcal{H}iggs}}$$ which agrees with the Poincaré line bundle on $${\mathcal{H}iggs}^{reg}\times_{H}{\mathcal{H}iggs}\cup{\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}^{reg}$$
For later use, we will also denote $$\label{Q'}
Q'=((\sigma_{d}\times id)^{*}(l^{d}\times id)_{*}F^{\boxtimes d})\otimes (id\times\psi)^{*}(p_{1}^{*}det(\mathcal{A})^{-1})$$ The following lemma is clear from formula \[Q\]:
\[summand\] $Q$ is a direct summand of $(\psi\times id)_{*}(Q')$
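A brief justification sketch, assuming we work over a field of characteristic zero: the sign component of any sheaf with a symmetric-group action is the image of an idempotent, hence a direct summand.

```latex
% Antisymmetrizer in the group algebra k[S_d] (char k = 0):
e_{\mathrm{sign}} = \frac{1}{d!}\sum_{\sigma\in S_{d}}\mathrm{sgn}(\sigma)\,\sigma,
\qquad e_{\mathrm{sign}}^{2} = e_{\mathrm{sign}}
% For any sheaf G with an S_d-action, G^{\mathrm{sign}} = e_{\mathrm{sign}}G
% is therefore a direct summand of G. Taking
%   G = (\psi\times id)_{*}(\sigma_{d}\times id)^{*}(l^{d}\times id)_{*}F^{\boxtimes d}
% and twisting by the line bundle p_{1}^{*}\det(\mathcal{A})^{-1}
% (projection formula) exhibits Q as a direct summand of (\psi\times id)_{*}(Q').
```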
For the purpose of the last section, let us also record the following equivariance properties of the Poincaré sheaf established in $[1]$. All the properties still hold for the Poincaré sheaf in Corollary \[CM sheaf constructed in 1\], because of Proposition \[uniqueness of CM sheaf\].
\[equivariance property\]
Let $L$ be a line bundle on $C$. Consider the automorphism $\mu_{L}$ of ${\overline{J}}$ defined by $F\rightarrow F\otimes L$. Then we have: $$(\mu_{L}\times id)^{*}\mathcal{P}\simeq \mathcal{P}\otimes p_{2}^{*}(\mathcal{P}_{L})$$ where $\mathcal{P}_{L}$ is the line bundle on ${\overline{J}}$ obtained by restricting the Poincaré line bundle to $\{L\}\times{\overline{J}}\hookrightarrow J\times{\overline{J}}$.
Let $\nu$ be the involution of ${\overline{J}}$ given by $F\rightarrow \check{F}={\mathcal{H}om}_{O_{C}}(F,O_{C})$. Consider: $${\overline{J}}\times{\overline{J}}\xrightarrow{\nu\times\textrm{id}}{\overline{J}}\times{\overline{J}}$$ Then we have: $$(\nu\times\textrm{id})^{*}(\mathcal{P})\simeq\check{\mathcal{P}}={\mathcal{H}om}(\mathcal{P},O)$$
Consider the diagram: $$\xymatrix{
J\times{\overline{J}}& J\times{\overline{J}}\times{\overline{J}}\ar[l]_{p_{13}} \ar[r]^{p_{23}} \ar[d]^{\mu\times\textrm{id}} & {\overline{J}}\times{\overline{J}}\\
& {\overline{J}}\times{\overline{J}}}$$ where $J\times{\overline{J}}\xrightarrow{\mu}{\overline{J}}$ is given by $(L,F)\rightarrow L\otimes F$. Then we have $$(\mu\times\textrm{id})^{*}(\mathcal{P})\simeq p_{13}^{*}(\mathcal{P})\otimes p_{23}^{*}(\mathcal{P})$$
Motivation of the construction and a reformulation of the main theorem
----------------------------------------------------------------------
In this subsection we sketch the idea behind the construction of the Poincaré sheaf. The motivation comes from Drinfeld's construction of the automorphic sheaf, which we briefly recall (see $[11]$ and $[12]$). Let $X$ be a smooth projective curve over a finite field, and $\mathcal{E}$ an irreducible rank two $l$-adic local system on $X$. Denote the stack of rank two vector bundles on $X$ by $\textrm{Bun}_{2}$. In $[11]$ Drinfeld constructed an automorphic perverse sheaf $\textrm{Aut}_{\mathcal{E}}$ on $\textrm{Bun}_{2}$ via the following procedure. Let $d$ be an integer greater than $4g-4$. Let $S={\textrm{Pic}^{d}}$ be the Picard scheme of $X$ corresponding to degree $d$ line bundles. Let $P=X^{(d)}$. It is well known that the symmetric power $X^{(d)}$ is a projective bundle over ${\textrm{Pic}^{d}}$. Let $\check{P}$ be the dual projective bundle. By definition, $\check{P}$ classifies nontrivial extensions up to scalars: $$0\rightarrow \Omega^{1}_{X}\rightarrow L_{2}\rightarrow L_{1}\rightarrow 0$$ $\check{P}$ is equipped with natural morphisms: $$\check{P}\rightarrow \textrm{Bun}_{2}: \{0\rightarrow \Omega^{1}_{X}\rightarrow L_{2}\rightarrow L_{1}\rightarrow 0\} \rightarrow L_{2}$$ $$\check{P}\rightarrow S: \{0\rightarrow \Omega^{1}_{X}\rightarrow L_{2}\rightarrow L_{1}\rightarrow 0\} \rightarrow L_{1}$$ There exists a nonempty open subscheme $U\subseteq \check{P}$ such that the morphism $U\rightarrow \textrm{Bun}_{2}$ is smooth. Let $Z$ be the universal hyperplane scheme in $P\times_{S}\check{P}$. Hence we have the following commutative diagram: $$\xymatrix{
& Z \ar[ld]_{\check{\rho}} \ar[rd]^{\rho} & \\
\check{P} \ar[rd]_{\check{\pi}} \ar[d]_{\sigma} & & P \ar[ld]^{\pi}\\
\textrm{Bun}_{2} & S\\
}$$ Let $\mathcal{E}^{(d)}$ be Laumon’s sheaf on $P=X^{(d)}$. Also, $\textrm{det}(\mathcal{E})$ determines a rank $1$ local system on $\textrm{Pic}_{X}$; denote its fiber at $\Omega^{1}_{X}$ by $\mathcal{K}$. Then we have the following (Proposition $4.2.4$ and Remark $5.2$ of $[12]$):
The Radon transform of $\mathcal{E}^{(d)}$, given by $$R\check{\rho}_{*}\rho^{*}(\mathcal{E}^{(d)}[d])[d-g-1]$$ is an irreducible perverse sheaf on $\check{P}$.
The restriction of the perverse sheaf $$\widetilde{K}_{\mathcal{E}}=\mathcal{K}\otimes R\check{\rho}_{*}\rho^{*}(\mathcal{E}^{(d)}[d])[d-g-1]$$ to $U$ descends to $\textrm{Bun}_{2}$, and this gives the automorphic sheaf $\textrm{Aut}_{\mathcal{E}}$.
Since we expect the Poincaré sheaf on the stack of Higgs bundles to be the classical limit of the automorphic sheaf, it should be possible to construct the Poincaré sheaf by adapting the Radon transform construction to the Higgs setting. Hence it is natural to consider the following diagram (contained in the appendix of $[12]$): $$\xymatrix{
& T^{*}_{Z}(\check{P}\times P) \ar[ld]_{\pi} \ar[rd]^{p} & \\
T^{*}\check{P} & & T^{*}P\\
}$$ It is not hard to see that $T^{*}P$ classifies the data $(D,t_{D})$ where $D$ is a length $d$ subscheme of $X$, and $t_{D}\in H^{0}(\Omega^{1}_{X}\otimes O_{D})$. Hence if we denote the cotangent bundle of $X$ by $S$, $T^{*}P$ can be naturally identified with an open subscheme of ${\textrm{Hilb}_{S}}^{d}$ by embedding $D$ into $S$ using the section $t_{D}$. Similarly, $T^{*}\check{P}$ classifies the data: $$0\rightarrow \Omega^{1}_{X}\rightarrow L_{2}\rightarrow L_{1}\rightarrow 0, \varphi\in\textrm{Hom}(L_{2},L_{1}\otimes \Omega^{1}_{X})$$ Let $\mathcal{U}$ be the open subscheme of $T^{*}\check{P}$ satisfying the condition $H^{0}(\check{L}_{2}\otimes \Omega^{2}_{X})=0$. Also, let ${\mathcal{H}iggs}'$ be the stack classifying the data: $$(E,\phi); 0\rightarrow \Omega^{1}_{X}\hookrightarrow E$$ where $(E,\phi)$ is a rank two Higgs bundle with values in $\Omega^{1}_{X}$, and $\Omega^{1}_{X}\hookrightarrow E$ is a subsheaf such that the quotient is a line bundle; we also require that $H^{0}(\check{E}\otimes\Omega^{2}_{X})=0$. Then, arguing as in Lemma \[elementary properties of Y\], we see that ${\mathcal{H}iggs}'$ is naturally a closed subscheme of $\mathcal{U}$. Projective duality implies that there are natural isomorphisms: $$T^{*}\check{P}\backslash T^{*}S\times_{S}\check{P}\simeq T^{*}_{Z}(\check{P}\times P)\backslash T^{*}_{S}(S\times S)\times_{S}Z\simeq T^{*}P\backslash T^{*}S\times_{S}P$$ Hence these induce: $$(T^{*}\check{P}\backslash T^{*}S\times_{S}\check{P})\times{\mathcal{H}iggs}\simeq(T^{*}P\backslash T^{*}S\times_{S}P)\times{\mathcal{H}iggs}$$ Moreover, if we denote the universal spectral curve by $\mathcal{C}$, then under this isomorphism, the image of $\textrm{Hilb}^{d}_{\mathcal{C}|H}\times_{H}{\mathcal{H}iggs}$ in $(T^{*}P\backslash T^{*}S\times_{S}P)\times{\mathcal{H}iggs}$ corresponds to $({\mathcal{H}iggs}'\cap(T^{*}P\backslash T^{*}S\times_{S}P))\times_{H}{\mathcal{H}iggs}$.
The sheaf $Q$ on ${\textrm{Hilb}_{S}}^{d}\times{\mathcal{H}iggs}=T^{*}P\times{\mathcal{H}iggs}$ obtained from Proposition \[definition of Q\] should be viewed as the analogue of Laumon's sheaf in the Higgs setting. Since $Q$ is supported on $\textrm{Hilb}^{d}_{\mathcal{C}|H}\times_{H}{\mathcal{H}iggs}$, via the isomorphisms above, $Q$ can also be viewed as a coherent sheaf on $({\mathcal{H}iggs}'\cap(T^{*}P\backslash T^{*}S\times_{S}P))\times_{H}{\mathcal{H}iggs}$. In order to extend it to the entire ${\mathcal{H}iggs}'\times_{H}{\mathcal{H}iggs}$, it is natural to consider the full diagram: $$\xymatrix{
& T^{*}_{Z}(\check{P}\times P)\times{\mathcal{H}iggs}\ar[ld]_{\pi} \ar[rd]^{p} & \\
T^{*}\check{P}\times{\mathcal{H}iggs}& & T^{*}P\times{\mathcal{H}iggs}\\
}$$ and ask whether $\pi_{*}(p^{*}(Q))$ is a Cohen-Macaulay sheaf supported on ${\mathcal{H}iggs}'\times_{H}{\mathcal{H}iggs}$. Also, to generalize this picture to Higgs bundles with values in other line bundles, we need a more intrinsic characterization of $T^{*}_{Z}(\check{P}\times P)$. It is not hard to show that in this case $T^{*}_{Z}(\check{P}\times P)$ can be identified with the blowup of $T^{*}\check{P}$ along the subscheme $T^{*}S\times_{S}\check{P}$. All of this motivates the following construction (see Construction \[definition of Y\] in subsection $3.3$):
Let $X$ be a smooth projective curve, $L$ a line bundle on $X$ of degree $l\geq 2g-1$, and $x_{0}$ a fixed point on $X$. Let $\mathcal{Y}$ be the stack classifying the data $(E,\varphi,s,\sigma)$ where: $E$ is a rank two vector bundle on $X$ of degree $m$ which is globally generated and satisfies $H^{1}(E)=0$ and $H^{0}(\check{E}\otimes L)=0$; $s$ is a nonzero global section of $E$ such that the quotient $M=E/O_{X}$ is a line bundle; $\sigma$ is a trivialization of $M_{x_{0}}$; and $\varphi\in \textrm{Hom}(E,M\otimes L)$.
We also need:
Let ${\mathcal{H}iggs}'^{m}$ be the moduli stack classifying the data $(E,\phi,s,\sigma)$ where $(E,s,\sigma)$ satisfies the same conditions as in $\mathcal{Y}$, and $(E,\phi)$ is a Higgs bundle.
In Lemma \[elementary properties of Y\] we will prove that ${\mathcal{H}iggs}^{'m}$ is a closed substack of $\mathcal{Y}$. Now let us consider: $$\xymatrix{
X\times\mathcal{Y} \ar[d]^{f}\\
\mathcal{Y}\\
}$$ By the definition of $\mathcal{Y}$, we have the following morphism of vector bundles on $X\times\mathcal{Y}$: $$E\xrightarrow{\varphi} M\otimes L$$ This induces a section of $M\otimes L$ via: $$O_{X}\xrightarrow{s} E\xrightarrow{\varphi} M\otimes L$$ Pushing forward, we get a global section of the vector bundle $f_{*}(M\otimes L)$ on $\mathcal{Y}$: $$O\rightarrow f_{*}(M\otimes L)$$ Let $\mathcal{Z}$ be the vanishing locus of this section. $\mathcal{Z}$ is the analogue of the closed subscheme $T^{*}S\times_{S}\check{P}\subseteq T^{*}\check{P}$. We will prove that $\mathcal{Z}$ is a smooth closed substack of $\mathcal{Y}$ and a complete intersection in $\mathcal{Y}$ (see Lemma \[definition of Z\]). There exists a natural morphism $\mathcal{Y}\backslash\mathcal{Z}\rightarrow {\textrm{Hilb}_{S}}^{d}$ defined by the following procedure:\
We have a morphism $O_{\mathcal{Y}}\rightarrow f_{*}(M\otimes L)$ on $\mathcal{Y}$ which is nonvanishing over $\mathcal{Y}\backslash\mathcal{Z}$; hence we get a nonvanishing global section $t$ of $M\otimes L$ on $X\times(\mathcal{Y}\setminus\mathcal{Z})$. So if we denote the vanishing locus of $t$ by $D$, then $D$ is a closed substack of $X\times\mathcal{Y}$ and it is a family of finite subschemes of length $d$ of $X$ over $\mathcal{Y}\backslash\mathcal{Z}$. Notice that $t$ is given by the composition: $$O_{X}\xrightarrow{s} E\rightarrow M\otimes L$$ Restricting the above morphisms of vector bundles to $D$, we see that the composition: $$O_{D}\xrightarrow{s_{D}} E\otimes O_{D}\rightarrow M\otimes L\otimes O_{D}$$ is equal to $0$ by the definition of $D$. Since we have the exact sequence of vector bundles: $$0\rightarrow O_{X}\rightarrow E\rightarrow M\rightarrow 0$$ and $M$ is locally free, restricting to $D$ gives: $$0\rightarrow O_{D}\rightarrow E\otimes O_{D}\rightarrow M\otimes O_{D}\rightarrow 0$$ From this we see that the morphism: $$E\otimes O_{D}\rightarrow M\otimes L\otimes O_{D}$$ factors as: $$E\otimes O_{D}\rightarrow M\otimes O_{D}\rightarrow M\otimes L\otimes O_{D}$$ This gives a section $t_{D}\in H^{0}(L\otimes O_{D})$, and we embed $D$ into $S$ via $t_{D}$. This defines a morphism $\mathcal{Y}\setminus\mathcal{Z}\rightarrow {\textrm{Hilb}_{S}}^{d}$.\
We want to resolve the rational map $\mathcal{Y}\backslash\mathcal{Z}\rightarrow {\textrm{Hilb}_{S}}^{d}$. To do that we need the blowup of $\mathcal{Y}$ along $\mathcal{Z}$, which we denote by $\mathcal{B}$. In Proposition \[resolve the rational map\] we will show that the morphism $$\mathcal{Y}\backslash\mathcal{Z}\rightarrow {\textrm{Hilb}_{S}}^{d}$$ extends to a morphism $$\mathcal{B}\rightarrow {\textrm{Hilb}_{S}}^{d}$$ So in the end we have the following diagram, which will be called the fundamental diagram (the construction of the diagram is given in Proposition \[resolve the rational map\] of subsection $3.3$): $$\label{fundamental diagram}
\xymatrix{
& \mathcal{B} \ar[r] \ar[d]^{\pi} & {\textrm{Hilb}_{S}}^{d}\\
\mathcal{Z} \ar[r] & \mathcal{Y} & \\
}$$ Let ${\mathcal{H}iggs}^{(n)}$ be the open substack of ${\mathcal{H}iggs}$ defined at the beginning of section $4$. The fundamental diagram above induces the following diagram: $$\xymatrix{
\mathcal{B}\times{\mathcal{H}iggs}^{(n)} \ar[d]^{\pi} \ar[r]^{p} & {\textrm{Hilb}_{S}}^{d}\times{\mathcal{H}iggs}^{(n)}\\
\mathcal{Y}\times{\mathcal{H}iggs}^{(n)}\\
}$$ From the dimension calculations in Lemma \[elementary properties of Y\] and Lemma \[definition of Z\], it follows easily that ${\mathcal{H}iggs}^{'m}\times_{H}{\mathcal{H}iggs}^{(n)}$ has codimension $d$ in $\mathcal{Y}\times{\mathcal{H}iggs}^{(n)}$ where $d=l+m$. The main theorem of the paper can be reformulated as follows:
\[rewrite main theorem\] $\pi_{*}(p^{*}(Q))$ (here all functors are derived) is a Cohen-Macaulay sheaf of codimension $d$ supported on ${\mathcal{H}iggs}^{'m}\times_{H}{\mathcal{H}iggs}^{(n)}$.
Even though this theorem looks weaker than Theorem \[the main theorem\], we will show in section $5$ that Theorem \[the main theorem\] can be deduced from Theorem \[rewrite main theorem\]. The proof of Theorem \[rewrite main theorem\] is also given in section $5$; it is obtained from a more detailed study of the properties of the morphism $p$ in subsection $3.3$ and a cohomological calculation in section $4$.\
\
In the rest of this subsection, let us discuss the relationship between this construction and our previous construction of the Poincaré sheaf on the stack of Higgs bundles for ${\mathbf{P}^{1}}$ in $[13]$. We first recall the following construction:
Let ${\mathcal{H}iggs}'$ be the moduli stack classifying the data $(E,\phi,s)$ where $(E,\phi)$ is a rank two Higgs bundle on ${\mathbf{P}^{1}}$ with value in $O(n)$ such that the underlying vector bundle $E$ is isomorphic to $O\oplus O$, and $s$ is a nonzero global section of $E$. Here we assume $n\geq 2$.
The main result of our previous work can be summarized in the following theorem:
There exists a smooth closed substack $\mathcal{Z}$ of ${\mathcal{H}iggs}'$ of codimension $n+1$ such that $\mathcal{Z}$ is a complete intersection in ${\mathcal{H}iggs}'$.
There exists a morphism ${\mathcal{H}iggs}'\backslash\mathcal{Z}\rightarrow {\textrm{Hilb}_{S}}^{n}$. Moreover, if we denote the blowup of ${\mathcal{H}iggs}'$ along $\mathcal{Z}$ by ${\mathcal{H}iggs}''$, then the morphism ${\mathcal{H}iggs}'\backslash\mathcal{Z}\rightarrow {\textrm{Hilb}_{S}}^{n}$ extends to: $$\xymatrix{
{\mathcal{H}iggs}'' \ar[d]^{\pi} \ar[r]^{g} & {\textrm{Hilb}_{S}}^{n}\\
{\mathcal{H}iggs}'\\
}$$
Consider the diagram: $$\xymatrix{
{\mathcal{H}iggs}''\times_{H}{\mathcal{H}iggs}^{(-n)} \ar[d]^{\pi} \ar[r]^{g} & {\textrm{Hilb}_{S}}^{n}\times{\mathcal{H}iggs}^{(-n)}\\
{\mathcal{H}iggs}'\times_{H}{\mathcal{H}iggs}^{(-n)}\\
}$$ where ${\mathcal{H}iggs}^{(-n)}$ denotes the open substack of the stack of Higgs bundles consisting of those whose underlying vector bundle is isomorphic to $O(-n)\oplus O(-n)$. Then $\pi_{*}(g^{*}(Q))$ is a maximal Cohen-Macaulay sheaf on ${\mathcal{H}iggs}'\times_{H}{\mathcal{H}iggs}^{(-n)}$.
The construction in our previous work $[13]$ is similar to the construction in this paper, except that in the case of ${\mathbf{P}^{1}}$ we take the blowup of ${\mathcal{H}iggs}'$, while in this paper we take the blowup of $\mathcal{Y}$ instead of ${\mathcal{H}iggs}^{'m}$. It turns out that we can still consider the stack $\mathcal{Y}$ for ${\mathbf{P}^{1}}$; we still have the closed substack $\mathcal{Z}'$ of $\mathcal{Y}$, and ${\mathcal{H}iggs}'$ is a closed substack of $\mathcal{Y}$. What makes ${\mathbf{P}^{1}}$ special is that the closed substacks $\mathcal{Z}'$ and ${\mathcal{H}iggs}'$ inside $\mathcal{Y}$ are transversal to each other; hence their intersection $$\mathcal{Z}=\mathcal{Z}'\cap{\mathcal{H}iggs}'$$ is still a complete intersection in ${\mathcal{H}iggs}'$. So the restriction of the blowup $\mathcal{B}$ to ${\mathcal{H}iggs}'$ is isomorphic to the blowup of ${\mathcal{H}iggs}'$ along $\mathcal{Z}$. This explains why in the case of ${\mathbf{P}^{1}}$ it suffices to consider the stack ${\mathcal{H}iggs}'$ and its blowup ${\mathcal{H}iggs}''$. In the case of higher genus curves this is no longer true, and we need to work with $\mathcal{Y}$ in order to get reasonable objects.
Acknowledgments
---------------
I’m very grateful to Professor Dima Arinkin for introducing me to this fascinating subject as well as his encouragement and support throughout the entire project. This work is supported by NSF grant DMS-1603277.
Preliminaries
=============
In this section we gather some preliminaries that will be needed in the later parts of the paper. Most of the material in subsections $2.1$ through $2.3$ has been discussed in section $2$ of our previous work $[13]$; hence we list the statements without proof.
Blowup and cohomology
---------------------
Let $X$ be a scheme and $\mathcal{E}$ a vector bundle of rank $n+1$ on $X$ with a global section $s\in H^{0}(X,\mathcal{E})$, such that the vanishing locus of $s$ is a regular embedding of codimension $n+1$. Denote the vanishing locus of $s$ by $Z$. Then we have the following description of the blowup of $X$ along $Z$ (see Chapter $11$ of $[10]$):
\[blowup as regular embedding\]
The blowup of $X$ along $Z$ is a regular embedding of codimension $n$ in $\textbf{P}(\mathcal{E})$:
$$\xymatrix{
Bl_{Z}X \ar[rr]^{i} \ar[dr]^{p}
& & \textbf{P}(\mathcal{E})=Proj(Sym_{X}\check{\mathcal{E}}) \ar[dl]^{\pi} \\
& X
}$$ described by the following: on $\mathbf{P}(\mathcal{E})$ we have a natural morphism of vector bundles: $$\pi^{*}\mathcal{E}\xrightarrow{\varphi} T_{P(\mathcal{E})\mid X}\otimes O(-1)$$ The blowup is the vanishing locus of $\varphi(\pi^{*}s)\in H^{0}(T_{P(\mathcal{E})\mid X}\otimes O(-1))$. It is a regular embedding in $P(\mathcal{E})$, so we have a Koszul resolution: $$\label{bl}
\bigwedge^{*}(\Omega_{P(\mathcal{E})\mid X}\otimes O(1))\rightarrow O_{Bl_{Z}X}$$ In particular, $p$ is a local complete intersection morphism.
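The morphism $\varphi$ appearing in the Proposition is the standard map obtained from the relative Euler sequence; we record it for the reader's convenience (this is classical and not specific to $[10]$):

```latex
% Relative Euler sequence on \mathbf{P}(\mathcal{E}) = Proj(Sym_{X}\check{\mathcal{E}}):
0 \rightarrow O_{\mathbf{P}(\mathcal{E})} \rightarrow \pi^{*}\mathcal{E}\otimes O(1)
  \rightarrow T_{\mathbf{P}(\mathcal{E})\mid X} \rightarrow 0
% Twisting by O(-1) yields the morphism used in the Proposition:
\pi^{*}\mathcal{E} \xrightarrow{\ \varphi\ } T_{\mathbf{P}(\mathcal{E})\mid X}\otimes O(-1),
\qquad \ker(\varphi)\simeq O(-1)
```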
\[bl resolve nonflatness\] Let $Y\xrightarrow{f} X$ be a proper flat morphism of schemes with geometrically integral fibers. Let $L$ be a line bundle on $Y$ such that $f_{*}(L)$ is a vector bundle of rank $n+1$ and the formation of $f_{*}(L)$ commutes with arbitrary base change. Let $s$ be a global section of $L$ such that the vanishing locus of the induced section $t$ of $f_{*}(L)$ has codimension $n+1$ in $X$. Let $Z$ be the vanishing locus of $t$, and set $Y'=Y\times_{X}Bl_{Z}X$. Consider: $$\begin{CD}
Y'@>{g'}>>Y\\
@V{f'}VV@V{f}VV\\
Bl_{Z}X@>{g}>>X\\
\end{CD}$$ Then we have:
The vanishing locus of $s$ on $Y$ is a relative effective Cartier divisor over the open subset $X\backslash Z$.
The section $s$ extends to a morphism $O(E)\xrightarrow{s'} g^{'*}(L)$ on $Y'$ such that the vanishing locus of $s'$ is a relative effective Cartier divisor over $Bl_{Z}X$ which extends the relative effective Cartier divisor over $X\backslash Z$.
The following lemma will be useful when we compute cohomology of sheaves on blowups:
\[pushforward of bl\] Keep the same assumption as above. Consider: $$\xymatrix{
Bl_{Z}X \ar[rr]^{i} \ar[dr]^{p}
& & \textbf{P}(\mathcal{E})=Proj(Sym_{X}\check{\mathcal{E}}) \ar[dl]^{\pi} \\
& X
}$$ Suppose $K$ is an object in $D^{b}_{coh}(\mathbf{P}(\mathcal{E}))$ such that $R\pi_{*}(K\otimes O(a))\in D^{\leq 0}_{coh}(X)$ for all $a\geq 0$. Then $Rp_{*}Li^{*}K\in D^{\leq 0}_{coh}(X)$.
The following property follows from Grothendieck duality:
\[dualizing sheaf of bl\] Keep the same assumptions as at the beginning of this subsection, and let $E$ be the exceptional divisor in $Bl_{Z}X$. Then the relative dualizing sheaf satisfies $\omega_{Bl_{Z}X\mid X}\simeq O(nE)$.
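A sketch of the computation behind this isomorphism, combining Proposition \[blowup as regular embedding\] with adjunction; we use the standard identification $O(1)|_{Bl_{Z}X}\simeq O(-E)$ for the relative $O(1)$ of $\mathbf{P}(\mathcal{E})$.

```latex
% Bl_{Z}X is the zero locus of a regular section of the rank n bundle
%   N = T_{\mathbf{P}(\mathcal{E})\mid X}\otimes O(-1),
% so by adjunction:
\omega_{Bl_{Z}X\mid X}
  \simeq \bigl(\omega_{\mathbf{P}(\mathcal{E})\mid X}\otimes\det N\bigr)\big|_{Bl_{Z}X}
% The Euler sequence gives \det T_{\mathbf{P}(\mathcal{E})\mid X}
%   = \pi^{*}\det\mathcal{E}\otimes O(n+1), hence
% \omega_{\mathbf{P}(\mathcal{E})\mid X} = \pi^{*}\det\mathcal{E}^{-1}\otimes O(-n-1)
% and \det N = \pi^{*}\det\mathcal{E}\otimes O(1). Therefore:
\omega_{Bl_{Z}X\mid X} \simeq O(-n)\big|_{Bl_{Z}X} \simeq O(nE)
```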
Cohen-Macaulay sheaves
----------------------
In this section we review some facts about Cohen-Macaulay sheaves that will be used freely in the paper. For simplicity we shall work with a Gorenstein scheme $X$ (since all schemes and stacks appearing in the paper are Gorenstein), so that the dualizing complex of $X$ can be taken to be $O_{X}$. Most of these properties can be found in $[18]$.
Let $M$ be a coherent sheaf on $X$ such that $\textrm{codim}(\textrm{Supp}(M))=d$. Then $M$ is called Cohen-Macaulay of codimension $d$ if ${\mathcal{R}Hom}_{O_{X}}(M,O_{X})$ sits in degree $d$. In particular, $M$ is a Cohen-Macaulay module. $M$ is called maximal Cohen-Macaulay if $d=0$.
Notice that if $d=0$ then both $M$ and $\mathcal{H}om_{O_{X}}(M,O_{X})$ are maximal Cohen-Macaulay, and the functor $\mathcal{H}om_{O_{X}}(-,O_{X})$ induces an anti-auto-equivalence in the category of maximal Cohen-Macaulay sheaves.
\[codimension of supp of CM sheaves\] If $M$ is a Cohen-Macaulay sheaf of codimension $d$ on a Gorenstein scheme $X$, then the support of $M$ is of pure dimension without embedded components, and $\textrm{codim}(\textrm{Supp}(M))=d$. Moreover, the functor $M\rightarrow \mathcal{E}xt^{d}_{O_{X}}(M,O_{X})$ induces an anti-auto-equivalence of the category of Cohen-Macaulay sheaves of codimension $d$.
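As a basic example of the definition (standard, via the Koszul complex): if $Z\subseteq X$ is a regularly embedded closed subscheme of codimension $d$, cut out by a regular section of a rank $d$ vector bundle $V$, then $O_{Z}$ is Cohen-Macaulay of codimension $d$:

```latex
% Koszul resolution of O_{Z} and its derived dual:
\bigwedge^{\bullet}\check{V} \rightarrow O_{Z}, \qquad
\mathcal{E}xt^{i}_{O_{X}}(O_{Z},O_{X}) \simeq
\begin{cases} \det V|_{Z} & i=d,\\ 0 & i\neq d, \end{cases}
% so {\mathcal{R}Hom}_{O_{X}}(O_{Z},O_{X}) sits in degree d,
% as required by the definition above.
```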
We will use the following property of maximal Cohen-Macaulay sheaves ($[1]$ Lemma $2.2$):
\[uniqueness of CM sheaf\] Let $X$ be a Gorenstein scheme of pure dimension, $M$ a maximal Cohen-Macaulay sheaf, and $Z\subseteq X$ a closed subscheme of codimension at least $2$. Then we have $M\simeq j_{*}(M|_{X\backslash Z})$, where $j: X\backslash Z\hookrightarrow X$ is the inclusion. Hence for any two maximal Cohen-Macaulay sheaves $M$ and $N$, we have $Hom_{X}(M,N)\simeq Hom_{X\backslash Z}(M|_{X\backslash Z},N|_{X\backslash Z})$.
\[flatness of CM sheaves\] Let $X$ be a Gorenstein scheme, $Y$ a smooth scheme, and $X\xrightarrow{f} Y$ a flat morphism. If $M$ is a maximal Cohen-Macaulay sheaf on $X$, then $M$ is flat over $Y$.
Hilbert scheme of points
------------------------
In this section we review some facts about Hilbert schemes of points on curves and surfaces that will be used in the proof of the main theorem. First let us look at Hilbert schemes of the curve $X$. It is well known that the symmetric power $X^{(d)}$ is the Hilbert scheme of $X$ parameterizing finite subschemes of length $d$, and that $X^{d}$ classifies finite subschemes of length $d$ together with a flag: $$\emptyset=D_{0}\subseteq D_{1}\subseteq\cdots\subseteq D_{d}=D$$ such that $\textrm{ker}(O_{D_{i}}\rightarrow O_{D_{i-1}})$ has length $1$. The natural morphism $$X^{d}\xrightarrow{\eta} X^{(d)} \qquad (\emptyset=D_{0}\subseteq D_{1}\subseteq\cdots\subseteq D_{d}=D)\rightarrow D$$ forgets the flag.
Now let us review some facts about Hilbert schemes of surfaces and planar curves. The following theorem is well known (see $[9]$):
Let $S$ be a smooth surface. Then ${\textrm{Hilb}_{S}}^{n}$ is smooth of dimension $2n$.
We also let ${\textrm{Hilb}_{S}}^{' d}$ be the open subscheme of ${\textrm{Hilb}_{S}}^{d}$ parameterizing subschemes $D$ that can be embedded into a smooth curve (this notion is introduced in $[1]$). In the rest of the paper, $S$ is going to be the total space of a line bundle $L$ on $X$, and we shall work with a particular open subscheme of ${\textrm{Hilb}_{S}}^{d}$ defined by the following proposition:
\[an open subscheme of HS\] View $X^{(d)}$ as the Hilbert scheme of $X$ parameterizing length $d$ subschemes. Let $D$ be the universal subscheme: $$\xymatrix{
D \ar[rr] \ar[dr] & & X\times X^{(d)} \ar[ld]^{\pi}\\
& X^{(d)}\\
}$$ Then the total space of the vector bundle $\pi_{*}(L_{D})$ can be naturally identified with an open subscheme of ${\textrm{Hilb}_{S}}^{d}$. Moreover, it is contained in the open subscheme ${\textrm{Hilb}_{S}}^{' d}$.
Observe that if $D$ is a length $d$ subscheme of $X$ and $s$ a global section of $L_{D}$, then we can embed $D$ into $S$ using the section $s$. This gives $D$ the structure of a closed subscheme of $S$, and the assertion follows immediately from this.
\[Hilbert scheme of P1 and S\] Keep the same notation as in the previous Proposition. Let $V$ denote the total space of the vector bundle $\pi_{*}(L_{D})$ on $X^{(d)}$, and denote its pullback to $X^{d}$ by $V'$.
There exists an open embedding $V\hookrightarrow {\textrm{Hilb}_{S}}^{d}$ with image contained in ${\textrm{Hilb}_{S}}^{' d}$.
There are Cartesian diagrams: $$\xymatrix{
V' \ar[r] \ar[d]^{\psi} & {\textrm{Flag}_{S}}^{' d} \ar[d]\\
V \ar[r] & {\textrm{Hilb}_{S}}^{' d}
}$$ $$\xymatrix{
V' \ar[r]^{r'} \ar[d]^{\psi} & X^{d} \ar[d]^{h}\\
V \ar[r]^{r} & X^{(d)}\\
}$$
The composition of the morphisms: $$V'\subseteq {\textrm{Flag}_{S}}^{' d}\xrightarrow{\sigma} S^{d}\rightarrow X^{d}$$ is the natural projection $V'\xrightarrow{r'} X^{d}$
Let $D'$ be the universal subscheme of $S$ over ${\textrm{Hilb}_{S}}^{' d}$ and $D$ the universal subscheme of $X$ over $X^{(d)}$. Consider $O_{D'}$ and $O_{D}$ as vector bundles on ${\textrm{Hilb}_{S}}^{' d}$ and $X^{(d)}$. Let $r$ be the natural projection $V\xrightarrow{r}X^{(d)}$. Then we have ${\textrm{det}}(O_{D'})\simeq r^{*}({\textrm{det}}(O_{D}))$ over $V$.
The identification of $V$ with an open subscheme of ${\textrm{Hilb}_{S}}^{' d}$ is defined as follows. A point of $V$ corresponds to a pair $(D,s)$ where $D$ is a closed subscheme of $X$ of length $d$ and $s\in H^{0}(L_{D})$. Since the surface $S$ is the total space of the line bundle $L$, we can embed $D$ into $S$ using the section $s$. This defines a morphism $V\rightarrow {\textrm{Hilb}_{S}}^{' d}$. Now let us consider: $$\begin{CD}
X\times X^{(d)}\\
@V{p}VV\\
X^{(d)}\\
\end{CD}$$ In Proposition \[an open subscheme of HS\] we proved that $V$ can be identified with an open subscheme of ${\textrm{Hilb}_{S}}^{d}$ contained in ${\textrm{Hilb}_{S}}^{' d}$. This proves $(1)$\
For part $(2)$, notice that over $V$, the subscheme $D$ of $S$ comes from a subscheme of $X$, hence giving a flag of $D$ as a subscheme of $S$ is the same thing as giving a flag of $D$ as a subscheme of $X$, and we know that $Flag^{d}_{X}\simeq X^{d}$. This also proves part $(3)$\
Part $(4)$ follows almost immediately from the fact that over $V$, the subscheme $D'$ of $S$ comes from the subscheme $D$ of $X$. Hence the assertion follows.
Jacobian of smooth curves and Fourier-Mukai transform
-----------------------------------------------------
In this section we briefly review some well-known facts about the Jacobian of a smooth curve. Let $X$ be a smooth projective curve of genus $g$ with a fixed point $x_{0}$. Then the Jacobian ${\textrm{Pic}^{d}}$ of $X$ parameterizes degree $d$ line bundles on $X$ with a trivialization at $x_{0}$. It is well known that we have the Abel-Jacobi map: $$X^{(d)}\xrightarrow{A} {\textrm{Pic}^{d}}$$ which is smooth when $d\geq 2g$. The following summarizes the properties we need:
\[properties of abel-jacobian map\] Consider the following diagram: $$\xymatrix{
X\times X^{d} \ar[r]^{h'} \ar[d]^{q''} & X\times X^{(d)} \ar[r]^{A'} \ar[d]^{q'} & X\times{\textrm{Pic}^{d}}\ar[d]^{q}\\
X^{d} \ar[r]^{h} \ar@/_2pc/[rr]^{f} & X^{(d)} \ar[r]^{A} & {\textrm{Pic}^{d}}\\
}$$ Let $\mathcal{L}$ be the universal line bundle on $X\times{\textrm{Pic}^{d}}$, then we have:
$A'^{*}(\mathcal{L})\simeq O_{X\times X^{(d)}}(D)\otimes q'^{*}(O(-x_{0})^{(d)})$ where $D$ is the universal divisor on $X\times X^{(d)}$.
$X^{(d)}$ is a projective bundle over ${\textrm{Pic}^{d}}$ associated with the vector bundle $q_{*}(\mathcal{L})$, and $O(1)_{X^{(d)}|{\textrm{Pic}^{d}}}\simeq O(x_{0})^{(d)}$.
Let $\Delta_{ij}$ be the divisor on $X^{d}$ given by $x_{i}=x_{j}$. Then $h^{*}({\textrm{det}}(q'_{*}(O_{D}))^{-1})\simeq O(\sum_{i<j}\Delta_{ij})$.
Let $\Theta$ be the theta divisor on ${\textrm{Pic}^{d}}$. We have: $$f^{*}O(\Theta)\simeq \Omega_{X}((d-g+1)x_{0})^{\boxtimes d}\otimes O(-\sum_{i<j}\Delta_{ij})$$
The dualizing sheaf $\omega_{X^{d}|X^{(d)}}\simeq O(\sum_{i<j}\Delta_{ij})$.
For part $(1)$, since $X^{(d)}$ parameterizes degree $d$ effective divisors on $X$, we have a universal divisor $D\hookrightarrow X\times X^{(d)}$ and the corresponding line bundle $O_{X\times X^{(d)}}(D)$. Let $\mathcal{M}$ be the line bundle on $X^{(d)}$ given by $O_{X\times X^{(d)}}(D)|_{x_{0}\times X^{(d)}}$. Then by pulling back to $X^{d}$, it is easy to see that $\mathcal{M}\simeq O(x_{0})^{(d)}$. The morphism $X^{(d)}\xrightarrow{A} {\textrm{Pic}^{d}}$ is given by $$[D]\in X^{(d)}\rightarrow O_{X}(D)\otimes(\mathcal{M}^{-1}|_{[D]})$$ Hence $A'^{*}(\mathcal{L})\simeq O_{X\times X^{(d)}}(D)\otimes q'^{*}(O(-x_{0})^{(d)})$. This establishes $(1)$.\
For $(2)$, notice that by $(1)$, we have a natural injection of line bundles: $$q'^{*}(O(-x_{0})^{(d)})\hookrightarrow A'^{*}(\mathcal{L})\simeq O_{X\times X^{(d)}}(D)\otimes q'^{*}(O(-x_{0})^{(d)})$$ Hence $O(-x_{0})^{(d)}$ is naturally a subbundle of $q'_{*}(A'^{*}(\mathcal{L}))$ on $X^{(d)}$: $$O(-x_{0})^{(d)}\hookrightarrow q'_{*}(A'^{*}(\mathcal{L}))$$ The claim follows easily from this.\
For $(3)$, notice that we have the following short exact sequences on $X\times X^{d}$ for each $i$: $$0\rightarrow O_{X}(-\Delta_{1}-\cdots-\Delta_{i})\rightarrow O_{X}(-\Delta_{1}-\cdots-\Delta_{i-1})\rightarrow O_{\Delta_{i}}(-\Delta_{1}-\cdots-\Delta_{i-1})\rightarrow 0$$ From this it is easy to see that $$h^{*}(\textrm{det}(q'_{*}(O_{D}))^{-1})\simeq \bigotimes_{i}q''_{*}(O_{\Delta_{i}}(\Delta_{1}+\cdots+\Delta_{i-1}))\simeq O(\sum_{i<j}\Delta_{ij})$$ For $(4)$, recall that in our setup, the theta divisor can be identified with: $$\textrm{det}(q_{*}(\mathcal{L}(-(d-g+1)x_{0})))^{-1}$$ where $q_{*}$ denotes the derived pushforward. Hence from part $(1)$ we see that $f^{*}(O(\Theta))$ is given by: $$\textrm{det}(q''_{*}(O_{X}(\Delta_{1}+\cdots+\Delta_{d}-(d-g+1)x_{0})))^{-1}$$ We have the exact sequence $$0\rightarrow O_{X}(-(d-g+1)x_{0})\rightarrow O_{X}(D-(d-g+1)x_{0})\rightarrow O_{D}(D-(d-g+1)x_{0})\rightarrow 0$$ where $D$ stands for the divisor $\Delta_{1}+\cdots+\Delta_{d}$ on $X\times X^{d}$. So we have $$\textrm{det}(q''_{*}(O_{X}(\Delta_{1}+\cdots+\Delta_{d}-(d-g+1)x_{0})))^{-1}\simeq\textrm{det}(q''_{*}(O_{D}(D-(d-g+1)x_{0})))^{-1}$$ Notice that for each $i$, we have the exact sequence: $$0\rightarrow O_{X}(\Delta_{1}+\cdots+\Delta_{i-1})\rightarrow O_{X}(\Delta_{1}+\cdots+\Delta_{i})\rightarrow O_{\Delta_{i}}(\Delta_{1}+\cdots+\Delta_{i})\rightarrow 0$$ It is not hard to see that $$\textrm{det}(q''_{*}(O_{D}(D-(d-g+1)x_{0})))^{-1}\simeq\bigotimes_{i}(q''_{*}(O_{\Delta_{i}}(\Delta_{1}+\cdots+\Delta_{i}-(d-g+1)x_{0})))^{-1}$$ Hence the assertion follows.\
For $(5)$, it is well known that the tangent sheaf of $X^{(d)}$ can be identified with $q'_{*}(O_{D}(D))$. Since we have exact sequences $$0\rightarrow O_{X}(\Delta_{1}+\cdots+\Delta_{i-1})\rightarrow O_{X}(\Delta_{1}+\cdots+\Delta_{i})\rightarrow O_{\Delta_{i}}(\Delta_{1}+\cdots+\Delta_{i})\rightarrow 0$$ the assertion follows easily from this.
Recall that if $J=\textrm{Pic}^{0}$ is the Jacobian of $X$, then there exists a Poincaré line bundle $\mathcal{P}$ on $J\times J$. Moreover, it is well-known that $\mathcal{P}$ induces an equivalence of the derived category of $J$ via the Fourier-Mukai transform: $$\mathcal{F}\rightarrow Rp_{1 *}(p_{2}^{*}(\mathcal{F})\otimes\mathcal{P})$$ The inverse is given by: $$\mathcal{G}\rightarrow Rp_{2*}(p_{1}^{*}(\mathcal{G})\otimes\mathcal{P}^{-1})[g]$$ The following lemma is a direct application of the Fourier-Mukai transform:
\[vanishing of pushforward\] Let $K\in D^{b}_{coh}(X^{d})$ be such that for any degree zero line bundle $L$ on $X$ we have $R^{i}\Gamma(K\otimes L^{\boxtimes d})=0$ for $i>p$. Then $Rf_{*}K\in D^{\leq p}_{coh}(Pic^{d})$, where $f$ is the natural morphism $X^{d}\xrightarrow{f} {\textrm{Pic}^{d}}$.
From the construction of $\mathcal{P}$, it is easy to check that the conditions on $K$ imply that the Fourier-Mukai transform of $Rf_{*}(K)$ belongs to $D^{\leq p}({\textrm{Pic}^{d}})$, so the claim follows immediately from the fact that $Rp_{2 *}$ has cohomological dimension $g$, together with the expression for the inverse transform.
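To unwind the first step (a sketch; we identify ${\textrm{Pic}^{d}}$ with $J$ using $x_{0}$, and use that the pullback along $f$ of the degree zero twist by $L\in J$ is, up to the trivializations at $x_{0}$, the line bundle $L^{\boxtimes d}$ on $X^{d}$): by the projection formula, the derived fiber of the Fourier-Mukai transform of $Rf_{*}(K)$ at the point $[L]\in J$ is $$R\Gamma(X^{d},K\otimes L^{\boxtimes d})$$ which by hypothesis is concentrated in degrees $\leq p$; since this holds at every point, the transform lies in $D^{\leq p}({\textrm{Pic}^{d}})$. Applying the inverse transform then costs at most $g$ cohomological degrees from $Rp_{2*}$, which is exactly offset by the shift $[g]$, giving $Rf_{*}(K)\in D^{\leq p}_{coh}({\textrm{Pic}^{d}})$.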
Moduli of rank $2$ Higgs bundles on $X$
=======================================
Most of the material in subsections $3.1$ and $3.2$ is well known and has already appeared in one form or another in our previous work $[13]$; we include it here for the reader's convenience. On the other hand, subsection $3.3$ is of vital importance for the construction of the Poincaré sheaf, so the reader may wish to jump directly to $3.3$ and return to $3.1$ and $3.2$ when necessary.
Generalities on the geometry of the moduli of Higgs bundles
-----------------------------------------------------------
In this subsection we review some general definitions and results about the geometry of the stack of Higgs bundles on $X$. We fix an integer $l\geq 2g-1$ and a line bundle $L$ on $X$ of degree $l$.
A rank $2$ $L$-valued Higgs bundle on $X$ is a pair $(E,\phi)$ where $E$ is a rank $2$ vector bundle and the Higgs field $\phi$ is a morphism of vector bundles $E\xrightarrow{\phi} E\otimes L$.
We denote the stack of Higgs bundles by ${\mathcal{H}iggs}$. It decomposes as a disjoint union: $${\mathcal{H}iggs}=\coprod_{k}{\mathcal{H}iggs}^{k}$$ where ${\mathcal{H}iggs}^{k}$ is the stack of Higgs bundles of degree $k$. It is well-known that ${\mathcal{H}iggs}$ is an algebraic stack locally of finite presentation.
A Higgs bundle $(E,\phi)$ is called semistable if the following condition holds: for any line subbundle $E_{0}\subseteq E$ that is preserved by the Higgs field $\phi$, in the sense that $\phi(E_{0})\subseteq E_{0}\otimes L$, we have $\textrm{deg}(E_{0})\leq \dfrac{\textrm{deg}(E)}{2}$.
Semistable Higgs bundles form an open substack, which we denote by ${\mathcal{H}iggs}_{ss}$.
\[smoothness of semistable higgs bundles\] ${\mathcal{H}iggs}_{ss}$ is smooth of dimension $4l$.
\[generically regular Higgs field\] A Higgs field $\phi$ is called generically regular if $\phi$ is a regular element of $\mathfrak{gl}_{2}(k(\eta))$ after trivializing $E$ and $L$ at the generic point $\eta$ of $X$. Higgs bundles with generically regular Higgs field form an open substack of ${\mathcal{H}iggs}$, which we denote by $\widetilde{{\mathcal{H}iggs}}$.
Hitchin fibration and spectral curve
------------------------------------
In this subsection we review certain properties of the Hitchin fibration that will be used later in the paper. Part $(2)$ of Proposition \[properties of Hitchin fibration\] will be used frequently. Lemma \[resolution\] will be used in Section $4$ to construct resolutions of sheaves. Most of the properties are well known (see $[12]$ and the appendix of $[3]$ for a summary).
The Hitchin base $H$ is the affine space given by: $$H^{0}(X,L)\times H^{0}(X,L^{2})$$
It is well-known that we have the Hitchin fibration: $${\mathcal{H}iggs}\xrightarrow{h} H \qquad (E,\phi)\rightarrow (\textrm{tr}(\phi),\textrm{det}(\phi))$$ The following proposition summarizes its properties:
\[properties of Hitchin fibration\]
$H$ is an affine space of dimension $3l-2(g-1)$.
$h$ is a relative complete intersection morphism with fiber dimension $l+2(g-1)$.
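Both numbers can be checked by Riemann-Roch; since $\deg L=l\geq 2g-1>2g-2$, $$\dim H=h^{0}(L)+h^{0}(L^{2})=(l+1-g)+(2l+1-g)=3l-2(g-1)$$ and over the semistable locus the fiber dimension is $$\dim{\mathcal{H}iggs}_{ss}-\dim H=4l-(3l-2(g-1))=l+2(g-1)$$ consistent with Proposition \[smoothness of semistable higgs bundles\].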
Let $S$ be the surface defined by the total space of the line bundle $L$. We have: $$\xymatrix{
S=\textrm{Spec}(\textrm{Sym}(L^{-1})) \ar[d]^{q}\\
X\\
}$$ Any Higgs bundle can be naturally viewed as a coherent sheaf on the surface $S$, and we have the following lemma:
\[resolution\] Let $(E,\phi)$ be a Higgs bundle. There is a locally free resolution of $E$ as a coherent sheaf on $S$: $$0\rightarrow q^{*}(E)\otimes q^{*}(L^{-1})\rightarrow q^{*}(E)\rightarrow E\rightarrow 0$$
By the definition of $S$, there is a tautological section $T$ of $q^{*}(L)$ on $S$, and the morphism $$q^{*}(E)\otimes q^{*}(L^{-1})\rightarrow q^{*}(E)$$ is given by $T-q^{*}(\phi)$. It is easy to see that this is a resolution of $E$ as an $O_{S}$-module.
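As a consistency check (this is the standard spectral description of Higgs bundles): the resolution shows that the support of $E$ in $S$ is cut out by $$\det(T-q^{*}(\phi))=T^{2}-\textrm{tr}(\phi)\,T+\textrm{det}(\phi)$$ which is precisely the equation of the spectral curve attached to the Hitchin invariants $(\textrm{tr}(\phi),\textrm{det}(\phi))$.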
The following simple lemma will be used later.
\[a property of semistable higgs bundle\] Let $(E,\phi)$ be a semistable Higgs bundle such that the underlying vector bundle $E$ is not semistable. Then $(E,\phi)\in\widetilde{{\mathcal{H}iggs}}$.
Because $E$ is not semistable as a vector bundle, we can choose a line subbundle $\mathcal{M}\hookrightarrow E$ such that $$\textrm{deg}(\mathcal{M})>\dfrac{\textrm{deg}(E)}{2}$$ We may assume $\textrm{deg}(E)$ is sufficiently large so that we have a global section $O_{X}\xrightarrow{s}\mathcal{M}$. Since $(E,\phi)$ is semistable as a Higgs bundle, the Higgs field cannot preserve $\mathcal{M}$, so $s$ and $\phi(s)$ are linearly independent at the generic point of $X$, and hence $(E,\phi)\in\widetilde{{\mathcal{H}iggs}}$.
\[description of the dual\] Let $(E,\phi)$ be a rank two Higgs bundle on $X$, viewed as a coherent sheaf on the corresponding spectral curve $C$. Then ${\mathcal{H}om}_{O_{C}}(E,O_{C})$ is isomorphic to $\check{E}\otimes L^{-1}$ as a Higgs bundle on $X$, with the Higgs field induced from $\phi$.
Higgs bundles and Hilbert schemes
---------------------------------
In this subsection we will define and study the geometric properties of the stacks mentioned in subsection $1.4$. In particular we will construct Diagram \[fundamental diagram\] mentioned in subsection $1.4$: $$\xymatrix{
& \mathcal{B} \ar[r] \ar[d]^{\pi} & {\textrm{Hilb}_{S}}^{d}\\
\mathcal{Z} \ar[r] & \mathcal{Y} & \\
}$$ where $\mathcal{Y}$ is a smooth stack, $\mathcal{Z}$ is a smooth closed substack of $\mathcal{Y}$ and $\mathcal{B}$ is the blowup of $\mathcal{Y}$ along $\mathcal{Z}$. The construction is given in Proposition \[resolve the rational map\]. After that we will give some dimension estimates and a few further properties of the fundamental diagram which will be used in section $4$ and section $5$.\
Let $X$ be a smooth projective curve of genus $g$. Fix a point $x_{0}$ in $X$. Let $l$ and $m$ be integers such that $m\geq 4g$, $l\geq 2g-1$ and $m>l$. Let $L$ be a fixed line bundle on $X$ of degree $l$. First let us define the main objects of study.
Let $\mathcal{X}$ be the stack classifying the data $(E,s,\sigma)$ where $E$ is a rank two vector bundle on $X$ of degree $m$ which is globally generated and satisfies $H^{1}(E)=0$ and $H^{0}(\check{E}\otimes L)=0$, $s$ is a nonzero global section of $E$ such that the quotient $M=E/O_{X}$ is a line bundle, and $\sigma$ is a trivialization of $M_{x_{0}}$.
\[definition of Y\] Let $\mathcal{Y}$ be the stack classifying the data $(E,\varphi,s,\sigma)$ where $(E,s,\sigma)$ satisfies the same conditions as in the definition of $\mathcal{X}$, and $\varphi\in \textrm{Hom}(E,M\otimes L)$.
Let ${\mathcal{H}iggs}'^{m}$ be the moduli stack classifying the data $(E,\phi,s,\sigma)$ where $(E,s,\sigma)$ satisfies the same conditions as in the definition of $\mathcal{X}$, and $(E,\phi)$ is a Higgs bundle.
We have the following lemma:
\[elementary properties of Y\]
$\mathcal{Y}$ is naturally a vector bundle over $\mathcal{X}$.
${\mathcal{H}iggs}'^{m}$ is a closed substack of $\mathcal{Y}$.
$\mathcal{Y}$ is smooth of dimension $2l+2m+1$.
The natural morphism ${\mathcal{H}iggs}'^{m}\rightarrow{\mathcal{H}iggs}$ is smooth of relative dimension $m+2(1-g)+1$.
${\mathcal{H}iggs}'^{m}$ has dimension $4l+m+2(1-g)+1$.
For $(1)$, notice that there is a natural morphism $\mathcal{Y}\rightarrow\mathcal{X}$ given by: $$(E,\varphi,s,\sigma)\rightarrow (E,s,\sigma)$$ The conditions on $l$ and $m$ guarantee that $H^{1}(\check{E}\otimes M\otimes L)=0$. The assertion follows from this.\
For $(2)$, observe that the morphism $E\rightarrow M$ induces a natural morphism: $$\textrm{Hom}(E,E\otimes L)\rightarrow \textrm{Hom}(E,M\otimes L)$$ Hence we have a morphism ${\mathcal{H}iggs}'^{m}\rightarrow \mathcal{Y}$ given by: $$(E,\phi,s,\sigma)\rightarrow (E,\varphi,s,\sigma)$$ where $\varphi$ is the image of $\phi$ in $\textrm{Hom}(E,M\otimes L)$. We have an exact sequence: $$0\rightarrow \check{E}\otimes L\rightarrow \check{E}\otimes E\otimes L\rightarrow \check{E}\otimes M\otimes L\rightarrow 0$$ Now consider: $$\xymatrix{
X\times\mathcal{Y} \ar[d]^{f}\\
\mathcal{Y}\\
}$$ The assumption $H^{0}(\check{E}\otimes L)=0$ implies that $R^{1}f_{*}(\check{E}\otimes L)$ is a vector bundle on $\mathcal{Y}$. From the definition of $\mathcal{Y}$, $\varphi$ induces a global section of $f_{*}(\check{E}\otimes M\otimes L)$: $$O_{\mathcal{Y}}\xrightarrow{\varphi} f_{*}(\check{E}\otimes M\otimes L)$$ and it is easy to see that ${\mathcal{H}iggs}'^{m}$ is the vanishing locus of the morphism: $$O_{\mathcal{Y}}\xrightarrow{\varphi}f_{*}(\check{E}\otimes M\otimes L)\rightarrow R^{1}f_{*}(\check{E}\otimes L)$$ So $(2)$ follows from this.\
The rest of the lemma follows easily from the fact that the stack of rank two vector bundles on $X$ is smooth of dimension $4(g-1)$, together with our assumption that $H^{1}(E)=0$.
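To make the dimension counts explicit (a sketch; we use $H^{1}(E)=0$ and $H^{1}(\check{E}\otimes M\otimes L)=0$ throughout): the stack of rank two bundles contributes $4(g-1)$, the section $s$ contributes $h^{0}(E)=\chi(E)=m+2(1-g)$, and the trivialization $\sigma$ contributes $1$, so $$\dim\mathcal{X}=4(g-1)+m+2(1-g)+1=m+2g-1$$ By $(1)$, $\mathcal{Y}$ is a vector bundle over $\mathcal{X}$ of rank $\chi(\check{E}\otimes M\otimes L)=(m+2l)+2(1-g)$, hence $$\dim\mathcal{Y}=(m+2g-1)+(m+2l)+2(1-g)=2l+2m+1$$ matching $(3)$; similarly, the relative dimension in $(4)$ is $h^{0}(E)+1=m+2(1-g)+1$.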
Next let us consider: $$\xymatrix{
X\times\mathcal{Y} \ar[d]^{f}\\
\mathcal{Y}\\
}$$ By the definition of $\mathcal{Y}$, we have the following morphism of vector bundles on $X\times\mathcal{Y}$: $$E\xrightarrow{\varphi} M\otimes L$$ This induces a section of $M\otimes L$ via: $$O_{X}\xrightarrow{s} E\xrightarrow{\varphi} M\otimes L$$ Pushing forward, we get: $$O\rightarrow f_{*}(M\otimes L)$$ on $\mathcal{Y}$.
\[definition of Z\] Let $\mathcal{Z}$ be the vanishing locus of the morphism $O\rightarrow f_{*}(M\otimes L)$ on $\mathcal{Y}$. Then $\mathcal{Z}$ is naturally a vector bundle over $\mathcal{X}$ and it is a subbundle of $\mathcal{Y}$. $\mathcal{Z}$ has dimension $m+l+g$.
Consider: $$\xymatrix{
X\times\mathcal{Y} \ar[d]^{f}\\
\mathcal{Y}\\
}$$ The natural exact sequence: $$0\rightarrow O_{X}\rightarrow E\rightarrow M\rightarrow 0$$ induces: $$0\rightarrow\check{M}\rightarrow\check{E}\rightarrow O_{X}\rightarrow 0$$ Hence we get: $$0\rightarrow L\rightarrow\check{E}\otimes M\otimes L\rightarrow M\otimes L\rightarrow 0$$ Now $\varphi$ can be identified with a section of $$f_{*}(\check{E}\otimes M\otimes L)$$ and the section $O\rightarrow f_{*}(M\otimes L)$ is given by the image of $\varphi$ in $f_{*}(M\otimes L)$. From this it is easy to see that $\mathcal{Z}$ can be identified with the vector bundle over $\mathcal{X}$ given by $g_{*}(L)$ where $g$ is the morphism: $$\xymatrix{
X\times\mathcal{X} \ar[d]^{g}\\
\mathcal{X}\\
}$$
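The dimension count in the lemma follows from this description: since $\deg L=l>2g-2$, the vector bundle $g_{*}(L)$ has rank $h^{0}(L)=l+1-g$, and one checks from the definition of $\mathcal{X}$ (using $H^{1}(E)=0$) that $\dim\mathcal{X}=m+2g-1$, so $$\dim\mathcal{Z}=(m+2g-1)+(l+1-g)=m+l+g$$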
Set $d=l+m$. The following observation is central to the construction of the Poincaré sheaf:
\[resolve the rational map\] Let $S$ be the smooth surface given by the total space of the line bundle $L$, and $\mathcal{B}$ be the blowup of $\mathcal{Y}$ along the closed substack $\mathcal{Z}$. Then:
There exists a natural morphism $\mathcal{Y}\setminus\mathcal{Z}\xrightarrow{p} {\textrm{Hilb}_{S}}^{d}$ such that its image is contained in the open subscheme $V$ of ${\textrm{Hilb}_{S}}^{d}$ defined in Corollary \[Hilbert scheme of P1 and S\].
The morphism $\mathcal{Y}\setminus\mathcal{Z}\xrightarrow{p} {\textrm{Hilb}_{S}}^{d}$ extends to a morphism of stacks: $\mathcal{B}\rightarrow {\textrm{Hilb}_{S}}^{d}$ such that the image of $\mathcal{B}$ is contained in the open subscheme $V$.
The morphism $\mathcal{Y}\setminus\mathcal{Z}\xrightarrow{p} V\subseteq{\textrm{Hilb}_{S}}^{d}$ is smooth.
Hence we have the following diagram: $$\xymatrix{
\mathcal{B} \ar[d]^{\pi} \ar[r] & {\textrm{Hilb}_{S}}^{d}\\
\mathcal{Y}
}$$
For $(1)$, consider: $$\xymatrix{
X\times\mathcal{Y} \ar[d]^{f}\\
\mathcal{Y}\\
}$$ By Lemma \[definition of Z\], we have a morphism $O_{\mathcal{Y}}\rightarrow f_{*}(M\otimes L)$ on $\mathcal{Y}$ which is nonvanishing over $\mathcal{Y}\setminus\mathcal{Z}$, hence we get a nonvanishing global section $t$ of $M\otimes L$ on $X\times(\mathcal{Y}\setminus\mathcal{Z})$. If we denote the vanishing locus of $t$ by $D$, then $D$ is a closed substack of $X\times(\mathcal{Y}\setminus\mathcal{Z})$ and it is a family of finite subschemes of $X$ of length $d$ over $\mathcal{Y}\setminus\mathcal{Z}$ (note that $\textrm{deg}(M\otimes L)=m+l=d$). Notice that $t$ is given by the composition: $$O_{X}\xrightarrow{s} E\rightarrow M\otimes L$$ Restricting these morphisms of vector bundles to $D$, we see that the composition: $$O_{D}\xrightarrow{s_{D}} E\otimes O_{D}\rightarrow M\otimes L\otimes O_{D}$$ is zero by the definition of $D$. Restricting the exact sequence of vector bundles: $$0\rightarrow O_{X}\rightarrow E\rightarrow M\rightarrow 0$$ to $D$, we get: $$0\rightarrow O_{D}\rightarrow E\otimes O_{D}\rightarrow M\otimes O_{D}\rightarrow 0$$ From this we see that the morphism: $$E\otimes O_{D}\rightarrow M\otimes L\otimes O_{D}$$ factors as: $$E\otimes O_{D}\rightarrow M\otimes O_{D}\rightarrow M\otimes L\otimes O_{D}$$ This gives a section $t_{D}\in H^{0}(L\otimes O_{D})$, and we embed $D$ into $S$ via $t_{D}$. This defines the desired morphism $\mathcal{Y}\setminus\mathcal{Z}\rightarrow {\textrm{Hilb}_{S}}^{d}$.\
For $(2)$, consider: $$\xymatrix{
X\times\mathcal{B} \ar[r]^{\pi'} \ar[d]^{f'} & X\times\mathcal{Y} \ar[d]^{f}\\
\mathcal{B} \ar[r]^{\pi} & \mathcal{Y}\\
}$$ Now applying Corollary \[bl resolve nonflatness\], we see that the section: $$O\rightarrow M\otimes L$$ extends to: $$O(E)\rightarrow \pi'^{*}(M\otimes L)$$ on $X\times\mathcal{B}$, and its vanishing locus $D'$ defines a family of finite subschemes of length $d$ of $X$ over $\mathcal{B}$. Since $D'$ is a subscheme of the pullback of $D$ to $X\times\mathcal{B}$, the section $t_{D}\in H^{0}(L\otimes O_{D})$ induces a section in $H^{0}(L\otimes O_{D'})$, and we embed $D'$ into $S$ using this section. This gives a morphism $\mathcal{B}\rightarrow {\textrm{Hilb}_{S}}^{d}$ extending the morphism $\mathcal{Y}\setminus\mathcal{Z}\rightarrow {\textrm{Hilb}_{S}}^{d}$ from $(1)$.\
For $(3)$, we claim that $\mathcal{Y}\setminus\mathcal{Z}\rightarrow {\textrm{Hilb}_{S}}^{d}$ is a $G_{m}$-torsor over its image. In fact, let $D$ be the vanishing locus of the section $t$ of $M\otimes L$ as in $(1)$, so $M\otimes L\simeq O_{X}(D)$. By the definition of $\mathcal{Y}$, we have morphisms $$E\rightarrow M\otimes L \qquad 0\rightarrow O_{X}\rightarrow E\rightarrow M\rightarrow 0$$ on $X\times\mathcal{Y}$. They induce $$E\otimes M^{-1}\rightarrow L \qquad 0\rightarrow M^{-1}\rightarrow E\otimes M^{-1}\rightarrow O_{X}\rightarrow 0$$ By the definition of $D$, the composition $$M^{-1}\rightarrow E\otimes M^{-1}\rightarrow L\rightarrow L\otimes O_{D}$$ is zero. Hence $$E\otimes M^{-1}\rightarrow L\rightarrow L\otimes O_{D}$$ factors as $$E\otimes M^{-1}\rightarrow O_{X}\rightarrow L\otimes O_{D}$$ Since $$\label{exact sequence}
0\rightarrow M^{-1}\rightarrow E\otimes M^{-1}\rightarrow O_{X}\rightarrow 0$$ is an exact sequence, so we get the following morphism of exact sequences: $$\label{commu}
\xymatrix{
0 \ar[r] & M^{-1} \ar[r] \ar[d] & E\otimes M^{-1} \ar[r] \ar[d] & O_{X} \ar[r] \ar[d] & 0\\
0 \ar[r] & L(-D) \ar[r] & L \ar[r] & L\otimes O_{D} \ar[r] & 0\\
}$$ The exact sequence \[exact sequence\] induces a morphism $H^{0}(O_{X})\rightarrow H^{1}(M^{-1})$. The extension class of $E$ in $\textrm{Ext}^{1}(O_{X},M^{-1})=H^{1}(M^{-1})$ can be identified with the image of $1\in H^{0}(O_{X})$ in $H^{1}(M^{-1})$. Because of the commutative diagram \[commu\], it can also be identified with the image of $t_{D}\in H^{0}(L\otimes O_{D})$ under the morphism $$H^{0}(L\otimes O_{D})\rightarrow H^{1}(L(-D))=H^{1}(M^{-1})$$ since $M\otimes L\simeq O_{X}(D)$. Hence given any point $(D,t_{D})\in V$ which lies in the image of $\mathcal{Y}\setminus\mathcal{Z}$, if we choose an isomorphism $M\simeq L^{-1}(D)$, we can recover the corresponding point $(E,\varphi,s,\sigma)$ of $\mathcal{Y}$ by pulling back the exact sequence via $t_{D}$: $$\xymatrix{
0 \ar[r] & L(-D)\simeq M^{-1} \ar[d] \ar[r] & E\otimes L(-D)\simeq E\otimes M^{-1} \ar[d] \ar[r] & O_{X} \ar[d]^{t_{D}} \ar[r] & 0\\
0\ar[r] & L(-D) \ar[r] & L \ar[r] & L\otimes O_{D} \ar[r] & 0\\
}$$ Hence the assertion follows from this.\
In the next lemma we will give a description of the preimage of $\textrm{Hilb}^{d}_{C}\subseteq{\textrm{Hilb}_{S}}^{d}$ under the morphism $p$, together with some dimension estimates:
\[further props of B\]
Fix a spectral curve $C$. Consider $$\mathcal{Y}\backslash\mathcal{Z}\xrightarrow{p}{\textrm{Hilb}_{S}}^{d}$$ Then $p^{-1}(\textrm{Hilb}_{C}^{d})\cap(\mathcal{Y}\backslash\mathcal{Z})={\mathcal{H}iggs}'^{m}_{C}$, where ${\mathcal{H}iggs}'^{m}_{C}$ stands for the closed substack of ${\mathcal{H}iggs}'^{m}$ consisting of Higgs bundles with spectral curve $C$.
Consider the following diagram $$\xymatrix{
\mathcal{B}\times{\mathcal{H}iggs}\ar[d]^{\pi} \ar[r]^{p} & {\textrm{Hilb}_{S}}^{d}\times{\mathcal{H}iggs}\\
\mathcal{Y}\times{\mathcal{H}iggs}\\
}$$ The intersection of $\mathcal{Z}\times{\mathcal{H}iggs}$ with $\pi(p^{-1}(\textrm{Hilb}^{d}_{\mathcal{C}|H}\times_{H}{\mathcal{H}iggs}))$ has dimension at most $m+3l+2g-1$.
For $(1)$, first notice that we always have $p({\mathcal{H}iggs}'^{m}_{C})\subseteq\textrm{Hilb}^{d}_{C}$ by the definition of $p$. To prove the reverse inclusion, take a point $(D,t_{D})\in V$ where $D$ is a degree $d$ divisor on $X$ and $t_{D}\in H^{0}(L\otimes O_{D})$. Then we can recover its preimage in the following way. Because $D$ is a closed subscheme of $S$, we have an exact sequence: $$0\rightarrow K\rightarrow O_{X}\oplus L^{-1}\rightarrow O_{D}\rightarrow 0$$ $K$ sits inside the exact sequence: $$0\rightarrow O_{X}(-D)\rightarrow K\rightarrow L^{-1}\rightarrow 0$$ We can recover the preimage of $(D,t_{D})$ by setting $M=L^{-1}(D)$, $E=K(D)$, and $E\rightarrow M\otimes L$ corresponds to $$K\rightarrow O_{X}\oplus L^{-1}\rightarrow O_{X}$$ where the last arrow is the projection onto the first factor. Hence if $(D,t_{D})$ is a point on $\textrm{Hilb}^{d}_{C}$, then $O_{C}=O_{X}\oplus L^{-1}$, and we have a morphism of exact sequences: $$\xymatrix{
0\ar[r] & K \ar[r] \ar[d]^{\phi} & O_{X}\oplus L^{-1} \ar[r] \ar[d] & O_{D} \ar[r] \ar[d] & 0\\
0\ar[r] & K\otimes L \ar[r] & L\oplus O_{X} \ar[r] & L_{D} \ar[r] & 0\\
}$$ Hence $K\rightarrow O_{X}\oplus L^{-1}\rightarrow O_{X}$ can be identified with $$K\xrightarrow{\phi}K\otimes L\rightarrow L\oplus O_{X}\rightarrow O_{X}$$ Thus the corresponding $E\rightarrow M\otimes L$ comes from a morphism $E\rightarrow E\otimes L$, which proves that $p^{-1}(D,t_{D})\in{\mathcal{H}iggs}'^{m}_{C}$.\
For $(2)$, denote the intersection $$\mathcal{Z}\times{\mathcal{H}iggs}\cap \pi(p^{-1}(\textrm{Hilb}^{d}_{\mathcal{C}|H}\times_{H}{\mathcal{H}iggs}))$$ by $\mathcal{W}$. Let $(z,F)$ be a point in $\mathcal{W}$ and $b$ a point in $\mathcal{B}$ lying over $z$ such that $p(b,F)\in\textrm{Hilb}^{d}_{\mathcal{C}|H}\times_{H}{\mathcal{H}iggs}$. From Lemma \[definition of Z\] and the proof of part $(2)$ of Proposition \[resolve the rational map\], we see that each point $z\in\mathcal{Z}$ determines a global section $t\in H^{0}(L)$. Moreover, the image of $b$ in $V$ is a finite subscheme $D$ of length $d$ of $X$, and $t$ induces a section in $H^{0}(L\otimes O_{D})$ which gives $D$ the structure of a closed subscheme of $S$. Hence $D$ lies inside the image of the section $X\xrightarrow{t} S$. Let $C$ be the spectral curve of $F$. Because $p(b,F)$ lies in $\textrm{Hilb}^{d}_{\mathcal{C}|H}\times_{H}{\mathcal{H}iggs}$, we see that $D$ is also a closed subscheme of $C$, hence $D\subseteq C\cap X$ where $X$ is viewed as a curve in $S$ via the section $t$. If $X$ is not a component of $C$, then it is easy to see that $X\cap C$ has length only $2l$, but $D$ has length $d=m+l>2l$ since we required $m>l$ at the beginning of this subsection. Hence $X$ must be a component of $C$. This in turn implies that the fibers of the projection $\mathcal{W}\rightarrow\mathcal{Z}$ have dimension at most $l+1-g+l+2(g-1)=2l+g-1$. The assertion now follows from Lemma \[definition of Z\].\
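For the reader's convenience, the final estimate is simply the sum of the two ingredients just obtained: $\dim\mathcal{Z}=m+l+g$ by Lemma \[definition of Z\], and the fibers of $\mathcal{W}\rightarrow\mathcal{Z}$ have dimension at most $2l+g-1$, so $$\dim\mathcal{W}\leq (m+l+g)+(2l+g-1)=m+3l+2g-1$$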
For the purposes of Sections $4$ and $5$, let us also note the following lemma:
\[factorization of tau\]
There exists a smooth morphism $\mathcal{Y}\rightarrow {\textrm{Pic}^{d}}$ and a regular embedding $$\mathcal{B}\hookrightarrow\mathcal{Y}\times_{{\textrm{Pic}^{d}}}X^{(d)}$$
Consider $({\mathcal{H}iggs}'^{m}\backslash\mathcal{Z})\subseteq(\mathcal{Y}\backslash\mathcal{Z})$. We have the following commutative diagram $$\xymatrix{
{\mathcal{H}iggs}'^{m}\backslash\mathcal{Z} \ar[r]^{p} \ar[d]^{u} & \textrm{Hilb}^{d}_{\mathcal{C}|H} \ar[d]^{v}\\
{\mathcal{H}iggs}\ar[r]^{\tau} & {\mathcal{H}iggs}}$$ where $u$ is the natural projection, $v$ sends $D$ to $\check{I}_{D}={\mathcal{H}om}(I_{D},O_{C})$ and $\tau$ is the involution of ${\mathcal{H}iggs}$ given by $$E\rightarrow \check{E}\otimes\textrm{det}(E)$$ where the Higgs field on $\check{E}\otimes\textrm{det}(E)$ is induced from $E$.
Consider the following morphism $${\mathcal{H}iggs}\xrightarrow{l}{\mathcal{H}iggs}^{reg}\qquad (E,\phi)\rightarrow \lambda^{*}({\textrm{det}}(E))$$ where $\lambda$ is the projection from the spectral curve $C$ to $X$. Then the involution $\tau$ factors as: $${\mathcal{H}iggs}\xrightarrow{(\lambda,\textrm{id})}{\mathcal{H}iggs}^{reg}\times_{H}{\mathcal{H}iggs}\xrightarrow{\textrm{id}\times (\mu_{L}\circ\nu)}{\mathcal{H}iggs}^{reg}\times_{H}{\mathcal{H}iggs}\xrightarrow{\mu}{\mathcal{H}iggs}$$ where $\nu$, $\mu$ and $\mu_{L}$ are defined in Lemma \[equivariance property\].
For part $(1)$, the morphism $\mathcal{Y}\rightarrow {\textrm{Pic}^{d}}$ is given by: $$(E,\varphi,s,\sigma)\rightarrow M\otimes L$$ It is easy to see that this morphism is smooth. Now consider the following diagram: $$\xymatrix{
X\times\mathcal{Y}\ar[r] \ar[d]^{f} & X\times{\textrm{Pic}^{d}}\ar[d]^{h}\\
\mathcal{Y} \ar[r]^{p} & {\textrm{Pic}^{d}}\\
}$$ Let $\mathcal{N}$ be the universal line bundle on $X\times{\textrm{Pic}^{d}}$. Since $X^{(d)}$ is the projective bundle over ${\textrm{Pic}^{d}}$ determined by the vector bundle $h_{*}(\mathcal{N})$, the fiber product $\mathcal{Y}\times_{{\textrm{Pic}^{d}}}X^{(d)}$ is the projective bundle over $\mathcal{Y}$ corresponding to the vector bundle $f_{*}(M\otimes L)$. From Lemma \[definition of Z\], $\mathcal{Z}$ is the vanishing locus of a section of the vector bundle $f_{*}(M\otimes L)$, hence the assertion follows from Proposition \[blowup as regular embedding\].\
For part $(2)$, notice that part $(1)$ of Lemma \[further props of B\] implies that the restriction of $p$ to ${\mathcal{H}iggs}'^{m}\backslash\mathcal{Z}$ sends ${\mathcal{H}iggs}'^{m}\backslash\mathcal{Z}$ to $\textrm{Hilb}^{d}_{\mathcal{C}|H}$. Moreover, from the proof of part $(1)$ of Lemma \[further props of B\] we see that if $$0\rightarrow K=I_{D}\rightarrow O_{X}\oplus L^{-1}\rightarrow O_{D}\rightarrow 0$$ is the image of $(E,\phi,s,\sigma)\in{\mathcal{H}iggs}'^{m}$, then we have $$K\simeq E\otimes M^{-1}\otimes L^{-1}$$ It is easy to see that $\check{I}_{D}\simeq K^{-1}\otimes L^{-1}$, hence the assertion follows from this, using the fact that $\det(E)\simeq M$.\
Part $(3)$ follows immediately from the definitions, using Lemma \[description of the dual\].
A cohomological vanishing result
================================
In this section we keep the same notation and assumptions as in subsection $3.3$. Set $d=m+l$, where the assumptions on $l$ and $m$ are given in the second paragraph of subsection $3.3$. The main result of this section is Lemma \[the main lemma\]. First let us define a certain open substack of ${\mathcal{H}iggs}^{n}$.
\[certain open substack\] Let ${\mathcal{H}iggs}^{(n)}$ denote the open substack of ${\mathcal{H}iggs}^{n}$ where the underlying rank two vector bundle $E'$ satisfies the following conditions:
All line subbundles of $E'$ have degree at most $-m-g+1$.
All quotient line bundles of $E'$ have degree at least $g-d=g-l-m$.
From the definition of semistability of vector bundles, it is easy to check that if we take $n=-2m-2g+2$ or $n=-2m-2g+3$, then every semistable rank two vector bundle of degree $n$ satisfies the two conditions above under our assumptions on $l$. Hence we can choose two consecutive integers $n$ for which ${\mathcal{H}iggs}^{(n)}$ is nonempty. For convenience, let us also reformulate the two conditions in terms of the vanishing of certain cohomology groups:
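To spell out the check for $n=-2m-2g+2$ (the odd case $n=-2m-2g+3$ is the same up to rounding): if $E'$ is a semistable rank two bundle of degree $n$, then every line subbundle has degree at most $$\dfrac{n}{2}=-m-g+1$$ which is the first condition, while every quotient line bundle has degree at least $-m-g+1$, and $$-m-g+1\geq g-l-m \iff l\geq 2g-1$$ which holds by assumption, giving the second condition.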
The two conditions in Construction \[certain open substack\] are equivalent to the following:
$H^{1}(E'\otimes\Omega_{X}((d-g+1)x_{0})\otimes\mathcal{N})=0$ for all degree zero line bundles $\mathcal{N}$ on $X$.
$H^{1}(\check{E'}\otimes L\otimes O_{X}(-(d-g)x_{0})\otimes\mathcal{N})=0$ for all degree zero line bundles $\mathcal{N}$ on $X$.
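Both equivalences follow from Serre duality; we sketch the first. We have $$H^{1}(E'\otimes\Omega_{X}((d-g+1)x_{0})\otimes\mathcal{N})^{\vee}\simeq \textrm{Hom}(E',\mathcal{N}^{-1}(-(d-g+1)x_{0}))$$ and, letting $\mathcal{N}$ vary over degree zero line bundles, a nonzero map on the right exists precisely when $E'$ admits a quotient line bundle of degree at most $-(d-g+1)=g-d-1$, i.e. precisely when the second condition of Construction \[certain open substack\] fails.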
The purpose of this section is to prove the following lemma:
\[the main lemma\] Consider: $$\xymatrix{
\mathcal{B}\times{\mathcal{H}iggs}^{(n)} \ar[r]^{p} \ar[d]^{\pi} & {\textrm{Hilb}_{S}}^{d}\times{\mathcal{H}iggs}^{(n)}\\
\mathcal{Y}\times{\mathcal{H}iggs}^{(n)}
}$$ Then we have:
$R\pi_{*}(p^{*}(Q))\in D^{\leq 0}_{coh}(\mathcal{Y}\times{\mathcal{H}iggs})$ where $p^{*}$ stands for the derived pullback functor.
${\mathcal{R}Hom}(R\pi_{*}(p^{*}(Q)),O)\in D^{\leq d}_{coh}(\mathcal{Y}\times{\mathcal{H}iggs})$.
Here let us remind the reader that $d=l+m$ equals the codimension of ${\mathcal{H}iggs}^{'m}\times_{H}{\mathcal{H}iggs}^{(n)}$ in $\mathcal{Y}\times{\mathcal{H}iggs}^{(n)}$.
Since $Q$ is a direct summand of $\psi_{*}(Q')$ (Lemma \[summand\]), and the image of $\mathcal{B}$ is contained in the open subscheme $V$ of ${\textrm{Hilb}_{S}}^{d}$ defined in Corollary \[Hilbert scheme of P1 and S\] (by part $(2)$ of Proposition \[resolve the rational map\]), we only need to prove the following:
\[reduction of the main lemma\] Set $\textrm{Flag}^{d}_{\mathcal{B}}=\mathcal{B}\times_{V}V'$. Consider: $$\xymatrix{
\textrm{Flag}^{d}_{\mathcal{B}}\times{\mathcal{H}iggs}^{(n)}\ar[r]^{w} \ar[d]^{\psi'} & V'\times{\mathcal{H}iggs}^{(n)} \ar[d]^{\psi}\\
\mathcal{B}\times{\mathcal{H}iggs}^{(n)} \ar[r]^{p} \ar[d]^{\pi} & V\times{\mathcal{H}iggs}^{(n)}\\
\mathcal{Y}\times{\mathcal{H}iggs}^{(n)}
}$$ where $V$ is the open subscheme of ${\textrm{Hilb}_{S}}^{d}$ defined in Corollary \[Hilbert scheme of P1 and S\] and $V'$ is the open subscheme of $\textrm{Flag}^{'d}_{S}$ defined in Corollary \[Hilbert scheme of P1 and S\]. Then we have
$R\pi_{*}(\psi'_{*}(w^{*}(Q')))\in D^{\leq 0}_{coh}(\mathcal{Y}\times{\mathcal{H}iggs})$. Here again all functors are derived.
${\mathcal{R}Hom}(R\pi_{*}(\psi'_{*}(w^{*}(Q'))),O)\in D^{\leq d}_{coh}(\mathcal{Y}\times{\mathcal{H}iggs})$.
The proof of the previous lemma relies on the following:
\[existence of resolutions\]
There exists a Cartesian diagram: $$\xymatrix{
{\textrm{Flag}^{d}_{\mathcal{B}}}\ar[r] \ar[d] & X^{d} \ar[d]\\
\mathcal{B} \ar[r] & X^{(d)}\\
}$$ and a closed embedding ${\textrm{Flag}^{d}_{\mathcal{B}}}\hookrightarrow\mathcal{Y}\times_{{\textrm{Pic}^{d}}} X^{d}$.
There exists a complex of locally free sheaves $K^{*}$ on $\textrm{Flag}^{d}_{\mathcal{B}}\times{\mathcal{H}iggs}^{(n)}$ concentrated in degrees $[-d,0]$ representing $w^{*}(Q')$, where $w$ is the morphism: $$\textrm{Flag}^{d}_{\mathcal{B}}\times{\mathcal{H}iggs}^{(n)}\xrightarrow{w}V'\times{\mathcal{H}iggs}^{(n)}$$ in Lemma \[reduction of the main lemma\]. Moreover, each term $K^{-p}$ is the pullback of a vector bundle $F^{-p}$ on $X^{d}\times{\mathcal{H}iggs}^{(n)}$ via $${\textrm{Flag}^{d}_{\mathcal{B}}}\times{\mathcal{H}iggs}^{(n)}\hookrightarrow(\mathcal{Y}\times_{{\textrm{Pic}^{d}}} X^{d})\times{\mathcal{H}iggs}^{(n)}\rightarrow X^{d}\times{\mathcal{H}iggs}^{(n)}$$
For $(1)$, notice that ${\textrm{Flag}^{d}_{\mathcal{B}}}=\mathcal{B}\times_{V}V'$, so the claim follows from part $(2)$ of Corollary \[Hilbert scheme of P1 and S\] and part $(1)$ of Lemma \[factorization of tau\].\
For part $(2)$, let $E'$ be the universal Higgs bundle on $X\times{\mathcal{H}iggs}^{(n)}$, viewed as a coherent sheaf on the surface $S$. Then by Lemma \[resolution\], we see that: $$(q^{*}(E')\otimes q^{*}(L^{-1})\rightarrow q^{*}(E'))^{\boxtimes d}$$ is a locally free resolution of $E^{'\boxtimes d}$ as coherent sheaves on $S^{d}\times{\mathcal{H}iggs}^{(n)}$. Now consider: $$V'\times{\mathcal{H}iggs}^{(n)}\xrightarrow{\sigma} S^{d}\times{\mathcal{H}iggs}^{(n)}$$ It is proved in $[1]$ that $L\sigma^{*}(E^{'\boxtimes d})\simeq\sigma^{*}(E^{'\boxtimes d})$, hence $$\sigma^{*}(q^{*}(E')\otimes q^{*}(L^{-1})\rightarrow q^{*}(E'))^{\boxtimes d}$$ is a locally free resolution of $\sigma^{*}(E^{'\boxtimes d})$ on $V'\times{\mathcal{H}iggs}^{(n)}$. By construction, $Q'$ is given by $\sigma^{*}(E^{' \boxtimes d})\otimes\psi^{*}({\textrm{det}}(O_{D'}))^{-1}$ where $D'$ is the universal subscheme of $S$ on $S\times V$. Now observe that by Corollary \[Hilbert scheme of P1 and S\], the composition of $$V'\xrightarrow{\sigma} S^{d}\xrightarrow{q} X^{d}$$ is identified with the natural projection $$V'\xrightarrow{r'} X^{d}$$ Hence the composition of $${\textrm{Flag}^{d}_{\mathcal{B}}}\times{\mathcal{H}iggs}^{(n)}\xrightarrow{w}V'\times{\mathcal{H}iggs}^{(n)}\xrightarrow{\sigma} S^{d}\times{\mathcal{H}iggs}^{(n)}\xrightarrow{q} X^{d}\times{\mathcal{H}iggs}^{(n)}$$ is identified with $${\textrm{Flag}^{d}_{\mathcal{B}}}\times{\mathcal{H}iggs}^{(n)}\hookrightarrow(\mathcal{Y}\times_{{\textrm{Pic}^{d}}} X^{d})\times{\mathcal{H}iggs}^{(n)}\rightarrow X^{d}\times{\mathcal{H}iggs}^{(n)}$$ Also, by Corollary \[Hilbert scheme of P1 and S\] and Lemma \[properties of abel-jacobian map\], $\psi^{*}({\textrm{det}}(O_{D'})^{-1})\simeq r'^{*}(O(\sum_{i<j}\Delta_{ij}))$, hence the assertion follows from this.
As a byproduct, we get the following more explicit description of the vector bundle $F^{-p}$ on $X^{d}\times{\mathcal{H}iggs}^{(n)}$:
\[explicit description of resolutions\] Let $E'$ be the universal Higgs bundle on $X\times{\mathcal{H}iggs}^{(n)}$. Then the vector bundle $F^{-p}$ on $X^{d}\times{\mathcal{H}iggs}^{(n)}$ is a direct sum of vector bundles of the form $$\mathcal{F}_{1}\boxtimes\cdots\boxtimes\mathcal{F}_{d}\otimes O(\sum_{i<j}\Delta_{ij})$$ where there exists a subset $I$ of $\{1,2,\cdots,d\}$ consisting of $p$ elements such that $\mathcal{F}_{i}\simeq E'\otimes L^{-1}$ for all $i\in I$, and $\mathcal{F}_{j}\simeq E'$ for all $j$ not in $I$.
We also need the following standard fact to prove Lemma \[reduction of the main lemma\]:
\[spectral sequence\] Let $X\xrightarrow{f}Y$ be a proper morphism of schemes, and $M\in D^{b}_{coh}(X)$ represented by a complex of the form: $$0\rightarrow C^{-n}\rightarrow\cdots\rightarrow C^{0}\rightarrow 0$$ If $R^{i}f_{*}C^{-j}=0$ for $i>j$, then $R^{p}f_{*}M=0$ for $p>0$.
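The proof is the standard first hypercohomology spectral sequence argument; spelled out for the reader's convenience:

```latex
% First spectral sequence of the hyper-derived pushforward applied to C^{\bullet}:
E_{1}^{p,q} = R^{q}f_{*}C^{p} \;\Longrightarrow\; R^{p+q}f_{*}M,
\qquad -n \le p \le 0.
% The hypothesis R^{i}f_{*}C^{-j} = 0 for i > j says precisely that
% E_{1}^{p,q} = 0 whenever p + q > 0. Since the abutment in total degree
% p + q > 0 is built from these vanishing terms, R^{p}f_{*}M = 0 for p > 0.
```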
We are now ready to prove Lemma \[reduction of the main lemma\]. Before entering the proof, let us isolate its main ingredients in the following lemma:
\[baby version of the proof\] Let $A$ be a scheme of finite type over a field $k$, and let $\mathcal{E}$ be a vector bundle of rank $n$ on $A$ with a section $s$. Let $Z$ be the closed subscheme of $A$ given by the vanishing locus of $s$. Assume that $Z$ is a local complete intersection of codimension $n$ in $A$. Let ${Bl_{Z}A}$ be the blowup of $A$ along $Z$, let $\mathbf{P}$ be the projective bundle associated to $\mathcal{E}$, and let $O(1)$ be the relative ample line bundle. Consider the diagram: $$\xymatrix{
{Bl_{Z}A}\ar[rr]^{i} \ar[rd]_{\pi} & & \mathbf{P} \ar[ld]^{\pi'}\\
& A &\\
}$$ Assume that $K\in D^{b}_{coh}({Bl_{Z}A})$ is represented by a complex of the form $$0\rightarrow C^{-r}\rightarrow C^{-(r-1)}\rightarrow\cdots\rightarrow C^{0}\rightarrow 0$$ such that each term $C^{-i}$ is the pullback of a vector bundle $D^{-i}$ on $\mathbf{P}$. If $R^{j}\pi'_{*}(D^{-i}\otimes O(a))=0$ for all $j>i$ and $a\geq 0$, then we have $R\pi_{*}(K)\in D^{\leq 0}_{coh}(A)$.
By Lemma \[pushforward of bl\], the assumptions on $D^{-i}$ imply that $R^{j}\pi_{*}(C^{-i})=0$ for $j>i$, hence the claim follows from Lemma \[spectral sequence\].
Now let us start the proof of Lemma \[reduction of the main lemma\]:
Let us deal with part $(1)$ first. We want to apply Lemma \[baby version of the proof\] with $A=\mathcal{Y}\times{\mathcal{H}iggs}^{(n)}$, ${Bl_{Z}A}=\mathcal{B}\times{\mathcal{H}iggs}^{(n)}$. The argument is divided into the following steps:\
\
Step $1$: $\psi'_{*}(w^{*}(Q'))\in D^{b}_{coh}(\mathcal{B}\times{\mathcal{H}iggs}^{(n)})$ is represented by a complex of locally free sheaves of the form: $$0\rightarrow C^{-d}\rightarrow C^{-(d-1)}\rightarrow\cdots\rightarrow C^{0}\rightarrow 0$$ Indeed, from Lemma \[existence of resolutions\], we see that $w^{*}(Q')$ on $\textrm{Flag}^{d}_{\mathcal{B}}\times{\mathcal{H}iggs}^{(n)}$ is represented by a complex of locally free sheaves of the form $$0\rightarrow K^{-d}\rightarrow K^{-(d-1)}\rightarrow\cdots\rightarrow K^{0}\rightarrow 0$$ Since $\psi'$ is finite flat of degree $d!$, we are done.\
\
Step $2$: Set $\mathbf{P}=\mathcal{Y}\times_{{\textrm{Pic}^{d}}}X^{(d)}$. Then $\mathbf{P}$ is a projective bundle over $\mathcal{Y}$ associated with a certain vector bundle $\mathcal{E}$ on $\mathcal{Y}$. Moreover, $\mathcal{Z}$ is the vanishing locus of a section of $\mathcal{E}$. Hence we have the diagram: $$\xymatrix{
\mathcal{B} \ar[rr]^{i} \ar[rd]_{\pi} & & \mathbf{P} \ar[ld]^{f'}\\
& \mathcal{Y} &\\
}$$ Indeed, the claim follows directly from part $(1)$ of Lemma \[factorization of tau\].\
\
Step $3$: Each term $C^{-i}$ of the complex on $\mathcal{B}\times{\mathcal{H}iggs}^{(n)}$ in Step $1$ is the pullback of a vector bundle $D^{-i}$ on $\mathbf{P}\times{\mathcal{H}iggs}^{(n)}$.\
Indeed, set $\mathbf{P}'=\mathcal{Y}\times_{{\textrm{Pic}^{d}}}X^{d}$. By part $(1)$ of Lemma \[factorization of tau\], part $(2)$ of Corollary \[Hilbert scheme of P1 and S\] and the definition of ${\textrm{Flag}^{d}_{\mathcal{B}}}$, we have the following commutative diagram with Cartesian squares: $$\xymatrix{
{\textrm{Flag}^{d}_{\mathcal{B}}}\times{\mathcal{H}iggs}^{(n)} \ar[r]^{i'} \ar[d]^{\psi'} & \mathbf{P}'\times{\mathcal{H}iggs}^{(n)} \ar[r]^{\theta'} \ar[d] & X^{d}\times{\mathcal{H}iggs}^{(n)} \ar[d]^{h}\\
\mathcal{B}\times{\mathcal{H}iggs}^{(n)} \ar[r]^{i} & \mathbf{P}\times{\mathcal{H}iggs}^{(n)} \ar[r]^{\theta} & X^{(d)}\times{\mathcal{H}iggs}^{(n)}\\
}$$ By Lemma \[existence of resolutions\], each $K^{-i}$ on ${\textrm{Flag}^{d}_{\mathcal{B}}}\times{\mathcal{H}iggs}^{(n)}$ is the pullback of a vector bundle $F^{-i}$ on $X^{d}\times{\mathcal{H}iggs}^{(n)}$, hence each $C^{-i}=\psi'_{*}(K^{-i})$ is the pullback of $D^{-i}=\theta^{*}(h_{*}(F^{-i}))$ on $\mathbf{P}\times{\mathcal{H}iggs}^{(n)}$.\
\
Step $4$: Consider $$\xymatrix{
\mathbf{P}\times{\mathcal{H}iggs}^{(n)} \ar[d]_{f'}\\
\mathcal{Y}\times{\mathcal{H}iggs}^{(n)}\\
}$$ Then $R^{j}f'_{*}(D^{-i}\otimes O(a))=0$ for all $j>i$ and $a\geq 0$.\
To see this, consider the following diagram where the bottom square is Cartesian: $$\xymatrix{
& X^{d}\times{\mathcal{H}iggs}^{(n)} \ar[d]^{h}\\
\mathbf{P}\times{\mathcal{H}iggs}^{(n)} \ar[r]^{\theta} \ar[d]^{f'} & X^{(d)}\times{\mathcal{H}iggs}^{(n)} \ar[d]^{f}\\
\mathcal{Y}\times{\mathcal{H}iggs}^{(n)} \ar[r] & {\textrm{Pic}^{d}}\times{\mathcal{H}iggs}^{(n)}\\
}$$ By Step $3$, each $D^{-i}=\theta^{*}(h_{*}(F^{-i}))$, and from part $(1)$ of Lemma \[factorization of tau\] we see that $\mathcal{Y}\rightarrow{\textrm{Pic}^{d}}$ is smooth, hence we only need to prove: $$R^{j}f_{*}(h_{*}(F^{-i})\otimes O(a))=0$$ for $j>i$ and $a\geq 0$. Moreover, by the projection formula, it is enough to prove that $$R^{j}f_{*}(F^{-i}\otimes h^{*}(O(a))\otimes f^{*}(O(\Theta)))=0$$ for $j>i$ and $a\geq 0$, where $\Theta$ is the theta divisor on ${\textrm{Pic}^{d}}$. Now the claim is an easy consequence of Lemma \[vanishing of pushforward\], using the description of $F^{-i}$ in Lemma \[explicit description of resolutions\], the description of $O(\Theta)$ and $O(1)$ in Lemma \[properties of abel-jacobian map\], and our assumptions on $E'$ at the beginning of this subsection.\
By Lemma \[baby version of the proof\], the proof of part $(1)$ of Lemma \[reduction of the main lemma\] is complete.\
\
Now let us turn to the proof of part $(2)$ of Lemma \[reduction of the main lemma\]. The proof is similar to the argument for part $(1)$. First notice that by Grothendieck duality and Proposition \[dualizing sheaf of bl\], we only need to prove $$R\pi_{*}{\mathcal{R}Hom}(\psi'_{*}(w^{*}(Q')),O((d-g)E))\in D^{\leq d}_{coh}(\mathcal{Y}\times{\mathcal{H}iggs})$$ where $E$ is the exceptional divisor of the blowup $\mathcal{B}\rightarrow\mathcal{Y}$.\
\
Step $1$: ${\mathcal{R}Hom}(\psi'_{*}(w^{*}(Q')),O((d-g)E))\in D^{b}_{coh}(\mathcal{B}\times{\mathcal{H}iggs}^{(n)})$ is represented by a complex of the form: $$0\rightarrow C^{0}\rightarrow C^{1}\rightarrow\cdots\rightarrow C^{d}\rightarrow 0$$ Indeed, consider the following diagram with Cartesian squares: $$\xymatrix{
{\textrm{Flag}^{d}_{\mathcal{B}}}\times{\mathcal{H}iggs}^{(n)} \ar[r]^{i'} \ar[d]^{\psi'} & \mathbf{P}'\times{\mathcal{H}iggs}^{(n)} \ar[r]^{\theta'} \ar[d] & X^{d}\times{\mathcal{H}iggs}^{(n)} \ar[d]^{h}\\
\mathcal{B}\times{\mathcal{H}iggs}^{(n)} \ar[r]^{i} & \mathbf{P}\times{\mathcal{H}iggs}^{(n)} \ar[r]^{\theta} & X^{(d)}\times{\mathcal{H}iggs}^{(n)}\\
}$$ By part $(1)$ of Lemma \[existence of resolutions\], the relative dualizing sheaf $\omega$ of the morphism $\psi'$ is isomorphic to $i'^{*}\theta'^{*}(\omega_{X^{d}|X^{(d)}})$. Since $w^{*}(Q')$ is represented by the complex of locally free sheaves $K^{*}$, we conclude from duality that $${\mathcal{R}Hom}(\psi'_{*}(w^{*}(Q')),O((d-g)E))$$ is represented by a complex of the prescribed form with $$C^{i}=\psi'_{*}(\check{K}^{-i}\otimes\omega)\otimes O((d-g)E).$$\
\
Step $2$: Each term $C^{i}$ is the pullback of a vector bundle $D^{i}$ on $\mathbf{P}\times{\mathcal{H}iggs}^{(n)}$. Indeed, each $\check{K}^{-i}$ is the pullback of $\check{F}^{-i}$ on $X^{d}\times{\mathcal{H}iggs}$, and $\omega$ is also the pullback of $\omega_{X^{d}|X^{(d)}}$ on $X^{d}\times{\mathcal{H}iggs}$, and $O(E)$ is the pullback of $O(-1)$ from $\mathbf{P}\times{\mathcal{H}iggs}^{(n)}$, hence each $C^{i}$ is the pullback of $$\theta^{*}(h_{*}(\check{F}^{-i}\otimes\omega_{X^{d}|X^{(d)}}))\otimes O(-1)^{\otimes (d-g)}$$ on $\mathbf{P}\times{\mathcal{H}iggs}$.\
\
Step $3$: We have $R^{j}f'_{*}(D^{i}\otimes O(a))=0$ for all $j>d-i$ and $a\geq 0$. To see this, use diagram: $$\xymatrix{
& X^{d}\times{\mathcal{H}iggs}^{(n)} \ar[d]^{h}\\
\mathbf{P}\times{\mathcal{H}iggs}^{(n)} \ar[r]^{\theta} \ar[d]^{f'} & X^{(d)}\times{\mathcal{H}iggs}^{(n)} \ar[d]^{f}\\
\mathcal{Y}\times{\mathcal{H}iggs}^{(n)} \ar[r] & {\textrm{Pic}^{d}}\times{\mathcal{H}iggs}^{(n)}\\
}$$ where the bottom square is Cartesian. By Step $2$, each $D^{i}\otimes O(a)$ is the pullback of $$h_{*}(\check{F}^{-i}\otimes\omega_{X^{d}|X^{(d)}})\otimes O(a-d+g)$$ from $X^{(d)}\times{\mathcal{H}iggs}^{(n)}$. Part $(1)$ of Lemma \[factorization of tau\] implies that $\mathcal{Y}\rightarrow{\textrm{Pic}^{d}}$ is smooth, hence we only need to prove: $$R^{j}f_{*}(h_{*}(\check{F}^{-i}\otimes\omega_{X^{d}|X^{(d)}})\otimes O(a-d+g))=0$$ for all $j>d-i$ and $a\geq 0$. This again follows from Lemma \[properties of abel-jacobian map\] and our assumptions on $E'$.\
By Lemma \[baby version of the proof\] again, the proof of part $(2)$ of Lemma \[reduction of the main lemma\] is also complete.
The proof of the main theorem
=============================
In this last section we will prove the main theorem. First let us establish the following:
Consider the diagram in Lemma \[the main lemma\] $$\xymatrix{
\mathcal{B}\times{\mathcal{H}iggs}^{(n)} \ar[r]^{p} \ar[d]^{\pi} & V\times{\mathcal{H}iggs}^{(n)}\\
\mathcal{Y}\times{\mathcal{H}iggs}^{(n)}
}$$ Then
$(1)$ $R\pi_{*}(p^{*}(Q))$ is a maximal Cohen-Macaulay sheaf of codimension $d$ on $\mathcal{Y}\times{\mathcal{H}iggs}^{(n)}$ supported on ${\mathcal{H}iggs}'^{m}\times_{H}{\mathcal{H}iggs}^{(n)}$. Recall that $d$ equals the codimension of ${\mathcal{H}iggs}'^{m}\times_{H}{\mathcal{H}iggs}^{(n)}$ inside $\mathcal{Y}\times{\mathcal{H}iggs}^{(n)}$.
$(2)$ Let $U_{m}$ be the open substack of ${\mathcal{H}iggs}^{m}$ given by the condition that the underlying vector bundle $E$ of the Higgs bundle satisfies $H^{1}(E)=0$ and $H^{0}(\check{E}\otimes L)=0$. Then $R\pi_{*}(p^{*}(Q))$ descends to a maximal Cohen-Macaulay sheaf $\mathcal{P}'$ on $U_{m}\times_{H}{\mathcal{H}iggs}^{(n)}$ whose restriction to $({\mathcal{H}iggs}^{reg}\cap U_{m})\times_{H}{\mathcal{H}iggs}^{(n)}$ agrees with the pullback of the Poincaré line bundle along $${\mathcal{H}iggs}^{reg}\times_{H}{\mathcal{H}iggs}^{(n)}\xrightarrow{\tau\times\textrm{id}}{\mathcal{H}iggs}^{reg}\times_{H}{\mathcal{H}iggs}^{(n)}$$ where $\tau$ is the involution of ${\mathcal{H}iggs}$ defined in part $(2)$ of Lemma \[factorization of tau\].
We are going to prove this by applying Lemma $7.7$ of $[1]$. By Lemma \[the main lemma\] and Lemma $7.7$ of $[1]$, we only need to check that the support of $R\pi_{*}(p^{*}(Q))$ has codimension greater than or equal to $d$. Let $\mathcal{W}$ denote the support of $R\pi_{*}(p^{*}(Q))$. First we claim that if we restrict $p$ to $(\mathcal{Y}\backslash\mathcal{Z})\times{\mathcal{H}iggs}$, then we have $$p^{-1}(\textrm{Hilb}^{d}_{\mathcal{C}|H}\times_{H}{\mathcal{H}iggs})=({\mathcal{H}iggs}'^{m}\cap(\mathcal{Y}\backslash\mathcal{Z}))\times_{H}{\mathcal{H}iggs}$$ In fact, since the restriction of $p$ to $\mathcal{Y}\backslash\mathcal{Z}$ is smooth by part $(3)$ of Proposition \[resolve the rational map\], and both $\textrm{Hilb}^{d}_{\mathcal{C}|H}\times_{H}{\mathcal{H}iggs}$ and ${\mathcal{H}iggs}'^{m}\times_{H}{\mathcal{H}iggs}$ are integral, we only need to check the equality set-theoretically, and this is the content of part $(1)$ of Lemma \[further props of B\]. From this we conclude that $$\mathcal{W}\cap((\mathcal{Y}\backslash\mathcal{Z})\times{\mathcal{H}iggs})=({\mathcal{H}iggs}'^{m}\cap(\mathcal{Y}\backslash\mathcal{Z}))\times_{H}{\mathcal{H}iggs}$$ Moreover, from its definition, it is clear that $\mathcal{W}\subseteq\pi(p^{-1}(\textrm{Hilb}^{d}_{\mathcal{C}|H}\times_{H}{\mathcal{H}iggs}))$, hence by part $(2)$ of Lemma \[further props of B\] we have: $$\textrm{dim}(\mathcal{W}\cap(\mathcal{Z}\times{\mathcal{H}iggs}))\leq m+3l+2g-1$$ On the other hand, part $(5)$ of Lemma \[elementary properties of Y\] implies that $$\textrm{dim}({\mathcal{H}iggs}'^{m}\times_{H}{\mathcal{H}iggs})=m+5l+1$$ Hence we have: $$\textrm{dim}({\mathcal{H}iggs}'^{m}\times_{H}{\mathcal{H}iggs})>\textrm{dim}(\mathcal{W}\cap(\mathcal{Z}\times{\mathcal{H}iggs}))$$ Combining this with part $(3)$ of Lemma \[elementary properties of Y\], we have: $$\textrm{codim}(\mathcal{W})\geq d$$ Hence $R\pi_{*}(p^{*}(Q))$ is a Cohen-Macaulay sheaf of codimension $d$ on
$\mathcal{Y}\times{\mathcal{H}iggs}^{(n)}$. Moreover, because $$\textrm{dim}({\mathcal{H}iggs}'^{m}\times_{H}{\mathcal{H}iggs})=m+5l+1>\textrm{dim}(\mathcal{W}\cap(\mathcal{Z}\times{\mathcal{H}iggs}))$$ the Cohen-Macaulayness implies that $$\mathcal{W}\cap((\mathcal{Y}\backslash\mathcal{Z})\times{\mathcal{H}iggs}^{(n)})\subseteq{\mathcal{H}iggs}'^{m}\times_{H}{\mathcal{H}iggs}^{(n)}$$ is dense in $\mathcal{W}$, hence $\mathcal{W}={\mathcal{H}iggs}'^{m}\times_{H}{\mathcal{H}iggs}^{(n)}$.\
For $(2)$, notice that by our construction and the discussions in Proposition \[definition of Q\] and part $(2)$ of Lemma \[factorization of tau\], $R\pi_{*}(p^{*}(Q))$ agrees with the pullback of $(\tau\times \textrm{id})^{*}(\mathcal{P})$ on the open substack: $$({\mathcal{H}iggs}'^{m}\cap(\mathcal{Y}\backslash\mathcal{Z}))\times_{H}{\mathcal{H}iggs}^{(n)}$$ where $\mathcal{P}$ is the Poincaré sheaf on $\widetilde{{\mathcal{H}iggs}}\times_{H}{\mathcal{H}iggs}^{(n)}$ and $\tau$ is the involution of ${\mathcal{H}iggs}$ in part $(2)$ of Lemma \[factorization of tau\]. From the proof of $(1)$ we see that the codimension of $$({\mathcal{H}iggs}'^{m}\cap\mathcal{Z})\times_{H}{\mathcal{H}iggs}^{(n)}$$ in $${\mathcal{H}iggs}'^{m}\times_{H}{\mathcal{H}iggs}^{(n)}$$ is greater than or equal to $3$. So by Proposition \[uniqueness of CM sheaf\] the descent data extends to ${\mathcal{H}iggs}'^{m}\times_{H}{\mathcal{H}iggs}^{(n)}$. So by part $(4)$ of Lemma \[elementary properties of Y\] and part $(2)$ of Lemma \[factorization of tau\], it descends to $U_{m}\times_{H}{\mathcal{H}iggs}^{(n)}$ and agrees with the pullback of the Poincaré line bundle along $\tau\times\textrm{id}$.
With the previous lemma at hand, we can now prove the main theorem. First notice that from the previous lemma we have a maximal Cohen-Macaulay sheaf $\mathcal{P}'$ on $U_{m}\times_{H}{\mathcal{H}iggs}^{(n)}$. Using Lemma \[factorization of tau\] and Lemma \[equivariance property\], we see that there exists a line bundle $\mathcal{A}$ on ${\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}$ such that the restriction of $$\check{\mathcal{P}}'\otimes\mathcal{A}$$ to $$(U_{m}\cap\widetilde{{\mathcal{H}iggs}})\times_{H}{\mathcal{H}iggs}^{(n)}$$ agrees with the Poincaré sheaf on $$(U_{m}\cap\widetilde{{\mathcal{H}iggs}})\times_{H}{\mathcal{H}iggs}^{(n)}$$ constructed in section $1.3$. Denote $\check{\mathcal{P}}'\otimes\mathcal{A}$ by $\mathcal{P}_{m}$. So our goal is to extend $\mathcal{P}_{m}$ to ${\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}_{ss}$ such that it agrees with the Poincaré line bundle on ${\mathcal{H}iggs}^{reg}\times_{H}{\mathcal{H}iggs}_{ss}$. First choose two consecutive integers $n_{1}$ and $n_{2}$ such that the corresponding stack ${\mathcal{H}iggs}^{(n_{i})}$ is nonempty. In fact, by our discussions in Construction \[certain open substack\] in section $4$, we can take $$n_{1}=-2m-2g+2\qquad n_{2}=-2m-2g+3$$ The proof of the main theorem boils down to the following three claims:
$(1)$ In order to construct the Poincaré sheaf on ${\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}_{ss}$, it is enough to construct the Poincaré sheaf on ${\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}^{n_{i}}_{ss}$ for all $i$.
$(2)$ To construct the Poincaré sheaf on ${\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}^{n_{i}}_{ss}$, it is enough to construct the Poincaré sheaf on $U_{m}\times_{H}{\mathcal{H}iggs}^{n_{i}}_{ss}$ for all $m\gg 0$.
$(3)$ Let ${\mathcal{H}iggs}^{(n_{i})}_{ss}={\mathcal{H}iggs}^{(n_{i})}\cap{\mathcal{H}iggs}^{n_{i}}_{ss}$. To construct the Poincaré sheaf on $U_{m}\times_{H}{\mathcal{H}iggs}^{n_{i}}_{ss}$, it is enough to construct the Poincaré sheaf on $U_{m}\times_{H}{\mathcal{H}iggs}^{(n_{i})}_{ss}$.
For claim $(1)$, notice that by tensoring with the line bundle $O_{X}(x_{0})$, we can translate any ${\mathcal{H}iggs}^{l}$ into ${\mathcal{H}iggs}^{n_{i}}$ for some $i$. Hence, by Lemma \[equivariance property\] and Proposition \[uniqueness of CM sheaf\], once we have constructed the Poincaré sheaf on ${\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}^{n_{i}}_{ss}$, we can extend it to the entire ${\mathcal{H}iggs}\times_{H}{\mathcal{H}iggs}_{ss}$, and its restriction to ${\mathcal{H}iggs}^{reg}\times_{H}{\mathcal{H}iggs}_{ss}$ will agree with the Poincaré line bundle.\
For $(2)$, notice that for any rank $2$ Higgs bundle $E$, we can find an integer $n$ such that $E\otimes O_{X}(nx_{0})\in U_{m}$ for some $m$. So if we fix an integer $l$ and consider the isomorphism $${\mathcal{H}iggs}^{l}\xrightarrow{\alpha}{\mathcal{H}iggs}^{l+2n}$$ induced by tensoring with $O_{X}(nx_{0})$, then the union of $\alpha^{-1}(U_{l+2n})$ over all $n$ covers ${\mathcal{H}iggs}^{l}$. Hence it suffices to construct the Poincaré sheaf on $\alpha^{-1}(U_{l+2n})\times_{H}{\mathcal{H}iggs}^{n_{i}}_{ss}$ for all $n\gg 0$. Now if we have the Poincaré sheaf on $U_{l+2n}\times_{H}{\mathcal{H}iggs}^{n_{i}}_{ss}$, then we can again use Lemma \[equivariance property\] to construct the Poincaré sheaf on $\alpha^{-1}(U_{l+2n})\times_{H}{\mathcal{H}iggs}^{n_{i}}_{ss}$, and Proposition \[uniqueness of CM sheaf\] will guarantee that it agrees with the Poincaré line bundle.\
For $(3)$, by Lemma \[a property of semistable higgs bundle\], we have $${\mathcal{H}iggs}^{n}_{ss}=({\mathcal{H}iggs}^{n}_{ss}\cap\widetilde{{\mathcal{H}iggs}})\cup({\mathcal{H}iggs}^{(n)}_{ss})$$ The construction in $(1)$ already gives the Poincaré sheaf on $$U_{m}\times_{H}({\mathcal{H}iggs}^{n}_{ss}\cap\widetilde{{\mathcal{H}iggs}})$$ Since the complement of ${\mathcal{H}iggs}^{reg}$ has codimension greater than or equal to two, it follows from Proposition \[uniqueness of CM sheaf\] that the Poincaré sheaf is uniquely determined by its restriction to ${\mathcal{H}iggs}^{reg}\times_{H}{\mathcal{H}iggs}$. Hence if we can construct the Poincaré sheaf on $U_{m}\times_{H}{\mathcal{H}iggs}^{(n_{i})}_{ss}$ such that its restriction to ${\mathcal{H}iggs}^{reg}\times_{H}{\mathcal{H}iggs}^{(n_{i})}_{ss}$ agrees with the Poincaré line bundle, then it is automatically compatible with the Poincaré sheaf on $U_{m}\times_{H}({\mathcal{H}iggs}^{n_{i}}_{ss}\cap\widetilde{{\mathcal{H}iggs}})$, hence they glue.\
For the claim about the flatness over ${\mathcal{H}iggs}_{ss}$, we use Proposition \[flatness of CM sheaves\] and Proposition \[smoothness of semistable higgs bundles\].
D. Arinkin. *Autoduality of compactified Jacobian of planar curves*.
D. Arinkin. *Cohomology of line bundles on compactified Jacobians*.
M. Melo, A. Rapagnetta, and F. Viviani. *Fourier-Mukai and autoduality for compactified Jacobians II*.
D. Arinkin. *Irreducible connections admit generic oper structures*.
D. Eisenbud. *Commutative algebra with a view toward algebraic geometry*.
*Stacks Project*.
A. Altman, A. Iarrobino, and S. Kleiman. *Irreducibility of the compactified Jacobian*.
A. Altman and S. Kleiman. *Compactifying the Picard scheme*.
J. Fogarty. *Algebraic families on an algebraic surface*.
D. Huybrechts. *Fourier-Mukai Transforms in Algebraic Geometry*.
V. Drinfeld. *Two dimensional $l$-adic representations of the fundamental group of a curve over a finite field and automorphic forms on GL(2)*.
G. Laumon. *Correspondance de Langlands géométrique pour les corps de fonctions*.
M. Li. *Construction of the Poincaré sheaf on the stack of rank two Higgs bundles of ${\mathbf{P}^{1}}$*.
D. Mumford. *Abelian varieties*.
E. Esteves, M. Gagné, and S. Kleiman. *Autoduality of the compactified Jacobian*.
A. Beauville, M.S. Narasimhan, and S. Ramanan. *Spectral curves and the generalised theta divisor*.
M. Haiman. *Hilbert schemes, polygraphs and the Macdonald positivity conjecture*.
W. Bruns and J. Herzog. *Cohen-Macaulay rings*.
---
address: |
for the CDF Collaboration\
Instituto de Física de Cantabria (CSIC-Universidad de Cantabria)\
Av. de los Castros s/n\
39005 Santander, Spain
author:
- GERVASIO GÓMEZ
title: TOP PHYSICS RESULTS FROM CDF
---
Introduction: top at CDF
========================
The CDF detector was upgraded [@cdf] for Run 2 of the Tevatron, and has recorded $\approx$ 600 pb$^{-1}$ of collision data at $\sqrt{s}=1.96$ TeV. Top at the Tevatron is produced predominantly in pairs through the strong interaction (quark-antiquark annihilation or gluon-gluon fusion), and it decays virtually 100% of the time to $Wb$. The final state of a $t\bar{t}$ event therefore has two $W$s and two $b$ quarks, and the event selection at CDF is characterized by the decay mode of the $W$ bosons. Events where both $W$s decay to $e$ or $\mu$ are called “dilepton” events. This mode has the advantage of being relatively clean, with a S/N of about 1.5 to 3.5, but it has a low rate (4-6 events/100 pb$^{-1}$) owing to the small leptonic branching fraction of the $W$. Events in which one $W$ decays to $e$ or $\mu$ and the other decays to quarks are called “lepton + jets” events. This decay channel has a higher rate than the dilepton channel (25-45 events/100 pb$^{-1}$), but it has worse S/N (0.3 to 3). Since all $t\bar{t}$ events have two $b$ quarks while only 1-2% of the dominant backgrounds contain heavy flavor, S/N can be considerably improved by identifying $b$ quarks ($b$ tagging). This is done either by looking at tracks embedded in a jet which are displaced from the primary interaction point (due to the long $b$-hadron lifetime) or by looking for a soft lepton embedded in a jet (due to the large semileptonic decay rate of $b$-hadrons). Top at the Tevatron can also be produced singly through the electroweak interaction, with a predicted cross section which is about 3 times smaller than the $t\bar{t}$ cross section. Single top events contain just one $W$ in the final state, have lower jet multiplicity, and are harder to separate from the background.
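The gain from $b$ tagging can be illustrated with a back-of-the-envelope calculation. The per-jet efficiencies below are hypothetical round numbers, not CDF's measured values, and the background is idealized as purely light-flavor:

```python
# Back-of-the-envelope illustration of the S/N gain from b tagging.
# eff_b and mistag are hypothetical round values, not CDF's measured efficiencies.
def tag_prob(n_b, n_light, eff_b=0.5, mistag=0.01):
    """Probability that at least one jet in the event gets b-tagged."""
    p_no_tag = (1 - eff_b) ** n_b * (1 - mistag) ** n_light
    return 1 - p_no_tag

# ttbar lepton+jets event: 2 b-jets + 2 light jets.
# Idealized W+jets background event: 4 light-flavor jets.
p_sig = tag_prob(n_b=2, n_light=2)
p_bkg = tag_prob(n_b=0, n_light=4)
print(f"signal kept: {p_sig:.3f}, background kept: {p_bkg:.3f}, "
      f"S/N improvement: {p_sig / p_bkg:.0f}x")
```

With these toy numbers the tag requirement keeps most of the signal while rejecting almost all of the light-flavor background, which is the qualitative effect described above.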
Top Cross-Section
=================
Measurement of the $t\bar{t}$ production cross section in the different decay modes and with different methods provides an understanding of the event selection efficiency, background contamination, event kinematics and heavy flavor content. It is the benchmark measurement to validate a sample of top events which can be used for other top quark measurements, and it is in itself a test of QCD predictions. The most accurate measurement in the dilepton channel results from the combination of two complementary analyses using approximately 200 pb$^{-1}$ of data. One analysis selects events with two identified leptons ($e$ or $\mu$), while the other identifies one lepton and requires an additional isolated high $P_T$ track in order to increase the acceptance at the cost of some additional background contamination. The two-lepton analysis observes 13 events with a background expectation of $2.7 \pm 0.7$, while the lepton+track analysis observes 19 candidates with a background expectation of $6.9 \pm 1.7$. The combined analysis gives $$\sigma_{t\bar{t}} = 7.0^{+2.4}_{-2.1}\,\textrm{(stat)}\,^{+1.6}_{-1.1}\,\textrm{(sys)} \pm 0.4\,\textrm{(lum) pb.}$$
In the lepton+jets decay mode several cross section measurements have been performed. The basic event selection requires one identified lepton, large missing transverse energy and 3 or more energetic jets. Counting analyses require at least one of the jets to be tagged as a $b$-jet in order to further reduce the dominant $W$+jets and QCD backgrounds. The QCD and fake-tag backgrounds are derived from data, while $W$ + heavy flavor, single top, and diboson backgrounds use a combination of data and MC. We check the background estimation in a control region of low jet multiplicity, where we expect little $t\bar{t}$ contribution, and measure the cross section in the events with 3 or more jets by assigning any excess of observed events over the predicted non-top SM backgrounds to $t\bar{t}$ production. The most precise determination of the cross section at CDF uses $\approx$ 160 pb$^{-1}$ of lepton+jets events and requires at least one jet to be $b$-tagged by a secondary vertex algorithm which looks at tracks associated to the jet. Additionally, the total transverse energy $H_T$ of the event (including the missing $E_T$) is required to be larger than 200 GeV in the signal region. Figure \[fig:btagxsec\] shows the distribution of $b$-tagged events and the predicted background rates as a function of jet multiplicity. A total of 48 events are observed in the 3 or more jet region over an expected background of $13.5 \pm 1.8$. The resulting cross section is $$\sigma_{t\bar{t}} = 5.6^{+1.2}_{-1.1}\,\textrm{(stat)}\,^{+0.9}_{-0.6}\,\textrm{(sys) pb}$$ where a 6% luminosity uncertainty is included in the systematic uncertainty.
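The counting measurement itself reduces to simple arithmetic once the acceptance is fixed. In the sketch below the efficiency-times-luminosity factor is a hypothetical placeholder (not a CDF number) chosen only so the arithmetic lands near the quoted result:

```python
import math

# Counting-experiment cross section: sigma = (N_obs - N_bkg) / (eff * L).
# N_obs and N_bkg are taken from the text; eff_times_lumi is a hypothetical
# placeholder (acceptance x branching fraction x luminosity, in pb^-1).
n_obs = 48
n_bkg = 13.5
eff_times_lumi = 6.16  # hypothetical, pb^-1

sigma = (n_obs - n_bkg) / eff_times_lumi
stat_err = math.sqrt(n_obs) / eff_times_lumi  # Poisson uncertainty on N_obs alone
print(f"sigma_ttbar = {sigma:.1f} +- {stat_err:.1f} (stat) pb")
```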
Other analyses in the lepton+jets channel do not use $b$-tagging. Instead, they make use of kinematic information in the candidate events. Templates for different kinematic variables are built for signal and background events, and the distribution observed in the data is fit to a sum of these templates, allowing the normalizations to float. Two analyses of this type have been performed at CDF. One simply fits the $H_T$ distribution, while the other one derives a neural net (NN) discriminant which includes information from seven different kinematic distributions. The results of such fits are shown in Figure \[fig:kinxsec\]. The $H_T$ fit illustrates the difficulty of separating the top signal from the background when one does not use $b$-tagging. The fraction of top from this fit is ($13 \pm 4$)%. The NN analysis uses $\approx$ 195 pb$^{-1}$ and observes 519 (pretag) events. The $t\bar{t}$ fraction is ($17.6 \pm 3.0$)% and the measured cross section is $$\sigma_{t\bar{t}} = 6.7 \pm 1.1\,\textrm{(stat)} \pm 1.6\,\textrm{(sys) pb.}$$
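A template fit with floating normalizations is, in its simplest binned least-squares form, a two-parameter linear fit. Here is a self-contained toy version; the templates and "data" are invented for illustration, and a real analysis would maximize a Poisson likelihood instead:

```python
# Toy binned template fit with floating normalizations: data ~ a*sig + b*bkg.
sig = [0.0, 1.0, 3.0, 5.0, 3.0, 1.0]    # hypothetical signal template (bin yields)
bkg = [6.0, 5.0, 4.0, 2.0, 1.0, 0.5]    # hypothetical background template
data = [bb + 0.15 * ss for ss, bb in zip(sig, bkg)]  # pretend data: bkg + 15% sig

# Solve the 2x2 normal equations for min_{a,b} sum_i (data_i - a*sig_i - b*bkg_i)^2.
sss = sum(s * s for s in sig)
sbb = sum(b * b for b in bkg)
ssb = sum(s * b for s, b in zip(sig, bkg))
sds = sum(d * s for d, s in zip(data, sig))
sdb = sum(d * b for d, b in zip(data, bkg))
det = sss * sbb - ssb * ssb
a = (sds * sbb - sdb * ssb) / det   # fitted signal normalization
b = (sdb * sss - sds * ssb) / det   # fitted background normalization
print(f"fitted normalizations: signal a = {a:.3f}, background b = {b:.3f}")
```

Since the toy "data" is an exact linear combination of the two templates, the fit recovers the injected normalizations.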
Several other measurements of the cross section have been performed, including analyses requiring two $b$-tagged jets, soft-lepton tagging, and all-hadronic $W$ decays, among others. All measurements are consistent with each other and with the theoretical prediction [@xsec-th] $\sigma_{t\bar{t}}^{\rm{th}} = 6.7^{+0.7}_{-0.9}$ pb for $m_t = 175$ GeV.
Top Mass
========
One of the most interesting properties of top is its huge mass, so large that its Yukawa coupling is strikingly close to unity, making us wonder if this is just a coincidence or if top plays some special role in the electroweak symmetry breaking mechanism. The top mass is also a dominant parameter in higher order radiative corrections to several SM observables, and in particular an accurate determination of the top mass, combined with precision electroweak measurements, helps constrain the mass of the elusive SM Higgs. CDF has measured the top mass using both dilepton and lepton+jets events. These analyses are difficult for several reasons: the final state leptons and jets observed in the detector must be matched to the decay partons, giving several possible assignments (only one of which is correct), the undetected neutrinos cause the event kinematics to be under-constrained, and the jet energies must be known with high accuracy. The dominant systematic uncertainty for all top mass measurements is the jet energy determination. As of the date of this conference, CDF is in the process of revising the jet energy corrections, which will reduce the systematic uncertainty of all $m_t$ results shown here to about one half of its current value. Much improved $m_t$ results will be shown in the next round of conferences.
The best CDF result comes from a “dynamic likelihood method” (DLM) analysis which uses $\approx$ 160 pb$^{-1}$ of tagged lepton+jets data. Event selection requires one identified $e$ or $\mu$ and exactly 4 jets, at least one of which is required to be tagged by the secondary vertex algorithm. A likelihood is built as a function of $m_t$ for each event using information from leading order matrix element (LO ME) Monte Carlo and transfer functions which give the probability of observing final state jets and leptons given the LO ME partons. All parton-jet assignments are considered, with the correct assignment receiving a larger weight. The top mass is obtained by maximizing the combined likelihood of all 22 events passing the selection, and it is then corrected by a background mapping function which gives the true top mass for the measured background fraction of 19%. Figure \[fig:dlm\] shows the maximum likelihood mass distribution for each of the 22 observed events and the resulting combined likelihood. The top mass is determined to be $$m_t = 177.8^{+4.5}_{-5.0}\,\textrm{(stat)} \pm 6.2\,\textrm{(sys) GeV.}$$
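The essential mechanics of the DLM — multiply the per-event likelihood curves and maximize over $m_t$ — can be sketched with toy Gaussian per-event likelihoods. The event means and widths below are invented for illustration, not CDF data:

```python
# Toy sketch of the DLM mechanics: combine per-event likelihood curves L_i(m_t)
# multiplicatively (additively in log) and take the maximum over an m_t grid.
# The per-event Gaussian (mean, width) pairs below are invented, not CDF data.
events = [(172.0, 12.0), (180.0, 15.0), (176.0, 10.0), (183.0, 20.0)]

def log_likelihood(m):
    """Sum of per-event Gaussian log-likelihoods (up to an m-independent constant)."""
    return sum(-0.5 * ((m - mu) / width) ** 2 for mu, width in events)

grid = [150.0 + 0.1 * i for i in range(501)]      # scan m_t from 150 to 200 GeV
m_best = max(grid, key=log_likelihood)
print(f"combined maximum-likelihood mass: {m_best:.1f} GeV")
```

For Gaussian per-event curves the combined maximum is just the inverse-variance weighted mean of the event means, which the grid scan reproduces.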
The top mass has also been measured using “template” analyses, in which a value for the top mass is reconstructed for each event by looping over all possible jet-parton assignments and neutrino solutions, imposing kinematic constraints, and choosing the $m_t$ which best fits the event. The resulting $m_t$ distribution is then compared to Monte Carlo $m_t$ templates simulated at various top masses. The top mass is determined by maximizing a likelihood built as a function of $m_t$ by fitting the templates to the observed distribution. This method has been used in both dilepton and lepton+jets samples. Figure \[fig:template\] shows the reconstructed top mass distributions for the dilepton sample, which observes 12 events and uses $\approx$ 193 pb$^{-1}$, and for the tagged lepton+jets sample, which uses $\approx$ 162 pb$^{-1}$ and observes 28 events. The figure also shows signal and background templates and the likelihood as a function of $m_t$.
The corresponding top mass is determined to be m\_t &=& 176.5\^[+17.2]{}\_[-16.0]{}[(stat)]{} 6.9[(sys) GeV ]{}\
m\_t &=& 174.8\^[+7.1]{}\_[-7.7]{}[(stat)]{} 6.5[(sys) GeV ]{} for the dilepton and lepton+jets channels, respectively.
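The mechanics of such a template likelihood can be sketched in a few lines. This is a toy with single Gaussian templates and invented numbers, not the CDF implementation (which also fits background templates and treats systematics):

```python
# Toy template-method mass fit: model the reconstructed-mass distribution
# for each candidate top mass as a Gaussian "template" and pick the mass
# that maximizes the summed per-event log-likelihood.  All numbers are
# illustrative, not CDF data.
import numpy as np

rng = np.random.default_rng(0)

def template_pdf(m_reco, m_top, width=25.0):
    """Toy per-event template: reconstructed mass smeared around m_top."""
    return np.exp(-0.5 * ((m_reco - m_top) / width) ** 2) / (width * np.sqrt(2 * np.pi))

# "Observed" reconstructed masses for 28 events, generated at m_t = 175 GeV.
data = rng.normal(175.0, 25.0, size=28)

# Scan candidate top masses and maximize the log-likelihood.
masses = np.arange(150.0, 200.5, 0.5)
log_l = np.array([np.sum(np.log(template_pdf(data, m))) for m in masses])
m_fit = masses[np.argmax(log_l)]
print(f"fitted m_t = {m_fit:.1f} GeV")
```

With fixed-width Gaussian templates the maximum-likelihood mass is just the grid point nearest the sample mean; the real templates are empirical histograms from full simulation.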
Several other measurements have been performed, including multivariate template, neutrino weighting, and double-tag analyses.
Single Top
==========
Top quarks at the Tevatron can be singly produced via the weak interaction involving a $Wtb$ vertex. The two relevant production modes are the $t$ and $s$ channel exchange of a virtual $W$ boson. The predicted theoretical cross sections [@single-top] are $\sigma_s = 0.88 \pm 0.11$ pb for the $s$ channel and $\sigma_t = 1.98 \pm 0.25$ pb for the $t$ channel. The single top cross section is of particular interest because it is proportional to $|V_{tb}|^2$, the square of the CKM matrix element which relates top and bottom quarks, and which has not been directly measured. Anomalously high rates of single top production would be a hint of exotic physics beyond the SM such as FCNC, a $W^{\prime}$, or anomalous couplings. In addition, single top is an important background for SM Higgs searches, since the final state is the same as for $WH \rightarrow l\nu b\bar{b}$. The event selection requires an identified $e$ or $\mu$, high missing $E_T$, and two jets, one of which must be $b$-tagged. Two separate searches are performed using $\approx$ 162 pb$^{-1}$ of data: a combined search for the $s$ and $t$ channel single top signal, and a separate search which measures the rates of the two single top processes separately. A maximum likelihood technique is used to extract the signal content in the data. Monte Carlo templates of an appropriate kinematic variable are built for the signal and for the $t\bar{t}$ and non-top backgrounds, and fit to the distribution observed in the data. The separate search uses a $Q\cdot\eta$ distribution which exhibits a distinct asymmetry for $t$ channel events, where $Q$ is the charge of the lepton and $\eta$ is the pseudorapidity of the untagged jet. The combined search uses the $H_T$ distribution. The results of the fits to the data are shown in Figure \[fig:st\].
No significant evidence for single top is observed in the data, and 95% C.L. upper limits are set. For the separate search, we set $$\begin{aligned}
\sigma_s^{95\%~\mathrm{C.L.}} &< 13.6~\mathrm{pb} \\
\sigma_t^{95\%~\mathrm{C.L.}} &< 10.1~\mathrm{pb}\end{aligned}$$ and for the combined search we set $$\sigma_{s+t}^{95\%~\mathrm{C.L.}} < 17.8~\mathrm{pb}.$$ CDF hopes to see evidence of single top production and to measure its production rate with a much larger data sample.
Other Top Measurements
======================
Several other interesting top quark measurements have been and are being performed. Because $m_t > m_W$, a large fraction $F_0$ of $W$ bosons produced in top decays are longitudinally polarized. The SM tree-level prediction is $F_0 = 0.703$ for $m_t = 175$ GeV. This fraction can be measured because the kinematic distributions of the decay products of the $W$ differ among the $W$ polarization states. Fitting the $\cos\theta^*$ distribution, where $\theta^*$ is the angle between the charged lepton momentum in the $W$ rest frame and the boost direction from the top to the $W$ rest frame, in a 162 pb$^{-1}$ tagged lepton+jets sample we determine $F_0 = 0.89^{+0.30}_{-0.34}$(stat)$\pm 0.17$(sys) and $F_0 > 0.25$ at 95% C.L. A similar analysis uses the $p_T$ of the lepton to discriminate between the different $W$ polarizations. In the tagged lepton+jets sample the result is $F_0 = 0.88^{+0.12}_{-0.47}$(tot) and $F_0 > 0.24$ at 95% C.L. In a 200 pb$^{-1}$ dilepton sample we determine $F_0 < 0.52$ at 95% C.L., and the combined analysis gives $F_0 = 0.27^{+0.35}_{-0.24}$(stat)$\pm 0.17$(sys) and $F_0 < 0.88$ at 95% C.L. The reason for the dilepton result is an excess of events with low $p_T$ leptons, which drives the maximum likelihood for $F_0$ to an unphysical negative value.
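For illustration, a minimal $\cos\theta^*$ fit might look like the following, using the standard tree-level helicity decomposition $\tfrac{3}{4}F_0\sin^2\theta^* + \tfrac{3}{8}F_-(1-\cos\theta^*)^2 + \tfrac{3}{8}F_+(1+\cos\theta^*)^2$ with, as a simplifying assumption made here, $F_+=0$. All numbers are toy values, not CDF data:

```python
# Toy W-helicity fit: generate leptons following the tree-level cos(theta*)
# density with F0 = 0.70 and F_+ = 0 (our simplifying assumption), then
# scan F0 with an unbinned likelihood.
import numpy as np

rng = np.random.default_rng(1)

def pdf(c, f0):
    """cos(theta*) density with longitudinal fraction f0 and F_+ = 0."""
    return 0.75 * f0 * (1 - c**2) + 0.375 * (1 - f0) * (1 - c) ** 2

# Accept-reject sampling under a flat envelope of height 1.6 (> max pdf).
c = rng.uniform(-1, 1, 20000)
keep = rng.uniform(0, 1.6, 20000) < pdf(c, 0.70)
sample = c[keep][:2000]

f0_grid = np.linspace(0.0, 1.0, 201)
log_l = np.array([np.sum(np.log(pdf(sample, f))) for f in f0_grid])
f0_fit = f0_grid[np.argmax(log_l)]
print(f"fitted F0 = {f0_fit:.2f}")
```

The real analyses fit reconstructed distributions with detector acceptance folded in; this sketch only shows the likelihood-scan mechanics.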
A search for anomalous kinematics in top decays has been performed. All distributions agree with SM expectations except for the above mentioned excess of low $p_T$ dilepton events, which is compatible with the SM with a 1% to 4% probability.
Top events are an interesting sample in which to search for physics beyond the SM. A search for charged Higgs bosons from top decays sets tree-level limits in the MSSM ($m_H$, $\tan\beta$) plane which improve upon previously existing limits, and a model-independent limit of BR($t\rightarrow Hb$)$<$0.7 at 95% C.L. is set on the top to charged Higgs branching fraction for charged Higgs masses of 80 GeV $< m_H <$ 150 GeV. Another search, for a generic 4$^{\rm th}$ generation $t^{\prime}$ quark, is performed using 195 pb$^{-1}$ of data by fitting the $H_T$ distribution in untagged lepton + $\geq 4$ jet events to different background, top, and $t^{\prime}$ signal templates generated at various $t^{\prime}$ masses. Upper limits at 95% C.L. are set on the $t^{\prime}$ cross section as a function of the $t^{\prime}$ mass. The projected sensitivity of this analysis predicts setting limits on the $t^{\prime}$ mass with an integrated luminosity greater than 500 pb$^{-1}$.
Several branching fraction studies were performed. The rate of tau leptons in top decays is found to be in agreement with SM predictions, setting a limit of $r_{\tau} < 5.0$ at the 95% C.L. for the anomalous rate enhancement factor, predicted to be unity by the SM. The ratio $ R = $ BR($t\rightarrow Wb$)/BR($t\rightarrow Wq$) is measured in both dilepton and lepton+jets events, setting a limit of $R > 0.62$ at 95% C.L., in agreement with the SM prediction of $R > 0.998$. The ratio of dilepton to lepton+jets cross-sections is measured to be $\sigma_{ll}/\sigma_{lj} = 1.45^{+0.83}_{-0.55}$, in agreement with the expected value of 1.
Conclusions and Outlook
=======================
Top quark properties are under extensive study at CDF. The production cross section has been measured in different top decay channels and using different techniques. All results are in agreement with one another and with the theoretical predictions, giving us confidence that indeed we have established a top signal and that we understand its backgrounds, its heavy flavor composition, and its kinematic properties. The top mass has been measured using different samples and techniques. The largest systematic uncertainty in these measurements comes from the jet energy scale determination, and much improved top mass results will be shown shortly after this conference. With larger data samples and improved jet energy measurements, the overall top mass uncertainty will soon be reduced to about 4 GeV. We have searched for electroweak production of top, setting limits on its production cross section. With much larger data samples we hope to observe a single top signal and measure its cross section. By the end of 2007, the recorded data will reach at least 2 fb$^{-1}$, and it could be more than double this size depending on Tevatron performance. All analyses will reduce significantly their uncertainties, and searches for new physics will significantly improve their sensitivity.
Acknowledgments {#acknowledgments .unnumbered}
===============
The results shown here represent a lot of work by a lot of people. I thank my CDF colleagues for their efforts to carry out these challenging physics analyses and for providing a friendly and stimulating research environment. I thank the conference organizers for a very nice and well organized week of physics. Finally, I thank the colleagues of my research institution, IFCA, for their support. This work is supported by the Ministerio de Educación y Ciencia, Spain (under contract FPA2002-01678) and by the EU (under contract HPRN-CT-2002-00292).
References {#references .unnumbered}
==========
[99]{}
The CDF Collaboration, [*The CDF II Detector Technical Design Report*]{}, FERMILAB-Pub-96/390-E.
M. Cacciari, S. Frixione, G. Ridolfi, M. Mangano and P. Nason, [*JHEP*]{} [**0404**]{}, 068 (2004).
B.W. Harris, E. Laenen, L. Phaf, Z. Sullivan, S. Weinzierl; Z. Sullivan, hep-ph/0408049.
---
abstract: 'We report the results of simultaneous observations of the Vela pulsar in X-rays and radio from the RXTE satellite and the Mount Pleasant Radio Observatory in Tasmania. We sought correlations between Vela’s X-ray emission and radio arrival times on a pulse-by-pulse basis. At a confidence level of 99.8% we have found significantly higher flux density in Vela’s main X-ray peak during radio pulses that arrive early. This excess flux shifts to the ‘trough’ following the second X-ray peak during radio pulses that arrive later. Our results suggest that the mechanism producing the radio pulses is intimately connected to the mechanism producing the X-rays. Current models using resonant absorption of radio emission in the outer magnetosphere as a cause of the X-ray emission are explored as a possible explanation for the correlation.'
author:
- 'A. Lommen, J. Donovan, C. Gwinn, Z. Arzoumanian, A. Harding, M. Strickman, R. Dodson, P. McCulloch, and D. Moffett'
nocite:
- '[@Krishnamohan83]'
- '[@Harding02]'
- '[@Lundgren95]'
- '[@Shearer03]'
- '[@Patt99]'
- '[@Cusumano03]'
- '[@Vivekanand01]'
- '[@Johnston01]'
- '[@Dodson02]'
- '[@Petrova03]'
title: 'Correlation between X-ray Lightcurve Shape and Radio Arrival Time in the Vela Pulsar'
---
Introduction
============
The Vela pulsar (PSR B0833$-$45) has been well-studied. Much observational work has been done to understand Vela’s emission in individual wavelength regions; e.g. the work of Krishnamohan & Downs (1983, hereafter KD83) in the radio regime, observations by @Ogelman93 in the X-ray regime, and studies by [@Kanbach94] in the gamma ray regime. Observations of Vela’s spectrum allow for the possibility of both polar cap [@Daugherty96] and outer-gap [@Cheng00] models of emission. Vela’s pulse profiles in individual regions have been phase-aligned using the phase of the radio pulse across the optical, X-ray, and gamma ray wavelength bands (Harding et al. 2002, hereafter H02). This article works to further relate Vela’s high energy emission to its low-energy (radio) emission.
X-ray observations of Vela are challenging. Though Vela is the strongest gamma ray source in the sky, the pulsar’s spectral power drops off in the hard X-ray band, making its X-ray emission very difficult to detect. Additionally, the pulsed emission is overwhelmed by the bright but unpulsed background of the X-ray emission nebula in which the pulsar is embedded [@Helfand01].
The single-peaked pulse profile of the Vela pulsar in radio wavelengths is much simpler than its high energy counterparts, although several studies have revealed compound emission. KD83 detected peak-intensity dependent changes in the pulse-shape with the strongest pulses arriving earlier than the averaged profile. They conclude that the radio peak is composed of four different components originating at different heights in the emission cone with components lower in the magnetosphere arriving later. In this article one aspect we explore is whether a similar connection persists in the X-ray regime, i.e. whether the X-ray pulses associated with early-arriving radio pulses have a different shape and/or a different flux than others.
Related work on other pulsars includes experiments probing the relationship between the Crab pulsar’s “giant" pulsed emission and its gamma ray emission (Lundgren et al., 1995, hereafter Lu95) or its optical emission (Shearer et al., 2003, hereafter Sh03). They reached opposite conclusions. Lu95 observe no correlation within their sensitivity, indicating that variations in radio flux are caused by changes in radio coherence, which only affects the radio intensity. Sh03 observe a significant correlation, and they thus conclude that the increased emission in the optical and radio is caused by an increased pair production efficiency.
Patt et al. (1999) study pulse-to-pulse variability in the X-ray regime for the Crab pulsar, and they find the pulses to be steady to 7%. Using this result, as well as previous work showing that the Crab exhibits giant radio pulses roughly every two minutes, they conclude that the radio and X-ray emission mechanisms are not closely related, even though it is likely that the optical and X-ray emission regions exist in the same section of the magnetosphere [@Patt99].
Additional experiments linking pulsar emission in the radio and X-ray regimes have been performed by Cusumano et al. (2003) and Vivekanand (2001). Cusumano et al. show that in PSR B1937+21 there is close phase alignment between X-ray pulses and giant radio pulses, suggesting a correlation in their emission regions. Vivekanand, on the other hand, finds that the X-ray flux variations are so much smaller than those at radio wavelengths that they are inconsistent with the existence of any relationship between the charged emitters in the two wavelength regimes.
Giant radio pulses have not been shown to exist in Vela, but Johnston et al. (2001, hereafter J01) discovered microstructure and ‘giant micropulses’ in the Vela pulsar. The giant micropulses have flux densities no more than ten times the mean flux density and have a typical pulse width of $\sim 400 \mu s$.
By doing a pulse-by-pulse analysis of the Vela pulsar in X-ray and radio wavelengths, we will show in this paper that the Vela pulsar’s X-ray and radio emission mechanisms must be related. We will discuss the X-ray and radio observations in §\[sec.observations\], our analysis in §\[sec.analysis\], the effects of scintillation in §\[sec.scintillation\], a discussion of interpretations in §\[sec.discussion\], and finally our conclusions and related future work in §\[sec.conclusion\].
Observations {#sec.observations}
============
Our data consist of 74 hours of simultaneous radio and X-ray observations taken over three months at the Mount Pleasant Radio Observatory in Tasmania and with the RXTE satellite. The radio data were acquired during 12 separate observations between 30 April and 23 August, 1998.
The radio data were collected as part of the long term monitoring program of ten young pulsars, including Vela [@Lewis03]. These data were collected with the 26m antenna at 990.025 MHz using the incoherently de-dispersed single pulse system (full description in Dodson, McCulloch, & Lewis, 2002). All individual pulses from Vela are detectable, and the pulse height, integrated area, and central time of arrival (for the solar-system barycenter) were calculated from cross-correlation with a high signal to noise template in the usual fashion.
The X-ray data were taken during the same three months, yielding 265 ks of usable simultaneous observation. For the purposes of this project, only top-layer data from RXTE’s Proportional Counter Units (PCUs) in Good Xenon mode in the energy range of 2-16 keV were used. The other filtering parameters were standard RXTE criteria: the elevation was greater than 10 degrees, the offset was less than 0.02 degrees, at least 3 PCUs were on, the time since the SAA was at least 30 minutes, and electron0 was less than 0.105.
Analysis {#sec.analysis}
========
We filtered the X-ray photon arrival times and transformed them to the Solar System Barycenter (SSB) using the standard FTOOLS [@Blackburn95] package. We calculated the pulsar phase at the time of each X-ray photon, using the radio pulsar-timing program TEMPO, and the ephemeris downloadable from Princeton University[^1]. We matched each X-ray photon with the radio pulse that arrived at the SSB at the same time. The precise time span associated with each radio pulse was given by our best model for arrival time of the peak of the radio pulse, $\pm 0.5\times$ the instantaneous pulse period calculated via the model. Photons arriving on the borderline were associated with the earlier pulse. We then compared pulse profiles for X-rays segregated according to the arrival time of the radio pulse.
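The photon-pulse association step can be sketched as follows. This is an assumed reimplementation with a toy constant-period ephemeris; the actual analysis uses the full TEMPO timing model:

```python
# Match each barycentered photon time to the radio pulse whose predicted
# peak lies within +/- half a period, sending borderline photons to the
# earlier pulse.  Times and the ephemeris here are toy values.
import numpy as np

period = 0.0893                                     # approximate Vela period (s)
pulse_peaks = np.arange(0.0, 10.0, period)          # model peak times at the SSB
photons = np.array([0.02, 3 * period + 0.04, 5.0])  # toy photon arrival times

# Pulse index whose window (peak - P/2, peak + P/2] contains the photon;
# the ceil form sends a photon exactly on the borderline to the earlier pulse.
idx = np.ceil((photons - period / 2) / period).astype(int)
residual = photons - pulse_peaks[idx]               # photon offset within its pulse
```

The half-open window `(peak - P/2, peak + P/2]` is exactly what the ceiling expression produces, which matches the stated convention for borderline photons.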
Single radio pulses arrive at a range of times around the predicted arrival time, as KD83 found. The histogram of residual arrival times for radio pulses, relative to the prediction of our best model, is shown in Figure \[fig.phasedist\]. In the figure, and in our analysis, the average residual from each 5-minute segment of data was subtracted from all the data in that segment in order to account for any systematic wandering of the pulse arrival times relative to our model. The distribution of arrival times is slightly skewed, with a tail at late arrival times, so that the mean of the distribution is slightly later than the mode (at the peak of the histogram), as Figure \[fig.phasedist\] shows. We divided all of the pulses into 10 deciles, by the residual phase of the radio pulse, with equal numbers of pulses in each decile. Figure \[fig.phasedist\] shows our division of the residual phase of the radio pulse into the deciles. The deciles are well mixed in time, i.e. no particular observing epoch dominates any decile. Removing the 5-minute average residual eliminates effects of long-term trends that might appear independently in the radio pulse-timing and X-ray photon counting data.
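A minimal version of this bookkeeping, detrending by 5-minute segment means and then splitting into equally populated deciles, might read as follows (synthetic residuals and invented numbers, not the observed data):

```python
# Subtract each 5-minute segment's mean residual, then assign rank-based
# deciles with equal numbers of pulses per decile.
import numpy as np

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 3600, 5000))           # pulse times over one hour (s)
resid = rng.normal(0.0, 1e-4, 5000) + 2e-7 * t    # residuals with a slow drift (s)

# Remove each 300 s segment's mean residual to kill long-term wander.
seg = (t // 300).astype(int)
seg_mean = np.bincount(seg, weights=resid) / np.bincount(seg)
detrended = resid - seg_mean[seg]

# Rank-based deciles: decile 0 holds the earliest tenth of pulses, etc.
rank = np.argsort(np.argsort(detrended))
decile = rank * 10 // len(detrended)
counts = np.bincount(decile)
```

Because the split is by rank rather than by residual value, every decile holds exactly one tenth of the pulses regardless of the skewness of the distribution.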
We formed an X-ray pulse profile, integrated from 2$-$16 keV, for each of the deciles of radio-pulse arrival times, from the X-ray photons associated with each. Figure \[fig.hists\] shows the 10 resulting X-ray profiles. Each profile contains 13 bins in phase with the radio peak being at the left edge of each plot on the border between bins 13 and 1. The X-ray profiles are significantly different; the X-ray pulse changes in shape with the arrival of the radio pulse. We denote the ten X-ray profiles by their “lateness": profile 1 comprises the decile of X-ray photons from the earliest radio-pulse arrivals, and profile 10 the decile from the latest radio-pulse arrivals. In particular it appears that the first X-ray pulse is sharper and stronger in the earlier deciles. Overall 2 distinct X-ray peaks are visible in each of the first 5 profiles, whereas in profiles 6-10 the two peaks are difficult to distinguish. In order to quantify these changes, we performed a number of statistical tests on these 10 profiles including a full Monte Carlo simulation described at the end of this section.
We observe no significant trend in total X-ray flux with lateness. Figure \[fig:total\_counts\] shows total counts for each of the 10 deciles. From these data, we determine a Pearson’s correlation coefficient of $r=-0.26$ between total X-ray counts and radio-pulse lateness, with a confidence interval of 54%. Nominally, this is not distinguishable from the hypothesis of zero correlation. Note, however, that Pearson’s $r$ is not necessarily the best statistic for this comparison, as we discuss below.
We do find that the total counts reported in Figure \[fig:total\_counts\] are inconsistent with the Gaussian distribution expected in this limit of the Poisson distribution produced by shot noise, at 99.7% confidence, as determined from a $\chi^2$ test. This suggests that there is indeed an evolution of the X-ray profile with radio lateness. If we consider the counts in individual bins, we find that they too deviate from the mean more than one would expect from Poisson statistics. A simple $\chi^2$ test determines that in bins 2 and 3 we can reject the hypothesis of Poisson noise around the mean value with 81% and 83% confidence, respectively. In other words, bins 2 and 3 show larger changes than we would expect a priori. The changes among the 10 profiles in each of the other 11 lateness bins are consistent with Poisson noise.
The $\chi^2$ test, however, is not suited to detecting trends. In order to detect trends we look to Pearson’s correlation coefficient, $r$ and its associated confidence interval. Pearson’s $r$ has limited validity in our case because it usually assumes that both variables are drawn from random distributions with nearly Gaussian statistics. In our case, lateness is deterministic rather than random and Gaussian. Shot noise in the profile is random and Gaussian, but differences of the X-ray profile among deciles need not be. Nonetheless, this is a useful first step to take to estimate the significance of any trends. Note also that Pearson’s $r$ itself, though widely used to measure the strength of a correlation, does not judge the existence of a correlation. We therefore place more stake in the confidence interval, although this involves additional assumptions about the variables correlated [@NR14.5]. If our visual impression of the profiles is accurate we would expect that in the trough, made up of bins 12, 13 and 1, the correlations are positive (increasing counts with increasing lateness). All 3 bins do indeed show positive correlations, 0.29, 0.66 and 0.35 respectively with associated confidence intervals of 58%, 96% and 68%. In other words, 2 of the 3 bins show a significant (greater than 1$\sigma =
65$%) correlation. Likewise we would expect that the first X-ray peak, made up of bins 2 and 3, would show negative correlations (decreasing counts with increasing lateness). Bins 2 and 3 do in fact show negative correlations of $-0.73$ and $-0.48$ with 98% and 85% confidence, consistent with our interpretation of the first peak decreasing with lateness. The only other bin that displays a significant correlation is bin 8, which shows a negative correlation of $-0.63$ with 95% confidence.
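Both per-bin tests can be sketched with invented decile counts for a single phase bin (the numbers below are illustrative, not the measured counts):

```python
# For one phase bin: chi-square of the ten decile counts against a constant
# mean with Poisson (variance = mean) errors, and Pearson's r of counts
# versus lateness as a simple trend detector.
import numpy as np

counts = np.array([1510, 1482, 1465, 1448, 1452, 1430, 1441, 1418, 1399, 1405])
lateness = np.arange(1, 11)

mean = counts.mean()
chi2 = np.sum((counts - mean) ** 2 / mean)        # compare to chi^2 with 9 dof
r = np.corrcoef(lateness, counts)[0, 1]           # negative: counts fall with lateness
```

The $\chi^2$ statistic asks only whether the counts fluctuate more than shot noise allows, while $r$ asks whether they do so in an ordered way with lateness, which is why both are reported.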
Next we looked to see if pairs of bins could be combined together to better detect the signal. Based on our results described above we were curious about whether the quantity $Y_{i,a} =
(c_{i\,2}+c_{i\,3})-(c_{i\,12}+c_{i\,13})$ or $Y_{i,b} =
(c_{i\,2}+c_{i\,3})-(c_{i\,13}+c_{i\,1})$ would show significant trends with lateness. Here $c_{i\,1}, c_{i\,2}, c_{i\,3}, c_{i\,12},
c_{i\,13}$ are the measured counts in bins 1, 2, 3, 12, and 13, at lateness $i$. $Y_{i,a}$ and $Y_{i,b}$ effectively measure, for the 10 profiles, the height of the peak minus the trough and are shown in Figure \[fig:plot\_compare\_10\]. Via a simple $\chi^2$ test we can reject with 97% confidence the hypothesis that the parent distribution of either of the $Y$’s is a constant at the mean value. More importantly Figure \[fig:plot\_compare\_10\] shows a clear systematic trend of $Y$ with lateness. A line fitted to the data and its associated Poisson uncertainty shown in Figure \[fig:plot\_compare\_10\] yields a slope of $m=-297$ and $-308$ respectively with a formal 1-standard deviation uncertainty of $\sigma=76$ for both, so both represent (not independently) $\sim 4 $ standard deviation detections of a trend in these data. For the sake of completeness we tried all possible differences of pairs of adjacent bins. The next highest magnitude slope was 208, but this and all other significant correlations included some subset of bins 12, 13, 1, 2, and 3. The correlation coefficients and associated confidence intervals for $Y_{i,a}$ and $Y_{i,b}$ are $-0.91$ at 99.97% and $-0.84$ at 99.74%.
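The weighted straight-line fit behind the quoted $m$ and $\sigma$ can be sketched as follows. The $Y_i$ values are toys with an injected $-300$ counts-per-decile trend, and the error scale of 690 is chosen only so that the slope uncertainty echoes the $\sigma=76$ quoted above:

```python
# Weighted least-squares straight line through (lateness, Y) points:
# slope m and its 1-standard-deviation uncertainty sm.
import numpy as np

rng = np.random.default_rng(3)
lateness = np.arange(1, 11)
sigma = np.full(10, 690.0)                         # toy per-point Poisson errors
y = 3000.0 - 300.0 * lateness + rng.normal(0.0, sigma)

w = 1.0 / sigma**2
S, Sx, Sxx = w.sum(), (w * lateness).sum(), (w * lateness**2).sum()
Sy, Sxy = (w * y).sum(), (w * lateness * y).sum()
delta = S * Sxx - Sx**2
m = (S * Sxy - Sx * Sy) / delta                    # fitted slope
sm = np.sqrt(S / delta)                            # its uncertainty
```

For equal errors and ten evenly spaced points, `sm` reduces to $\sigma_Y/\sqrt{\sum_i (x_i-\bar{x})^2} = 690/\sqrt{82.5} \approx 76$.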
Given the limited validity of the Pearson’s $r$ coefficient with non-random, non-normally distributed data, we sought a more rigorous estimation of the possibility that such a significant trend would arise amongst 10 such profiles merely by chance, i.e. merely by random fluctuation of each bin around its mean. To answer this question we performed a Monte Carlo simulation with $10^5$ realizations of the data. Each simulated data set consisted of 10 profiles, each of 13 bins. The counts in each bin were chosen as a Gaussian deviate of the X-ray pulse profile, averaged over lateness, and with variance set by Poisson statistics. Thus, in the simulated data sets all differences among the 10 profiles arose entirely from counting statistics. In each simulated data set, for every possible pair of summed adjacent bins ({1,2 & 3,4}, {1,2 & 4,5} ...{j,k & n,o}, with bins {j,k} adjacent and bins {n,o} adjacent: 66 pairs in all) we computed $Y_i$: $$Y_i = c_{ij}+c_{ik} - (c_{in} + c_{io})$$ where $i=1$ corresponds to the earliest profile in terms of radio phase, and $c_{ij}$ is the number of counts in the jth bin of the ith profile. For the set of points {$i$, $Y_i$} where $m$ and $\sigma$ are the slope and uncertainty of the best-fit straight line we computed the following statistic: $$\Gamma = (m/\sigma)^2$$ As for the fit to our data described above, we weighted the data by the uncertainties as given by Poisson statistics. We then compared the largest $\Gamma$ for each simulated data set to the largest $\Gamma$ for the actual data (16.4) and found a probability of 0.0024 that the correlation we observe could have been obtained by chance.
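A scaled-down sketch of this Monte Carlo (300 rather than $10^5$ realizations, and a flat toy mean profile instead of the measured one) might read:

```python
# Null test: draw fake data sets of 10 profiles x 13 bins as Gaussian
# counting fluctuations around a mean profile, and for every pair of
# summed adjacent bins record Gamma = (m/sigma)^2 of the fitted trend,
# keeping the largest per realization.
import numpy as np

rng = np.random.default_rng(4)
x = np.arange(10)
mean_profile = np.full(13, 2000.0)                # flat toy mean X-ray profile

def max_gamma(profiles):
    """Largest (m/sigma)^2 over all 66 pairs of summed adjacent bins."""
    pairs = [(j, j + 1) for j in range(12)]
    best = 0.0
    for a in range(len(pairs)):
        for b in range(a + 1, len(pairs)):
            ya = profiles[:, pairs[a]].sum(axis=1)
            yb = profiles[:, pairs[b]].sum(axis=1)
            y, w = ya - yb, 1.0 / (ya + yb)       # Poisson weights on the difference
            S, Sx, Sxx = w.sum(), (w * x).sum(), (w * x * x).sum()
            Sy, Sxy = (w * y).sum(), (w * x * y).sum()
            delta = S * Sxx - Sx**2
            m = (S * Sxy - Sx * Sy) / delta
            best = max(best, m * m * delta / S)   # (m / sigma_m)^2
    return best

null = np.array([max_gamma(rng.normal(mean_profile, np.sqrt(mean_profile),
                                      size=(10, 13)))
                 for _ in range(300)])
p_chance = np.mean(null >= 16.4)                  # fraction exceeding the observed value
```

With the full $10^5$ realizations and the measured mean profile, the analogous fraction is the quoted 0.0024.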
We tried other statistics to see if we could find one more sensitive to the presence of a correlation like that we observed. The most sensitive one included Pearson’s $r$ as follows: $$\Gamma^\prime = r^2 + (m/\sigma^\prime)^2$$ where $\sigma^\prime$ is the $\sqrt{{\rm mean}(c_{ij}+c_{ik},
c_{il}+c_{im})}$. $\Gamma^\prime$ yielded a significance of 0.0007. $\Gamma^\prime$ is more sensitive than $\Gamma$ to the proximity of our data to the fitted line, so it yields a smaller probability of false detection. Regardless, we retain the more straightforward $\Gamma$ as a conservative estimate of the significance of our result.
Of use in interpreting our results is knowledge of the character of the radio residuals by which we are binning the X-ray photons. KD83 did much work in this area, but one question they do not address directly is the following: if a particular pulse is “early", what is the chance that the next pulse will also be early? We calculated the autocorrelation function of the radio residual, shown in Figure \[fig:autocorr\]. The function is normalized by the autocorrelation at zero lag. The function at a lag of 1 pulse is represented by the left edge of the plotted curve, at a value of 0.066. The figure shows that the pulsar has very little “memory" of the lateness of the previous pulse, although it is interesting to note there is finite correlation out to 40 pulse periods.
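A toy version of this normalized autocorrelation, using synthetic AR(1) residuals tuned to the quoted lag-1 value of 0.066 rather than the actual data, might read:

```python
# Autocorrelation of toy residuals, normalized by the zero-lag value.
import numpy as np

rng = np.random.default_rng(5)
n = 20000
resid = np.empty(n)
resid[0] = rng.normal()
for i in range(1, n):                             # AR(1): weak one-pulse memory
    resid[i] = 0.066 * resid[i - 1] + rng.normal()

def autocorr(x, max_lag):
    """Autocorrelation at lags 1..max_lag, normalized by the zero-lag value."""
    x = x - x.mean()
    c0 = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / c0 for k in range(1, max_lag + 1)])

ac = autocorr(resid, 40)
```

For an AR(1) process the true autocorrelation falls as $0.066^{k}$, so the toy curve drops to noise within a few lags, unlike the data's finite correlation out to 40 periods.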
Scintillation {#sec.scintillation}
=============
In contrast to effects intrinsic to the pulsar, scintillation is unlikely to produce the observed association, because it does not affect X-rays; scintillation might erase such a correlation but it cannot introduce it. Diffractive scintillation for Vela at our frequency has a characteristic bandwidth of 2 kHz (Gupta 1995), far smaller than our observing bandwidth of 6.4 MHz. We therefore average over $\sim$3200 independent scintillation elements. Refractive scintillation modulates flux density over a wide bandwidth and has timescale $\sim$25 days [@Desai92]. This was shorter than the total span of the observations, but much longer than the span of any one observation. To combat any effect that this might have on the observed arrival times, we defined the 10 deciles separately for each observation; i.e. the specific values of residual that separated each of the 10 bins were calculated for each individual radio observation. Again, these cutoffs were defined in such a way that an equal number of radio pulses was associated with each decile.
We assumed that the pulsar was stationary relative to a refractive scintillation element during each single observation, and further tests to ensure that the length of the observation was not a factor were performed by renormalizing the cutoffs using both one hour and five minute timespans. Reanalyzing the X-ray data in one hour segments did not significantly change our results, and the five minute spans were found to be too short to accurately represent Vela’s emission.
Discussion {#sec.discussion}
==========
The early radio emission could result from coherent radiation along a different set of field lines (i.e. more leading) or from radiation at a higher altitude, or both. The work of KD83 suggests both. The early radio emission may also result from stochastic fluctuations in the radio beam intensity which would lead to pulse arrival time changes by changing the shape of the radio beam as it takes a finite time to sweep across our line of sight.
Petrova (2003, and references therein) offers a physical model that could explain the observed relationship between the radio and X-ray emission. Her model suggests that resonant absorption of radio emission from the outer magnetosphere leads to an increase in the pitch angles and momenta of the secondary pairs, which then leads to optical and higher energy emission by spontaneous synchrotron radiation. How could this model produce a changing X-ray pulse shape? Due to the effects of rotation of the magnetosphere and aberration, the early radio emission can cross a larger number of higher altitude field lines and at larger angles, thereby maximizing the opportunity for absorption by particles on those field lines, and therefore the production of high-energy radiation. Conversely a late radio photon on a path almost directly along the magnetic pole may escape the magnetosphere with many fewer interactions, since it will cross fewer open field lines and at smaller angle. The details of the above are somewhat unimportant as our knowledge of the magnetosphere and the plasma therein is limited, but the point is that as different parts of the radio beam are active, resonant absorption may happen at different rates in different parts of the magnetosphere, causing continuous change in the shape of the observed X-ray emission.
More generally our results imply a connection between the radio and X-ray emission mechanisms for Vela that is not consistent with outer gap models. In these models, the high energy emission results from a gap connection to the pole opposite from that producing the radio emission. The pole and outer gap associated with the same set of field lines are not visible to one observer. It is not clear how a correlation could exist between the radio and high energy regimes in these models.
The giant micropulse emission observed by J01 would cause a single radio pulse to arrive about 1 ms early, so it is realistic to consider the possibility that the giant micropulse emission is primarily responsible for the early arrival of the radio pulse. However, out of 20,085 pulses that J01 observed at 1413 MHz, 14 of them contained giant micropulses. So the giant micropulses may be influencing the first of our 10 deciles, but cannot contribute to the effect observed in the other 9 deciles. We conclude that Vela’s giant micropulse emission cannot be responsible for the effect presented here.
Conclusions and Future Work {#sec.conclusion}
===========================
We have evidence that links features of Vela’s X-ray emission with features of its radio emission. First, we find that X-ray pulses associated with early radio pulses show stronger emission at the main X-ray peak, which is the sharper of the two. Similarly, X-ray pulses associated with later radio pulses show stronger emission in the trough following the second X-ray peak, a region in phase near the radio peak. The trend we measure has a 0.2% probability of appearing in the data by chance. We conclude that there is a close relationship between X-ray and radio emission in the Vela pulsar.
We plan to further characterize the relationship between the radio and high energy emissions of pulsars to identify their origins and constrain magnetospheric models. In particular, we will explore the dependence of radio-to-X-ray correlations on the radio frequency and polarization properties of individual Vela pulses, both of which carry information about emission altitudes. Similar observations of other pulsars also promise useful insights as probes of different magnetic field strengths and emission/viewing geometries.
Many thanks to Ben Stappers, Michael Kramer, Russell Edwards, Wim Hermsen, David Helfand, Paul Ray, Simone Migliari and Tiziana Di Salvo for helpful comments. AL is grateful for the hospitality of the Australia Telescope National Facility and for a Research Corporation Grant in support of this research. ZA was supported by NASA grant NRA-99-01-LTSA-070. CG acknowledges support of NSF-AST-9731584. AH acknowledges support from the NASA Astrophysics Theory Program. RD acknowledges support as a Marie-Curie fellow via EU FP6 grant MIF1-CT-2005-021873. This research has made use of data obtained from the High Energy Astrophysics Science Archive Research Center (HEASARC), provided by NASA’s Goddard Space Flight Center.
[^1]: See http://pulsar.princeton.edu/tempo
---
abstract: 'How does our motor system solve the problem of anticipatory control in spite of a wide spectrum of response dynamics from different musculo-skeletal systems, transport delays as well as response latencies throughout the central nervous system? To a great extent, our highly-skilled motor responses are a result of a reactive feedback system, originating in the brain-stem and spinal cord, combined with a feed-forward anticipatory system that is adaptively fine-tuned by sensory experience and originates in the cerebellum. Based on that interaction, we design the counterfactual predictive control (CFPC) architecture, an anticipatory adaptive motor control scheme in which a feed-forward module, based on the cerebellum, steers an error feedback controller with *counterfactual* error signals. Those are signals that trigger reactions as actual errors would, but that do not code for any current or forthcoming errors. In order to determine the optimal learning strategy, we derive a novel learning rule for the feed-forward module that involves an eligibility trace and operates at the synaptic level. In particular, our eligibility trace provides a mechanism beyond coincidence detection in that it convolves a history of prior synaptic inputs with error signals. In the context of cerebellar physiology, this solution implies that Purkinje cell synapses should generate eligibility traces using a forward model of the system being controlled. From an engineering perspective, CFPC provides a general-purpose anticipatory control architecture equipped with a learning rule that exploits the full dynamics of the closed-loop system.'
author:
- |
Ivan Herreros-Alonso\
SPECS lab\
Universitat Pompeu Fabra\
Barcelona, Spain\
`ivan.herreros@upf.edu`\
Xerxes D. Arsiwalla\
SPECS lab\
Universitat Pompeu Fabra\
Barcelona, Spain\
Paul F.M.J. Verschure\
SPECS, UPF\
Catalan Institution of Research\
and Advanced Studies (ICREA)\
Barcelona, Spain
bibliography:
- 'NIPS.bib'
title: A Forward Model at Purkinje Cell Synapses Facilitates Cerebellar Anticipatory Control
---
Introduction
============
Learning and anticipation are central features of cerebellar computation and function [@Bastian2006]: the cerebellum learns from experience and is able to anticipate events, thereby complementing a reactive feedback control by an anticipatory feed-forward one [@hofstoetter2002cerebellum; @Herreros2013a]. This interpretation is based on a series of anticipatory motor behaviors that originate in the cerebellum. For instance, anticipation is a crucial component of acquired behavior in eye-blink conditioning [@Gormezano1983], a trial by trial learning protocol where an initially neutral stimulus such as a tone or a light (the conditioning stimulus, CS) is followed, after a fixed delay, by a noxious one, such as an air puff to the eye (the unconditioned stimulus, US). During early trials, a protective unconditioned response (UR), a blink, occurs reflexively in a feedback manner following the US. After training though, a well-timed anticipatory blink (the conditioned response, CR) precedes the US. Thus, learning results in the (partial) transference from an initial feedback action to an anticipatory (or predictive) feed-forward one. Similar responses occur during anticipatory postural adjustments, which are postural changes that precede voluntary motor movements, such as raising an arm while standing [@Massion1992]. The goal of these anticipatory adjustments is to counteract the postural and equilibrium disturbances that voluntary movements introduce. These behaviors can be seen as feedback reactions to events that after learning have been transferred to feed-forward actions anticipating the predicted events.
Anticipatory feed-forward control can yield high performance gains over feedback control whenever the feedback loop exhibits transmission (or transport) delays [@jordan1996computational]. However, even if a plant has negligible transmission delays, it may still have sizable inertial latencies. For example, if we apply a force to a visco-elastic plant, its peak velocity will be achieved after a certain delay; i.e. the velocity itself will lag the force. An efficient way to counteract this lag will be to apply forces anticipating changes in the desired velocity. That is, anticipation can be beneficial even when one can act instantaneously on the plant. Given that, here we address two questions: what is the optimal strategy to learn anticipatory actions in a cerebellar-based architecture? and how could it be implemented in the cerebellum?
To answer that, we design the counterfactual predictive control (CFPC) scheme, a cerebellar-based adaptive-anticipatory control architecture that learns to anticipate performance errors from experience. The CFPC scheme is motivated by the neuro-anatomy and physiology of eye-blink conditioning. It includes a reactive controller, which is an output-error feedback controller that models brain stem reflexes actuating on eyelid muscles, and a feed-forward adaptive component that models the cerebellum and learns to associate its inputs with the error signals driving the reactive controller. With CFPC we propose a generic scheme in which a feed-forward module enhances the performance of a reactive error feedback controller by steering it with signals that facilitate anticipation, namely, with *counterfactual errors*. Crucially, within CFPC, even though these counterfactual errors that enable predictive control are learned from past errors in behavior, they do not reflect any current or forthcoming error in the ongoing behavior.
In addition to eye-blink conditioning and postural adjustments, the interaction between reactive and cerebellar-dependent acquired anticipatory behavior has also been studied in paradigms such as visually-guided smooth pursuit eye movements [@Lisberger1987]. All these paradigms can be abstracted as tasks in which the same predictive stimuli and disturbance or reference signal are repeatedly experienced. In accordance with that, we operate our control scheme in trial-by-trial (batch) mode. With that, we derive a learning rule for anticipatory control that modifies the well-known least-mean-squares/Widrow-Hoff rule with an eligibility trace. More specifically, our model predicts that to facilitate learning, parallel fiber to Purkinje cell synapses implement a forward model that generates an eligibility trace. Finally, to stress that CFPC is not specific to eye-blink conditioning, we demonstrate its application with a smooth pursuit task.
Methods
=======
Cerebellar Model
----------------
![Anatomical scheme of a Cerebellar Purkinje cell. The $x_j$ denote parallel fiber inputs to Purkinje synapses (in red) with weights $w_j$. $o$ denotes the output of the Purkinje cell. The error signal $e$, through the climbing fibers (in green), modulates synaptic weights. []{data-label="Purkinje"}](purkinje_cell){width="40.00000%"}
We follow the simplifying approach of modeling the cerebellum as a linear adaptive filter, while focusing on computations at the level of the Purkinje cells, which are the main output cells of the cerebellar cortex [@Fujita1982; @Dean2010]. Over the mossy fibers, the cerebellum receives a wide range of inputs. Those inputs reach Purkinje cells via parallel fibers (Fig. \[Purkinje\]), which cross the dendritic trees of Purkinje cells in a ratio of up to $1.5 \times 10^6$ parallel fiber synapses per cell [@Eccles67]. We denote the signal carried by a particular fiber as $x_j$, $j \in [1,G]$, with $G$ equal to the total number of input fibers. These inputs from the mossy/parallel fiber pathway carry contextual information (interoceptive or exteroceptive) that allows the Purkinje cell to generate a functional output. We refer to these inputs as *cortical bases*, indicating that they are localized at the cerebellar cortex and that they provide a repertoire of states and inputs that the cerebellum combines to generate its output $o$. As we will develop a discrete time analysis of the system, we use $n$ to indicate time (or time-step). The output of the cerebellum at any time point $n$ results from a weighted sum of those cortical bases. $w_j$ indicates the weight or synaptic efficacy associated with the fiber $j$. Thus, we have ${\boldsymbol{x}}[n] = \left[ x_1[n], \dots , x_G[n] \right]^\intercal$ and ${\boldsymbol{w}}[n]=\left[ w_1[n], \dots , w_G[n] \right]^\intercal$ (where the transpose, $^\intercal$, indicates that ${\boldsymbol{x}}[n]$ and ${\boldsymbol{w}}[n]$ are column vectors) containing the set of inputs and synaptic weights at time $n$, respectively, which determine the output of the cerebellum according to $$o[n]={\boldsymbol{x}}[n]^\intercal{\boldsymbol{w}}[n]$$ The adaptive feed-forward control of the cerebellum stems from updating the weights according to a rule of the form $$\label{defLearningRule}
\Delta w_j[n+1]=f(x_j[n], \dots, x_j[1], e[n],\Theta)$$ where $\Theta$ denotes global parameters of the learning rule; $x_j[n], \dots, x_j[1]$, the history of pre-synaptic inputs to synapse $j$; and $e[n]$, an error signal that is the same for all synapses, corresponding to the difference between the desired output, $r$, and the actual output, $y$, of the controlled plant. Note that in drawing an analogy with the eye-blink conditioning paradigm, we use the simplifying convention of considering the noxious stimulus (the air-puff) as a reference, $r$, that indicates that the eyelids should close; the closure of the eyelid as the output of the plant, $y$; and the sensory response to the noxious stimulus as an error, $e$, that encodes the difference between the desired, $r$, and the actual eyelid closures, $y$. Given this, we advance a new learning rule, $f$, that achieves optimal performance in the context of eye-blink conditioning and other cerebellar learning paradigms.
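As a minimal sketch of the adaptive-filter model above, the cerebellar output is a plain inner product of basis activities and synaptic weights; the toy basis values and weights below are invented for illustration:

```python
import numpy as np

def cerebellar_output(x, w):
    """Adaptive-filter cerebellar output: o[n] = x[n]^T w[n],
    a weighted sum of the cortical bases carried by the parallel fibers."""
    return float(np.asarray(x) @ np.asarray(w))

# Hypothetical toy values: three parallel-fiber inputs and their weights.
x = [1.0, 0.5, 0.0]
w = [0.2, 0.4, 1.0]
o = cerebellar_output(x, w)  # 1.0*0.2 + 0.5*0.4 + 0.0*1.0 = 0.4
```

The learning rule $f$ then only changes the weight vector between evaluations of this product, which is what the remainder of the paper derives.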
Cerebellar Control Architecture
-------------------------------
![Neuroanatomy of eye-blink conditioning and the CFPC architecture. *Left*: Mapping of signals to anatomical structures in eye-blink conditioning [@DeZeeuw2005]; regular arrows indicate external inputs and outputs, arrows with inverted heads indicate neural pathways. *Right*: CFPC architecture. Note that the feedback controller, $C$, and the feed-forward module, $FF$, belong to the control architecture, while the plant, $P$, denotes an object controlled. Other abbreviations: $r$, reference signal; $y$, plant’s output; $e$, output error; $\mathbf{x}$, basis signals; $o$, feed-forward signal; and $u$, motor command.[]{data-label="anatomy"}](Mapping){width="0.99\linewidth"}
![Neuroanatomy of eye-blink conditioning and the CFPC architecture. *Left*: Mapping of signals to anatomical structures in eye-blink conditioning [@DeZeeuw2005]; regular arrows indicate external inputs and outputs, arrows with inverted heads indicate neural pathways. *Right*: CFPC architecture. Note that the feedback controller, $C$, and the feed-forward module, $FF$, belong to the control architecture, while the plant, $P$, denotes an object controlled. Other abbreviations: $r$, reference signal; $y$, plant’s output; $e$, output error; $\mathbf{x}$, basis signals; $o$, feed-forward signal; and $u$, motor command.[]{data-label="anatomy"}](NIPS_architecture){width="0.99\linewidth"}
We embed the adaptive filter cerebellar module in a layered control architecture, namely the CFPC architecture, based on the interaction between brain stem motor nuclei driving motor reflexes and the cerebellum, such as the one established between the cerebellar microcircuit responsible for conditioned responses and the brain stem reflex circuitry that produces unconditioned eye-blinks [@hesslow2002functional] (Fig. \[anatomy\] *left*). Note that in our interpretation of this anatomy we assume that cerebellar output, $o$, feeds the lower reflex controller (Fig. \[anatomy\] *right*). Put in control theory terms, within the CFPC scheme an adaptive feed-forward layer supplements a negative feedback controller steering it with feed-forward signals.
Our architecture uses a single-input single-output negative-feedback controller. The controller receives as input the output error $e=r-y$. For the derivation of the learning algorithm, we assume that both plant and controller are linear and time-invariant (LTI) systems. Importantly, the feedback controller and the plant form a reactive closed-loop system, that mathematically can be seen as a system that maps the reference, $r$, into the plant’s output, $y$. A feed-forward layer that contains the above-mentioned cerebellar model provides the negative feedback controller with an additional input signal, $o$. We refer to $o$ as a *counter-factual* error signal, since although it *mechanistically* drives the negative feedback controller analogously to an error signal it is not an *actual* error. The counterfactual error is generated by the feed-forward module that receives an output error, $e$, as its teaching signal. Notably, from the point of view of the reactive layer closed-loop system, $o$ can also be interpreted as a signal that offsets $r$. In other words, even if $r$ remains the reference that sets the target of behavior, $r+o$ functions as the *effective* reference that drives the closed-loop system.
Results
=======
Derivation of the gradient descent update rule for the cerebellar control architecture {#Derivation}
--------------------------------------------------------------------------------------
We apply the CFPC architecture defined in the previous section to a task that consists in following a finite reference signal $\mathbf{r} \in \mathbb{R}^N$ that is repeated trial-by-trial. To analyze this system, we use the discrete time formalism and assume that all components are linear time-invariant (LTI). Given this, both reactive controller and plant can be lumped together into a closed-loop dynamical system, which can be described by its dynamics, $\mathbf{A}$, input, $\mathbf{B}$, measurement, $\mathbf{C}$, and feed-through, $\mathbf{D}$, matrices. In general, these matrices describe how the state of a dynamical system autonomously evolves with time, $\mathbf{A}$; how inputs affect system states, $\mathbf{B}$; how states are mapped into outputs, $\mathbf{C}$; and how inputs instantaneously affect the system’s output, $\mathbf{D}$ [@Astrom2012]. As we consider a reference of finite length $N$, we can construct the $N$-by-$N$ transfer matrix $\mathcal{T}$ as follows [@Boyd2008] $$\mathcal{T} = \left[\begin{array}{ccccc}
{\boldsymbol{D}} & 0 & 0 & \hdots & 0 \\
{\boldsymbol{C}} {\boldsymbol{B}} & {\boldsymbol{D}} & 0 & \hdots & 0 \\
{\boldsymbol{C}} {\boldsymbol{A}} {\boldsymbol{B}} & {\boldsymbol{C}} {\boldsymbol{B}} & {\boldsymbol{D}} & \hdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
{\boldsymbol{C}} {\boldsymbol{A}}^{N-2} {\boldsymbol{B}} & {\boldsymbol{C}} {\boldsymbol{A}}^{N-3} {\boldsymbol{B}} & {\boldsymbol{C}} {\boldsymbol{A}}^{N-4} {\boldsymbol{B}} & \hdots & {\boldsymbol{D}}
\end{array}\right]$$ With this transfer matrix we can map any given reference $\mathbf{r}$ into an output $\mathbf{y}_r$ using $\mathbf{y}_r=\mathcal{T} \mathbf{r}$, obtaining what would have been the complete output trajectory of the plant on an entirely feedback-driven trial. Note that the first column of $\mathcal{T}$ contains the impulse response curve of the closed-loop system, while the rest of the columns are obtained by shifting that impulse response down. Therefore, we can build the transfer matrix $\mathcal{T}$ either in a model-based manner, deriving the state-space characterization of the closed-loop system, or in a measurement-based manner, measuring the impulse response curve. Additionally, note that $(\mathbf{I}-\mathcal{T})\mathbf{r}$ yields the error of the feedback control in following the reference, a signal which we denote with $\mathbf{e}_0$.
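The model-based construction of $\mathcal{T}$ can be sketched as follows, assuming the closed-loop matrices are known and the system is single-input single-output; the one-state example in the comments is hypothetical:

```python
import numpy as np

def transfer_matrix(A, B, C, D, N):
    """N-by-N transfer matrix of a SISO LTI closed-loop system.
    Its first column holds the impulse response D, CB, CAB, CA^2B, ...;
    every other column is that response shifted down (lower triangular)."""
    A, B, C, D = (np.atleast_2d(M).astype(float) for M in (A, B, C, D))
    h = np.empty(N)
    h[0] = D[0, 0]
    AkB = B
    for n in range(1, N):
        h[n] = (C @ AkB)[0, 0]   # Markov parameter C A^(n-1) B
        AkB = A @ AkB
    T = np.zeros((N, N))
    for j in range(N):
        T[j:, j] = h[:N - j]     # shifted copies of the impulse response
    return T

# Hypothetical one-state leaky system: A=0.5, B=C=1, D=0
# gives the impulse response 0, 1, 0.5, 0.25, ...
T = transfer_matrix([[0.5]], [[1.0]], [[1.0]], [[0.0]], 4)
```

With $\mathcal{T}$ in hand, $\mathbf{y}_r=\mathcal{T}\mathbf{r}$ and $\mathbf{e}_0=(\mathbf{I}-\mathcal{T})\mathbf{r}$ follow by plain matrix products; the measurement-based route would instead fill the first column with a recorded impulse response.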
Let $\mathbf{o} \in \mathbb{R}^N$ be the entire feed-forward signal for a given trial. Given commutativity, we can consider that, from the point of view of the closed-loop system, $\mathbf{o}$ is added directly to the reference $\mathbf{r}$ (Fig. \[anatomy\] *right*). In that case, we can use $\mathbf{y}=\mathcal{T}(\mathbf{r}+\mathbf{o})$ to obtain the output of the closed-loop system when it is driven by both the reference and the feed-forward signal. The feed-forward module only outputs linear combinations of a set of bases. Let $\mathbf{X} \in \mathbb{R}^{N \times G}$ be a matrix with the content of the $G$ bases during all the $N$ time steps of a trial. The feed-forward signal becomes $\mathbf{o}=\mathbf{Xw}$, where $\mathbf{w} \in \mathbb{R}^G$ contains the mixing weights. Hence, the output of the plant given a particular $\mathbf{w}$ becomes $\mathbf{y}=\mathcal{T}(\mathbf{r}+\mathbf{Xw})$.
We implement learning as the process of adjusting the weights $\mathbf{w}$ of the feed-forward module in a trial-by-trial manner. At each trial the same reference signal, $\mathbf{r}$, and bases, $\mathbf{X}$, are repeated. Through learning we want to converge to the optimal weight vector $\mathbf{w}^*$ defined as $$\mathbf{w}^* = \operatorname*{arg\,min}_w c(\mathbf{w}) = \operatorname*{arg\,min}_w \frac{1}{2} \mathbf{e}^\intercal \mathbf{e}= \operatorname*{arg\,min}_w \frac{1}{2} (\mathbf{r}-\mathcal{T}(\mathbf{r}+\mathbf{Xw}))^\intercal(\mathbf{r}-\mathcal{T}(\mathbf{r}+\mathbf{Xw}))$$ where $c$ indicates the objective function to minimize, namely the $L_2$ norm or sum of squared errors. With the substitution $\tilde{\mathbf{X}}=\mathcal{T}\mathbf{X}$ and using $\mathbf{e}_0 = (\mathbf{I}-\mathcal{T})\mathbf{r}$, the minimization problem can be cast as a *canonical* linear least-squares problem: $$\mathbf{w}^* = \operatorname*{arg\,min}_w \frac{1}{2}
(\mathbf{e}_0-\tilde{\mathbf{X}}\mathbf{w})^\intercal(\mathbf{e}_0-\tilde{\mathbf{X}}\mathbf{w})$$ On the one hand, this allows us to directly find the least-squares solution for $\mathbf{w}^*$, that is, $\mathbf{w}^*=\tilde{\mathbf{X}}^\dagger \mathbf{e}_0$, where $\dagger$ denotes the Moore-Penrose pseudo-inverse. On the other hand, and more interestingly, with $\mathbf{w}[k]$ being the weights at trial $k$ and $\mathbf{e}[k] = \mathbf{e}_0-\tilde{\mathbf{X}}\mathbf{w}[k]$, we can obtain the gradient of the error function at trial $k$ with respect to ${\boldsymbol{w}}$ as follows: $$\nabla_w c = -\tilde{\mathbf{X}}^\intercal \mathbf{e}[k] = -\mathbf{X}^\intercal \mathcal{T}^\intercal \ \mathbf{e}[k]$$ Thus, setting $\eta$ as a properly scaled learning rate (the only global parameter $\Theta$ of the rule), we can derive the following gradient descent strategy for the update of the weights between trials: $$\label{eqUpdate}
\mathbf{w}[k+1] = \mathbf{w}[k] + \eta \mathbf{X}^\intercal \mathcal{T}^\intercal \mathbf{e}[k]$$ This solves for the learning rule $f$ in Eq. \[defLearningRule\]. Note that $f$ is consistent with both the cerebellar anatomy (Fig. \[anatomy\] *left*) and the control architecture (Fig. \[anatomy\] *right*) in that the feed-forward module/cerebellum only requires two signals to update its weights/synaptic efficacies: the basis inputs, $\mathbf{X}$, and the error signal, $\mathbf{e}$.
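A numerical sketch of the trial-by-trial update in Eq. \[eqUpdate\], checked against the closed-form least-squares solution; the random transfer matrix, bases and feedback-only error below are stand-ins for illustration, not the smooth pursuit setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N, G = 50, 5
# Stand-in quantities (not the paper's plant): a lower-triangular transfer
# matrix T with unit diagonal, random basis signals X, and an error e0.
T = np.tril(0.1 * rng.standard_normal((N, N))) + np.eye(N)
X = rng.standard_normal((N, G))
e0 = rng.standard_normal(N)

Xt = T @ X                        # effective bases, X-tilde = T X
w_star = np.linalg.pinv(Xt) @ e0  # least-squares optimum

# Gradient descent across trials: w[k+1] = w[k] + eta X^T T^T e[k].
eta = 1.0 / np.linalg.eigvalsh(Xt.T @ Xt).max()  # step size below 2/lambda_max
w = np.zeros(G)
for k in range(20000):
    e = e0 - Xt @ w               # error observed at trial k
    w += eta * X.T @ (T.T @ e)    # the derived rule, with the T^T factor
```

The fixed point of the iteration coincides with the pseudo-inverse solution $\mathbf{w}^*=\tilde{\mathbf{X}}^\dagger\mathbf{e}_0$ mentioned in the text.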
$\mathcal{T}^\intercal$ facilitates a synaptic eligibility trace
----------------------------------------------------------------
The standard least mean squares (LMS) rule (also known as the Widrow-Hoff or decorrelation learning rule) can be represented in its batch version as $\mathbf{w}[k+1] = \mathbf{w}[k] + \eta \mathbf{X}^\intercal \mathbf{e}[k]$. Hence, the only difference between the batch LMS rule and the one we have derived is the insertion of the matrix factor $\mathcal{T}^\intercal$. Now we will show how this factor acts as a filter that computes an eligibility trace at each weight/synapse. Note that the update of a single weight, according to Eq. \[eqUpdate\], becomes $$w_j[k+1] = w_j[k] + \eta \mathbf{x}_j^\intercal \mathcal{T}^\intercal \mathbf{e}[k]$$ where $\mathbf{x}_j$ contains the sequence of values of the cortical basis $j$ during the entire trial. This can be rewritten as $$\label{errorAsAvector}
w_j[k+1] = w_j[k] + \eta \mathbf{h}_j^\intercal \mathbf{e}[k]$$ with $\mathbf{h}_j \equiv \mathcal{T} \mathbf{x}_j$. The above inner product can be expressed as a sum of scalar products $$\label{errorAsAscalar}
w_j[k+1] = w_j[k] + \eta \sum_{n=1}^N \mathbf{h}_j[n] \mathbf{e}[k,n]$$ where $n$ indexes the within-trial time-step. Note that $\mathbf{e}[k]$ in Eq. \[errorAsAvector\] refers to the whole error signal at trial $k$ whereas $\mathbf{e}[k,n]$ in Eq. \[errorAsAscalar\] refers to the error value in the $n$-th time-step of trial $k$. It is now clear that each $\mathbf{h}_j[n]$ weighs how much an error arriving at time $n$ should modify the weight $w_j$, which is precisely the role of an eligibility trace. Note that since $\mathcal{T}$ contains in its columns/rows shifted repetitions of the impulse response curve of the closed-loop system, the eligibility trace codes, at any time $n$, the convolution of the sequence of previous inputs with the impulse response curve of the reactive-layer closed loop. Indeed, in each synapse, the eligibility trace is generated by a forward model of the closed-loop system that is exclusively driven by the basis signal.
Consequently, our main result is that by deriving a gradient descent algorithm for the CFPC cerebellar control architecture we have obtained an exact definition of the suitable eligibility trace. That definition guarantees that the set of weights/synaptic efficacies are updated in a locally optimal manner in the weights’ space.
On-line gradient descent algorithm
----------------------------------
The trial-by-trial formulation above allowed for a straightforward derivation of the (batch) gradient descent algorithm. As it lumped together all computations occurring in a same trial, it accounted for time within the trial implicitly rather than explicitly: one-dimensional time-signals were mapped onto points in a high-dimensional space. However, after having established the gradient descent algorithm, we can implement the same rule in an on-line manner, dropping the repetitiveness assumption inherent to trial-by-trial learning and performing all computations locally in time. Each weight/synapse must have a process associated to it that outputs the eligibility trace. That process passes the incoming (unweighted) basis signal through a (forward) model of the closed-loop as follows: $$\begin{array}{rcl}
{\boldsymbol{s}}_j[n+1] & = & {\boldsymbol{A}} {\boldsymbol{s}}_j[n] + {\boldsymbol{B}} x_j[n] \\
h_j[n] & = & {\boldsymbol{C}} {\boldsymbol{s}}_j[n] + {\boldsymbol{D}} x_j[n]
\end{array}$$ where the matrices ${\boldsymbol{A}}$, ${\boldsymbol{B}}$, ${\boldsymbol{C}}$ and ${\boldsymbol{D}}$ refer to the closed-loop system (they are the same matrices that we used to define the transfer matrix $\mathcal{T}$), and ${\boldsymbol{s}}_j[n]$ is the state vector of the forward model of synapse $j$ at time-step $n$. In practice, each “synaptic” forward model computes what would have been the effect of having driven the closed-loop system with each basis signal alone. Given the superposition principle, the outcome of that computation can also be interpreted as saying that $h_j[n]$ indicates what would have been the displacement over the current output of the plant, $y[n]$, achieved by feeding the closed-loop system with the basis signal $x_j$ alone. The process of weight update is completed as follows: $$w_j[n+1] = w_j[n] + \eta h_j[n] e[n]$$ At each time step $n$, the error signal $e[n]$ is multiplied by the current value of the eligibility trace $h_j[n]$, scaled by the learning rate $\eta$, and added to the current weight $w_j[n]$. Therefore, whereas the contribution of each basis to the output of the adaptive filter depends only on its current value and weight, the change in weight depends on the current and past values passed through a forward model of the closed-loop dynamics.
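The per-synapse forward model and on-line update can be sketched as below; the one-state closed-loop system used in the comments is hypothetical:

```python
import numpy as np

def eligibility_trace(A, B, C, D, x_j):
    """Eligibility trace of synapse j: pass the raw basis signal x_j
    through a forward model of the closed-loop system,
        s_j[n+1] = A s_j[n] + B x_j[n],   h_j[n] = C s_j[n] + D x_j[n]."""
    A, B, C, D = (np.atleast_2d(M).astype(float) for M in (A, B, C, D))
    s = np.zeros((A.shape[0], 1))
    h = np.empty(len(x_j))
    for n, x in enumerate(x_j):
        h[n] = (C @ s + D * x)[0, 0]
        s = A @ s + B * x
    return h

def online_update(w_j, eta, h_j_n, e_n):
    """On-line weight update: w_j[n+1] = w_j[n] + eta * h_j[n] * e[n]."""
    return w_j + eta * h_j_n * e_n

# Hypothetical one-state system (A=0.5, B=C=1, D=0): an impulse on the
# basis signal yields the system's impulse response as the trace.
h = eligibility_trace([[0.5]], [[1.0]], [[1.0]], [[0.0]], [1.0, 0.0, 0.0, 0.0])
```

Each synapse thus runs its own copy of the closed-loop dynamics, driven solely by its unweighted basis input, exactly as the state-space recursion above prescribes.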
Simulation of a visually-guided smooth pursuit task
---------------------------------------------------
We demonstrate the CFPC approach in an example of a visual smooth pursuit task in which the eyes have to track a target moving on a screen. Even though the simulation does not capture all the complexity of a smooth pursuit task, it illustrates our anticipatory control strategy. We model the plant (eye and ocular muscles) with a two-dimensional linear filter that maps motor commands into angular positions. Our model is an extension of the model in [@Porrill2007], even though in that work the plant was considered in the context of the vestibulo-ocular reflex. In particular, we use a chain of two leaky integrators: a slow integrator with a relaxation constant of 100 ms drives the eyes back to the rest position; the second integrator, with a fast time constant of 3 ms, ensures that the change in position does not occur instantaneously. To this basic plant, we add a reactive control layer modeled as a proportional-integral (PI) error-feedback controller, with proportional gain $k_p$ and integral gain $k_i$. The control loop includes a 50 ms delay in the error feedback, to account for both the actuation and the sensing latencies. We choose gains such that reactive tracking lags the target by approximately 100 ms; this gives $k_p=20$ and $k_i=100$. To complete the anticipatory and adaptive control architecture, the closed-loop system is supplemented by the feed-forward module.
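The plant alone can be sketched as a forward-Euler discretization at the 1 ms step; note that the unit DC gain assumed for each leaky-integrator stage is our assumption, since the gain normalization is not specified in the text:

```python
import numpy as np

DT = 1e-3                          # 1 ms simulation time-step
TAU_SLOW, TAU_FAST = 0.100, 0.003  # relaxation constants from the text

def plant_response(u):
    """Chain of two leaky integrators mapping motor commands u to eye
    position y: the 3 ms stage smooths the command so position cannot
    jump, and the 100 ms stage drifts back to rest without input.
    Assumed form: tau * dv/dt = u - v (unit steady-state gain per stage)."""
    y = np.zeros(len(u))
    v = p = 0.0
    for n in range(len(u) - 1):
        v += DT * (u[n] - v) / TAU_FAST   # fast stage
        p += DT * (v - p) / TAU_SLOW      # slow stage
        y[n + 1] = p
    return y

# Step command held for 1 s: the output rises smoothly toward the command.
y = plant_response(np.ones(1000))
```

On top of this plant, the reactive layer closes the loop with the PI controller ($k_p=20$, $k_i=100$) and the 50 ms error-feedback delay described in the text.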
\[Results1\]
![Behavior of the system. Left: Reference ($\mathbf{r}$) and output of the system before ($\mathbf{y}[1]$) and after learning ($\mathbf{y}[50]$). Right: Error before $\mathbf{e}[1]$ and after learning $\mathbf{e}[50]$ and output acquired by cerebellar/feed-forward component ($\mathbf{o}[50]$)[]{data-label="fig1"}](F1){width="0.99\linewidth"}
![Behavior of the system. Left: Reference ($\mathbf{r}$) and output of the system before ($\mathbf{y}[1]$) and after learning ($\mathbf{y}[50]$). Right: Error before $\mathbf{e}[1]$ and after learning $\mathbf{e}[50]$ and output acquired by cerebellar/feed-forward component ($\mathbf{o}[50]$)[]{data-label="fig1"}](F2){width="0.99\linewidth"}
The architecture implementing the forward-model-based gradient descent algorithm is applied to a task structured in trials of $2.5$ sec duration. Within each trial, a target remains still at the center of the visual scene for a duration of 0.5 sec, next it moves rightwards for 0.5 sec with constant velocity, remains still for 0.5 sec and then repeats the sequence of movements in reverse, returning to the center. The cerebellar component receives 20 Gaussian basis signals ($\mathbf{X}$) whose receptive fields are defined in the temporal domain, relative to trial onset, with a width (standard deviation) of $50$ ms and spaced by $100$ ms. The whole system is simulated using a $1$ ms time-step. To construct the matrix $\mathcal{T}$, we computed the impulse response of the closed-loop system.
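The basis matrix $\mathbf{X}$ can be generated as below; the widths and spacing follow the text, but the exact placement of the centers within the trial is an assumption:

```python
import numpy as np

DT = 1e-3           # 1 ms time-step
N = int(2.5 / DT)   # 2.5 s trial -> 2500 samples
SIGMA = 0.05        # 50 ms receptive-field width (standard deviation)

t = np.arange(N) * DT
# 20 Gaussian centers spaced by 100 ms; starting them 100 ms after
# trial onset is an assumed placement.
centers = 0.1 + 0.1 * np.arange(20)
X = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / SIGMA) ** 2)
```

Each column of $\mathbf{X}$ is one cortical basis $x_j$ sampled over the trial; stacking them gives the $N\times G$ matrix that enters the derivation of the update rule above.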
At the first trial, before any learning, the output of the plant lags the reference signal by approximately $100$ ms, converging to the target position only when the target remains still for about $300$ ms (Fig. \[fig1\] *left*). As a result of learning, the plant’s behavior shifts from a reactive to an anticipatory mode, becoming able to track the reference without any delay. Indeed, the error that is sizable during the target displacement before learning almost completely disappears by the 50$^{th}$ trial (Fig. \[fig1\] *right*). That cancellation results from learning the weights that generate a feed-forward predictive signal that leads the changes in the reference signal (onsets and offsets of target movements) by approximately $100$ ms (Fig. \[fig1\] *right*). Convergence of the algorithm is remarkably fast: by trial $7$ it has almost converged to the optimal solution (Fig. \[fig2\]).
\[Results2\]
![Performance achieved with different learning rules. Representative learning curves of the forward model-based eligibility trace gradient descent (FM-ET), the simple Widrow-Hoff (WH) and the Widrow-Hoff algorithm with a delta-eligibility trace matched to error feedback delay (WH+50 ms) or with an eligibility trace exceeding that delay by 20 ms (WH+70 ms). Error is quantified as the relative root mean-squared error (rRMSE), scaled proportionally to the error in the first trial. Error of the optimal solution, obtained with ${\boldsymbol{w}}^*=(\mathcal{T}{\boldsymbol{X}})^\dagger {\boldsymbol{e}}_0$, is indicated with a dashed line.[]{data-label="fig2"}](F3){width="0.99\linewidth"}
To assess how much our forward-model-based eligibility trace contributes to performance, we test three alternative algorithms. In all cases we employ the same control architecture, changing only the plasticity rule such that we either use no eligibility trace, thus implementing the basic Widrow-Hoff learning rule, or use the Widrow-Hoff rule extended with a delta-function eligibility trace that matches the latency of the error feedback (50 ms) or slightly exceeds it (70 ms). Performance with the basic WH rule worsens rapidly, whereas performance with the WH learning rule using a “pure delay” eligibility trace matched to the transport delay improves, but not as fast as with the forward-model-based eligibility trace (Fig. \[fig2\]). Indeed, in this case, the best strategy for implementing a delayed delta eligibility trace is to set a delay exceeding the transport delay by around 20 ms, thus matching the peak of the impulse response. In that case (WH+70 ms), the system performs almost as well as with the forward-model eligibility trace. This last result implies that, even though the literature usually emphasizes the role of transport delays, eligibility traces must also account for response lags due to the intrinsic dynamics of the plant.
To summarize our results, we have shown with a basic simulation of a visual smooth pursuit task that generating the eligibility trace by means of a forward model ensures convergence to the optimal solution and accelerates learning by guaranteeing that it follows a gradient descent.
Discussion
==========
In this paper we have introduced a novel formulation of cerebellar anticipatory control, consistent with experimental evidence, in which a forward model has emerged naturally at the level of Purkinje cell synapses. From a machine learning perspective, we have also provided an optimality argument for the derivation of an eligibility trace, a construct that was often thought of in more heuristic terms as a mechanism to bridge time-delays [@barto1983neuronlike; @shibata2001biomimetic; @mckinstry2006cerebellar].
The first seminal works on cerebellar computational models emphasized its role as an associative memory [@Marr1969; @albus1971theory]. Later, the cerebellum was investigated as a device processing correlated time signals [@Fujita1982; @Kawato1987; @Dean2010]. In this latter framework, the computational concept of an eligibility trace emerged as a heuristic construct that allowed compensating for transmission delays in the circuit [@Kettner1997; @shibata2001biomimetic; @Porrill2007], which introduce lags in the cross-correlation between signals. Concretely, that was referred to as the problem of *delayed error feedback*, due to which, by the time an error signal reaches a cell, the synapses accountable for that error are no longer the ones currently active, but those that were active at the time when the motor signals that caused the actual error were generated. This view has, however, neglected the fact that beyond transport delays, the response dynamics of physical plants also influence how past pre-synaptic signals could have related to the current output of the plant. Indeed, for a linear plant, the impulse response function provides the complete description of how inputs drive the system, and as such it integrates transmission delays as well as the dynamics of the plant.
Even though cerebellar microcircuits have been used as models for building control architectures, e.g., the feedback-error learning model [@Kawato1987], our CFPC is novel in that it links the cerebellum to the input of the feedback controller, ensuring that the computational features of the feedback controller are exploited at all times. Within the domain of adaptive control, there are remarkable similarities at the functional level between CFPC and iterative learning control (ILC) [@amann1996iterative], which is an input design technique for learning optimal control signals in repetitive tasks. The difference between CFPC and ILC lies in the fact that ILC controllers directly learn a control signal, whereas CFPC learns a counterfactual error signal that steers a feedback controller. However, the similarity between the two approaches can help in extending CFPC to more complex control tasks.
With our CFPC framework, we have modeled the cerebellar system at a very high level of abstraction: we have not included bio-physical constraints underlying neural computations, have obviated known anatomical connections such as the cerebellar nucleo-olivary inhibition [@Bengtsson2006; @Herreros2013a], and have made simplifications such as collapsing cerebellar cortex and nuclei into the same computational unit. On the one hand, such a choice of high-level abstraction may indeed be beneficial for deriving general-purpose machine learning or adaptive control algorithms. On the other hand, it is remarkable that in spite of this abstraction our framework makes fine-grained predictions at the micro-level of biological processes. Namely, that in a cerebellar microcircuit [@Apps2005], the response dynamics of the secondary messengers [@wang2000coincidence] regulating plasticity of Purkinje cell synapses to parallel fibers must mimic the dynamics of the motor system controlled by that cerebellar microcircuit. Notably, the logical consequence of this prediction, that different Purkinje cells should display different plasticity rules according to the system that they control, has been validated by recordings of single Purkinje cells in vivo [@suvrathan2016timing]. In conclusion, we find that a normative interpretation of plasticity rules in Purkinje cell synapses emerges from our systems-level CFPC computational architecture. That is, in order to generate optimal eligibility traces, synapses must include a forward model of the controlled subsystem. This conclusion, in the broader picture, suggests that synapses are not merely multiplicative gains, but rather the loci of complex dynamic computations that are relevant from a functional perspective, both in terms of optimizing storage capacity [@benna2016computational; @lahiri2013memory] and of fine-tuning learning rules to behavioral requirements.
### Acknowledgments {#acknowledgments .unnumbered}
The research leading to these results has received funding from the European Commission’s Horizon 2020 socSMC project (socSMC-641321H2020-FETPROACT-2014) and by the European Research Council’s CDAC project (ERC-2013-ADG 341196).
---
abstract: 'In this paper we extend the new family of (quantitative) Belief Conditioning Rules (BCR) recently developed in the Dezert-Smarandache Theory (DSmT) to their qualitative counterpart for belief revision. Since the revision of quantitative as well as qualitative belief assignments given the occurrence of a new event (the conditioning constraint) can be done in many possible ways, we present here only what we consider the most appealing Qualitative Belief Conditioning Rules (QBCR), which allow revising beliefs directly with words and linguistic labels and thus avoid the introduction of ad-hoc translations of qualitative beliefs into quantitative ones for solving the problem.'
author:
- |
Florentin Smarandache\
Department of Mathematics\
University of New Mexico\
Gallup, NM 87301, U.S.A.\
smarand@unm.edu\
- |
Jean Dezert\
ONERA\
29 Av. de la Division Leclerc\
92320 Châtillon, France.\
Jean.Dezert@onera.fr
title: 'Qualitative Belief Conditioning Rules (QBCR)'
---
qualitative belief, belief conditioning rules (BCRs), computing with words, Dezert-Smarandache Theory (DSmT), reasoning under uncertainty.
Introduction {#sec:Introduction}
============
In this paper, we propose a simple arithmetic of linguistic labels which allows a direct extension of the quantitative Belief Conditioning Rules (BCR) proposed in the DSmT framework [@DSmTBook_2004; @DSmTBook_2006] to their qualitative counterpart. Qualitative belief assignments are well adapted to manipulating information expressed in natural language and usually reported by human experts or AI-based expert systems. A new method for computing directly with words (CW) for combining and conditioning qualitative information is presented. CW, more precisely computing with linguistic labels, is usually more vague and less precise than computing with numbers, but it is expected to offer better robustness and flexibility for combining uncertain and conflicting human reports, because in most cases human experts are less able to provide (and to justify) precise quantitative beliefs than qualitative ones.
Before extending the quantitative DSmT-based conditioning rules to their qualitative counterparts, it is necessary to define a few new important operators on linguistic labels and to specify what a qualitative belief assignment is. Then we will show through simple examples how the combination of qualitative beliefs can be obtained in the DSmT framework.
Qualitative operators and belief assignments {#sec2}
============================================
Since one wants to compute directly with words (CW) instead of numbers, we define, without loss of generality, a finite set of linguistic labels $\tilde{L}=\{L_1,L_2,\ldots,L_n\}$ where $n\geq 2$ is an integer. $\tilde{L}$ is endowed with a total order relationship $\prec$, so that $L_1\prec L_2\prec \ldots\prec L_n$. To work on a linguistic set closed under the linguistic addition and multiplication operators, one extends $\tilde{L}$ with two extreme values $L_{0}$ and $L_{n+1}$, where $L_{0}$ corresponds to the minimal qualitative value and $L_{n+1}$ corresponds to the maximal qualitative value, in such a way that $L_0\prec L_1\prec L_2\prec \ldots\prec L_n\prec L_{n+1}$, where $\prec$ means inferior to, or less, or smaller (in quality) than, etc. Therefore, one will work on the extended ordered set $L$ of qualitative values $L=\{L_0,L_1,L_2,\ldots,L_n,L_{n+1}\}$. The qualitative addition and multiplication of linguistic labels, which are commutative, associative, and unitary operators, are defined as follows - see Chapter 10 in [@DSmTBook_2006] for details and examples:
- Addition : if $i+j < n+1$, $L_i + L_j=L_{i+j}$ otherwise $L_i + L_j=L_{n+1}$.
- Multiplication[^1] : $L_i \times L_j=L_{\min\{i,j\}}$
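These two operators admit a direct implementation; the following is a minimal sketch in Python (the class name `Label` and the constant `N_MAX` are ours, fixing $n+1=6$ as in the tables used later in this paper):

```python
# Qualitative operators on labels L_0 .. L_{n+1} (here n+1 = 6):
# addition is index addition truncated at n+1, multiplication takes the min index.
N_MAX = 6  # index of the maximal label L_{n+1}

class Label:
    def __init__(self, i):
        assert 0 <= i <= N_MAX
        self.i = i
    def __add__(self, other):
        # L_i + L_j = L_{i+j} if i+j < n+1, else L_{n+1} (saturating addition)
        return Label(min(self.i + other.i, N_MAX))
    def __mul__(self, other):
        # L_i x L_j = L_{min(i, j)}
        return Label(min(self.i, other.i))
    def __eq__(self, other):
        return self.i == other.i
    def __repr__(self):
        return f"L{self.i}"
```

For instance, `Label(2) + Label(3)` gives `L5` while `Label(5) + Label(4)` saturates at `L6`, in agreement with the addition table given in the examples section.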
Let’s consider a finite and discrete frame of discernment $\Theta=\{\theta_1,\ldots,\theta_n\}$ in which the true solution of the problem under consideration must lie; its model $\mathcal{M}(\Theta)$, defined by the set of integrity constraints on the elements of $\Theta$ (i.e. free-DSm model, hybrid model or Shafer’s model); and its corresponding hyper-power set, denoted $D^\Theta$, that is, Dedekind’s lattice on $\Theta$ [@DSmTBook_2004], which is nothing but the space of propositions generated from the elements of $\Theta$ with the $\cap$ and $\cup$ operators, taking into account the integrity constraints (if any) of the model. A qualitative basic belief assignment (qbba), also called a qualitative belief mass, is a mapping $qm(.): D^\Theta \mapsto L$. In the sequel, all qualitative masses not explicitly specified in the examples are by default (and for notational convenience) assumed to take the minimal linguistic value $L_0$.
Quasi-normalization of qualitative masses
=========================================
There is no way to define a normalized $qm(.)$, but a qualitative quasi-normalization [@DSmTBook_2006] is nevertheless possible if needed as follows:
- If the previously defined labels $L_0$, $L_1$, $L_2$, $\ldots$, $L_n$, $L_{n+1}$ from the set $L$ are equidistant, i.e. the (linguistic) distance between any two consecutive labels $L_j$ and $L_{j+1}$ is the same for any $j \in \{0, 1, 2, \ldots, n\}$, then one can make an isomorphism between $L$ and a set of sub-unitary numbers from the interval $[0, 1]$ in the following way: $L_i = i/(n+1)$, for all $i \in \{0, 1, 2, \ldots, n+1\}$, so that the interval $[0, 1]$ is divided into $n+1$ equal parts. Hence, a qualitative mass $qm(X_i) = L_i$ is equivalent to a quantitative mass $m(X_i) = i/(n+1)$, which is normalized if $$\sum_{X\in D^\Theta} m(X)= \sum_{k} i_k/(n+1)=1$$ which is in turn equivalent to $$\sum_{X\in D^\Theta} qm(X)= \sum_{k} L_{i_k}=L_{n+1}$$ In this case we have a [*[qualitative normalization]{}*]{}, similar to the (classical) numerical normalization.
- But, if the previously defined labels $L_0$, $L_1$, $L_2$, $\ldots$, $L_n$, $L_{n+1}$ from the set $L$ are not equidistant, so that the interval $[0, 1]$ cannot be split into equal parts according to the distribution of the labels, then it makes sense to consider a [*[qualitative quasi-normalization]{}*]{}, i.e. an approximation of the (classical) numerical normalization for the qualitative masses, defined in the same way: $$\sum_{X\in D^\Theta} qm(X)=L_{n+1}$$ In general, if we don’t know whether the labels are equidistant or not, we say that a qualitative mass is quasi-normalized when the above summation holds.
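The quasi-normalization test then reduces to comparing a saturating label-sum of all masses with $L_{n+1}$; a minimal sketch (helper names are ours, labels encoded by their integer indices):

```python
# Quasi-normalization check: a qualitative mass qm is quasi-normalized
# when the label-sum of all its masses equals L_{n+1} (saturating addition).
N_MAX = 6  # index of the maximal label L_{n+1}

def label_sum(indices):
    """Saturating sum of label indices: L_i + L_j = L_{min(i+j, n+1)}."""
    total = 0
    for i in indices:
        total = min(total + i, N_MAX)
    return total

def is_quasi_normalized(qm):
    """qm maps focal elements to label indices; all others are L_0."""
    return label_sum(qm.values()) == N_MAX

# Example 1 of this paper: qm(A)=L1, qm(C)=L1, qm(D)=L4 -> L6, quasi-normalized.
qm = {"A": 1, "C": 1, "D": 4}
```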
Quantitative Belief Conditioning Rules (BCR)
============================================
Before presenting the new Qualitative Belief Conditioning Rules (QBCR) in the next section, it is first necessary to briefly recall what the (quantitative) Belief Conditioning Rules (BCR) are, what motivated their development in the DSmT framework, and the fundamental difference between BCR and Shafer’s Conditioning Rule (SCR) proposed in [@Shafer_1976].\
So, let’s suppose one has a prior basic belief assignment (bba) $m(.)$ defined on the hyper-power set $D^\Theta$, and one finds out (or one assumes) that the truth is in a given element $A\in D^\Theta$, i.e. $A$ has really occurred or is supposed to have occurred. The problem of belief conditioning concerns how to properly revise the prior bba $m(.)$ with the knowledge of the occurrence of $A$. Simply stated: how to compute $m(.|A)$ from the knowledge available, that is, from any prior bba $m(.)$ and $A$?
Shafer’s Conditioning Rule (SCR)
--------------------------------
Until very recently, the most commonly used conditioning rule for belief revision was the one proposed by Shafer [@Shafer_1976], referred to here as Shafer’s Conditioning Rule (SCR). The SCR consists in combining the prior bba $m(.)$ with a specific bba focused on $A$ using Dempster’s rule of combination, which transfers the conflicting mass to non-empty sets in order to provide the revised bba. In other words, the conditioning by a proposition $A$ is obtained by SCR as follows:
$$m_{SCR}(.|A)=[m\oplus m_S] (.)
\label{eqSCR}$$
where $m(.)$ is the prior bba to update, $A$ is the conditioning event, $m_S(.)$ is the bba focused on $A$ defined by $m_S(A)=1$ and $m_S(X)=0$ for all $X\neq A$ and $\oplus$ denotes the Dempster’s rule of combination [@Shafer_1976].\
The SCR approach, based on Dempster’s rule of combination of the prior bba with the bba focused on the conditioning event, remains [*[subjective]{}*]{}, since in such a belief revision process both sources are subjective, and SCR doesn’t properly manage the objective nature/absolute truth carried by the conditioning term. Indeed, when conditioning a prior mass $m(.)$, [*[knowing]{}*]{} (or assuming) that the truth is in $A$ means that we have in hand an absolute (not subjective) piece of knowledge, i.e. the truth in $A$ has occurred (or is assumed to have occurred), thus $A$ is realized (or is assumed to be realized) and this is (or at least must be interpreted as) an absolute truth. The conditioning term “Given $A$” must therefore be considered as an absolute truth, while $m_S(A)=1$ introduced in SCR cannot actually refer to an absolute truth, but only to a [*[subjective certainty]{}*]{} about the possible occurrence of $A$ from a [*[virtual]{}*]{} second source of evidence. The advantage of SCR remains undoubtedly its simplicity, and the main argument in its favor is its coherence with conditional probability when manipulating Bayesian belief assignments. But in our opinion, SCR is better interpreted as the fusion of $m(.)$ with a particular subjective bba $m_S(A)=1$ rather than as an objective belief conditioning rule. This fundamental remark motivated us to develop a new family of BCR [@DSmTBook_2006] based on the hyper-power set decomposition (HPSD) explained briefly in the next section. It turns out that many BCR are possible, because the redistribution of the masses of elements outside of $A$ (the conditioning event) to those inside $A$ can be done in many ways. This will be briefly presented right after the next section.
Hyper-Power Set Decomposition (HPSD)
------------------------------------
Let $\Theta=\{\theta_1,\theta_2,\ldots,\theta_n\}$, $n\geq 2$, be a frame of discernment, $\mathcal{M}(\Theta)$ a model associated with $\Theta$ (free DSm model, hybrid or Shafer’s model) and $D^\Theta$ its corresponding hyper-power set. Let’s consider a (quantitative) basic belief assignment (bba) $m(.): D^\Theta \mapsto [0,1]$ such that $\sum_{X\in D^\Theta}m(X)=1$. Suppose one finds out that the truth is in the set $A\in D^\Theta\setminus\{\emptyset\}$. Let $\mathcal{P}_{\mathcal{D}}(A)=2^A \cap D^{\Theta} \setminus \{\emptyset\}$, i.e. all non-empty parts (subsets) of $A$ which are included in $D^\Theta$. Let’s consider the normal cases when $A\neq\emptyset$ and $\sum_{Y\in \mathcal{P}_{\mathcal{D}}(A)}m(Y)> 0$. For the degenerate case when the truth is in $A=\emptyset$, we consider Smets’ open-world assumption, which means that there are other hypotheses $\Theta'=\{\theta_{n+1},\theta_{n+2},\ldots,\theta_{n+m}\}$, $m\geq 1$, and the truth is in $A\in D^{\Theta'}\setminus\{\emptyset\}$. If $A=\emptyset$ and we consider a closed world, then it means that the problem is impossible. For another degenerate case, when $\sum_{Y\in \mathcal{P}_{\mathcal{D}}(A)}m(Y)=0$, i.e. when the source gave us totally (100%) wrong information $m(.)$, we define $m(A|A)\triangleq 1$ and, as a consequence, $m(X|A)=0$ for any $X\neq A$. Let $s(A)=\{\theta_{i_1},\theta_{i_2},\ldots,\theta_{i_p}\}$, $1\leq p\leq n$, be the singletons/atoms that compose $A$ (for example, if $A=\theta_1\cup(\theta_3\cap\theta_4)$ then $s(A)=\{\theta_1,\theta_3,\theta_4\}$). The Hyper-Power Set Decomposition (HPSD) of $D^\Theta \setminus \emptyset$ consists in its decomposition into the three following subsets generated by $A$:
- $D_1=\mathcal{P}_{\mathcal{D}}(A)$, the parts of $A$ which are included in the hyper-power set, except the empty set;
- $D_2=\{(\Theta\setminus s(A)),\cup , \cap\} \setminus \{\emptyset\}$, i.e. the sub-hyper-power set generated by $\Theta\setminus s(A)$ under $\cup$ and $\cap$, without the empty set.
- $D_3=(D^\Theta\setminus\{\emptyset\}) \setminus (D_1\cup D_2)$; each set from $D_3$ has in its formula singletons from both $s(A)$ and $\Theta\setminus s(A)$ in the case when $\Theta\setminus s(A)$ is different from empty set.
$D_1$, $D_2$ and $D_3$ are pairwise disjoint and their union is $D^\Theta\setminus\{\emptyset\}$.\
[*[Simple example of HPSD]{}*]{}: Let’s consider $\Theta=\{\theta_1, \theta_2, \theta_3\}$ with Shafer’s model (i.e. all elements of $\Theta$ are exclusive) and let’s assume that the truth is in $\theta_2\cup \theta_3$, i.e. the conditioning term is $\theta_2\cup \theta_3$. Then one has the following HPSD: $D_1=\{\theta_2,\theta_3,\theta_2\cup \theta_3\}$, $D_2=\{\theta_1\}$ and $D_3=\{\theta_1\cup \theta_2, \theta_1\cup \theta_3, \theta_1\cup \theta_2\cup \theta_3\}$. More complex and detailed examples can be found in [@DSmTBook_2004].
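For this simple example the HPSD can be computed mechanically, since under Shafer's model the hyper-power set reduces to the ordinary power set; a sketch (encoding each $\theta_i$ as a string and each proposition as a frozenset — the function names are ours):

```python
from itertools import combinations

def nonempty_subsets(s):
    """All non-empty subsets of s, as frozensets."""
    return {frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(sorted(s), r)}

def hpsd(theta, a):
    """HPSD under Shafer's model, where D^Theta is the power set of theta:
    D1 = non-empty parts of A, D2 = non-empty parts of theta \\ s(A),
    D3 = everything else."""
    d_theta = nonempty_subsets(theta)
    d1 = nonempty_subsets(a)
    d2 = nonempty_subsets(theta - a)
    d3 = d_theta - d1 - d2
    return d1, d2, d3

theta = frozenset({"t1", "t2", "t3"})
A = frozenset({"t2", "t3"})        # conditioning term theta2 u theta3
D1, D2, D3 = hpsd(theta, A)
```

Running this reproduces the decomposition given above: `D1` holds $\{\theta_2\},\{\theta_3\},\{\theta_2,\theta_3\}$, `D2` holds $\{\theta_1\}$, and `D3` the three remaining unions.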
Belief conditioning rules (BCR)
-------------------------------
Since there actually exist many ways of redistributing the masses of elements outside of $A$ (the conditioning event) to those inside $A$, several BCR have been proposed recently in [@DSmTBook_2006]. Due to space limitations, we will not browse here all the possibilities for these redistributions and all BCR, but just present a typical and interesting one, BCR number 17 (BCR17), which does in our opinion the most refined redistribution, since:

- the mass $m(W)$ of each element $W$ in $D_2\cup D_3$ is transferred to those elements $X\in D_1$ which are included in $W$, if any, proportionally with respect to their non-empty masses;

- if no such $X$ exists, the mass $m(W)$ is transferred in a pessimistic/prudent way to the $k$-largest elements from $D_1$ which are included in $W$ (in equal parts), if any;

- if neither way is possible, then $m(W)$ is indiscriminately distributed to all $X \in D_1$ proportionally with respect to their nonzero masses.\
BCR17 is defined by the following formula (see [@DSmTBook_2004], Chap. 9 for detailed explanations and examples):
$$m_{BCR17}(X|A)=
m(X)\cdot \Bigg[ S_{D_1}
+
\displaystyle\sum_{
\begin{array}{c}
\scriptstyle W\in D_2 \cup D_3\\
\scriptstyle X\subset W\\
\scriptstyle S(W)\neq 0
\end{array}
}
\frac{m(W)}{S(W)}
\Bigg]\\
+ \displaystyle\sum_{
\begin{array}{c}
\scriptstyle W\in D_2\cup D_3\\
\scriptstyle X\subset W, \, X \,\text{is $k$-largest}\\
\scriptstyle S(W)=0
\end{array}}
m(W)/k
\label{eq:BCR17}$$
where “$X\,\text{is $k$-largest}$” means that $X$ is the $k$-largest (with respect to inclusion) set included in $W$ and
$$S(W) \triangleq \sum_{Y\in D_1,Y\subset W} m(Y)
\qquad\text{and}\qquad
S_{D_1} \triangleq \frac{1}{\sum_{Y\in D_1}m(Y)} \times
\displaystyle
\sum_{
\begin{array}{c}
\scriptstyle Z\in D_1,\\
\scriptstyle \text{or}\, Z\in D_2 \,\mid\, \nexists Y\in D_1\, \text{with}\, Y\subset Z
\end{array}}
m(Z)$$
[*[A simple example for BCR17]{}*]{}: Let’s consider $\Theta=\{\theta_1, \theta_2, \theta_3\}$ with Shafer’s model (i.e. all elements of $\Theta$ are exclusive) and let’s assume that the truth is in $\theta_2\cup \theta_3$, i.e. the conditioning term is $A\triangleq \theta_2\cup \theta_3$. Then one has the following HPSD: $$D_1=\{\theta_2,\theta_3,\theta_2\cup \theta_3\}, \qquad D_2=\{\theta_1\}$$ $$D_3=\{\theta_1\cup \theta_2, \theta_1\cup \theta_3, \theta_1\cup \theta_2\cup \theta_3\}.$$
Let’s consider the following prior bba: $m(\theta_1)=0.2$, $m(\theta_2)=0.1$, $m(\theta_3)=0.2$, $m(\theta_1\cup \theta_2)=0.1$, $m(\theta_2\cup \theta_3)=0.1$ and $m(\theta_1\cup \theta_2\cup \theta_3)=0.3$.\
With BCR17, for $D_2$, $m(\theta_1)=0.2$ is transferred proportionally to all elements of $D_1$, i.e. $\frac{x_{\theta_2}}{0.1}=\frac{y_{\theta_3}}{0.2}=\frac{z_{\theta_2\cup \theta_3}}{0.1}=\frac{0.2}{0.4}=0.5$ whence the parts of $m(\theta_1)$ redistributed to $\theta_2$, $\theta_3$ and $\theta_2\cup\theta_3$ are respectively $x_{\theta_2}=0.05$, $y_{\theta_3}=0.10$, and $z_{\theta_2\cup \theta_3}=0.05$. For $D_3$, there is actually no need to transfer $m(\theta_1\cup \theta_3)$ because $m(\theta_1\cup \theta_3)=0$ in this example; whereas $m(\theta_1\cup \theta_2)=0.1$ is transferred to $\theta_2$ (no case of $k$-elements herein); $m(\theta_1\cup \theta_2\cup \theta_3)=0.3$ is transferred to $\theta_2$, $\theta_3$ and $\theta_2\cup \theta_3$ proportionally to their corresponding masses: $$\frac{x_{\theta_2}}{0.1}=\frac{y_{\theta_3}}{0.2}=\frac{z_{\theta_2\cup \theta_3}}{0.1}=\frac{0.3}{0.4}=0.75$$ whence $x_{\theta_2}=0.075$, $y_{\theta_3}=0.15$, and $z_{\theta_2\cup \theta_3}=0.075$. Finally, one gets $$\begin{aligned}
& m_{BCR17}(\theta_2|\theta_2\cup \theta_3)=0.10+0.05+0.10+0.075=0.325\\
& m_{BCR17}(\theta_3|\theta_2\cup \theta_3)=0.20+0.10+0.15=0.450\\
& m_{BCR17}(\theta_2\cup \theta_3|\theta_2\cup \theta_3)=0.10+0.05+0.075=0.225\end{aligned}$$
which is different from the result obtained with SCR, since one gets in this example:
$$\begin{aligned}
& m_{SCR}(\theta_2|\theta_2\cup \theta_3)=0.25\\
& m_{SCR}(\theta_3|\theta_2\cup \theta_3)=0.25\\
& m_{SCR}(\theta_2\cup \theta_3|\theta_2\cup \theta_3)=0.50\end{aligned}$$
More complex and detailed examples can be found in [@DSmTBook_2004].
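Both conditioned masses above can be reproduced numerically; the following is a sketch for this Shafer-model example (the $k$-largest branch of BCR17 is not exercised here, and all variable names are ours):

```python
from itertools import combinations

A = frozenset({"t2", "t3"})  # conditioning event: theta2 u theta3

# Prior bba of the example (Shafer's model, Theta = {t1, t2, t3}).
m = {frozenset({"t1"}): 0.2, frozenset({"t2"}): 0.1, frozenset({"t3"}): 0.2,
     frozenset({"t1", "t2"}): 0.1, frozenset({"t2", "t3"}): 0.1,
     frozenset({"t1", "t2", "t3"}): 0.3}

# --- SCR: Dempster combination of m with the bba m_S(A) = 1 ---
def scr(m, a):
    raw = {}
    for x, v in m.items():
        raw[x & a] = raw.get(x & a, 0.0) + v   # condition each focal set on A
    conflict = raw.pop(frozenset(), 0.0)       # mass falling on the empty set
    return {x: v / (1.0 - conflict) for x, v in raw.items()}

m_scr = scr(m, A)

# --- BCR17 for this example ---
d1 = {frozenset(c) for r in (1, 2) for c in combinations(sorted(A), r)}
prior = {x: m.get(x, 0.0) for x in d1}
m_bcr = dict(prior)
for w, v in m.items():
    if w in d1:
        continue
    # D1 elements with nonzero prior mass included in W ...
    inside = {x: prior[x] for x in d1 if x <= w and prior[x] > 0}
    if not inside:  # ... otherwise spread over all focal D1 elements
        inside = {x: prior[x] for x in d1 if prior[x] > 0}
    s = sum(inside.values())
    for x, px in inside.items():
        m_bcr[x] += v * px / s   # proportional redistribution of m(W)
```

Note that the proportions are always taken with respect to the *prior* masses of the $D_1$ elements, as in the definition of $S(W)$; the script recovers $m_{BCR17}=(0.325, 0.450, 0.225)$ and $m_{SCR}=(0.25, 0.25, 0.50)$.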
Qualitative belief conditioning rules (QBCR)
============================================
In this section we propose two Qualitative Belief Conditioning Rules (QBCR) which extend the principles of the quantitative BCR to the qualitative domain, using the operators on linguistic labels defined in section \[sec2\]. We consider from now on a general frame $\Theta=\{\theta_1,\theta_2,\ldots,\theta_n\}$, a given model $\mathcal{M}(\Theta)$ with its hyper-power set $D^\Theta$ and a given extended ordered set $L$ of qualitative values $L=\{L_0,L_1,L_2,\ldots,L_m,L_{m+1}\}$. The prior qualitative basic belief assignment (qbba) taking its values in $L$ is denoted $qm(.)$. We assume in the sequel that the conditioning event is $A\neq\emptyset$, $A\in D^\Theta$, i.e. the absolute truth is in $A$.
Qualitative Belief Conditioning Rule no 1 (QBCR1)
-------------------------------------------------
The first QBCR, denoted QBCR1, does the redistribution of masses in a pessimistic/prudent way, as follows:
- transfer the mass of each element $Y$ in $D_2\cup D_3$ to the largest element $X$ in $D_1$ which is contained in $Y$;
- if no such element $X$ exists, then the mass of $Y$ is transferred to $A$.
The mathematical formula for QBCR1 is then given by:
- If $X\notin D_1$, $$qm_{QBCR1}(X|A)=L_{\min}\equiv L_0$$
- If $X \in D_1$, $$qm_{QBCR1}(X|A)=qm(X) + qS_1(X,A) + qS_2(X,A)
\label{eqQBCR1}$$
where the addition operator involved in \[eqQBCR1\] corresponds to the [*[addition operator on linguistic labels]{}*]{} defined in section \[sec2\] and where the [*[qualitative summations]{}*]{} $qS_1(X,A)$ and $qS_2(X,A)$ are defined by:
$$qS_1(X,A)\triangleq \displaystyle\sum_{
\begin{array}{c}
\scriptstyle Y\in D_2 \cup D_3\\
\scriptstyle X\subset Y\\
\scriptstyle X=\max
\end{array}
}
qm(Y)
\label{eq:qS1}$$
$$qS_2(X,A)\triangleq \displaystyle\sum_{
\begin{array}{c}
\scriptstyle Y\in D_2 \cup D_3\\
\scriptstyle Y\cap A=\emptyset\\
\scriptstyle X=A
\end{array}
}
qm(Y)
\label{eq:qS2}$$
$qS_1(X,A)$ corresponds to the transfer of the qualitative mass of each element $Y$ in $D_2\cup D_3$ to the largest element $X$ in $D_1$ contained in $Y$, and $qS_2(X,A)$ corresponds to the transfer of the mass of $Y$ to $A$ when no such largest element $X$ in $D_1$ exists.
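Under Shafer's model, where the largest element of $D_1$ contained in $Y$ is simply $Y\cap A$, QBCR1 admits a compact sketch (encoding and helper names are ours; labels are integer indices with the saturating addition of section \[sec2\]):

```python
L_MAX = 6  # index of the maximal label L_{n+1}

def ladd(i, j):
    """Label addition: L_i + L_j = L_{min(i+j, n+1)}."""
    return min(i + j, L_MAX)

def qbcr1(qm, a):
    """QBCR1 under Shafer's model; qm maps frozensets to label indices.
    The largest D1 element contained in Y is Y & A; if that intersection
    is empty, the mass of Y is transferred to A itself."""
    out = {}
    for y, v in qm.items():
        x = y & a if y & a else a   # target of the transfer
        out[x] = ladd(out.get(x, 0), v)
    return out

# Toy frame Theta = {a, b, c}, conditioning on A = b u c, with a
# quasi-normalized prior qm(a)=L1, qm(b)=L1, qm(c)=L2, qm(a u b u c)=L2.
A = frozenset("bc")
qm = {frozenset("a"): 1, frozenset("b"): 1,
      frozenset("c"): 2, frozenset("abc"): 2}
post = qbcr1(qm, A)
```

The posterior is again quasi-normalized: $L_1+L_2+L_3=L_6$.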
Qualitative Belief Conditioning Rule no 2 (QBCR2)
-------------------------------------------------
The second QBCR, denoted QBCR2, does a uniform redistribution of masses, as follows:
- transfer the mass of each element $Y$ in $D_2\cup D_3$ to the largest element $X$ in $D_1$ which is contained in $Y$ (as QBCR1 does);
- if no such element $X$ exists, then the mass of $Y$ is uniformly redistributed to all subsets of $A$ whose (qualitative) masses are not $L_0$ (i.e. to all qualitative focal elements included in $A$);
- if there is no qualitative focal element included in $A$, then the mass of $Y$ is transferred to $A$.
The mathematical formula for QBCR2 is then given by:
- If $X\notin D_1$, $$qm_{QBCR2}(X|A)=L_{\min}\equiv L_0$$
- If $X \in D_1$, $$qm_{QBCR2}(X|A)=qm(X) + qS_1(X,A) + qS_3(X,A)+ qS_4(X,A)
\label{eqQBCR2}$$
where the addition operator involved in \[eqQBCR2\] corresponds to the [*[addition operator on linguistic labels]{}*]{} defined in section \[sec2\] and where the [*[qualitative summation]{}*]{} $qS_1(X,A)$ is defined in \[eq:qS1\], and $qS_3(X,A)$ and $qS_4(X,A)$ by:
$$qS_3(X,A)\triangleq \displaystyle\sum_{
\begin{array}{c}
\scriptstyle Y\in D_2 \cup D_3\\
\scriptstyle Y\cap A=\emptyset\\
\scriptstyle q_F\neq 0
\end{array}
}
\frac{qm(Y)}{q_F}
\label{eq:qS3}$$
$$qS_4(X,A)
\triangleq \displaystyle\sum_{
\begin{array}{c}
\scriptstyle Y\in D_2 \cup D_3\\
\scriptstyle Y\cap A=\emptyset\\
\scriptstyle X=A, q_F=0
\end{array}
}
qm(Y),
\label{eq:qS4}$$
where $q_F\triangleq \text{Card}\{Z| Z\subset A, qm(Z)\neq L_0\}=$ number of qualitative focal elements of $A$.\
### Scalar division of linguistic label {#scalar-division-of-linguistic-label .unnumbered}
For the complete derivation of \[eqQBCR2\] we need to define the scalar division of labels involved in \[eq:qS3\]. We propose the following definition:
$$\frac{L_i}{j}\triangleq L_{[\frac{i}{j}]}
\label{eq:qdivision}$$
for all $i\geq 0$ and $j>0$ where $[\frac{i}{j}]$ is the integer part of $\frac{i}{j}$, i.e. the largest integer less than or equal to $\frac{i}{j}$. For example, $\frac{L_5}{3}= L_{[\frac{5}{3}]}=L_1$, or $\frac{L_6}{3}= L_{[\frac{6}{3}]}=L_2$, etc.
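With this scalar division in hand, QBCR2 can also be sketched under Shafer's model (encoding and helper names are ours; this covers only the simple case where the largest $D_1$ element inside $Y$ is $Y\cap A$, i.e. no $qS_4$-type fallback beyond transferring to $A$):

```python
L_MAX = 6  # index of the maximal label L_{n+1}

def ladd(i, j):
    return min(i + j, L_MAX)   # saturating label addition

def ldiv(i, j):
    return i // j              # scalar division: L_i / j = L_{floor(i/j)}

def qbcr2(qm, a):
    """QBCR2 under Shafer's model; qm maps frozensets to label indices.
    Masses of Y intersecting A go to Y & A; masses of Y disjoint from A
    are split uniformly over the focal elements included in A, or given
    to A itself when there is no such focal element."""
    focals = [x for x, v in qm.items() if x <= a and v > 0]
    out = {}
    for y, v in qm.items():
        if y & a:
            out[y & a] = ladd(out.get(y & a, 0), v)
        elif focals:
            for x in focals:
                out[x] = ladd(out.get(x, 0), ldiv(v, len(focals)))
        else:
            out[a] = ladd(out.get(a, 0), v)
    return out

# Toy frame Theta = {a, b, c}, A = b u c, prior qm(a)=L4, qm(b)=qm(c)=L1:
# the mass L4 of {a} is split as L_{[4/2]} = L2 to each of {b} and {c}.
A = frozenset("bc")
post = qbcr2({frozenset("a"): 4, frozenset("b"): 1, frozenset("c"): 1}, A)
```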
Examples for QBCR1 and QBCR2
============================
Let’s consider the following set of ordered linguistic labels $L=\{L_0,L_1,L_2,L_3,L_4,L_5,L_6\}$ (for example, $L_1$, $L_2$, $L_3$, $L_4$ and $L_5$ may represent the values: $L_1\triangleq \text{{\it{very poor}}}$, $L_2\triangleq \text{{\it{poor}}}$, $L_3\triangleq \text{{\it{medium}}}$, $L_4\triangleq \text{{\it{good}}}$ and $L_5\triangleq \text{{\it{very good}}}$, where the symbol $\triangleq$ means [*[by definition]{}*]{}). The addition and multiplication tables correspond respectively to Tables \[CWTable3\] and \[CWTable4\].
$+$ $L_0$ $L_1$ $L_2$ $L_3$ $L_4$ $L_5$ $L_6$
------- ------- ------- ------- ------- ------- ------- -------
$L_0$ $L_0$ $L_1$ $L_2$ $L_3$ $L_4$ $L_5$ $L_6$
$L_1$ $L_1$ $L_2$ $L_3$ $L_4$ $L_5$ $L_6$ $L_6$
$L_2$ $L_2$ $L_3$ $L_4$ $L_5$ $L_6$ $L_6$ $L_6$
$L_3$ $L_3$ $L_4$ $L_5$ $L_6$ $L_6$ $L_6$ $L_6$
$L_4$ $L_4$ $L_5$ $L_6$ $L_6$ $L_6$ $L_6$ $L_6$
$L_5$ $L_5$ $L_6$ $L_6$ $L_6$ $L_6$ $L_6$ $L_6$
$L_6$ $L_6$ $L_6$ $L_6$ $L_6$ $L_6$ $L_6$ $L_6$
: Addition table[]{data-label="CWTable3"}
$\times$ $L_0$ $L_1$ $L_2$ $L_3$ $L_4$ $L_5$ $L_6$
---------- ------- ------- ------- ------- ------- ------- -------
$L_0$ $L_0$ $L_0$ $L_0$ $L_0$ $L_0$ $L_0$ $L_0$
$L_1$ $L_0$ $L_1$ $L_1$ $L_1$ $L_1$ $L_1$ $L_1$
$L_2$ $L_0$ $L_1$ $L_2$ $L_2$ $L_2$ $L_2$ $L_2$
$L_3$ $L_0$ $L_1$ $L_2$ $L_3$ $L_3$ $L_3$ $L_3$
$L_4$ $L_0$ $L_1$ $L_2$ $L_3$ $L_4$ $L_4$ $L_4$
$L_5$ $L_0$ $L_1$ $L_2$ $L_3$ $L_4$ $L_5$ $L_5$
$L_6$ $L_0$ $L_1$ $L_2$ $L_3$ $L_4$ $L_5$ $L_6$
: Multiplication table[]{data-label="CWTable4"}
Example 1
---------
Let’s consider the frame $\Theta=\{A,B,C,D\}$ with the hybrid model corresponding to the Venn diagram on Figure \[venn1\]. We assume that the prior qualitative bba $qm(.)$ is given by:
$$qm(A)=L_1, \quad qm(C)=L_1, \quad qm(D)=L_4$$
and the qualitative masses of all other elements of $D^\Theta$ take the minimal value $L_0$. This qualitative mass is quasi-normalized since $L_1+L_1+L_4=L_{1+1+4}=L_6=L_{\max}$.
If we assume that the conditioning event is the proposition $A\cup B$, i.e. the absolute truth is in $A\cup B$, the hyper-power set decomposition (HPSD) is obtained as follows: $D_1$ is formed by all parts included in $A\cup B$, i.e. $D_1=\{A\cap B, A, B, A\cup B, B\cap D, A\cup (B\cap D), (A\cap B)\cup (B\cap D)\}$, $D_2$ is the set generated by $\{(C,D),\cup,\cap\} \setminus \emptyset=\{C,D,C\cup D, C\cap D\}$, and $D_3=\{A\cup C, A\cup D, B\cup C, B\cup D, A\cup B\cup C, A\cup (C\cap D), \ldots\}$.\
The qualitative mass of element $D$ is transferred to $D\cap(A\cup B)=B\cap D$ according to the model, since $D$ is in the set $D_2\cup D_3$ and the largest element $X$ in $D_1$ which is contained in $D$ is $B\cap D$. Whence $qm_{QBCR1}(B\cap D | A\cup B) = L_4$, while $qm_{QBCR1}(D|A\cup B) = L_0$. The qualitative mass of element $C$, which is in $D_2\cup D_3$ but has empty intersection with $A\cup B$, is transferred to the whole of $A\cup B$. Whence $qm_{QBCR1}(A\cup B|A\cup B)=L_1$, while $qm_{QBCR1}(C|A\cup B)=L_0$. Since the truth is in $A\cup B$, the qualitative masses of the elements $A$ and $B$, which are included in $A\cup B$, are not changed in this example, i.e. $qm_{QBCR1}(A| A\cup B)=L_1$ and $qm_{QBCR1}(B|A\cup B)=L_0$. One sees that the resulting qualitative conditional mass $qm_{QBCR1}(.)$ is also quasi-normalized since $$L_4+L_0+L_1+L_0+L_1+L_0=L_6=L_{\max}$$ In summary, one gets the following qualitative conditioned masses with QBCR1[^2]:
$$\begin{aligned}
qm_{QBCR1}(B\cap D|A\cup B)&=L_4\\
qm_{QBCR1}(A\cup B|A\cup B)& =L_1\\
qm_{QBCR1}(A| A\cup B)&=L_1\end{aligned}$$
Analogously to QBCR1, with QBCR2 the qualitative mass of the element $D$ is transferred to $D\cap (A\cup B)=B\cap D$ according to the model, since $D$ is in $D_2\cup D_3$ and the largest element $X$ in $D_1$ which is contained in $D$ is $B\cap D$. Whence $qm_{QBCR2}(B\cap D|A\cup B) = L_4$, while $qm_{QBCR2}(D|A\cup B) = L_0$. But, differently from QBCR1, the qualitative mass of $C$, which is in $D_2\cup D_3$ but has empty intersection with $A\cup B$, is transferred to $A$ only, since $A\subset A\cup B$ and $qm(A)$ is different from $L_0$ (while the other sets included in $A\cup B$ have qualitative mass equal to $L_0$). Whence $qm_{QBCR2}(A|A\cup B)=L_1+L_1=L_2$, while $qm_{QBCR2}(C|A\cup B)=L_0$. Similarly, the resulting qualitative conditional mass $qm_{QBCR2}(.)$ is also quasi-normalized since $L_4+L_0+L_2+L_0 =L_6=L_{\max}$. Therefore the result obtained with QBCR2 is: $$\begin{aligned}
qm_{QBCR2}(B\cap D|A\cup B)&=L_4\\
qm_{QBCR2}(A|A\cup B)&=L_2\end{aligned}$$
Example 2
---------
Let’s consider a more complex example related with military decision support. We assume that the frame $\Theta=\{A,B,C,D\}$ corresponds to the set of four regions under surveillance because these regions are known to potentially protect some dangerous enemies. The linguistic labels used for specifying qualitative masses belong to $L=\{L_0,L_1,L_2,L_3,L_4,L_5,L_6\}$. Let’s consider the following prior qualitative mass $qm(.)$ defined by: $$qm(A)=L_1, qm(C)=L_1, qm(D)=L_4$$ All other masses take the value $L_0$. This qualitative mass is quasi-normalized since $L_1+L_1+L_4 = L_{1+1+4} = L_6 = L_{\max}$.
We assume that the military headquarters has decided to bomb region $D$ in priority because there was a high qualitative belief on the presence of enemies in zone $D$ according to the prior qbba $qm(.)$. But let’s suppose that, after bombing and verification, it turns out that the enemies were not in $D$. The important question the headquarters now faces is how to revise its prior qualitative belief $qm(.)$ knowing that the absolute truth is now not in $D$, i.e. that $\bar{D}$ (the complement of $D$) is absolutely true. The problem is a bit different from the previous one since the conditioning term $\bar{D}$ in this example does not belong to the hyper-power set $D^\Theta$. In such a case, one actually has to work directly on the super-power set[^3] as proposed in [@DSmTBook_2006] (Chap. 8). $\bar{D}$ belongs to $D^\Theta$ only if Shafer’s model (or some other specific hybrid model - see case 2 below) is adopted, i.e. when region $D$ has no overlap with regions $A$, $B$ or $C$. “The truth is not in $D$” is in general (i.e. except with Shafer’s model or some specific hybrid models) not equivalent to “the truth is in $A\cup B\cup C$”, but only to “the truth is in $\bar{D}$”. That’s why the following two cases need to be analyzed:\
- [**[Case 1]{}**]{}: $ \bar{D}\neq A\cup B \cup C.$
If we consider the model represented in Figure \[venn2\], then it is clear that $\bar{D}\neq A\cup B \cup C$.
The Super-Power Set Decomposition (SPSD) is the following:
- if the truth is in $A$, then $D_1$ is formed by all non-empty parts of $A$;
- $D_2$ is formed by all non-empty parts of $\bar{A}$;
- $D_3$ is formed by what’s left, i.e. $D_3 =(S^\Theta \setminus \{\emptyset\}) \setminus (D_1\cup D_2)$; thus $D_3$ is formed by all elements from $S^\Theta$ which have the form of unions of some element(s) from $D_1$ and some element(s) from $D_2$, or by all elements from $S^\Theta$ that overlap $A$ and $\bar{A}$.
In our particular example: $D_1$ is formed by all non-empty parts of $\bar{D}$; $D_2$ is formed by all non-empty parts of $D$; $D_3 =\{A, B, C, A\cup D, B\cup D, A\cup B, \ldots\}$.\
- Using QBCR1: one gets: $$\begin{aligned}
qm_{QBCR1}(A\cap \bar{D}|\bar{D})&=L_1\\
qm_{QBCR1}(C\cap \bar{D}|\bar{D})&=L_1\\
qm_{QBCR1}(\bar{D}|\bar{D})&=L_4\end{aligned}$$
- Using QBCR2: one gets $$\begin{aligned}
qm_{QBCR2}(A\cap \bar{D}|\bar{D})&=L_1+\frac{1}{2}L_4 = L_1+ L_{[\frac{4}{2}]}=L_3\\
qm_{QBCR2}(C\cap \bar{D}|\bar{D})&=L_1+\frac{1}{2}L_4=L_3\end{aligned}$$
Note that with both conditioning rules one gets quasi-normalized qualitative belief masses. The results indicate that zones $A$ and $C$ have the same level of qualitative belief after the conditioning, which is normal. QBCR1 however, which is more prudent, just commits the higher belief to the whole zone $A\cup B\cup C$, which actually represents the less specific information, while QBCR2 commits equal beliefs to the restricted zones $A\cap \bar{D}$ and $C\cap \bar{D}$ only. As far as only the minimal surface of the zone to bomb is concerned (and if zones $A\cap \bar{D}$ and $C\cap \bar{D}$ have the same surface), a random decision has to be taken between the two possibilities. Of course some other military constraints may need to be taken into account in the decision process if the random choice is not preferred.\
- [**[Case 2]{}**]{}: $ \bar{D}= A\cup B \cup C.$ This case occurs only when $D\cap (A\cup B\cup C)=\emptyset$, as for example in the following model[^4]. In this second case, “the truth is not in $D$” is equivalent to “the truth is in $A\cup B \cup C$”. The decomposition is the following: $D_1$ is formed by all non-empty parts of $A\cup B\cup C$; $D_2 = \{D\}$; $D_3 = \{A\cup D, B\cup D, C\cup D, A\cup B\cup D, A\cup C\cup D, B\cup C\cup D,
A\cup B\cup C\cup D, (A\cap B)\cup D, (A\cap B\cap C)\cup D, ...\}$.
- Using QBCR1: one gets $$\begin{aligned}
qm_{QBCR1}(A|\bar{D})&=L_1\\
qm_{QBCR1}(C|\bar{D})&=L_1\\
qm_{QBCR1}(A\cup B \cup C|\bar{D})&=L_4\end{aligned}$$
- Using QBCR2: one gets $$\begin{aligned}
qm_{QBCR2}(A|\bar{D})&=L_3\\
qm_{QBCR2}(C|\bar{D})&=L_3\\\end{aligned}$$
The same concluding remarks as for Case 1 can be drawn for Case 2. Note that in this case there is uncertainty in the decision to bomb zone $A$ or zone $C$ because they have the same supporting belief. The only difference with respect to Case 1 is that the zone to be bombed (whichever one is chosen, $A$ or $C$) will remain larger than in Case 1, because $D$ has no intersection with $A$, $B$, or $C$ in this model.
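The Case 2 decomposition can be enumerated mechanically. The sketch below is our own illustration, restricted to the union-closed part of $S^\Theta$ (i.e. ignoring intersections and complements); propositions are represented as sets of the atoms $A,B,C,D$, with $D$ disjoint from $A\cup B\cup C$ as in this case:

```python
from itertools import combinations

# Atoms of the frame; in Case 2 the zone D is disjoint from A, B and C.
atoms = ('A', 'B', 'C', 'D')
props = [frozenset(c) for r in range(1, 5) for c in combinations(atoms, r)]

truth = frozenset({'A', 'B', 'C'})                 # the truth is in A u B u C
D1 = [p for p in props if p <= truth]              # non-empty parts of the truth
D2 = [p for p in props if p <= frozenset({'D'})]   # parts of the complement: {D}
D3 = [p for p in props if p not in D1 and p not in D2]  # overlap both sides

assert len(D1) == 7 and D2 == [frozenset({'D'})] and len(D3) == 7
```

With four atoms this recovers the counts of the text: seven parts of the truth, the single element $\{D\}$, and seven mixed unions.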
Example 3
---------
Let’s modify the previous example to examine what happens when an [*[unconventional]{}*]{} bombing strategy is used. Here we still consider four zones under surveillance, i.e. $\Theta=\{A,B,C,D\}$ and $L=\{L_0,L_1,L_2,L_3,L_4,L_5,L_6\}$ but with the following prior quasi-normalized qualitative basic belief mass $qm(.)$: $$qm(A)=L_1, qm(C)=L_3, qm(D)=L_2$$
All other qualitative masses take the value $L_0$. Such a prior normally/rationally suggests bombing zone $C$ first, since it is the one carrying the highest belief on the location of enemies. But for some unknown reasons (military, political or whatever) let’s assume that the headquarters has finally decided to bomb $D$ first. Let’s examine how the prior $qm(.)$ will be revised with QBCR1 and QBCR2 in such a situation for the two cases:\
- [**[Case 1]{}**]{}: $ \bar{D}\neq A\cup B \cup C.$
- Using QBCR1: $qm(A)=L_1$ is transferred to $A\cap \bar{D}$, since $A\cap \bar{D}$ is the largest element from $ \bar{D}$ which is included in $A$, so we get $qm_{QBCR1}(A\cap \bar{D}| \bar{D})=L_1$; similarly, $qm(C)=L_3$ is transferred to $C\cap \bar{D}$, since $C\cap \bar{D}$ is the largest element from $\bar{D}$ which is included in $C$, so we get $qm_{QBCR1}(C\cap \bar{D}| \bar{D})=L_3$. Also, $qm(D)=L_2$ is transferred to $ \bar{D}$ since no element from $\bar{D}$ is included in $D$, therefore $qm_{QBCR1}( \bar{D}| \bar{D})=L_2$. Note that this qualitative conditioned mass $qm_{QBCR1}(.)$ is quasi-normalized since $L_1+L_3+L_2=L_6=L_{\max}$. In summary, with QBCR1 one gets in this case: $$\begin{aligned}
qm_{QBCR1}(A\cap \bar{D}|\bar{D})&=L_1\\
qm_{QBCR1}(C\cap \bar{D}|\bar{D})&=L_3\\
qm_{QBCR1}(\bar{D}|\bar{D})&=L_2\end{aligned}$$
- Using QBCR2: $qm(A)=L_1$ is transferred to $A\cap \bar{D}$, and $qm(C)=L_3$ is transferred to $C\cap \bar{D}$. Since no qualitative focal element exists in $\bar{D}$, then $qm(D)=L_2$ is transferred to $ \bar{D}$, and we get the same result as for QBCR1.\
- [**[Case 2]{}**]{}: $ \bar{D}= A\cup B \cup C.$
- Using QBCR1: the qualitative masses of $A$, $B$, $C$ do not change since they are included in $A\cup B\cup C$ where the truth is. The qualitative mass of $D$ becomes [*[zero]{}*]{} (i.e. it takes the linguistic value $L_0$) since $D$ is outside the truth, and $qm(D)=L_2$ is transferred to $A\cup B\cup C$. Hence: $$\begin{aligned}
qm_{QBCR1}(A|\bar{D})&=L_1\\
qm_{QBCR1}(C|\bar{D})&=L_3\\
qm_{QBCR1}(A\cup B \cup C|\bar{D})&=L_2\end{aligned}$$ This resulting qualitative conditional mass is also quasi-normalized.\
- Using QBCR2: the qualitative mass of $D$ becomes (linguistically) [*[zero]{}*]{} since $D$ is outside the truth, but now $qm(D)=L_2$ is equally split between $A$ and $C$, since they are the only qualitative focal elements from $D_1$ (the set of all parts of $A\cup B\cup C$); therefore $A$ and $C$ each receive $(1/2)L_2=L_1$. Hence: $$qm_{QBCR2}(A|\bar{D})=L_1 + (1/2)L_2 = L_1 + L_{2/2} = L_1 + L_1 = L_2$$ $$qm_{QBCR2}(C|\bar{D})=L_3 + (1/2)L_2 = L_3 + L_{2/2} = L_3 + L_1 = L_4$$ Again, the resulting qualitative conditional mass is quasi-normalized.\
As a concluding remark, we see that even if an [*[unconventional]{}*]{} bombing strategy is chosen first, the results obtained by QBCR1 or QBCR2 remain legitimate and coherent with intuition, since they commit the highest belief to either $C\cap \bar{D}$ (Case 1) or $C$ (Case 2), which is normal because the prior belief mass on $C$ was the highest one before bombing $D$.
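The two revision schemes of this example fit in a few lines for Case 2. The sketch below is our own dict-based illustration (proposition and function names are ours, label indices stand for $L_i$); it implements only the transfers needed here: QBCR1 moves the outside mass to the whole truth, QBCR2 splits it equally among the focal elements inside the truth.

```python
# Prior of Example 3 with propositions as frozensets of atoms; values are
# label indices (qm(A)=L_1, qm(C)=L_3, qm(D)=L_2).  Case 2: truth = A u B u C.
ABC = frozenset('ABC')
prior = {frozenset('A'): 1, frozenset('C'): 3, frozenset('D'): 2}

def qbcr1(qm, truth):
    """Mass of propositions outside the truth goes to the whole truth."""
    out = {}
    for X, m in qm.items():
        target = X if X <= truth else truth
        out[target] = out.get(target, 0) + m
    return out

def qbcr2(qm, truth):
    """Outside mass is split equally among focal elements inside the truth."""
    inside = [X for X in qm if X <= truth]
    extra = sum(m for X, m in qm.items() if not X <= truth)
    return {X: qm[X] + extra // len(inside) for X in inside}

assert qbcr1(prior, ABC) == {frozenset('A'): 1, frozenset('C'): 3, ABC: 2}
assert qbcr2(prior, ABC) == {frozenset('A'): 2, frozenset('C'): 4}  # L_2, L_4
```

Both outputs are quasi-normalized ($1+3+2=6$ and $2+4=6$), matching the results above.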
Example 4
---------
Let’s complicate a bit the previous example by working directly with a prior $qm(.)$ defined on the super-power set $S^\Theta$ (see the previous Footnote 3), i.e. the complement is allowed among the set of propositions to deal with. As previously, we consider four zones under surveillance, i.e. $\Theta=\{A,B,C,D\}$ and $L=\{L_0,L_1,L_2,L_3,L_4,L_5,L_6\}$. The following prior qualitative basic belief mass $qm(.)$ is extended from the hyper-power set to the super-power set, i.e. $qm(.): S^\Theta \rightarrow L$: $$qm(A)=L_1, \qquad qm(C)=L_1, \qquad qm(D)=L_2$$ $$qm(C\cup D)=L_1, \qquad qm(C\cap \bar{D})=L_1$$
All other qualitative masses take the value $L_0$. This qualitative mass is quasi-normalized since $$L_1+L_1+L_2+L_1+L_1 = L_{1+1+2+1+1} = L_6 = L_{\max}$$
We assume that the military headquarter has decided to bomb in priority region $D$ because there was a high qualitative belief on the presence of enemies in $D$ according to the prior qbba $qm(.)$. But after bombing and verification, it turns out that the enemies were not in $D$ (same scenario as for example 2). Let’s examine the results of the conditioning by the rules QBCR1 and QBCR2 for the cases 1 and 2:\
- [**[Case 1]{}**]{}: $ \bar{D}\neq A\cup B \cup C.$
- Using QBCR1: $qm(A)=L_1$ is transferred to $A\cap \bar{D}$, since $A\cap \bar{D}$ is the largest element (with respect to inclusion) from $ \bar{D}$ which is included in $A$. $qm(C)=L_1$ is similarly transferred to $C\cap \bar{D}$, since $C\cap \bar{D}$ is the largest element from $\bar{D}$ which is included in $C$. $qm(C\cup D)=L_1$ is also transferred to $C\cap \bar{D}$ since $C\cap \bar{D}$ is the largest element from $\bar{D}$ which is included in $C\cup D$. $qm(D)=L_2$ is transferred to $\bar{D}$ since no element from $\bar{D}$ is included in $D$. In summary, we get: $$\begin{aligned}
qm_{QBCR1}(A\cap \bar{D}|\bar{D})&=L_1\\
qm_{QBCR1}(C\cap \bar{D}|\bar{D})&=qm(C\cap \bar{D})+qm(C) +qm(C\cup D) = L_1+L_1+L_1=L_3\\
qm_{QBCR1}(\bar{D}|\bar{D})&=L_2\end{aligned}$$ All others are equal to $L_0$. The resulting qualitative conditioned mass is quasi-normalized since $L_1+L_3+L_2=L_6=L_{\max}$.
- Using QBCR2: Similarly as for QBCR1, $qm(A)=L_1$ is transferred to $A\cap \bar{D}$; also $qm(C)=L_1$ and $qm(C\cup D)=L_1$ are transferred to $C\cap \bar{D}$. But now, differently, $qm(D)=L_2$ is equally split to the focal elements of $\bar{D}$, but only $C\cap \bar{D}$ is focal for $\bar{D}$, so $C\cap \bar{D}$ receives the whole qualitative mass of $D$. Finally we get: $$\begin{aligned}
qm_{QBCR2}(A\cap \bar{D}|\bar{D})&=L_1\\
qm_{QBCR2}(C\cap \bar{D}|\bar{D})&=qm(C\cap \bar{D})+qm(C) +qm(C\cup D)+qm(D) = L_1+L_1+L_1+L_2=L_5\\\end{aligned}$$ All others are equal to $L_0$. The resulting qualitative conditioned mass is quasi-normalized since $L_1+L_5=L_6=L_{\max}$.\
The results obtained by QBCR1 and QBCR2 are coherent with rational human reasoning, since after bombing zone $D$ we get, in this case, a higher belief in finding enemies in $C\cap \bar{D}$ than in $A\cap \bar{D}$, which is normal given the prior information we had before bombing $D$. QBCR2 is more specific than QBCR1. Said differently, QBCR1 is more prudent than QBCR2 in the revision of the masses of belief.\
- [**[Case 2]{}**]{}: $ \bar{D}= A\cup B \cup C.$
- Using QBCR1: $qm(C\cup D)=L_1$ is transferred to $C$ since $C$ is the largest element (with respect to inclusion) from $A\cup B\cup C$ which is included in $C\cup D$. $qm(C\cap \bar{D})=qm(C)$ since $C\cap (A\cup B\cup C)=C$. $qm(D)=L_2$ is transferred to $A\cup B\cup C$ since no element from $A\cup B\cup C$ is included in $D$, so the qualitative mass of $D$ becomes [*[zero]{}*]{} (i.e. it takes the linguistic value $L_0$). Thus we finally obtain: $$\begin{aligned}
qm_{QBCR1}(A|\bar{D})&=L_1\\
qm_{QBCR1}(C|\bar{D})&=qm(C)+qm(C\cup D) + qm(C\cap\bar{D}) = L_1+L_1+L_1=L_3\\
qm_{QBCR1}(A\cup B \cup C|\bar{D})&=L_2\end{aligned}$$ All others are equal to $L_0$. The resulting qualitative conditioned mass is quasi-normalized since $L_1+L_3+L_2=L_6=L_{\max}$.
- Using QBCR2: $qm(C\cup D)=L_1$ and $qm(C\cap \bar{D})=L_1$ are transferred to $C$ as in QBCR1. But $qm(D)=L_2$ is equally split among the focal qualitative elements of $ \bar{D}= A\cup B \cup C$, which are $A$ and $C$, so each of them receives $1/2\cdot L_2=L_{2/2}=L_1$. Whence $$qm_{QBCR2}(A|\bar{D})=qm(A)+\frac{1}{2}qm(D) = L_1+ \frac{1}{2} L_2=L_1+L_1=L_2$$ $$qm_{QBCR2}(C|\bar{D})=[qm(C)+qm(C\cup D) + qm(C\cap \bar{D})] + \frac{1}{2}qm(D)
= [L_1+L_1+L_1] + L_1=L_4$$ All others are equal to $L_0$. The resulting qualitative conditioned mass is quasi-normalized since $L_2+L_4=L_6=L_{\max}$.\
The results obtained by QBCR1 and QBCR2 are again coherent with rational human reasoning since after bombing zone $D$ we get, in such case, a higher belief in finding enemies in $C$ than in $A$ which is normal due to the prior information we had before bombing $D$ and taking into account the constraint of the model.\
Conclusions
===========
In this paper, we have designed two Qualitative Belief Conditioning Rules in order to revise qualitative basic belief assignments, and we have presented some examples to show how they work. QBCR1 is more prudent than QBCR2 because it revises the belief through a less specific transfer than QBCR2 does; we use it when we are less confident in the source. QBCR2 is more optimistic and refined; we use it when we are more confident in the source. Of course, the qualitative conditioning process is less precise than its quantitative counterpart because it is based on a rough approximation, as normally happens when working with linguistic labels. Such qualitative methods nevertheless present some interest for manipulating information and beliefs expressed in natural language by human experts, and can be helpful for high-level decision support systems.
[99]{}
F. Smarandache and J. Dezert, “Enrichment of Qualitative Beliefs for Reasoning under Uncertainty", *Proceedings of Fusion 2007 International Conference on Information Fusion*, Québec, Canada, July 9-12, 2007.
G. Shafer, “A Mathematical Theory of Evidence", *Princeton University Press*, Princeton, NJ, 1976.
F. Smarandache and J. Dezert (Editors), “Advances and Applications of DSmT for Information Fusion (Collected works)", *American Research Press*, Rehoboth, 2004.
F. Smarandache and J. Dezert (Editors), “Advances and Applications of DSmT for Information Fusion (Collected works)", Vol.2, *American Research Press*, Rehoboth, 2006.
[^1]: A more precise multiplication operator has been proposed in [@Li2007].
[^2]: Only non minimal linguistic values are given here since all the masses of other elements (i.e. non focal elements) take by default the value $L_0$.
[^3]: The super-power $S^\Theta$ is the Boolean algebra $(\Theta,\cap,\cup,\mathcal{C})$ where $\mathcal{C}$ denotes the complement, while hyper-power set $D^\Theta$ corresponds to $(\Theta,\cap,\cup)$.
[^4]: This condition is obviously also satisfied for Shafer’s model, i.e. when all regions are well separate/distinct.
---
abstract: |
This paper studies the inclusions between different Sobolev-Lorentz spaces $W^{1,(p,q)}(\Omega)$ defined on open sets $\Omega \subset {\mathbf{R}^n},$ where $n \ge 1$ is an integer, $1<p<\infty$ and $1 \le q \le \infty.$ We prove that if $1 \le q<r \le \infty,$ then $W^{1,(p,q)}(\Omega)$ is strictly included in $W^{1,(p,r)}(\Omega).$
We show that although $H^{1,(p,\infty)}(\Omega) \subsetneq W^{1,(p,\infty)}(\Omega)$ where $\Omega \subset {\mathbf{R}}^n$ is open and $n \ge 1,$ there exists a partial converse. Namely, we show that if a function $u$ in $W^{1,(p,\infty)}(\Omega), n \ge 1$ is such that $u$ and its distributional gradient $\nabla u$ have absolutely continuous $(p,\infty)$-norm, then $u$ belongs to $H^{1,(p,\infty)}(\Omega)$ as well.
We also extend the Morrey embedding theorem to the Sobolev-Lorentz spaces $H_{0}^{1,(p,q)}(\Omega)$ with $1 \le n<p<\infty$ and $1 \le q \le \infty.$ Namely, we prove that the Sobolev-Lorentz spaces $H_{0}^{1,(p,q)}(\Omega)$ embed into the space of Hölder continuous functions on $\overline{\Omega}$ with exponent $1-\frac{n}{p}$ whenever $\Omega \subset {\mathbf{R}}^n$ is open, $1 \le n<p<\infty,$ and $1 \le q \le \infty.$
address: |
Ş. Costea\
Department of Mathematics\
University of Pisa\
Largo Bruno Pontecorvo 5\
I-56127 Pisa, Italy
author:
- Şerban Costea
title: 'Sobolev-Lorentz spaces in the Euclidean setting and counterexamples'
---
[^1]
Introduction
============
In this paper we study the Sobolev-Lorentz spaces in the Euclidean setting and the inclusions between them. This paper is motivated by the results obtained in my 2006 PhD thesis [@Cos0] and in my book [@Cos3]. There I studied the Sobolev-Lorentz spaces and the associated Sobolev-Lorentz capacities in the Euclidean setting for $n \ge 2.$ The restriction on $n$ there was due to the fact that I studied the $n,q$-capacity for $n>1.$
The Sobolev-Lorentz spaces have also been studied by Cianchi-Pick in [@CiPi1] and [@CiPi2], by Kauhanen-Koskela-Mal[ý]{} in [@KKM], and by Mal[ý]{}-Swanson-Ziemer in [@MSZ].
The classical Sobolev spaces were studied by Gilbarg-Trudinger in [@GT], Maz’ja in [@Maz], Evans in [@Eva], Heinonen-Kilpeläinen-Martio in [@HKM], and by Ziemer in [@Zie].
The Lorentz spaces were studied by Bennett-Sharpley in [@BS], Hunt in [@Hun], and by Stein-Weiss in [@SW].
The Newtonian Sobolev spaces in the metric setting were studied by Shanmugalingam in [@Sha1] and [@Sha2]. See also Heinonen [@Hei]. Costea-Miranda studied the Newtonian Lorentz Sobolev spaces and the corresponding global $p,q$-capacities in [@CosMir].
There are several other definitions of Sobolev-type spaces in the metric setting when $p=q$; see Haj[ł]{}asz [@Haj1], [@Haj2], Heinonen-Koskela [@HeK], Cheeger [@Che], and Franchi-Haj[ł]{}asz-Koskela [@FHK]. It has been shown that under reasonable hypotheses, the majority of these definitions yields the same space; see Franchi-Haj[ł]{}asz-Koskela [@FHK] and Shanmugalingam [@Sha1].
The Sobolev-Lorentz relative $p,q$-capacity was studied in the Euclidean setting by Costea (see [@Cos0], [@Cos1] and [@Cos3]) and by Costea-Maz’ya [@CosMaz]. The Sobolev $p$-capacity was studied by Maz’ya [@Maz] and by Heinonen-Kilpeläinen-Martio [@HKM] in ${\mathbf{R}}^n$ and by J. Bj[ö]{}rn [@Bjo], Costea [@Cos2] and Kinnunen-Martio [@KiM1] and [@KiM2] in metric spaces.
The Sobolev-Lorentz spaces can be also studied in the Euclidean setting for $n=1.$ We do it in this paper. Many of the results on Sobolev-Lorentz spaces that we obtained in [@Cos0] and [@Cos3] in dimension $n \ge 2$ were extended here to the case $n=1.$
In Section \[Section Lorentz spaces\] we start by presenting some of the basic properties of the Lorentz spaces $L^{p,q}(\Omega; {\mathbf{R}}^m),$ where $\Omega \subset {\mathbf{R}}^n$ is open, $n, m \ge 1$ are integers, $1<p<\infty$ and $1 \le q \le \infty.$
It is known that $L^{p,q}((0, \Omega_n r^n)) \subsetneq L^{p,s}((0, \Omega_n r^n)).$ We see this in Theorem \[Lpr stricly included in Lps\] by constructing a function $u$ in $L^{p,s}((0, \Omega_n r^n)) \setminus L^{p,q}((0, \Omega_n r^n)).$ Here $r>0,$ $n \ge 1$, $1<p<\infty$ and $1 \le q<s \le \infty.$
This function $u$ is used in Theorem \[Lpr strictly included in Lps via grad of smooth fns\] to construct a radial function $v$ that is smooth in the punctured ball $B^{*}(0,r)$ such that $|\nabla v|$ is in $L^{p,s}(B(0,r)) \setminus L^{p,q}(B(0,r)).$ Later it will be shown in Theorem \[W1pr strictly included in W1ps\] that $v$ is in $W^{1,(p,s)}(B(0,r)) \setminus W^{1,(p,q)}(B(0,r)).$ This shows that the inclusion $W^{1,(p,q)}(B(0,r)) \subset W^{1,(p,s)}(B(0,r))$ is strict whenever $r>0,$ $n \ge 1$, $1<p<\infty$ and $1 \le q<s \le \infty.$
In Section \[Section Sobolev Lorentz spaces\] we revisit many of the results from my PhD thesis [@Cos0 Chapter V] and from my book [@Cos3 Chapter 3] and we extend them to the case $n=1.$ We improve some of the old results from [@Cos0 Chapter V] and from [@Cos3 Chapter 3].
We also obtain some new results in this section. Among them we mention the case $q=\infty$ for Theorems \[H=W revisited\] and \[H=H\_0 revisited\] (see the discussion below) as well as the strict inclusion $W^{1,(p,q)}(B(0,r)) \subsetneq W^{1,(p,s)}(B(0,r))$ that we discussed above. As before, $r>0,$ $n \ge 1$, $1<p<\infty$ and $1 \le q<s \le \infty$ (see Theorem \[W1pr strictly included in W1ps\]).
For $n \ge 2,$ we proved in Costea [@Cos0] and [@Cos3] (by using partition of unity and convolution) that $H^{1,(p,q)}(\Omega)=W^{1,(p,q)}(\Omega)$ whenever $1<p<\infty$ and $1 \le q<\infty.$ The partition of unity and convolution technique used there is similar to the techniques used by Ziemer in [@Zie] and by Heinonen-Kilpeläinen-Martio in [@HKM].
We proved in [@Cos0] and [@Cos3] (for $n \ge 2$) that $H^{1,(p,\infty)}(\Omega) \subsetneq W^{1,(p,\infty)}(\Omega).$ Once we constructed a function $u \in W^{1,(p, \infty)}(\Omega)$ such that its distributional gradient $\nabla u$ did not have an absolutely continuous $(p,\infty)$-norm, we proved there that $u$ was not in $H^{1,(p,\infty)}(\Omega).$
In Section \[Section Sobolev Lorentz spaces\] of this paper, Proposition \[up in Lpinfty loc and in W1pinfty loc minus H1pinfty\] and Theorem \[H1pinfty subsetneq W1pinfty\] show that $H^{1,(p,\infty)}(\Omega) \subsetneq W^{1,(p,\infty)}(\Omega)$ for $n \ge 1.$ In this paper we also give a partial converse. Namely, we show in Theorem \[H=W revisited\] that if a function $u$ in $W^{1,(p,q)}(\Omega), n \ge 1,$ $1 \le q \le \infty$ is such that $u$ and its distributional gradient $\nabla u$ have absolutely continuous $(p,q)$-norm, then $u$ belongs to $H^{1,(p,q)}(\Omega)$ as well. This result is new for $q=\infty$ and $n \ge 1$ and improves a result from [@Cos0] and [@Cos3], proved there for $n \ge 2$ and $1 \le q<\infty.$ We proved this result via a partition of unity and convolution argument, because convolution and partition of unity work well on functions $u$ that have absolutely continuous $(p,q)$-norm along with their distributional gradients $\nabla u.$
In Theorem \[H=H\_0 revisited\] we show that if a function $u$ in $W^{1,(p,q)}({\mathbf{R}}^n), n \ge 1$ is such that $u$ and its distributional gradient $\nabla u$ have absolutely continuous $(p,q)$-norm, $1 \le q \le \infty,$ then $u$ belongs to $H_{0}^{1,(p,q)}({\mathbf{R}}^n)$ as well. This result is new when $q=\infty$ and $n \ge 1$ and improves a result from [@Cos0] and [@Cos3], proved there for $n \ge 2$ and $1 \le q<\infty.$
In Section \[Section Morrey embedding theorems\] (among other things) we prove the Morrey embedding theorem for the Sobolev-Lorentz spaces $H_{0}^{1,(p,q)}(\Omega).$
For $n=1,$ we prove in Theorem \[Holder 1/p’ continuity for u in W1pq n equal 1\] that if $\Omega \subset {\mathbf{R}}$ is an open interval, then $H_{0}^{1,(p,q)}(\Omega)$ and $W^{1,(p,q)}(\Omega)$ embed into the space of Hölder continuous functions in $\overline{\Omega}$ with exponent $1-\frac{1}{p}.$
For $1<n<p<\infty,$ we prove in Theorem \[Morrey embedding 1<n<p\] (among other things) that the spaces $H_{0}^{1,(p,q)}(\Omega)$ and $C_{0}(\Omega) \cap W^{1,(p,q)}(\Omega)$ embed into the space $C^{0, 1-\frac{n}{p}}(\overline{\Omega})$ of Hölder continuous functions in $\overline{\Omega}$ with exponent $1-\frac{n}{p}$ whenever $\Omega \subset {\mathbf{R}}^n$ is open and $1 \le q \le \infty.$
Since we deal with functions in $H_{0}^{1,(p,q)}(\Omega)$ or in $C_{0}(\Omega) \cap W^{1,(p,q)}(\Omega)$ when $1<n<p<\infty$ and $1 \le q \le \infty,$ no regularity assumptions on $\partial \Omega$ are needed.
When $1<n<p<\infty,$ we first prove the Morrey embedding $C_{0}(\Omega) \cap W^{1,(p,q)}(\Omega) \hookrightarrow C^{0, 1-\frac{n}{p}}(\overline{\Omega}).$ The embedding $H_{0}^{1,(p,q)}(\Omega) \hookrightarrow C^{0, 1-\frac{n}{p}}(\overline{\Omega})$ follows afterwards, after an approximation argument with functions from $C_{0}^{\infty}(\Omega).$ We also rely on the well-known Poincaré inequality in the Euclidean setting and we invoke the classical embedding for $1<n<s<p<\infty,$ proved by Evans in [@Eva] and by Gilbarg-Trudinger in [@GT].
Notations
=========
We recall the standard notation to be used throughout this paper. Throughout this paper, $C$ will denote a positive constant whose value is not necessarily the same at each occurrence; it may vary even within a line. $C(a,b,\ldots)$ is a constant that depends only on the parameters $a,b,\ldots.$
Throughout this paper $\Omega$ will denote a nonempty open subset of ${\mathbf{R}}^n,$ while $dx=d m_n(x)$ will denote the Lebesgue $n$-measure in ${\mathbf{R}}^n,$ where $n \ge 1$ is an integer. For $E \subset {\mathbf{R}}^n,$ the boundary, the closure, and the complement of $E$ with respect to ${\mathbf{R}}^n$ will be denoted by $\partial E,$ $\overline{E},$ and $\mathbf{R}^n \setminus E,$ respectively, while $|E|=\int_{E} dx$ will denote the measure of $E$ whenever $E$ is measurable; $\mbox{diam }E$ is the Euclidean diameter of $E$ and $E \subset \subset F$ means that $\overline{E}$ is a compact subset of $F.$
Moreover, $B(a,r)= \{ x \in \mathbf{R}^n: |x-a|<r \}$ is the open ball with center $a \in \mathbf{R}^n$ and radius $r>0,$ $B^{*}(a,r)=\{ x \in \mathbf{R}^n: 0<|x-a|<r \}$ is the punctured open ball with center $a \in \mathbf{R}^n$ and radius $r>0,$ while $\overline{B}(a,r)= \{ x \in \mathbf{R}^n: |x-a| \le r \}$ is the closed ball with center $a \in \mathbf{R}^n$ and radius $r>0.$
For two sets $A, B \subset \mathbf{R}^n,$ we define $\mbox{dist}(A,B),$ the distance between $A$ and $B,$ by $$\mbox{dist}(A,B)=\inf_{a \in A, b \in B} |a-b|.$$
For $n \ge 1$ integer, $\Omega_n$ denotes the Lebesgue measure of the $n$-dimensional unit ball. (That is, $\Omega_n=|B(0,1)|$). For $n \ge 2$ integer, $\omega_{n-1}$ denotes the spherical measure of the $(n-1)$-dimensional sphere; thus, $\omega_{n-1}=n \Omega_n$ for every integer $n \ge 2.$
For $\Omega \subset \mathbf{R}^n,$ $C(\Omega)$ is the set of all continuous functions $u: \Omega \rightarrow {\mathbf{R}},$ while $C(\overline{\Omega})$ is the set of all continuous functions $u: \overline{\Omega} \rightarrow {\mathbf{R}}.$ Moreover, for a measurable $u: \Omega \rightarrow {\mathbf{R}},$ $\mbox {supp } u$ is the smallest closed set such that $u$ vanishes outside $\mbox {supp } u.$ We also define $$\begin{aligned}
C^{k}(\Omega)&=&\{ \varphi: \Omega \rightarrow \mathbf{R}: \mbox{ the $k$th-derivative of $\varphi$ is continuous} \}\\
C_{0}^{k}(\Omega)&=&\{\varphi \in C^{k}(\Omega): \mbox{supp } \varphi \subset \subset \Omega \}\\
C^{\infty}(\Omega)&=& \bigcap_{k=1}^{\infty} C^{k}(\Omega)\\
C_{0}^{\infty}(\Omega)&=& \{ \varphi \in C^{\infty}(\Omega): \mbox{supp } \varphi \subset \subset \Omega\}.
\end{aligned}$$ For a function $\varphi \in C^{\infty}(\Omega)$ we write $$\nabla \varphi=(\partial_1 \varphi, \partial_2 \varphi, \ldots, \partial_n \varphi)$$ for the gradient of $\varphi.$
Let $f: \Omega \rightarrow {\mathbf{R}}$ be integrable. For $E \subset
\Omega$ measurable with $0<|E|<\infty,$ we define $$f_{E}=\frac{1}{|E|} \int_{E} f dx.$$
For a measurable vector-valued function $f=(f_1, \ldots, f_m):
\Omega \rightarrow \mathbf{R}^m,$ we let $$\label{def abs val of a vector function}
|f|=\sqrt{f_1^2+f_2^2+\ldots+f_m^2}.$$ $L^{\infty}(\Omega; \mathbf{R}^m)$ denotes the space of essentially bounded measurable functions $u: \Omega \rightarrow \mathbf{R}^m$ with $$||u||_{L^{\infty}(\Omega)}=\mbox{ess} \sup |u| <\infty.$$
Lorentz Spaces {#Section Lorentz spaces}
==============
Definitions and basic properties
--------------------------------
Let $f:\Omega \rightarrow \mathbf{R}$ be a measurable function. We define $\lambda_{[f]},$ the *distribution function* of $f$ as follows (see Bennett-Sharpley [@BS Definition II.1.1] and Stein-Weiss [@SW p. 57]): $$\lambda_{[f]}(t)=|\{x \in \Omega: |f(x)| > t \}|, \qquad t \ge 0.$$ We define $f^{*},$ the *nonincreasing rearrangement* of $f$ by $$f^{*}(t)=\inf\{v: \lambda_{[f]}(v) \le t \}, \quad t \ge 0.$$ (See Bennett-Sharpley [@BS Definition II.1.5] and Stein-Weiss [@SW p. 189]). We notice that $f$ and $f^{*}$ have the same distribution function. Moreover, for every positive $\alpha$ we have $(|f|^{\alpha})^{*}=(|f|^{*})^{\alpha}$ and if $|g|\le |f|$ a.e. on $\Omega,$ then $g^{*}\le f^{*}.$ (See Bennett-Sharpley [@BS Proposition II.1.7]). We also define $f^{**}$, the *maximal function* of $f^{*}$ by $$f^{**}(t)=m_{f^{*}}(t)=\frac{1}{t} \int_{0}^{t} f^{*}(s) ds, \quad t >0.$$ (See Bennett-Sharpley [@BS Definition II.3.1] and Stein-Weiss [@SW p. 203]).
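These three objects are easy to compute on a discretization. The sketch below is our own numerical illustration (grid size and test function are arbitrary choices, not from the text): it builds $\lambda_{[f]}$, $f^{*}$ and $f^{**}$ for a step function on $\Omega=(0,1)$.

```python
import numpy as np

N = 1000
x = (np.arange(N) + 0.5) / N           # midpoints of a uniform grid on (0, 1)
f = np.where(x < 0.3, 2.0, 1.0)        # |{f > t}| = 0.3 for 1 <= t < 2

def dist_fn(f, t, dx=1.0 / N):
    """lambda_[f](t) = |{x : |f(x)| > t}|, approximated on the grid."""
    return dx * np.count_nonzero(np.abs(f) > t)

f_star = np.sort(np.abs(f))[::-1]                 # nonincreasing rearrangement
f_2star = np.cumsum(f_star) / (np.arange(N) + 1)  # f**(t) = (1/t) int_0^t f*

assert abs(dist_fn(f, 1.5) - 0.3) < 1e-9   # f and f* share this distribution
assert np.all(np.diff(f_star) <= 0)        # f* is nonincreasing
assert np.all(f_2star >= f_star)           # f** dominates f*
```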
Throughout this paper, we denote by $p'$ the Hölder conjugate of $p \in [1,\infty],$ that is $$p'=\left\{ \begin{array}{ll}
\infty & \mbox{if $p=1$}\\
\frac{p}{p-1} & \mbox{if $1<p<\infty$}\\
1 & \mbox{if $p=\infty$}.
\end{array}
\right.$$
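In code form (our own helper, with `math.inf` playing the role of $\infty$), the piecewise definition reads:

```python
import math

def conjugate(p):
    """Hölder conjugate p' of p in [1, inf], with math.inf standing for infty."""
    if p == 1:
        return math.inf
    if p == math.inf:
        return 1.0
    return p / (p - 1)

assert conjugate(1) == math.inf
assert conjugate(2) == 2.0                 # p = 2 is self-conjugate
assert abs(conjugate(4) - 4 / 3) < 1e-12
assert conjugate(math.inf) == 1.0
```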
The *Lorentz space* $L^{p,q}(\Omega),$ $1<p<\infty,$ $1\le
q\le \infty,$ is defined as follows: $$L^{p,q}(\Omega)= \{f: \Omega \rightarrow \mathbf{R}: f \mbox { is measurable and }
||f||_{L^{p,q}(\Omega)}<\infty\},$$ where $$||f||_{L^{p,q}(\Omega)}=||f||_{p,q}=\left\{ \begin{array}{lc}
\left( \int_{0}^{\infty} (t^{\frac{1}{p}}f^{*}(t))^q \, \frac{dt}{t}
\right)^{\frac{1}{q}} & 1 \le q < \infty \\
\sup_{t>0} t \lambda_{[f]}(t)^{\frac{1}{p}}=\sup_{s>0}
s^{\frac{1}{p}} f^{*}(s) & q=\infty.
\end{array}
\right.$$ (See Bennett-Sharpley [@BS Definition IV.4.1] and Stein-Weiss [@SW p. 191]). If $1 \le
q\le p,$ then $||\cdot||_{L^{p,q}(\Omega)}$ already represents a norm, but for $p < q \le \infty$ it represents a quasinorm that is equivalent to the norm $||\cdot||_{L^{(p,q)}(\Omega)},$ where $$||f||_{L^{(p,q)}(\Omega)}=||f||_{(p,q)}=\left\{ \begin{array}{lc}
\left( \int_{0}^{\infty} (t^{\frac{1}{p}}f^{**}(t))^q \, \frac{dt}{t} \right)^{\frac{1}{q}} & 1 \le q < \infty \\
\sup_{t>0} t^{\frac{1}{p}} f^{**}(t) & q=\infty.
\end{array}
\right.$$ (See Bennett-Sharpley [@BS Definition IV.4.4]).
Namely, from Lemma IV.4.5 in Bennett-Sharpley [@BS] we have that $$||f||_{L^{p,q}(\Omega)} \le ||f||_{L^{(p,q)}(\Omega)} \le \frac {p}{p-1} ||f||_{L^{p,q}(\Omega)}$$ for every $1\le q \le \infty.$
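This two-sided equivalence can be sanity-checked numerically. The sketch below (our own discretization and test rearrangement, not from the text) compares the two quantities for one nonincreasing $f^{*}$ on $(0,1)$ with $p=2$, $q=3$:

```python
import numpy as np

p, q = 2.0, 3.0
t = np.linspace(1e-4, 1.0, 10000)
dt = t[1] - t[0]
f_star = (1.0 - t) ** 2                 # a nonincreasing rearrangement on (0,1)
f_2star = np.cumsum(f_star) * dt / t    # f**(t) = (1/t) int_0^t f*(s) ds

def lorentz_functional(g):
    """( int_0^inf (t^{1/p} g(t))^q dt/t )^{1/q}, approximated on the grid."""
    return (np.sum((t ** (1 / p) * g) ** q / t) * dt) ** (1 / q)

n_pq = lorentz_functional(f_star)       # ||f||_{p,q}
n_Ppq = lorentz_functional(f_2star)     # ||f||_{(p,q)}
assert n_pq <= n_Ppq <= (p / (p - 1)) * n_pq + 1e-9
```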
For a measurable vector-valued function $f=(f_1,\ldots, f_m): \Omega
\rightarrow \mathbf{R}^m$ we say that $f \in L^{p,q}(\Omega;
\mathbf{R}^m)$ if and only if $f_i \in L^{p,q}(\Omega)$ for $i=1,2,\ldots, m,$ if and only if $|f| \in L^{p,q}(\Omega)$ and we define $$||f||_{L^{p,q}(\Omega;\mathbf{R}^m)}=||\,|f|\,||_{L^{p,q}(\Omega)}.$$ Similarly $$||f||_{L^{(p,q)}(\Omega;
\mathbf{R}^m)}=||\,|f|\,||_{L^{(p,q)}(\Omega)}.$$ Obviously, it follows from the real-valued case that $$||f||_{L^{p,q}(\Omega; \mathbf{R}^m)} \le ||f||_{L^{(p,q)}(\Omega; \mathbf{R}^m)}
\le \frac {p}{p-1} ||f||_{L^{p,q}(\Omega; \mathbf{R}^m)}$$ for every $1 \le q \le \infty,$ and like in the real-valued case, $||\cdot||_{L^{p,q}(\Omega; \mathbf{R}^m)}$ is already a norm when $1\le q \le p,$ while it is a quasinorm when $p<q\le \infty.$
It is known that $(L^{p,q}(\Omega; \mathbf{R}^m),
||\cdot||_{L^{p,q}(\Omega; \mathbf{R}^m)})$ is a Banach space for $1\le q \le p,$ while $(L^{p,q}(\Omega; \mathbf{R}^m),
||\cdot||_{L^{(p,q)}(\Omega; \mathbf{R}^m)})$ is a Banach space for $1<p< \infty,$ $1\le q \le \infty.$ These spaces are reflexive if $1<q<\infty$ and the dual of $L^{p,q}(\Omega; \mathbf{R}^m)$ is, up to equivalence of norms, the space $L^{p',q'}(\Omega; \mathbf{R}^m)$ for $1 \le q<\infty.$ See Bennett-Sharpley [@BS Theorem IV.4.7, Corollaries I.4.3 and IV.4.8], Hunt [@Hun p. 259-262], the definition of $L^{p,q}(\Omega; \mathbf{R}^m)$ and the discussion after Proposition \[characterization of fns with absolutely continuous norm\].
Absolute continuity of the $(p,q)$-norm and reflexivity of the Lorentz spaces
-----------------------------------------------------------------------------
\[defn of fns with absolutely continuous norm\] [(See Bennett-Sharpley [@BS Definition I.3.1]).]{} Let $n, m \ge 1$ be two integers, $1<p<\infty$ and $1 \le q \le \infty.$ Suppose $\Omega \subset {\mathbf{R}^n}$ is open. Let $X=L^{p,q}(\Omega; {\mathbf{R}}^m).$ A function $f$ in $X$ is said to have *absolutely continuous norm* in $X$ if and only if $||f
\chi_{E_k}||_{X} \rightarrow 0$ for every sequence $E_k$ satisfying $E_k \rightarrow \emptyset$ a.e.
The following proposition gives a characterization of functions with absolutely continuous norm in $X=L^{p,q}(\Omega; {\mathbf{R}}^m).$
\[characterization of fns with absolutely continuous norm\] [(See Bennett-Sharpley [@BS Proposition I.3.6]).]{} A function $f$ in $X$ has absolutely continuous norm if and only if the following condition holds: whenever $f_k$ [($k=1,2, \ldots$)]{}, and $g$ are measurable functions satisfying $|f_k| \le |f|$ for all $k$ and $f_k \rightarrow g$ a.e., then $||f_k-g||_{X} \rightarrow 0.$
Let $X_a$ be the subspace of $X$ consisting of functions with absolutely continuous norm and let $X_b$ be the closure in $X$ of the set of simple functions. It is known that $X_a=X_b$ when $X=L^{p,q}(\Omega; \mathbf{R}^m)$ for $1<p<\infty,$ $1\le q\le \infty,$ and $m \ge 1$ integer. (See Bennett-Sharpley [@BS Theorem I.3.13]). Moreover, we have $X_a=X_b=X$ when $X=L^{p,q}(\Omega; \mathbf{R}^m)$ for $1<p<\infty,$ $1\le q<\infty,$ and $m \ge 1$ integer. (See Bennett-Sharpley [@BS Theorem IV.4.7 and Corollary IV.4.8] and the definition of $L^{p,q}(\Omega; \mathbf{R}^m)$).
From Proposition \[function in Lpinfty but not in Lpq q finite\] it follows that $X_a \subsetneq X$ for $X=L^{p,\infty}(\Omega;
\mathbf{R}^m)$ whenever $m \ge 1$ is an integer. Since $L^{p,\infty}(\Omega; \mathbf{R}^m)$ can be identified with $(L^{p',1}(\Omega; \mathbf{R}^m))^{*}$ (see Bennett-Sharpley [@BS Corollary IV.4.8] and the definition of $L^{p,q}(\Omega; \mathbf{R}^m)$), it follows from Bennett-Sharpley [@BS Corollaries I.4.3, I.4.4, IV.4.8 and Theorem IV.4.7] that neither $L^{p,1}(\Omega; {\mathbf{R}^m})$ nor $L^{p,\infty}(\Omega; {\mathbf{R}^m})$ is reflexive whenever $1<p<\infty.$
Strict inclusions between Lorentz spaces
----------------------------------------
\[relation between Lpr and Lps\] It is known (see Bennett-Sharpley [@BS Proposition IV.4.2]) that for every $p \in
(1,\infty)$ and $1\le r<s\le \infty$ there exists a constant $C(p,r,s)>0$ such that $$\label{relation between Lpr and Lps norm}
||f||_{L^{p,s}(\Omega)} \le C(p,r,s) ||f||_{L^{p,r}(\Omega)}$$ for all measurable functions $f \in L^{p,r}(\Omega).$ In particular, $L^{p,r}(\Omega) \subset L^{p,s}(\Omega).$ Like in the real-valued case, it follows that $$||f||_{L^{p,s}(\Omega; \mathbf{R}^m)} \le C(p,r,s)
||f||_{L^{p,r}(\Omega; \mathbf{R}^m)}$$ for every $m \ge 1$ integer and for all measurable functions $f \in
L^{p,r}(\Omega; \mathbf{R}^m),$ where $C(p,r,s)$ is the constant from (\[relation between Lpr and Lps norm\]). In particular, $$L^{p,r}(\Omega; \mathbf{R}^m) \subset
L^{p,s}(\Omega;\mathbf{R}^m) \mbox{ for every $m \ge 1$ integer.}$$
The above inclusion is strict. (See Ziemer [@Zie p. 37, Exercise 1.7]). Given an open ball $B(0,r) \subset {\mathbf{R}^n},$ where $n \ge 1$ integer, $r>0$ and $1 \le q_1<q_2 \le \infty,$ we construct next in Theorem \[Lpr stricly included in Lps\] a function $u \in L^{p,q_2}(B(0,r); \mathbf{R}^m) \setminus L^{p,q_1}(B(0,r);\mathbf{R}^m).$ In addition, Theorem \[Lpr stricly included in Lps\] will allow us to construct later a radial function $v$ that is smooth in a punctured ball $B^{*}(0,r)$ such that $|\nabla v|$ is in $L^{p,q_2}(B(0,r)) \setminus L^{p,q_1}(B(0,r)).$ It is enough to assume that $m=1$ when proving this strict inclusion.
\[Lpr stricly included in Lps\] Let $n \ge 1$ be an integer. Let $0 <\alpha \le 1$ and $r>0.$ Suppose $1<p<\infty$ and $1 \le q_1<q_2 \le \infty.$ We define $u_{r, \alpha, p}$ on $[0, \Omega_n r^n)$ by $$\label{defn of uralphap}
u_{r, \alpha, p}(t)=t^{-\frac{1}{p}} \ln^{-\alpha} \left( \frac{\Omega_n r^n e^{p \alpha}}{t} \right).$$
We also define $$\begin{aligned}
\label{defn of uradraplhap}
& & u_{rad, r, \alpha, p}:[0, r) \rightarrow [0, \infty], u_{rad, r, \alpha, p}(t):=u_{r, \alpha, p}(\Omega_n t^n) \mbox{ and }\\
\label{defn of uralphanp}
& & u_{r,\alpha, n, p}: B(0,r) \subset {\mathbf{R}^n} \rightarrow [0, \infty], u_{r, \alpha, n, p}(x):=u_{rad, r, \alpha, p}(|x|).\end{aligned}$$
Then
[(i)]{} $u_{r, \alpha, p}$ is a decreasing function on $[0, \Omega_n r^n)$ and $$\label{uralphanpstar=uralphapstar=uralphap}
u_{r, \alpha, n, p}^{*}(t)=u_{r, \alpha, p}^{*}(t)=u_{r, \alpha, p}(t) \mbox{ for all } t \in [0, \Omega_n r^n).$$
[(ii)]{} $u_{r, \alpha, n, p} \in L^{p, q_2}(B(0,r)) \setminus L^{p,q_1}(B(0,r))$ if $1\le q_1 \le \frac{1}{\alpha} < q_2 \le \infty.$
\(i) Since $u_{r, \alpha, p}$ is defined on $[0, \Omega_n r^n)$, it follows that $u_{r, \alpha, p}^{*}(t)=0$ whenever $\Omega_n r^n \le t<\infty.$ Similarly, since $u_{r, \alpha, n, p}$ is defined on $B(0,r)$ and $|B(0,r)|=\Omega_n r^n,$ it follows that $u_{r, \alpha, n, p}^{*}(t)=0$ whenever $\Omega_n r^n \le t<\infty.$ Once we show that $u_{r, \alpha, p}$ is decreasing on $[0, \Omega_n r^n),$ the definition of $u_{r, \alpha, n, p}$ implies immediately that $u_{r, \alpha, n, p}$ and $u_{r, \alpha, p}$ have the same distribution function, proving claim (i).
We see that $u_{r, \alpha, p}$ is smooth and strictly positive on $(0, \Omega_n r^n).$ Moreover, it is easy to see that $\lim_{t \rightarrow 0} u_{r, \alpha, p}(t)=\infty.$ Thus, in order to show that $u_{r, \alpha, p}$ is decreasing on $[0, \Omega_n r^n),$ it is enough to show that $u_{r, \alpha, p}'(t)<0$ on $(0, \Omega_n r^n).$
For $t \in (0, \Omega_n r^n)$ we have $$\begin{aligned}
u_{r, \alpha, p}'(t)&=&-\frac{1}{p} t^{-1-\frac{1}{p}} \ln^{-\alpha} \left( \frac{\Omega_n r^n e^{p \alpha}}{t} \right)+\alpha
t^{-1-\frac{1}{p}} \ln^{-\alpha-1} \left( \frac{\Omega_n r^n e^{p \alpha}}{t} \right)\\
&=&t^{-1-\frac{1}{p}} \ln^{-\alpha-1} \left( \frac{\Omega_n r^n e^{p \alpha}}{t} \right) \left(\alpha-\frac{1}{p} \ln \left( \frac{\Omega_n r^n e^{p \alpha}}{t} \right)\right).\end{aligned}$$
We see that $$\ln \left( \frac{\Omega_n r^n e^{p \alpha}}{t} \right)>0, \mbox{ for all } t \in (0, \Omega_n r^n).$$
Thus, for $t \in (0, \Omega_n r^n)$ we have $$\begin{aligned}
u_{r, \alpha, p}'(t)<0 &\iff& \alpha-\frac{1}{p} \ln \left( \frac{\Omega_n r^n e^{p \alpha}}{t} \right)<0 \iff\\
\ln \left( \frac{\Omega_n r^n e^{p \alpha}}{t} \right)>p \alpha &\iff& \frac{\Omega_n r^n e^{p \alpha}}{t}>e^{p \alpha} \iff \Omega_n r^n>t.\end{aligned}$$
But the last inequality is obviously true for all $t \in (0, \Omega_n r^n).$ Thus, $u_{r,\alpha, p}'$ is strictly negative on $(0, \Omega_n r^n),$ which implies that $u_{r, \alpha, p}$ is strictly decreasing on $[0, \Omega_n r^n).$
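The sign computation above can also be observed numerically. The following Python sketch (not part of the proof; the parameters $n=3,$ $r=1,$ $\alpha=1/2,$ $p=2$ are chosen only for illustration) samples $u_{r, \alpha, p}$ on a grid accumulating at $0$ and checks that the sampled values decrease:

```python
import math

# Sample u_{r,alpha,p}(t) = t^(-1/p) * ln^(-alpha)(Omega_n r^n e^(p alpha)/t)
# on a grid in (0, Omega_n r^n) and confirm that it is strictly decreasing.
# Omega_n denotes the volume of the unit ball in R^n.

def unit_ball_volume(n):
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def u(t, r=1.0, alpha=0.5, p=2.0, n=3):
    T = unit_ball_volume(n) * r ** n          # Omega_n r^n
    return t ** (-1.0 / p) * math.log(T * math.exp(p * alpha) / t) ** (-alpha)

ts = [10.0 ** k for k in range(-8, 0)]        # grid accumulating at t = 0
vals = [u(t) for t in ts]
assert all(vals[i] > vals[i + 1] for i in range(len(vals) - 1))
```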
The definition of $u_{r, \alpha, n, p}$ and the fact that $u_{r, \alpha, p}$ is continuous, strictly decreasing and strictly positive on $(0, \Omega_n r^n)$ imply immediately that $u_{r, \alpha, n, p}$ and $u_{r, \alpha, p}$ have the same distribution function. This yields (\[uralphanpstar=uralphapstar=uralphap\]), finishing the proof of (i).
\(ii) We proved in part (i) that $u_{r, \alpha, n, p}^{*}(t)=u_{r, \alpha, p}^{*}(t)$ for all $t \ge 0.$ Thus, $$||u_{r, \alpha, n, p}||_{L^{p,q}(B(0,r))}=||u_{r, \alpha, p}||_{L^{p,q}((0, \Omega_n r^n))}$$ for every $q$ in $[1,\infty].$
For $1<p<\infty$ and $1 \le q \le \infty$ we let $I_{r, \alpha, p, q}=||u_{r, \alpha, p}||_{L^{p,q}((0,\Omega_n r^n))}.$
Then via (\[uralphanpstar=uralphapstar=uralphap\]) we have $$\begin{aligned}
I_{r, \alpha, p, q}&=&\sup_{0 < t < \Omega_n r^n} t^{\frac{1}{p}} u_{r, \alpha, p}^{*}(t)
= \sup_{0 < t < \Omega_n r^n} t^{\frac{1}{p}} u_{r, \alpha, p}(t) \\
&=& \sup_{0 < t < \Omega_n r^n} \ln^{-\alpha} \left(\frac{\Omega_n r^n e^{p \alpha}}{t} \right)=(p \alpha)^{-\alpha}\end{aligned}$$ for $q=\infty$ and $$\begin{aligned}
I_{r, \alpha, p, q}^{q}&=&\int_{0}^{\Omega_n r^n} \left(t^{\frac{1}{p}} u_{r, \alpha, p}^{*}(t) \right)^q \frac{dt}{t}
= \int_{0}^{\Omega_n r^n} \left(t^{\frac{1}{p}} u_{r, \alpha, p}(t) \right)^q \frac{dt}{t}\\
&=& \int_{0}^{\Omega_n r^n} \ln^{-q \alpha} \left( \frac{\Omega_n r^n e^{p \alpha}}{t} \right) \frac{dt}{t}\end{aligned}$$ for $1 \le q<\infty.$
For a given $q$ in $[1, \infty),$ the last integral in the above sequence is an improper one and converges if and only if $1-q \alpha<0,$ that is, if and only if $\frac{1}{\alpha}<q.$ An easy computation shows that the value of the convergent improper integral is
$$\frac{1}{-1+q \alpha} \ln^{1-q \alpha} \left( \frac{\Omega_n r^n e^{p \alpha}}{\Omega_n r^n}\right)=\frac{(p \alpha)^{1-q \alpha}}{-1+q \alpha}.$$
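The antiderivative behind this computation is easy to test numerically. The Python sketch below (again not part of the proof; the values $n=1,$ $r=1/2$ so that $\Omega_n r^n=1,$ and $p=2,$ $\alpha=1/2,$ $q=4$ are illustrative) compares midpoint quadrature of the integral over $[a, \Omega_n r^n]$ with the value predicted by the antiderivative, and checks that letting $a \rightarrow 0$ approaches the stated value $(p\alpha)^{1-q\alpha}/(q\alpha-1)$:

```python
import math

# Check of the closed form
#   int_0^{Omega_n r^n} ln^{-q alpha}(Omega_n r^n e^{p alpha}/t) dt/t
#     = (p alpha)^{1 - q alpha} / (q alpha - 1),    q alpha > 1,
# with Omega_n r^n = 1, p = 2, alpha = 1/2, q = 4 (so q alpha = 2).

p, alpha, q, T = 2.0, 0.5, 4.0, 1.0
C = T * math.exp(p * alpha)                     # Omega_n r^n e^{p alpha}
qa = q * alpha

def exact(a):
    # int_a^T ln^{-qa}(C/t) dt/t, from the antiderivative
    return ((p * alpha) ** (1 - qa) - math.log(C / a) ** (1 - qa)) / (qa - 1)

def midpoint(a, N=20000):
    # substitute w = ln t, so the integrand becomes (ln C - w)^{-qa}
    lo, hi = math.log(a), math.log(T)
    h = (hi - lo) / N
    return h * sum((math.log(C) - (lo + (k + 0.5) * h)) ** (-qa)
                   for k in range(N))

assert abs(midpoint(1e-3) - exact(1e-3)) < 1e-4
# letting a -> 0 recovers the stated value (p alpha)^{1-qa}/(qa-1) = 1 here
assert abs(exact(1e-12) - 1.0) < 0.04
```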
Thus, if $1 \le q_1 \le \frac{1}{\alpha} < q_2 \le \infty,$ we have $$||u_{r, \alpha, n, p}||_{L^{p,q_2}(B(0,r))}<\infty=||u_{r, \alpha, n, p}||_{L^{p,q_1}(B(0,r))}.$$
Hence, we proved that $u_{r, \alpha, n, p} \in L^{p,q_2}(B(0,r)) \setminus L^{p,q_1}(B(0,r)).$ This shows that the inclusion $L^{p,q_1}(B(0,r)) \subset L^{p,q_2}(B(0,r))$ is strict whenever $1<p<\infty$ and $1 \le q_1 < q_2 \le \infty.$ This finishes the proof of the theorem.
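The dichotomy in the proof can also be observed numerically. In the sketch below (illustrative parameters $\Omega_n r^n=1,$ $p=2,$ $\alpha=1/2$), the weighted values $t^{\frac{1}{p}} u_{r,\alpha,p}(t)=\ln^{-\alpha}(\Omega_n r^n e^{p\alpha}/t)$ stay near $(p\alpha)^{-\alpha},$ so the weak quasinorm is finite, while the truncated integral for $q=1/\alpha$ grows without bound as the truncation parameter tends to $0$:

```python
import math

# For u(t) = t^{-1/p} ln^{-alpha}(T e^{p alpha}/t) on (0, T):
# the sup of t^{1/p} u(t) equals (p alpha)^{-alpha}, while the
# L^{p,q}-integral with q = 1/alpha diverges logarithmically.
# Parameters: T = Omega_n r^n = 1, p = 2, alpha = 1/2.

p, alpha, T = 2.0, 0.5, 1.0
C = T * math.exp(p * alpha)

def weighted(t):                       # t^{1/p} u(t) = ln^{-alpha}(C/t)
    return math.log(C / t) ** (-alpha)

sup_grid = max(weighted(T * k / 1000.0) for k in range(1, 1000))
assert abs(sup_grid - (p * alpha) ** (-alpha)) < 1e-2

def truncated(a):                      # int_a^T ln^{-1}(C/t) dt/t  (q*alpha = 1)
    return math.log(math.log(C / a)) - math.log(p * alpha)

assert truncated(1e-12) > truncated(1e-6) > truncated(1e-3)   # grows as a -> 0
```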
Theorem \[Lpr stricly included in Lps\] allows us to construct a radial function $v$ that is smooth in a punctured ball $B^{*}(0,r)$ such that $|\nabla v|$ is in $L^{p,q_2}(B(0,r)) \setminus L^{p,q_1}(B(0,r)).$ Here $r>0,$ $n \ge 1,$ $1<p<\infty$ and $1 \le q_1<q_2 \le \infty.$
\[Lpr strictly included in Lps via grad of smooth fns\] Let $n \ge 1$ be an integer. Let $0<\alpha \le 1$ and $r>0.$ Suppose $1<p<\infty$ and $1 \le q_1<q_2 \le \infty.$
We define $$\label{defn of fradralphap}
f_{rad, r, \alpha, p}:[0,r) \rightarrow [0, \infty], f_{rad, r, \alpha, p}(t)=\int_{t}^{r} u_{rad, r, \alpha, p}(s) ds,$$ where $u_{rad, r, \alpha, p}$ is the function defined in [(\[defn of uradraplhap\])]{}. We also define $$\label{defn of vralphanp}
v_{r, \alpha, n, p}:B(0,r) \rightarrow [0, \infty], v_{r, \alpha, n, p}(x):=f_{rad, r, \alpha, p}(|x|).$$
Then
[(i)]{} $v_{r, \alpha, n, p} \in C^{\infty}(B^{*}(0,r))$ and $$\nabla v_{r, \alpha, n, p}(x)=f_{rad, r, \alpha, p}'(|x|) \frac{x}{|x|} \mbox{ for all } x \in B^{*}(0,r).$$
[(ii)]{} $|\nabla v_{r, \alpha, n, p}(x)|=u_{r, \alpha, n, p}(x)$ for all $x \in B^{*}(0,r),$ where $u_{r, \alpha, n, p}$ is the function defined in [(\[defn of uralphanp\])]{}.
[(iii)]{} $\lim_{x \rightarrow y} v_{r, \alpha, n, p}(x)=0$ for all $y \in \partial B(0,r).$
[(iv)]{} If $p>n,$ then $v_{r, \alpha, n, p}$ is continuous in $B(0,r).$
[(v)]{} If $1<p \le n,$ then $v_{r, \alpha, n, p}$ is unbounded on $B(0,r).$
[(vi)]{} $|\nabla v_{r, \alpha, n, p}| \in L^{p,q_2}(B(0,r)) \setminus L^{p,q_1}(B(0,r))$ if $1 \le q_1 \le \frac{1}{\alpha} < q_2 \le \infty.$
Since $u_{rad, r, \alpha, p}$ is smooth in $(0,r)$ and bounded near $t=r,$ it follows immediately from the definition of $f_{rad, r, \alpha, p}$ that $f_{rad, r, \alpha, p}$ is smooth in $(0,r),$ $\lim_{t \rightarrow r} f_{rad, r, \alpha, p}(t)=0$ and $f_{rad, r, \alpha, p}'(t)=-u_{rad, r, \alpha, p}(t)$ for all $t \in (0,r).$ This and the definition of $v_{r, \alpha, n, p}$ and $u_{r, \alpha, n, p}$ yield the claims (i), (ii) and (iii) immediately.
Moreover, since $$\lim_{t \rightarrow 0} f_{rad, r, \alpha, p}'(t)=-\lim_{t \rightarrow 0} u_{rad, r, \alpha, p}(t)=-\infty,$$ it follows immediately via (i) and (ii) that $v_{r, \alpha, n, p}$ is not in $C^{\infty}(B(0,r)),$ because $v_{r, \alpha, n, p}$ does not have a gradient at $x=0 \in B(0,r).$
We proved in (i) that $v_{r, \alpha, n, p} \in C^{\infty}(B^{*}(0,r)).$ Thus, the function $v_{r, \alpha, n, p}$ is continuous in $B(0,r)$ if and only if it is continuous at $x=0 \in B(0,r)$ if and only if $f_{rad, r, \alpha, p}$ is continuous at $t=0 \in [0,r).$ But from the definition of $f_{rad, r, \alpha, p},$ we see that this function is continuous at $t=0 \in [0,r)$ if and only if $f_{rad, r, \alpha, p}(0)<\infty.$ Therefore, $v_{r, \alpha, n, p}$ is continuous in $B(0,r)$ if and only if $f_{rad, r, \alpha, p}(0)<\infty.$
We prove now claim (iv). The definition of $u_{rad, r, \alpha, p}$ easily implies that $$u_{rad, r, \alpha, p}(s) \le (\Omega_n s^n)^{-\frac{1}{p}} \ln^{-\alpha} (e^{p \alpha})= (p \alpha)^{-\alpha} (\Omega_n s^n)^{-\frac{1}{p}}$$ for all $s \in (0,r).$
For $1 \le n<p<\infty,$ the definition of $f_{rad, r, \alpha, p},$ the finiteness of the improper Riemann integral $\int_{0}^{r} s^{-\frac{n}{p}} ds,$ and the Comparison Test for improper Riemann integrals imply immediately that $$\begin{aligned}
f_{rad, r, \alpha, p}(0)&=&\lim_{t \rightarrow 0} f_{rad, r, \alpha, p}(t) \le (p \alpha)^{-\alpha} \Omega_n^{-\frac{1}{p}}
\lim_{t \rightarrow 0} \int_{t}^{r} s^{-\frac{n}{p}} ds \\
&=& (p \alpha)^{-\alpha} \Omega_n^{-\frac{1}{p}} \left(1-\frac{n}{p}\right)^{-1} r^{1-\frac{n}{p}}<\infty.\end{aligned}$$ Thus, if $1 \le n<p<\infty$ we have $f_{rad, r, \alpha, p}(0)<\infty,$ which implies (via the above discussion on the boundedness of $f_{rad, r, \alpha, p}(0)$) that $v_{r, \alpha, n, p}$ is continuous in $B(0,r).$
\(v) For $1<p \le n,$ we show that $f_{rad, r, \alpha, p}(0)=\infty.$ We treat the cases $1<p<n$ and $1<p=n$ separately.
Case I. We consider first the case $1<p<n.$ We begin by showing that there exists a constant $m=m_{r, \alpha, n, p}>0$ such that $$t^{\frac{1}{n}-\frac{1}{p}} \ln^{-\alpha} \left( \frac{\Omega_n r^n e^{p\alpha}}{t} \right) \ge m \mbox{ for all } t \in (0, \Omega_n r^n),$$ which is equivalent to showing that $$t^{-\frac{1}{p}} \ln^{-\alpha} \left( \frac{\Omega_n r^n e^{p\alpha}}{t} \right) \ge m t^{-\frac{1}{n}} \mbox{ for all } t \in (0, \Omega_n r^n),$$ which is equivalent to showing that $$u_{rad, r, \alpha, p}(s) \ge m (\Omega_n s^n)^{-\frac{1}{n}} \mbox{ for all } s \in (0,r).$$
Once we show the existence of such $m$, it follows immediately via the Comparison Test for improper Riemann integrals and the definition of $f_{rad, r, \alpha, p}$ that $$\begin{aligned}
f_{rad, r, \alpha, p}(0)&=&\lim_{t \rightarrow 0} f_{rad, r, \alpha, p}(t) = \lim_{t \rightarrow 0} \int_{t}^{r} u_{rad, r, \alpha, p}(s) ds\\
&\ge& m \Omega_n^{-\frac{1}{n}} \lim_{t \rightarrow 0} \int_{t}^{r} s^{-1} ds=\infty.\end{aligned}$$
This would prove the unboundedness of $v_{r, \alpha, n, p}$ on $B(0,r)$ when $1<p<n.$
We let $p_1=\frac{np}{n-p}.$ Thus, $p_1>p$ and $\frac{1}{p_1}=\frac{1}{p}-\frac{1}{n}.$ We define $h$ on the interval $[0, \Omega_n r^n)$ by $$\label{defn of hralphanp}
h(t)=h_{r, \alpha, n, p}(t)=t^{-\frac{1}{p_1}} \ln^{-\alpha} \left( \frac{\Omega_n r^n e^{p \alpha}}{t} \right).$$ We notice that $h$ is smooth and strictly positive on $(0, \Omega_n r^n).$ Moreover, it is easy to see that $\lim_{t \rightarrow 0} h(t)=\infty.$ We compute $h'$ on $(0, \Omega_n r^n)$ and we notice that $$h'(t)=t^{-1-\frac{1}{p_1}} \ln^{-\alpha-1} \left( \frac{\Omega_n r^n e^{p \alpha}}{t} \right)
\left(\alpha-\frac{1}{p_1} \ln \left(\frac{\Omega_n r^n e^{p \alpha}}{t} \right)\right), t \in (0, \Omega_n r^n).$$ We see that $h'(t)=0$ if and only if $t=t_{crit}=\Omega_n r^n e^{p\alpha-p_1\alpha} \in (0, \Omega_n r^n).$ Since $h'<0$ on $(0, t_{crit})$ and $h'>0$ on $(t_{crit}, \Omega_n r^n),$ the function $h$ has a unique global minimum on $(0, \Omega_n r^n),$ attained at $t=t_{crit}.$ We define $m=m_{r, \alpha, n, p}:=h(t_{crit}).$ Then $m>0$ and $h(t) \ge m>0$ for all $t \in (0, \Omega_n r^n).$ This proves the existence of the desired constant $m$ and finishes the proof of Case I.
Case II. We consider now the case $1<p=n.$
We compute $f_{rad, r, \alpha, n}$ explicitly by considering the cases $\alpha=1$ and $\alpha \in (0,1)$ separately.
We assume first that $\alpha=1.$ For every $t \in (0,r)$ we have $$\begin{aligned}
f_{rad, r, 1, n}(t)&=&\int_{t}^{r} (\Omega_n s^{n})^{-\frac{1}{n}} \ln^{-1} \left( \frac{\Omega_n r^n e^n}{\Omega_n s^n} \right) ds\\
&=& \Omega_n^{-\frac{1}{n}} \int_{t}^{r} s^{-1} \ln^{-1} \left( \frac{r^n e^n}{s^n} \right) ds\\
&=& \Omega_n^{-\frac{1}{n}} n^{-1} \int_{t}^{r} s^{-1} \ln^{-1} \left( \frac{re}{s} \right) ds\\
&=& \Omega_n^{-\frac{1}{n}} n^{-1} \ln \left(\ln\left( \frac{re}{t} \right)\right)=\Omega_n^{-\frac{1}{n}} n^{-1} \ln\left(1+\ln\left(\frac{r}{t}\right)\right).\end{aligned}$$ Thus, $$v_{r, 1, n, n}(x)=\Omega_n^{-\frac{1}{n}} n^{-1} \ln\left(1+\ln\left(\frac{r}{|x|}\right)\right) \mbox{ for all } x \in B(0,r).$$ It is easy to see that $v_{r, 1, n, n}$ is unbounded on $B(0,r).$ This proves Case II when $1<p=n$ and $\alpha=1.$
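The closed form just obtained can be checked against direct quadrature. The Python sketch below (not part of the proof; the values $n=2,$ $r=1,$ $t=0.01$ are illustrative) compares midpoint quadrature of $\int_t^r u_{rad, r, 1, n}(s)\,ds$ with $\Omega_n^{-1/n} n^{-1} \ln(1+\ln(r/t))$:

```python
import math

# Check of the p = n, alpha = 1 closed form
#   f_{rad,r,1,n}(t) = Omega_n^{-1/n} n^{-1} ln(1 + ln(r/t)),
# by midpoint quadrature, with n = 2, r = 1, t = 0.01.

n, r, t = 2, 1.0, 0.01
Omega_n = math.pi                                   # area of the unit disk

def u_rad(s):                                       # u_{r,1,n}(Omega_n s^n)
    tau = Omega_n * s ** n
    return tau ** (-1.0 / n) * math.log(Omega_n * r ** n * math.e ** n / tau) ** (-1.0)

N = 200000
h = (r - t) / N
numeric = h * sum(u_rad(t + (k + 0.5) * h) for k in range(N))
closed = Omega_n ** (-1.0 / n) / n * math.log(1 + math.log(r / t))
assert abs(numeric - closed) < 1e-4
```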
We assume now that $\alpha \in (0,1).$ For every $t \in (0,r)$ we have $$\begin{aligned}
f_{rad, r, \alpha, n}(t)&=&\int_{t}^{r} (\Omega_n s^{n})^{-\frac{1}{n}} \ln^{-\alpha} \left( \frac{\Omega_n r^n e^{n\alpha}}{\Omega_n s^n} \right) ds\\
&=& \Omega_n^{-\frac{1}{n}} \int_{t}^{r} s^{-1} \ln^{-\alpha} \left( \frac{r^n e^{n\alpha}}{s^n} \right) ds\\
&=& \Omega_n^{-\frac{1}{n}} n^{-\alpha} \int_{t}^{r} s^{-1} \ln^{-\alpha} \left( \frac{re^{\alpha}}{s} \right) ds\\
&=& \Omega_n^{-\frac{1}{n}} n^{-\alpha} (1-\alpha)^{-1} \left( \ln^{1-\alpha} \left( \frac{re^{\alpha}}{t} \right)-\alpha^{1-\alpha} \right).\end{aligned}$$ Thus, $$v_{r, \alpha, n, n}(x)=\frac{1}{\Omega_n^{\frac{1}{n}}n^{\alpha} (1-\alpha)} \left( \ln^{1-\alpha} \left( \frac{re^{\alpha}}{|x|} \right)-\alpha^{1-\alpha} \right) \mbox{ for all } x \in B(0,r).$$ It is easy to see that $v_{r, \alpha, n, n}$ is unbounded on $B(0,r).$ This proves Case II when $1<p=n$ and $\alpha \in (0,1).$ This finishes the proof of claim (v).
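The $\alpha \in (0,1)$ closed form admits the same kind of numerical check (again only an illustration, here with $n=2,$ $\alpha=1/2,$ $r=1,$ $t=0.01$):

```python
import math

# Check of the p = n, alpha in (0,1) closed form
#   f_{rad,r,alpha,n}(t) = Omega_n^{-1/n} n^{-alpha} (1-alpha)^{-1}
#                          * (ln^{1-alpha}(r e^alpha / t) - alpha^{1-alpha}),
# by midpoint quadrature, with n = 2, alpha = 1/2, r = 1, t = 0.01.

n, alpha, r, t = 2, 0.5, 1.0, 0.01
Omega_n = math.pi

def u_rad(s):                                   # u_{r,alpha,n}(Omega_n s^n)
    tau = Omega_n * s ** n
    arg = Omega_n * r ** n * math.exp(n * alpha) / tau
    return tau ** (-1.0 / n) * math.log(arg) ** (-alpha)

N = 200000
h = (r - t) / N
numeric = h * sum(u_rad(t + (k + 0.5) * h) for k in range(N))
closed = (Omega_n ** (-1.0 / n) * n ** (-alpha) / (1 - alpha)
          * (math.log(r * math.exp(alpha) / t) ** (1 - alpha) - alpha ** (1 - alpha)))
assert abs(numeric - closed) < 1e-4
```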
We prove now (vi). From part (ii) we have $|\nabla v_{r, \alpha, n, p}(x)|=u_{r, \alpha, n, p}(x)$ for all $x \in B^{*}(0,r).$ The claim follows immediately from the choice of $\alpha$ and Theorem \[Lpr stricly included in Lps\] (ii). This finishes the proof of the theorem.
The following proposition shows that $L^{p,\infty}$ does not have an absolutely continuous $(p,\infty)$-norm. Moreover, it exhibits a function $u \in L^{p, \infty}$ that does not have absolutely continuous $(p,\infty)$-norm and is not in $L^{p,q}$ for any $q$ in $[1, \infty).$
\[function in Lpinfty but not in Lpq q finite\] Let $n \ge 1$ be an integer. Let $r>0$ and $1<p<\infty.$ We define $$u_r: B(0,r) \rightarrow [0, \infty], u_{r}(x)=|x|^{-\frac{n}{p}}, 0 \le |x|<r.$$ Then
[[(i)]{}]{} $u_r \in L^{p, \infty}(B(0,r))$ and $||u_r||_{L^{p, \infty}(B(0,r))}=\Omega_n^{\frac{1}{p}}.$
[[(ii)]{}]{} $u_r \notin L^{p,q}(B(0,r))$ for every $q \in [1, \infty).$
[[(iii)]{}]{} $u_r$ does not have absolutely continuous $(p,\infty)$-norm.
[[(iv)]{}]{} If $v:B(0,r) \rightarrow {\mathbf{R}}$ is a locally bounded Lebesgue measurable function on $B(0,r),$ then $$||u_r-v||_{L^{p,\infty}(B(0,\alpha))} \ge ||u_r||_{L^{p,\infty}(B(0,r))}$$ for every $\alpha \in (0,r).$
We compute $u_r^{*},$ the nonincreasing rearrangement of $u_r.$ In order to do that, we first compute $\lambda_{[u_r]},$ the distribution function of $u_r.$ For every $t \in [0, \infty)$ we have $$\begin{aligned}
\lambda_{[u_r]}(t)&=&|\{ x \in B(0,r): |u_r(x)|>t \}|=|\{ x \in B(0,r): |x|^{-\frac{n}{p}}>t \}|\\
&=&|\{ x \in B(0,r): |x|<t^{-\frac{p}{n}} \}|=|B(0, t^{-\frac{p}{n}}) \cap B(0,r)|\\
&=&\min (\Omega_n t^{-p}, \Omega_n r^n).\end{aligned}$$
Thus,
$$\begin{aligned}
u_r^{*}(t)=\left\{ \begin{array}{ll}
\left(\frac{\Omega_n}{t}\right)^{\frac{1}{p}} & \mbox{ if $t \in [0, \Omega_n r^n)$}\\
0 & \mbox{ if $t \in [\Omega_n r^n, \infty)$}.
\end{array}
\right.\end{aligned}$$
This implies immediately that $$||u_r||_{L^{p,\infty}(B(0,r))}= \sup_{t \in (0, \Omega_n r^n)} t^{\frac{1}{p}} u_r^{*}(t)=\sup_{t \in (0, \Omega_n r^n)} t^{\frac{1}{p}}
(\Omega_n t^{-1})^{\frac{1}{p}}=\Omega_n^{\frac{1}{p}}$$ and $$\begin{aligned}
||u_r||_{L^{p,q}(B(0,r))}^{q}&=&\int_{0}^{\Omega_n r^n} (t^{\frac{1}{p}} u_r^{*}(t))^{q} \frac{dt}{t}\\
&=&\int_{0}^{\Omega_n r^n} \left(t^{\frac{1}{p}} (\Omega_n t^{-1})^{\frac{1}{p}} \right)^{q} \frac{dt}{t}\\
&=&\int_{0}^{\Omega_n r^n} \Omega_n^{\frac{q}{p}} \frac{dt}{t}=\infty\end{aligned}$$ for all $q$ in $[1, \infty).$ This proves (i) and (ii).
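These formulas are simple enough to check mechanically. The Python sketch below (an illustration with the arbitrary values $n=2,$ $p=3,$ $r=2$) confirms that the formula for $u_r^{*}$ inverts the distribution function $\min(\Omega_n t^{-p}, \Omega_n r^n)$ on $(0, \Omega_n r^n),$ and that the weak quasinorm equals $\Omega_n^{1/p}$:

```python
import math

# With u_r(x) = |x|^{-n/p} on B(0,r): the distribution function is
# min(Omega_n t^{-p}, Omega_n r^n), the rearrangement is
# u_r^*(t) = (Omega_n/t)^{1/p}, and sup_t t^{1/p} u_r^*(t) = Omega_n^{1/p}.
# Parameters: n = 2, p = 3, r = 2.

n, p, r = 2, 3.0, 2.0
Omega_n = math.pi
T = Omega_n * r ** n                 # measure of B(0,r)

def dist(t):                         # |{x in B(0,r): |x|^{-n/p} > t}|
    if t == 0:
        return T
    return min(Omega_n * t ** (-p), T)

def u_star(t):                       # the formula derived above
    return (Omega_n / t) ** (1.0 / p)

# consistency: dist(u_star(t)) == t on (0, Omega_n r^n)
for k in range(1, 100):
    t = T * k / 100.0
    assert abs(dist(u_star(t)) - t) < 1e-9

weak = max((T * k / 100.0) ** (1 / p) * u_star(T * k / 100.0)
           for k in range(1, 100))
assert abs(weak - Omega_n ** (1 / p)) < 1e-9
```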
\(iii) We prove now that the function $u_r$ does not have an absolutely continuous $(p,\infty)$-norm. Let $\alpha \in (0,r)$ be fixed. Let $u_{r,\alpha}: B(0,r) \rightarrow [0, \infty]$ be the function that agrees with $u_r$ on $B(0, \alpha)$ and vanishes on $B(0,r) \setminus B(0,\alpha).$ By a computation very similar to that of $u_r^{*},$ we have $$\begin{aligned}
u_{r,\alpha}^{*}(t)=\left\{ \begin{array}{ll}
\left(\frac{\Omega_n}{t}\right)^{\frac{1}{p}} & \mbox{ if $t \in [0, \Omega_n \alpha^n)$}\\
0 & \mbox{ if $t \in [\Omega_n \alpha^n, \infty).$}
\end{array}
\right.\end{aligned}$$ Thus, $$\label{ur has constant pinfty norm on B0alpha}
||u_{r,\alpha}||_{L^{p,\infty}(B(0,\alpha))}=||u_r||_{L^{p,\infty}(B(0,\alpha))} =||u_r||_{L^{p,\infty}(B(0,r))}=\Omega_n^{\frac{1}{p}}$$ for every $\alpha \in (0,r).$ This shows that $u_r$ does not have an absolutely continuous $(p,\infty)$-norm. This proves (iii).
We prove now (iv). Let $v:B(0,r) \rightarrow {\mathbf{R}}$ be a Lebesgue measurable function that is locally bounded on $B(0,r).$ (Any continuous function on $B(0,r)$ is such a function). Let $\alpha \in (0,r)$ and $\varepsilon \in (0,1)$ be fixed. Let $M_{\alpha}>0$ be chosen such that $|v(x)|<M_{\alpha}$ for all $x \in B(0, \alpha).$ We have $$|u_{r}(x)-v(x)| \ge |u_{r}(x)|-|v(x)| \ge |u_{r}(x)|-M_{\alpha}$$ for all $x \in B(0,\alpha).$ We want to find $\alpha_{\varepsilon} \in (0, \alpha)$ such that $M_{\alpha}<\varepsilon |u_{r}(x)|$ for all $x \in B(0,\alpha_{\varepsilon}).$ We have $$M_{\alpha}<\varepsilon |u_{r}(x)| \iff \frac{M_{\alpha}}{\varepsilon}<|x|^{-\frac{n}{p}} \iff |x|^{\frac{n}{p}}<\frac{\varepsilon}{M_{\alpha}} \iff |x|<\left(\frac{\varepsilon}{M_{\alpha}}\right)^{\frac{p}{n}}.$$ If we choose $$\alpha_{\varepsilon}=\min\left(\alpha, \left(\frac{\varepsilon}{M_{\alpha}}\right)^{\frac{p}{n}}\right),$$ the above computation, the definition of $u_{r}$ and the fact that $|v|< M_{\alpha}$ on $B(0,\alpha)$ imply that $$|u_{r}(x)-v(x)| \ge (1-\varepsilon) |u_{r}(x)|$$ for all $x$ in $B(0, \alpha_{\varepsilon}).$ Thus, we have $$\begin{aligned}
||u_{r}-v||_{L^{p,\infty}(B(0, \alpha))} &\ge& ||u_{r}-v||_{L^{p,\infty}(B(0, \alpha_{\varepsilon}))} \ge (1-\varepsilon) ||u_{r}||_{L^{p,\infty}(B(0, \alpha_{\varepsilon}))}\\
&=&(1-\varepsilon) ||u_{r}||_{L^{p,\infty}(B(0,r))}.\end{aligned}$$ The first inequality holds because $\alpha_{\varepsilon} \le \alpha,$ the second follows from the pointwise bound obtained above, and the equality uses (\[ur has constant pinfty norm on B0alpha\]). By letting $\varepsilon \rightarrow 0,$ we obtain the desired conclusion for a fixed $\alpha \in (0,r).$ Thus, we proved claim (iv). This finishes the proof.
Hölder Inequalities for Lorentz Spaces
--------------------------------------
Here we record the following generalized Hölder inequalities for Lorentz spaces, previously proved in [@Cos1] and/or in [@Cos3], valid for all integers $n \ge 1.$
\[Holder for Lorentz\] [(See Costea [@Cos1 Theorem 2.3] and [@Cos3 Theorem 2.2.1]).]{} Let $\Omega \subset \mathbf{R}^n.$ Suppose $1<p<\infty$ and $1\le q \le \infty.$ If $f \in L^{p,q}(\Omega)$ and $g \in L^{p', q'}(\Omega),$ then $$\int_{\Omega}|f(x)g(x)| dx \le \int_{0}^{\infty} f^{*}(s) g^{*}(s) ds \le ||f||_{L^{p,q}(\Omega)} ||g||_{L^{p',q'}(\Omega)}.$$
We have the following generalized Hölder inequality for Lorentz spaces, valid for all integers $n \ge 1.$
\[Holder for Lorentz with general exponents\] [(See Costea [@Cos3 Theorem 2.2.2]).]{} Suppose $\Omega \subset \mathbf{R}^n$ has finite measure. Let $1<p_1,p_2, p_3<\infty,$ $1\le q_1, q_2, q_3\le \infty$ be such that $$\frac{1}{p_1}=\frac{1}{p_2}+\frac{1}{p_3}$$ and either $$\frac{1}{q_1}=\frac{1}{q_2}+\frac{1}{q_3}$$ whenever $1\le q_1, q_2, q_3<\infty$ or $1\le q_1=q_2 \le
q_3=\infty$ or $1 \le q_1=q_3 \le q_2=\infty.$ Then $$||f||_{L^{p_1,q_1}(\Omega; \mathbf{R}^m)} \le
||f||_{L^{p_2,q_2}(\Omega; \mathbf{R}^m)} \,
||\chi_{\Omega}||_{L^{p_3,q_3}(\Omega)}.$$
As an application of Theorem \[Holder for Lorentz with general exponents\] we have the following result, valid for all integers $n \ge 1.$
\[Coro Holder for Lorentz\] [(See Costea [@Cos1 Corollary 2.4] and [@Cos3 Corollary 2.2.3]).]{} Let $1<p<q\le \infty$ and $\varepsilon \in (0, p-1)$ be fixed. Suppose $\Omega \subset \mathbf{R}^n$ has finite measure. Then $$\label{Coro Holder for Lorentz 1}
||f||_{L^{p-\varepsilon}(\Omega; \mathbf{R}^m)} \le
C(p,q,\varepsilon) \,|\Omega|^{\frac{\varepsilon}{p(p-\varepsilon)}}
||f||_{L^{p,q}(\Omega; \mathbf{R}^m)}$$ for every integer $m \ge 1,$ where $$C(p,q,\varepsilon)=\left\{ \begin{array}{cc}
\left(\frac{p(q-p+\varepsilon)}{q}\right)^{\frac{1}{p-\varepsilon}-\frac{1}{q}} \, \varepsilon^{\frac{1}{q}-\frac{1}{p-\varepsilon}},& p<q<\infty \\
p^{\frac{1}{p-\varepsilon}} \,
\varepsilon^{-\frac{1}{p-\varepsilon}}, & q=\infty.
\end{array}
\right.$$
For the following definition, see Bennett-Sharpley [@BS Definition IV.4.17].
For every measurable function $f$ on $\mathbf{R}^n,$ $n \ge 2,$ the *fractional integral $I_{1}f$* is defined by $$(I_{1}f)(x)=\int_{\mathbf{R}^n} \frac{f(y)}{|x-y|^{n-1}} \, dy.$$
We record here the Hardy-Littlewood-Sobolev theorem of fractional integration. (See Bennett-Sharpley [@BS Theorem IV.4.18] and Costea [@Cos3 Theorem 2.2.5]).
[**Hardy-Littlewood-Sobolev theorem.**]{} \[Hardy-Littlewood-Sobolev\] Let $1<p<n$ and $1 \le q \le \infty.$ Then there exists a constant $C(n,p,q)>0$ such that $$||I_{1}f||_{L^{\frac{np}{n-p},q}({\mathbf{R}}^n)} \le C(n,p,q)
||f||_{L^{p,q}({\mathbf{R}}^n)}$$ whenever $f \in L^{p,q}({\mathbf{R}}^n).$
Sobolev-Lorentz Spaces {#Section Sobolev Lorentz spaces}
======================
This section is based in part on Chapter V of my PhD thesis [@Cos0] and on Chapter 3 of my book [@Cos3]. We generalize and extend some of the results from [@Cos0] and [@Cos3] to the case $n=1.$
Among the new results in this section we mention the case $q=\infty$ for Theorems \[H=W revisited\] and \[H=H\_0 revisited\] as well as the inclusion $W^{1,(p,q)}(\Omega) \subsetneq W^{1,(p,s)}(\Omega),$ where $\Omega \subset {\mathbf{R}}^n$ is open, $n \ge 1$ is an integer, $1<p<\infty$ and $1 \le q<s \le \infty.$
The $H^{1, (p,q)}$ and $W^{1, (p,q)}$ Spaces
--------------------------------------------
For $1<p<\infty$ and $1\le q \le \infty$ we define the Sobolev-Lorentz space $H^{1, (p,q)}(\Omega)$ as follows. Let $r=\min(p,q).$ For a function $\phi \in
C^{\infty}(\Omega)$ we define its Sobolev-Lorentz $(p,q)$-norm by $$||\phi||_{1, (p,q); \Omega}=\left(||\phi||_{L^{(p,q)}(\Omega)}^{r}
+||\nabla \phi||_{L^{(p,q)}(\Omega;
\mathbf{R}^n)}^{r}\right)^{\frac{1}{r}},$$ where, we recall, $\nabla
\phi=(\partial_1 \phi, \ldots,
\partial_n \phi)$ is the gradient of $\phi.$ Similarly we define the Sobolev-Lorentz $p,q$-quasinorm of $\phi$ by $$||\phi||_{1, p,q; \Omega}=\left(||\phi||_{L^{p,q}(\Omega)}^{r}
+||\nabla \phi||_{L^{p,q}(\Omega;
\mathbf{R}^n)}^{r}\right)^{\frac{1}{r}}.$$ Then $H^{1, (p,q)}(\Omega)$ is defined as the completion of $$\{\phi \in C^{\infty}(\Omega): ||\phi||_{1, (p,q); \Omega} < \infty \}$$ with respect to the norm $||\cdot||_{1, (p,q); \Omega}.$ Throughout the paper we may also write $||\cdot||_{H^{1, (p,q)}(\Omega)}$ instead of $||\cdot||_{1, (p,q); \Omega}$ and $||\cdot||_{H^{1, p,q}(\Omega)}$ instead of $||\cdot||_{1, p,q; \Omega}.$
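As a worked instance of this definition (a sketch only, with the illustrative choices $\Omega=(0,1) \subset \mathbf{R},$ $\phi(x)=x,$ $p=2,$ $q=3$): here $\phi^{*}(t)=1-t,$ $|\nabla \phi| \equiv 1,$ and $r=\min(p,q)=2,$ so the norm can be computed directly from the defining integrals; the quadrature is checked against the Beta-function value of $\int_0^1 t^{q/p-1}(1-t)^q\,dt$:

```python
import math

# Worked instance of the Sobolev-Lorentz norm (illustrative choices:
# Omega = (0,1) in R, phi(x) = x, p = 2, q = 3).  Here phi*(t) = 1 - t,
# |grad phi| = 1, and r = min(p,q) = 2, so
#   ||phi||_{1,(p,q);Omega} = (||phi||_{L^{p,q}}^r + ||grad phi||_{L^{p,q}}^r)^{1/r}.

p, q = 2.0, 3.0
r = min(p, q)

# ||phi||_{L^{p,q}}^q = int_0^1 (t^{1/p}(1-t))^q dt/t; the integrand is
# t^{q/p - 1} (1-t)^q, whose integral is the Beta function B(q/p, q+1).
N = 200000
h = 1.0 / N
numeric = 0.0
for k in range(N):
    t = (k + 0.5) * h
    numeric += t ** (q / p - 1) * (1 - t) ** q
numeric *= h
beta = math.gamma(q / p) * math.gamma(q + 1) / math.gamma(q / p + q + 1)
assert abs(numeric - beta) < 1e-6

A = numeric ** (1 / q)                 # ||phi||_{L^{p,q}((0,1))}
B = (p / q) ** (1 / q)                 # ||grad phi||: (int_0^1 t^{q/p-1} dt)^{1/q}
norm = (A ** r + B ** r) ** (1 / r)
assert 0 < A < B and norm <= A + B
```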
The Sobolev-Lorentz space $H_{0}^{1, (p,q)}(\Omega)$ is defined as the closure of $C_{0}^{\infty}(\Omega)$ in $H^{1, (p,q)}(\Omega)$. The Sobolev-Lorentz spaces $H_{0}^{1, (p,q)}(\Omega)$ and $H^{1,
(p,q)}(\Omega)$ can be both regarded as closed subspaces of $L^{(p,q)}(\Omega) \times L^{(p,q)}(\Omega; \mathbf{R}^n).$ Since $L^{(p,q)}(\Omega) \times L^{(p,q)}(\Omega; \mathbf{R}^n)$ is reflexive when $1<q< \infty,$ it follows that both $H_{0}^{1,
(p,q)}(\Omega)$ and $H^{1, (p,q)}(\Omega)$ are reflexive Banach spaces when $1<q<\infty$ and have absolutely continuous norm when $1\le q<\infty.$ In particular, for every $u \in H^{1,(p,q)}(\Omega),$ both $u$ and $\nabla u$ have absolutely continuous $(p,q)$-norm whenever $1<p<\infty$ and $1\le q<\infty.$
For $q=1$ we have that $(L^{(p,1)}(\Omega))^{*} \times
(L^{(p,1)}(\Omega; \mathbf{R}^n))^{*}$ can be regarded as a subspace of $(H^{1,(p,1)}(\Omega))^{*}$ and since $(L^{(p,1)}(\Omega))^{*}
\times (L^{(p,1)}(\Omega; \mathbf{R}^n))^{*}$ can be identified with the non-reflexive space $L^{(p',\infty)}(\Omega) \times
L^{(p',\infty)}(\Omega; \mathbf{R}^n),$ it follows that $H^{1,(p,1)}(\Omega)$ is non-reflexive and so is $H_{0}^{1,(p,1)}(\Omega),$ since it is a closed subspace of $H^{1,(p,1)}(\Omega).$ It will be proved later in Theorem \[H1pinfty subsetneq W1pinfty\] that none of these two spaces is reflexive when $q=\infty.$
Next we record the following consequence of reflexivity, valid for all $n \ge 1$ and for all $q \in (1,\infty).$
\[HKM93 Thm132\] [(See Costea [@Cos0 Theorem V.22] and [@Cos3 Theorem 3.5.4]).]{} Let $1<p,q<\infty.$ Suppose that $u_j$ is a bounded sequence in $H^{1,(p,q)}(\Omega)$ such that $u_j \rightarrow u$ pointwise almost everywhere in $\Omega.$ Then $u \in H^{1,(p,q)}(\Omega).$ Moreover, if $u_j \in H_{0}^{1,(p,q)}(\Omega)$ for all $j \ge 1,$ then $u \in H_{0}^{1, (p,q)}(\Omega).$
The following theorem generalizes the Gagliardo-Nirenberg-Sobolev inequality to the Sobolev-Lorentz spaces $H_{0}^{1,(p,q)}(\Omega)$ for $1<p<n$ and $1 \le q \le \infty.$ It also presents a Sobolev-Poincaré inequality for the Sobolev-Lorentz spaces $H_{0}^{1,(p,q)}(\Omega)$ when $\Omega \subset {\mathbf{R}}^n$ is open and bounded, $1<p<\infty$ and $1 \le q \le \infty.$
\[Sobolev-Poincare for Sobolev-Lorentz\] [**Sobolev inequalities for Sobolev-Lorentz spaces.**]{}
Let $\Omega \subset \mathbf{R}^n$ be an open set, where $n \ge 2$ is an integer. Suppose $1<p<\infty$ and $1\le q \le \infty.$
[(i)]{} If $1<p<n,$ then there exists a constant $C(n,p,q)>0$ such that $$||u||_{L^{\frac{np}{n-p}, q}(\Omega)} \le C(n,p,q) ||\nabla u||_{L^{p,q}(\Omega; {\mathbf{R}^n})}$$ for every $u \in H_{0}^{1,(p,q)}(\Omega).$
[(ii)]{} [(See Costea [@Cos3 Theorem 3.1.1]).]{} If $\Omega$ is bounded, then there exists a constant $C(n,p,q)>0$ such that $$\label{Sobolev-Poincare for Sobolev-Lorentz 1}
||u||_{L^{p,q}(\Omega)} \le C(n,p,q) \, |\Omega|^{\frac{1}{n}}
||\nabla u||_{L^{p,q}(\Omega; \mathbf{R}^n)}$$ for every $u \in H_{0}^{1,(p,q)}(\Omega).$
We have that $H_{0}^{1,(p,q)}(\Omega)$ is the closure of $C_{0}^{\infty}(\Omega)$ in $H^{1,(p,q)}(\Omega).$ Thus, via Costea [@Cos1 Corollary 2.7], it is enough to prove claims (i) and (ii) for functions $u \in C_{0}^{\infty}(\Omega).$
Let $u$ be in $C_{0}^{\infty}(\Omega).$ We extend the function $u$ by $0$ on ${\mathbf{R}}^n \setminus \Omega$ and we denote this extension by $u$ as well. Then $u$ is in $C_{0}^{\infty}({\mathbf{R}}^n)$ and $u$ is compactly supported in $\Omega.$ Via Gilbarg-Trudinger [@GT Lemma 7.14], we have $$|u(x)| \le \frac{1}{\omega_{n-1}} (I_1|\nabla u|)(x)$$ for every $x \in {\mathbf{R}}^n.$ By using this pointwise inequality together with the Hardy-Littlewood-Sobolev Theorem (see Theorem \[Hardy-Littlewood-Sobolev\] and Bennett-Sharpley [@BS Theorem IV.4.18]) it follows immediately that claim (i) holds for all functions $u \in C_{0}^{\infty}(\Omega).$ This proves claim (i) via Costea [@Cos1 Corollary 2.7].
We prove now claim (ii). We have to consider two cases, depending on whether $1<p<n$ or $n \le p<\infty.$
Case I. First we assume that $1<p<n.$ We notice that $p<\frac{np}{n-p}.$ Via Theorems \[Holder for Lorentz with general exponents\] and \[Hardy-Littlewood-Sobolev\] it follows from part (i) that $$||u||_{L^{p,q}(\Omega)} \le |\Omega|^{\frac{1}{n}} ||u||_{L^{\frac{np}{n-p},q}(\Omega)}
\le \frac{C(n,p,q)}{\omega_{n-1}} |\Omega|^{\frac{1}{n}} ||\nabla u||_{L^{p,q}(\Omega; {\mathbf{R}^n})}$$ for every $u \in C_{0}^{\infty}(\Omega),$ where $C(n,p,q)$ is the constant from Theorem \[Hardy-Littlewood-Sobolev\]. This proves the claim (ii) for $1<p<n$ via Costea [@Cos1 Corollary 2.7].
Case II. We assume now that $1<n \le p<\infty.$ We choose $s \in (1,n)$ such that $p<\frac{ns}{n-s}.$ Via Theorems \[Holder for Lorentz with general exponents\] and \[Hardy-Littlewood-Sobolev\] it follows from part (i) that $$\begin{aligned}
||u||_{L^{p,q}(\Omega)} &\le& |\Omega|^{\frac{1}{p}-\frac{n-s}{ns}} ||u||_{L^{\frac{ns}{n-s},q}(\Omega)}\\
&\le& \frac{C(n,s,q)}{\omega_{n-1}} |\Omega|^{\frac{1}{p}-\frac{n-s}{ns}} ||\nabla u||_{L^{s,q}(\Omega; {\mathbf{R}^n})}\\
&\le& \frac{C(n,s,q)}{\omega_{n-1}} |\Omega|^{\frac{1}{n}} ||\nabla u||_{L^{p,q}(\Omega; {\mathbf{R}^n})}\end{aligned}$$ for every $u \in C_{0}^{\infty}(\Omega),$ where $C(n,s,q)$ is the constant from Theorem \[Hardy-Littlewood-Sobolev\]. This proves the claim (ii) for $1<n \le p<\infty$ via Costea [@Cos1 Corollary 2.7]. This finishes the proof of the theorem.
We recall that for $1<p<\infty,$ $H^{1,p}(\Omega)$ is defined as the closure of $C^{\infty}(\Omega)$ with respect to the $||\cdot||_{1,p;
\Omega}$-norm, where $$||\psi||_{1,p; \Omega}=\left(\int_{\Omega} |\psi(x)|^{p} dx +
\int_{\Omega} |\nabla \psi(x)|^{p} dx \right)^{\frac{1}{p}}$$ for every $\psi \in C^{\infty}(\Omega).$ We recall that $H^{1,p}_{loc}(\Omega)$ is defined in the obvious manner: a measurable function $u: \Omega \rightarrow \mathbf{R}$ is in $H^{1,p}_{loc}(\Omega)$ if and only if $u$ is in $H^{1,p}(\Omega')$ for every open set $\Omega' \subset \subset
\Omega.$
Let $u \in L_{loc}^{1}(\Omega).$ For $i=1,\ldots,n$ a function $v
\in L_{loc}^{1}(\Omega)$ is called the *$i$th weak partial derivative of* $u$ and we denote $v= \partial_{i} u$ if $$\int_{\Omega} \varphi \, v \,dx=-\int_{\Omega} \partial_{i}\varphi \, u \,dx$$ for all $\varphi \in C_{0}^{\infty}(\Omega).$ Recall that $$W^{1,p}(\Omega)=L^{p}(\Omega) \cap \{ u: \partial_{i} u \in
L^{p}(\Omega), \, i=1,\ldots,n \}.$$ The space $W^{1,p}(\Omega)$ is equipped with the norm $$||u||_{W^{1,p}(\Omega)}=||u||_{L^{p}(\Omega)}+ \sum_{i=1}^{n}
||\partial_{i} u||_{L^{p}(\Omega)},$$ which is clearly equivalent to $$\left(||u||_{L^{p}(\Omega)}^p+ ||\nabla u||_{L^{p}(\Omega;
\mathbf{R}^n)}^p\right)^{\frac{1}{p}}.$$ Here $\nabla u$ is the distributional gradient of $u.$ We recall that $W^{1,p}(\Omega)=H^{1,p}(\Omega).$
We define the Sobolev-Lorentz space $W^{1,(p,q)}(\Omega)$ by $$W^{1,(p,q)}(\Omega)=L^{(p,q)}(\Omega) \cap \{ u: \partial_{i} u \in
L^{(p,q)}(\Omega), \, i=1, \ldots, n \}.$$ The space $W^{1,(p,q)}(\Omega)$ is equipped with the norm $$||u||_{W^{1, (p,q)}(\Omega)}=||u||_{L^{(p,q)}(\Omega)}+
\sum_{i=1}^{n} ||\partial_{i} u||_{L^{(p,q)}(\Omega)},$$ which is clearly equivalent to $$\left(||u||_{L^{(p,q)}(\Omega)}^r+ ||\nabla u||_{L^{(p,q)}(\Omega;
\mathbf{R}^n)}^r\right)^{\frac{1}{r}},$$ where $r=\min(p,q).$ As earlier, it is easy to see that $W^{1,(p,q)}(\Omega)$ is a reflexive Banach space when $1<q<\infty$ and a non-reflexive Banach space when $q=1.$ It will be proved later in Theorem \[H1pinfty subsetneq W1pinfty\] that $W^{1,(p,\infty)}(\Omega)$ is not reflexive.
The corresponding local space $H_{loc}^{1, (p,q)}(\Omega)$ is defined in the obvious manner: $u$ is in $H_{loc}^{1,
(p,q)}(\Omega)$ if and only if $u$ is in $H^{1, (p,q)}(\Omega')$ for every open set $\Omega' \subset \subset \Omega.$
Similarly, the local space $W_{loc}^{1, (p,q)}(\Omega)$ is defined as follows: $u$ is in $W_{loc}^{1,
(p,q)}(\Omega)$ if and only if $u$ is in $W^{1, (p,q)}(\Omega')$ for every open set $\Omega' \subset \subset \Omega.$
The following theorem shows, among other things, the relation between $W^{1,(p,q)}(\Omega)$ and $H_{loc}^{1,s}(\Omega),$ where $1<s<p<\infty$ and $1\le q \le
\infty.$
\[W1pqloc included in H1sloc s<p\] Let $\Omega \subset {\mathbf{R}}^n$ be an open set, where $n \ge 1$ is an integer. Let $1<s<p<\infty$ and $1\le q<r \le \infty.$
[(i)]{} We have $W^{1,(p,q)}(\Omega) \subset H_{loc}^{1,s}(\Omega).$ Moreover, if $\Omega$ has finite Lebesgue measure (in particular if $\Omega$ is bounded), then $W^{1,(p,q)}(\Omega) \subset H^{1,s}(\Omega).$
[(ii)]{} If $\Omega$ is bounded, then $H_{0}^{1,(p,q)}(\Omega) \subset H_{0}^{1,s}(\Omega).$
[(iii)]{} We have $H_{0}^{1,(p,q)}(\Omega) \subset H_{0}^{1,(p,r)}(\Omega),$ $H^{1,(p,q)}(\Omega) \subset H^{1,(p,r)}(\Omega),$ and $W^{1,(p,q)}(\Omega) \subset W^{1,(p,r)}(\Omega).$
For claim (i), see Costea [@Cos0 Theorem V.2] and [@Cos3 Theorem 3.1.2]. The proof from either of these two references carries over almost verbatim and remains valid for $n=1.$
Claims (ii) and (iii) follow immediately from Remark \[relation between Lpr and Lps\], Corollary \[Coro Holder for Lorentz\], and the definition of the Sobolev-Lorentz spaces on $\Omega.$
We record the following theorem, which shows that every Sobolev element $u$ in $H_{loc}^{1,(p,q)}(\Omega)$ is a distribution.
\[H included in W\] [(See Costea [@Cos0 Theorem V.3] and [@Cos3 Theorem 3.1.3]).]{} Suppose $1<p<\infty$ and $1\le q \le \infty.$ Let $u$ be in $H_{loc}^{1,(p,q)}(\Omega).$ Then $u$ is a distribution with distributional gradient $\nabla u \in
L_{loc}^{1}(\Omega; \mathbf{R}^n).$ Moreover, $u \in
L_{loc}^{(p,q)}(\Omega) \subset L_{loc}^{1}(\Omega)$ and
$$\int_{\Omega} u \, \partial_{i} \varphi \, dx= -\int_{\Omega}
\partial_{i}u \, \varphi \, dx$$
for all $\varphi \in C_{0}^{\infty}(\Omega)$ and $i=1, \ldots, n,$ where $\partial_{i}u$ is the $i$th coordinate of $\nabla u.$ In particular, $H^{1,(p,q)}(\Omega) \subset W^{1,(p,q)}(\Omega).$
Regularization
--------------
We need some basic properties of the Sobolev-Lorentz spaces. Before proceeding we recall the usual regularization procedure.
Let $\eta \in C_{0}^{\infty}(B(0,1))$ be a *mollifier*. This means that $\eta$ is a nonnegative function such that $$\int_{\mathbf{R}^n} \eta(x) \, dx=1.$$ Without loss of generality we can assume that $\eta$ is a radial function. Next we write $$\eta_{\varepsilon}(x)=\varepsilon^{-n} \eta(\varepsilon^{-1}x), \: \varepsilon>0.$$ For the basic properties of a mollifier see Ziemer [@Zie Theorems 1.6.1 and 2.1.3]. We summarize the properties of the convolution (valid for all integers $n \ge 1$) in the following theorem.
\[properties of convolutions in Lorentz spaces\] [(See Costea [@Cos0 Theorem V.4] and [@Cos3 Theorem 3.2.1]).]{} For $v \in L_{loc}^{1}({\mathbf{R}}^n),$ the convolution $$v_{\varepsilon}(x)=\eta_{\varepsilon}*v(x)= \int_{{\mathbf{R}}^n} \eta_{\varepsilon}(x-y) v(y) dy$$ enjoys the following properties for every $\varepsilon>0$:
[[(i)]{}]{} For every $p \in (1, \infty)$ and every $q \in [1,\infty],$ there exists a constant $C(p,q)>0$ such that $$\label{Marcinkiewicz for convolution}
||v_{\varepsilon}||_{L^{(p,q)}({\mathbf{R}}^n)} \le C(p,q)
||v||_{L^{(p,q)}({\mathbf{R}}^n)}.$$
[[(ii)]{}]{} For every $p \in (1,\infty),$ every $q \in [1,\infty]$ and every $v \in L^{(p,q)}({\mathbf{R}}^n)$ with absolutely continuous $(p,q)$-norm, we have $$\label{v_eps converges to v in Lpq}
||v_{\varepsilon}-v||_{L^{(p,q)}({\mathbf{R}}^n)} \rightarrow 0$$ as $\varepsilon \rightarrow 0.$
Recall that a function $u: \Omega \rightarrow {\mathbf{R}}$ is *Lipschitz* on $\Omega \subset {\mathbf{R}}^n$ if there is $L>0$ such that $$|u(x)-u(y)| \le L |x-y|$$ for all $x, y \in \Omega.$ Moreover, $u$ is *locally Lipschitz* on $\Omega$ if $u$ is Lipschitz on each compact subset of $\Omega.$
It is well known that every locally Lipschitz function on ${\mathbf{R}}^n$ is differentiable almost everywhere; this is Rademacher’s theorem (see Federer [@Fed Theorem 3.1.6]).
\[normal gradient\] [(See Costea [@Cos0 Lemma V.5] and [@Cos3 Theorem 3.2.2]).]{} Suppose $1<p<\infty$ and $1\le q \le \infty.$ Let $u: \Omega \rightarrow \mathbf{R}$ be a locally Lipschitz function. Then $u \in H_{loc}^{1,(p,q)}(\Omega)$ and $\nabla
u=(\partial_1 u, \ldots, \partial_n u)$ is the usual gradient of $u.$
Product rule, density results and strict inclusions for Sobolev-Lorentz spaces
------------------------------------------------------------------------------
\[up in Lpinfty loc and in W1pinfty loc minus H1pinfty\]
Let $n \ge 1$ be an integer and let $1<p<\infty.$ Let $u_p: {\mathbf{R}}^n \rightarrow [-\infty, \infty],$ $$u_p(x)=\left\{ \begin{array}{ll}
\ln |x| & \mbox{ if $p=n>1$}\\
|x|^{1-\frac{n}{p}} & \mbox { if $p \neq n.$}
\end{array}
\right.$$ Then $u_p$ is in $W^{1,(p,\infty)}_{loc}({\mathbf{R}}^n) \setminus H^{1,(p,\infty)}_{loc}({\mathbf{R}}^n).$
From Theorem \[W1pqloc included in H1sloc s<p\] we have $W^{1,(p,\infty)}_{loc}({\mathbf{R}}^n) \subset H^{1,s}_{loc}({\mathbf{R}}^n)$ for all $1<s<p.$ We will show that $u_p$ is in $H^{1,s}_{loc}({\mathbf{R}}^n)$ for all $1<s<p$ and in $W^{1,(p,\infty)}_{loc}({\mathbf{R}}^n),$ but not in $H^{1,(p,\infty)}_{loc}({\mathbf{R}}^n).$
We start by noticing that $u_p \in L^{p}_{loc}({\mathbf{R}}^n)$ for $p=n>1$ and that $u_{p}(x) \le r |x|^{-\frac{n}{p}}$ for all $x \in B(0,r)$ and for all $p \neq n.$ Thus, $u_{p}$ is in $L^{(p,\infty)}_{loc}({\mathbf{R}}^n)$ for all $p \in (1, \infty).$ Moreover, an easy computation shows that $$\lim_{r \rightarrow \infty} ||u_{p}||_{L^{p,\infty}(B(0,r))}=\infty$$ for all $p \in (1, \infty).$ Thus, $u_{p}$ is not in $L^{p,\infty}({\mathbf{R}}^n).$
We notice that $u_{p}$ is smooth in ${\mathbf{R}}^n \setminus \{0\}$ with $$\nabla u_{p}(x)=C(n,p) \, x \, |x|^{-1-\frac{n}{p}}, x \neq 0,$$ where $$\label{Cnp is 1-n/p or 1}
C(n,p)=\left\{ \begin{array}{ll}
1 & \mbox{ if $p=n>1$}\\
1-\frac{n}{p} & \mbox{ if $p \neq n.$}
\end{array}
\right.$$
Thus, $|\nabla u_{p}(x)|=|C(n,p)| \, |x|^{-\frac{n}{p}}, x \neq 0.$ By doing a computation similar to the one in Proposition \[function in Lpinfty but not in Lpq q finite\], we have $$|\nabla u_p|^{*}(t)=|C(n,p)| \left(\frac{\Omega_n}{t}\right)^{1/p}$$ for all $t \ge 0,$ where $C(n,p)$ is the constant from (\[Cnp is 1-n/p or 1\]).
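For the reader's convenience, the computation behind this formula can be sketched as follows, writing $f=|\nabla u_p|$ and recalling that $\Omega_n=|B(0,1)|.$

```latex
% Distribution function of f(x) = |C(n,p)| |x|^{-n/p}:
% f(x) > \lambda  iff  |x| < (|C(n,p)|/\lambda)^{p/n}, hence
\[
  d_f(\lambda) = \bigl|\{x \in \mathbf{R}^n : f(x) > \lambda\}\bigr|
               = \Omega_n \Bigl(\frac{|C(n,p)|}{\lambda}\Bigr)^{p},
  \qquad \lambda > 0.
\]
% The decreasing rearrangement is the generalized inverse of d_f:
% d_f(\lambda) \le t  iff  \lambda \ge |C(n,p)| (\Omega_n/t)^{1/p}, so
\[
  f^{*}(t) = \inf\{\lambda > 0 : d_f(\lambda) \le t\}
           = |C(n,p)| \Bigl(\frac{\Omega_n}{t}\Bigr)^{1/p}.
\]
```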
Thus, it follows immediately that $|\nabla u_{p}|$ is in $L^{(p,\infty)}({\mathbf{R}}^n)$ and $$||\nabla u_{p}||_{L^{p,\infty}({\mathbf{R}}^n; {\mathbf{R}}^n)}=||\nabla u_{p}||_{L^{p,\infty}(B(0,r); {\mathbf{R}}^n)}=|C(n,p)| \, \Omega_n^{1/p}<||\nabla u_{p}||_{L^{p,q}(B(0,r); {\mathbf{R}}^n)}=\infty$$ for every $r>0$ and every $1 \le q<\infty,$ where $C(n,p)$ is the above constant.
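With the rearrangement in hand, these identities amount to the following computation (up to the normalization constant chosen in the definition of the $(p,q)$-norm), again with $f=|\nabla u_p|.$

```latex
% Weak L^p (q = \infty): the functional t^{1/p} f^*(t) is constant in t, so
\[
  \|f\|_{L^{p,\infty}} = \sup_{t>0} t^{1/p} f^{*}(t)
                       = |C(n,p)|\,\Omega_n^{1/p},
\]
% and the same value is obtained on every ball B(0,r), since f is radial
% and decreasing, so (f \chi_{B(0,r)})^*(t) = f^*(t) for 0 < t < |B(0,r)|.
% For 1 \le q < \infty the integral diverges at the origin:
\[
  \|f\|_{L^{p,q}(B(0,r))}^{q}
  = \int_0^{\Omega_n r^n} \bigl(t^{1/p} f^{*}(t)\bigr)^{q}\,\frac{dt}{t}
  = |C(n,p)|^{q}\,\Omega_n^{q/p} \int_0^{\Omega_n r^n} \frac{dt}{t}
  = \infty.
\]
```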
By invoking Proposition \[function in Lpinfty but not in Lpq q finite\] (iv), we see that $$||\nabla u_{p}-\nabla v||_{L^{p,\infty}(B(0,\alpha); {\mathbf{R}}^n)} \ge ||\nabla u_{p}||_{L^{p,\infty}(B(0,r); {\mathbf{R}}^n)}
=||\nabla u_{p}||_{L^{p,\infty}({\mathbf{R}}^n; {\mathbf{R}}^n)}=|C(n,p)| \, \Omega_n^{1/p}>0$$ for every $v \in C^{\infty}({\mathbf{R}}^n)$ and every $0<\alpha<r<\infty,$ where $C(n,p)$ is the constant from (\[Cnp is 1-n/p or 1\]). This implies immediately that $u_{p}$ is not in $H_{loc}^{1,(p,\infty)}({\mathbf{R}^n})$ because $u_{p}$ cannot be approximated with smooth functions in the $H^{1,(p,\infty)}$ norm on open balls centered at the origin.
It is enough to prove that $u_{p}$ is in $W^{1,(p,\infty)}(B(0,r))$ and in $H^{1,s}(B(0,r))$ for all $r>0$ and for all $s \in (1,p).$ We can assume without loss of generality that $r>1.$ We fix such $s$ and $r.$
For every integer $k \ge 1$ we truncate the function $u_{p}$ on the set $B(0, \frac{1}{k+1})$ and we denote this truncation by $u_{p,k}.$ Specifically, for $p=n>1$ and $k \ge 1$ integer we define $u_{n,k}$ on ${\mathbf{R}^n}$ by $$u_{n,k}(x)=\left\{\begin{array}{ll}
\ln \frac{1}{k+1} & \mbox{ if $0 \le |x| \le \frac{1}{k+1}$}\\
u_{n}(x)=\ln |x| & \mbox{ if $\frac{1}{k+1} \le |x|< \infty.$}
\end{array}
\right.$$ For $p \neq n$ and $k \ge 1$ integer we define $u_{p,k}$ on ${\mathbf{R}}^n$ by $$u_{p,k}(x)=\left\{\begin{array}{ll}
\left(\frac{1}{k+1}\right)^{1-\frac{n}{p}} & \mbox{ if $0 \le |x| \le \frac{1}{k+1}$}\\
u_{p}(x)=|x|^{1-\frac{n}{p}} & \mbox{ if $\frac{1}{k+1} \le |x|<\infty.$}
\end{array}
\right.$$
We notice that $(u_{p,k})_{k \ge 1} \subset H^{1,(p, \infty)}(B(0,r))$ is a sequence of Lipschitz functions on $\overline{B}(0,r).$
Moreover, for every $k \ge 1$ we have $0 \le |u_{p,k}| \le |u_{p}|$ pointwise in ${\mathbf{R}}^n$ and $|\nabla u_{p,k}| \le |\nabla u_{p}|$ almost everywhere in ${\mathbf{R}}^n.$ Thus, the sequence $u_{p,k}$ is bounded in $H^{1,(p, \infty)}(B(0,r))$ and in $H^{1,s}(B(0,r))$ for all $1<s<p.$
This sequence converges to $u_{p}$ pointwise in ${\mathbf{R}}^n \setminus \{0\}.$ The aforementioned pointwise convergence on $B^{*}(0,r)$ together with the reflexivity argument from Heinonen-Kilpeläinen-Martio [@HKM Theorem 1.32], valid for all integers $n \ge 1,$ shows that $u_{p}$ is in $H^{1,s}(B(0,r)).$ Thus, we showed that $u_{p} \in H^{1,s}(B(0,r))$ for all $1<s<p$ and all $r>0.$
Thus, $u_{p}$ is in $H^{1,s}_{loc}({\mathbf{R}}^n)$ for all $s \in (1,p).$ This, coupled with the fact that $u_{p}$ is in $L^{(p,\infty)}_{loc}({\mathbf{R}}^n)$ and $|\nabla u_{p}|$ is in $L^{(p,\infty)}({\mathbf{R}}^n),$ shows that $u_{p}$ is indeed in $W^{1,(p,\infty)}_{loc}({\mathbf{R}}^n).$ This finishes the proof.
The following theorem shows, among other things, that $H^{1,(p,\infty)}(\Omega) \subsetneq W^{1,(p,\infty)}(\Omega);$ it also shows that the spaces $H_{0}^{1,(p,\infty)}(\Omega),$ $H^{1,(p, \infty)}(\Omega),$ and $W^{1,(p, \infty)}(\Omega)$ are not reflexive.
\[H1pinfty subsetneq W1pinfty\] Let $\Omega \subset {\mathbf{R}}^n$ be an open set, where $n \ge 1$ is an integer and let $y$ be a point in $\Omega.$ Suppose $1<p<\infty.$
[(i)]{} We have $H^{1,(p,\infty)}(\Omega) \subsetneq W^{1,(p,\infty)}(\Omega) \cap H^{1,(p,\infty)}(\Omega \setminus \{y \}).$
[(ii)]{} We have $H^{1,(p,q)}(\Omega \setminus \{y \}) \subsetneq H^{1,(p,\infty)}(\Omega \setminus \{y \})$ whenever $1 \le q<\infty.$
[(iii)]{} The spaces $H_{0}^{1,(p,\infty)}(\Omega),$ $H^{1,(p,\infty)}(\Omega),$ and $W^{1,(p,\infty)}(\Omega)$ are not reflexive.
The inclusion in (i) follows immediately from Theorem \[H included in W\] and from the definition of the Sobolev-Lorentz spaces $H^{1,(p, q)};$ the inclusion in (ii) follows immediately from Theorem \[W1pqloc included in H1sloc s<p\] (iii) and from the definition of the Sobolev-Lorentz spaces $H^{1,(p, q)}.$
In order to prove the strict inclusions in (i) and (ii) and the non-reflexivity in (iii), we can assume without loss of generality that $\Omega$ is a bounded open set in ${\mathbf{R}^n}$ such that $y=0 \in \Omega.$ Furthermore, we can assume without loss of generality that $\Omega=B(0,r)$ with $r>1.$
We prove (i) and (ii). We define $u_{r,p}:B(0,r) \rightarrow [-\infty, \infty]$ by $$u_{r,p}(x)=\left\{ \begin{array}{ll}
\ln \frac{|x|}{r} & 0 \le |x|<r, \mbox{ if $p=n>1$}\\
|x|^{1-\frac{n}{p}}-r^{1-\frac{n}{p}} & 0 \le |x|<r, \mbox{ if $p\neq n.$}
\end{array}
\right.$$
Let $c(n,p,r)$ be a constant that depends on $n,p,r,$ defined by $$\label{defn of cnpr}
c(n,p,r)=\left\{ \begin{array}{ll}
\ln r & \mbox{ if $p=n>1$}\\
r^{1-\frac{n}{p}} & \mbox{ if $p\neq n.$}
\end{array}
\right.$$
We notice that $u_{r,p}=u_p-c(n,p,r)$ on $B(0,r),$ where $u_p$ is the function from Proposition \[up in Lpinfty loc and in W1pinfty loc minus H1pinfty\], defined on ${\mathbf{R}}^n.$ Thus, from Proposition \[up in Lpinfty loc and in W1pinfty loc minus H1pinfty\] it follows immediately that $u_{r,p}$ is in $W^{1,(p,\infty)}(B(0,r)) \setminus H^{1,(p,\infty)}(B(0,r))$ and that $u_{r,p}$ is not in $H^{1,(p,q)}(B^{*}(0,r))$ whenever $1 \le q<\infty.$
Moreover, by mimicking the argument from the proof of Proposition \[up in Lpinfty loc and in W1pinfty loc minus H1pinfty\], we have $$||\nabla u_{r,p}-\nabla v||_{L^{p,\infty}(B(0,\alpha); {\mathbf{R}}^n)} \ge ||\nabla u_{r,p}||_{L^{p,\infty}(B(0,r); {\mathbf{R}}^n)}=|C(n,p)| \, \Omega_n^{1/p}$$ for every $v \in C^{\infty}(B(0,r))$ and every $\alpha \in (0,r),$ where $C(n,p)$ is the constant from (\[Cnp is 1-n/p or 1\]).
We notice that $u_{r,p}$ is smooth in $B^{*}(0,r).$ Since we saw that $u_{r,p}$ is in $W^{1,(p,\infty)}(B(0,r)),$ it follows immediately that $u_{r,p} \in H^{1,(p,\infty)}(B^{*}(0,r)).$ This finishes the proof of claims (i) and (ii).
We prove now claim (iii). We modify slightly the reflexivity argument from Proposition \[up in Lpinfty loc and in W1pinfty loc minus H1pinfty\]. For every integer $k \ge 1$ we define the functions $u_{r,p,k}$ on $B(0,r)$ by $u_{r,p,k}(x)=u_{p,k}(x)-c(n,p,r), x \in B(0,r);$ here $c(n,p,r)$ is the constant from (\[defn of cnpr\]) and $u_{p,k}$ are the functions from Proposition \[up in Lpinfty loc and in W1pinfty loc minus H1pinfty\] (namely the truncations of $u_p$ on $B(0, \frac{1}{k+1})$). Specifically, for $p=n>1$ and $k \ge 1$ integer we have $$u_{r,n,k}(x)=\left\{\begin{array}{ll}
\ln \frac{1}{k+1}-\ln r & \mbox{ if $0 \le |x| \le \frac{1}{k+1}$}\\
u_{r,n}(x)=\ln \frac{|x|}{r} & \mbox{ if $\frac{1}{k+1} \le |x| < r.$}
\end{array}
\right.$$ For $p \neq n$ and $k \ge 1$ integer we have $$u_{r,p,k}(x)=\left\{\begin{array}{ll}
\left(\frac{1}{k+1}\right)^{1-\frac{n}{p}}-r^{1-\frac{n}{p}} & \mbox{ if $0 \le |x| \le \frac{1}{k+1}$}\\
u_{r,p}(x)=|x|^{1-\frac{n}{p}}-r^{1-\frac{n}{p}} & \mbox{ if $\frac{1}{k+1} \le |x| < r.$}
\end{array}
\right.$$
We see that $(u_{r,p,k})_{k \ge 1} \subset H_{0}^{1,(p, \infty)}(B(0,r))$ is a sequence of Lipschitz functions on $B(0,r)$ that can be extended continuously by $0$ on $\partial B(0,r).$ Moreover, for every $k \ge 1$ we have $0 \le |u_{r,p,k}| \le |u_{r,p}|$ pointwise in $B(0,r)$ and $|\nabla u_{r,p,k}| \le |\nabla u_{r,p}|$ almost everywhere in $B(0,r).$
By using the argument from Proposition \[up in Lpinfty loc and in W1pinfty loc minus H1pinfty\] (ii) with minor modifications, we see that the sequence $u_{r,p,k}$ is bounded in $H_{0}^{1,(p, \infty)}(B(0,r)),$ in $H_{0}^{1,s}(B(0,r))$ and also in $H^{1,s}(B(0,r))$ for all $1<s<p.$ Since this sequence converges to $u_{r,p}$ pointwise in $B^{*}(0,r)$ but $u_{r,p}$ is not in $H^{1,(p,\infty)}(B(0,r)),$ it follows that $H_{0}^{1,(p,\infty)}(B(0,r))$ and $H^{1,(p,\infty)}(B(0,r))$ are not reflexive spaces. Moreover, since both these spaces are closed subspaces of $W^{1,(p,\infty)}(B(0,r)),$ it follows that the space $W^{1,(p,\infty)}(B(0,r))$ is not reflexive. Thus, we proved claim (iii). This finishes the proof of the theorem.
The following lemma shows, among other things, that for $1<p<\infty$ and $1\le q \le \infty,$ the product of a function $u$ in $W^{1,(p,q)}(\Omega)$ and a function $\varphi$ in $C_{0}^{\infty}(\Omega)$ lies in $H_{0}^{1,(p,q)}(\Omega)$ whenever $u$ and $\nabla u$ have absolutely continuous $(p,q)$-norm.
\[Product Rule for W1pq\] [(See Costea [@Cos0 Lemma V.6] and [@Cos3 Lemma 3.3.1]).]{} Let $\Omega \subset {\mathbf{R}}^n$ be an open set, where $n \ge 1$ is an integer. Suppose $1<p<\infty$ and $1\le q\le \infty.$ Suppose that $u \in W^{1,(p,q)}(\Omega)$ and that $\varphi \in C_{0}^{\infty}(\Omega).$ Then $u \varphi \in W^{1,(p,q)}(\Omega)$ and $\nabla(u \varphi)=u \nabla \varphi+ \varphi \nabla u.$ Moreover, $u \varphi \in H_{0}^{1,(p,q)}(\Omega)$ whenever $u$ and $\nabla u$ have absolutely continuous $(p,q)$-norm.
We can assume without loss of generality that $\Omega$ is bounded. Let $s \in (1,p).$ Then from Theorem \[W1pqloc included in H1sloc s<p\] we have $u \in H^{1,s}(\Omega),$ and hence from Evans [@Eva p. 247 Theorem 1] it follows that $u \varphi \in H^{1,s}(\Omega)$ and $\nabla (u \varphi)=u \nabla \varphi+ \varphi \nabla u.$ Since $u \varphi \in L^{(p,q)}(\Omega)$ and $u \nabla \varphi+ \varphi \nabla u \in L^{(p,q)}(\Omega; {\mathbf{R}}^n),$ it follows that $u \varphi \in W^{1,(p,q)}(\Omega).$
Now suppose that $u$ and $\nabla u$ have absolutely continuous $(p,q)$-norm. (This is always the case when $1 \le q < \infty$). We have to prove that $u \varphi \in H_{0}^{1,(p,q)}(\Omega).$
If we multiply $u$ with a function $\widetilde{\eta} \in C_{0}^{\infty}(\Omega),$ the first part of the proof shows that both $u \widetilde{\eta}$ and $\nabla(u \widetilde{\eta})$ have absolutely continuous norm whenever $u$ and $\nabla u$ have absolutely continuous $(p,q)$-norm. If the function $\widetilde{\eta}$ is chosen to be $1$ on $\mbox{supp } \varphi,$ then $\widetilde{\eta} \varphi=\varphi.$ This allows us to assume without loss of generality that $u$ has compact support in $\Omega.$
Let $\eta \in C_{0}^{\infty}(B(0,1))$ be a mollifier. Let $j_0>0$ be an integer such that $$j_0>(\mbox{dist}(\mbox{supp } u, {\mathbf{R}}^n \setminus \Omega))^{-1}.$$ For $j \ge j_0$ integer we define $u_j: \Omega \rightarrow {\mathbf{R}},$ $u_j(x)=(\eta_j*u)(x),$ where $\eta_j(x)=j^n \eta(jx).$ We notice that $(u_j)_{j \ge j_0} \subset C_{0}^{\infty}(\Omega).$ Moreover, since $\eta_j \in C_{0}^{\infty}(B(0, j^{-1}))$ are mollifiers and $u \in W^{1,(p,q)}(\Omega),$ it follows via Ziemer [@Zie Theorem 1.6.1] that $\partial_i u_j=(\partial_i \eta_j)*u=\eta_j*(\partial_i u)$ for all $i=1, \ldots, n$ and for all integers $j \ge j_0.$ Since $u$ and $\nabla u$ have absolutely continuous $(p,q)$-norm, it follows via Theorem \[properties of convolutions in Lorentz spaces\] that $u_j$ converges to $u$ in $H^{1,(p,q)}(\Omega)$ as $j \rightarrow \infty.$
This implies, via the first part of the proof that $u_j \varphi, j \ge j_0$ is a sequence in $C_{0}^{\infty}(\Omega)$ that converges to $u \varphi$ in $H^{1,(p,q)}(\Omega),$ which means that $u \varphi \in H_{0}^{1,(p,q)}(\Omega).$ This finishes the proof.
\[counterexample product rule q=infty\] We notice that the product $u \varphi$ does not necessarily belong to $H_{0}^{1,(p,\infty)}(\Omega)$ when $u \in W^{1,(p,\infty)}(\Omega)$ and $\varphi \in C_{0}^{\infty}(\Omega)$ if $\nabla u$ does not have absolutely continuous $(p,\infty)$-norm.
Indeed, let $0<\alpha<r<\infty$ and let $\Omega=B(0,r).$ Let $u_{r,p}$ be the function from Theorem \[H1pinfty subsetneq W1pinfty\]. Choose $\varphi_{r, \alpha}$ in $C_0^{\infty}(\Omega)$ such that $\varphi_{r, \alpha}=1$ in $B(0,\alpha).$ Then via Lemma \[Product Rule for W1pq\] we have $u_{r, p} \varphi_{r, \alpha} \in W^{1,(p, \infty)}(\Omega).$ It is obvious that $u_{r, p} \varphi_{r, \alpha}=u_{r, p}$ in $B(0,\alpha)$ and hence via Theorem \[H1pinfty subsetneq W1pinfty\] it follows that $u_{r, p} \varphi_{r, \alpha}$ does not belong to $H_{0}^{1,(p, \infty)}(\Omega).$
Now we prove that if $n \ge 1$ is an integer, $\Omega \subset {\mathbf{R}}^n$ is an open set and $u \in W^{1,(p,q)}(\Omega)$ is such that $u$ and $\nabla u$ have absolutely continuous $(p,q)$-norm, then $u \in H^{1,(p,q)}(\Omega).$ This result is new for $q=\infty$ and $n \ge 1.$ For $1 \le q<\infty$ it yields $H^{1,(p,q)}(\Omega)=W^{1, (p,q)}(\Omega),$ a result proved in Costea [@Cos3 Theorem 3.3.4] and Costea [@Cos0 Theorem V.9] for $n \ge 2.$ Thus, we generalize and improve the result obtained in Costea [@Cos3 Theorem 3.3.4] and Costea [@Cos0 Theorem V.9].
\[H=W revisited\] Let $\Omega \subset {\mathbf{R}}^n$ be an open set, where $n \ge 1$ is an integer. Suppose $1<p<\infty$ and $1 \le q \le \infty.$ Suppose that $u \in W^{1,(p,q)}(\Omega).$ If $u$ and $\nabla u$ have absolutely continuous $(p,q)$-norm, then $u \in H^{1,(p,q)}(\Omega).$ In particular, $H^{1,(p,q)}(\Omega)=W^{1,(p,q)}(\Omega)$ if $1 \le q<\infty.$
As in the proofs of Ziemer [@Zie Lemma 2.3.1] and Costea [@Cos3 Theorem 3.3.4], we choose open sets $\Omega_0=\emptyset \subsetneq \Omega_j \subset \subset \Omega_{j+1}, j \ge 1$ such that $\cup_{j} \Omega_j=\Omega,$ together with a sequence of functions $\psi_j, j\ge 1$ such that $\psi_j \in C_{0}^{\infty}(\Omega_{j+1} \setminus \overline{\Omega}_{j-1}),$ $0 \le \psi_j \le 1$ for every $j \ge 1$ and $\sum_j \psi_j \equiv 1$ in $\Omega.$
Let $\varepsilon>0$ be fixed. For every $j \ge 1,$ we have via Lemma \[Product Rule for W1pq\] that $u \psi_j$ is in $H_{0}^{1,(p,q)}(\Omega).$ Moreover, since $\psi_j \in C_{0}^{\infty}(\Omega_{j+1} \setminus \overline{\Omega}_{j-1})$, we see that in fact $u \psi_j$ is in $H_{0}^{1,(p,q)}(\Omega_{j+1} \setminus \overline{\Omega}_{j-1})$ and thus, there exists $\varphi_{j}$ in $C_{0}^{\infty}(\Omega_{j+1} \setminus \overline{\Omega}_{j-1})$ such that $$||\varphi_j- u \psi_j||_{1, (p,q); \Omega} \le ||\varphi_j- u \psi_j||_{L^{(p,q)}(\Omega)} + ||\nabla \varphi_j- \nabla (u \psi_j)||_{L^{(p,q)}(\Omega; {\mathbf{R}}^n)}
< \frac{\varepsilon}{2^{j}}$$ for all $j \ge 1$. If we define $\varphi \equiv \sum_{j \ge 1} \varphi_j,$ we see that $\varphi \in C^{\infty}(\Omega)$ because $\varphi$ can be written as a finite sum of the functions $\varphi_i \in C_{0}^{\infty}(\Omega)$ on every bounded open set $U \subset \subset \Omega.$ Moreover, $$||\varphi-u||_{1, (p,q); \Omega}=||\sum_{j \ge 1} (\varphi_j-u \psi_j)||_{1, (p,q); \Omega} \le \sum_{j \ge 1} ||\varphi_j-u \psi_j||_{1, (p,q); \Omega}< \sum_{j \ge 1} \frac{\varepsilon}{2^{j}}=\varepsilon.$$ This finishes the proof of the theorem.
Now we prove that if $n \ge 1$ is an integer and $u \in W^{1,(p,q)}({\mathbf{R}}^n)$ is such that $u$ and $\nabla u$ have absolutely continuous $(p,q)$-norm, then $u \in H_{0}^{1,(p,q)}({\mathbf{R}}^n).$ This result is new for $q=\infty$ and $n \ge 1.$ For $1 \le q<\infty$ it yields $H^{1,(p,q)}({\mathbf{R}}^n)=H_{0}^{1, (p,q)}({\mathbf{R}}^n),$ a result proved in Costea [@Cos3 Theorem 3.3.6] and Costea [@Cos0 Theorem V.16] for $n \ge 2.$ Thus, we generalize and improve the result obtained in Costea [@Cos3 Theorem 3.3.6] and Costea [@Cos0 Theorem V.16].
\[H=H\_0 revisited\] Suppose $1<p<\infty$ and $1 \le q \le \infty.$ Suppose that $u \in W^{1,(p,q)}({\mathbf{R}}^n),$ where $n \ge 1$ is an integer. If $u$ and $\nabla u$ have absolutely continuous $(p,q)$-norm, then $u \in H_{0}^{1,(p,q)}({\mathbf{R}}^n).$ In particular, $H^{1, (p,q)}({\mathbf{R}}^n)=H_{0}^{1, (p,q)}({\mathbf{R}}^n)$ if $1 \le q<\infty.$
Let $u \in W^{1,(p,q)}({\mathbf{R}}^n)$ such that $u$ and $\nabla u$ have absolutely continuous $(p,q)$-norm. (This is always the case when $1 \le q<\infty$). Then from Theorem \[H=W revisited\] it follows that $u$ is in fact in $H^{1,(p,q)}({\mathbf{R}}^n).$
For $j=1,2,\ldots$ choose functions $\varphi_j \in C_{0}^{\infty}(B(0, j+1)),$ $0 \le \varphi_j \le 1,$ such that $\varphi_j(x)=1$ for each $x \in \overline{B}(0,j).$ Moreover, we choose these functions $\varphi_j$ to be radial and $2$-Lipschitz for all $j \ge 1.$ Then $u_j:=u \varphi_j \in H_{0}^{1,(p,q)}(B(0,j+1))$ for all $j \ge 1$ via Lemma \[Product Rule for W1pq\].
Fix $\varepsilon>0.$ For every $j \ge 1$ choose $\psi_j \in C_{0}^{\infty}(B(0,j+1))$ such that $$||\psi_j-u_j||_{1,(p,q);{\mathbf{R}}^n}=||\psi_j-u \varphi_j||_{1,(p,q);{\mathbf{R}}^n}<\frac{\varepsilon}{4}.$$
For every $j \ge 1$ integer we have, via the definition of the $H^{1,(p,q)}$-norm and via Lemma \[Product Rule for W1pq\] $$\begin{aligned}
||u-u_j||_{1,(p,q); {\mathbf{R}}^n} &\le& ||u-u_j||_{L^{(p,q)}({\mathbf{R}}^n)}+||\nabla u - \nabla u_j||_{L^{(p,q)}({\mathbf{R}}^n; {\mathbf{R}}^n)}\\
&\le& ||u (1-\varphi_j)||_{L^{(p,q)}({\mathbf{R}}^n)}+
||(1-\varphi_j) \nabla u||_{L^{(p,q)}({\mathbf{R}}^n;{\mathbf{R}}^n)}\\
& &+||u \nabla \varphi_j||_{L^{(p,q)}({\mathbf{R}}^n;{\mathbf{R}}^n)}.\end{aligned}$$
Since $0 \le \varphi_j \le 1$ is a 2-Lipschitz smooth function supported in $B(0,j+1)$ such that $\varphi_j=1$ in $\overline{B}(0,j),$ this yields $$\begin{aligned}
||u-u_j||_{1,(p,q); {\mathbf{R}}^n} &\le& ||u \chi_{{\mathbf{R}}^n \setminus B(0, j)}||_{L^{(p,q)}({\mathbf{R}}^n)}\\
& &+||\nabla u \chi_{{\mathbf{R}}^n \setminus B(0, j)}||_{L^{(p,q)}({\mathbf{R}}^n;{\mathbf{R}}^n)}\\
& &+ || \nabla \varphi_j ||_{L^{\infty}({\mathbf{R}}^n)} ||u \chi_{{\mathbf{R}}^n \setminus B(0, j)}||_{L^{(p,q)}({\mathbf{R}}^n;{\mathbf{R}}^n)}\\
&\le&3||u \chi_{{\mathbf{R}}^n \setminus B(0, j)}||_{L^{(p,q)}({\mathbf{R}}^n)}\\
& &+||\nabla u \chi_{{\mathbf{R}}^n \setminus B(0, j)}||_{L^{(p,q)}({\mathbf{R}}^n;{\mathbf{R}}^n)}\end{aligned}$$ for all $j \ge 1.$
Since $u$ and $\nabla u$ have absolutely continuous $(p,q)$-norm, we can choose an integer $j_0>1$ such that $$||u \chi_{{\mathbf{R}}^n \setminus B(0, j)}||_{L^{(p,q)}({\mathbf{R}}^n)}+||\nabla u \chi_{{\mathbf{R}}^n \setminus B(0, j)}||_{L^{(p,q)}({\mathbf{R}}^n; {\mathbf{R}}^n)}<\frac{\varepsilon}{4}$$ for all $j \ge j_0.$
Thus, $(\psi_j)_{j \ge 1} \subset C_{0}^{\infty}({\mathbf{R}}^n)$ and $$\begin{aligned}
||\psi_j-u||_{1,(p,q); {\mathbf{R}}^n} &\le& ||\psi_j-u_j||_{1,(p,q); {\mathbf{R}}^n}+||u-u_j||_{1,(p,q); {\mathbf{R}}^n}\\
&<&\frac{\varepsilon}{4}+ 3||u \chi_{{\mathbf{R}}^n \setminus B(0, j)}||_{L^{(p,q)}({\mathbf{R}}^n)}\\
&&+||\nabla u \chi_{{\mathbf{R}}^n \setminus B(0, j)}||_{L^{(p,q)}({\mathbf{R}}^n;{\mathbf{R}}^n)}<\varepsilon\end{aligned}$$ for all $j \ge j_0.$ This finishes the proof of the theorem.
We prove now that $W^{1, (p,q_1)}(\Omega) \subsetneq W^{1,(p,q_2)}(\Omega)$ whenever $1<p<\infty$ and $1 \le q_1 < q_2\le \infty.$
\[W1pr strictly included in W1ps\] Let $n \ge 1$ be an integer and $r>0$ be a positive number. Suppose $1<p<\infty$ and $1 \le q_1< q_2\le\infty.$ Let $\alpha$ be a number in $(0,1]$ such that $1 \le q_1 \le \frac{1}{\alpha}<q_2 \le \infty.$ Let $v_{r, \alpha, n, p}:B(0,r) \rightarrow [0,\infty]$ be the function defined in [(\[defn of vralphanp\])]{}.
Then
[[(i)]{}]{} $v_{r, \alpha, n, p} \in H_{0}^{1, (p, q_2)}(B(0,r)) \setminus H^{1, (p,q_1)}(B(0,r)).$
[[(ii)]{}]{} $v_{r, \alpha, n, p} \in H^{1, (p, q_2)}(B^{*}(0,r)) \setminus H^{1, (p,q_1)}(B^{*}(0,r)).$
By choosing $q_3$ such that $\frac{1}{\alpha}<q_3<q_2$ if necessary, we can assume without loss of generality via Theorem \[W1pqloc included in H1sloc s<p\] (iii) that $q_2<\infty$ throughout the proof of this theorem.
Since $n, p, r, \alpha, q_1$ and $q_2$ are fixed here, we simplify the notations throughout the proof of the theorem. We let $v_{n, p}:=v_{r, \alpha, n, p}$ and $f_{\alpha, p}:=f_{rad, r, \alpha, p},$ where $f_{rad, r, \alpha, p}$ is the function defined in (\[defn of fradralphap\]).
Since $$||\nabla v_{n,p}||_{L^{p,q_1}(B^{*}(0,r); {\mathbf{R}}^n)}=\infty,$$ it follows immediately via Theorem \[H=W revisited\] that $v_{n, p} \notin H^{1, (p, q_1)}(B^{*}(0,r))$ and consequently $v_{n, p} \notin H^{1, (p, q_1)}(B(0,r))=W^{1, (p, q_1)}(B(0,r)).$
We want to show that $v_{n,p} \in H_{0}^{1,(p,q_2)}(B(0,r)).$ In order to do that, we resort to a truncation argument and we invoke Theorem \[HKM93 Thm132\].
We know from the proof of Theorem \[Lpr strictly included in Lps via grad of smooth fns\] that $f_{\alpha, p}$ is in $C^{\infty}((0,r)),$ positive and strictly decreasing on $(0,r).$ Moreover, we have $\lim_{t \rightarrow 0} f_{\alpha, p}(t)=\infty$ when $1<p \le n$ and $\lim_{t \rightarrow 0} f_{\alpha, p}(t)<\infty$ when $n<p<\infty.$
For every integer $k \ge 1$ we truncate the function $v_{n, p}$ on the set $B(0, \frac{r}{k+1})$ and we denote this truncation by $v_{n, p, k}.$ Specifically, for $k \ge 1$ integer we define $v_{n, p, k}$ on $B(0,r)$ by $$v_{n, p, k}(x)=\left\{\begin{array}{ll}
f_{\alpha, p}(\frac{r}{k+1}) & \mbox{ if $0 \le |x| \le \frac{r}{k+1}$}\\
v_{n, p}(x)=f_{\alpha, p}(|x|) & \mbox{ if $\frac{r}{k+1} < |x| < r.$}
\end{array}
\right.$$ It is easy to see that $0 \le v_{n,p,k} \le v_{n,p}$ pointwise in $B(0,r)$ for all $k \ge 1.$ Moreover, all the functions $v_{n,p,k}$ are Lipschitz on $B(0,r)$ and can be extended continuously by $0$ on $\partial B(0,r).$ More precisely, for all $k \ge 1$ we have $$\nabla v_{n, p, k}(x)=\left\{\begin{array}{ll}
0 & \mbox{ if $0 \le |x| < \frac{r}{k+1}$}\\
\nabla v_{n, p}(x)=f'_{\alpha, p}(|x|) \frac{x}{|x|} & \mbox{ if $\frac{r}{k+1} < |x| < r.$}
\end{array}
\right.$$ In particular, for every $k \ge 1$ we have $|\nabla v_{n, p, k}| \le |\nabla v_{n,p}|$ almost everywhere in $B(0,r).$ Thus, we have that $(v_{n, p, k})_{k \ge 1} \subset H_{0}^{1,(p, q_2)}(B(0,r)).$ We claim that the sequence $v_{n,p,k}$ is bounded in $H_{0}^{1,(p,q_2)}(B(0,r))$ and in $H^{1,(p,q_2)}(B^{*}(0,r)).$
We study the cases $n=1$ and $n>1$ separately.
Case I. We suppose first that $n=1.$ Then $p>n$ and from Theorem \[Lpr strictly included in Lps via grad of smooth fns\] (iv) it follows that $v_{n,p}$ is continuous and bounded on $B(0,r).$ The boundedness of the sequence $v_{n,p,k}$ in $H_{0}^{1,(p,q_2)}(B(0,r))$ and in $H^{1,(p,q_2)}(B^{*}(0,r))$ is immediate in this case since $0 \le v_{n,p,k} \le v_{n,p}$ pointwise in $B(0,r),$ $|\nabla v_{n,p}| \in L^{(p,q_2)}(B(0,r))$ and since $|\nabla v_{n,p,k}| \le |\nabla v_{n,p}|$ almost everywhere in $B(0,r)$ for every $k \ge 1.$
Case II. We assume now that $n>1.$ Via Theorem \[Sobolev-Poincare for Sobolev-Lorentz\] (ii) we have $$\begin{aligned}
||v_{n,p,k}||_{L^{p,q_2}(B(0,r))} &\le& C(n,p,q_2) \, |B(0,r)|^{\frac{1}{n}} ||\nabla v_{n,p,k}||_{L^{p,q_2}(B(0,r); {\mathbf{R}^n})}\\
&\le& C(n,p,q_2) \, |B(0,r)|^{\frac{1}{n}} ||\nabla v_{n,p}||_{L^{p,q_2}(B(0,r); {\mathbf{R}^n})}
\end{aligned}$$ for every $k \ge 1$ integer.
Thus, we proved that the sequence $v_{n,p,k}$ is bounded in $H_{0}^{1,(p,q_2)}(B(0,r))$ and in $H^{1,(p,q_2)}(B^{*}(0,r))$ whenever $n \ge 1,$ $1<p<\infty,$ and $1<q_2<\infty.$ The reflexivity of these two spaces and the pointwise convergence of $v_{n,p,k}$ to $v_{n,p}$ on $B^{*}(0,r)$ imply immediately via Theorem \[HKM93 Thm132\] that $v_{n,p}$ is in fact in $H_{0}^{1,(p,q_2)}(B(0,r))$ and in $H^{1,(p,q_2)}(B^{*}(0,r)).$ Moreover, by invoking Theorem \[Holder 1/p’ continuity for u in W1pq n equal 1\] (i) for $n=1$ and respectively Theorem \[Morrey embedding 1<n<p\] (iv) for $n>1,$ we see that $v_{n,p}$ is in fact Hölder continuous in $\overline{B}(0,r)$ with exponent $1-\frac{n}{p}.$ This finishes the proof.
Chain Rule Results
------------------
We recall the chain rule property for the Sobolev-Lorentz spaces, proved in Costea [@Cos3] for $n \ge 2.$
\[Chain Rule revisited\] [(See Costea [@Cos3 Theorem 3.4.1]).]{} Let $\Omega \subset {\mathbf{R}}^n$ be an open set, where $n \ge 1$ is an integer. Suppose $1<p<\infty$ and $1\le q\le \infty.$ Suppose that $f \in C^{1}({\mathbf{R}}),$ $f(0)=0$ and $f'$ is bounded. If $u \in
W^{1, (p,q)}(\Omega),$ then $f \circ u \in W^{1, (p,q)}(\Omega)$ and $$\nabla(f \circ u)=f'(u) \nabla u.$$ Moreover, if $u \in H_{0}^{1, (p,q)}(\Omega),$ then $f \circ u \in
H_{0}^{1, (p,q)}(\Omega).$
We have $$\label{f circ u dominated by u}
|f \circ u(x)|=|f(u(x))-f(0)| \le ||f'||_{L^{\infty}({\mathbf{R}})} |u(x)| \mbox{ for a.e. $x$ in $\Omega$},$$ which implies that $f \circ u \in L^{(p,q)}(\Omega).$
Let $s \in (1,p)$ be fixed. We have that $u \in W^{1,(p,q)}(\Omega),$ hence by Theorem \[W1pqloc included in H1sloc s<p\] it follows that $u \in H^{1,s}_{loc}(\Omega).$ This and (\[f circ u dominated by u\]) imply via Ziemer [@Zie Theorem 2.1.11] that $f \circ u \in H^{1,s}_{loc}(\Omega)$ and $$\nabla (f \circ u)= f'(u) \nabla u.$$ Thus, we have $f \circ u \in L^{(p,q)}(\Omega),$ $\nabla (f \circ u)=f'(u) \nabla u \in L^{(p,q)}(\Omega; {\mathbf{R}}^n),$ which implies that $f \circ u \in W^{1,(p,q)}(\Omega).$
We want to prove that $f \circ u \in H_{0}^{1,(p,q)}(\Omega)$ if $u \in H_{0}^{1,(p,q)}(\Omega).$ This was done in Costea [@Cos3 Theorem 3.4.1] in the case $n \ge 2,$ but the proof is valid for $n=1$ as well. We present it for the convenience of the reader.
Suppose that $u \in H_{0}^{1,(p,q)}(\Omega).$ Then it follows immediately that $u$ and $\nabla u$ have absolutely continuous $(p,q)$-norm. From the first part of the proof we already know that $f \circ u \in W^{1,(p,q)}(\Omega)$ because $u \in H_{0}^{1,(p,q)}(\Omega) \subset W^{1,(p,q)}(\Omega).$ Let $u_j, j \ge 1$ be a sequence of functions in $C_{0}^{\infty}(\Omega)$ that converges to $u$ in $H_{0}^{1,(p,q)}(\Omega).$ Without loss of generality, we can assume that $u_j \rightarrow u$ pointwise almost everywhere in $\Omega.$ Since the functions $u_j$ are compactly supported in $\Omega$ and $f(0)=0,$ it follows that the functions $f \circ u_j$ are compactly supported in $\Omega.$ Moreover, since the functions $u_j$ are in $C^{1}(\Omega)$ and $f$ is in $C^{1}({\mathbf{R}}),$ it follows that the functions $f \circ u_j$ are in $C^{1}(\Omega).$ Thus, $f \circ u_j, j \ge 1$ is a sequence of functions in $C_{0}^{1}(\Omega) \subset H_{0}^{1,(p,q)}(\Omega)$ with $\nabla (f \circ u_j)=f'(u_j) \nabla u_j, j \ge 1.$ Since $f'$ is bounded on ${\mathbf{R}},$ we have
$$|(f \circ u_j)(x)-(f \circ u)(x)| \le ||f'||_{L^{\infty}({\mathbf{R}})} |u_j(x)-u(x)| \mbox{ for a.e. $x$ in } \Omega.$$ This implies that $f \circ u_j$ converges to $f \circ u$ in $L^{(p,q)}(\Omega).$
We have $$\begin{aligned}
||f'(u_j) \nabla u_j-f'(u) \nabla u||_{L^{(p,q)}(\Omega; {\mathbf{R}}^n)}&\le& ||f'||_{L^{\infty}({\mathbf{R}})} ||\nabla u_j-\nabla u||_{L^{(p,q)}(\Omega; {\mathbf{R}}^n)}\\
&&+ ||(f'(u_j)-f'(u)) \nabla u||_{L^{(p,q)}(\Omega; {\mathbf{R}}^n)}.\end{aligned}$$ The first term on the right-hand side trivially converges to $0.$ The second term on the right-hand side converges to $0$ via Bennett-Sharpley [@BS Proposition I.3.6], since $\nabla u$ has absolutely continuous $(p,q)$-norm, $f'$ is bounded and $f'(u_j)$ converges to $f'(u)$ pointwise almost everywhere in $\Omega.$ Consequently, $f \circ u_j$ converges to $f \circ u$ and $\nabla(f \circ u_j)=f'(u_j) \nabla u_j$ converges to $f'(u) \nabla u=\nabla(f \circ u)$ in $L^{(p,q)},$ so $f \circ u_j$ converges to $f \circ u$ in $H_{0}^{1,(p,q)}(\Omega)$ and hence $f \circ u \in H_{0}^{1,(p,q)}(\Omega).$ This finishes the proof.
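Although no numerics are needed for the proof, the chain rule $\nabla (f \circ u)=f'(u) \nabla u$ at the heart of the argument is easy to illustrate. The following Python sketch (illustration only; the grid, the choice $f=\tanh,$ and the sample $u$ are ours) compares the two sides with finite differences on a one-dimensional grid.

```python
import numpy as np

# Numerical sketch (illustration only): check grad(f o u) = f'(u) * grad(u)
# on a 1-D grid with finite differences.  Here f(t) = tanh(t), which is C^1
# with f(0) = 0 and bounded derivative, as required in the text.
x = np.linspace(0.0, 1.0, 2001)
u = np.sin(2.0 * np.pi * x)                 # smooth stand-in for u

lhs = np.gradient(np.tanh(u), x)            # finite-difference gradient of f o u
rhs = (1.0 / np.cosh(u) ** 2) * np.gradient(u, x)  # f'(u) * grad u

max_err = float(np.max(np.abs(lhs - rhs)))  # agreement up to discretization error
```

The two sides agree up to the discretization error of the finite-difference stencil.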
Recall the notation $$u^{+}=\max(u,0) \mbox { and } u^{-}=\min(u,0).$$
\[pos part revisited\] [(See Costea [@Cos0 Lemma V.12] and [@Cos3 Lemma 3.4.4]).]{} Suppose $1<p<\infty$ and $1\le q\le \infty.$ If $u \in W^{1, (p,q)}(\Omega),$ then $u^{+} \in W^{1,
(p,q)}(\Omega)$ and
$$\nabla u^{+}=\left\{ \begin{array}{cl}
\nabla u & \mbox{if $u>0$} \\
0 & \mbox{if $u \le 0.$}
\end{array}
\right.$$
Let $s \in (1,p)$ be fixed. From Theorem \[W1pqloc included in H1sloc s<p\] we have that $u \in H^{1,s}_{loc}(\Omega).$ Via Evans [@Eva p. 291-292, Exercise 20] it follows that $$\nabla u^{+}=\left\{ \begin{array}{cl}
\nabla u & \mbox{if $u>0$} \\
0 & \mbox{if $u \le 0.$}
\end{array}
\right.$$ But in that case $\nabla u^{+} \in L^{(p,q)}(\Omega; {\mathbf{R}}^n)$ since $\nabla u$ is in $L^{(p,q)}(\Omega; {\mathbf{R}}^n)$ and $|\nabla u^{+}(x)| \le |\nabla u(x)|$ for almost every $x$ in $\Omega.$ We also have $u^{+} \in L^{(p,q)}(\Omega)$ since $|u^{+}| \le |u|$ in $\Omega$ and $u \in L^{(p,q)}(\Omega).$ So we have in fact that $u^{+} \in W^{1,(p,q)}(\Omega).$ The claim is proved.
From Theorem \[H=W revisited\] and Lemma \[pos part revisited\] it follows immediately that the space $H^{1,(p,q)}(\Omega)$ is closed under truncations from above by nonnegative numbers and from below by negative numbers whenever $1<p<\infty$ and $1 \le q<\infty.$ Moreover, we have the following density result.
\[bdd fns in H1pq are dense in H1pq q finite\] [(See Costea [@Cos3 Theorem 3.4.5]).]{} Suppose $1<p<\infty$ and $1\le q<\infty.$ Bounded functions in $H^{1,(p,q)}(\Omega)$ are dense in $H^{1,(p,q)}(\Omega).$
It is important to notice that the Sobolev-Lorentz space $W^{1, (p,q)}(\Omega)$ is a lattice.
\[W lattice\] [(See Costea [@Cos0 Theorem V.13] and [@Cos3 Theorem 3.4.6]).]{} Suppose $1<p<\infty$ and $1\le q \le \infty.$ If $u$ and $v$ are in $W^{1, (p,q)}(\Omega),$ then $\max(u,v)$ and $\min(u,v)$ are in $W^{1, (p,q)}(\Omega)$ with
$$\nabla \max(u,v)(x)=\left\{ \begin{array}{cl}
\nabla u(x) & \mbox{if $u(x) \ge v(x)$} \\
\nabla v(x) & \mbox{if $v(x) \ge u(x)$}
\end{array}
\right.$$
and $$\nabla \min(u,v)(x)=\left\{ \begin{array}{cl}
\nabla u(x) & \mbox{if $u(x) \le v(x)$} \\
\nabla v(x) & \mbox{if $v(x) \le u(x).$}
\end{array}
\right.$$
In particular, $|u|=u^{+}-u^{-}$ belongs to $W^{1, (p,q)}(\Omega).$
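The almost-everywhere formulas above can be illustrated numerically away from the coincidence set $\{u=v\}.$ The Python sketch below (illustration only; the smooth samples $u,$ $v$ are ours) compares finite-difference gradients on the sets where one function strictly dominates the other.

```python
import numpy as np

# Numerical sketch (illustration only): away from the crossing set {u = v},
# grad max(u,v) agrees with grad u on {u > v} and with grad v on {v > u}.
x = np.linspace(0.0, 1.0, 4001)
u = np.sin(3.0 * x)
v = np.cos(3.0 * x)

grad_max = np.gradient(np.maximum(u, v), x)
grad_u = np.gradient(u, x)
grad_v = np.gradient(v, x)

# restrict attention to points well inside {u > v}, resp. {v > u}
on_u = u > v + 0.05
on_v = v > u + 0.05
err_u = float(np.max(np.abs(grad_max[on_u] - grad_u[on_u])))
err_v = float(np.max(np.abs(grad_max[on_v] - grad_v[on_v])))
```

Both errors vanish up to discretization, consistent with the lattice formulas.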
\[closed truncation\] [(See Costea [@Cos0 Lemma V.14] and [@Cos3 Lemma 3.4.7]).]{} Suppose $1<p<\infty$ and $1\le q<\infty.$ If $u_j, v_j \in H^{1, (p,q)}(\Omega)$ are such that $u_j \rightarrow u$ and $v_j \rightarrow v$ in $H^{1,(p,q)}(\Omega),$ then $\min(u_j, v_j) \rightarrow \min(u,v)$ and similarly $\max(u_j, v_j) \rightarrow \max(u,v)$ in $H^{1,(p,q)}(\Omega).$
We recall next that the space $H_{0}^{1, (p,q)}(\Omega)$ is also a lattice whenever $1<p<\infty$ and $1\le q \le \infty.$
\[H 0 lattice\] [(See Costea [@Cos0 Theorem V.15] and [@Cos3 Theorem 3.4.8]).]{} Suppose $1<p<\infty$ and $1\le q\le \infty.$ If $u$ and $v$ are in $H_{0}^{1, (p,q)}(\Omega),$ then $\max(u,v)$ and $\min(u,v)$ are in $H_{0}^{1, (p,q)}(\Omega).$ Moreover, if $u \in H_{0}^{1,
(p,q)}(\Omega)$ is nonnegative, then there exists a sequence of nonnegative functions $\varphi_j \in C_{0}^{\infty}(\Omega)$ that converges to $u$ in $H_{0}^{1,(p,q)}(\Omega).$
We have a result analogous to Theorem \[bdd fns in H1pq are dense in H1pq q finite\] for $H_{0}^{1,(p,q)}(\Omega)$ whenever $1<p<\infty$ and $1 \le q \le \infty.$
\[bdd fns in H01pq are dense in H01pq 1 le q le infty\] [(See Costea [@Cos3 Theorem 3.4.9]).]{} Suppose $1<p<\infty$ and $1 \le q \le \infty.$ Bounded functions in $H_{0}^{1,(p,q)}(\Omega)$ are dense in $H_{0}^{1,(p,q)}(\Omega).$
It is easy to see that if a function $u$ is in $H_{0}^{1,(p,q)}(\Omega),$ then $u$ and its distributional gradient $\nabla u$ must have absolutely continuous $(p,q)$-norm. Next we give a sufficient condition for membership in $H_{0}^{1,(p,q)}(\Omega).$
\[zero on bdry Omega implies membership in H0\] Let $\Omega \subset {\mathbf{R}}^n$ be a bounded open set, where $n \ge 1$ is an integer. Suppose $1<p<\infty$ and $1 \le q \le \infty.$ Suppose that $u$ is a function in $W^{1, (p,q)}(\Omega)$ such that $\lim_{x \rightarrow y} u(x)=0$ for all $y \in \partial \Omega.$ If $\nabla u$ has absolutely continuous $(p,q)$-norm, then $u \in H_{0}^{1,(p,q)}(\Omega).$
We first show that $u$ has absolutely continuous $(p,q)$-norm if $u$ satisfies the hypotheses of this lemma, a fact which is trivial when $1 \le q<\infty.$ We have to consider the cases $n=1$ and $n \ge 2$ separately.
Case I. We assume first that $n=1.$ Then via Theorem \[Holder 1/p’ continuity for u in W1pq n equal 1\] it follows that $u$ has a version $\overline{u}$ that is Hölder continuous in $\overline{\Omega}$ with exponent $1-\frac{1}{p}.$ Without loss of generality we can assume that $u$ is Hölder continuous in $\overline{\Omega}$ with exponent $1-\frac{1}{p}.$ Thus, it follows that $u$ has absolutely continuous $(p,q)$-norm on $\Omega$ if $n=1$ because $u$ is continuous on the bounded set $\overline{\Omega}.$
Case II. We assume now that $n \ge 2.$ Let $s$ be chosen in $(1,p).$ Since $\Omega$ is bounded, it follows via Theorem \[W1pqloc included in H1sloc s<p\] that $u$ is in $H^{1,s}(\Omega).$ Thus, it follows via Heinonen-Kilpeläinen-Martio [@HKM Lemma 1.26] that $u \in H_{0}^{1,s}(\Omega).$ If in addition, $s$ is chosen such that $p<\frac{ns}{n-s},$ then we have via Sobolev’s embedding theorem for $H_{0}^{1,s}(\Omega)$ (see Gilbarg-Trudinger [@GT Theorem 7.10]) that in fact $u \in L^{\frac{ns}{n-s}}(\Omega).$ This, (\[relation between Lpr and Lps norm\]), Theorem \[Holder for Lorentz\] and Bennett-Sharpley [@BS Proposition IV.4.2 and Lemma IV.4.5] show that actually $u$ has absolutely continuous $(p,q)$-norm for $n \ge 2$ if it satisfies the hypotheses of the lemma.
Since we now know that $u$ and $\nabla u$ have absolutely continuous $(p,q)$-norm whenever $u$ satisfies the hypotheses of the lemma, it follows via Theorem \[H=W revisited\] that $u$ is in fact in $H^{1,(p,q)}(\Omega).$
By recalling that $u=u^{+}+u^{-},$ it follows immediately via Lemma \[pos part revisited\] that both $\nabla u^{+}$ and $\nabla u^{-}$ have absolutely continuous $(p,q)$-norm since $\nabla u$ has absolutely continuous $(p,q)$-norm and since $|\nabla u^{+}|, |\nabla u^{-}| \le |\nabla u|$ a.e. in $\Omega.$
We also notice that both $u^{+}$ and $u^{-}$ have absolutely continuous $(p,q)$-norm since $|u^{+}|, |u^{-}| \le |u|$ and since $u$ has absolutely continuous $(p,q)$-norm. Moreover, $\lim_{x \rightarrow y} u^{+}(x)=\lim_{x \rightarrow y} u^{-}(x)=0$ for all $y \in \partial \Omega$ since $\lim_{x \rightarrow y} u(x)=0$ for all $y \in \partial \Omega.$ Hence, $u^{+}$ and $u^{-}$ satisfy the hypotheses of the lemma if $u$ does, which implies via Theorem \[H=W revisited\] that $u^{+}$ and $u^{-}$ are in fact in $H^{1,(p,q)}(\Omega).$ Thus, it is enough to prove the claim of the lemma for $u^{+}$ and $u^{-}.$ This means that we can assume without loss of generality that $u \ge 0.$
Fix $\varepsilon>0.$ Let $u_{\varepsilon}=(u-\varepsilon)^{+}=\max(u-\varepsilon, 0).$ Then $u_{\varepsilon}$ has compact support in $\Omega,$ since $\Omega$ is bounded and $u(x) \rightarrow 0$ as $x \rightarrow y$ for every $y \in \partial \Omega,$ so the set $\{u \ge \varepsilon\}$ is a compact subset of $\Omega.$ Moreover, via Theorem \[W lattice\], we see that $u_{\varepsilon} \in W^{1,(p,q)}(\Omega)$ and $$\nabla u_{\varepsilon}=\left\{ \begin{array}{cl}
\nabla u & \mbox{if $u>\varepsilon$} \\
0 & \mbox{if $0 \le u \le \varepsilon.$}
\end{array}
\right.$$
We now show that $u_{\varepsilon}$ is in $H_{0}^{1,(p,q)}(\Omega).$ The function $u_{\varepsilon}$ has absolutely continuous $(p,q)$-norm since $0\le u_{\varepsilon} \le u$ pointwise in $\Omega$ and since $u$ has absolutely continuous $(p,q)$-norm. Similarly, $\nabla u_{\varepsilon}$ has absolutely continuous $(p,q)$-norm since $0 \le |\nabla u_{\varepsilon}| \le |\nabla u|$ almost everywhere in $\Omega$ and since $\nabla u$ has absolutely continuous $(p,q)$-norm. These two facts plus the membership of $u_{\varepsilon}$ in $W^{1,(p,q)}(\Omega)$ yield the membership of $u_{\varepsilon}$ in $H_{0}^{1,(p,q)}(\Omega)$ via Lemma \[Product Rule for W1pq\].
We now show that $u_{\varepsilon}$ converges to $u$ in $W^{1,(p,q)}(\Omega).$ Indeed, we see that $0 \le u-u_{\varepsilon} \le \varepsilon$ pointwise on the bounded set $\Omega,$ which implies $$\lim_{\varepsilon \rightarrow 0} ||u_{\varepsilon}-u||_{L^{(p,q)}(\Omega)}=0.$$
We also see via Theorem \[W lattice\] and the definition of $u_{\varepsilon}$ that $$\nabla u-\nabla u_{\varepsilon}=\nabla u \chi_{0<u\le \varepsilon} \mbox{ a.e. in } \Omega.$$ This and the absolute continuity of the $(p,q)$-norm of $\nabla u$ yield $$\lim_{\varepsilon \rightarrow 0} ||\nabla u_{\varepsilon}-\nabla u||_{L^{(p,q)}(\Omega; {\mathbf{R}^n})}=\lim_{\varepsilon \rightarrow 0} ||\nabla u \chi_{0 < u \le \varepsilon}||_{L^{(p,q)}(\Omega; {\mathbf{R}^n})}=0.$$ This finishes the proof.
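The truncation argument above can be illustrated numerically. In the Python sketch below (illustration only) the ordinary $L^p$ norm stands in for the Lorentz $(p,q)$-norm (both have the absolute-continuity property the proof uses), and the quantity $||\nabla u \chi_{0<u\le \varepsilon}||$ is seen to decrease as $\varepsilon \rightarrow 0.$

```python
import numpy as np

# Numerical sketch (illustration only) of the truncation step: for
# u_eps = max(u - eps, 0), grad(u - u_eps) is supported on {0 < u <= eps},
# whose contribution to the norm vanishes as eps -> 0.  The plain L^p norm
# is used as a stand-in for the Lorentz (p,q)-norm.
p = 3.0
x = np.linspace(0.0, 1.0, 100001)
h = x[1] - x[0]
u = np.sin(np.pi * x)                  # nonnegative, vanishing on the boundary
grad_u = np.pi * np.cos(np.pi * x)     # exact gradient of u

def residual(eps):
    # L^p norm of grad(u) * chi_{0 < u <= eps}, by a Riemann sum
    chi = (u > 0.0) & (u <= eps)
    return (np.sum(np.abs(grad_u[chi]) ** p) * h) ** (1.0 / p)

vals = [residual(e) for e in (0.1, 0.01, 0.001)]  # decreasing toward 0
```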
Hölder continuity of functions in Sobolev-Lorentz spaces {#Section Morrey embedding theorems}
========================================================
In this section we extend some of the known classical embedding theorems to the spaces $H_{0}^{1,(p,q)}(\Omega),$ $C_{0}(\Omega) \cap W^{1,(p,q)}(\Omega)$ and $W_{loc}^{1,(p,q)}(\Omega)$ for $1 \le n<p<\infty$ and $1 \le q \le \infty,$ where $\Omega \subset {\mathbf{R}}^n$ is open. First we recall the definition of Hölder continuous functions with exponent $0<\alpha<1.$
[(See Gilbarg-Trudinger [@GT p. 52-53] and Ziemer [@Zie p. 2-3]).]{} Let $n \ge 1$ and $0<\alpha<1.$ Let $u$ be a function defined on a set $D \subset {\mathbf{R}}^n.$ We say that $u$ is *Hölder continuous in $D$ with exponent $\alpha$* if the quantity $$[u]_{0, \alpha; D}:=\sup_{x, y \in D, x \neq y} \frac{|u(x)-u(y)|}{|x-y|^{\alpha}}$$ is finite. We say that $u$ is *locally Hölder continuous in $D$ with exponent $\alpha$* if $u$ is Hölder continuous with exponent $\alpha$ on compact subsets of $D.$
Let $\Omega \subset {\mathbf{R}}^n$ be an open set. Let $u$ be a continuous function on $\overline{\Omega}.$ We say that $u$ is in $C^{0, \alpha}(\overline{\Omega})$ if $u$ is Hölder continuous in $\Omega$ with exponent $\alpha.$
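The seminorm $[u]_{0,\alpha;D}$ can be approximated from below by maximizing the difference quotients over a finite sample of $D.$ The Python sketch below (illustration only; the function names are ours) does this by brute force and tests it on $u(x)=\sqrt{x},$ which is Hölder continuous on $[0,1]$ with exponent $\frac{1}{2}$ and seminorm $1.$

```python
import numpy as np

# Sketch (illustration only): brute-force evaluation of the Hoelder seminorm
# [u]_{0,alpha;D} over a finite sample of D; this gives a lower bound for
# the supremum over all of D.
def holder_seminorm(xs, us, alpha):
    xs = np.asarray(xs, dtype=float)
    us = np.asarray(us, dtype=float)
    dx = np.abs(xs[:, None] - xs[None, :])   # pairwise |x - y|
    du = np.abs(us[:, None] - us[None, :])   # pairwise |u(x) - u(y)|
    mask = dx > 0.0
    return float(np.max(du[mask] / dx[mask] ** alpha))

# u(x) = sqrt(x) has Hoelder seminorm 1 on [0, 1] for exponent 1/2,
# attained along pairs (x, 0)
xs = np.linspace(0.0, 1.0, 501)
val = holder_seminorm(xs, np.sqrt(xs), 0.5)
```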
Before we state and prove these embedding results, we need to prove an extension result for functions in $H_{0}^{1,(p,q)}(\Omega).$
\[Extension by zero in H01pq\] Let $\Omega \subset \widetilde{\Omega}$ be two open sets in ${\mathbf{R}}^n,$ where $n \ge 1$ is an integer. Suppose $1<p<\infty$ and $1\le q \le \infty.$ Let $u$ be a function in $H_{0}^{1,(p,q)}(\Omega)$ and let $\widetilde{u}$ be the extension of $u$ by zero to $\widetilde{\Omega}.$ Then $\widetilde{u} \in H_{0}^{1,(p,q)}(\widetilde{\Omega}).$
Let $u_k, k \ge 1$ be a sequence of functions in $C_{0}^{\infty}(\Omega)$ such that $u_k$ converges to $u$ in $H_{0}^{1,(p,q)}(\Omega).$ By passing to a subsequence if necessary, we can assume without loss of generality that $u_k$ converges to $u$ pointwise almost everywhere in $\Omega$ and that $\nabla u_k$ converges to $\nabla u$ pointwise almost everywhere in $\Omega.$ For every $k \ge 1$ let $\widetilde{u}_k$ be the extension of $u_k$ by $0$ to $\widetilde{\Omega}.$ Then $\widetilde{u}_k \in C_{0}^{\infty}(\widetilde{\Omega})$ for all $k \ge 1$ and $$||\widetilde{u}_{k}-\widetilde{u}_{l}||_{H_{0}^{1,(p,q)}(\widetilde{\Omega})}=
||u_{k}-u_{l}||_{H_{0}^{1,(p,q)}(\Omega)}$$ for all $l, k \ge 1.$ Hence the sequence $(\widetilde{u}_{k})_{k \ge 1}$ is fundamental in $H_{0}^{1,(p,q)}(\widetilde{\Omega})$ since the sequence $(u_{k})_{k \ge 1}$ is fundamental in $H_{0}^{1,(p,q)}(\Omega).$ Thus, the sequence $(\widetilde{u}_{k})_{k \ge 1}$ converges in $H_{0}^{1,(p,q)}(\widetilde{\Omega})$ to some function $v \in H_{0}^{1,(p,q)}(\widetilde{\Omega}).$ Since $\widetilde{u}_k$ converges to $\widetilde{u}$ in $L^{p,q}(\widetilde{\Omega})$ and pointwise almost everywhere in $\widetilde{\Omega},$ it follows in fact that $\widetilde{u}=v$ almost everywhere in $\widetilde{\Omega}.$ Thus, $\widetilde{u} \in H_{0}^{1,(p,q)}(\widetilde{\Omega}).$ This finishes the proof.
We prove later that if $n=1$ and $\Omega \subset {\mathbf{R}}$ is an open interval, then all the functions in $H_{0}^{1,(p,q)}(\Omega)$ and in $W^{1,(p,q)}(\Omega)$ have representatives that are Hölder continuous in $\overline{\Omega}$ with exponent $1-\frac{1}{p}.$ The following result is the first step in this direction.
\[Holder 1/p’ continuity for u in C1R n equal 1\] Suppose $n=1<p<\infty$ and $1 \le q \le \infty$. Let $\Omega \subset {\mathbf{R}}$ be an open interval. If $u \in C^{1}(\Omega)$ and $x, y \in \Omega$ with $x<y,$ then $$\label{Holder 1/p' continuity estimate for u in C1R n equal 1}
|u(x)-u(y)| \le C(p,q) |x-y|^{1-\frac{1}{p}} ||u'||_{L^{p,q}((x,y))},$$ where $$\label{Cpq is Lp'q' norm of 01}
C(p,q)=\left\{ \begin{array}{ll}
1 & \mbox{ if $q=1$}\\
\left(\frac{p'}{q'}\right)^{\frac{1}{q'}} & \mbox {if $1<q \le \infty.$}
\end{array}
\right.$$
Let $x, y \in \Omega$ with $x<y$ and $u \in C^{1}(\Omega).$ Then $u(y)-u(x)=\int_{x}^{y} u'(t) dt.$ By taking absolute values on both sides and using Theorem \[Holder for Lorentz\], we obtain $$\begin{aligned}
|u(x)-u(y)| &\le& \int_{x}^{y} |u'(t)| dt = ||u'||_{L^{1}((x,y))} \le ||u'||_{L^{p,q}((x,y))} ||1||_{L^{p',q'}((x,y))}\\
&=& |x-y|^{1-\frac{1}{p}} ||1||_{L^{p',q'}((0,1))} ||u'||_{L^{p,q}((x,y))}.\end{aligned}$$ This finishes the proof of the theorem, since $||1||_{L^{p',q'}((0,1))}=C(p,q),$ the constant defined in (\[Cpq is Lp’q’ norm of 01\]).
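The constant $C(p,q)=||1||_{L^{p',q'}((0,1))}$ is completely explicit. The Python sketch below (illustration only) implements the two cases of (\[Cpq is Lp’q’ norm of 01\]) and cross-checks the case $1<q<\infty$ against a numerical evaluation of $\int_0^1 t^{q'/p'-1}\,dt=p'/q'.$

```python
from math import inf

# Sketch (illustration only): C(p,q) = ||1||_{L^{p',q'}((0,1))}.  With the
# convention q' = inf when q = 1:
#   ||1||_{L^{p',inf}((0,1))} = sup_{0<t<1} t^{1/p'} = 1,
#   ||1||_{L^{p',q'}((0,1))} = (int_0^1 t^{q'/p'-1} dt)^{1/q'} = (p'/q')^{1/q'}.
def conjugate(r):
    # Hoelder conjugate, with conjugate(1) = inf and conjugate(inf) = 1
    if r == 1:
        return inf
    if r == inf:
        return 1.0
    return r / (r - 1.0)

def C(p, q):
    p_prime = conjugate(p)
    if q == 1:                       # q' = inf
        return 1.0
    q_prime = conjugate(q)
    return (p_prime / q_prime) ** (1.0 / q_prime)

# cross-check against a midpoint Riemann sum for p = 3, q = 2
p, q = 3.0, 2.0
pp, qp = conjugate(p), conjugate(q)  # p' = 3/2, q' = 2
N = 10**5
integral = sum(((k + 0.5) / N) ** (qp / pp - 1.0) for k in range(N)) / N
```

For instance $C(2,2)=1$ and $C(2,\infty)=p'=2.$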
We say that the function $\overline{u}$ defined on $\Omega$ is a version of a given function $u$ on $\Omega$ if $u=\overline{u}$ a.e. in $\Omega.$
Now we prove (among other things) that if $n=1$ and $\Omega \subset {\mathbf{R}}$ is an open interval, then all the functions in $H_{0}^{1,(p,q)}(\Omega)$ and in $W^{1,(p,q)}(\Omega)$ have representatives that are Hölder continuous in $\overline{\Omega}$ with exponent $1-\frac{1}{p}.$
\[Holder 1/p’ continuity for u in W1pq n equal 1\] Suppose $n=1<p<\infty$ and $1 \le q \le \infty.$ Let $\Omega \subset {\mathbf{R}}$ be an open set. Let $C(p,q)$ be the constant from [[(\[Cpq is Lp’q’ norm of 01\])]{}]{}.
[(i)]{} Suppose that $\Omega$ is an interval. If $u \in W^{1,(p,q)}(\Omega),$ then there exists a version $\overline{u} \in C(\overline{\Omega})$ that is Hölder continuous in $\overline{\Omega}$ with exponent $1-\frac{1}{p}$ and $$[\overline{u}]_{0, 1-\frac{1}{p}; \overline{\Omega}} \le C(p,q) ||u'||_{L^{p,q}(\Omega)}.$$
[(ii)]{} If $u \in W_{loc}^{1,(p,q)}(\Omega),$ then there exists a version $\overline{u} \in C(\Omega)$ that is locally Hölder continuous in $\Omega$ with exponent $1-\frac{1}{p}$ and $$[\overline{u}]_{0, 1-\frac{1}{p}; \overline{\Omega'}} \le C(p,q) ||u'||_{L^{p,q}(\Omega')},$$ whenever $\Omega'$ is an open subinterval of $\Omega$ such that $\Omega' \subset \subset \Omega.$ Moreover, if $u' \in L^{(p,q)}(\Omega)$ and $\Omega$ is an interval, then $\overline{u}$ is Hölder continuous in $\overline{\Omega}$ with exponent $1-\frac{1}{p}$ and $$[\overline{u}]_{0, 1-\frac{1}{p}; \overline{\Omega}} \le C(p,q) ||u'||_{L^{p,q}(\Omega)}.$$
[(iii)]{} If $u \in H_{0}^{1,(p,q)}(\Omega),$ then there exists a version $\overline{u} \in C(\overline{\Omega})$ that is Hölder continuous in $\overline{\Omega}$ with exponent $1-\frac{1}{p}$ and $$[\overline{u}]_{0, 1-\frac{1}{p}; \overline{\Omega}} \le C(p,q) ||u'||_{L^{p,q}(\Omega)}.$$
From Theorem \[W1pqloc included in H1sloc s<p\] we have $W_{loc}^{1,(p,q)}(\Omega) \subset H_{loc}^{1,s}(\Omega)$ for every $1<s<p.$ Hence, it follows via Evans [@Eva p. 290, Exercise 6] that $u$ has a version $\overline{u} \in C(\Omega)$ that is locally Hölder continuous in $\Omega$ with exponent $1-\frac{1}{s}.$ Without loss of generality we can assume that $u$ is itself continuous in $\Omega \subset {\mathbf{R}}.$
For both (i) and (ii) we prove first that $$|u(x)-u(y)| \le C(p,q) |x-y|^{1-\frac{1}{p}} ||u'||_{L^{p,q}((x,y))}$$ whenever $x$ and $y$ are two points in $\Omega$ such that $x<y$ and $(x,y) \subset \Omega.$ Here $C(p,q)$ is the constant from (\[Cpq is Lp’q’ norm of 01\]).
The function $u$ is assumed to be continuous in $\Omega$ and the above pointwise inequality is local; thus, in order to prove it, we can assume without loss of generality for both (i) and (ii) that $\Omega \subset {\mathbf{R}}$ is a bounded open interval and that $u$ is compactly supported in $\Omega.$
From Theorem \[W1pqloc included in H1sloc s<p\] it follows (since $\Omega$ is assumed to be bounded) that $u \in H_{0}^{1,s}(\Omega)$ for every $s \in (1,p).$ Fix such an $s \in (1,p).$ Let $u_k, k \ge 1$ be a sequence in $C_{0}^{\infty}(\Omega)$ converging to $u$ in $H_{0}^{1,s}(\Omega).$
Applying (\[Holder 1/p’ continuity estimate for u in C1R n equal 1\]) with $(s,s)$ in place of $(p,q)$ to the differences $u_k-u_l \in C_{0}^{\infty}(\Omega)$ shows that the sequence $(u_k)_{k \ge 1}$ is uniformly Cauchy on the bounded interval $\Omega.$ Since $u_k$ converges to $u$ in $H_{0}^{1,s}(\Omega)$ and $u$ is continuous in $\Omega,$ it follows that $u_k$ in fact converges to $u$ uniformly on $\Omega.$
The uniform convergence of $u_k$ to $u$ on $\Omega,$ the convergence of $u_k'$ to $u'$ in $L^{s}(\Omega)$ (and hence in $L^{1}((x,y))$ for all $x, y \in \Omega$ with $x<y$), and Hölder’s inequality for Lorentz spaces (Theorem \[Holder for Lorentz\]) imply, by passing to the limit in the inequality $|u_k(x)-u_k(y)| \le \int_{x}^{y} |u_k'(t)| \, dt,$ that $$\begin{aligned}
|u(x)-u(y)| &\le& \int_{x}^{y} |u'(t)| dt = ||u'||_{L^{1}((x,y))} \\
&\le& C(p,q) |x-y|^{1-\frac{1}{p}} ||u'||_{L^{p,q}((x,y))} \\
&\le& C(p,q) |x-y|^{1-\frac{1}{p}} ||u'||_{L^{p,q}(\Omega)}\end{aligned}$$ for all $x, y \in \Omega$ with $x<y.$ Here $C(p,q)$ is the constant from (\[Cpq is Lp’q’ norm of 01\]). This finishes the proof of the desired pointwise inequality.
Now we prove claim (i). Since $u' \in L^{(p,q)}(\Omega),$ the above pointwise inequality implies that $$|u(x)-u(y)| \le C(p,q) |x-y|^{1-\frac{1}{p}} ||u'||_{L^{p,q}(\Omega)}$$ for all $x, y \in \Omega$ with $x<y.$ In particular, $u$ is uniformly continuous on $\Omega.$
We claim that $u$ admits a continuous extension to $\overline{\Omega}.$ This is obvious when $\Omega={\mathbf{R}},$ so we can assume without loss of generality that $\Omega \neq {\mathbf{R}}.$
If $\Omega=(a,b)$ is a bounded interval, then it follows that $u$ is bounded on $\Omega$ and uniformly continuous on $\Omega,$ so in this case we can indeed extend $u$ continuously to $\overline{\Omega}=[a,b].$ We denote the extension to $\overline{\Omega}=[a,b]$ by $u$ as well.
If $\Omega \neq {\mathbf{R}}$ is an unbounded interval, then $\Omega$ is either $(a, \infty)$ or $(-\infty, a)$ for some $a \in {\mathbf{R}}.$ In both situations, $u$ is uniformly continuous on $\Omega$ and bounded near $x=a,$ so in this case we can also extend $u$ continuously to the unbounded set $\overline{\Omega}=\Omega \cup \{a\}.$ We denote this continuous extension to $\overline{\Omega}$ by $u$ as well.
The above pointwise inequality and the continuity of $u$ on $\overline{\Omega}$ imply that $$|u(x)-u(y)| \le C(p,q) |x-y|^{1-\frac{1}{p}} ||u'||_{L^{p,q}(\Omega)}$$ for all $x, y \in \overline{\Omega}$ with $x<y.$ This finishes the proof of claim (i).
Now we prove claim (ii). The first part of claim (ii) follows immediately from (i).
Assume now that $u \in W_{loc}^{1,(p,q)}(\Omega),$ $\Omega$ is an interval and $u' \in L^{(p,q)}(\Omega).$ By mimicking the argument from the proof of claim (i), we see that $u$ admits a continuous extension to $\overline{\Omega}.$ If we denote that extension by $u$ as well, we see that $$|u(x)-u(y)| \le C(p,q) |x-y|^{1-\frac{1}{p}} ||u'||_{L^{p,q}(\Omega)}$$ for all $x, y \in \overline{\Omega}$ with $x<y.$ This finishes the proof of claim (ii).
Now we prove claim (iii). If $\Omega \subset {\mathbf{R}}$ is an interval, then claim (iii) follows obviously from (i).
Suppose now that $\Omega$ is not an interval. Let $U$ be the smallest open interval containing $\Omega.$ That is, $U=(a,b),$ where $$a=\inf_{x \in \Omega} x \mbox{ and } b=\sup_{x \in \Omega} x.$$ Let $\widetilde{u}$ be the extension of $u$ by $0$ to $U.$ Then $\widetilde{u} \in H_{0}^{1,(p,q)}(U)$ via Proposition \[Extension by zero in H01pq\]. Thus, claim (iii) holds for $\widetilde{u} \in H_{0}^{1,(p,q)}(U).$
We notice that $||\widetilde{u}'||_{L^{p,q}(U)}=||u'||_{L^{p,q}(\Omega)}.$ Moreover, it is easy to see that $u$ is in $C^{0,1-\frac{1}{p}}(\overline{\Omega})$ if and only if $\widetilde{u}$ is in $C^{0,1-\frac{1}{p}}(\overline{U})$ with $$[\widetilde{u}]_{0, 1-\frac{1}{p}; \overline{U}}=[u]_{0, 1-\frac{1}{p}; \overline{\Omega}}.$$ Thus, claim (iii) holds also for $u \in H_{0}^{1,(p,q)}(\Omega)$ when $\Omega$ is not an interval. This finishes the proof of the theorem.
Now we prove (among other things) that if $1<n<p<\infty$ and $1 \le q \le \infty,$ then the spaces $H_{0}^{1,(p,q)}(\Omega)$ and $C_{0}(\Omega) \cap W^{1,(p,q)}(\Omega)$ embed into $C^{0, 1-\frac{n}{p}}(\overline{\Omega}).$ Since we work with functions in $H_{0}^{1,(p,q)}(\Omega)$ and in $C_{0}(\Omega) \cap W^{1,(p,q)}(\Omega),$ no regularity assumptions on $\partial \Omega$ are needed. This extends the Morrey embedding theorem to the Sobolev-Lorentz spaces in the Euclidean setting. We prove this theorem by relying on the well-known Poincaré inequality in the Euclidean setting and by invoking the classical Morrey embedding theorem for $1<n<s<p<\infty,$ proved by Evans in [@Eva] and by Gilbarg-Trudinger in [@GT]. Theorem \[Morrey embedding 1<n<p\] (i) was also obtained, via a different proof, by Cianchi-Pick in [@CiPi2 Theorem 1.3].
\[Morrey embedding 1<n<p\] Suppose $1<n<p<\infty$ and $1 \le q \le \infty.$ Let $\Omega \subset {\mathbf{R}}^n$ be open.
[(i)]{} If $u \in W^{1,(p,q)}(\Omega)$ is compactly supported in $\Omega,$ then $u$ has a version $\overline{u} \in C^{0,1-\frac{n}{p}}(\overline{\Omega})$ and $$\label{Holder 1-n/p estimate Morrey embedding 1<n<p}
[\overline{u}]_{0, 1-\frac{n}{p}; \overline{\Omega}} \le C(n,p,q) ||\nabla u||_{L^{p,q}(\Omega; {\mathbf{R}}^n)},$$ where $C(n,p,q)>0$ is a constant that depends only on $n,p,q.$
[(ii)]{} If $u \in W^{1,(p,q)}_{loc}(\Omega),$ then $u$ has a version $\overline{u}$ that is locally Hölder continuous in $\Omega$ with exponent $1-\frac{n}{p}.$
[(iii)]{} If $u \in W^{1,(p,q)}_{loc}({\mathbf{R}}^n)$ and $|\nabla u| \in L^{(p,q)}({\mathbf{R}}^n),$ then $u$ has a version $\overline{u} \in C^{0,1-\frac{n}{p}}({\mathbf{R}}^n)$ and $$[\overline{u}]_{0, 1-\frac{n}{p}; {\mathbf{R}}^n} \le C(n,p,q) ||\nabla u||_{L^{p,q}({\mathbf{R}}^n; {\mathbf{R}}^n)},$$ where $C(n,p,q)$ is the constant from [(\[Holder 1-n/p estimate Morrey embedding 1<n<p\])]{}.
[(iv)]{} If $u \in H_{0}^{1,(p,q)}(\Omega),$ then $u$ has a version $\overline{u} \in C^{0,1-\frac{n}{p}}(\overline{\Omega})$ and $$[\overline{u}]_{0, 1-\frac{n}{p}; \overline{\Omega}} \le C(n,p,q) ||\nabla u||_{L^{p,q}(\Omega; {\mathbf{R}}^n)},$$ where $C(n,p,q)$ is the constant from [(\[Holder 1-n/p estimate Morrey embedding 1<n<p\])]{}.
Let $s \in (n,p)$ be fixed. We have via Gilbarg-Trudinger [@GT Theorem 7.17] and via Theorem \[W1pqloc included in H1sloc s<p\] that $W^{1,(p,q)}_{loc}(\Omega)$ embeds into the space of locally Hölder continuous functions in $\Omega$ with exponent $1-\frac{n}{s}.$ Thus, if $u \in W^{1,(p,q)}_{loc}(\Omega)$ with $1<n<p<\infty,$ we can assume without loss of generality throughout the proof of this theorem (after possibly redefining $u$ on a subset of $\Omega$ of Lebesgue measure $0$) that $u$ is in fact locally Hölder continuous in $\Omega$ with exponent $1-\frac{n}{s}.$
We prove now (i). Suppose that $u \in W^{1,(p,q)}(\Omega)$ is compactly supported in $\Omega.$ Then we can assume without loss of generality that $\Omega$ is bounded. Since $u$ is compactly supported in $\Omega$ and locally Hölder continuous in $\Omega$ with exponent $1-\frac{n}{s},$ it follows that $u$ can be extended continuously by $0$ on $\partial \Omega$ and that this extension (denoted by $u$ as well) is Hölder continuous in $\overline{\Omega}$ with exponent $1-\frac{n}{s},$ where $1<n<s<p.$
We extend $u$ by $0$ to ${\mathbf{R}}^n \setminus \Omega$ and we denote this extension by $v.$ Since $u \in C_{0}(\Omega) \cap W^{1,(p,q)}(\Omega),$ it follows immediately from the definition of $v$ that $v \in C_{0}({\mathbf{R}}^n) \cap W^{1,(p,q)}({\mathbf{R}}^n)$ and $$\nabla v(x)=\left\{\begin{array}{ll}
\nabla u(x) & \mbox{if $x \in \Omega$}\\
0 & \mbox{if $x \in {\mathbf{R}}^n \setminus \Omega$}.
\end{array}
\right.$$ Moreover, since $u \in C^{0, 1-\frac{n}{s}}(\overline{\Omega}),$ it is easy to see that $v \in C^{0, 1-\frac{n}{s}}({\mathbf{R}}^n)$ with $$[v]_{0, 1-\frac{n}{s}; {\mathbf{R}}^n}=[u]_{0, 1-\frac{n}{s}; \overline{\Omega}}.$$ It is also easy to see that $v \in C^{0, 1-\frac{n}{p}}({\mathbf{R}}^n)$ if and only if $u \in C^{0, 1-\frac{n}{p}}(\overline{\Omega})$ with $$[v]_{0, 1-\frac{n}{p}; {\mathbf{R}}^n}=[u]_{0, 1-\frac{n}{p}; \overline{\Omega}}.$$
It is enough to show that $$[v]_{0, 1-\frac{n}{p}; {\mathbf{R}}^n} \le C(n,p,q) ||\nabla v||_{L^{p,q}({\mathbf{R}}^n;{\mathbf{R}}^n)}.$$
Let $x \neq y$ be two points from ${\mathbf{R}}^n$ and let $a$ be the midpoint of the segment connecting $x$ and $y.$ Let $R=|x-y|.$
For every integer $j \ge 0$ let $B_{x,j}=B(x, 2^{-j-1}R)$ and $B_{y,j}=B(y, 2^{-j-1}R).$ Let $B_{a}=B(a,R).$ It is easy to see that $B_{x,0} \cup B_{y,0} \subset B_a.$
Since $v$ is continuous in ${\mathbf{R}}^n,$ all the points in ${\mathbf{R}}^n$ are Lebesgue points for $v.$ Thus, $$v(x)=\lim_{j \rightarrow \infty} v_{B_{x,j}} \mbox{ and } v(y)=\lim_{j \rightarrow \infty} v_{B_{y,j}}.$$ Hence $$\begin{aligned}
v(x)-v(y)&=&\left((v_{B_{x,0}}-v_{B_{a}})+\sum_{j=0}^{\infty} \frac{1}{|B_{x,j+1}|} \int_{B_{x, j+1}} (v(z)-v_{B_{x,j}}) \, dz\right)\\
& &-\left((v_{B_{y,0}}-v_{B_{a}})+\sum_{j=0}^{\infty} \frac{1}{|B_{y,j+1}|} \int_{B_{y, j+1}} (v(z)-v_{B_{y,j}}) \, dz\right).\end{aligned}$$ This implies $$\begin{aligned}
|v(x)-v(y)|&\le&\left(|v_{B_{x,0}}-v_{B_{a}}|+\sum_{j=0}^{\infty} \frac{1}{|B_{x,j+1}|} \int_{B_{x, j+1}} |v(z)-v_{B_{x,j}}| \, dz\right)\\
& &+\left(|v_{B_{y,0}}-v_{B_{a}}|+\sum_{j=0}^{\infty} \frac{1}{|B_{y,j+1}|} \int_{B_{y, j+1}} |v(z)-v_{B_{y,j}}| \, dz\right).\end{aligned}$$ Since $v \in W^{1,(p,q)}({\mathbf{R}}^n)$ is compactly supported in ${\mathbf{R}}^n,$ we have $v \in H_{0}^{1,s}({\mathbf{R}}^n)$ via Theorem \[W1pqloc included in H1sloc s<p\]. Thus, via Poincaré’s inequality, we have $$\label{Poincare 11 inequality Rn}
\frac{1}{|B(w,r)|} \int_{B(w,r)} |v(z)-v_{B(w,r)}| dz \le C(n) r \frac{1}{|B(w,r)|} \int_{B(w,r)} |\nabla v(z)| dz$$ for every $w \in {\mathbf{R}}^n$ and every $r>0,$ where $C(n)>0$ is a constant that depends only on $n.$
Since $B_{x,0} \subset B_{a},$ we have via Hölder’s inequality for Lorentz spaces (see Theorem \[Holder for Lorentz\]) and Poincaré’s inequality (\[Poincare 11 inequality Rn\]) $$\begin{aligned}
|v_{B_{x,0}}-v_{B_a}|&=&\frac{1}{|B_{x,0}|} \left| \int_{B_{x,0}} (v(z)-v_{B_a}) \, dz \right|\\
&\le&\frac{1}{|B_{x,0}|} \int_{B_{x,0}} |v(z)-v_{B_a}| \, dz\\
&\le& \frac{2^n}{|B_a|} \int_{B_{a}} |v(z)-v_{B_a}| \, dz\\
&\le& C(n) R \frac{1}{|B_a|} \int_{B_{a}} |\nabla v(z)| \, dz \\
&\le& C(n,p,q) R \left(\frac{||\nabla v||_{L^{p,q}(B_a; {\mathbf{R}}^n)}^p}{|B_a|}\right)^{1/p} \\
&=& C(n,p,q) R^{1-\frac{n}{p}} ||\nabla v||_{L^{p,q}(B_a; {\mathbf{R}}^n)}\\
&=& C(n,p,q) R^{1-\frac{n}{p}} ||\nabla u||_{L^{p,q}(\Omega \cap B_a; {\mathbf{R}}^n)}\\
&\le&C(n,p,q) R^{1-\frac{n}{p}} ||\nabla u||_{L^{p,q}(\Omega; {\mathbf{R}}^n)}\\
&=& C(n,p,q) R^{1-\frac{n}{p}} ||\nabla v||_{L^{p,q}({\mathbf{R}}^n; {\mathbf{R}}^n)}.\end{aligned}$$
Similarly, since $B_{y,0} \subset B_{a},$ we obtain (after an almost identical reasoning, by replacing $B_{x,0}$ with $B_{y,0}$) $$\begin{aligned}
|v_{B_{y,0}}-v_{B_a}| &\le& C(n,p,q) R^{1-\frac{n}{p}} ||\nabla u||_{L^{p,q}(\Omega \cap B_a; {\mathbf{R}}^n)}\\
&\le& C(n,p,q) R^{1-\frac{n}{p}} ||\nabla u||_{L^{p,q}(\Omega; {\mathbf{R}}^n)}.\end{aligned}$$
We want to obtain upper estimates for $$\begin{aligned}
|v_{B_{x,j+1}}-v_{B_{x,j}}| &=& \frac{1}{|B_{x,j+1}|} \left| \int_{B_{x,j+1}} (v(z)-v_{B_{x,j}}) \, dz \right| \mbox{ and } \\
|v_{B_{y,j+1}}-v_{B_{y,j}}| &=& \frac{1}{|B_{y,j+1}|} \left| \int_{B_{y,j+1}} (v(z)-v_{B_{y,j}}) \, dz \right|\end{aligned}$$ for all $j \ge 0.$
We only carry out the estimate for $|v_{B_{x,j+1}}-v_{B_{x,j}}|,$ since the estimate for $|v_{B_{y,j+1}}-v_{B_{y,j}}|$ follows by an almost identical reasoning.
Since $B_{x,j+1} \subset B_{x,j} \subset B_a$ for all $j \ge 0,$ we have via Hölder’s inequality for Lorentz spaces (see Theorem \[Holder for Lorentz\]) and Poincaré’s inequality (\[Poincare 11 inequality Rn\]) $$\begin{aligned}
|v_{B_{x,j+1}}-v_{B_{x,j}}|&=&\frac{1}{|B_{x,j+1}|} \left| \int_{B_{x,j+1}} (v(z)-v_{B_{x,j}}) \, dz \right|\\
&\le& \frac{1}{|B_{x,j+1}|} \int_{B_{x,j+1}} |v(z)-v_{B_{x,j}}| \, dz\\
&\le& \frac{2^n}{|B_{x,j}|} \int_{B_{x,j}} |v(z)-v_{B_{x,j}}| \, dz\\
&\le& C(n)\, 2^{-j} R \frac{1}{|B_{x,j}|} \int_{B_{x,j}} |\nabla v(z)| \, dz \\
&\le& C(n,p,q) 2^{-j}R \left(\frac{||\nabla v||_{L^{p,q}(B_{x,j}; {\mathbf{R}}^n)}^p}{|B_{x,j}|}\right)^{1/p} \\
&\le& C(n,p,q) 2^{-j}R \left(\frac{||\nabla v||_{L^{p,q}(B_a; {\mathbf{R}}^n)}^p}{|B_{x,j}|}\right)^{1/p} \\
&=& C(n,p,q) (2^{-j}R)^{1-\frac{n}{p}} ||\nabla v||_{L^{p,q}(B_a; {\mathbf{R}}^n)}\\
&\le& C(n,p,q) (2^{-j}R)^{1-\frac{n}{p}} ||\nabla v||_{L^{p,q}({\mathbf{R}}^n; {\mathbf{R}}^n)}.\end{aligned}$$
By summing the above inequalities, taking into account that $|x-y|=R$ and that the geometric series $\sum_{j=0}^{\infty} 2^{-j(1-\frac{n}{p})}$ converges because $p>n,$ we have $$\begin{aligned}
|v(x)-v(y)| &\le& C(n,p,q) ||\nabla v||_{L^{p,q}(B_a; {\mathbf{R}}^n)} \left(\sum_{j=0}^{\infty} (2^{-j}R)^{1-\frac{n}{p}}\right)\\
&=& C(n,p,q) R^{1-\frac{n}{p}} ||\nabla v||_{L^{p,q}(B_a; {\mathbf{R}}^n)}\\
&=& C(n,p,q) |x-y|^{1-\frac{n}{p}} ||\nabla v||_{L^{p,q}(B_a; {\mathbf{R}}^n)}\\
&\le& C(n,p,q) |x-y|^{1-\frac{n}{p}} ||\nabla v||_{L^{p,q}({\mathbf{R}}^n; {\mathbf{R}}^n)}.\end{aligned}$$
Claim (i) holds with constant $C(n,p,q)$ from the last line in the above sequence of inequalities. This finishes the proof of claim (i).
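The summability used in the chain-of-balls estimate comes entirely from the sign of the exponent $1-\frac{n}{p}.$ A quick Python check (illustration only; the sample values of $n$ and $p$ are ours) of the geometric series:

```python
# Sketch (illustration only): the chain-of-balls estimates are summable
# precisely because p > n, so alpha = 1 - n/p > 0 and
#   sum_{j>=0} (2^{-j} R)^alpha = R^alpha / (1 - 2^{-alpha}) < infinity.
n, p = 3, 5.0
alpha = 1.0 - n / p                        # alpha = 0.4 > 0 since p > n
partial = sum((2.0 ** (-j)) ** alpha for j in range(200))
closed_form = 1.0 / (1.0 - 2.0 ** (-alpha))
gap = abs(partial - closed_form)           # geometric tail, negligible here
```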
We prove now claim (ii). Let $\Omega' \subset \subset \Omega$ be an open subset of $\Omega$ and let $u \in W^{1,(p,q)}_{loc}(\Omega).$ We choose a cut-off function $\varphi \in C_{0}^{\infty}(\Omega)$ such that $0 \le \varphi \le 1$ and such that $\varphi=1$ in $\overline{\Omega'}.$ Then $u \varphi$ is compactly supported in $\Omega$ and via Lemma \[Product Rule for W1pq\] we have $u \varphi \in W^{1,(p,q)}(\Omega).$ From part (i) we have that $u \varphi$ is Hölder continuous in $\overline{\Omega}$ with exponent $1-\frac{n}{p}$ and $$[u\varphi]_{0, 1-\frac{n}{p}; \overline{\Omega}} \le C(n,p,q) ||\nabla(u \varphi)||_{L^{p,q}(\Omega; {\mathbf{R}}^n)},$$ where $C(n,p,q)>0$ is the constant from (\[Holder 1-n/p estimate Morrey embedding 1<n<p\]). Since $\Omega' \subset \subset \Omega$ and $\varphi=1$ in $\overline{\Omega'},$ it follows that $u \varphi=u$ in $\overline{\Omega'}.$ Thus, $u$ is Hölder continuous in $\overline{\Omega'}$ with exponent $1-\frac{n}{p}$ and $$\begin{aligned}
[u]_{0, 1-\frac{n}{p}; \overline{\Omega'}}&=&[u \varphi]_{0, 1-\frac{n}{p}; \overline{\Omega'}}
\le [u \varphi]_{0, 1-\frac{n}{p}; \overline{\Omega}}\\
&\le& C(n,p,q) ||\nabla(u \varphi)||_{L^{p,q}(\Omega; {\mathbf{R}}^n)},
\end{aligned}$$ where $C(n,p,q)>0$ is the constant from (\[Holder 1-n/p estimate Morrey embedding 1<n<p\]). This finishes the proof of (ii).
We prove now claim (iii). We use the notation from part (i).
Let $x \neq y$ be two points in ${\mathbf{R}}^n,$ let $a$ be the midpoint of the segment $[x,y]$ and let $R=|x-y|.$ Let $\varphi_{x,y} \in C_{0}^{\infty}({\mathbf{R}}^n)$ be a function such that $0 \le \varphi_{x,y} \le 1$ and such that $\varphi_{x,y}=1$ on $B_a:=B(a,R).$
By running the argument from (i) with the function $u \varphi_{x,y} \in W^{1,(p,q)}({\mathbf{R}}^n),$ which is compactly supported in ${\mathbf{R}}^n,$ we obtain, since $\varphi_{x,y}=1$ on $B_{a}$ and $x, y \in B_{a},$ $$\begin{aligned}
|u(x)-u(y)|&=&|(u \varphi_{x,y})(x)-(u \varphi_{x,y})(y)|\\
&\le& C(n,p,q) ||\nabla (u \varphi_{x,y})||_{L^{p,q}(B_a; {\mathbf{R}}^n)}\\
&=&C(n,p,q) ||\nabla u||_{L^{p,q}(B_a; {\mathbf{R}}^n)} \le C(n,p,q) ||\nabla u||_{L^{p,q}({\mathbf{R}}^n; {\mathbf{R}}^n)},
\end{aligned}$$ where $C(n,p,q)$ is the constant from (\[Holder 1-n/p estimate Morrey embedding 1<n<p\]) and from the last line of the last sequence of inequalities in the proof of claim (i). This finishes the proof of claim (iii).
We prove now claim (iv). We have to consider the cases $\Omega={\mathbf{R}}^n$ and $\Omega \subsetneq {\mathbf{R}}^n$ separately.
Suppose first that $\Omega={\mathbf{R}}^n.$ In this case the claim follows immediately from (iii) because the membership of $u$ in $H_{0}^{1, (p,q)}({\mathbf{R}}^n)$ implies that $u$ is in $W_{loc}^{1, (p,q)}({\mathbf{R}}^n)$ and $|\nabla u| \in L^{(p,q)}({\mathbf{R}}^n).$
Suppose now that $\Omega \subsetneq {\mathbf{R}}^n.$ Let $v$ be the extension by $0$ of $u$ to ${\mathbf{R}}^n \setminus \Omega.$ We claim that $v$ is continuous in ${\mathbf{R}}^n.$
Indeed, let $(u_k)_{k \ge 1} \subset C_{0}^{\infty}(\Omega)$ be a sequence of functions such that $u_k$ converges to $u$ in $H_{0}^{1,(p,q)}(\Omega)$ and pointwise almost everywhere in $\Omega.$ For every $k \ge 1$ let $v_k$ be the extension by $0$ of $u_k$ to ${\mathbf{R}}^n \setminus \Omega.$ We see immediately that $(v_k)_{k \ge 1} \subset C_{0}^{\infty}({\mathbf{R}}^n)$ and that $v_k$ converges to $v$ pointwise almost everywhere in ${\mathbf{R}}^n.$ Moreover, from Proposition \[Extension by zero in H01pq\], it follows that $v_k$ converges to $v$ in $H_{0}^{1,(p,q)}({\mathbf{R}^n}).$
Since the sequence $(u_k)_{k \ge 1} \subset C_{0}^{\infty}(\Omega)$ converges to $u$ in $H_{0}^{1,(p,q)}(\Omega),$ $u$ is continuous in $\Omega,$ and $1<n<p<\infty,$ it follows immediately that in fact $u_k$ converges to $u$ uniformly on compact subsets of $\Omega.$ In particular, $u_k$ converges pointwise to $u$ everywhere in $\Omega.$
From this, the definitions of $v$ and of the functions $v_k,$ and the fact that $v_k=v=0$ everywhere on ${\mathbf{R}}^n \setminus \Omega$ for all $k \ge 1,$ it follows that the sequence $v_k$ converges pointwise to $v$ everywhere in ${\mathbf{R}}^n.$ Since the sequence $(v_k)_{k \ge 1} \subset C_{0}^{\infty}({\mathbf{R}}^n)$ converges to $v$ in $H_{0}^{1,(p,q)}({\mathbf{R}}^n)$ and pointwise in ${\mathbf{R}}^n,$ and since $1<n<p<\infty,$ it follows that $v \in C({\mathbf{R}}^n)$ and that $v_k$ converges to $v$ uniformly on compact subsets of ${\mathbf{R}}^n.$ Thus $v$ is continuous in ${\mathbf{R}}^n.$ If we denote the extension of $u$ by $0$ to $\partial \Omega$ by $u$ as well, the above argument shows that $u \in C(\overline{\Omega}).$
We see now that claim (iv) holds for $v$ via (iii). Since $v \in C^{0, 1-\frac{n}{p}}({\mathbf{R}}^n)$ and $v$ is the continuous extension by $0$ of the function $u \in C(\overline{\Omega})$ to ${\mathbf{R}}^n,$ it follows immediately that $u \in C^{0, 1-\frac{n}{p}}(\overline{\Omega})$ and $$[u]_{0, 1-\frac{n}{p}; \overline{\Omega}}=[v]_{0, 1-\frac{n}{p}; {\mathbf{R}}^n}.$$ This implies that claim (iv) holds for $u$ as well, because $$||\nabla u||_{L^{p,q}(\Omega; {\mathbf{R}}^n)}=||\nabla v||_{L^{p,q}({\mathbf{R}}^n; {\mathbf{R}}^n)}.$$ This finishes the proof of the theorem.
Theorems \[Holder 1/p’ continuity for u in W1pq n equal 1\] and \[Morrey embedding 1<n<p\] together with Proposition \[up in Lpinfty loc and in W1pinfty loc minus H1pinfty\] yield the following corollary.
Suppose $1 \le n<p<\infty,$ where $n$ is an integer. Let $u_p: {\mathbf{R}}^n \rightarrow {\mathbf{R}},$ $u_p(x)=|x|^{1-\frac{n}{p}}.$ Then $u_p$ is Hölder continuous in ${\mathbf{R}}^n$ with exponent $1-\frac{n}{p}.$
We proved in Proposition \[up in Lpinfty loc and in W1pinfty loc minus H1pinfty\] that $u_p \in W_{loc}^{1,(p,\infty)}({\mathbf{R}}^n)$ for all $p \in (1, \infty).$ The claim follows immediately by invoking Theorem \[Holder 1/p’ continuity for u in W1pq n equal 1\] (ii) for $n=1$ and respectively Theorem \[Morrey embedding 1<n<p\] (iii) for $n>1.$ One can also see that the claim holds via a direct and easy computation.
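Indeed, the direct computation can be sketched as follows. With $\alpha = 1-\frac{n}{p} \in (0,1),$ the subadditivity of $t \mapsto t^{\alpha}$ on $[0,\infty)$ gives $a^{\alpha} = ((a-b)+b)^{\alpha} \le (a-b)^{\alpha}+b^{\alpha}$ for $a \ge b \ge 0,$ and hence, via the reverse triangle inequality, $$|u_p(x)-u_p(y)|=\left|\, |x|^{\alpha}-|y|^{\alpha} \,\right| \le \left|\, |x|-|y| \,\right|^{\alpha} \le |x-y|^{\alpha}$$ for all $x, y \in {\mathbf{R}}^n,$ so that $[u_p]_{0, 1-\frac{n}{p}; {\mathbf{R}}^n} \le 1.$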
[**[Acknowledgements.]{}**]{} I started writing this article towards the end of my stay at the University of Trento and I finished it during my stay at the University of Pisa. I would like to thank Professor Giuseppe Buttazzo for his helpful suggestions. I would also like to thank the referees for their comments and for helping me improve the paper.
[^1]: The author was partly supported by the Project PRIN 2010-11 “Calculus of Variations” and by the University of Pisa via grant PRA-2015-0017.
---
abstract: 'This document discusses the Information Theoretically Efficient Model (ITEM), a computerized system to generate an information theoretically efficient multinomial logistic regression from a general dataset. More specifically, this model is designed to succeed even where the logit transform of the dependent variable is not necessarily linear in the independent variables. This research shows that for large datasets, the resulting models can be produced on modern computers in a tractable amount of time. These models are also resistant to overfitting, and as such they tend to produce interpretable models with only a limited number of features, all of which are designed to be well behaved.'
author:
- |
Tyler Ward\
`ward.tyler@gmail.com`
title: 'The Information Theoretically Efficient Model (ITEM): A model for computerized analysis of large datasets'
---
Introduction
============
####
A good overview of the state of the art can be found in [@NELDER80]. That paper introduces the general problem ITEM was designed to solve, and very briefly surveys the various methods available in the literature that can solve this problem.
Statement of the Problem
------------------------
####
Consider the usual regression situation: we have data $ (\vec x_i, \vec y_i) $ for $ i = 1, 2, ..., N $. Here $ \vec x_i = (x_{i,1}, x_{i,2}, ...., x_{i,K})^T $ and $ \vec y_i = (y_{i,1}, y_{i,2}, ... y_{i,W})^T $ are the regressor and response variables. Suppose also that $ x_{i,k} \in \mathbb{R} $ and $ \vec y_i $ is a probability vector. In shorthand, $ X = \{\vec x_{i}\} $ and $ Y = \{ \vec y_{i} \} $.
####
A model is some function $ A_{XY}(\vec x_w) $ whose output is a prediction for $ \vec y_w $. In most common situations, the model $ A_{XY} $ must be deduced from the data $ X = \{\vec x_1, \vec x_2, \ldots, \vec x_N \} $ and $ Y = \{ \vec y_1, \vec y_2, \ldots, \vec y_N \} $; this process is referred to as fitting. The model is then used to make predictions for $ \vec x_w $ with $ w > N $, i.e. to predict $ \vec y $ for the data points where we only know $ \vec x $; this is referred to as projection or running. ITEM is one such model, but several others will also be discussed.
####
In addition to the requirements above, two more factors are added from an engineering standpoint:
- A large amount of data is available (at least 100,000 observations)
- The data has more than a few dimensions ($ \vec x_i $ has at least 5 dimensions)
For one-dimensional data, various solutions are available, but generally people will simply look at the results manually and make any adjustments that are needed. Beyond a few dimensions, this no longer works efficiently: it uses too much labor, and the projections onto the individual dimensions become less and less tractable. Building up a model from a set of $ N $ regressors requires roughly $ N^{2} $ operations, as for each regressor added, all other regressors need to be checked. With a moderate number of regressors (dozens to hundreds) this is too time-consuming for people, but could be done by a computer. The primary goal of ITEM is to perform this multidimensional analysis and fitting automatically. Since human guidance is limited, it is important for the model to be able to resist overfitting while simultaneously using the data efficiently to produce as accurate a model as possible. The dataset must be large, because it will generally require a lot of data to support good predictions across many dimensions.
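The quadratic regressor search described above can be sketched as a toy greedy forward-selection loop. The scoring function and the regressor names below are hypothetical placeholders, not ITEM's actual criterion:

```python
def forward_select(candidates, score, max_terms=None):
    """Greedy forward selection over a pool of candidate regressors.

    At each step every remaining candidate is scored and the best one is
    kept; with N candidates this is O(N^2) score evaluations in the
    worst case, matching the rough cost estimate in the text."""
    selected = []
    best = score(selected)
    while candidates:
        trials = [(score(selected + [c]), c) for c in candidates]
        trial_best, choice = max(trials)
        if trial_best <= best:
            break  # no remaining candidate improves the model
        best = trial_best
        selected.append(choice)
        candidates.remove(choice)
        if max_terms is not None and len(selected) >= max_terms:
            break
    return selected

# Toy score: reward a fixed "useful" subset, charge 0.1 per extra term.
useful = {"fico", "ltv", "age"}
def toy_score(terms):
    return sum(t in useful for t in terms) - 0.1 * len(terms)

print(forward_select(["fico", "ltv", "age", "noise1", "noise2"], toy_score))
```

In a real fitter the score would be a penalized likelihood or similar information criterion; any such criterion slots into the same loop.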
### The Analogy
####
Since this entire paper is devoted to producing good models, it is worth taking a brief detour to discuss what is meant by a good model. For this purpose, it is helpful to use a car analogy. For a person buying a car, there are several statistics that might be important.
1. acceleration
2. top speed
3. fuel efficiency
4. material efficiency
5. automated construction
####
Each of these factors helps drive the appeal (and cost) of a given car. For instance, a car could be made out of carbon fiber and titanium, thus getting better speed and efficiency, but by using these extremely expensive materials, the cost may be prohibitive. Similarly, a car could be constructed with extremely advanced manufacturing techniques, but if it cannot be mass produced, then again the car would be too expensive for typical use. Among these five characteristics, often improving one will make others worse.
####
This tradeoff will define a sort of efficient frontier, a 5 dimensional surface enclosing some volume. Each point on this frontier represents some tradeoff among the 5 qualities, but there is no point reachable with this technology that is better in all 5 categories. One definition of the quality of automotive technology is to simply say that better technology allows for a larger frontier, one that completely encloses the frontier for an inferior set of technology. A model has a similar set of factors that together help to determine its quality.
1. fitting computational efficiency
2. running computational efficiency
3. data efficiency
4. parameter efficiency
5. automated construction
####
A good model would be computationally inexpensive to fit and to run. Just like in the case of a car, these are nearly the same requirement. A car with good acceleration will typically have a high top speed; similarly, a model that is efficient to fit will typically (though not always) also be efficient to run. These two factors will be grouped together as computational efficiency.
####
Data efficiency and parameter efficiency are similar concepts as well, so they will be grouped together as information theoretic efficiency. Data efficiency means essentially that statistically significant features of the data will be included in the model. We require that a feature be picked up by the model without excessive amounts of data: it will be added as soon as enough data is present for it to be significant. The concept of parameter efficiency means that the model will not pick up extra features beyond those indicated by the data. Essentially, this is a requirement that the model use as few parameters as possible, while simultaneously fitting the data as well as possible. Obviously there is some tradeoff between these two features, but together they define a part of the efficient frontier for models. We would like the model to be on or near this frontier, and for this frontier to itself pass close to the true distribution of the data.
####
The last requirement is simply that the model be constructed automatically, with limited human involvement. The whole purpose of this exercise is to produce a good model without expending a large amount of human effort to do so.
####
It is not possible for a model to excel in all five of these areas. Some models are better in one area or another, and for most distributions of data there is a definite limit to how high the information theoretic efficiency of a model can be, just as a car’s fuel efficiency runs into thermodynamic limits. The goal of the ITEM system is to construct models that are very good (though by no means ideal) in all five of these areas.
### The Example Problem {#example}
####
For the purposes of this paper, a residential mortgage model will be considered. In the United States, a residential mortgage is a loan made by a lender (typically a bank) to a borrower (typically an individual) for the purpose of buying residential real estate, typically a house, but sometimes a condo or other living space. When making such a loan, it is important for the lender to be able to project the probable future behavior of the borrower. In order to do that, the lender can examine a large dataset of historical loan behavior, and attempt to build a model based on this historical record.
####
One such dataset is the Freddie Mac loan level dataset [@FREDDIE], which will be used to fit this example model. This data set has approximately 600 million observations, so it is large enough to show an example of a real model. Other similar data sets are also available, most of which contain one to several billion observations. Since this is real data, projections based on it have actual economic value. For instance, projections of future behavior determine the interest rate offered to the borrower, and help the lender determine how to hedge efficiently.
####
The dataset is a collection of loan-month observations. By this, it is meant that each loan in the dataset has one observation per month. This observation is little more than the knowledge of whether or not the borrower made his payment that month. These observations will be considered independently, ignoring the fact that some of them are related to each other through the underlying loan. The logic here is that if the model is accurate and the dataset is comprehensive, then the loan-months will be independent conditional on the data presented. Another way of saying this is that the model residuals will be independent. This is an assumption that can be relaxed, but that is outside the scope of this paper.
####
Each of these loan-months has several regressors, including FICO, several flags related to the type of mortgage, loan age, Debt-To-Income ratios, refinance incentive, and others. Each loan-month also has a status. This status is a combination of the number of payments the borrower is in arrears, as well as some additional information about potential foreclosure proceedings and similar items. For the purposes of this paper, only three statuses will be considered.
- Status “C”: The borrower is “current”, and is not behind on his payments
- Status “P”: The borrower has repaid the entire loan, the loan no longer exists
- Status “3”: The borrower is one payment behind schedule (“30 days delinquent”)
####
Now consider the status that the loan will have next month. If the loan-month is historical data, this value may be known; otherwise it is necessary to project it. The purpose of the example model is to predict next month’s status for any loan-month in which we know this month’s status. In this way, a Markov Chain can be built up, progressively projecting the status further and further into the future.
####
In each month, the borrower transitions from his current delinquency state to a new state depending on the number of payments made. For a borrower who is Current (status “C”), he can make 1 payment (remaining “C”), 0 payments (to become “3”), or all the payments (to become “P”). Similarly, if a borrower is already in status “3”, he could become more delinquent by missing additional payments and so on. A separate model can be fit for each loan status, modeling the transitions available to loan-months in that status, so for the purpose of this paper it is enough to consider only the status “C”.
####
Therefore, there are three transitions available, in shorthand written thus:
- $ C \rightarrow C $ Going from Current to Current
- $ C \rightarrow P $ Going from Current to Prepaid
- $ C \rightarrow 3 $ Going from Current to 30 days delinquent
####
The transition $ C \rightarrow C $ is more common than the others by several orders of magnitude, but most of the economic behavior of these loans is related to the $ C \rightarrow P $ and $ C \rightarrow 3 $ transitions.
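The month-by-month Markov Chain projection described above can be illustrated with a minimal sketch. The transition probabilities here are made-up placeholders, and "P" and "3" are treated as absorbing only to keep the example small (a real model also fits transitions out of status "3"):

```python
# Loan statuses: Current, Prepaid, 30 days delinquent.
states = ["C", "P", "3"]

# Hypothetical one-month transition probabilities for a status-"C" loan.
# In practice these would come from the fitted model, per loan-month.
transition = {
    "C": {"C": 0.97, "P": 0.02, "3": 0.01},
    "P": {"P": 1.0},   # absorbing in this sketch
    "3": {"3": 1.0},   # absorbing in this sketch
}

def project(dist, months):
    """Push a status distribution forward month by month."""
    for _ in range(months):
        nxt = {s: 0.0 for s in states}
        for s, p in dist.items():
            for t, q in transition[s].items():
                nxt[t] += p * q
        dist = nxt
    return dist

# Distribution after one year for a loan that starts Current.
print(project({"C": 1.0, "P": 0.0, "3": 0.0}, 12))
```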
####
Long before large datasets were available, mortgages were modeled using pool level approaches, which simply averaged a group of loans together and then applied the model. Some of these models are still around; however, they are quickly falling out of favor. The essential issue with the pool level approach is that it is subject to numerous problems related to this averaging. For instance, there is a world of difference between two pools with an average FICO of 700: one where every loan has exactly a 700, and another where half the loans have 600 and the other half have 800. In many cases, the pool behavior is driven by just a handful of loans, and the information about this handful of loans is destroyed by the averaging process.
### The Example Data
loanId month FICO MTMLTV age incentive isOwner startStatus endStatus
-------- --------- ------ -------- ----- ----------- --------- ------------- -----------
1 2014-01 750 75 14 -0.3 1 C C
1 2014-02 750 75 15 -0.35 1 C P
2 2013-01 700 65 41 1.21 1 C C
2 2013-02 700 64 42 1.22 1 C C
2 2013-03 700 63 43 1.25 1 C C
2 2013-04 700 62 44 1.18 1 C C
2 2013-05 700 61 45 1.22 1 C C
: Example Mortgage Data
####
The fields presented here are a handful of the more important factors driving mortgage behavior. A complete mortgage model would include dozens of effects, but it is not helpful to list them all out here.
- loanId: A unique ID identifying the loan.
- month: The month of the observation.
- FICO: The borrower’s FICO score at origination.
- MTMLTV: The LTV of the loan modified for house price, amortization and curtailment.
- age: The age of the loan, in months.
- incentive: The interest rate improvement the borrower would expect from refinancing.
- isOwner: Is this home occupied by its owner.
- startStatus: What was the status of the loan at month start.
- endStatus: What was the status of the loan at month end.
####
Each row in the table above represents one observation. A given loan would be represented by many observations. Some of the columns (such as FICO) that don’t change with time will depend only on the loanId; others will depend on loanId and month. Computing several of these columns could be a very involved process. For instance, MTMLTV requires knowing the value of the home as well as any amortization or curtailment undertaken by the borrower. Incentive requires an estimate of what rate the borrower would be offered if he decided to refi in the given month. When fitting on historical data, these calculations are often much easier, as the historical data is known, so fewer sub-models are needed to make forecasts.
####
For the purposes of this paper, it is not important to understand the specifics behind these regressors, other than to know a few salient facts.
1. There are a large number of them.
2. Some of these regressors require substantial calculation to derive.
3. Some of these regressors are highly path dependent, depending on previous status transitions.
####
Efficiency Considerations
=========================
####
For the space of models in question, efficiency will be very important. Two types of efficiency are considered here.
- Computational Efficiency
- Information Theoretic Efficiency
####
A good model in this space is one that very closely approximates the actual phenomenon under consideration using minimal human intervention and a reasonable amount of computing resources.
Computational Efficiency
------------------------
####
Within the world of mortgage modeling, the computational space is huge. There are roughly 60 million mortgages in the country. Mortgage models typically require path-dependent effects. As just one example, one of the best predictors for whether or not a borrower will miss a payment is the number of months since the last missed payment. Regressors such as this ensure that while each loan-month transition probability may have a closed form, there is no (known) closed form for the distribution of loan status at any time more than 1 month in the future. Therefore, if loan level accuracy is desired, then numerous (e.g. 1000+) simulations will be needed. A typical loan (on a typical path) will require an average of approximately 100 months of projection before liquidation, since the typical mortgage is refinanced approximately once every 5-10 years. In addition, the financial firms that use these models typically need to examine a significant number (e.g. 20) of interest rate and house price scenarios. So the number of loan-months we would like to compute each day is:
- $ n_{loan} = 6*10^7 $
- $ n_{path} = 10^{3} $
- $ n_{month} = 10^2 $
- $ n_{scenario} = 2*10^{1} $
####
$$n_{loanMonth} = n_{loan} * n_{path} * n_{month} * n_{scenario} = 1.2*10^{14}$$
####
Given that a single call to $ e^{x} $ takes on the order of 100 CPU clock cycles, assume that a model takes at least 10,000 cycles per loan-month. This implies that computing the whole universe would take at least $ 1.2*10^{18} $ clock cycles. At a clock speed of a few GHz, this is roughly 500 million CPU seconds, or well over 100,000 CPU hours. Clearly, running on a compute grid will be necessary to perform any meaningful fraction of this computation.
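This back-of-the-envelope arithmetic is easy to check directly. The $10^3$ paths match the "1000+" simulations mentioned earlier, and the 2.4 GHz clock rate used to convert cycles to hours is an assumption of this sketch:

```python
# Back-of-the-envelope check of the computational budget above.
# The 2.4 GHz clock rate is an assumption made for this sketch.
n_loan = 6e7          # mortgages in the country
n_path = 1e3          # simulation paths per loan ("1000+")
n_month = 1e2         # projected months per path
n_scenario = 2e1      # interest rate / house price scenarios
cycles_per_lm = 1e4   # assumed model cost per loan-month
clock_hz = 2.4e9

loan_months = n_loan * n_path * n_month * n_scenario
cycles = loan_months * cycles_per_lm
cpu_hours = cycles / clock_hz / 3600

print(f"{loan_months:.1e} loan-months, {cycles:.1e} cycles, "
      f"{cpu_hours:,.0f} CPU hours")
```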
####
Throughout this paper, it will be important to consider the computational cost of the functions being used. Generally, only entire (in the sense of complex analysis) functions should be used, as they are highly suitable for numerical optimization. Being entire, they are defined everywhere (so there is no “edge” for the optimizer to hit), and they have infinitely many continuous derivatives, ensuring that they are smooth enough to be optimized easily. Of these functions, polynomials are typically cheaper to compute than other functions in this family. Unfortunately, polynomials are often unsuitable due to issues with oscillations, closure, and unboundedness. Of the non-polynomial entire functions, the exponential function is unusually inexpensive, and therefore will be used in preference to other options whenever it is appropriate.
####
It is these efficiency considerations, as much as the related explainability considerations, that lead the models chosen in this field to be overwhelmingly parametric.
Information Theoretic Efficiency
--------------------------------
####
Within this space, the data itself is not believed to be generated by any exact closed form formula. Therefore, it is important that the family of functions making up the model be able to approximate the physical phenomenon in question very closely. Appendix II has some definitions and discussion of these factors.
####
The important point to take away from Appendix II is that most models will not converge to the exact distribution of the data. In this problem space, the dataset is large, so if the model fitting itself is efficient in the sense that the variance of the parameters is small, then the error in the model will come to be dominated by the mismatch between the model form itself, and the distribution of the data. The primary purpose of ITEM is to eliminate some of this mismatch, and thus produce a more accurate model.
####
If a distribution is the limit of a sequence of models, it is clear that in realistic cases it is the limit of such a sequence as the number of parameters goes to infinity. If it were a limit of a sequence of functions $ g(\vec x | \theta) $ with a fixed set of parameters $ \theta $, then it would be equal to $ g(\vec x | \theta_{MLE}) $, which is by assumption not the case. So it is important that this sequence converge quickly, without including an unnecessarily large number of parameters. In future work, this question could be explored more completely in the context of Kolmogorov Complexity, but for the purposes of this paper it is enough to have heuristic means by which to prevent the unnecessary waste of parameters.
Model Choice
============
####
There are several broad families of models that could be applied to this space. A sample is explored below.
1. Linear Regression
2. Nonlinear Regression
3. Kernel Smoothing
4. Neural Network
5. Random Forest
####
In this problem space, the predicted quantity in question is a probability. Therefore, linear regression is totally unsuitable, as it can give rise to probabilities that are greater than 1.0 or less than 0.0. Even if this were to be bounded, it doesn’t have the expected saturation behavior. We would expect that whatever function we choose would steadily approach some level of probability as the regressors become more or less favorable.
####
Computational cost and engineering considerations eliminate kernel smoothing and neural networks from consideration. Neural networks themselves would blow far beyond the 10,000 CPU cycle computational budget envisioned for this model. In addition, there are issues with explainability. Kernel smoothing suffers from the correlation among loans. Due to macroeconomic variables, the entire loan population is moving with time, so most loans in the future will not be very near to any significant number of loans from the past. There are also issues with tiling this dataset properly to be able to run it on a grid, and with the fairly high dimensionality of the data itself. The space is simply too large (and too mobile) for kernel smoothing to work well.
####
Lastly, the random forest could work, but it has issues with explainability and also parameter efficiency. Simply put, the random forest will use far too many parameters, and run the risk of very dangerous overfitting. With regard to explainability: in the mortgage world, people talk about the “credit box”, which is, in typical parameter space, quite literally a box. For instance, (LTV $< $ 80, Fico $> $ 720) is a reasonable credit box. Even defining structures such as this becomes very hard with a random forest approach, requiring extensive simulation. For something like a multinomial logistic model, it is pretty easy (though not trivial) to define level sets of the function. These level sets may be high dimensional, so some projection is required in order to get anything that is very explainable. In the absence of the need for very high precision, a credit box approach can serve as a simple substitute.
####
For all of these reasons, modern mortgage models are typically parametric.
Parametric Models
-----------------
Within mortgage modeling, there are two widely used families of parametric models.
1. Structural models
2. Multinomial regressions
####
There are advantages and disadvantages of each which will be discussed in turn.
### Structural Models
####
For a good overview of a structural model, see [@JPM]. When modelers talk of structural models, they typically mean a model that is not the result of a maximum likelihood optimization. Occasionally (as in [@JPM]), a structural model is more explicitly defined as a hedonic model (see [@HED]) driven by known (e.g. economic) factors. Usually a true structural model is both of these things, and will take the form of a collection of unrelated terms that are estimated through manual examination of the dataset, or by drawing some analogy between the data and relevant policy or law. The problem with this approach is that it is hugely expensive in terms of man-hours, tends to have explainability issues (i.e. why did you add this rather than that), and almost always results in parameters that are far from optimal. It doesn’t help that only a handful of effects can be discovered through this manual process, and people are not good at visualizing data in higher dimensions.
####
Structural models are effective when the data is very limited, and when there is insufficient computational power to fit an accurate model on a large dataset. Historically, this was very much the case, which is why older mortgage models tend to be structural. However, when the dataset size is very large and computational resources are readily available for fitting, structural models tend to be less efficient than nonlinear regressions both in terms of information theory and in terms of computational performance.
####
This inefficiency is caused by several factors. The first is that (by our definition here), in a structural model, the parameters are provided by hand and are not the result of an MLE estimation. It is certainly possible that $ \theta_{structural} = \theta_{MLE} $, but in practice it is unlikely. For this reason, the model will almost always be biased, and the variance of the parameter estimates will be needlessly high. In addition, it is possible that the parameters are added in a parsimonious fashion so as to not overparameterize the model, but again, without carefully searching the problem space, this is unlikely to happen of its own accord in practice. In addition, it is certainly possible that the model could be constructed from entire functions chosen for efficiency, but again, this is not common in practice. As a result, these models tend to be inaccurate, bloated with excess parameters, and computationally inefficient.
### Multinomial Regression
####
A nonlinear regression needs to properly represent the problem space. Primarily, this means that it needs to produce a probability vector for any input. A broad class of functions that fit this description is the class of multinomial cumulative distribution functions. This family of functions takes a vector of values (often referred to as “power scores” or “propensity scores”) as input, and produces a probability vector as output. Common examples of this family are logistic (associated with the logit function), and the gaussian CDF (associated with the probit function).
####
For these models, if there is a strong belief that the data actually follows one of these distributions, then this should certainly be considered. For most datasets, and particularly for mortgages, there is no reason to believe that one of these distributions is a more natural fit than the other. For this reason, the logit model is typically used because it is computationally cheaper than the probit model.
Multinomial Logistic Regression
-------------------------------
Since modern mortgage modeling is typically based on multinomial logistic regression, it is helpful to review the technique here. One can see an overview of the multinomial logistic regression model in [@CZEPIEL].
####
Consider first the multinomial logistic function.
$$L(\vec v) = \frac{ e^{\vec v} }{| e^{\vec v}|}$$
####
It is understood in the definition above that the exponential of a vector is taken element-wise to produce a vector, and that $ | \cdot | $ denotes the sum of the elements, so the output is a probability vector. The values of $ \vec v $ are known as power scores. If we set the probability vector $ \vec y = L(\vec v) $, then this equation is underdetermined since the values of $ \vec y $ sum to 1. The equivalent invariant for $ \vec v $ is that subtracting the same value from all elements of $ \vec v $ leaves the values of $ \vec y $ unchanged. Traditionally, the largest power score is subtracted from all the power scores, guaranteeing one power score is $ 0 $ and the rest are negative. This improves the numerical stability of the logistic function itself and ensures that overflow results in the correct limiting behavior rather than NaN. With that convention in place, $ L(\vec v) $ defines a bijection, so it has an inverse.
$$L^{-1}(\vec y) = \ln(\frac{\vec y}{y_{0}})$$
Where here $ y_{0} $ is the probability corresponding to the $ 0 $ element of $ \vec v $. These power scores are then derived from the regressors using a simple dot product with a parameter matrix $ \beta $.
$$\vec v = \beta \cdot \vec x$$
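As a sketch (not a production implementation), the stabilized logistic map, its inverse, and the power score computation might look like this; the parameter matrix and regressor values are arbitrary illustrative numbers.

```python
import numpy as np

def logistic(v):
    """Multinomial logistic L(v) = exp(v) / |exp(v)|.  The largest power
    score is subtracted first, so one score is 0 and the rest are negative,
    which keeps exp() from overflowing."""
    v = np.asarray(v, dtype=float)
    e = np.exp(v - v.max())
    return e / e.sum()

def logistic_inv(y, base=0):
    """Inverse map: power scores measured relative to the base element."""
    y = np.asarray(y, dtype=float)
    return np.log(y / y[base])

beta = np.array([[0.5, -1.0],       # hypothetical parameter matrix
                 [0.2,  0.3],
                 [-0.7, 0.1]])
x = np.array([1.0, 2.0])            # hypothetical regressors
v = beta @ x                        # power scores v = beta . x
y = logistic(v)
print(np.isclose(y.sum(), 1.0))               # a probability vector -> True
print(np.allclose(logistic(v + 5.0), y))      # shift invariance -> True
print(np.allclose(logistic(logistic_inv(y)), y))  # round trip -> True
```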
Modeling the data in this way makes the following assumptions.
- $ \vec y $ is linear with respect to $ \vec x $ in logit space.
- The terms of $ \vec x $ interact through summation in logit space.
####
These limitations are critically important, in that they are typically not satisfied by any real world phenomenon. This problem is resolved by introducing an arbitrary vector valued function $ F(\vec x) $ that transforms the values of $ \vec x $ into new regressors which are linear with respect to $ \vec y $ in logit space and do interact by summation.
####
Now the model looks like this.
$$L(\beta \cdot F(\vec x_i)) = \frac{ e^{\beta \cdot F(\vec x_i)} }{| e^{\beta \cdot F(\vec x_i)}|}$$
This equation is now perfectly general. Any probability vector can be written in such a form provided $ F(\vec x) $ can be discovered, and also that one is willing to allow $ \beta $ to have entries in the generalized reals $ ( \mathbb{R} \cup \{+\infty, -\infty \} ) $.
####
This function has a (relatively) simple closed form derivative. Start by computing the derivative with respect to the power score.
####
$$\frac{d}{ds_{i}} L(\vec s)_{w} = (\delta_{i, w} - L(\vec s)_{i}) L(\vec s)_{w}$$
####
Now if $ s_{i} = \beta_{i} \cdot F(\vec x) $ then the derivative with respect to $ \beta_{i, k} $ follows.
####
$$\frac{d}{d \beta_{i,k}} L(\beta \cdot F(\vec x))_{w} = (\delta_{i, w} - L(\beta \cdot F(\vec x))_{i}) L(\beta \cdot F(\vec x))_{w} F(\vec x)_{k}$$
####
Similarly, if $ F(\vec x) $ is parameterized by $ \vec \alpha $, then
####
$$\frac{d}{d \alpha_{i}} L(\beta \cdot F_{\vec \alpha}(\vec x))_{w} = \sum_{j} (\delta_{j, w} - L(\beta \cdot F_{\vec \alpha}(\vec x))_{j}) L(\beta \cdot F_{\vec \alpha}(\vec x))_{w} \left(\beta_{j} \cdot \frac{d}{d \alpha_{i}}F_{\vec \alpha}(\vec x)\right)$$
####
Likewise, the derivative of the log likelihood can be computed.
####
$$\frac{d}{ds_{i}} \left[-\vec y \cdot \ln(L(\vec s))\right] = \sum_{w} y_{w} (L(\vec s)_{i} - \delta_{i, w})$$
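This gradient is easy to sanity-check numerically. The sketch below (assuming $ \vec y $ sums to 1, so the gradient simplifies to $ L(\vec s)_{i} - y_{i} $) compares the closed form against a central finite difference; the score and probability values are arbitrary.

```python
import numpy as np

def logistic(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def nll(s, y):
    """Negative log likelihood -y . ln(L(s))."""
    return -np.dot(y, np.log(logistic(s)))

def nll_grad(s, y):
    """Closed form: d/ds_i = sum_w y_w (L_i - delta_iw) = L_i - y_i
    when y sums to 1."""
    return logistic(s) * y.sum() - y

s = np.array([0.3, -0.2, 1.1])
y = np.array([0.2, 0.5, 0.3])
eps = 1e-6
fd = np.array([(nll(s + eps*np.eye(3)[i], y) - nll(s - eps*np.eye(3)[i], y)) / (2*eps)
               for i in range(3)])
print(np.allclose(fd, nll_grad(s, y), atol=1e-6))  # True
```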
In the mortgage modeling world (returning to the example), the vector $ \vec x_{i} $ is the data for the i’th loan-month, and the vector $ \vec y_{i} $ represents the probability that the loan will be in each of the available loan statuses in the next month. The parameters $ \beta $ depend on the starting state; for this example, assume that the model under consideration computes the $ C \rightarrow * $ transitions. The loan-months are then simply partitioned by starting state, and the appropriate value of $ \beta $ used for each.
####
The difficulty now is reduced to finding a suitable value of $ F $. The model would be able to converge to any distribution of $ \vec y $ if the corresponding value of $ F $ could be found. If no such value of $ F $ can be found, then the model will converge to a distribution that is a non-infinitesimal distance (in KL divergence, for instance) from the true distribution of $ \vec y $. Given the function $ F $, the remaining fitting of the model is an entirely mechanical maximum likelihood estimation.
Optimality of Logistic Regression
---------------------------------
####
The logistic regression model is extremely efficient for several reasons. First of all, it is critical that only analytic functions be considered. Analytic functions are always $ C^{\infty} $, and thus it is always possible for an automatic nonlinear optimizer to solve for its parameters. In addition, analytic functions are closed under composition, so it is easy to build analytic functions using simpler analytic functions as building blocks. It should be possible to make a good model using $ C^{\infty} $ functions that are not analytic, but such functions are rare and often difficult to construct. As a practical matter, all reasonable functions that are $ C^{\infty} $ are also analytic, so this is a distinction without a meaningful difference.
####
Multinomial logistic regression is analytic, and it also has computational efficiency that is in some sense optimal. See Appendix I for more details. In brief, any analytic CDF will be based on one or more analytic functions that are not polynomials. The computationally cheapest such function (subject to certain conditions) is the exponential function. Therefore a simple formula incorporating exponentials will be more computationally efficient than any other option. The logistic function is just such a function. In addition to the above mentioned traits, the logistic function has arbitrarily many simple closed form derivatives. This greatly accelerates the process of fitting model parameters and reduces the associated errors.
####
Lastly, if the proper value for $ F $ can be discovered, then the logistic regression will converge arbitrarily closely to any distribution. In that sense, it is completely general. In addition, if the function $ F $ can be recovered efficiently from the data at hand, then this regression will also be information theoretically efficient in that it will produce parameters with low variance and converge quickly towards the actual data distribution, thus producing a model with small residuals. The ITEM system attempts to recover this value of $ F $ automatically.
Splined Multinomial Logistic Regression
---------------------------------------
####
We now return to the mortgage modeling problem described in Section \[example\].
####
For the multinomial logistic models, the problem usually begins with variable selection. Here, expert knowledge is important. Some of the regressors are fairly obvious; others reflect real economic motives and drives, but are only apparent when examining certain interactions between the dataset and historical data from other sources. In any case, a candidate list of regressors is drawn up. It is helpful to eliminate regressors that do not have much impact, and often modelers do so using p-tests and Lasso style regularization to come to an initial fit. Initially, the function $ F $ is simply taken to be the identity function, or perhaps some simple splines chosen a priori.
####
Once this is done, the next step is to look at the residuals. Observing a typical set of residuals, it will generally be the case that there is some smooth deviation from the projection. It appears that the projection is following one curve, but the actual data is following another. The mean of the two (for a suitable definition of mean) must be the same, but the curves do not in general coincide. This situation comes about when the response to a variable is not linear in logit space.
####
The solution to this problem is generally to construct some splines, the linear step spline defined below is typical.
$$s_{a, b}(x) = \min\left(1, \max\left(0, \frac{x-a}{b-a}\right)\right)$$
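A minimal implementation of this spline; note the flat regions below $ a $ and above $ b $, where the derivative is identically zero (this becomes important when attempting to optimize the knot parameters).

```python
import numpy as np

def step_spline(x, a, b):
    """Linear step spline s_{a,b}(x) = min(1, max(0, (x-a)/(b-a))).
    Flat at 0 below a, linear between a and b, flat at 1 above b."""
    return np.minimum(1.0, np.maximum(0.0, (x - a) / (b - a)))

x = np.array([-5.0, 0.0, 0.5, 1.0, 10.0])
print(step_spline(x, a=0.0, b=1.0))  # 0 below a, 0.5 at the midpoint, 1 above b
```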
####
Then the model would be recalculated using this spline instead of (or perhaps in addition to) the original regressor.
$$L( ... ) = \frac{e^{\beta_{0} + \beta_{1} \cdot s_{a, b}(x_{1}) + ...}}{| ... |}$$
####
By adding several of these splines, the response curve for $ x_{1} $ could slowly be transformed into any particular curve, making the approximation better and better. In addition, this spline is bounded, so it naturally defends the model against extreme outliers.
####
This spline is an example of a (non-analytic) CDF. Unfortunately, because this curve is not analytic with respect to $ a $ and $ b $, it is usually not possible to run an optimizer over it in order to find the optimal values of these parameters. Therefore, the values of $ a $ and $ b $ are typically chosen manually through inspection of the data and model residuals. This is extremely expensive in terms of man-hours, and results in splines that are almost never truly optimal. In addition, when adding splines, it is not typically feasible to determine which regressor most needs a spline, so often the splines actually included (even with optimal parameters) are less helpful than other splines which were not included. Using a sequence of these splines, the model can approach the true distribution arbitrarily closely, but it cannot do so without almost unlimited manual labor.
####
Often, modeling groups attempt to avoid this unlimited investment of manual effort by automatically fitting splines. Typically, this process has failed before it even begins. Step splines such as this cannot be fit automatically (with typical algorithms at least) because they are not analytic in their parameters. Often, this gives rise to an attempt to use cubic splines (which suffer from the same problem) or polynomials.
####
Polynomials deserve special mention, because they are analytic. However, just the same, they will not work. First of all, polynomials are never bounded, so even a few outliers will completely destroy the fitting process. Secondly, polynomials of degree $ d $ are closed under addition: adding more polynomials of the same or lesser degree simply produces another polynomial of the same degree. Therefore, in order to add “more” curves, it is necessary to instead add polynomials of progressively higher and higher degree, which brings us to the next issue. Polynomials are strongly oscillatory, having all manner of harmonics and “ringing” issues which rapidly make the model nonsensical. Polynomial ringing is the wild oscillatory behavior caused by small moves in two points that are very close together and that the polynomial is constrained to pass through.
####
When this work is tied to the notion that these response curves need to be splines or polynomials, there is never any real hope of success.
####
With an algorithm that can select proper curves (not necessarily splines) automatically, the model would be much improved. This is the goal of ITEM.
An Introduction to ITEM
=======================
The ITEM model builds on the splined multinomial logistic regression by providing a means to fit splines automatically. To get good results, the family of curves chosen must meet the following minimum criteria.
1. The family must be defined by a few (e.g. 2) parameters
2. It must be bounded and entire in all inputs (parameters as well as $ x $).
3. It must be very smooth and have at most one local maximum internal to the dataset (i.e. no harmonics)
4. The family must form a complete basis. Ideally, realistic curves are well approximated by a few members
5. It must be efficient to calculate
6. It must have an efficient closed form derivative.
####
The first requirement is necessary for information-theoretic reasons. If the model is to use the AIC to determine which curves to include, this will be greatly undermined if the curves require a huge number of parameters. In addition, the optimizers that will make these curves will have a much harder time finding good results.
####
The second requirement (together with the first) is basically the same as saying that an optimizer can be used to find the parameters. Recall that a function is entire if it is analytic and defined at every point of the complex plane. For this purpose, we will consider a curve to be entire if it is defined on the whole real line. By Liouville’s Theorem, no nonconstant entire function can be bounded, but one can be bounded on the real line, which is the important factor here.
####
The analytic requirement ensures that the curve has well defined derivatives. The boundedness and requirement that the function be entire protect against computational problems when an unrealistic starting point is given to the optimizer. The requirement that it be entire also ensures that there are no large zones where the derivative is zero (such as in the spline case). For instance, when optimizing a spline or similar curve, if both $ a $ and $ b $ are chosen to be below the whole dataset, the optimizer will find all progress impossible because the derivative is identically zero. Choosing curves bounded at 1 in particular helps to make the associated values of $ \beta $ easier to interpret.
####
In addition to the computational utility of using analytic functions, there is a physical justification as well. In most realistic datasets, there will be some degree of measurement error. If these measurement errors are (for instance) gaussian, then any properly specified model must be composed of only analytic response functions. We know this because the model response to the data must be a convolution between the error function and the actual functional form of the underlying phenomenon. Analytic functions are contagious over convolutions: convolving an analytic function with any other reasonable (e.g. bounded with finitely many discontinuities) function produces an analytic function. Therefore, any model operating on a dataset with analytic (again, typically gaussian) measurement errors is automatically misspecified if it includes any non-analytic response functions. This is one reason to avoid splines: the sharp turns they make are simply not possible if the visible data has any noise relative to the state driving the modeled phenomenon.
####
The third requirement ensures that results for similar inputs will be similar, a basic requirement for any model.
####
The fourth requirement ensures that any curve can be suitably approximated by a sum of enough curves from this family. This is necessary if the model is to actually converge to the true distribution for large dataset sizes.
####
The fifth requirement is a basic requirement that the model shouldn’t be wasting money, once all the other requirements are satisfied.
####
Most of the curves that modelers try to use here fail at least one of these tests. Requirement 1 eliminates kernel smoothing, requirement 2 eliminates splines, and requirements 3 and 4 together with 1 eliminate polynomials.
ITEM Curve Families
-------------------
####
Notice that for any cumulative distribution function $ CDF(x) $ that is analytic in $ x $, the function $ CDF(a^{2}(x-b)) $ will satisfy all requirements, provided it is efficient to calculate. The logistic function is an efficient CDF with closed form derivatives, which makes it much easier to handle within the optimizer, so it is a natural choice. The value of $ a^{2} $ is used so that this CDF is always upwards sloping, hence the associated $ \beta $ has the expected sign: positive for positive correlation, negative for negative correlation.
####
The formula is then
$$C_{a, b}(x) = \frac{1}{1 + e^{-a^{2}(x-b)}}$$
The derivatives are
$$\frac{d}{da} C_{a, b}(x) = 2a(x-b)\,C_{a, b}(x)\,(1 - C_{a, b}(x))$$
and
$$\frac{d}{db} C_{a, b}(x) = -a^{2}\,C_{a, b}(x)\,(1 - C_{a, b}(x))$$
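These closed forms can be checked numerically. A sketch of the curve and its derivatives, verified against central finite differences (the point and parameter values are arbitrary):

```python
import numpy as np

def C(x, a, b):
    """ITEM logistic curve: always upward sloping because of a**2."""
    return 1.0 / (1.0 + np.exp(-a*a*(x - b)))

def dC_da(x, a, b):
    c = C(x, a, b)
    return 2*a*(x - b) * c * (1 - c)

def dC_db(x, a, b):
    c = C(x, a, b)
    return -a*a * c * (1 - c)

x, a, b, eps = 1.7, 0.8, 0.3, 1e-6
print(np.isclose((C(x, a+eps, b) - C(x, a-eps, b)) / (2*eps), dC_da(x, a, b)))  # True
print(np.isclose((C(x, a, b+eps) - C(x, a, b-eps)) / (2*eps), dC_db(x, a, b)))  # True
```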
####
Unfortunately, this function family is not enough to achieve good results. The problem comes down to the procedure for doing the fit itself. In the case of a regressor with a response that is strongly peaked within the distribution (e.g. it looks like a gaussian), no single CDF can improve the results significantly. Two of them taken together could approximate a gaussian, but if the fitting procedure is adding curves one at a time, this does not help. Therefore, it is necessary to include a spike-like family of curves as well. Fairly obviously, the spike family should be actual gaussians.
$$G_{a, b}(x) = e^{\frac{-(x-a)^{2}}{2b^{2}}}$$
Its derivatives are
$$\frac{d}{da} G_{a, b}(x) = \frac{x-a}{b^{2}} G_{a,b}(x)$$
and
$$\frac{d}{db} G_{a, b}(x) = \frac{(x-a)^{2}}{b^{3}} G_{a,b}(x)$$
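As with the logistic family, a quick finite-difference check confirms the gaussian derivatives (again with arbitrary point and parameter values):

```python
import numpy as np

def G(x, a, b):
    """Gaussian spike curve, centered at a with width b."""
    return np.exp(-(x - a)**2 / (2*b*b))

def dG_da(x, a, b):
    return (x - a) / (b*b) * G(x, a, b)

def dG_db(x, a, b):
    return (x - a)**2 / b**3 * G(x, a, b)

x, a, b, eps = 0.4, 1.1, 0.6, 1e-6
print(np.isclose((G(x, a+eps, b) - G(x, a-eps, b)) / (2*eps), dG_da(x, a, b)))  # True
print(np.isclose((G(x, a, b+eps) - G(x, a, b-eps)) / (2*eps), dG_db(x, a, b)))  # True
```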
####
Given that the gaussian family is used, if the centrality parameter of the gaussian is set outside of the bulk of the dataset, the result would be a bounded monotonically increasing function within the dataset. It then stands to reason that this might eliminate the need for the Logistic family. Unfortunately, this is not the case. The Gaussian family will tend to have a very large derivative near the edge of the dataset, whereas many features of datasets like the one in the example problem saturate rapidly well within the range covered by the data, and have a much more classic S-curve shape. Therefore the Gaussian family itself will yield poor results, and it is worth keeping the Logistic family as well.
With these families in hand, it is then necessary to fit them.
Comparison to Generalized Additive Models
-----------------------------------------
####
ITEM may be considered to be a special case of a Generalized Additive Model [@GAM], albeit one that is parametric. Since the nature of the problem has eliminated from consideration all nonparametric models, a proper nonparametric GAM could not be considered for this problem space. The issue is that in order to make any predictions, a true GAM would require the whole dataset resident in memory, which is not realistic for multi-TB data sets.
####
A slight modification of a GAM could be used, in that the data for each parameter could be bucketed and averaged, then these averages connected with (for instance) a cubic spline. This would have the advantage of being comparatively quick to fit and also relatively efficient to use. The disadvantage is that if the bucket size is too small, there will be a lot of oscillation, but a bucket size too large will lose a lot of detail. An algorithm similar to that used by ITEM to determine the number of buckets using an information criterion could be used, but that would considerably slow the fitting. There are also many potential issues related to the fact that for typical problems, highly non-uniform buckets are likely to be called for, greatly expanding the needed number of parameters. Some of these parameters would then be knot points, which would potentially have numerous issues due to unanalytic results when the knots get close together. It remains to be seen whether a bucketing approach such as this could achieve information theoretic efficiency on par with the ITEM curves themselves. Where there is a strong suspicion that the data follows an S-curve, for instance, ITEM would be expected to have a significant advantage. It can closely approximate an S-curve with just two parameters, whereas a more typical bucketed GAM would require many.
Information Theoretical Considerations
--------------------------------------
####
Once an automated procedure is unleashed upon curve selection, it is critically important to handle two information-theoretical considerations.
### Efficient Convergence
####
The model must efficiently converge to the true distribution of the dataset. The functions chosen above were selected such that most typical data distributions will be well approximated by a small number of functions that can be selected one at a time. The choice of functions changes the assumptions related to our model from the classic logistic model pair to a new set of greatly relaxed assumptions.
- The function $ L^{-1}(\vec y) $ is not pathological (\* see below).
- The terms of $ \vec x $ interact through summation in logit space.
####
Notice the first condition. By “not pathological” we really mean just that it is efficiently approximated by a limited number of logistic and gaussian functions. For instance, the condition that its Fourier transform decays rapidly in frequency is sufficient. Rather than requiring linearity, we require only that the function is not pathological, a much weaker condition. The second condition remains unchanged. If there are complicated interaction terms between the variables, it will still be up to a human operator to discover that. No similar scheme can be attempted to uncover interaction terms, since the number of candidate terms at each step grows like the power set of the elements of $ \vec x $ times the number of interaction functions: a space much too large to search effectively, and one possessing no obvious metrics to allow an optimization.
####
However, subject to these assumptions, the model will now converge to the true distribution if given enough parameters.
### Resistance to Overfitting
####
To achieve optimal accuracy, the model must not incorporate curves that are not well supported by the dataset. This inclusion of extraneous features is called overfitting. This requirement has two portions.
1. The model should stop calculating when there is no point in further work
2. The model must not include features that are not well supported by the dataset
####
The key here is to use an information criterion. At each stage of the fitting, find the best available curve, but demand that it pass an information criterion before allowing it into the model. When the best curve can no longer pass the criterion, the model is complete. In the case of ITEM, this is done using the AIC/BIC. These criteria differ only by the constant $ (2 - \ln(N))k $, with the BIC being the stricter criterion for any realistic dataset size. In the results section, the choice of stopping condition is discussed. Typically, it won’t matter: an AIC criterion may include marginally more curves than BIC, but the difference will be small. In many tests, they include the exact same curves. There is some argument that the BIC should be used simply because it is somewhat computationally cheaper, since it may stop drawing curves earlier, and it is better to err on the side of fewer rather than more curves when in doubt.
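The relationship between the two criteria is easy to verify. A small sketch (the log likelihood, parameter count, and sample size are arbitrary illustrative values):

```python
import numpy as np

def aic(log_lik, k):
    """Akaike information criterion: 2k - 2 ln(L)."""
    return 2*k - 2*log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: k ln(N) - 2 ln(L)."""
    return np.log(n)*k - 2*log_lik

# The criteria differ by the constant (2 - ln(N)) * k, so BIC is the
# stricter one whenever N exceeds e^2 (about 8 observations).
log_lik, k, n = -1234.5, 7, 100_000
print(np.isclose(aic(log_lik, k) - bic(log_lik, k, n), (2 - np.log(n))*k))  # True
```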
Implementation of ITEM
======================
There are many subtle points in the actual fitting of the ITEM model, so that procedure will be described here.
Fitting ITEM curves
-------------------
Regressors can be divided into two types.
- Binary Flags
- Real Numbers
####
Binary flags are just what they sound like, being either $ 1 $ or $ 0 $. They don’t need to be further considered here; since they have no need of curves, a simple fit to produce a corresponding $ \beta $ will suffice.
####
Real numbers are typically double precision IEEE floating point numbers. It is important that these values have a well defined (i.e. not discrete) metric. A discrete metric simply means that the regressor is categorical; see [@DISCRETE]. Having a non-discrete metric over a space means that there is some sensible notion of distance. Given such a notion, it should be possible to approximate the apparent power score of the data (as evidenced by the observed transitions) by a smooth curve. The exact shape of the dataset itself is not important, but it is important that it (for example) have a Fourier transform with rapidly decaying high frequencies. For instance, it should not be oscillatory with a high frequency. These pathologies are not common.
####
If a regressor happens to be a categorical variable (i.e. it is drawn from an enumeration of $ N $ values, but without a well defined metric), then it should be replaced with $ N - 1 $ flags. It is important to not form a complete basis with any flags that are introduced, as that would cause a linear correlation between the sum of the flags (guaranteed to be 1) and the intercept term of the regression (also guaranteed to be 1).
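A minimal sketch of this encoding; the category names here are hypothetical loan types, not values from any real dataset. Dropping the first category prevents the flags from summing to 1, which would be collinear with the intercept.

```python
import numpy as np

def categorical_to_flags(values, categories):
    """Encode a categorical regressor as N-1 binary flags, dropping the
    first category so the flags never form a complete basis."""
    return np.array([[1.0 if v == c else 0.0 for c in categories[1:]]
                     for v in values])

# "FRM" is the dropped base category, so it encodes as all zeros.
flags = categorical_to_flags(["ARM", "FRM", "IO"], ["FRM", "ARM", "IO"])
print(flags)
```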
####
For numerical stability, it is very important to cap each observation’s negative log likelihood at some maximum constant; approximately 20 is reasonable. This prevents the case where a specific loan with extremely odd parameters drives the entire fit by producing a vanishingly low probability for an event that actually occurs in the dataset. Adding a small random noise (see Annealing, \[annealing\]) can also help here.
####
The basic fitting procedure is as follows.
1. Fit all flags and betas for existing curves.
2. If model has maximum number of curves, exit.
3. For each regressor, each transition, and each curve type (i.e. logistic, gaussian), use an optimizer to find the curve that minimizes the negative log likelihood.
4. Among all curves thus computed, take the one that improves the log likelihood the most.
5. For this best curve, examine the AIC, and verify that the test passes.
6. If the AIC test fails, exit; otherwise add the curve and go to step 1.
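The loop above can be sketched as follows. This is a deliberately simplified two-state (binary) toy: a crude grid search stands in for the nonlinear optimizer, the intercept adjustment is omitted, and each accepted curve is charged 3 parameters (2 for the curve, 1 for its beta). The synthetic data is generated from a known S-curve purely for illustration.

```python
import numpy as np

def curve_logistic(x, a, b):
    return 1.0 / (1.0 + np.exp(-a*a*(x - b)))

def curve_gaussian(x, a, b):
    return np.exp(-(x - a)**2 / (2*b*b))

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def nll(p, y):
    """Negative log likelihood of binary outcomes y under probabilities p."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(y*np.log(p) + (1 - y)*np.log(1 - p))

def fit_greedy(x, y, max_curves=5):
    """Greedily add the best candidate curve, keeping it only if AIC improves."""
    score = np.zeros_like(x)                 # running power score
    k = 1                                    # parameter count (intercept)
    best_aic = 2*k + 2*nll(sigmoid(score), y)
    curves = []
    for _ in range(max_curves):
        best = None
        # grid search over family, curve parameters, and beta (stand-in optimizer)
        for fam in (curve_logistic, curve_gaussian):
            for a in np.linspace(0.2, 3.0, 8):
                for b in np.linspace(x.min(), x.max(), 8):
                    for beta in np.linspace(-3.0, 3.0, 13):
                        cand_nll = nll(sigmoid(score + beta*fam(x, a, b)), y)
                        cand_aic = 2*(k + 3) + 2*cand_nll
                        if best is None or cand_aic < best[0]:
                            best = (cand_aic, fam, a, b, beta)
        if best[0] >= best_aic:              # AIC test failed: model complete
            break
        best_aic = best[0]
        score = score + best[4]*best[1](x, best[2], best[3])
        k += 3                               # 2 curve parameters + 1 beta
        curves.append((best[1].__name__, best[2], best[3], best[4]))
    return curves

rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0, 2000)
y = (rng.uniform(size=x.size) < curve_logistic(x, 1.0, 0.5)).astype(float)
print(len(fit_greedy(x, y)) >= 1)            # the S-shaped signal is found -> True
```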
####
Notice that it is critically important that the curve families were chosen such that a single curve will always improve the fit. This ensures that the optimization procedure above can always make progress and will not, except in the most marginal cases, get stuck in a situation where only fitting two curves at once would succeed.
####
During the fitting, there are a few pitfalls. The first thing to remember is that the logistic function has an inverse. Therefore, it is most efficient to convert the model probabilities into power scores, then do an (N + 2) parameter fit, where N is the number of parameters in the curve (i.e., 2 for the curves mentioned above). This will be fitting the 2 curve parameters, the curve beta, and an intercept adjustment. All 4 are needed in order to arrive at the actual optimum. For instance, without fitting the intercept adjustment, it will typically be impossible to draw any curve that improves the fit, since any curve will increase or decrease the population-wide average, making the fit worse.
####
Once the power scores are available, on each iteration simply update $ score_{i} = score_{i} + intercept + \beta*C_{a, b}(x) $. This will require only the regressor being fit against and the 4 parameters; it will not require the rest of the regressors, which would otherwise bog down the calculation. Then convert these scores back to probabilities, and compute the log likelihood. Remember to use the analytical derivatives. Most of these calculations will not need to process the whole dataset; see the section on adaptive optimization (Adaptive Optimization \[adaptive\]).
####
Each curve so fit will cost 3 parameters: 2 for the curve and 1 for the beta; the intercept is an adjustment of an existing parameter, so it is free. These curves can be made available to other transitions at a cost of 1 parameter each, and they could even be chosen such that when applied across all transitions the model is most improved. Both of these paths should be avoided, as either one will result in a model that is less understandable, though admittedly the information-theoretic efficiency may be marginally improved. When sharing curves like this, a few parameters are avoided, but each transition now gets a curve that is not ideal for it, and it may be hard to explain why some features from one transition are suddenly showing up in another. This is a judgement call, and the one place where efficiency has been sacrificed for interpretability within the model.
Annealing ITEM curves {#annealing}
---------------------
####
Once all curves are fit, and there is either insufficient information to fit further curves or the model has reached the operator supplied curve limit, the model is complete. At this point, however, the model may not actually be optimal. The exact curves chosen are path dependent, and may not be a global optimum.
####
This is analogous to metal that has been formed and then cooled. If it was cooled quickly, the internal arrangement of atoms may not be at the minimum energy. Essentially, the crystal was left at a local minimum somewhere along the way to the global minimum. Given more time (and sufficient energy), it could eventually, through random chance, bounce out of the local minimum and continue towards the global minimum. At low temperatures, there is not enough energy for this to occur in any reasonable timeframe. The solution in metallurgy is called annealing: simply heat the metal and then cool it very slowly. There is an analogous procedure ([@ANNEALING]) that can improve the total fit quality at the cost of computational time during the fit.
####
This pass is usually not necessary, but if extra CPU cycles are available, it may be attempted. Dramatic improvements from this process should not be expected, but neither should dramatic changes. Most models go through many iterations of fitting and analysis (or bug fixing), so fitting performance is important. However, for a final version, it may be worthwhile to spend the extra resources to get a marginally better product. The procedure below assumes that curve parameters are not optimized during step 1. From experience, adding so many parameters to the optimization rapidly bumps against machine precision issues and causes the results to be worse than an iterative approach.
1. Run a full optimization on all $ \beta $ values in the model (but not curve parameters).
2. If loop count reached, exit.
3. Loop over each regressor.
4. Remove all the curves from this regressor, then (as above) generate the same number of new curves.
5. Once all regressors have been processed in this way, go back to step (1).
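####
The loop above can be sketched as follows. This is a minimal toy stand-in, not the production implementation: `optimize_betas` and `draw_curves` are hypothetical placeholders for the real beta optimization and curve-drawing steps, and a model is represented simply as a map from each regressor to its list of curve parameters.

```python
import random

def optimize_betas(model):
    """Step 1: re-optimize all beta values (no-op placeholder here)."""
    return model

def draw_curves(model, regressor, count):
    """Step 4 stand-in: draw `count` fresh curves for one regressor."""
    model[regressor] = [(random.gauss(0.0, 1.0), random.random())
                        for _ in range(count)]
    return model

def anneal(model, max_loops=3):
    for _ in range(max_loops):                 # step 2: loop-count exit
        model = optimize_betas(model)          # step 1
        for regressor in list(model):          # step 3: loop over regressors
            count = len(model[regressor])
            model[regressor] = []              # step 4: remove its curves...
            model = draw_curves(model, regressor, count)  # ...and redraw
    return model                               # step 5: outer loop repeats

# Toy model: (center, slope) pairs per regressor, loosely echoing the tables.
model = {"age": [(30.7, 0.6), (30.7, 34.9)], "incentive": [(-0.18, 1.01)]}
annealed = anneal(model)
```

Note that the redraw keeps the curve count per regressor fixed; only the curves themselves are given a chance to move to a better configuration.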
####
For this to really work well, some small amount of randomness needs to be added to the model. In this way, any curves that came about simply because of the exact content of this dataset will have a chance to be replaced by other curves that may be better, but are unreachable from this dataset. The proposed method is to add to each observed probability Gaussian noise with a very small standard deviation (around 1.0e-4 to 1.0e-6 depending on dataset size), and then renormalize the results. This has the effect of making events that didn’t happen (probability 0 in the dataset) have some small probability of occurring, and thus they cannot be completely discounted by the model. The size of this standard deviation should be inversely proportional to the dataset size. Essentially, in a dataset with a million points, nothing should be considered less than approximately a one in a million chance; the data itself simply does not support a stronger assertion.
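####
A minimal sketch of this noise injection, assuming the observed probabilities are held as a plain list; the function name and the default sigma are illustrative, not taken from the ITEM code.

```python
import random

def jitter_probabilities(probs, sigma=1e-5, rng=None):
    """Add small Gaussian noise to each observed probability, clip at zero,
    and renormalize so the vector sums to 1 again. sigma should shrink as
    the dataset grows (roughly 1/N, e.g. ~1e-6 for a million observations)."""
    rng = rng or random.Random()
    noisy = [max(0.0, p + rng.gauss(0.0, sigma)) for p in probs]
    total = sum(noisy)
    return [p / total for p in noisy]

# An event that never occurred (probability 0 in the data) may now carry a
# small nonzero probability, so the model cannot discount it entirely.
obs = [0.97, 0.03, 0.0]
jittered = jitter_probabilities(obs)
```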
####
Another way of introducing randomness is in the selection of the starting points for the centrality parameters of the various curves to be drawn. These could be started at the actual dataset mean, but that may prevent improvement in some cases where a local optimum prevents the global optimum from being obtained. An example related to seasonality will be discussed in Section \[fitResult\]. One way to avoid this is to instead select a random data point and start the centrality parameter at that point’s value. This has the effect of allowing some out-of-the-way portions of the parameter space to be explored, but it does not tend to explore places lacking in data. Since the curve-fitting process is iterative, there will be many chances to find a good curve, especially if annealing is used. For curves that have an easily reachable global optimum, this won’t matter anyway.
####
For the example built here, annealing results will not be discussed. In future work, good values of the noise term and annealing performance analysis can be discussed.
Adaptive Optimization {#adaptive}
---------------------
For the ITEM model, all the optimizations are taken over the expected value of the log-likelihood of the dataset. This objective has the following form.
$$f(\beta) = \frac{1}{N} \sum_{k=1}^{N} \vec y_{k} \cdot \ln(\vec L(\beta \cdot F(\vec x_{k})))$$
In this equation, $ N $ is the number of observations, and $ \vec L(\beta \cdot F(\vec x_{k})) $ is the model-estimated vector of transition probabilities. Consider the corresponding partial sums of this function.
$$f_{M}(\beta) = \frac{1}{M} \sum_{k=1}^{M} \vec y_{k} \cdot \ln(\vec L(\beta \cdot F(\vec x_{k})))$$
It is clear that for large $ M $, $ f_{M}(\beta) - f(\beta) $ is approximately distributed as $ N(0, \frac{\sigma}{\sqrt{M}}) $, where $ \sigma $ is simply the standard deviation of a single element drawn from the dataset. Optimizers are driven not by the value of $ f(\beta) $, but by the differences between these values, e.g. $ f(\beta_{1}) - f(\beta_{2}) $. The standard deviation of this difference in the case of partial sums may be computed directly; call it $ \sigma(\beta_{1}, \beta_{2}, M) $.
First, define a term-wise difference.
$$d(\beta_{1}, \beta_{2}, k) = \vec y_{k} \cdot (\ln(\vec L(\beta_{1} \cdot F(\vec x_{k}))) - \ln(\vec L(\beta_{2} \cdot F(\vec x_{k}))))$$
Then the standard deviation of the partial sum can be defined as follows.
$$\sigma(\beta_{1}, \beta_{2}, M) = \frac{1}{\sqrt{M}} \sqrt{\frac{1}{M}\sum_{k=1}^{M}(d(\beta_{1}, \beta_{2}, k) - \mu)^{2}}$$
Where here, $ \mu $ can be taken to simply be the sample mean. Due to the sizes of the numbers typically encountered, this will present no problem. Then, for some constant $ C $, successive values of $ M $ (starting at $ M_{0} $) are considered until
$$| f_{M}(\beta_{1}) - f_{M}(\beta_{2}) | > C \sigma(\beta_{1}, \beta_{2}, M)$$
####
Here the choice of the value of $ C $ reflects how certain the modeler would like to be that the new point is actually better than the old point. For values of $ C $ around 5 (the standard threshold for discovery in the physical sciences) the odds of spurious point selection are very small, roughly $ 10^{-6} $. In any case, the model is not very sensitive to this parameter: setting it smaller doesn’t help performance much, and setting it larger doesn’t improve accuracy meaningfully. The value of $ M_{0} $ needs to be chosen large enough that the law of large numbers will take effect, so that the data sample is representative of the whole dataset. Again, the model is not very sensitive to this parameter. Making it approximately 1% of the dataset size is a reasonable setting if the dataset is large; alternatively, something of moderate size such as $ 10^{4} $ should work for most datasets. Setting this parameter very small doesn’t improve efficiency much, so as long as it is a moderately sized fraction of the whole dataset, little is gained by reducing it further.
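####
The comparison procedure might be sketched as follows. The names and the toy $ M_{0} $ are illustrative; a production version would extend the partial sums incrementally rather than rescanning the prefix, as noted below.

```python
import math
import random

def adaptive_compare(d_terms, c=5.0, m0=100):
    """Decide which of two candidate points is better using partial sums.

    d_terms[k] plays the role of d(beta1, beta2, k) from the text. Returns
    +1 if beta1 looks better, -1 if beta2 does, or 0 if the whole dataset
    is exhausted without a significant difference (the natural stop)."""
    n = len(d_terms)
    m = min(m0, n)
    while True:
        chunk = d_terms[:m]                      # a production version would
        mean = sum(chunk) / m                    # accumulate these sums
        var = sum((x - mean) ** 2 for x in chunk) / m
        sigma_m = math.sqrt(var) / math.sqrt(m)  # sigma(beta1, beta2, M)
        if abs(mean) > c * sigma_m:
            return 1 if mean > 0 else -1
        if m == n:
            return 0                             # no significant difference
        m = min(2 * m, n)                        # examine more data

# Toy data: beta1 is better by 0.1 per observation, buried in noise.
rng = random.Random(0)
d = [0.1 + rng.gauss(0.0, 0.5) for _ in range(100000)]
```

On this toy data the loop typically settles the comparison after a few thousand terms, well before the full 100,000 are examined.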
####
Note that for each successive value of $ M $, the previous results need not be recalculated; only the additional elements need to be considered. If the dataset is large, then it will almost never be necessary to examine the whole dataset. Only in the last few iterations of the optimizer will the comparisons be close enough together that the entire dataset must be considered.
####
Furthermore, this gives a natural stopping condition. Whenever the points considered by the optimizer satisfy
$$| f_{N}(\beta_{1}) - f_{N}(\beta_{2}) | < \sigma(\beta_{1}, \beta_{2}, N)$$
####
applied over the whole dataset of size $ N $, then there is simply no further progress possible, so the current results should be returned as-is.
####
The end result of this algorithm is that for a constant $ x $ and $ y $ tolerance (i.e. stop optimizing when the parameter is bracketed close enough to its optimum, etc.) the cost of optimization grows sublinearly. For very large dataset sizes, this cost would approach a constant, as the optimizer would never reach the end of the dataset on any iteration.
Hybrid Projection
-----------------
####
Once a model is fit, it must be used to project future behavior. Though this process will not be covered in depth in this paper, an efficient means for performing this projection will be described.
### Markov Matrix Multiplication
####
If the model contains $ m $ potential states, then at each time period, the $ m \times m $ Markov matrix at a time $ t $ (here denoted $ M(t) $) may be computed. Then the next state $ \vec s(t) $ may be computed from $ \vec s(t-1) $ through matrix multiplication.
$$\label{matmult}
\vec s(t) = M(t-1) \vec s(t-1)$$
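####
As a concrete sketch, with a hypothetical 3-state matrix (the states and probabilities below are illustrative, not fitted values):

```python
# Column convention matching the equation above: M[i][j] is the probability
# of moving from state j to state i, so each column of M sums to one and
# s(t) = M(t-1) s(t-1).

def step(matrix, state):
    """One month of Markov matrix multiplication: s(t) = M(t-1) s(t-1)."""
    return [sum(matrix[i][j] * state[j] for j in range(len(state)))
            for i in range(len(matrix))]

# Hypothetical states: C = current, P = prepaid (absorbing), 3 = delinquent.
M = [
    [0.96, 0.0, 0.3],   # -> C
    [0.02, 1.0, 0.0],   # -> P
    [0.02, 0.0, 0.7],   # -> 3
]
s0 = [1.0, 0.0, 0.0]    # the loan starts current
s1 = step(M, s0)        # one month later
```

Every month requires the full $ m \times m $ matrix, which is exactly the cost the simulation and hybrid methods below avoid.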
####
This method has two primary disadvantages.
1. It forbids the use of non-Markovian regressors
2. It wastes resources on computations for extremely rare states
####
In mortgage modeling, both of these disadvantages are particularly severe. In general, a regressor such as “number of months since last delinquent” is incredibly important, but non-Markovian. In addition, some states (e.g. “In Foreclosure”) are extremely rare. If the matrix multiplication method is used, then the majority of the computations each month go towards states that are immaterial for most loans, on most interest rate paths. This method is especially expensive since interest rates and housing prices need to be simulated anyway, so it doesn’t even eliminate the need for simulation.
### Simulation
####
An alternative to matrix multiplication is, each month, to select a random transition to make, weighted by transition probability. Now instead of $ \vec s(t) $, the equation uses $ \vec {s'}(t) $, where $ \vec {s'}(t) $ is composed of a single $ 1 $ with all other elements zero. This means that only one column of $ M(t) $ needs to be calculated. Call this selected matrix $ M'(t) $; it is composed of all zeros except for a single $ 1 $ element in the randomly selected (weighted by probability) location along the column corresponding to the $ 1 $ element in $ \vec{s'}(t-1) $. Then $ \vec {s'}(t) $ is especially easy to calculate.
$$\vec {s'}(t) = M'(t-1) \vec {s'}(t-1)$$
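####
The column-sampling step might look like the following sketch; the matrix and state indexing are illustrative (state 0 = C, 1 = P, 2 = 3), not the paper's actual code.

```python
import random

def simulate_step(matrix, state_index, rng):
    """Draw the next state using only one column of M(t): the transition
    probabilities out of the current state."""
    column = [matrix[i][state_index] for i in range(len(matrix))]
    u = rng.random()
    cumulative = 0.0
    for i, p in enumerate(column):
        cumulative += p
        if u < cumulative:
            return i
    return len(column) - 1   # guard against floating-point round-off

# Hypothetical column-stochastic matrix; state 1 (P) is absorbing.
M = [
    [0.96, 0.0, 0.3],
    [0.02, 1.0, 0.0],
    [0.02, 0.0, 0.7],
]
rng = random.Random(1)
path = [0]                       # start in state C (index 0)
for _ in range(12):              # simulate one year, one column per month
    path.append(simulate_step(M, path[-1], rng))
```

Each month touches a single column, which is the source of the roughly $ m $-fold saving over full matrix multiplication.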
####
This computation involves only a single column of $ M(t-1) $, and is thus much cheaper (generally about $ m $ times cheaper) than the computation in equation \[matmult\]. Furthermore, since there is no longer any uncertainty about the loan status in any month, non-Markovian regressors no longer pose a problem. Unfortunately, this method introduces substantial additional simulation noise, which may require the use of additional paths, thus increasing computational cost.
####
In typical usage, especially if loan level noise is not a problem (i.e. only aggregate behavior of a pool is needed), simulation is several times cheaper than matrix multiplication, even when it requires the use of extra paths due to excessive noise. This is especially true since many of the most expensive computations are related to some of the rarest states (e.g. “Foreclosure”) in many loan level models.
### Hybrid Simulation
####
Ideally, one would like a mechanism as inexpensive as simulation, but with as little noise as matrix multiplication. In addition, this method should allow the use of non-Markovian regressors, just like simulation does.
####
Returning again to the mortgage modeling example, notice that most loans will be in status $ C $ most of the time. For any loan that starts in status $ C $, we could assume that it will always remain in $ C $. If we do so, then the non-Markovian regressors are not an issue, since the status at each time is entirely clear. Comparing this to the simulation approach, we can have a misprediction in one of two ways.
1. The loan transitions to $ P $
2. The loan transitions to $ 3 $
####
In the first case above, since $ P $ is an absorbing state, no harm has been done. Though the non-Markovian regressors are no longer correct, they are also no longer needed. In the second case, it would be necessary to then begin real simulation in order to correctly calculate future states. Taking $ T(t, X) $ to be the probability that the loan transitions from $ C $ to $ X $ in month $ t $, the following three values are needed.
1. $P(t, C^*)$: The probability that the loan is always in $ C $
2. $P(t, P^*)$: The probability that the loan is in $ P $ after previously having always been $ C $
3. $T(t, 3^*)$: The probability that in the current month, the loan goes to $ 3 $ having always been $ C $
####
These values can be easily represented in terms of the previous months’ values.
$$P(t, C^*) = P(t-1, C^*)T(t-1, C)$$
Similar equations exist for $ P(t, P^*) $ and $ T(t, 3^*) $ as well.
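####
Under one possible timing convention (an assumption on my part; the paper's own indexing may differ slightly), these recurrences can be sketched as follows, with hypothetical flat monthly hazards $ T(t, P) $ and $ T(t, 3) $:

```python
def always_current_probs(hazards_p, hazards_3, months):
    """Return P(t, C*), P(t, P*) and T(t, 3*) for t = 0..months-1.

    hazards_p[t] and hazards_3[t] stand in for T(t, P) and T(t, 3), the
    monthly C -> P and C -> 3 transition probabilities."""
    p_cstar, p_pstar, t_3star = [], [], []
    pc, pp = 1.0, 0.0                    # the loan starts current
    for t in range(months):
        t_3star.append(pc * hazards_3[t])              # enters 3 this month
        pp = pp + pc * hazards_p[t]                    # prepaid, always-C before
        pc = pc * (1.0 - hazards_p[t] - hazards_3[t])  # still always-C
        p_cstar.append(pc)
        p_pstar.append(pp)
    return p_cstar, p_pstar, t_3star

hp = [0.02] * 12                          # illustrative flat C -> P hazard
h3 = [0.01] * 12                          # illustrative flat C -> 3 hazard
pc, pp, t3 = always_current_probs(hp, h3, 12)
ever_3 = sum(t3)                          # P(tau, 3*): ever reaches state 3
t_prime = [x / ever_3 for x in t3]        # T'(t, 3), as defined below
```

By construction, the always-current mass, the prepaid-after-always-current mass, and the cumulative entries into state $ 3 $ account for all probability.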
####
If the loan matures at time $ \tau $, then compute $ P(\tau, 3^*) $ (using formulas such as those above), which is the probability that the loan ever reaches state $ 3 $. For each time $ t $, compute
$$T'(t, 3) = \frac{T(t, 3^*)}{P(\tau, 3^*)}$$
####
This is the probability that the loan enters state $ 3 $ at time $ t $ given that it ever enters state $ 3 $. Now select a time $ t' $ randomly, weighted by $ T'(t, 3) $, and begin simulation from time $ t' $, given that the loan was current up until then. In this way, compute $ \vec{s'}(t) $ for all $ t \leq \tau $, taking $ \vec{s'}(t) = C $ where $ t < t' $. Similarly, define the status vector $ S(t) = \{P(t, C^*), P(t, P^*), 0\} $. Now the status vector $ \vec s(t) $ can be computed as
$$\label{hybrid}
\vec s(t) = P(\tau, 3^*)\vec{s'}(t) + (1 - P(\tau, 3^*))S(t)$$
####
This computation has the advantages of both simulation and Markov matrix multiplication, and none of the weaknesses. On average, only approximately $ 1.5 $ columns of $ M $ need to be computed, rather than $ m $. In addition, non-Markovian regressors can be used freely, since in both branches of this computation the complete state vector up to time $ t $ is always known with certainty. The simulation noise of this method is far less than that introduced by raw simulation, since the rare events (transitions through $ 3 $) are oversampled, and the common events ($ C \rightarrow C $) are computed exactly.
####
Lastly, this approach allows the practitioner to spend resources where they are most needed. I propose the following method, performed for each interest rate and house price path. For each loan $ n $, compute $ w(n) = P(\tau, 3^*) $, with the understanding that $ P(\tau, 3^*) $ is a function of $ n $.
1. Compute $ \gamma = \sum_{n} w(n) $, where the sum is taken over all $ n $ loans.
2. Determine some threshold, call it $ \varepsilon $, perhaps $ \varepsilon = \frac{\gamma}{nq} $ for some integer $ q $.
3. Loop through all loans, compute $ \alpha(n) = \min(1, \frac{w(n)}{\varepsilon}) $
4. For each loan, draw a random value $ \nu(n) \in [0,1) $
5. If $ \nu(n) > \alpha(n) $, skip this loan
6. Otherwise reduce $ w(n) $ by $ \varepsilon $, and compute a path (as defined in equation \[hybrid\]) for this loan
7. Continue looping until all $ nq $ simulations have been performed, or no loan has $ w(n) > \frac{\varepsilon}{2} $.
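####
A sketch of this allocation loop follows. The weights stand in for each loan's $ P(\tau, 3^*) $, and a simple counter stands in for the actual path computation of equation \[hybrid\]; all names are illustrative.

```python
import random

def allocate_simulations(w, q, rng, max_sweeps=1000):
    w = list(w)                           # per-loan remaining weight
    gamma = sum(w)                        # step 1
    n = len(w)
    eps = gamma / (n * q)                 # step 2 threshold
    total = 0
    sims = [0] * n                        # paths computed per loan
    for _ in range(max_sweeps):
        for i in range(n):                # step 3: loop through all loans
            alpha = min(1.0, w[i] / eps)
            if rng.random() > alpha:      # steps 4-5: maybe skip this loan
                continue
            w[i] -= eps                   # step 6: charge the loan and
            sims[i] += 1                  # ...compute one hybrid path here
            total += 1
            if total >= n * q:            # step 7: budget exhausted
                return sims
        if max(w) <= eps / 2:             # step 7: all loans resolved
            return sims
    return sims

rng = random.Random(7)
weights = [0.30, 0.05, 0.01, 0.001]       # riskier loans get more paths
counts = allocate_simulations(weights, q=10, rng=rng)
```

The high-weight loan absorbs most of the budget, while the near-certain loans are sampled only occasionally, which is exactly the equal-noise allocation described above.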
####
This procedure is designed to allocate calculations efficiently. In general, loans that have a higher probability of crossing status $ 3 $ will get more simulations. The end result is that all loans will have a similar amount of simulation noise at the end of the process. This avoids the very common problem where most loans have very little uncertainty, but a few have a lot, so most computational resources are wasted on the loans where extra precision makes little difference.
####
An alternate version would involve no randomness, and would simply apply $ \frac{w(n)}{\varepsilon} $ (rounded up, presumably) computations to each loan.
Results
=======
####
The ITEM model was implemented as discussed above, and applied to the Freddie Mac loan-level dataset. For the purposes of this demonstration, only a limited set of regressors was used. In a true production setting, a much larger set would be used, but the results would be otherwise very similar. The inclusion of several of the most important regressors will be enough to prove the point.
####
Additionally, it should be noted that only the first few million observations from the database were used; the observation count is limited by the RAM of the test machine. In a live production setting, the observations used would be selected randomly and then shuffled. Without the shuffling, the observations used here correspond to the oldest loans. In addition, the adaptive optimization is somewhat undermined by the fact that the first few blocks of calculations represent only a few (about 10k) loans originated around the same time; therefore, no curve is reachable that doesn’t improve the fit of these loans. This deficiency was allowed to remain in order to show the behavior of a multi-curve age regressor, which otherwise would have ended up with only two curves rather than the 5 shown here.
Initial Curve Fitting Convergence
---------------------------------
####
One important aspect of fitting is that it must be parsimonious with the curves to be added. The model performs well on this score. The table below describes the curves added when the model is allowed to fit on 1 million data points from the dataset. The model was allowed to select curves from age, incentive, fico, LTV and calendar month (to capture seasonal effects). Among these 5 regressors, 13 curves were drawn. The model was allowed to draw as many curves as it found useful, but stopped after 13.
####
Regressor toState type center slope negLL AIC BIC
----------- --------- ---------- -------- ------- ---------- ---------- ----------
age P logistic 30.7 0.6 0.135644 -7565.79 -7530.34
age P gaussian 30.73 34.87 0.134044 -2716.92 -2681.47
incentive P logistic -0.18 1.01 0.132952 -1775.78 -1740.33
age 3 logistic 30.7 0.16 0.132498 -632.64 -597.19
incentive P logistic -0.18 0.99 0.132143 -673.67 -638.22
... ... ... ... ... ... ... ...
age P logistic 30.72 0.265 0.130196 -233.67 -198.22
... ... ... ... ... ... ... ...
age 3 logistic 30.72 0.074 0.130020 -21.65 13.80
: Curves (1M datapoints)
####
![Fit Convergence as a Function of Curve Count[]{data-label="fig:convergence"}](convergence.jpg)
####
In the table above, the first 5 of these curves are shown, as well as curve 10 and curve 13. The AIC shows exponential-decay behavior, though the decay slows for curves 6 through 10. It is interesting to note that curve 5 improves the AIC more than curve 4: the improvement in the age fit allowed for a better selection of incentive curve in the next step than was possible previously. This is a good example of the path dependency that annealing is designed to handle. After an initial improvement, the results quickly level off, as expected.
####
The BIC for each of these curves is also recorded in the table. It differs from the AIC by a constant of roughly 35 for a dataset of this size. If the BIC is used instead of the AIC, then only 10 curves (rather than 13) would have been drawn. The difference in final log likelihood is immaterial ($ 0.130071 $ vs $ 0.130020 $), but the computational cost would have been reduced by roughly 20%.
####
After these 13 (or 10) curves, the model is unable to find any better curves. The primary reason for this is that the standard deviation of the log likelihood is large enough that all further optimizations rapidly halt, as there is not enough data to conclusively point the way to a better curve. With a larger dataset, it would be expected that this situation would improve. Even so, the curve count is expected to be highly sublinear in dataset size. Notice also that ITEM did something a typical human analyst would not do: it did not use all of the regressors, instead opting to place more curves on the most important few regressors rather than a few on each.
####
The concentration of the curves on a few regressors is an interesting phenomenon. What is really happening here is that several of the regressors have strong collinearity and a dramatic difference in predictive power. Any weak regressor will appear to be pure noise if the distribution of stronger regressors is not uniform when projected onto this regressor. This prevents weaker regressors from getting any curves until the stronger regressors are adequately fit. At the same time, some strong regressors are strongly collinear with each other, meaning that each curve drawn on one of them actually greatly improves the smoothness of the other. Consequently, the algorithm moves back and forth between the two strong collinear regressors, ignoring all others until those regressors have been fully accounted for. This oscillation is visible in the order in which the curves were fit.
####
Regressor toState type center slope negLL AIC BIC
----------- --------- ---------- -------- ------- ---------- ----------- -----------
age P logistic 31.56 0.72 0.132417 -38371.64 -38331.37
age P gaussian 31.56 35.08 0.131104 -11024.60 -10984.33
incentive P logistic -0.04 1.92 0.129150 -18802.34 -18762.07
age P gaussian 31.56 35.08 0.128644 -4714.76 -4674.49
age P logistic 31.57 0.89 0.128296 -3300.35 -3260.08
age 3 logistic 31.57 0.13 0.128080 -2074.88 -2034.61
ltv 3 logistic 74.05 0.55 0.127905 -1676.31 -1636.04
incentive 3 logistic -0.14 1.09 0.127570 -1994.14 -1953.87
age P logistic 31.57 0.06 0.127537 -241.15 -200.88
: Curves (5M datapoints)
####
In a larger dataset, with 5 million points (see table above), fewer curves are added, but better results are achieved. The reason for this is related to the defense against overfitting built into the model. The model will not refine a parameter beyond the point where it moves the log likelihood by less than one sample standard deviation. This protects against overfitting by avoiding curves that might otherwise have a spuriously high AIC score. In a larger dataset, not only is the noise within the dataset suppressed (thus allowing easier fitting to the smooth model curves), but the parameters can be refined further due to the smaller standard deviation of the mean of the log likelihood. This means that each curve is in general a better fit, which can offset the ability of the larger dataset to otherwise support additional curves. Also, note that there was no difference between AIC and BIC; both would have included the same set of curves.
####
Note here that the centrality parameter of these curves shows little variation. This is apparently due to the presence of numerous local minima in the noisy dataset. One solution to this problem is to pick the centrality parameter by simply selecting the relevant value from a random point in the dataset. This will have the effect of starting the curves (typically) in concentrated parts of the dataset, but the added noise should help to overcome this problem related to local minima.
Fitting Computational Cost
--------------------------
For the purposes of this test, the fitting is allowed to draw any number of curves on any of 5 regressors for 3 transitions. This fitting was performed several times with different dataset sizes to see how the fitting time grows with dataset size. It was also repeated with several configuration options related to adaptive optimization, to measure performance with and without those improvements. All tests were performed on a 2009 Mac Pro with 4 physical CPU cores (8 virtual cores).
####
Adaptive Sigma Stop Observations Curves -LL Time/Curve (ms) Time (ms) Curve 8 (ms)
---------- ------------ ---------------- -------- -------- ----------------- ------------ --------------
TRUE TRUE $ 1 * 10^{6} $ 13 0.1300 165,000 2,147,031 855,181
TRUE TRUE $ 5 * 10^{6} $ 9 0.1275 514,000 4,626,543 4,103,329
TRUE TRUE $ 1 * 10^{7} $ 8 0.1284 983,000 7,864,467 7,864,467
FALSE TRUE $ 1 * 10^{6} $ 19 0.1299 115,707 2,198,424 1,008,361
FALSE TRUE $ 5 * 10^{6} $ 16 0.1272 370,584 7,041,099 3,499,648
FALSE TRUE $ 1 * 10^{7} $ 18 0.1280 910,426 17,298,097 8,283,654
FALSE FALSE $ 1 * 10^{6} $ 16 0.1299 112,765 2,142,544 1,008,361
FALSE FALSE $ 5 * 10^{6} $ 16 0.1272 592,825 11,263,666 5,652,720
FALSE FALSE $ 1 * 10^{7} $ 18 0.1280 1,271,021 24,149,392 10,042,209
: Fitting Computational Cost
####
The column “Adaptive” indicates whether this test examines only a subset of the dataset on each iteration when that is enough to determine which point is better, rather than always examining the whole dataset. The column “Sigma Stop” indicates whether the optimization stops when the points are all within one standard deviation of each other.
####
The table above includes three time measurements: the time per curve, the total time, and the time required to draw 8 curves. This last (curve 8) time is perhaps the fairest measure. The values are noisy, so it is hard to draw firm conclusions, but the adaptive optimization does generally perform better. The total time is much better for the adaptive optimization, since it avoids drawing a large number of relatively unimportant curves at larger dataset sizes. Notice that the rows where both of these flags are false correspond to typical optimization algorithms. The adaptive optimization with a 1-sigma stop performs almost 3x faster overall (20% faster on a curve-for-curve basis) by the time the dataset reaches 10 million points. With a much larger dataset near a billion points, this difference would become far larger. The adaptive optimization does not lose any meaningful degree of accuracy, and in fact is far more parsimonious with curves as well, simply because it avoids fitting curves that cannot match well on several subsets of the dataset.
####
It is expected that this trend continues into very large dataset sizes, saving substantial computation time. In this case, ten times more data requires approximately 4 times as much computation time. For very large dataset sizes, the cost of adaptive optimization (at a given level of precision) would approach a constant. It is expected that the cost of a round of annealing would be similar to the cost of drawing all these curves initially.
####
The real limitation in this context is RAM. The calculation requires approximately 500 MB of RAM per million observations. Given that the computer used for this exercise has only 8 GB of RAM, it is hard to push much beyond 10 million observations, or roughly 1% of the dataset. Similarly, for reasons of simplicity, this sample is not being chosen uniformly from the entire dataset, but is rather composed of the first observations from the set. This is done just for speed and efficiency when loading the data. In a real production setting, a representative sample (possibly the whole dataset) would be precompiled, and that would then be used. It is estimated that a computer with roughly 500-1000 GB of RAM (and about 40 CPU cores) could optimize a model over the entire dataset. A system such as this is well within the reach of even a small corporation, and might cost approximately \$50,000. Such an optimization would take roughly 6 hours, easily accomplished as an overnight job. Smaller optimizations on representative subsets of the data could be finished in minutes, allowing for very rapid prototyping and development.
####
Fit Type Accuracy Observations rows/ms ns/row cycles/row
------------- --------------------- -------------- ----------- ----------- ------------
Curve Correctly Rounded $ 10^{6} $ $ 6042 $ $ 165.5 $ $ 1760 $
Curve Approx $ 10^{-15} $ $ 10^{6} $ $ 18416 $ $ 54.3 $ $ 577 $
Coefficient Correctly Rounded $ 10^{6} $ $ 1095 $ $ 9127 $ $ 9711 $
Coefficient Approx $ 10^{-15} $ $ 10^{6} $ $ 1469 $ $ 6803 $ $ 7238 $
: Cost Per Row
####
As can be seen from the table above, the cost per row of these optimizations is very moderate. It requires roughly 1700 clock cycles to do a row of curve fitting, and roughly 10,000 to do a row of coefficient optimization. In this case, the coefficient optimization was fitting only flags, that cost would be expected to rise somewhat as more curves are introduced. The curve fitting cost is far more important, as more than 90% of the fitting time is consumed by curve fitting rather than coefficient fitting.
####
The code used for these examples is not fully optimized. In particular, it uses correctly rounded $ e^{x} $ and $ \ln $ functions. Replacing the correctly rounded exponential function with one with a relative error of approximately $ 10^{-15} $ more than triples the performance of the computation at no material cost to accuracy. Additional optimizations are certainly possible. The cycles/row computation assumes 100% utilization of all four cores in the CPU of the test machine, but in practice the utilization rate is somewhat lower (about 70%), primarily due to the small size of the dataset preventing a more efficient division. Additional tuning would increase that value to nearly 100%, reducing the cost still further. For the curve fitting, it should be possible to process a single observation in less than 200 clock cycles on a typical modern CPU in Java. Straight C or C++ code would perform similarly, though it might be possible to do better with carefully hand tuned assembly code that takes full advantage of the SIMD units within newer CPUs.
####
Similarly, the coefficient optimization is essentially the same operation that would be used to project future behavior. Even with no additional code improvements, this cost is in line with the resource budget set out at the beginning of the paper. Careful code optimization should keep the calculations under budget even in a full scale model using numerous regressors and interaction terms.
Fitting Results {#fitResult}
---------------
With the example above (using 1 million data points), the results are already greatly improved by the curves that were drawn. Notice that all of these fits were computed entirely without human intervention, other than a human defining the regressors and giving the model the list of regressors it may use. A typical mortgage model built within industry takes at least a man-month of data analysis, fitting, and validation. The ITEM model automates most of these tasks, accomplishing in minutes what a human would typically do in a few days in a more traditional setting.
####
![C to P vs. Age[]{data-label="fig:ageBeforeAfter"}](ageBeforeAfter.jpg)
![C to P vs. Incentive[]{data-label="fig:incentBeforeAfter"}](incentiveBeforeAfter.jpg)
####
In these charts, the curve labeled “Before” is from the model with all flags fit, but before any of the curves have been drawn, so this model does not include the effect of age or incentive. The curve labeled “After” is from the model with the curves added. The charts show clearly that even a few curves have dramatically improved the fit of these regressors.
####
Note that the age curve shown is strongly influenced by the fact that the loans under consideration are all originated near the same time. Therefore, the spike seen around age 36 months is strongly influenced by the prevailing interest rates at that time. With a larger dataset, this correlation between age and time would smooth out, and a more classic age curve would be recovered. However, for the purposes of this demonstration, it is enough to show that ITEM can construct a model that represents the data. It is not necessary to show that the data set chosen is a truly fair representation of the mortgage universe.
####
The seasonality curve shows a good example where again annealing might be helpful.
####
![C to P vs. Calendar Month[]{data-label="fig:seasonBeforeAfter"}](seasonBeforeAfter.jpg)
####
It can be seen in this example that the seasonality effect was not captured. Ideally, the spike near the end should be captured, but given all the smaller spikes throughout the year, the optimizer is likely to settle into a very unsatisfying local minimum rather than finding the actual behavior related to December and January. Also, in this example, the low season for prepayment spans the year end, so it is not localized when looking at seasonality in this way. The model might still draw curves here that would help, but it would take a large amount of data and at least two curves to really improve this significantly. This is an example where a small amount of randomness could help dramatically, by allowing the optimizer to find the spike at the end.
Model Curves
------------
A full and complete examination of all curves drawn by this model is beyond the scope of this paper. However, for illustrative purposes, the curve related to incentive for the 5 million point dataset is shown below.
####
![C to P multiple vs. Incentive[]{data-label="fig:incentResponse"}](incentiveResponse.jpg)
####
As can be seen from Figure \[fig:incentResponse\], the response of the model to incentive is an extremely smooth curve that rapidly saturates in both the positive and negative directions. This curve has been represented here as a multiple: assuming the $ C \rightarrow P $ probability is not too large, then for a given value of incentive, that probability is multiplied by the value of this curve. Only relative multiples matter here, so it doesn’t matter whether the curve passes through 1 or not. This shows that a loan with strong negative incentive is about one third as likely to prepay as a loan with strong positive incentive. This fits with intuition, and in fact is a close approximation of the historical data seen in Figure \[fig:incentBeforeAfter\]. Notice that in Figure \[fig:incentBeforeAfter\] the highest vs. lowest prepayment multiple is approximately 7 to 1, but that graph is confounded by numerous other factors (occupancy type, age, etc.), whereas the response above is purely for incentive acting on its own.
####
The primary advantage of the ITEM model is that it expresses all responses to input variables in terms of these extremely smooth curves, and thus tends not to overfit. This can be seen in the response to Age, which has 5 different curves fit to it for the $ C \rightarrow P $ transition.
####
![C to P multiple vs. Age[]{data-label="fig:ageResponse"}](ageResponse.jpg)
####
Even here, the curves combine to make a single very smooth curve. This response is largely driven by the correlation between age and interest rates, due to the limited fitting sample. Arguably, this is a worst-case scenario. As the sample size increases, and more variation in the loan-months is pulled in, this age curve will flatten out and look smoother. However, even as it is, the age curve makes intuitive sense. Loans tend not to prepay for the first few years of life, then they enter a period of fast prepays before finally tailing off into old age. This curve captures that phenomenon surprisingly well already.
Advanced Fitting Topics
=======================
####
The previous sections explored some basic fitting routines and results. This section will concentrate on a single large fit (roughly 15 million data points) drawn randomly from the dataset. The loans in this dataset are selected by hashing the loan id from the dataset using SHA-256, then reducing the digest mod 50 and taking any loan with a reduction congruent to 0. This selection procedure should ensure a well distributed random sample.
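As a sketch of this selection rule (the class and method names here are hypothetical, not part of the ITEM reference implementation), the SHA-256 hash of the loan id is interpreted as a non-negative integer and reduced mod 50, keeping loans in residue class 0, i.e. roughly 2% of the universe:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public final class LoanSampler {
    /** Returns true if the loan id falls in the 1-in-50 sample bucket. */
    public static boolean inSample(String loanId) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(loanId.getBytes(StandardCharsets.UTF_8));
            // Interpret the digest as a non-negative integer and reduce it mod 50.
            BigInteger hash = new BigInteger(1, digest);
            return hash.mod(BigInteger.valueOf(50)).intValue() == 0;
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }
}
```

Because the selection depends only on the loan id, the same loan is deterministically in or out of the sample across runs, which keeps training samples reproducible.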
####
In this exercise, several experimental fitting choices were made in an attempt to improve the fit.
1. The ordering of the data points was randomized
2. The starting curve centrality parameter was set to the regressor value of a random selection from the sample
3. The starting curve slope parameter was randomized in sign and magnitude
4. Annealing was run at the conclusion of the initial round of fitting
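Items 1 through 3 above can be sketched as follows (a minimal illustration with hypothetical names, not the ITEM API): a Fisher–Yates shuffle for the data ordering, a centrality drawn from a random observed regressor value, and a slope with random sign and magnitude.

```java
import java.util.Random;

public final class RandomizedStart {
    /** Hypothetical container for a curve's starting parameters. */
    public static final class CurveStart {
        public final double center;
        public final double beta;
        CurveStart(double center, double beta) { this.center = center; this.beta = beta; }
    }

    /** Item 1: randomize the ordering of the data points (Fisher-Yates shuffle of row indices). */
    public static void shuffleRows(int[] order, Random rng) {
        for (int i = order.length - 1; i > 0; i--) {
            int j = rng.nextInt(i + 1);
            int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
        }
    }

    /** Items 2 and 3: center at a random observed regressor value,
     *  slope with random sign and random magnitude. */
    public static CurveStart draw(double[] regressor, Random rng) {
        double center = regressor[rng.nextInt(regressor.length)];
        double sign = rng.nextBoolean() ? 1.0 : -1.0;
        double beta = sign * Math.exp(rng.nextGaussian());
        return new CurveStart(center, beta);
    }
}
```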
####
The addition of these random choices would be expected to make the model less likely to get caught up in local minima, and indeed the quality of the fit is qualitatively better than those seen before. One primary reason for this is that the optimizer finds a saddle point at zero beta, which means that it has a very difficult time reversing the sign of beta during the optimization. Always starting the beta as a positive number would therefore tend to bias the selection strongly towards results that would naturally have a positive beta, reducing the fit quality. Using a random starting point and random sign of beta helps this problem greatly.
####
One unexpected side effect is that Gaussian curves become significantly more common than logistic curves. The likely reason is that since the beta sign cannot easily be flipped, a logistic curve that starts off with the wrong sign will be unable to improve the results. A Gaussian, however, can overcome an incorrect beta sign by simply moving the centrality parameter to be very high or very low. In the future, perhaps an adjustment to attempt both signs on every iteration would reduce this bias towards Gaussians.
####
Surprisingly, annealing was unable to improve the quality of this fit. The issue appears to be the high degree of collinearity in the regressors. With so much collinearity, dropping all the curves from a given regressor does not provide a very clean slate, and only curves that interlock with the existing curves in a very particular way prove effective. Presumably, annealing would be more effective if it were used earlier in the process; here it was used only at the end. In addition, in a less contrived example where more regressors are considered, annealing is expected to prove more useful.
####
The figures below show the in-sample validation against several key variables. Notice that the qualitative fit is not much better, but the regressors (especially age) that are strongly correlated with time make much more sense due to the better sampling, which reduces the temporal clustering of the selected loans.
![C to P vs. Age[]{data-label="fig:ageValidation"}](ageValidation.jpg)
####
![C to P vs. FICO[]{data-label="fig:ficoValidation"}](ficoValidation.jpg)
####
![C to P vs. Incentive[]{data-label="fig:incentValidation"}](incentiveValidation.jpg)
####
![C to P vs. LTV[]{data-label="fig:ltvValidation"}](ltvValidation.jpg)
####
The one curve that stands out is LTV. Though the model was allowed to draw curves on LTV, it did not do so, yet the LTV results fit very well. This is due mainly to collinearity between LTV and other regressors, primarily FICO. If more up-to-date LTV values were available, this regressor would be far more useful. Below, the model curves for these regressors are shown. Here the model transition probabilities are charted directly for some sensible choice of the regressors not being charted. The effect of the excessive use of Gaussians is immediately obvious, as the curves start to do unusual things once they leave the domain where the data lies. For loans that are extreme outliers, this could matter. A solution as simple as capping/flooring each regressor at the 1-percentile level would suffice to eliminate any danger here.
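The cap/floor fix mentioned above amounts to winsorizing each regressor at chosen percentiles. A minimal sketch (hypothetical helper, not part of the ITEM package) using the empirical percentiles of the fitting sample:

```java
import java.util.Arrays;

public final class RegressorClamp {
    /** Cap and floor values at the given lower/upper percentiles, e.g. (0.01, 0.99).
     *  Percentiles are taken from the empirical distribution of the input values. */
    public static double[] winsorize(double[] values, double lo, double hi) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        double floor = sorted[(int) Math.floor(lo * (sorted.length - 1))];
        double cap = sorted[(int) Math.ceil(hi * (sorted.length - 1))];
        double[] out = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            out[i] = Math.min(cap, Math.max(floor, values[i]));
        }
        return out;
    }
}
```

Applying this before projection keeps extreme outliers inside the domain where the fitted curves are supported by data.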
####
![Transition Probability by Age[]{data-label="fig:ageCurve"}](ageCurve.jpg)
####
![Transition Probability by Fico[]{data-label="fig:ficoCurve"}](ficoCurve.jpg)
####
![Transition Probability by Incentive[]{data-label="fig:incentCurve"}](incentiveCurve.jpg)
####
![Transition Probability by LTV[]{data-label="fig:ltvCurve"}](ltvCurve.jpg)
####
Notice in particular how FICO begins to bend back after roughly 800. There is simply no appreciable level of defaults at that FICO level, and very little data of any form. Improvements to the fitting routines might eliminate the usage of Gaussians there, and greatly reduce the propensity for this sort of issue. A similar story holds for incentive.
####
Again, the important result here is not that these curves are dramatically better than curves shown by other mortgage researchers. The advantage of this approach is that the curves are generated with no human intervention. Using this as the starting point, it is easy to correct minor issues (e.g. cap and floor some regressors) and thus very quickly arrive at a very high quality model.
Conclusion
==========
The ITEM model was designed to automate repetitive tasks so that an efficient model can be built with minimal human effort. In the examples above, all fits were performed automatically, with no manual intervention beyond the initial definition of the regressors. In a typical mortgage modeling effort, a modeler would spend months analyzing data, selecting regressors, and examining model residuals. Each iteration of this process would require several days, and many dead ends would be explored, largely due to inefficient regressor and parameter choices. Generally, another process would be required to remove insignificant parameters, followed by further rounds of fitting and analysis.
####
The ITEM model automates this entire cycle. It runs thousands of iterations of fit, extend, analyze residuals, and then fit again, all within the space of minutes rather than days. The model does not add insignificant parameters, so there are none to be removed, though some annealing may be needed. Similarly, ITEM does not choose suboptimal curves or regressors, so it explores fewer dead ends. Since the model is additive in logit space, projecting it onto any individual regressor gives a fair representation of its behavior, greatly easing the job of validation.
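The additivity in logit space mentioned above can be sketched as follows (hypothetical names, not the ITEM API): each fitted curve contributes an additive term to the logit, and the transition probability is the logistic transform of the sum, which is why projecting onto one regressor at a time remains a fair view of the model.

```java
public final class LogitModel {
    /** Probability from an additive score in logit space: p = 1 / (1 + exp(-eta)),
     *  where eta is the sum of the individual curve contributions. */
    public static double probability(double[] curveContributions) {
        double eta = 0.0;
        for (double c : curveContributions) {
            eta += c;
        }
        return 1.0 / (1.0 + Math.exp(-eta));
    }
}
```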
####
The workflow using ITEM is simply to select the dataset, define the regressors, allow ITEM to fit a model, and then go straight to final model validation.
Reference Implementation
========================
####
Attached to this submission is a Java reference implementation of the ITEM model core. This reference implementation provides only the core modeling functionality. In order to run this model, the practitioner will need to define enumerations (see edu.columbia.tjw.item.base.StandardCurveType as an example) describing the statuses available to the modeled phenomenon, and also describing the regressors available. In addition, the practitioner will need to provide a class to produce grids of data implementing edu.columbia.tjw.item.ItemFittingGrid, and related interfaces.
####
Once these interfaces are implemented, the model may be fit using the ItemFitter. The resulting model is suitable for projection, but no general projection code is provided in this reference implementation. Similarly, at this time, the annealing code is not provided, but could be made available if there is interest.
####
This reference implementation is made available as a Maven package:
- groupId: edu.columbia.tjw
- artifactId: item
- version: 1.0
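For reference, these coordinates correspond to a dependency declaration like the following (assuming the package is available in a local or private repository, since it has not been published to Maven Central):

```xml
<dependency>
  <groupId>edu.columbia.tjw</groupId>
  <artifactId>item</artifactId>
  <version>1.0</version>
</dependency>
```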
####
If there is interest, this archive could be published to the Maven central repository.
Future Work
===========
Anticipated future work includes some results for annealing passes and the effect of noise introduction during fitting. In addition, the model could be applied to a larger set of regressors to show a more fully featured mortgage model. Applying the model to large regressor sets can reveal previously unknown behavior which may be worth research in its own right. The hybrid simulation approach could also be investigated more fully.
Appendix: Information-Theoretic Efficiency
==========================================
####
This paper mentions information-theoretic efficiency in several places. The term can have several meanings, so this appendix defines the meaning used for the purposes of this paper.
####
The first thing to note about modeling in general is that it is closely related to the concept of relative entropy, i.e. the entropy of data relative to a party that has access to some model or side data. In fact, a model is little more than a compression function, with all the limitations that entails. For instance, suppose the datasets $ X $ and $ Y $ are available. If one wished to transmit this data to another person, it might be more efficient to first fit a model $ A_{XY}(\vec x) = \vec y + \varepsilon $, and then transmit $ X $, $ A_{XY} $, and $ \varepsilon $. Ideally, the combined entropy of the model and the residual would be smaller than the entropy of the response $ \vec y $. We know, however, that any lossless compression algorithm that makes at least one dataset shorter must make at least one other dataset longer. So it is known a priori that there are some datasets for which this model will give such bad predictions that it actually increases the entropy. Therefore, there can be no perfect model, as any model can be defeated by carefully chosen data.
####
Because of this, most of the approaches in this paper are heuristic, with some motivation from fields such as analysis but no complete proofs.
####
The relative entropy of a data set is simply its log likelihood (up to some constant factors), so a model that improves the log likelihood of the dataset enough is worthwhile, and one that doesn’t is not. Various information criteria (e.g. AIC, BIC) are based on a calculation of the total entropy contained in the parameter set $ \theta $ and the residuals $ \varepsilon $, comparing that value to what it would be for an alternate parameter set $ \theta' $ and $ \varepsilon' $. The best model is then the one with the lowest AIC: it has extracted as much information as possible from the dataset without passing the point where additional model complexity is counterproductive.
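As a minimal illustration of this trade-off (hypothetical helper, not part of the ITEM package), the standard AIC formula $2k - 2\ln L$ can be used to decide whether adding parameters is justified:

```java
public final class InformationCriteria {
    /** Akaike information criterion: 2k - 2 * logLikelihood. Lower is better. */
    public static double aic(int paramCount, double logLikelihood) {
        return 2.0 * paramCount - 2.0 * logLikelihood;
    }

    /** Returns true if the extended model (more parameters, higher likelihood)
     *  achieves a lower AIC than the base model, i.e. the extra complexity pays off. */
    public static boolean extensionJustified(int k0, double ll0, int k1, double ll1) {
        return aic(k1, ll1) < aic(k0, ll0);
    }
}
```

For example, going from 5 to 7 parameters is justified only if the log likelihood improves by more than 2 (one unit per added parameter).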
####
With this in mind, suppose that in the typical fashion, the phenomenon actually follows some well defined but unknown distribution, with some associated noise term. Assume that the noise is pure noise, in that no model can improve its entropy, taking into consideration the entropy of the model itself.
$$\vec y = f(\vec x, \vec w) + \varepsilon$$
####
Here, $ \vec x $ is some observable data, but $ \vec w $ is unobservable. This situation might be simplified by absorbing the unobservable information into the noise term $ \varepsilon $. The noise terms may no longer be i.i.d., but they should at least be mean zero.
$$\vec y = f(\vec x) + \varepsilon$$
####
N.B. the noise term is no longer pure noise: it is possible that a model could reduce it if there is correlation between $ \vec w $ and $ \vec x $. Doing so improves the log likelihood, but it also results in a misattribution of some behavior away from $ \vec w $ towards $ \vec x $. However, being aware of the dangers, suppose the model assumes that $ \vec y $ follows some parameterized distribution $ g $ with some unknown parameters $ \theta $.
$$\vec y \approx g(\vec x | \theta) + \varepsilon$$
####
The parameter (in practice, perhaps many parameters) $ \theta $ is unknown, but it can be estimated, perhaps with maximum likelihood estimation, recovering an estimator $ \hat \theta $. In this paper, a model is called information theoretically efficient when it satisfies three requirements, expressed in terms of the above quantities and also the sample size $ N $.
1. $ \lim_{N \rightarrow \infty} g(\vec x | \theta) = f(\vec x) $
2. $ \lim_{N \rightarrow \infty} \hat \theta = \theta $
3. The variance of $ \hat \theta $ reaches the Cramer-Rao bound asymptotically.
####
Briefly, requirements (2) and (3) above are the standard requirements for an efficient estimator. The first condition states simply that the model $ g $ will converge to the actual function defining the phenomenon, given enough data, i.e. the model is not misspecified. For parameterized models, this is almost never true. Only in extremely rare circumstances would the actual physical phenomenon being modeled follow exactly the functional form chosen for the model. As the dataset becomes large, the majority of the residual error in the model would be due to this misspecification.
####
If the model is known to be misspecified and does not converge to the true distribution, then it may still be possible to define a sequence of models such that
$$\lim_{n \rightarrow \infty} g_{n}(\vec x | \theta_{n}) = f(\vec x)$$
####
Here, the variable $ n $ determines how many variables are included in the model. In other words, $ g_{n}(\vec x | \theta_{n}) $ is now a family of models with an unbounded number of parameters. In this case, information theoretic efficiency expands to four separate items. It is understood that all the following equations hold only as $ N \rightarrow \infty $.
1. $ \lim_{n \rightarrow \infty} g_{n}(\vec x | \theta_{n}) = f(\vec x) $
2. $ | g_{n}(\vec x | \theta_{n}) - f(\vec x) | $ is minimized for all $ n $
3. $ \hat \theta = \theta $
4. The variance of $ \hat \theta $ reaches the Cramer-Rao bound asymptotically.
####
Again, the first item is ideal but unlikely in practice. Similarly, the second item is an idealization, but can be taken to mean that the model is parsimonious with parameters, getting meaningful improvement from each parameter added. The last two items are, as before, measures of unbiased and asymptotically efficient estimation. A maximum likelihood estimator is (under most conditions) guaranteed to be both asymptotically efficient and asymptotically unbiased, so the last two conditions will then be satisfied. Any parameterized model is also virtually guaranteed not to converge to exactly the actual phenomenon under study. What remains, then, is to consider the distance between the model and the phenomenon for any given number of parameters; when information theoretic efficiency is discussed, this is typically what will be meant. For this notion of closeness, a typical choice would be Kullback-Leibler divergence, though any equivalent measure would do.
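For concreteness, the Kullback-Leibler divergence mentioned above can be computed directly for discrete distributions (a minimal sketch, not part of the ITEM package):

```java
public final class Divergence {
    /** Kullback-Leibler divergence D(p || q) between two discrete distributions,
     *  in nats. Terms with p[i] == 0 contribute zero by convention. */
    public static double kl(double[] p, double[] q) {
        double sum = 0.0;
        for (int i = 0; i < p.length; i++) {
            if (p[i] > 0.0) {
                sum += p[i] * Math.log(p[i] / q[i]);
            }
        }
        return sum;
    }
}
```

Note that D(p || q) is zero exactly when the distributions agree, and is not symmetric in p and q, which is why it is a divergence rather than a metric.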
####
Since no model can be perfect, by the compression arguments above, there is no true solution to this problem and no ideal model. It is therefore necessary to proceed with heuristics for which there is some theoretical basis to believe they will in many cases produce good models. One common assumption is that $ f(\vec x) $ is smooth and continuous with respect to $ \vec x $; depending on the model family, other assumptions may be needed as well.
---
abstract: 'We present a physical scenario in which both Fermi arcs and two-dimensional gapless Dirac states coexist as boundary modes at the *same* two-dimensional surface. This situation is realized in topological insulator–Weyl semimetal interfaces in spite of explicit time reversal symmetry breaking. Based on a heuristic topological index, we predict that the coexistence is allowed when (i) the corresponding states of the Weyl semimetal and topological insulator occur at disconnected parts of the Brillouin zone separated by the Weyl nodes and (ii) the time-reversal breaking vector defining the Weyl semimetal has no projection parallel to the domain wall. This is corroborated by a numerical simulation of a tight binding model. We further calculate the optical conductivity of the coexisting interface states, which can be used to identify them through interference experiments.'
author:
- 'Adolfo G. Grushin'
- 'Jörn W. F. Venderbos'
- 'Jens H. Bardarson'
title: Coexistence of Fermi arcs with two dimensional gapless Dirac states
---
*Introduction*—Protected surface states are the key characteristic of topological phases of matter. Time reversal invariant topological insulators (TI) host at their surface two-dimensional (2D) massless Dirac quasiparticles protected by time reversal symmetry ($\mathcal{T}$) [@HK10; @QZ11; @HM11]. Weyl semimetals (WSM) are 3D gapless materials described at low energy by Weyl fermions. They host topologically robust surface states referred to as Fermi arcs since they form an open Fermi surface [@Wan:2011hi; @Burkov:2011de; @WK12; @Hosur:2013eb; @Turner:2013tf; @Vafek:2014hl]. Their emergence can be understood in terms of charge conservation (gauge invariance) of the effective field theory describing the WSM’s response to external electromagnetic fields, in analogy with the quantum Hall effect [@Grushin:2012cb; @Goswami:2013jp; @Ramamurthy:2014uh]. Such an effective response predicts a number of striking physical properties, such as a finite Hall conductivity [@Burkov:2011de; @Aji:2012gs; @Grushin:2012cb; @Goswami:2013jp; @Liu:2013kv], a current parallel to an external magnetic field (chiral magnetic effect) [@Zyuzin:2012ca; @Grushin:2012cb; @Goswami:2013jp; @Landsteiner:2014fw], and a finite angular momentum induced by a thermal gradient (axial magnetic effect) [@CCG14]. The band structure of a WSM is characterized by a linear dispersion around a set of non-degenerate band touching points called Weyl nodes. Their existence requires the breaking of either time reversal $\mathcal{T}$ or inversion $\mathcal{I}$ symmetry [@Turner:2013tf; @Vafek:2014hl]. Each Weyl node is chiral and has an associated momentum space Berry flux that gives it the character of a Berry flux monopole. Since the total flux in momentum space is required to be zero by gauge invariance [@BH13], the Weyl nodes must appear in pairs with opposite monopole charge [@NielNino81a; @NielNino81b].
They are therefore topologically stable as they can only be annihilated by bringing together a pair with opposite chirality [@KLINKHAMER:2005bi]. The simplest realization of a topological semimetal, one exhibiting only a single pair of nodes, necessarily breaks time-reversal symmetry. Indeed, $\mathcal{T}$ symmetry connects two Weyl nodes with the *same* monopole charge [@YZT12], implying the existence of at least another pair with opposite chirality [@HB12; @O13]. From this symmetry perspective, the coexistence of 2D Dirac TI surface states and pairs of Fermi arcs is in principle allowed if $\mathcal{T}$ symmetry is respected, as inferred from ab-initio calculations [@WSC12]. However, in the minimal two-Weyl-node model, this symmetry is broken and the two types of surface states seem mutually exclusive; the 2D Dirac TI surface state is protected by $\mathcal{T}$ while Fermi arcs are only realized in its absence.
![\[Fig:phasediag\] (Color online) Left: Phase diagram for the model at $b_0 = 0$ as a function of $\mathbf{b}=(b_{x},0,0)$ and $M$. The four gapped phases are a weak (WTI) and a strong (STI) topological insulator and trivial (I) insulator. The Weyl semimetal phases have six (WSM$_{6}$), four (WSM$_{4,4'}$) or two Weyl nodes (WSM$_{2,2'}$). Solid lines represent gap closing or Weyl node annihilations at the corresponding Brillouin zone momenta. Upper right panel: Geometry, with periodic boundary conditions, used in numerical simulations. Lower right panel: A schematic of a proposed interference experiment to observe the coexistence of surface states.](Fig1.pdf)
In this work we show how to circumvent this apparent dichotomy and realize both states at the *same* surface, the interface of a WSM–TI heterostructure, and we calculate the optical conductivity of the coexisting interface states, which serves as their distinct experimental signature. We demonstrate this by modelling such interfaces using a canonical cubic lattice model describing TIs supplemented with symmetry breaking fields that can drive the system into a WSM phase [@QHZ08; @Vazifeh:2013fe]. Spatially dependent parameter fields realize a generic model of a domain wall between two different phases. We numerically observe that coexistence occurs when the Fermi arcs and 2D massless Dirac surface states occupy distinct parts of the Brillouin zone delimited by the Weyl nodes [^1]. This observation is captured by a heuristic topological index $$\label{eq:index}
\mathcal{J}_{a}=C_{a}\pi_{a},$$ defined for each surface time reversal invariant momentum $\Lambda_a$ and written in terms of known properties of the two phases (we use indices $a,b,\ldots$ to label surface momenta and $i,j,\ldots$ bulk momenta). Namely, the time reversal polarizations $\pi_{a}=\pm 1$ determine the presence or absence of 2D Dirac surface states at $\Lambda_a$ [@FKM07; @FK07] while $C_{a}=0\, (1)$ when a Fermi arc exists (is absent) in the vicinity of $\Lambda_a$. By computing the index $\mathcal{J}_a$ for all $\Lambda_a$ one can predict whether coexistence of both surface states is allowed on a given surface or not (details are given below). *Coexistence of Fermi arcs and 2D massless Dirac states*—We model the bulk phases with the two orbital spinful cubic lattice Hamiltonian [@QHZ08; @Vazifeh:2013fe]
\[H\] $$\begin{aligned}
H &= H_{\mathrm{TI}} + H_{b}, \\
H_{\mathrm{TI}}&=t\sum_{\bf{x}, \hat{\mathbf{j}}} c^{\dagger}_{\mathbf{x}}\frac{\Gamma_0 - i\Gamma_j}{2}
c_{\mathbf{x}+ \hat{\mathbf{j}}} + \text{h.c.} + M \sum_{\mathbf{x}}c^{\dagger}_{\mathbf{x}} \Gamma_0 c_{\mathbf{x}},\\
H_{b} &= \sum_{\mathbf{x},\mu}b_{\mu}~c^{\dagger}_\mathbf{x} \Gamma_{\mu}^{(b)} c_{\mathbf{x}}.\end{aligned}$$
The position vector $\mathbf{x}$ runs over the sites of the cubic lattice, $\hat{\mathbf{j}}=\hat{x},\hat{y},\hat{z}$ is a unit vector in each cartesian direction, and $\mu=0,x,y,z$. The operator $c_{\mathbf{x}}=(c_{\mathbf{x}A\uparrow},c_{\mathbf{x}A\downarrow}, c_{\mathbf{x}B\uparrow},c_{\mathbf{x}B\downarrow})$ where $c_{\mathbf{x}\sigma s}$ annihilates an electron in orbital $\sigma=A,B$ at $\mathbf{x}$ with spin $s = \uparrow,\downarrow$. The gamma matrices are given by $\Gamma^{\mu}=(\sigma_{x}\otimes s_{0},\sigma_{z}\otimes s_{y},\sigma_{z}\otimes s_{x},\sigma_{y} \otimes s_0)$ and $\Gamma_{\mu}^{(b)}=(\sigma_{y}\otimes s_{z},\sigma_{x} \otimes s_{x},\sigma_{x} \otimes s_{y},\sigma_0 \otimes s_{z})$, where $s_j$ and $\sigma_j$ are Pauli matrices describing the spin and orbital degree of freedom respectively and $s_0,\sigma_0$ are the corresponding identity operators. The first term $H_{\mathrm{TI}}$ generally models a gapped insulator, apart from special parameter values that correspond to phase transitions between different insulating phases. At the time reversal symmetric momenta $\Lambda_i$ of the cubic lattice, $\boldsymbol{\Gamma}=(0,0,0),\mathbf{X}=\mathcal{P}[(\pi,0,0)],\mathbf{M}=\mathcal{P}[(\pi,\pi,0)]$ and $\mathbf{R}=(\pi,\pi,\pi)$ with $\mathcal{P}$ the permutation operator, the gap is given by $2m_{i}$ with $m_{\boldsymbol{\Gamma},\boldsymbol{R}}=M\mp 3t$ and $m_{\boldsymbol{X},\boldsymbol{M}}=M\mp t$. Depending on the relative signs of the masses one obtains a strong (STI, $t<|M|<3t$), a weak (WTI, $|M|<t$), or a trivial insulator (I, $|M| > 3t$), cf. the horizontal axis in the phase diagram of Fig. \[Fig:phasediag\]. The term $H_b$ is parameterized by the four-vector $b_{\mu}=(b_{0},\mathbf{b})$, where the pseudo-scalar $b_{0}$ breaks $\mathcal{I}$ and the pseudo-three-vector $\mathbf{b}$ breaks $\mathcal{T}$. For now, we focus on the case of main interest $b_0 = 0$ and take $\mathbf{b}=b_{x}\hat{x}$ without loss of generality. 
We consider the effect of a nonzero $b_0$ later. With increasing $b_x$ the gap closes at one (or more) of the bulk time reversal symmetric momenta at which point the bulk spectrum is characterized by a 3D Dirac cone. Upon further increase of $b_x$ this Dirac cone splits into two 3D Weyl nodes and a WSM phase is obtained. Depending on the number of gap closings one obtains a WSM phase with two, four, or six Weyl nodes. A fully representative corner of the phase diagram is provided in Fig. \[Fig:phasediag\]. To model an interface between two distinct phases we endow the parameters of the Hamiltonian with a position dependence. We limit ourselves to sharp interfaces parallel to the $x-z$ plane [^2] with an infinite lattice in the $x$ and $z$ directions and a finite width $L_y$ in the $y$ direction. To avoid interfaces with the vacuum, we take periodic boundary conditions in the $y$ direction resulting in two domain walls, see Fig. \[Fig:phasediag\]. Explicitly, $$(b_x,M) =
\begin{cases}
(b_1,M_1) & \text{if } \frac{L_y}{4} < y < \frac{3L_y}{4}, \\
(b_2,M_2) & \text{otherwise}.
\end{cases}$$ Depending on the values of $(b_1,M_1)$ and $(b_2,M_2)$, we obtain an interface between any two phases of the model; in the following we focus on interfaces between a WSM and either a strong or a weak TI. In Fig. \[Fig:interpol\] we plot cuts through the energy spectrum for $k_{z}=0$ and $k_{z}=\pi$ as a function of $k_{x}$ for domain walls STI–WSM$_4$ and WTI–WSM$_{4'}$, demonstrating the coexistence at zero energy of Fermi arcs and massless 2D Dirac states separated by the Weyl nodes. For the WTI and STI cases there are an even and an odd number of massless Dirac states at the interface, respectively. The zero modes are doubly degenerate with one state localized at each domain wall. Importantly, not all STI–WSM or WTI–WSM domain walls have coexisting Fermi arc and Dirac states. For instance, at an STI–WSM$_{4'}$ interface (data not shown), where both types of surface states would have to exist in the same region of momentum space, only Fermi arcs are numerically observed. This suggests that coexistence at the interface is allowed only as long as the states do not overlap in momentum space. To assess the robustness of these states, we have studied the effect of a finite $\mathcal{I}$ breaking term $b_{0}$ on the WSM side of the domain wall. For a bulk WSM nonzero $b_{0}$ acts as a relative energy shift for the Weyl nodes and may lead to the chiral magnetic effect, the existence of which is still debated [@Volovik:1999da; @Aji:2012gs; @Zyuzin:2012ca; @Grushin:2012cb; @Zyuzin:2012kl; @Goswami:2013jp; @Liu:2013kv; @Vazifeh:2013fe; @Chen:2013ep; @Goswami:wl; @Hosur:2012fl; @Landsteiner:2014fw]. At the interface $b_0$ endows the Fermi arcs with finite dispersion and tilts the 2D Dirac states, see right column of Fig. \[Fig:interpol\]. Increasing $b_{0}$ further ultimately opens up a gap in the WSM and destroys the Fermi arcs [@Zyuzin:2012ca; @Grushin:2012cb].
The coexistence is, furthermore, only allowed when $\mathbf{b}$ is perpendicular to the domain wall direction $\hat{y}$; a finite parallel component ($b_y\neq 0$) acts as a Zeeman term for the 2D Dirac surface states and opens up a gap (as we have verified numerically). This effect is minimized by aligning the domain wall along $\mathbf{b}$, which physically is an intrinsic magnetization and likely to be aligned with an experimentally identifiable crystallographic direction.
![\[Fig:interpol\] (Color online) Upper row: Band structure of a finite slab of WSM$_4$–STI heterostructure with periodic boundary conditions \[$(b_x,M)_\text{STI} = (0,2.6)$ and $(b_x,M)_{\text{WSM}_4} = (2,2.6)$\]. Lower row: Band structure for a finite slab corresponding to a WSM$_{4'}$–WTI configuration \[$(b_x,M)_{\text{STI}} = (0,0.2)$ and $(b_x,M)_{\text{WSM}_{4'}} = (2,0.2)$\]. The $k_{z}=0,\pi$ cuts shown in the first and second columns demonstrate the coexistence of 2D Dirac states with Fermi arcs in both cases. The third column shows the effect of $b_{0}=0.25$. Band structures are obtained for systems with linear dimension $L_y = 160$.](figure2_v4c.pdf){width="\columnwidth"}
*Coexistence from bulk topology—*Our numerical results are captured by the topological index $\mathcal{J}_{a}$ defined in Eq. \[eq:index\], the construction and use of which we now explain. The index relies on the observation that the bulk properties of the WSM and TI impose conditions on where their corresponding surface states must occur. When they are all compatible, the coexistence is allowed and protected by the presence of the Weyl nodes.
To locate the TI Dirac surface states in momentum space we follow Refs. [@FKM07; @FK07] and define for each bulk time reversal invariant momentum $\Lambda_i$ the product of the parity eigenvalues $\delta_{i}$ of the filled Kramers pairs. For a surface perpendicular to a lattice vector, each time-reversal invariant surface momentum $\Lambda_a$ is a projection of two bulk momenta $\Lambda_{i}$ and $\Lambda_{j}$, see Fig. \[Fig:index\](a). It has associated with it the time reversal polarization $\pi_{a}=\delta_{i}\delta_{j}=\pm 1$ that determines the number and connectivity of the Dirac surface states [@FKM07; @FK07]. Namely, for any path connecting $\Lambda_{a}$ and $\Lambda_{a'}$ with $a \neq a'$ there is an odd (even) number of crossings at the Fermi level if $\pi_{a}\pi_{a'}=-1\,(1)$. For $H_{\mathrm{TI}}$ in Eq. \[H\], $\delta_{i}=-\mathrm{sign}[m_{i}]$.
To similarly locate the Fermi arcs we employ the construction shown in Fig. \[Fig:index\](b). Namely, we define a 2D Hamiltonian by restricting the 3D Hamiltonian to a cylinder enclosing a 3D time reversal invariant momentum $\Lambda_i$ but none of the Weyl nodes, and parameterize it by $(k_y,\lambda)$ with $\lambda \in [0,2\pi]$ describing circles in the $(k_x,k_z)$ plane. If a Fermi arc exists between the surface Brillouin zone projections of two given Weyl nodes around $\Lambda_a$ it will cross this momentum as long as there is particle-hole symmetry [@Ran06; @JRL14]. For open boundary conditions in $k_{y}$, the 2D Hamiltonian has a midgap state at the intersection of the cylinder and the Fermi arc, see Fig. \[Fig:index\](b). This enables us to define an index $C_{a}=0\,(1)$ that counts if there is a Fermi arc crossing $\Lambda_a$ (or not). [^3]
The two quantities $C_{a}$ and $\pi_{a}$ are separately obtained from the bulk of the WSM and the TI, respectively. Since we are interested in a domain wall, we combine them in the index $\mathcal{J}_{a}=0,\pm 1$ associated with each interface momentum $\Lambda_{a}$. From the index $\mathcal{J}_{a}$ we deduce the occurrence of nontrivial surface phenomena as follows. At every $\Lambda_{a}$ where $\mathcal{J}_{a}=0$ a Fermi arc occurs. At a $\Lambda_{a}$ for which $\mathcal{J}_{a}\neq0$ the Fu-Kane criterion described above directly applies, and analysis of the $\pi_{a}$ determines the existence of Dirac surface states. Hence, the key physical content captured by $\mathcal{J}_{a}$ is that the two types of states are mutually exclusive at any given $\Lambda_a$. We note that $C_{a}$, and by extension $\mathcal{J}_{a}$, relies on the fact that the Fermi arc crosses the 2D cylinder shown on the left panel of Fig. \[Fig:index\](b), which is guaranteed even if $\mathcal{I}$-breaking terms are present, as long as there is no gap opening in the WSM and particle-hole symmetry is respected. To exemplify the use of the index, we apply this construction to the two domain walls considered above, as shown schematically in Fig. \[Fig:index\](c). First, for the STI–WSM$_{4}$ case we find that $C_{a}=0$ for $\Lambda_a =(k_{x},k_{z})=(0,\pi)$ and $(\pi,0)$, which are thus intersected by Fermi arcs. Second, $C_{a}=1$ and $\pi_{a}=-1,1$ for $\Lambda_a =(0,0),(\pi,\pi)$ respectively, indicating an odd number of crossings at the Fermi level between these two surface momenta, represented by a shaded circle in the left panel of Fig. \[Fig:index\](c). These conclusions are in perfect agreement with the numerical results shown in the upper panels of Fig. \[Fig:interpol\].
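Since the equation defining $\mathcal{J}_{a}$ is not reproduced in this excerpt, the decision rule above can be sketched as plain bookkeeping; the momentum labels, function names, and the (unused) $\pi_a$ values at the arc momenta below are our own assumptions, not the paper's.

```python
# Hypothetical sketch: encode the decision rule stated in the text.
# C[a] = 0 (1) marks the presence (absence) of a Fermi arc at surface
# momentum a; pi[a] = +/-1 is its time-reversal polarization.

def classify_surface(C, pi):
    """Predict Fermi arcs and Dirac-state connectivity from bulk data."""
    # J_a = 0: a Fermi arc crosses this momentum.
    arcs = [a for a, c in C.items() if c == 0]
    # J_a != 0: the Fu-Kane criterion applies; any path between momenta a, b
    # with pi_a * pi_b = -1 hosts an odd number of Dirac crossings.
    candidates = [a for a, c in C.items() if c == 1]
    odd_pairs = [(a, b) for i, a in enumerate(candidates)
                 for b in candidates[i + 1:] if pi[a] * pi[b] == -1]
    return arcs, odd_pairs

# STI-WSM_4 domain wall of the text; pi at the arc momenta is unused and
# set arbitrarily to +1 here.
C  = {'(0,0)': 1, '(0,pi)': 0, '(pi,0)': 0, '(pi,pi)': 1}
pi = {'(0,0)': -1, '(0,pi)': 1, '(pi,0)': 1, '(pi,pi)': 1}
arcs, odd_pairs = classify_surface(C, pi)
# arcs -> ['(0,pi)', '(pi,0)'];  odd_pairs -> [('(0,0)', '(pi,pi)')]
```

The output reproduces the STI–WSM$_{4}$ prediction of the text: Fermi arcs at $(0,\pi)$ and $(\pi,0)$, and Dirac states connecting $(0,0)$ and $(\pi,\pi)$.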
For the second domain wall, the WTI–WSM$_{4'}$, the same procedure predicts two (an even number because of the WTI) Dirac surface states centered around $(0,\pi)$ and $(\pi,0)$ and two Fermi arcs crossing $(0,0)$ and $(\pi,\pi)$, see Fig. \[Fig:index\](c) centre panel. Again, this agrees with the numerical results (see Fig. \[Fig:interpol\] lower panels). An immediate consequence of our analysis is that not all WSM–TI interfaces host coexisting surface states, even when $\mathbf{b}$ is aligned perpendicular to the domain wall. For instance, a domain wall involving the interpolation $(b_1, M_{1})\in$ STI $\to (b_2, M_{2}) \in$ WSM$_{4'}$ imposes, through $\mathcal{J}_{a}$, that the Dirac states and the WSM Fermi arcs would have to cross $E=0$ at the same $\Lambda_{a}$, see Fig. \[Fig:index\](c) right panel. Since, as described above, Dirac states can occur only at $\Lambda_{a}$ where there are no Fermi arcs, only the two Fermi arcs corresponding to the WSM$_{4'}$ phase exist, crossing $(k_{x},k_{z})=(0,0),(\pi,\pi)$. Consistent with our analysis based on $\mathcal{J}_{a}$, and as shown in the rightmost panels of Fig. \[Fig:interpol\], the inclusion of $b_{0}$ does not alter the coexistence of the surface states as long as it does not drive a phase transition to an insulator.
![\[Fig:index\] (Color online) Construction of the topological index: a) Projection of the bulk band parity products $\delta_{i}$ defining the time reversal polarization $\pi_{a}$ at each surface momentum $\Lambda_{a}$. The signs of the $\pi_{a}$ determine the presence (absence) of TI surface states [@FKM07; @FK07]. b) A cylindrical 2D Hamiltonian around a bulk time reversal invariant momentum $\Lambda_i$, parametrized by $k_y$ and $\lambda \in [0,2\pi)$ (left panel). For open boundary conditions in $k_{y}$, the presence (absence) of a Fermi arc intersecting the cylinder shows up as one (no) midgap state, defining $C_{a}=0\,(1)$ (right panel). Lower panels: Schematic interface Brillouin zone including the value of the index for three different domain wall configurations (from left to right): STI–WSM$_{4}$, WTI–WSM$_{4'}$ and STI–WSM$_{4'}$. The shaded regions enclose 2D massless Dirac surface states separated from the Fermi arcs (FA) in momentum space. ](Fig3.pdf)
*Optical conductivity of the surface states—*The presence of the surface states alters the response to an external electromagnetic field, which can be probed by optical spectroscopy. The reflection coefficients determine the optical response and are related to the optical conductivity, which is composed of a bulk and a surface-state contribution [@Born99]. Experimentally, the bulk optical signature of TIs has been accessed via optical spectroscopy [@LFP10], and the bulk WSM optical signature is theoretically well understood [@HQ14; @AC14]. Here, we compute the optical conductivity of the surface states reported above; in particular, the optical conductivity of the Fermi arc has, to our knowledge, not been calculated before. We are interested in the linear-response, long-wavelength limit, where the incoming radiation has frequency $\omega$ and the momentum transfer satisfies $\mathbf{p}\to 0$. The Fermi arc and the massless 2D Dirac fermion exist in separate parts of the Brillouin zone. The total interface conductivity is therefore given by the sum of their individual and independent contributions $\sigma^{ij}_\text{surf}(\omega)=\sigma^{ij}_\text{df}(\omega)+\sigma^{ij}_\text{fa}(\omega)$. The optical conductivity of a single 2D Dirac fermion, $\sigma_\text{df}^{xx}(\omega)=\sigma_\text{df}^{zz}(\omega)=\frac{\pi}{8}\frac{e^2}{h}$, is well known from the context of graphene [@CGP09]; it is isotropic and independent of the frequency $\omega$. The Fermi arc can be modeled as a single chiral fermion $\psi_{+}$ with definite chirality, chosen to be positive without loss of generality. Its Lagrangian is $\mathcal{L}= \psi^{\dagger}_{+}(i\partial_{0}+i\partial_{z})\psi_{+}$. To calculate its contribution we use the Kubo formula $$\mathrm{Re}\,\sigma^{ij}(\omega)=-\lim_{\mathbf{p}\to 0}\frac{1}{\omega}\mathrm{Im}\,\Pi^{ij}(\omega,\mathbf{p})$$ that expresses the optical conductivity in terms of the polarization tensor $\Pi^{ij}(\omega,\mathbf{p})$.
We find (see [^4] for details of the calculation) $$\begin{aligned}
\label{eq:faopt}
\mathrm{Re}\,\sigma^{zz}_\text{fa}(\omega)&=&\dfrac{e^2\kappa_{0}}{2\pi^2\omega},\; \mathrm{Re}\,\sigma^{xz}_\text{fa}(\omega)=\mathrm{Re}\,\sigma^{xx}_\text{fa}(\omega)=0,\end{aligned}$$ where $2\kappa_{0}$ is the separation between the Weyl nodes in momentum space. This prefactor reflects the fact that the Fermi arc only exists on a bounded part of the 2D surface Brillouin zone, delimited by the surface projections of the Weyl nodes. The optical conductivity of the Fermi arc is thus highly anisotropic and divergent as $\omega\rightarrow0$ in the clean limit. The total interface optical conductivity $\sigma^{ij}_\text{surf}(\omega)$ could be measured in a setup such as the one schematically shown in the lower left panel of Fig. \[Fig:phasediag\], where a TI thin film is deposited on top of a WSM, inspired by existing optical probes [@LFP10]. The TI bulk response vanishes for frequencies below the bulk gap, while the bulk WSM response is proportional to $\omega$ [@HQ14; @AC14]. Thus, for sufficiently low frequencies the response is determined by the surface, and the coexistence can be probed by measuring the anisotropic Drude-like peak given above together with the constant isotropic contribution from the 2D Dirac states.
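A minimal sketch of the resulting anisotropy, in our own notation and assuming natural units $\hbar = v = 1$ (so that $e^2\kappa_0/(2\pi^2\omega) = (e^2/h)\,\kappa_0/(\pi\omega)$ with $h = 2\pi\hbar$): the total $\sigma^{zz}$ combines the Drude-like arc peak with the constant Dirac-cone value, while $\sigma^{xx}$ contains only the latter.

```python
import math

# Total interface conductivity in units of e^2/h (hbar = v = 1 assumed).
# sigma_df = (pi/8) e^2/h per 2D Dirac cone; the Fermi-arc term is
# e^2*kappa0/(2*pi^2*omega) = (e^2/h) * kappa0/(pi*omega).

SIGMA_DF = math.pi / 8                 # Dirac-cone constant, units of e^2/h

def sigma_surf_zz(omega, kappa0):
    """Re sigma^zz: Drude-like Fermi-arc peak plus the Dirac-cone constant."""
    return kappa0 / (math.pi * omega) + SIGMA_DF

def sigma_surf_xx(omega, kappa0):
    """Re sigma^xx: only the isotropic Dirac cone contributes."""
    return SIGMA_DF

# Anisotropy grows as omega -> 0: the arc term diverges, the Dirac term does not.
ratio = sigma_surf_zz(0.01, kappa0=1.0) / sigma_surf_xx(0.01, kappa0=1.0)
```

At low frequency the ratio $\sigma^{zz}/\sigma^{xx}$ grows without bound in the clean limit, which is the signature the proposed optical measurement would look for.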
*Discussion and conclusions—*In this work, we have numerically demonstrated the possibility for Fermi arcs and 2D Dirac fermions to coexist at the same surface, the interface of a Weyl semimetal and a topological insulator, in spite of explicit $\mathcal{T}$ symmetry breaking. This is only possible if they do not coexist in the same region of reciprocal space and if the time-reversal-breaking ${\bf b}$-vector of the WSM is perpendicular to the domain wall direction. We have introduced a heuristic topological index $\mathcal{J}_{a}$, based on bulk topology, that can predict if and where the surface states are realized. This index captures the universal features of the bulk phases independent of the crystalline symmetry. Thus, it applies also to systems with rhombohedral symmetry (e.g., the Bi$_2$Se$_3$ family) that share the structure of the generic Hamiltonian studied here [@LQZ10]. Even though Fermi arcs and Dirac cones can coexist, the latter are not as robust as those at a TI–trivial insulator interface. A component of the ${\bf b}$-vector parallel to the domain wall direction acts as a Zeeman term and gaps out the Dirac cone. The optical conductivity of the Fermi arc is found to be highly anisotropic, and therefore optical spectroscopy serves as a probe of the coexistence of 2D Dirac and Fermi arc surface states. In sum, our results uncover the interplay of distinct topological bulk phenomena, topological insulators and semimetals, by showing that surface states of different nature can coexist at the same surface.
*Acknowledgements—*A.G. G. is grateful to E. V. Castro for inspiring discussions from which this work originated. We thank F. de Juan and W. Witczak-Krempa for interesting comments and remarks, and T. Ojanen for discussions.
\[sec:app\] Appendix: Optical conductivity of the surface states
================================================================
In this appendix we derive the linear response of the surface states to an external electromagnetic field in the long wavelength limit, where the incoming radiation has frequency $\omega$ and the momentum transfer satisfies $\mathbf{p}\to 0$. Since the Fermi arc and the massless 2D Dirac fermion live in distinct regions of the surface Brillouin zone we calculate separately their contributions to the conductivity, labelled $\sigma^{ij}_\text{df}(\omega,\mathbf{p})$ and $\sigma^{ij}_\text{fa}(\omega,\mathbf{p})$ respectively. Mathematically $$\label{eqapp:current}
j^{i}(\omega,\mathbf{p})=\sigma^{ij}(\omega,\mathbf{p})E_{j}(\omega,\mathbf{p})
=\left[\sigma^{ij}_\text{df}(\omega,\mathbf{p})+\sigma^{ij}_\text{fa}(\omega,\mathbf{p})\right]E_{j}(\omega,\mathbf{p}),$$ where the current density $j^{i}(\omega,\mathbf{p})$ and the external electric field $E_{i}(\omega,\mathbf{p})$ depend on the external frequency $\omega$ and momentum $\mathbf{p}$.\
For completeness and to establish notation, we begin by reviewing the calculation of the contribution from the single massless 2D Dirac fermion. This can be adapted from known results in the context of graphene [@CGP09] by simply dropping the valley and spin degeneracy factors. In linear response, we express the current in terms of the vector potential $A_{j}$ as $j^{i}=\Pi^{ij}A_{j}$, where $\Pi^{ij}$ are the spatial components of the polarization tensor $\Pi^{\mu\nu}$. To first order, the polarization tensor is given by the bubble diagram shown in Fig. \[Fig:bubble\], which represents the integral $$\label{eqapp:bubbleint}
i\Pi^{\mu\nu}_\text{df}(p)=-(-ie)^2\mathrm{Tr}\int \dfrac{d^3k}{(2\pi)^3} G(k)\gamma^{\mu}G(k+p)\gamma^{\nu},$$ with $G(k)=i/\slashed{k}$, $k^{\mu}=(k_{0},\mathbf{k})$, and $p^{\mu}=(\omega,\mathbf{p})$. We use Feynman’s slashed notation defined through $\slashed{k}=k_{\mu}\gamma^{\mu}$. The integral, written as $$\label{eqapp:bubble}
\Pi^{\mu\nu}_\text{df}(p)=(-ie)^2\int \dfrac{d^3k}{(2\pi)^3}\dfrac{\mathrm{Tr}\left[\slashed{k}\gamma^{\mu}(\slashed{k}+\slashed{p})\gamma^{\nu}\right]}{k^2(k+p)^2},$$ has precisely the same form as the QED(2+1) bubble and is linearly divergent by naive power counting. Nonetheless, the final result is finite and can be obtained using, for example, dimensional regularization. By analytic continuation to dimension $d$ the result can be written in terms of the auxiliary Feynman parameter $x\in[0,1]$ as [@Peskin] $$\label{eqapp:dbubble}
i\Pi^{\mu\nu}_\text{df}(p)=-i(p^2 g^{\mu\nu}-p^{\mu}p^{\nu})\dfrac{2e^2}{(4\pi)^{d/2}}\mathrm{Tr}\left[\mathbbm{1}\right]\int_{0}^{1} dx\dfrac{\hspace{2mm} x(1-x) \Gamma(2-d/2)}{(-x(1-x)p^2)^{2-d/2}},$$ where $\mathrm{Tr}\left[\mathbbm{1}\right]$ is the trace of the identity in the space of the Pauli matrices. This last result will prove useful for the computation of the Fermi arc contribution to the conductivity. For the 2D massless Dirac cone one can set the spacetime dimension to $d=3$ and $\mathrm{Tr}\left[\mathbbm{1}\right]=2$ to evaluate the integral and obtain [@GGV94] $$\label{eqapp:dbubbleres}
i\Pi^{\mu\nu}_\text{df}(p)=-i(p^2 g^{\mu\nu}-p^{\mu}p^{\nu})\dfrac{4e^2}{(4\pi)^{3/2}}
\dfrac{-i\pi^{3/2}}{8\sqrt{\omega^2-\mathbf{p}^2}}=-\dfrac{e^2}{16}\dfrac{(p^2 g^{\mu\nu}-p^{\mu}p^{\nu})}{\sqrt{\omega^2-\mathbf{p}^2}}.$$ We are interested in the real part of the optical conductivity, which can, for example, be probed by optical spectroscopy, and is related to the polarization tensor $\Pi^{ij}$ through $$\begin{aligned}
\label{eqapp:Kubo1}
\mathrm{Re}\,\sigma^{ij}(\omega,\mathbf{p})&=&-\lim_{\mathbf{p}\to 0}\dfrac{1}{\omega}\mathrm{Im}\,\Pi^{ij}(\omega,\mathbf{p}).\end{aligned}$$ Using current conservation, $\partial_{\mu} j^{\mu}=0$, $$\mathrm{Re}\,\sigma^{ij}(\omega,\mathbf{p})=-\lim_{\mathbf{p}\to 0}\dfrac{\omega}{p_{i}p_{j}}\mathrm{Im}\,\Pi^{00}(\omega,\mathbf{p})
=\lim_{\mathbf{p}\to 0}\dfrac{\omega}{p_{i}p_{j}}\mathrm{Im}\left[\dfrac{e^2}{16}\dfrac{\mathbf{p}^2}{\sqrt{\mathbf{p}^2-\omega^2}}\right]
\label{eqapp:final2D}
=\dfrac{\pi}{8}\dfrac{e^2}{h}\delta^{ij},$$ where in the last step we have introduced the Kronecker delta $\delta^{ij}$ and restored the factors of $\hbar$ to obtain the optical conductivity for 2D massless Dirac fermions used in the main text [@CGP09].
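As a quick cross-check of the prefactors (our own verification, not part of the paper), the $d=3$ and $d=2$ evaluations of the master integral can be reproduced numerically in a few lines:

```python
import math

# Sanity check of the dimensionally regularized bubble prefactors.

# d = 3: Feynman-parameter integral int_0^1 sqrt(x(1-x)) dx = pi/8 (midpoint rule).
N = 200_000
I3 = sum(math.sqrt((k + 0.5) / N * (1 - (k + 0.5) / N)) for k in range(N)) / N

# d = 3, Tr[1] = 2: prefactor 2*Tr[1]/(4 pi)^(3/2) * Gamma(1/2) * I3 = 1/16,
# reproducing i Pi_df = -(e^2/16)(p^2 g - p p)/sqrt(omega^2 - p^2).
pref_df = 2 * 2 / (4 * math.pi) ** 1.5 * math.gamma(0.5) * I3

# d = 2, Tr[1] = 2: the x-integrand collapses to a constant and the integral to 1;
# prefactor 2*Tr[1]/(4 pi) = 1/pi.  The extra 1/2 from the projector and the
# kappa0/pi from the k_x window give the kappa0/(2 pi^2) of the Fermi-arc result.
pref_fa = 0.5 * (2 * 1.0 / (2 * math.pi)) * (2 * 2 / (4 * math.pi))   # kappa0 = 1
```

Both numbers match the analytic coefficients quoted above to the accuracy of the quadrature.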
![\[Fig:bubble\] Polarization bubble representing the integral . The solid curved lines represent a fermionic propagator $G_{k}$ evaluated at the corresponding momenta. For the 2D Dirac fermion it can be written as $G_{k}=i/\slashed{k}$ with $k^{\mu}=(k_{0},k_{1},k_{2})$ ($\slashed{k}=k_{\mu}\gamma^{\mu}$), while for the chiral fermion describing the Fermi arc it is given by $G_{k}=i\mathcal{P_{+}}\slashed{k}/k^2$ with $k=(k_{0},k_1)$. The wavy lines represent the external electromagnetic field carrying external momentum $p=(\omega,\mathbf{p})$. ](bubble.pdf)
Now we want to calculate the contribution from the Fermi arc to the optical conductivity. The Fermi arc is a single chiral fermion and thus its Lagrangian can be written as $$\mathcal{L}= \psi^{\dagger}_{+}(i\partial_{0}+i\partial_{1})\psi_{+}=\bar{\psi}\,i\mathcal{P}_{+}\gamma_{\mu}\partial^{\mu}\psi,$$ where in the last step we have written the chiral fermion in terms of the projection of a Dirac fermion with the help of the projection operator $\mathcal{P}_{\pm}=\frac{1}{2}(1\pm\gamma_{5})$. We note that the $\gamma$ matrices here do not correspond to the $\gamma$ matrices of the 2D calculation, but instead can be chosen to have the representation $\gamma^{0}=\sigma_{y}$, $\gamma^{1}=i\sigma_{x}$ and $\gamma^{5}=\gamma^{0} \gamma^{1}=\sigma_{z}$.
The propagator for such a state is $G_{k}=i\mathcal{P_{+}}\slashed{k}/k^2$ with $k=(k_{0},k_1)$. Note that the chiral fermion differs from the usual Dirac fermion in $1+1$ dimensions in that i) the propagator of the latter can be written as $G^{1+1}_{k}=i\mathcal{P_{+}}\slashed{k}/k^2+i\mathcal{P_{-}}\slashed{k}/k^2$ and ii) the former is embedded in a $2+1$ dimensional spacetime, unlike its $1+1$ dimensional Dirac counterpart. For the calculation of the polarization tensor of the Fermi arc, the integral in one of the directions is constrained by the separation between the Weyl points in momentum space, $2\kappa_{0}$, a fact taken into account by setting the limits of the first integral in $k_{x}$ to $-\kappa_{0}<k_{x}<\kappa_{0}$. Therefore, for the Fermi arc case the bubble integral reads $$\label{eqapp:bubblefa1}
i\Pi^{\mu\nu}_\text{fa}(p)=
(-ie)^2\int^{\kappa_{0}}_{-\kappa_{0}} \dfrac{dk_{x}}{2\pi}\int\dfrac{d^2k}{(2\pi)^2}\dfrac{\mathrm{Tr}\left[\mathcal{P_{+}}\slashed{k}\gamma^{\mu}\mathcal{P_{+}}(\slashed{k}+\slashed{p})\gamma^{\nu}\right]}{k^2(k+p)^2}.$$ Using the definition of $\mathcal{P_{+}}$ and the properties of the trace, the integral can be rewritten as: $$\label{eqapp:bubblefa2}
i\Pi^{\mu\nu}_\text{fa}(p)=
\dfrac{(-ie)^2}{2}\dfrac{2\kappa_{0}}{2\pi}\int\dfrac{d^2k}{(2\pi)^2}\left[\dfrac{\mathrm{Tr}\left[\slashed{k}\gamma^{\mu}(\slashed{k}+\slashed{p})\gamma^{\nu}\right]}{k^2(k+p)^2}\right.
+\left.\dfrac{\mathrm{Tr}\left[\gamma_{5}\slashed{k}\gamma^{\mu}(\slashed{k}+\slashed{p})\gamma^{\nu}\right]}{k^2(k+p)^2}\right].$$ We can use the fact that $\gamma_{5}\gamma^{\mu}=-\epsilon^{\mu\nu}\gamma_{\nu}$ to write the second integral in the same mathematical form as the first by pulling the $\epsilon^{\mu\nu}$ tensor out. Then, we notice that both integrals are formally the same as the 2D Dirac bubble, but in one dimension less. Thus, we can use the dimensionally regularized result with $d=2$ and $\mathrm{Tr}[\mathbbm{1}]=2$ to write the final solution $$\label{eqapp:bubblefaf}
i\Pi^{\mu\nu}_\text{fa}(p)=\dfrac{ie^2 \kappa_{0}}{2\pi^2p^2}\left[(p^2 g^{\mu\nu}-p^{\mu}p^{\nu})\right. +
\left.(p^2 \epsilon^{\mu\nu}-\epsilon^{\mu\rho}p_{\rho}p^{\nu})\right].$$ To relate this result to the actual surface conductivity, notice that here $\mu,\nu \in \{0,1\}$, while for the 2D Dirac cone we had $\mu,\nu \in \{0,1,2\}$. This means that the Fermi arc has only one non-vanishing longitudinal component of the optical conductivity (i.e. $\sigma^{zz}$), while the 2D Dirac fermion has both $\sigma^{zz}$ and $\sigma^{xx}$ contributions. Using the Kubo formula, the optical conductivity of the Fermi arc is therefore
$$\begin{aligned}
\label{eqapp:finalfermiarc}
\mathrm{Re}\,\sigma^{zz}_\text{fa}(\omega)&=&\dfrac{e^2\kappa_{0}}{2\pi^2\omega},\\
\mathrm{Re}\,\sigma^{xx}_\text{fa}(\omega)&=&0,\\
\mathrm{Re}\,\sigma^{xz}_\text{fa}(\omega)&=&0.\end{aligned}$$
The total surface conductivity is given by the sum of the Dirac cone and Fermi arc contributions for $\sigma^{zz}$, and by the Dirac cone contribution alone for $\sigma^{xx}$.\
[^1]: By delimited it is to be understood that the Weyl nodes act as a gap-closing topological phase transition for a 2D Hamiltonian defined by a cut in a direction perpendicular to the line connecting two Weyl nodes.
[^2]: We have studied numerically the effect of smoothing the domain wall as opposed to a sharp interface. The reported interface features survive, details of which will be reported elsewhere.
[^3]: We note that $C_{a}$ matches one minus the Chern number of a 2D Chern insulator Hamiltonian defined by a momentum space cut perpendicular to the line connecting the Weyl nodes if the WSM has only *two* Weyl nodes.
[^4]: See supplementary material for further details on the calculation of the optical conductivity of the Fermi arc.
---
abstract: 'We construct asymptotic expansions for the normalised incomplete gamma function $Q(a,z)=\Gamma(a,z)/\Gamma(a)$ that are valid in the transition regions, including the case $z\approx a$, and have simple polynomial coefficients. For Bessel functions, this type of expansion is well known, but for the normalised incomplete gamma function it was missing from the literature. A detailed historical overview is included. We also derive an asymptotic expansion for the corresponding inverse problem, which has importance in probability theory and mathematical statistics. The coefficients in this expansion are again simple polynomials, and therefore its implementation is straightforward. As a byproduct, we give the first complete asymptotic expansion as $a\to-\infty$ of the unique negative zero of the regularised incomplete gamma function $\gamma^*(a,x)$.'
address: 'School of Mathematics, The University of Edinburgh, James Clerk Maxwell Building, The King’s Buildings, Peter Guthrie Tait Road, Edinburgh EH9 3FD, UK'
author:
- Gergő Nemes
- 'Adri B. Olde Daalhuis'
title: Asymptotic expansions for the incomplete gamma function in the transition regions
---
[^1]
Introduction and main results {#section0}
=============================
The normalised incomplete gamma function $Q(a,z)=\Gamma(a,z)/\Gamma(a)$ is one of the most widely used special functions of two variables. It is used in constructing gamma distributions [@JKB94 Ch. 17], which appear naturally in the theory associated with normally distributed random variables. In many applications, Poisson random variables are used in which the Poisson rate is not fixed, and the function $Q(a,z)$ plays an important role in these cases (see, for example, [@Giles2016]). Consequently, efficient and accurate approximations for the incomplete gamma function are essential. In many applications the inverse problem also arises naturally, that is, the problem of determining the solution $x$, real and positive, of the equation $Q(a,x)=q$ for given $0<q<1$ and $a>0$.
In the important papers [@Temme1979; @Temme1996b], Temme established asymptotic expansions for the normalised incomplete gamma function as $a\to\infty$, that are valid uniformly for $|z|\in[0,\infty)$. The asymptotic expansions of the corresponding inverse functions are discussed in [@Temme1992]. The large region of validity is important in applications, and many papers in physics, applied statistics, network and control theory, and so on, refer to these uniform asymptotic expansions (see, e.g., [@Lukas2014; @Navarra2010; @Paillard2008; @Palffy2017; @Ravelomanana2004; @Trailovic2004]).
However, applying Temme’s expansions in the neighbourhood of the (most interesting) point $z = a$ can be difficult because their coefficients possess a removable singularity at this point. To overcome this difficulty, Temme [@Temme1979] gave power series expansions for these coefficients. Having to use local power series expansions for the coefficients in uniform asymptotic expansions is not a very elegant solution. In this paper, we propose an alternative approach to the problem.
We will construct what we call *transition region expansions*, and provide full details of the inversion of these new expansions. These are expansions that are valid in the regions in which $Q(a,z)$ changes dramatically, and their coefficients are polynomials satisfying simple recurrence relations. The region of validity overlaps with those of the non-uniform “outer” expansions. Furthermore, the coefficients of their inversions are simple polynomials, whose computation and implementation are straightforward.
It is surprising to us that these transition region expansions for the normalised incomplete gamma function have not yet been discussed in the literature, given the fact that expansions of similar type for Bessel functions are well known (see [@NIST:DLMF [§10.19(iii)](http://dlmf.nist.gov/10.19.iii)]). What our new expansions and the transition region expansions for the Bessel functions have in common is that both mimic the corresponding uniform expansions; compare, for instance, [@NIST:DLMF [Eq. 10.19.8](http://dlmf.nist.gov/10.19.E8)] with [@NIST:DLMF [Eq. 10.20.4](http://dlmf.nist.gov/10.20.E4)]. The ideas in this paper should also be applicable to other cumulative distribution functions using the results of [@Temme1982].
The proofs of our new results rely heavily on the uniform asymptotic expansions by Temme. Thus, we shall provide full details of his expansions, and even include new recurrence relations for the coefficients in the local power series expansions mentioned above (see Appendix \[appendixb\]). For more information about existing results in the literature and their connection with those we prove in this paper, the reader is referred to the historical overview in Section \[section1\].
The following theorem is the main result of the paper.
\[thm1\] The normalised incomplete gamma function admits the asymptotic expansions $$\label{eq7}
Q\left(a,a+\tau a^{\frac{1}{2}}\right) \sim \tfrac{1}{2} \erfc\left(2^{-\frac{1}{2}}\tau\right) + \frac{1}{\sqrt {2\pi a}}
\exp\left(-\frac{\tau^2}{2}\right) \sum_{n=0}^\infty\frac{C_n(\tau)}{a^{n/2}}$$ and $$\begin{gathered}
\label{eq8}
\begin{split}
\frac{\e^{\pm\pi \im a}}{2\im \sin (\pi a)}Q\left(-a,(a+\tau a^{\frac{1}{2}})\e^{\pm\pi \im }\right) \sim & \pm \tfrac{1}{2}\erfc
\left( \pm 2^{-\frac{1}{2}} \im \tau \right)
\\ &- \frac{\im }{\sqrt{2\pi a}}\exp \left(\frac{\tau^2}{2}\right)\sum_{n=0}^\infty\frac{(-\im )^n C_n(\im \tau)}{a^{n/2}},
\end{split}\end{gathered}$$ as $a\to \infty$ in the sector $|\arg a| \le \pi-\delta<\pi$, uniformly with respect to bounded complex values of $\tau$. Here, $\erfc$ denotes the complementary error function [@NIST:DLMF [Eq. 7.2.2](http://dlmf.nist.gov/7.2.E2)]. The coefficients $C_n(\tau)$ are polynomials in $\tau$ of degree $3n+2$ and satisfy $$C_0 (\tau ) = \frac{1}{3}\tau^2-\frac{1}{3},$$ $$C_n (\tau ) + \tau C'_n (\tau ) - C''_n (\tau ) = \tau (\tau ^2 - 2)C_{n - 1} (\tau ) - (2\tau^2 - 1)C'_{n - 1} (\tau ) + \tau C''_{n - 1} (\tau )$$ for $n\geq 1$. In addition, the even- and odd-order polynomials are even and odd functions, respectively.
The explicit forms of the next two coefficients in are as follows: $$C_1 (\tau) = \frac{1}{18}\tau^5-\frac{11}{36}\tau^3+\frac{1}{12}\tau,\quad C_2 (\tau) = \frac{1}{162}\tau^8-\frac{29}{324}\tau^6+\frac{133}{540}\tau^4-\frac{23}{540}\tau^2-\frac{1}{540}.$$ For higher coefficients and for an algorithm to generate them, see the end of Section \[section2\].
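The recurrence of Theorem \[thm1\] is triangular in the coefficients, since the operator on the left sends $\tau^k$ to $(k+1)\tau^k - k(k-1)\tau^{k-2}$. The following sketch (our own code, using exact rational arithmetic; the algorithm of the paper itself is in Section \[section2\]) reproduces the polynomials above.

```python
from fractions import Fraction as F

# Generate C_n(tau) from the recurrence
#   C_n + tau C_n' - C_n'' =
#       tau(tau^2 - 2) C_{n-1} - (2 tau^2 - 1) C_{n-1}' + tau C_{n-1}''.
# Polynomials are stored as {degree: coefficient} with exact rationals.

def diff(p):
    return {k - 1: k * c for k, c in p.items() if k > 0}

def combine(terms):
    # terms: list of (shift, scale, poly), contributing scale * tau^shift * poly
    out = {}
    for shift, scale, p in terms:
        for k, c in p.items():
            out[k + shift] = out.get(k + shift, F(0)) + scale * c
    return {k: c for k, c in out.items() if c}

def next_C(C):
    d1 = diff(C)
    d2 = diff(d1)
    R = combine([(3, F(1), C), (1, F(-2), C), (2, F(-2), d1),
                 (0, F(1), d1), (1, F(1), d2)])
    # The operator sends tau^k to (k+1) tau^k - k(k-1) tau^(k-2), so the
    # system is triangular and is solved from the top degree down.
    c = {}
    for k in range(max(R), -1, -1):
        c[k] = (R.get(k, F(0)) + (k + 2) * (k + 1) * c.get(k + 2, F(0))) / (k + 1)
    return {k: v for k, v in c.items() if v}

Cs = [{2: F(1, 3), 0: F(-1, 3)}]          # C_0(tau) = (tau^2 - 1)/3
for _ in range(5):
    Cs.append(next_C(Cs[-1]))
```

Running this reproduces the explicit forms of $C_1$ and $C_2$ quoted above, and each $C_n$ indeed has degree $3n+2$.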
The most common case in applications is when both variables of $Q(a,z)$ are real and positive. It is possible to combine our transition region expansion with the outer expansions that exist in the literature and cover the complete range $\tau > -a^{1/2}$ (or equivalently, the range $z>0$). First, we note that in Theorem \[thm1\] we assume that $\tau$ is bounded. We can weaken this restriction as follows. Since $C_n(\tau)$ is a polynomial of degree $3n+2$, it follows that we can allow $\tau = \mathcal{O}\left( |a|^\mu\right)$ as $a\to \infty$, as long as $\mu < \frac{1}{6}$; see the proposition below. In Section \[section1\], we will give two outer expansions. These are also asymptotic expansions with simple coefficients, and, with the notation of this section, they are valid for $|\tau| \geq a^\mu$ as $a\to +\infty$, with any $\mu >0$ (see Appendix \[appendixa\]). Therefore, by combining these three expansions, we can cover the whole range $\tau > -a^{1/2}$.
\[prop1\] The expansions of Theorem \[thm1\] are still valid in the sense of generalised asymptotic expansions (see, e.g., [@NIST:DLMF [§2.1(v)](http://dlmf.nist.gov/2.1.v)]) provided that $\tau = \mathcal{O}\left( |a|^\mu\right)$ as $a\to \infty$ with some constant $\mu < \frac{1}{6}$.
In the inverse problem we are trying to find $x=x(a,q)$ such that $$\label{inveq}
Q(a,x)=q$$ holds, where $0<q<1$ and $a>0$ are given. The inversion of the normalised incomplete gamma function is an important topic in mathematical statistics and probability theory, in particular for computing percentage points of the $\chi^2$-distribution (see, for example, [@Fettis1979]). A suitable inversion of the expansion yields the following result.
\[thminv\] For any $0<q<1$, define $\tau_0$ to be the unique real root of the equation $$\label{qdef}
q = \tfrac{1}{2}\erfc\left(2^{ - \frac{1}{2}} \tau _0\right).$$ Then the inverse function $x=x(a,q)$ that satisfies the equation has the asymptotic expansion $$\label{invexp}
x(a,q) \sim a + \tau _0 a^{\frac{1}{2}} + \sum_{n = 0}^\infty \frac{d_n(\tau_0)}{a^{n/2}},$$ as $a\to +\infty$, provided that $q$ is bounded away from $0$ and $1$. The coefficients $d_n(\tau_0)$ are polynomials in $\tau_0$ of degree $n+2$. In addition, the even- and odd-order polynomials are even and odd functions, respectively.
The first few coefficients in are as follows: $$d_0(\tau_0) = \frac{1}{3}\tau_{0}^2-\frac{1}{3},\quad d_1(\tau_0) = \frac{1}{36}\tau_{0}^3-\frac{7}{36}\tau_{0},\quad d_2(\tau_0) = -\frac{1}{270}\tau_{0}^4-\frac{7}{810}\tau_{0}^2+\frac{8}{405}.$$ For higher coefficients and for an algorithm to generate them, see Section \[section4\].
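A minimal numerical check of the inverse expansion (our own code, standard library only; the function names are ours): since $q = \tfrac{1}{2}\operatorname{erfc}(2^{-1/2}\tau_0)$, $\tau_0$ is the standard normal quantile of $1-q$, and the truncated expansion through $d_2$ already solves $Q(a,x)=q$ to high accuracy for moderately large $a$.

```python
import math
from statistics import NormalDist

def x_inverse(a, q):
    """First terms of the inverse expansion x(a,q) ~ a + tau0 sqrt(a) + ..."""
    t = NormalDist().inv_cdf(1.0 - q)                  # tau_0
    d0 = t**2 / 3 - 1.0 / 3
    d1 = t**3 / 36 - 7.0 * t / 36
    d2 = -t**4 / 270 - 7.0 * t**2 / 810 + 8.0 / 405
    r = math.sqrt(a)
    return a + t * r + d0 + d1 / r + d2 / a

def Q(a, x):
    # Q(a,x) = 1 - P(a,x), using the standard power series
    # gamma(a,x) = x^a e^{-x} sum_{n>=0} x^n / (a(a+1)...(a+n)).
    term = 1.0 / a
    s = term
    for n in range(1, 10_000):
        term *= x / (a + n)
        s += term
        if abs(term) < 1e-17 * abs(s):
            break
    return 1.0 - math.exp(a * math.log(x) - x - math.lgamma(a)) * s

x = x_inverse(400.0, 0.3)
err = abs(Q(400.0, x) - 0.3)      # small: the neglected terms are O(a^{-3/2})
```

The same check works for any $q$ bounded away from $0$ and $1$, in line with the hypothesis of Theorem \[thminv\].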
In Theorem \[thminv\], we assume that $q$ is bounded away from $0$ and $1$. In practice, these bounds can be very small. It follows from the defining equation that $\tau_0 \to +\infty$ as $q\to 0$ and $\tau_0 \to -\infty$ as $q \to 1$. Since $$\tfrac{1}{2}\erfc\left(2^{-\frac{1}{2}}\tau_0\right) \sim \frac{1}{\sqrt {2\pi }\tau_0}\exp\left(-\frac{\tau_0^2}{2}\right)$$ as $\tau_0 \to +\infty$, and $$\tfrac{1}{2}\erfc\left(2^{-\frac{1}{2}}\tau_0\right) \sim 1+\frac{1}{\sqrt {2\pi }\tau_0}\exp\left(-\frac{\tau_0^2}{2}\right)$$ as $\tau_0 \to -\infty$, we can infer that $|\tau _0| \sim \sqrt { - 2\log (q(1 - q))} $ as $q\to 0$ or $1$. Since the $d_n(\tau_0)$’s are polynomials in $\tau_0$ of degree $n+2$, for the expansion to be asymptotic we need $\tau_0=o\left(\left|a\right|^{1/2}\right)$ as $a\to\infty$. Hence, we need $\log (q(1 - q)) = o(|a|)$, i.e., $q$ can be sub-exponentially close in $\left|a\right|$ to $0$ or $1$.
The other expansion is particularly well suited for the study of the large-$a$ behaviour of the $x$-zeros of the regularised incomplete gamma function $\gamma^\ast (a,x) = x^{-a} P(a,x) = x^{-a} (1 - Q(a,x))$. The function $\gamma^\ast(a,x)$ is entire in $a$ and $x$ and has a unique negative zero $x_{-}(a)$ when $a$ is negative and $a \neq 0,-1,-2,\ldots$. An asymptotic approximation for $x_{-}(a)$ as $a \to -\infty$ was given by Tricomi [@Tricomi1950], which was corrected later by Kölbig [@Kolbig1972]. A more accurate asymptotic approximation was proved recently by Thompson [@Thompson2012], which reads $$\label{eq11}
x_{-}(a) = a - \tau_1 \left(-a\right)^{\frac{1}{2}} - \tfrac{1}{3}\tau_1^2-\tfrac{1}{3} + \mathcal{O}\left(\left(-a\right)^{-\frac12}\right),$$ as $a \to -\infty$, provided that $a$ is bounded away from the non-positive integers. Here $\tau_1$ is the unique solution of the equation $$\label{eq12}
\cot (- \pi a) = \sqrt {\frac{2}{\pi}} \int_0^{\tau _1 } \exp \left( \frac{t^2 }{2} \right)\,\d t
=\frac{2}{\sqrt\pi}F\left(2^{-\frac{1}{2}}\tau_1\right)\exp\left(\frac{\tau_1^2}{2} \right) ,$$ where $F(w)$ denotes Dawson’s integral [@NIST:DLMF [§7.2(ii)](http://dlmf.nist.gov/7.2.ii)]. In the present paper, we extend the result by deriving a complete asymptotic expansion for $x_-(a)$.
\[thmzero\] The unique negative zero $x_{-}(a)$ of the regularised incomplete gamma function $\gamma^\ast(a,x)$ has the asymptotic expansion $$\label{eq20}
x_-(a) \sim a - \tau_1 \left(-a\right)^{\frac{1}{2}} + \sum_{n = 0}^\infty \frac{(-\im)^n d_n(\im\tau_1)}{\left(-a\right)^{n/2}},$$ as $a\to -\infty$, provided that $a$ is bounded away from the non-positive integers. Here $\tau_1$ is the unique solution of the equation . The coefficients $d_n(\im\tau_1)$ are identical to those appearing in Theorem \[thminv\] with $\tau_0$ replaced by $\im\tau_1$.
In Theorem \[thmzero\], we require that $a$ is bounded away from the non-positive integers. In practice, these bounds can be very small. Indeed, let $k$ be a negative integer. It follows from that $\tau_1 \to \pm\infty$ as $a\to k$. Since $$\cot(-\pi a) \sim -\frac{1}{\pi(a - k)},$$ as $a \to k$, and $$\frac{2}{\sqrt \pi}F\left(2^{-\frac{1}{2}} \tau_1\right)\exp \left( \frac{\tau_1^2}{2} \right) \sim \sqrt {\frac{2}{\pi}} \frac{1}{\tau_1}\exp \left( \frac{\tau _1^2 }{2} \right),$$ as $\tau_1 \to \pm\infty$, we can infer from that $|\tau_1| \sim \sqrt{-2\log | a-k |}$ as $a \to k$. Since the $d_n(\im \tau_1)$’s are polynomials in $\tau_1$ of degree $n+2$, for to be an asymptotic expansion we need $\tau_1=o\left(\left|a\right|^{1/2}\right)$ as $a\to-\infty$. Thus, we need $\log |a - k| = o(|a|)$, i.e., $a$ can be sub-exponentially close in $\left|a\right|$ to a negative integer.
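As a numerical illustration (a sketch only; the composite Simpson quadrature, the bisection bracket, and the sample value $a=-10.3$ are our own choices), one can solve the defining equation for $\tau_1$ and then evaluate Thompson's approximation $x_-(a) \approx a - \tau_1(-a)^{1/2} - \tfrac13\tau_1^2 - \tfrac13$:

```python
import math

def dawson_side(tau, n=400):
    """sqrt(2/pi) * integral_0^tau exp(t^2/2) dt, by composite Simpson (n even)."""
    if tau == 0.0:
        return 0.0
    h = tau / n
    s = math.exp(0.0) + math.exp(tau * tau / 2)
    for i in range(1, n):
        t = i * h
        s += (4 if i % 2 else 2) * math.exp(t * t / 2)
    return math.sqrt(2 / math.pi) * s * h / 3

def tau1_of_a(a, lo=-10.0, hi=10.0, iters=200):
    """Solve cot(-pi*a) = sqrt(2/pi) * int_0^{tau1} exp(t^2/2) dt by bisection.

    The right-hand side is strictly increasing in tau1, so the solution
    is unique, as stated above.
    """
    target = math.cos(-math.pi * a) / math.sin(-math.pi * a)
    f = lambda t: dawson_side(t) - target
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

a = -10.3                     # illustrative; bounded away from the integers
tau1 = tau1_of_a(a)
# Thompson-style approximation for the unique negative zero
x_minus = a - tau1 * math.sqrt(-a) - tau1**2 / 3 - 1 / 3
print(tau1, x_minus)
```

This only demonstrates the solvability of the equation for $\tau_1$ and the evaluation of the approximation; it is not a validated computation of the true zero.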
A different asymptotic expansion for $x_-(a)$ can be derived using the known uniform asymptotic expansion of the function $\gamma^\ast(a,x)$ [@Temme1996b] and the method described in [@Temme1992]. The resulting expansion would also hold when $a$ is arbitrarily close to a non-positive integer. However, the coefficients in this expansion would be complicated functions possessing removable singularities, whereas the expansion does not suffer from this inconvenience, because its coefficients are polynomials.
The remaining part of the paper is structured as follows. Section \[section1\] provides a historical overview of the existing asymptotic expansions of the (normalised) incomplete gamma functions and the relation between those expansions and ours. In Section \[section2\], we prove the asymptotic expansions given in Theorem \[thm1\]. In Section \[section3\], we show that Proposition \[prop1\] holds. In Section \[section4\], we prove the asymptotic expansions for the inverse and for the unique negative zero of the regularised incomplete gamma function stated in Theorems \[thminv\] and \[thmzero\]. Finally, in Section \[section5\], we compare numerically our new transition region expansion with one of Temme’s uniform asymptotic expansions. We focus especially on the case where $\tau$ is unbounded, but make the surprising observation that even when $\tau$ is comparable with $a^{1/6}$, we still obtain reasonable approximations from .
Historical overview {#section1}
===================
The incomplete gamma functions of the complex variables $a$ and $z$ are defined by the integrals $$\gamma (a,z) = \int_0^z t^{a - 1} \e^{ - t} \,\d t ,\quad \Gamma(a,z) = \int_z^\infty t^{a - 1} \e^{ - t} \,\d t ,$$ where the paths of integration do not cross the negative real axis, and in the case of $\Gamma(a,z)$ exclude the origin. The definition of $\gamma(a,z)$ requires $\Re(a) > 0$, while that of $\Gamma(a,z)$ assumes $|\arg z|<\pi$. When $z\neq 0$, $\Gamma(a,z)$ is an entire function of $a$, and $\gamma(a,z)$ extends to a meromorphic function with simple poles at $a=-n$, $n = 0,1,2,\ldots$, with residue $(-1)^n/n!$. For fixed values of $a$, both $\gamma(a,z)$ and $\Gamma(a,z)$ are multivalued functions of $z$ and are holomorphic on the Riemann surface of the complex logarithm.
The asymptotic behaviour of $\gamma(a,z)$ and $\Gamma(a,z)$ when either $a$ or $z$ is large is well understood (see, for instance, [@NIST:DLMF [§8.11(i)](http://dlmf.nist.gov/8.11.i) and [§8.11(ii)](http://dlmf.nist.gov/8.11.ii)]). The treatment when both $a$ and $z$ become large is significantly more complicated since the resulting expansions have to take into account the presence of the transition point at $z=a$, about which the asymptotic structures of $\gamma(a,z)$ and $\Gamma(a,z)$ go through an abrupt change. For example, in the case of large positive variables, the normalised incomplete gamma function $Q(a,z)=\Gamma(a,z)/\Gamma(a)$ exhibits a sharp decay near the transition point $z=a$, since it is approximately unity when $z<a$ and decreases algebraically to zero when $z>a$. The behaviour of the complementary normalised incomplete gamma function $P(a,z)=\gamma(a,z)/\Gamma(a)$ follows from the functional relation $P(a,z)+Q(a,z)=1$.
The earliest asymptotic expansions of $\gamma(a,z)$ and $\Gamma(a,z)$ for large $a$ and $z$ were given by Mahler [@Mahler1930] and Tricomi [@Tricomi1950]. In modern notation, taking $\lambda=z/a$, Mahler’s expansion for $\gamma(a,z)$ may be written $$\label{eq1}
\gamma (a,z) \sim - z^a \e^{ - z} \sum_{n = 0}^\infty \frac{\left( - a\right)^n b_n (\lambda )}{\left(z - a\right)^{2n + 1} },$$ as $a\to \infty$ in the sector $|\arg a|\leq \frac{\pi}2-\delta<\frac{\pi}2$, provided $0<\lambda<1$. With the same notation, Tricomi’s result for $\Gamma(a,z)$ reads $$\label{eq2}
\Gamma (a,z) \sim z^a \e^{ - z} \sum_{n = 0}^\infty \frac{\left( - a\right)^n b_n (\lambda )}{\left(z - a\right)^{2n + 1} } ,$$ as $a\to \infty$ in the sector $|\arg a|\leq \frac{3\pi}{2}-\delta<\frac{3\pi}{2}$, provided $\lambda>1$. The coefficients $b_n(\lambda)$ are polynomials in $\lambda$ of degree $n$, the first few being $$\begin{gathered}
b_0 (\lambda ) = 1,\quad b_1 (\lambda ) = \lambda ,\quad b_2 (\lambda ) = 2\lambda ^2 + \lambda ,\\ b_3 (\lambda ) = 6\lambda ^3 + 8\lambda ^2 + \lambda ,\quad b_4 (\lambda ) = 24\lambda ^4 + 58\lambda ^3 + 22\lambda ^2 + \lambda .\end{gathered}$$ Higher coefficients can be computed using the recurrence relation [@NIST:DLMF [Eq. 8.11.9](http://dlmf.nist.gov/8.11.9)] $$b_k(\lambda)=\lambda(1-\lambda)b'_{k-1}(\lambda)+(2k-1)\lambda b_{k-1}(\lambda) ,$$ for $k\geq 1$. For other representations of $b_k(\lambda)$, including an explicit expression involving the Stirling numbers of the second kind, the reader is referred to [@Nemes16].
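The recurrence is straightforward to implement with exact integer arithmetic. The sketch below (function names are our own) generates the coefficient lists of $b_k(\lambda)$ from $b_0=1$ and reproduces the polynomials listed above:

```python
def b_polys(K):
    """Coefficient lists for the polynomials b_k(lambda), generated from
    b_0 = 1 via the recurrence
        b_k = lam*(1 - lam)*b_{k-1}' + (2k - 1)*lam*b_{k-1}.
    b[k][j] is the (integer) coefficient of lam^j in b_k(lam)."""
    b = [[1]]
    for k in range(1, K + 1):
        prev = b[-1]
        deriv = [j * prev[j] for j in range(1, len(prev))]   # b_{k-1}'
        new = [0] * (len(prev) + 1)
        for j, c in enumerate(deriv):       # lam*(1 - lam) * b_{k-1}'
            new[j + 1] += c
            new[j + 2] -= c
        for j, c in enumerate(prev):        # (2k - 1)*lam * b_{k-1}
            new[j + 1] += (2 * k - 1) * c
        b.append(new)
    return b

b = b_polys(4)
print(b[4])   # [0, 1, 22, 58, 24], i.e. 24*lam^4 + 58*lam^3 + 22*lam^2 + lam
```
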
An analogous expansion for $\Gamma (-a,z)$ was subsequently provided by Gautschi [@Gautschi1959] (which was extended later to complex $a$ by Temme [@Temme1994]) in the form $$\label{eq3}
\Gamma (-a,z) \sim z^{-a} \e^{ - z} \sum_{n = 0}^\infty \frac{a^n b_n (-\lambda )}{\left(z + a\right)^{2n + 1} } ,$$ as $a\to \infty$ in the sector $|\arg a|\leq \frac{\pi}{2}-\delta< \frac{\pi}{2}$, provided $z/a=\lambda>-1$. If we restrict $\lambda$ to positive values, the expansion remains valid in the wider range $|\arg a| \le \frac{3\pi}{2} - \omega - \delta < \frac{3\pi}{2} - \omega$ with $\omega = \arg ( \lambda + \log \lambda + \pi\im)$, $0 < \omega < \pi$. For large positive values of $a$ and complex values of $\lambda$, the asymptotic behaviour of $\Gamma (-a,z)$ was discussed by Dunster [@Dunster1996].
The asymptotic expansions – break down as $\lambda \to 1$ (i.e., as the transition point $z=a$ is approached) since their terms become singular in this limit. Similarly, fails to hold as $\lambda \to -1$. An expansion which is useful near the transition point is as follows: $$\label{eq4}
\Gamma (a,z) \sim z^a \e^{ - z} \sqrt {\frac{\pi }{2z}} \sum_{n = 0}^\infty \frac{a_{2n} (\varepsilon )}{z^n } - z^{a - 1} \e^{ - z} \sum_{n = 0}^\infty \frac{a_{2n + 1} (\varepsilon )}{z^n } ,$$ as $z\to \infty$ in the sector $|\arg z|\leq 2\pi-\delta<2\pi$ with $\varepsilon = z - a$ being bounded. The coefficients $a_n(\varepsilon)$ are polynomials in $\varepsilon$ of degree $n$, the first few of which are $$\begin{gathered}
a_0 (\varepsilon ) = 1,\quad a_1 (\varepsilon ) = \varepsilon + \frac{1}{3},\quad a_2 (\varepsilon ) = \frac{1}{2}\varepsilon ^2 + \frac{1}{2}\varepsilon + \frac{1}{{12}},\\ a_3 (\varepsilon ) = \frac{1}{3}\varepsilon ^3 + \frac{2}{3}\varepsilon ^2 + \frac{1}{3}\varepsilon + \frac{4}{{135}},\quad a_4 (\varepsilon ) = \frac{1}{8}\varepsilon ^4 + \frac{5}{{12}}\varepsilon ^3 + \frac{5}{{12}}\varepsilon ^2 + \frac{1}{8}\varepsilon + \frac{1}{{288}}.\end{gathered}$$ In the special case that $\varepsilon=0$, the expansion is well known. Its remarkably large region of validity was proved recently by the first author [@Nemes16]. For the case of general complex $\varepsilon$, can be proved by applying the method of steepest descents to the integral representation [@NIST:DLMF [Eq. 8.6.7](http://dlmf.nist.gov/8.6.7)] $$\Gamma (a,z) = z^a \e^{ - z} \int_0^{ + \infty } \exp \left( - z(\e^t - t - 1)\right)\e^{ - \varepsilon t} \,\d t, \quad \Re(z)>0.$$ The region of validity of the resulting asymptotic expansion depends only on the singularity structure of the inverse function of $\e^t - t - 1$ and hence is independent of $\varepsilon$.
Dingle [@Dingle73 Ch. VIII, Sec. 3 and Ch. XXIII, Sec. 8] gave several interesting (formal) results about the expansions – and with $\varepsilon=0$, including re-expansions for the remainder terms and asymptotic approximations for the high-order coefficients. Many of his results have been rigorously justified recently by the first author [@Nemes15b; @Nemes16]. These papers also contain sharp bounds for the error terms of the asymptotic expansions , and with $\varepsilon=0$.
The expansions – are suitable for numerical computation when $|z-a|$ is large compared with $|a|^{1/2}$, for otherwise the early terms of these expansions do not decrease in magnitude. Similarly, the expansion is useful only when $z = a + o\left(\left|a\right|^{1/2}\right)$. Thus, corresponding to values of $|z-a|$ that are comparable with $\left|a\right|^{1/2}$, there are gaps within which neither of these expansions is suitable. We call these gaps the transition regions.
The first attempt to obtain an asymptotic estimate which is valid in the transition regions was made by Tricomi [@Tricomi1950], who showed that $$Q\left(a + 1,a + \tau a^{\frac12} \right) = \tfrac{1}{2}\erfc\left(2^{ - \frac{1}{2}} \tau \right) +
\frac{1}{\sqrt {2\pi a}}\exp \left( -\frac{\tau ^2 }{2} \right)\frac{\tau ^2 + 2}{3} + \mathcal{O}\left(\left|a\right|^{-\frac12} \right),$$ as $a\to \infty$ in the sector $|\arg a|\leq \frac{\pi}{2}-\delta< \frac{\pi}{2}$, with any fixed $\tau$ satisfying $\left|\arg \left(\tau a^{1/2} \right)\right| \le \pi - \delta < \pi$. He also gave, in place of the error term $\mathcal{O}\left(\left|a\right|^{-1/2}\right)$, a complicated expansion in terms of incomplete gamma functions.
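Tricomi's estimate can be checked against the exact finite sum $Q(n,z)=\e^{-z}\sum_{k=0}^{n-1}z^k/k!$ available for integer first argument (a sketch; the sample point $a=100$, $\tau=0.5$ is our own illustrative choice — note the shifted first argument $a+1$):

```python
import math

def Q_exact(n, z):
    """Q(n, z) for integer n >= 1, via the exact finite sum, with each
    term evaluated in log-space to avoid overflow."""
    return sum(math.exp(k * math.log(z) - z - math.lgamma(k + 1))
               for k in range(n))

a, tau = 100.0, 0.5
z = a + tau * math.sqrt(a)
tricomi = (0.5 * math.erfc(tau / math.sqrt(2))
           + math.exp(-tau**2 / 2) / math.sqrt(2 * math.pi * a)
             * (tau**2 + 2) / 3)
exact = Q_exact(int(a) + 1, z)    # first argument is a + 1 in Tricomi's estimate
print(exact, tricomi)
```
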
A similar asymptotic approximation for $Q\left(a,a + \tau a^{1/2}\right)$ was subsequently given by Pagurova [@Pagurova1956] with an error term of $\mathcal{O}\left(a^{-3}\right)$. Pagurova’s approximation is limited to positive real values of $a$ and real values of $\tau$, but is considerably simpler than that obtained by Tricomi.
Expansions of a different type were provided by Paris [@Paris2002b; @Paris2018] in the form $$\label{eq4a}
\left.\begin{array}{l}\gamma(a,z)\\\Gamma(a,z)\end{array}\!\!\right\} \sim z^{a - \frac{1}{2}} \e^{ -z} \left( \sqrt {\frac{\pi}{2}}
\exp \left( \frac{\chi ^2 }{2} \right)\erfc \left( \mp 2^{ - \frac{1}{2}} \chi \right)\sum_{n = 0}^\infty \frac{A_n (\chi)}{z^{n/2}} \pm
\sum_{n = 1}^\infty \frac{B_n (\chi )}{z^{n/2}} \right)$$ as $z\to \infty$ in the sector $|\arg z|\leq \frac{\pi}{2}-\delta< \frac{\pi}{2}$, and $\Re(z - a) \leq 0$ for $\gamma(a, z)$ and $\Re(z - a) \geq 0$ for $\Gamma(a, z)$. Here $\chi$ is defined by the equality $a = z - \chi z^{1/2}$ and the coefficients $A_n(\chi)$ and $B_n(\chi)$ are polynomials in $\chi$ of degree $3n$ and $3n-1$, respectively. The first few of these coefficients are as follows: $$\begin{gathered}
A_0 (\chi ) = 1,\quad A_1 (\chi ) = \frac{1}{6}\chi ^3 + \frac{1}{2}\chi ,\quad A_2 (\chi ) = \frac{1}{72}\chi ^6 + \frac{1}{6}\chi ^4 + \frac{3}{8}\chi ^2 + \frac{1}{12},\\
B_1 (\chi ) = \frac{1}{6}\chi ^2 + \frac{1}{3},\quad B_2 (\chi ) = \frac{1}{72}\chi ^5 + \frac{11}{72}\chi ^3 + \frac{1}{4}\chi .\end{gathered}$$ The asymptotic expansions together cover the transition regions and are also valid away from those regions, i.e., when $\chi$ is large. However, as was noted by Paris [@Paris2018], even for moderately large values of $\chi$, the practical use of the expansions is problematic because of severe numerical cancellations. Therefore, the use of these asymptotic expansions should be confined to bounded values of $\chi$.
Other asymptotic expansions for $\gamma(a,z)$ and $\Gamma(a,z)$, similar to those of , were derived by Dingle [@Dingle73 p. 249] and López et al. [@FLP05].
Asymptotic expansions for the normalised incomplete gamma function $Q(a,z)$ which are uniformly valid in the variables $a$ and $z$ were first established by Temme [@Temme1979; @Temme1996b]. His results may be stated as follows. Define $$\lambda = z/a \; \text{ and } \; \eta =\eta(\lambda) = \left(2(\lambda - 1 - \log \lambda )\right)^{\frac{1}{2}} ,$$ where the branch of the square root is continuous and satisfies $\eta(\lambda) \sim \lambda-1$ as $\lambda \to 1$. Then as $a\to \infty$ in the sector $|\arg a|\leq \pi-\delta< \pi$, $$\label{eq5}
Q(a,z) \sim \tfrac{1}{2} \erfc\left(2^{ - \frac{1}{2}} \eta a^{\frac{1}{2}} \right) + \frac{1}{\sqrt {2\pi a} }\exp \left( -\tfrac{1}{2}\eta^2 a \right)
\sum_{n = 0}^\infty \frac{c_n(\eta)}{a^n}$$ and $$\label{eq6}
\frac{\e^{ \pm \pi \im a}}{2\im \sin (\pi a)}Q\left( - a,z\e^{ \pm\pi\im}\right) \sim \pm \tfrac{1}{2} \erfc\left(\pm 2^{-\frac{1}{2}}\im\eta a^{\frac{1}{2}} \right)
- \frac{\im }{\sqrt {2\pi a}}\exp \left( {\tfrac{1}{2}\eta ^2 a} \right)\sum_{n = 0}^\infty \left(-1\right)^n \frac{c_n (\eta )}{a^n } ,$$ uniformly with respect to $\lambda$ in the sector $|\arg \lambda|\leq 2\pi-\delta< 2\pi$. The coefficients $c_n (\eta)$ are holomorphic functions of $\eta$ and are defined recursively by $$\label{eq6a}
c_0 (\eta ) = \frac{1}{\lambda - 1} - \frac{1}{\eta},\quad c_n (\eta) = \frac{\gamma _n }{\lambda - 1} + \frac{1}{\eta}\frac{\d c_{n - 1} (\eta )}{\d\eta}\quad (n \ge 1),$$ where the $\gamma_n$’s are the Stirling coefficients appearing in the well-known asymptotic expansion of the gamma function (see, for example, [@Nemes13]). Temme’s expansions feature the characteristic error function behaviour near the transition point $z=a$ (where $\eta=0$) and also describe the large-$a$ asymptotics uniformly in $z$. However, they suffer from the inconvenience of the coefficients $c_n (\eta)$ having a removable singularity at $\eta=0$, which makes their evaluation in the neighbourhood of the transition point difficult. To overcome this difficulty, Temme [@Temme1979] gave power series expansions in $\eta$ for these coefficients, which are free of the removable singularity. The asymptotic properties of the $c_n (\eta)$’s for large $n$ were studied by Dunster et al. [@Dunster1998] and the second author [@OD98a]. Explicit expressions for these coefficients can be found in the paper [@Nemes13]. Computable error bounds for the asymptotic expansion were established by Temme [@Temme1979] for real variables, and by Paris [@Paris2002a] for complex variables.
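For integer $a$ one has the exact finite sum $Q(a,z)=\e^{-z}\sum_{k=0}^{a-1}z^k/k!$, which makes Temme's expansion easy to test numerically. The sketch below (truncation after the $c_0(\eta)$ term, and the sample point $a=100$, $\lambda=1.1$, are our own choices) compares the leading terms against the exact value:

```python
import math

def Q_exact(n, z):
    """Q(n, z) for integer n >= 1, via the exact finite sum, with each
    term evaluated in log-space to avoid overflow."""
    return sum(math.exp(k * math.log(z) - z - math.lgamma(k + 1))
               for k in range(n))

a, lam = 100, 1.1
z = lam * a
eta = math.sqrt(2 * (lam - 1 - math.log(lam)))   # eta ~ lam - 1 > 0 here
c0 = 1 / (lam - 1) - 1 / eta
# leading terms of Temme's uniform expansion
approx = (0.5 * math.erfc(eta * math.sqrt(a / 2))
          + math.exp(-0.5 * eta**2 * a) / math.sqrt(2 * math.pi * a) * c0)
print(Q_exact(a, z), approx)
```

Even with only the $c_0(\eta)$ term retained, the agreement near the transition point is already close, reflecting the uniform character of the expansion.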
Although our expansions and are not valid in as large a domain of $a$ and $z$ as those in and , the coefficients have the advantage of not possessing a removable singularity at the transition point $z=a$ and so are more straightforward to compute. Pagurova’s result is a special case of the expansion truncated after the first six terms and restricted to large positive real values of $a$ and real values of $\tau$. The expansions are similar in character to the expansion , but they are for the non-normalised functions $\gamma(a,z)$ and $\Gamma(a,z)$ and have more complicated forms and smaller regions of validity compared to . Also, the coefficients of these asymptotic expansions seem not to satisfy simple recurrence relations like those of the expansion . Asymptotic expansions analogous to and for the complementary normalised incomplete gamma function $P(a,z)$ follow from the functional relation $P(a,z)+Q(a,z)=1$.
Proof of Theorem \[thm1\] {#section2}
=========================
From , we can assert that for any non-negative integer $N$, $$Q(a,z) = \tfrac{1}{2}\erfc\left(2^{ - \frac{1}{2}} \eta a^{\frac{1}{2}}\right)+ \frac{1}{\sqrt {2\pi a}}\exp \left( - \tfrac{1}{2}\eta ^2 a \right)\left(
\sum_{n = 0}^{N - 1} \frac{c_n (\eta )}{a^n } + \mathcal{O}_{N,\delta } \left( \frac{1}{|a|^N } \right) \right),$$ as $a\to \infty$ in the sector $|\arg a|\leq \pi-\delta< \pi$, uniformly with respect to $\lambda$ in the sector $|\arg \lambda|\leq 2\pi-\delta< 2\pi$. (Throughout this paper, we use subscripts in the $\mathcal{O}$ notations to indicate the dependence of the implied constant on certain parameters.) We employ this result with the choice of $\lambda = 1 + \tau a^{ - 1/2}$ and assume that $\left|\tau a^{ - 1/2}\right|<\frac{1}{2}$, in particular, $|\arg \lambda|<\frac{\pi}{2}$. Consequently, we have $$\begin{gathered}
\tfrac{1}{2}\left|\eta\right|^2 = \left| \tau a^{ - 1/2} - \log (1 + \tau a^{ - 1/2} )\right|
=\left| \tau a^{ - 1/2}\right|^2 \left| \int_0^1 \frac{t}{1 + \tau a^{ - 1/2} t}\,\d t \right| \\
< \left| \tau a^{ - 1/2} \right|^2 \int_0^1 \frac{t}{1 - \left| {\tau a^{ - 1/2} } \right|t}\,\d t < \left| \tau a^{ - 1/2} \right|^2 < \left| \tau a^{ - 1/2} \right|,\end{gathered}$$ that is, $|\eta| < 1$. We have the convergent expansions $$\begin{aligned}
\tfrac{1}{2}\eta ^2 = \tau a^{ - 1/2} - \log \left(1 + \tau a^{ - 1/2} \right) &= \sum_{k = 2}^\infty\left(-1\right)^k \frac{\tau ^k}{k}\frac{1}{a^{k/2}}
\\& = \frac{\tau ^2 }{2}\frac{1}{a}\left( 1 + \sum_{k = 1}^\infty\left(-1\right)^k \frac{2\tau ^k }{k + 2}\frac{1}{a^{k/2} } \right)\end{aligned}$$ and $$\eta = \frac{\tau }{a^{1/2}} + \sum_{k = 2}^\infty \frac{m_k (\tau )}{a^{k/2}},$$ where $m_k(\tau)$ is a monomial in $\tau$ of degree $k$.
Let $h$ denote a complex number whose value will be specified later. Employing the known expression for the higher derivatives of the complementary error function [@NIST:DLMF [Eq. 7.10.1](http://dlmf.nist.gov/7.10.1)] and the Taylor formula with integral remainder, we find $$\begin{aligned}
\tfrac{1}{2}\erfc\left(2^{ - \frac{1}{2}} \tau + h\right)=\; & \tfrac{1}{2}\erfc\left(2^{ - \frac{1}{2}} \tau\right)\\&+ \frac{1}{\sqrt \pi}\exp \left( - \frac{\tau ^2}{2}\right)\left( \sum_{n = 1}^{N - 1} \frac{\left(-1\right)^n}{n!}H_{n - 1} \left(2^{ - \frac{1}{2}} \tau \right)h^n + R_N (h,\tau ) \right),\end{aligned}$$ where $$\begin{aligned}
R_N (h,\tau ) &= \frac{\sqrt \pi}{2(N - 1)!}\exp \left( \frac{\tau ^2}{2} \right)\int_0^h {\erfc^{(N)}}\left(2^{ - \frac{1}{2}} \tau+t\right)\left(h-t\right)^{N-1} \,\d t
\\ &= \frac{\left(-1\right)^N }{(N - 1)!}\int_0^h H_{N - 1}\left(2^{ - \frac{1}{2}} \tau+t\right)\exp\left(-2^{\frac{1}{2}} \tau t-t^2 \right)\left(h-t\right)^{N-1} \,\d t
\\ &= \mathcal{O}_N\left(\left(|\tau| + 1\right)^{N - 1} \left|h\right|^N \exp \left(\mathcal{O}(|\tau h|)\right) \right)\end{aligned}$$ as $h\to 0$ and $H_n(w)$ denotes the $n$th Hermite polynomial [@NIST:DLMF [§18.3](http://dlmf.nist.gov/18.3)]. Assuming that $\tau = o\left(|a|^{1/4} \right)$ for large $a$ and using this expansion with $$h = \sum_{k = 1}^\infty 2^{ - \frac{1}{2}} \frac{m_{k + 1} (\tau )}{a^{k/2} } = \mathcal{O}\left( \frac{|\tau |^2 }{|a|^{1/2} } \right) = o(1)$$ and with $N+1$ in place of $N$, we obtain $$\begin{aligned}
& \tfrac{1}{2}\erfc\left(2^{ - \frac{1}{2}} \eta a^{\frac{1}{2}}\right)= \tfrac{1}{2}\erfc \left( 2^{ - \frac{1}{2}} \tau + \sum_{k = 1}^\infty 2^{ - \frac{1}{2}} \frac{m_{k + 1} (\tau )}{a^{k/2}} \right) = \tfrac{1}{2}\erfc\left(2^{ - \frac{1}{2}} \tau\right)\\ &+ \frac{1}{\sqrt {2\pi a}}\exp \left( { - \frac{\tau ^2 }{2}} \right)\left(
\sum_{n = 0}^{N - 1} \frac{p_n (\tau )}{a^{n/2}} + \mathcal{O}_N \left( \frac{(|\tau| + 1)^{3N + 2}}{\left| a \right|^{N/2} }
\exp \left( \mathcal{O}\left( \frac{|\tau |^3 }{|a|^{1/2} } \right) \right) \right) \right).\end{aligned}$$ Here $p_n(\tau)$ is a polynomial in $\tau$ of degree at most $3n+2$.
We have $$\exp \left( - \frac{\tau ^2}{2} + h \right) = \exp \left( - \frac{\tau ^2}{2} \right)\left( {\sum_{n = 0}^{N - 1} {\frac{{h^n }}{{n!}}}
+ \mathcal{O}_N \left(|h|^N \exp (|h|)\right)} \right)$$ for any complex $h$. Applying this expansion with $$h = \sum_{k = 1}^\infty\left(-1\right)^{k + 1} \frac{\tau ^{k + 2} }{k + 2}\frac{1}{a^{k/2}} =
\mathcal{O}\left( \frac{\left|\tau\right|^3}{\left|a\right|^{1/2}} \right),$$ we deduce $$\begin{aligned}
\exp \left( - \tfrac{1}{2}\eta ^2 a \right) & = \exp \left( - \frac{\tau ^2}{2} + \sum_{k = 1}^\infty\left(-1\right)^{k + 1}
\frac{\tau ^{k + 2}}{k + 2}\frac{1}{a^{k/2}} \right) \\
& = \exp \left( - \frac{\tau ^2}{2} \right)\left( \sum_{n = 0}^{N - 1} \frac{q_n (\tau )}{a^{n/2}}
+ \mathcal{O}_N \left( {\frac{\left(|\tau |+1\right)^{3N} }{\left|a\right|^{N/2} }
\exp \left( \mathcal{O}\left( \frac{\left|\tau\right|^3 }{\left|a\right|^{1/2} } \right) \right)} \right) \right),\end{aligned}$$ where $q_n(\tau)$ is a polynomial in $\tau$ of degree at most $3n$.
We have the power series expansion $$c_n (h) = c_n (0) + \sum_{k = 1}^\infty {f_{n,k} h^k } ,$$ which converges if $|h|<2\sqrt{\pi}$ (cf. [@NIST:DLMF [Eq. 8.12.11](http://dlmf.nist.gov/8.12.11)] or Appendix \[appendixb\]). Applying this expansion with $h= \eta$ ($|\eta|<1$), we find $$c_n (\eta ) = c_n \left( \frac{\tau }{a^{1/2}} + \sum_{k = 2}^\infty \frac{m_k (\tau )}{a^{k/2} } \right) = \sum_{k = 0}^\infty \frac{r_k (\tau )}{a^{k/2} } ,$$ where $r_k(\tau)$ is a monomial in $\tau$ of degree $k$. Consequently, $$\sum_{n = 0}^{N - 1} \frac{c_n (\eta )}{a^n} + \mathcal{O}_{N,\delta } \left( \frac{1}{|a|^N }\right) =
\sum_{n = 0}^{N - 1} \frac{s_n (\tau )}{a^{n/2}} + \mathcal{O}_{N,\delta } \left( \frac{\left(|\tau | + 1\right)^N }{\left|a\right|^{N/2} } \right)$$ where $s_n(\tau)$ is a polynomial in $\tau$ of degree at most $n$. Collecting all the partial results, we finally have $$\begin{gathered}
\label{eq22}
\begin{split}
& Q(a,z)=Q\left(a,a+\tau a^{\frac{1}{2}}\right) = \tfrac{1}{2} \erfc\left(2^{-\frac{1}{2}}\tau\right) \\&+ \frac{1}{\sqrt {2\pi a}}\exp\left(-\frac{\tau^2}{2}\right)
\left(\sum_{n=0}^{N-1}\frac{C_n(\tau)}{a^{n/2}} + \mathcal{O}_{N,\delta} \left( \frac{\left(|\tau| + 1\right)^{3N + 2}}{\left| a \right|^{N/2} }
\exp \left( {\mathcal{O}\left( \frac{\left|\tau\right|^3 }{\left|a\right|^{1/2} } \right)} \right) \right) \right)
\end{split}\end{gathered}$$ for any non-negative integer $N$ as $a\to \infty$ in the sector $|\arg a|\leq \pi-\delta< \pi$, provided that $\tau = o\left(\left|a\right|^{1/4}\right)$. Here $C_n(\tau)$ is a polynomial in $\tau$ of degree at most $3n+2$. If $\tau$ is bounded, then the error term in is of the same order of magnitude as the first neglected term and hence, the asymptotic expansion holds.
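As a sanity check of this expansion with the coefficients $C_0(\tau)$ and $C_1(\tau)$ listed in Table \[table1\] (a sketch; the sample point $a=100$, $\tau=0.5$ is our own choice), we again compare against the exact finite sum valid for integer $a$:

```python
import math

def Q_exact(n, z):
    """Q(n, z) for integer n >= 1, via the exact finite sum in log-space."""
    return sum(math.exp(k * math.log(z) - z - math.lgamma(k + 1))
               for k in range(n))

def C(n, t):
    """The first two polynomials C_n(tau) from Table 1."""
    if n == 0:
        return t**2 / 3 - 1 / 3
    if n == 1:
        return t**5 / 18 - 11 * t**3 / 36 + t / 12
    raise NotImplementedError

a, tau = 100, 0.5
z = a + tau * math.sqrt(a)
series = sum(C(n, tau) / a**(n / 2) for n in range(2))
approx = (0.5 * math.erfc(tau / math.sqrt(2))
          + math.exp(-tau**2 / 2) / math.sqrt(2 * math.pi * a) * series)
print(Q_exact(a, z), approx)
```

With just two terms of the series, the discrepancy is already of the order of the first neglected term $C_2(\tau)/a$ times the exponential prefactor.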
In a similar manner, starting with the asymptotic expansion , it can be shown that $$\begin{gathered}
\label{eq9}
\begin{split}
& \frac{\e^{\pm\pi\im a}}{2\im\sin (\pi a)}Q\left(-a,(a+\tau a^{\frac{1}{2}})\e^{\pm\pi\im}\right) = \pm \tfrac{1}{2}\erfc\left(\pm2^{-\frac{1}{2}}\im\tau\right) \\
& - \frac{\im}{\sqrt{2\pi a}}\exp \left(\frac{\tau^2}{2}\right) \left(\sum_{n=0}^{N-1}\frac{\widetilde{C}_n(\tau)}{a^{n/2}}
+ \mathcal{O}_{N,\delta} \left( \frac{\left(|\tau| + 1\right)^{3N + 2}}{\left| a \right|^{N/2} }
\exp \left( {\mathcal{O}\left( \frac{\left|\tau\right|^3 }{\left|a\right|^{1/2} } \right)} \right) \right) \right)
\end{split}\end{gathered}$$ for any non-negative integer $N$ as $a\to \infty$ in the sector $|\arg a|\leq \pi-\delta< \pi$, provided that $\tau = o\left(\left|a\right|^{1/4}\right)$. The coefficients $\widetilde{C}_n(\tau)$ are polynomials in $\tau$ of degree at most $3n+2$. If $\tau$ is bounded, then the error term in is of the same order of magnitude as the first neglected term and hence, the asymptotic expansion holds. To show that $\widetilde C_n (\tau ) = \left(-\im\right)^n C_n (\im\tau )$, we can proceed as follows. Assuming that $\tau$ is bounded and applying with $a\e^{ \mp \pi\im/2}$ in place of $a$, $a \to +\infty$, we find $$\begin{gathered}
\frac{1}{1 - \e^{ - 2\pi a}}Q\left( a\e^{ \pm \frac{\pi }{2}\im} ,a\e^{ \pm \frac{\pi}{2}\im} +
\tau \e^{ \pm \frac{\pi}{2}\im} \left( a\e^{ \pm \frac{\pi}{2}\im} \right)^{\frac{1}{2}} \right) \\
\sim \tfrac{1}{2} \erfc\left(2^{ - \frac{1}{2}} \tau \e^{ \pm \frac{\pi}{2}\im}\right) + \frac{1}{\sqrt {2\pi a\e^{ \pm \frac{\pi }{2}\im} } }
\exp \left( \frac{\tau ^2 }{2} \right)\sum_{n = 0}^\infty \frac{\left( \pm\im\right)^n \widetilde{C}_n (\tau)}{\left(a\e^{ \pm \frac{\pi}{2}\im}\right)^{n/2}} .\end{gathered}$$ Since, for any non-negative integer $N$, $1/(1 - \e^{ - 2\pi a} ) = 1+\mathcal{O}(a^{ - N} )$ as $a\to +\infty$, the right hand side must agree with that of when applied with $\tau \e^{ \pm \frac{\pi }{2}\im}$ in place of $\tau$ and $a\e^{ \pm \frac{\pi }{2}\im}$ in place of $a$, $a \to +\infty$. Therefore, we find that $\left( \pm\im\right)^n \widetilde C_n (\tau ) = C_n ( \pm\im\tau )$, that is $\widetilde C_n (\tau ) = \left( \mp\im\right)^n C_n ( \pm \im\tau )$. By the parity properties of the polynomials $C_n(\tau)$ (see below), this latter equality simplifies to $\widetilde C_n(\tau)=\left(-\im\right)^nC_n(\im\tau )$.
We proceed by proving the claimed properties of the polynomials $C_n(\tau)$. The differential equation [@NIST:DLMF [§8.2(iii)](http://dlmf.nist.gov/8.2.iii)] $$\frac{\partial ^2 Q(a,z)}{\partial z^2} + \left( 1 + \frac{1 - a}{z} \right)\frac{\partial Q(a,z)}{\partial z} = 0$$ implies that $$\left(1 + \tau a^{ - \frac{1}{2}}\right)\frac{\partial ^2 Q\left(a,a + \tau a^{\frac{1}{2}}\right)}{\partial \tau ^2 }
+ \left(\tau + a^{ - \frac{1}{2}} \right)\frac{\partial Q\left(a,a + \tau a^{\frac{1}{2}} \right)}{\partial \tau } = 0.$$ Substituting into the left-hand side of this equation and equating the coefficients of $a^{-n/2}$ on both sides yield $C_0 (\tau ) = \frac{1}{3}\tau^2 -\frac{1}{3}$ and $$\label{eq10}
C_n (\tau ) + \tau C'_n (\tau ) - C''_n (\tau ) = \tau (\tau ^2 - 2)C_{n - 1} (\tau ) - (2\tau ^2 - 1)C'_{n - 1} (\tau ) + \tau C''_{n - 1} (\tau )$$ for $n\geq 1$. Using induction on $n$ and the fact that the corresponding homogeneous equation $w(\tau ) + \tau w'(\tau ) - w''(\tau ) = 0$ has no non-zero polynomial solution, it follows that determines the polynomials $C_n(\tau)$ uniquely. An induction on $n$ and imply that the degree of the polynomial $C_n(\tau)$ is exactly $3n+2$.
Lastly, we prove that $C_n ( - \tau )= \left(-1\right)^nC_n ( \tau )$, i.e., the even- and odd-order polynomials are even and odd functions, respectively. We substitute the polynomial expansion $$\label{eq10a}
C_n(\tau) = \sum_{k=0}^{3n+2} c_{n,k}\tau^k,$$ into and obtain for the coefficients the recurrence relation $$\label{eq10b}
c_{n,k}=(k+2)c_{n,k+2}+(k+1) c_{n-1,k+1}-\frac{2k}{k+1}c_{n-1,k-1}+\frac{1}{k+1}c_{n-1,k-3}.$$ Since $C_0 (\tau ) = \frac{1}{3}\tau^2 -\frac{1}{3}$, it follows that $$\label{eq10c}
c_{n,3n+2}=\frac{1}{3^{n+1}(n+1)!},\qquad c_{n,3n+1}=0,$$ holds for $n=0$, and by the recurrence relation we can show that holds for $n\geq1$. Starting with the first equation in and then taking $k=3n, 3n-2, 3n-4, \ldots$, we can compute the non-zero coefficients in the expansion . Similarly, it follows that $c_{n,k}=0$ for $k=3n+1, 3n-1, 3n-3, \ldots$. Accordingly, $C_n ( - \tau )= \left(-1\right)^nC_n ( \tau )$.
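The recurrence, together with the two leading values above, determines all of the coefficients $c_{n,k}$ by descending in $k$. The sketch below (exact rational arithmetic; function names are our own) regenerates $C_1(\tau)$ and $C_2(\tau)$, in agreement with Table \[table1\]:

```python
from fractions import Fraction
from math import factorial

def C_coeffs(N):
    """Coefficient dicts c_{n,k} of C_n(tau), n = 0..N, generated from
    C_0 = tau^2/3 - 1/3 by the recurrence (descending in k), starting
    from the leading values c_{n,3n+2} = 1/(3^{n+1}(n+1)!), c_{n,3n+1} = 0."""
    C = [{2: Fraction(1, 3), 0: Fraction(-1, 3)}]
    for n in range(1, N + 1):
        p = C[n - 1]                       # coefficients of C_{n-1}
        c = {3 * n + 2: Fraction(1, 3 ** (n + 1) * factorial(n + 1))}
        for k in range(3 * n, -1, -1):     # missing keys count as zero
            c[k] = ((k + 2) * c.get(k + 2, 0)
                    + (k + 1) * p.get(k + 1, 0)
                    - Fraction(2 * k, k + 1) * p.get(k - 1, 0)
                    + Fraction(1, k + 1) * p.get(k - 3, 0))
        C.append({k: v for k, v in c.items() if v != 0})
    return C

C = C_coeffs(2)
print(C[1])   # coefficients 1/18, -11/36, 1/12 of tau^5, tau^3, tau
print(C[2])
```
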
The first several polynomials $C_n ( \tau )$ are given in Table \[table1\].
\begin{tabular}{ll}
\hline
$n$ & $C_n(\tau)$ \\
\hline
$0$ & $\frac{1}{3}\tau^{2}-\frac{1}{3}$ \\
$1$ & $\frac{1}{18}\tau^{5}-\frac{11}{36}\tau^{3}+\frac{1}{12}\tau$ \\
$2$ & $\frac{1}{162}\tau^{8}-\frac{29}{324}\tau^{6}+\frac{133}{540}\tau^{4}-\frac{23}{540}\tau^{2}-\frac{1}{540}$ \\
$3$ & $\frac{1}{1944}\tau^{11}-\frac{7}{486}\tau^{9}+\frac{463}{4320}\tau^{7}-\frac{883}{4320}\tau^{5}+\frac{23}{864}\tau^{3}-\frac{1}{288}\tau$ \\
$4$ & $\frac{1}{29160}\tau^{14}-\frac{23}{14580}\tau^{12}+\frac{881}{38880}\tau^{10}-\frac{1507}{12960}\tau^{8}+\frac{7901}{45360}\tau^{6}-\frac{61}{3024}\tau^{4}+\frac{23}{6048}\tau^{2}+\frac{25}{6048}$ \\
$5$ & $\frac{1}{524880}\tau^{17}-\frac{137}{1049760}\tau^{15}+\frac{2149}{699840}\tau^{13}-\frac{42331}{1399680}\tau^{11}+\frac{5910101}{48988800}\tau^{9}-\frac{413177}{2721600}\tau^{7}+\frac{3239}{194400}\tau^{5}-\frac{319}{155520}\tau^{3}-\frac{139}{51840}\tau$ \\
$6$ & $\frac{1}{11022480}\tau^{20}-\frac{191}{22044960}\tau^{18}+\frac{1483}{4898880}\tau^{16}-\frac{47639}{9797760}\tau^{14}+\frac{1810717}{48988800}\tau^{12}-\frac{1996859}{16329600}\tau^{10}+\frac{146611}{1088640}\tau^{8}-\frac{2203}{155520}\tau^{6}+\frac{11}{10368}\tau^{4}+\frac{259}{155520}\tau^{2}+\frac{101}{155520}$ \\
$7$ & $\frac{1}{264539520}\tau^{23}-\frac{127}{264539520}\tau^{21}+\frac{2939}{125971200}\tau^{19}-\frac{2911}{5248800}\tau^{17}+\frac{3217793}{470292480}\tau^{15}-\frac{33561391}{783820800}\tau^{13}+\frac{13697119}{111974400}\tau^{11}-\frac{3797293}{31352832}\tau^{9}+\frac{213841}{17418240}\tau^{7}-\frac{4069}{7464960}\tau^{5}-\frac{6101}{7464960}\tau^{3}+\frac{571}{2488320}\tau$ \\
$8$ & $\frac{1}{7142567040}\tau^{26}-\frac{163}{7142567040}\tau^{24}+\frac{35047}{23808556800}\tau^{22}-\frac{41279}{850305600}\tau^{20}+\frac{168240799}{190468454400}\tau^{18}-\frac{26988481}{3023308800}\tau^{16}+\frac{1266722761}{26453952000}\tau^{14}-\frac{3211451701}{26453952000}\tau^{12}+\frac{21361155917}{193995648000}\tau^{10}-\frac{419199161}{38799129600}\tau^{8}+\frac{889093}{2771366400}\tau^{6}+\frac{510887}{923788800}\tau^{4}-\frac{2016373}{3695155200}\tau^{2}-\frac{3184811}{3695155200}$ \\
\hline
\end{tabular}
Proof of Proposition \[prop1\] {#section3}
==============================
The new asymptotic expansions and were proved under the assumptions that $a$ is large and $\tau$ is bounded. Here we show that the restriction on $\tau$ can be relaxed and the expansions and are still valid in the sense of generalised asymptotic expansions.
Suppose that $|\tau| = \mathcal{O}\left(|a|^\mu\right)$ as $a\to \infty$ with some constant $\mu < \frac{1}{6}$. With these provisos, since $C_n(\tau)$ is a polynomial in $\tau$ of degree $3n+2$, the sequences $C_n (\tau )a^{ - n/2}$ and $\left(-\im\right)^n C_n (\im\tau )a^{ - n/2}$ ($n=0,1,2,\ldots$) are asymptotic sequences as $a \to \infty$. Furthermore, the remainder terms in and have the same order of magnitude as the corresponding first neglected terms $C_N (\tau )a^{ - N/2}$ and $\left(-\im\right)^N C_N (\im\tau )a^{ - N/2}$, respectively. This proves that, with the above assumptions on $\tau$, the expansions and are indeed generalised asymptotic expansions.
Proof of Theorems \[thminv\] and \[thmzero\] {#section4}
============================================
We begin with the proof of Theorem \[thminv\]. The equations , and imply that $$\label{Eeq}
E(\tau):=\sqrt {\frac{\pi }{2}} \exp \left( \frac{{\tau ^2 }}{2}\right)\left( \erfc\left(2^{ - \frac{1}{2}} \tau _0\right) -
\erfc\left(2^{ - \frac{1}{2}} \tau \right) \right) \sim \sum_{n = 0}^\infty \frac{C_n (\tau )}{a^{(n + 1)/2}},$$ as $a\to+\infty$. The function $E(\tau)$ satisfies the second order linear homogeneous differential equation $$\label{Ediffeq}
E''(\tau ) - \tau E'(\tau ) - E(\tau ) = 0.$$ Expanding $E(\tau)$ into a power series around $\tau=\tau_0$, and substituting it into , we find that $$\label{Eexp}
E(\tau ) = \sum_{k = 1}^\infty P_k (\tau _0 )\left(\tau - \tau _0 \right)^k$$ where $P_1 (\tau _0 ) = 1$, $P_2 (\tau _0 ) = \frac{1}{2}\tau _0$ and $$k P_{k} (\tau _0 ) = \tau _0 P_{k - 1} (\tau _0 ) + P_{k - 2} (\tau _0 ),$$ for $k\geq 3$.
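The Taylor coefficients $P_k(\tau_0)$ are cheap to generate, and the resulting series can be compared directly with the definition of $E(\tau)$ (a sketch; the sample values $\tau_0=0.4$, $\tau=0.9$ and the truncation order $K=40$ are our own choices):

```python
import math

def E_direct(tau, tau0):
    """E(tau) = sqrt(pi/2) * exp(tau^2/2) * (erfc(tau0/sqrt 2) - erfc(tau/sqrt 2))."""
    return (math.sqrt(math.pi / 2) * math.exp(tau**2 / 2)
            * (math.erfc(tau0 / math.sqrt(2)) - math.erfc(tau / math.sqrt(2))))

def E_series(tau, tau0, K=40):
    """Taylor series sum_{k>=1} P_k(tau0) (tau - tau0)^k, with
    P_1 = 1, P_2 = tau0/2 and k P_k = tau0 P_{k-1} + P_{k-2}."""
    P = [0.0, 1.0, tau0 / 2]               # P[0] is a dummy entry
    for k in range(3, K + 1):
        P.append((tau0 * P[k - 1] + P[k - 2]) / k)
    return sum(P[k] * (tau - tau0) ** k for k in range(1, K + 1))

tau0, tau = 0.4, 0.9
print(E_direct(tau, tau0), E_series(tau, tau0))
```

Since $E$ is entire, the series converges for every $\tau$, and a modest truncation order already matches the direct evaluation to machine-level accuracy.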
We seek a solution of the asymptotic equality in the form $$\label{tauexp}
\tau \sim \tau _0 + \sum_{k = 0}^\infty \frac{d_k (\tau _0 )}{a^{(k + 1)/2} },\quad a\to +\infty.$$ Substituting into and expanding in powers of $a^{-1/2}$, we deduce $$E(\tau ) \sim \sum_{k = 0}^\infty \left( \sum_{m = 1}^{k + 1} P_m (\tau _0 )\textbf{\textsf{B}}_{k + 1,m} (d_0 (\tau _0 ), \ldots ,d_{k - m + 1} (\tau _0 )) \right)\frac{1}{a^{(k + 1)/2} } ,$$ as $a\to+\infty$, where $\textbf{\textsf{B}}_{k,m}$ are the partial ordinary Bell polynomials (see, e.g., [@Nemes13 Appendix]). Similarly, employing , we obtain $$\begin{aligned}
\sum_{n = 0}^\infty \frac{C_n (\tau )}{a^{(n + 1)/2} }
& \sim \sum_{n = 0}^\infty \sum_{m = 0}^{3n + 2} \frac{c_{n,m}}{a^{(n - m + 1)/2} }\left( \frac{\tau }{a^{1/2} } \right)^m \\
& \sim \sum_{k = 0}^\infty \left( \sum_{n = 0}^k \sum_{m = 0}^{3n + 2} c_{n,m}
\textbf{\textsf{B}}_{k - n+m,m} \left(\tau _0 ,d_0 (\tau _0 ), \ldots ,d_{k - n - 1} (\tau _0 )\right) \right)\frac{1}{a^{(k + 1)/2}} ,\end{aligned}$$ as $a\to+\infty$. By equating to zero the coefficients of powers of $a^{-1/2}$, and using $P_1 (\tau _0 ) = 1$, we derive the recurrence relation $d_0 (\tau _0 ) = \frac{1}{3}\tau _0^2 - \frac{1}{3}$ and $$\begin{gathered}
\label{drec}
\begin{split}
d_k (\tau _0 ) = & - \sum_{m = 2}^{k + 1} P_m (\tau _0 )\textbf{\textsf{B}}_{k + 1,m} (d_0 (\tau _0 ), \ldots ,d_{k - m + 1} (\tau _0 )) \\ & + \sum_{n = 0}^k \sum_{m = 0}^{3n + 2} c_{n,m} \textbf{\textsf{B}}_{k - n+m,m} (\tau _0 ,d_0 (\tau _0 ), \ldots ,d_{k - n - 1} (\tau _0 )) ,
\end{split}\end{gathered}$$ for $k\geq 1$. We can use this recurrence and induction on $k$ to prove that $d_k ( - \tau _0 ) = \left( - 1\right)^k d_k (\tau _0 )$, i.e., that the even and odd-order polynomials are even and odd functions, respectively. The identity clearly holds for $k=0$. Suppose that $k\geq 1$. Using induction, it is easy to see that $P_m ( - \tau _0 ) = \left( - 1\right)^{m + 1} P_m (\tau _0 )$. From the properties of the partial ordinary Bell polynomials, we can infer that $$\begin{aligned}
\textbf{\textsf{B}}_{k + 1,m} (d_0 ( - \tau _0 ), \ldots ,d_{k - m + 1} ( - \tau _0 )) & = \textbf{\textsf{B}}_{k + 1,m} (d_0 (\tau _0 ), \ldots ,\left(-1\right)^{k - m + 1} d_{k - m + 1} (\tau _0 )) \\ & = \left(-1\right)^{k - m + 1} \textbf{\textsf{B}}_{k + 1,m} (d_0 (\tau _0 ), \ldots ,d_{k - m + 1} (\tau _0 ))\end{aligned}$$ and $$\begin{aligned}
& \textbf{\textsf{B}}_{k - n + m,m} ( - \tau _0 ,d_0 ( - \tau _0 ), \ldots ,d_{k - n - 1} ( - \tau _0 )) \\ & = \textbf{\textsf{B}}_{k - n + m,m} ( - \tau _0 ,d_0 (\tau _0 ), \ldots ,\left(-1\right)^{k - n - 1} d_{k - n - 1} (\tau _0 ))\\ & = \left(-1\right)^{k - n + m} \textbf{\textsf{B}}_{k - n + m,m} (\tau _0 ,d_0 (\tau _0 ), \ldots ,d_{k - n - 1} (\tau _0 )) .\end{aligned}$$ Finally, $C_n ( - \tau ) = \left( - 1\right)^n C_n (\tau )$ implies $c_{n,m} = \left( - 1\right)^{ - n + m} c_{n,m}$. Employing all these relations in , it follows readily that $d_k ( - \tau _0 ) = \left( - 1\right)^k d_k (\tau _0 )$.
A similar argument would show that the degree of $d_k(\tau_0)$ as a polynomial in $\tau_0$ is at most $3k+2$. To prove that the degree is actually $k+2$, we need another recurrence for these polynomials. In order to obtain this recurrence, we proceed as follows. If we differentiate both sides of the asymptotic equality $$\tfrac{1}{2}\erfc\left(2^{ - \frac{1}{2}} \tau _0 \right) \sim Q\left( a,a + \tau _0 a^{\frac{1}{2}} + \sum_{k = 0}^\infty \frac{d_k (\tau _0 )}{a^{k/2}} \right),\quad a\to +\infty,$$ with respect to $\tau_0$, we find $$\begin{aligned}
- \frac{1}{\sqrt {2\pi }}\exp \left( - \frac{\tau_0 ^2 }{2} \right) \sim - & \frac{1}{\sqrt {2\pi }}\frac{1}{\Gamma ^ * (a)}\left( 1 + \left( \tau _0 + \sum_{k = 0}^\infty \frac{d_k (\tau _0 )}{a^{(k + 1)/2}} \right)a^{ - \frac{1}{2}} \right)^{a - 1} \\ & \times \exp \left( - \tau _0 a^{\frac{1}{2}} - \sum_{k = 0}^\infty \frac{d_k (\tau _0 )}{a^{k/2} } \right)\left( 1 + \sum_{k = 0}^\infty \frac{d'_k (\tau _0 )}{a^{(k + 1)/2}} \right)\end{aligned}$$ as $a\to+\infty$, where $\Gamma ^ * (a): = \Gamma (a)\e^a a^{1/2 - a} (2\pi )^{ - 1/2}$ is the so-called scaled gamma function. Taking logarithms, we can assert that $$\begin{aligned}
- \frac{\tau_0 ^2}{2} \sim & - \log \Gamma ^ * (a) + (a - 1)\log \left( 1 + \left( \tau _0 + \sum_{k = 0}^\infty \frac{d_k (\tau _0 )}{a^{(k + 1)/2} } \right)a^{ - \frac{1}{2}} \right) \\ & - \tau _0 a^{\frac{1}{2}} - \sum_{k = 0}^\infty \frac{d_k (\tau _0 )}{a^{k/2} } + \log \left( 1 + \sum_{k = 0}^\infty \frac{d'_k (\tau _0 )}{a^{(k + 1)/2} } \right).\end{aligned}$$ Expanding the right-hand side in powers of $a^{-1/2}$, applying the known asymptotic expansion of $\log \Gamma^*(a)$ [@NIST:DLMF [Eq. 5.11.1](http://dlmf.nist.gov/5.11.1)], and comparing the coefficients of powers of $a^{-1/2}$ on each side, we deduce $$\begin{gathered}
\label{altdrec}
\begin{split}
& \sum_{m = 2}^{k + 2} \frac{\left(-1\right)^m }{m}\textbf{\textsf{B}}_{k + 2,m} (\tau _0 ,d_0 (\tau _0 ), \ldots ,d_{k - m + 1} (\tau _0 )) \\ & + \sum_{m = 1}^k \frac{{\left(-1\right)^m }}{m}\left( \textbf{\textsf{B}}_{k,m} (d'_0 (\tau _0 ), \ldots ,d'_{k - m} (\tau _0 )) - \textbf{\textsf{B}}_{k,m} (\tau _0 ,d_0 (\tau _0 ), \ldots ,d_{k - m - 1} (\tau _0 )) \right) \\ & = \begin{cases} -\cfrac{{B_{2n} }}{{2n(2n - 1)}} &\mbox{if } k=4n-2, \\
0 & \mbox{otherwise.}\end{cases}
\end{split}\end{gathered}$$ Here $B_{2n}$ denotes the even-order Bernoulli numbers [@NIST:DLMF [§24.2(i)](http://dlmf.nist.gov/24.2.i)]. We can extract out the highest coefficient $d_{k - 1} (\tau _0 )$ and write the above expression as $$\begin{aligned}
& \tau _0 d_{k - 1} (\tau _0 ) - d'_{k - 1} (\tau _0 ) + \tfrac{1}{2}\sum_{m = 1}^{k - 1} {d_{m - 1} (\tau _0 )d_{k - m - 1} (\tau _0 )} \\ & + \sum_{m = 3}^{k + 2} {\frac{{\left(-1\right)^m }}{m}\textbf{\textsf{B}}_{k + 2,m} (\tau _0 ,d_0 (\tau _0 ), \ldots ,d_{k - m + 1} (\tau _0 ))}
\\ & + \sum_{m = 2}^k {\frac{{\left(-1\right)^m }}{m}\textbf{\textsf{B}}_{k,m} (d'_0 (\tau _0 ), \ldots ,d'_{k - m} (\tau _0 ))} - \sum_{m = 1}^k {\frac{{\left(-1\right)^m }}{m}\textbf{\textsf{B}}_{k,m} (\tau _0 ,d_0 (\tau _0 ), \ldots ,d_{k - m - 1} (\tau _0 ))}
\\ & = \begin{cases} -\cfrac{{B_{2n} }}{{2n(2n - 1)}} &\mbox{if } k=4n-2, \\
0 & \mbox{otherwise.}\end{cases}\end{aligned}$$ Now a simple induction argument using the properties of the partial ordinary Bell polynomials shows that the degree of $d_{k-1}(\tau_0)$ as a polynomial in $\tau_0$ is at most $k+1$. It remains to prove that the leading coefficients of these polynomials are non-zero. Let us denote $\delta _1 = 1$ and $$d_k (\tau _0 ) = \delta _{k + 2} \tau _0^{k + 2} + \cdots ,$$ for $k\geq 0$. From , we can infer that $$\sum_{m = 2}^k \frac{\left(-1\right)^m}{m}\textbf{\textsf{B}}_{k,m} (\delta _1 , \ldots ,\delta _{k - m + 1} ) = 0,\quad k \ge 3,$$ or, equivalently, $$\sum_{m = 1}^k \frac{\left(-1\right)^{m + 1}}{m}\textbf{\textsf{B}}_{k,m} (\delta _1 , \ldots ,\delta _{k - m + 1} ) = \delta _k ,\quad k \ge 3.$$ By the definition of the partial ordinary Bell polynomials, this equality translates to the following relation between formal power series $$\log \left( 1 + \sum_{k = 1}^\infty \delta _k x^k \right) = -\frac{x^2}{2} + \sum_{k = 1}^\infty \delta _k x^k .$$ By the results of the paper [@Brassesco2011], we have the explicit expression $$\delta _k = \frac{1}{k!}\left[ \frac{\d^{k - 1} }{\d x^{k - 1} }\left( \frac{x^2/2}{\e^x - x - 1} \right)^{k/2} \right]_{x = 0}$$ provided $k \ge 3$. Combining this expression with the results of the paper [@Nemes16], we find that the sequence $\delta_k$ is related to the coefficients of the large-$a$ asymptotic expansion of $\Gamma (a,a)$ as follows: $$\Gamma (a,a) \sim \sqrt {\frac{\pi }{2}} a^{a - 1/2} \e^{ - a} \left( 1 - \frac{1}{3}\sqrt {\frac{2}{\pi a}} + \sum_{k = 2}^\infty \frac{(k + 1)!\delta _{k + 1} }{2^{k/2} \Gamma \left( \frac{k}{2} + 1 \right)}\frac{1}{a^{k/2}} \right), \quad a\to +\infty.$$ The coefficients in this asymptotic expansion are known to be non-zero as a consequence of their integral representations given in [@Nemes16]. Thus, $\delta _k\neq 0$ if $k\geq 3$. The case $k=2$ is trivial since $\delta_2=\frac{1}{3}$.
We continue with the proof of Theorem \[thmzero\]. A simple manipulation of shows that $$\label{eq15}
\left(-x\right)^{a} \gamma^\ast (a,x) \sim \cos (\pi a) + \frac{\sin (\pi a)}{\pi}\left( 2\sqrt \pi F\left(2^{ - \frac{1}{2}} \tau\right)
+ \sqrt {\frac{2\pi}{-a}} \sum_{n = 0}^\infty \frac{\left(-\im\right)^n C_n (\im\tau )}{\left(-a\right)^{n/2}} \right)\exp \left( \frac{\tau ^2 }{2} \right),$$ as $a\to -\infty$, uniformly with respect to bounded real values of $\tau$ (compare [@NIST:DLMF [Eq. 8.11.11](http://dlmf.nist.gov/8.11.11)]). Here $x=a-\tau \left(-a\right)^{\frac{1}{2}}$. Now, let $x_{-}(a)$ be the unique negative zero of $\gamma^\ast (a,x)$ and write $x_{-}(a)=a-\tau \left(-a\right)^{\frac{1}{2}}$. Employing the definition of $\tau_1$, may be rewritten in the form $$\sqrt 2\exp \left( - \frac{\tau^2}{2} \right)\left( F\left(2^{ - \frac{1}{2}} \tau _1\right)\exp \left( \frac{\tau _1^2}{2} \right) - F\left(2^{ - \frac{1}{2}} \tau\right)
\exp \left( \frac{\tau ^2}{2} \right) \right) \sim \sum_{n = 0}^\infty \frac{\left(-\im\right)^n C_n (\im\tau )}{\left( - a\right)^{(n + 1)/2}} ,$$ as $a\to -\infty$. This asymptotic equality is further equivalent to $$\label{invaseq}
\sqrt {\frac{\pi }{2}} \exp \left( - \frac{\tau ^2}{2} \right)\left( \erfc(2^{ - \frac{1}{2}} \im\tau _1 ) - \erfc(2^{ - \frac{1}{2}} \im\tau ) \right) \sim \sum_{n = 0}^\infty \frac{( - \im)^{n + 1} C_n (\im\tau )}{( - a)^{(n + 1)/2}},$$ as $a\to -\infty$. Thus, the inversion problem is formally equivalent to that of , with $\im\tau$, $\im\tau_1$ and $ - a\e^{\pi\im}$ in place of $\tau$, $\tau_0$ and $a$. Therefore, if we seek a solution of the asymptotic equality in the form $$\tau \sim \tau _1 + \sum\limits_{k = 0}^\infty \frac{\widetilde d_k (\tau _1 )}{( - a)^{(k + 1)/2} },\quad a\to -\infty,$$ using the above procedure, we find that $\widetilde d_k (\tau _1 ) = - ( - \im)^k d_k (\im\tau _1 )$.
We close this section by commenting on the implementation of the recurrence relation . To evaluate the partial ordinary Bell polynomials, we use the formal generating function [@Nemes13 Appendix] $$\label{eq19a}
\exp \left( y\sum\limits_{n = 1}^\infty \alpha _n x^n \right) = \sum\limits_{k = 0}^\infty \sum\limits_{m = 0}^\infty \frac{\textbf{\textsf{B}}_{k,m} \left( \alpha _1 ,\alpha _2 , \ldots ,\alpha _{k - m + 1} \right)}{m!}x^k y^m.$$ The Taylor coefficients of this generating function can be easily obtained with any computer algebra package. We combine with to compute the $d_{k}(\tau_0)$’s. The explicit forms of the first several coefficients are presented in Table \[table2\] below. Note that when evaluating $\textbf{\textsf{B}}_{k,m}$ ($m=0,1,\ldots,k$) via , the sum $\displaystyle \sum_{n=1}^\infty$ can be replaced by the finite sum $\displaystyle \sum_{n=1}^{k+1}$.
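Equivalently to reading off the Taylor coefficients of the generating function above, one can use the fact that $\textbf{\textsf{B}}_{k,m}(\alpha_1,\alpha_2,\ldots)$ is the coefficient of $x^k$ in $\left(\sum_{n\geq 1}\alpha_n x^n\right)^m$, which yields the convolution recursion $\textbf{\textsf{B}}_{k,m}=\sum_{j=1}^{k-m+1}\alpha_j\textbf{\textsf{B}}_{k-j,m-1}$. A minimal sketch for numerical values of the $\alpha_n$'s (function name ours):

```python
def bell_ordinary(K, M, alpha):
    """B[k][m] = partial ordinary Bell polynomial B_{k,m}(alpha_1, ...),
    i.e. the coefficient of x^k in (alpha_1 x + alpha_2 x^2 + ...)^m."""
    B = [[0] * (M + 1) for _ in range(K + 1)]
    B[0][0] = 1
    for m in range(1, M + 1):
        for k in range(m, K + 1):
            # multiply one more factor of the series into the (m-1)-th power
            B[k][m] = sum(alpha[j - 1] * B[k - j][m - 1]
                          for j in range(1, k - m + 2))
    return B
```

Only $\alpha_1,\ldots,\alpha_{k-m+1}$ can appear in $\textbf{\textsf{B}}_{k,m}$, which is exactly the finite-sum truncation noted in the text.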
| $n$ | $d_n(\tau_0)$ |
|---|---|
| 0 | ${\frac{1}{3}}\tau_{0}^{2}-{\frac{1}{3}}$ |
| 1 | ${\frac{1}{36}}\tau_{0}^{3}-{\frac{7}{36}}\tau_{0}$ |
| 2 | $-{\frac{1}{270}}\tau_{0}^{4}-{\frac{7}{810}}\tau_{0}^{2}+{\frac{8}{405}}$ |
| 3 | ${\frac{1}{4320}}\tau_{0}^{5}+{\frac{8}{1215}}\tau_{0}^{3}-{\frac{433}{38880}}\tau_{0}$ |
| 4 | ${\frac{1}{17010}}\tau_{0}^{6}-{\frac{1}{840}}\tau_{0}^{4}-{\frac{923}{204120}}\tau_{0}^{2}+{\frac{184}{25515}}$ |
| 5 | $-{\frac{139}{5443200}}\tau_{0}^{7}-{\frac{1451}{48988800}}\tau_{0}^{5}+{\frac{289517}{146966400}}\tau_{0}^{3}+{\frac{289717}{146966400}}\tau_{0}$ |
| 6 | ${\frac{1}{204120}}\tau_{0}^{8}+{\frac{769}{9185400}}\tau_{0}^{6}-{\frac{151}{874800}}\tau_{0}^{4}-{\frac{104989}{55112400}}\tau_{0}^{2}+{\frac{2248}{3444525}}$ |
| 7 | $-{\frac{571}{2351462400}}\tau_{0}^{9}-{\frac{1087}{41990400}}\tau_{0}^{7}-{\frac{30469}{235146240}}\tau_{0}^{5}+{\frac{219257}{661348800}}\tau_{0}^{3}+{\frac{1500053}{846526464}}\tau_{0}$ |
| 8 | $-{\frac{281}{1515591000}}\tau_{0}^{10}+{\frac{49271}{15588936000}}\tau_{0}^{8}+{\frac{997903}{15588936000}}\tau_{0}^{6}+{\frac{101251277}{654735312000}}\tau_{0}^{4}-{\frac{96026707}{280600848000}}\tau_{0}^{2}-{\frac{19006408}{15345358875}}$ |
| 9 | ${\frac{163879}{2172751257600}}\tau_{0}^{11}+{\frac{209488529}{293321419776000}}\tau_{0}^{9}-{\frac{252836779}{20951529984000}}\tau_{0}^{7}-{\frac{15974596457}{146660709888000}}\tau_{0}^{5}-{\frac{556030221167}{2639892777984000}}\tau_{0}^{3}+{\frac{487855454729}{2639892777984000}}\tau_{0}$ |
| 10 | $-{\frac{5221}{354648294000}}\tau_{0}^{12}-{\frac{1316963}{2708223336000}}\tau_{0}^{10}-{\frac{329677}{255346771680}}\tau_{0}^{8}+{\frac{1386098437}{53622822052800}}\tau_{0}^{6}+{\frac{11414859619}{71497096070400}}\tau_{0}^{4}+{\frac{1069525622411}{3217369323168000}}\tau_{0}^{2}-{\frac{5667959576}{12567848918625}}$ |

Table: The polynomials $d_n(\tau_0)$ for $0 \le n \le 10$.\[table2\]
Numerical comparisons {#section5}
=====================
In this section, we compare the global uniform asymptotic expansion with our new transition region expansion . We denote $\lambda=z/a$ as usual.
The transition region expansion
-------------------------------
According to Proposition \[prop1\], the transition region expansion holds for $\tau=\mathcal{O}\left(|a|^\mu\right)$, as $a\to\infty$, with $\mu<\frac16$, that is, for $\lambda-1=\mathcal{O}\left(|a|^\nu\right)$, with $\nu<-\frac13$. The main reason for this is that the coefficients $C_n(\tau)$ are polynomials of degree $3n+2$, and if $\tau\sim K a^{1/6}$ held, all the terms in the divergent asymptotic expansion would be of the same order. It follows from and that if we replaced the $C_n(\tau)$'s in by their leading terms, then the resulting infinite series would actually converge. Numerical experiments show that even when $\tau\sim K a^{1/6}$, the expansion still provides reasonable approximations. Accordingly, the expansion is asymptotic when $\tau=\mathcal{O}\left(|a|^\mu\right)$ with $\mu<\frac16$, and in practice $\mu$ may be taken close to $\frac16$.
The global uniform asymptotic expansion
---------------------------------------
The global uniform asymptotic expansion has the advantage that it is valid as $a\to\infty$, uniformly with respect to $\lambda$ in the sector $|\arg \lambda|\leq 2\pi-\delta< 2\pi$. The main drawback of the expansion is that the coefficients are significantly more complicated than the $b_n(\lambda)$’s and $C_n(\tau)$’s above, which are polynomials satisfying simple recurrence relations. The evaluation of the coefficients $c_n(\eta)$ near the point $\eta=0$ ($\lambda=1$) is especially difficult because they possess a removable singularity at this point. In Appendix \[appendixb\], we give several methods to compute the $c_n(\eta)$’s in a stable manner.
In Figure \[fig1\], we compare the terms of the asymptotic series and in the case that $a=3$ and $\tau=0.1$. Since $\tau$ is small, the figure illustrates that $C_{2n}(\tau)\approx c_{n}(\eta)$, which follows from the identity $C_{2n}(0)=c_{n}(0)$. The terms first decrease in magnitude, reach a minimum, and then increase. This is the typical behaviour of Gevrey-1 divergent series (see, e.g., [@MS16]). The grey asterisks in Figure \[fig1\] show the absolute values of the remainders in after $n$ terms. As expected, they are approximately of the same size as the corresponding first neglected terms. This will also be the case for the uniform asymptotic expansion . Figure \[fig1\] suggests that with $n=34$, our approximation produces 11 correct digits, whereas the uniform asymptotic approximation gives only 9 correct digits.
Similarly, in Figure \[fig2\], we consider the case that $a=3$ and $\tau=1.1$. No significant difference is seen between Figures \[fig1\] and \[fig2\].
Finally, in Figure \[fig3\], we consider the case when $a=3$ and $\tau=1.321$, that is, when $\tau\approx 1.1 a^{1/6}$. Thus, the divergent series no longer has an asymptotic property. Although the behaviour of the terms in has clearly changed, the magnitudes of the smallest terms in and remain approximately equal.
Acknowledgement {#acknowledgement .unnumbered}
===============
The authors thank the anonymous referee for his/her helpful comments and suggestions on the manuscript.
Asymptotic expansions away from the transition points {#appendixa}
=====================================================
The outer expansions and break down near the transition point, that is, when $\lambda$ approaches $1$, since their terms become singular in this limit. Note that in Theorem \[thm1\] we take $z=a+\tau a^{1/2}$, thus $\lambda=z/a=1+\tau a^{-1/2}$. In what follows, we examine the $\tau$-region of validity of these outer expansions.
For and to have an asymptotic property we need $\left(z-a\right)^2/a=a\left(\lambda-1\right)^2$ to be large as $a\to\infty$. Hence, we require that $\lambda-1=\mathcal{O}\left(|a|^\nu\right)$, as $a\to\infty$, with $\nu>-\frac12$. Therefore, since $\tau=(\lambda-1)a^{1/2}$, we require that $\tau=\mathcal{O}\left(|a|^\mu\right)$, as $a\to\infty$, with $\mu>0$.
We can also consider the optimal number of terms of the divergent series and to illustrate the necessity of the condition $\nu>-\frac12$. For this, we need the asymptotic behaviour of the coefficients $b_n(\lambda)$ as $n\to+\infty$. We take $\lambda>0$, $\lambda \neq 1$. Then, according to [@Dingle73 Eq. 23, p. 161], [@Nemes16], we have $$\label{lateterm1}
b_n (\lambda)=\frac{1}{\sqrt {2\pi}}\frac{\Gamma \left( n+\frac{1}{2} \right)\left(\lambda-1\right)^{2n+1} }{\left(\lambda-\log\lambda-1\right)^{n + \frac{1}{2}} } \operatorname{sgn} (\lambda - 1) \left( 1 +
\mathcal{O}_\lambda\left( \frac{1}{n} \right)\right)$$ as $n \to +\infty$. Now let $n$ be the optimal number of terms of the divergent series and , i.e., the index of the numerically least term. Then, by , approximately $$1\approx\left|\frac{\frac{\left(-a\right)^{n+1}b_{n+1}(\lambda)}{\left(z-a\right)^{2n+3}}}{\frac{\left(-a\right)^{n}b_{n}(\lambda)}{\left(z-a\right)^{2n+1}}}\right|
=\left|\frac{b_{n+1}(\lambda)}{a\left(\lambda-1\right)^2b_{n}(\lambda)}\right|\sim\left|\frac{n}{a(\lambda-1-\log\lambda)}\right|,$$ as $n\to+\infty$. In the case that $\lambda-1=\mathcal{O}\left(|a|^\nu\right)$, with $-\frac12<\nu<0$, we have $2n\approx |a|^{1+2\nu}$, and the optimal number of terms shrinks to zero as $\nu$ approaches $-\frac12$.
The coefficients in the uniform asymptotic expansions {#appendixb}
=====================================================
As mentioned above, in Temme [@Temme1979] power series expansions in $\eta$ are given for the coefficients $c_n(\eta)$. Here we consider the same expansion, but also one in powers of $\lambda-1$; the latter seems more natural, and the details are simpler. In both cases, we give (new) recurrence relations for the coefficients. We introduce the expansions $$\label{eq6b}
c_n(\eta)=\sum_{k=0}^\infty e_{k,n}\left(\lambda-1\right)^k=\sum_{k=0}^\infty f_{k,n}\eta^k.$$ Observe that $c_0(\eta)$ satisfies the equation $$2\left(\lambda-\log\lambda-1\right)\left((\lambda-1)c_0(\eta)-1\right)^2=\left(\lambda-1\right)^2,$$ and since $\lambda\eta\frac{\d\eta}{\d\lambda}=\lambda -1$, the recurrence relation in can be written as $$c_n (\eta) = \frac{\gamma _n }{\lambda - 1} + \frac{\lambda}{\lambda-1}\frac{\d c_{n - 1} (\eta)}{\d\lambda},\quad n \ge 1.$$ Therefore, we obtain for the coefficient $e_{k,n}$ the recurrence relations $$\label{eq6e}
e_{k,0}=\frac{\left(-1\right)^{k+1}}{k+3}-2\sum_{l=1}^{k}\frac{\left(-1\right)^{l}}{l+2}e_{k-l,0}
-\sum_{l=1}^{k}\sum_{m=1}^{l} \frac{\left(-1\right)^{m}}{m+1}e_{k-l,0}e_{l-m,0},$$ thus $e_{0,0}=-\frac13$, and $$\label{eq6f}
e_{k,n+1}=(k+1)e_{k+1,n}+(k+2)e_{k+2,n}.$$ Hence, it is straightforward to compute the first several coefficients. Note that if we know the first $K$ Taylor coefficients of $c_n(\eta)$, then we can use to compute the first $K-2$ Taylor coefficients of $c_{n+1}(\eta)$. In each step we lose two Taylor coefficients. Fortunately, this is not a serious problem, since it is easy to compute many of the coefficients for $c_0(\eta)$ via .
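Both recurrences run naturally in exact rational arithmetic. The sketch below (names ours) builds the array $e_{k,n}$; it reproduces $e_{0,0}=-\frac13$ and, for instance, yields $e_{1,0}=\frac{1}{12}$, $e_{2,0}=-\frac{23}{540}$ and $e_{0,1}=-\frac{1}{540}$, the latter being the constant term of $c_1(\eta)$.

```python
from fractions import Fraction as F

def e_coeffs(K, N):
    """e[n][k] = e_{k,n}: coefficient of (lambda-1)^k in c_n(eta),
    generated with exact rationals from the recurrences in the text."""
    e0 = []
    for k in range(K):
        v = F((-1) ** (k + 1), k + 3)
        v -= 2 * sum(F((-1) ** l, l + 2) * e0[k - l] for l in range(1, k + 1))
        v -= sum(F((-1) ** m, m + 1) * e0[k - l] * e0[l - m]
                 for l in range(1, k + 1) for m in range(1, l + 1))
        e0.append(v)
    e = [e0]
    for _ in range(N):                  # each step loses two coefficients
        prev = e[-1]
        e.append([(k + 1) * prev[k + 1] + (k + 2) * prev[k + 2]
                  for k in range(len(prev) - 2)])
    return e
```

For example, `e_coeffs(9, 2)[0][:3]` gives $-\frac13$, $\frac1{12}$, $-\frac{23}{540}$.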
Regarding the coefficients $f_{k,n}$ in , we observe that $c_0(\eta)$ satisfies the differential equation $$\eta \frac{\d c_{0} (\eta)}{\d\eta}+\eta^2 c_{0}^3 (\eta)+\left(\eta^2+3\eta\right)c_{0}^2 (\eta)+(2\eta+3)c_{0} (\eta)+1=0,$$ and that we can rewrite as $$\label{eq6h}
c_n (\eta) = \gamma _n c_0 (\eta ) + \frac{1}{\eta}\left(\frac{\d c_{n - 1} (\eta )}{\d\eta}+\gamma_n\right).$$ Therefore, we obtain for the coefficient $f_{k,n}$ the recurrence relations $$\label{eq6j}
-(k+3)f_{k,0}=2f_{k-1,0}+3\sum_{l=1}^{k}f_{l-1,0}f_{k-l,0}+\sum_{l=2}^{k}f_{l-2,0}f_{k-l,0}+\sum_{l=2}^{k}\sum_{m=0}^{k-l}f_{l-2,0}f_{m,0}f_{k-l-m,0},$$ with $f_{0,0}=-\frac13$, and $$\label{eq6i}
f_{k,n+1}=(k+2)f_{k+2,n}-f_{1,n}f_{k,0}.$$ Note that from it follows that $\gamma_{n+1}+f_{1,n}=0$, and we have used this to express the final term in in terms of $f_{1,n}$.
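The $\eta$-series recurrences can be run in the same exact-arithmetic fashion (sketch ours). As a consistency check, they give $f_{0,0}=-\frac13$, $f_{1,0}=\frac1{12}$, $f_{2,0}=-\frac{2}{135}$ and $f_{0,1}=-\frac{1}{540}$, so the $\eta$-series produces the same constant term for $c_1(\eta)$ as the $\lambda$-series.

```python
from fractions import Fraction as F

def f_coeffs(K, N):
    """f[n][k] = f_{k,n}: coefficient of eta^k in c_n(eta), from the
    recurrences in the text (exact rational arithmetic)."""
    f0 = [F(-1, 3)]                     # f_{0,0} is given
    for k in range(1, K):
        rhs = 2 * f0[k - 1]
        rhs += 3 * sum(f0[l - 1] * f0[k - l] for l in range(1, k + 1))
        rhs += sum(f0[l - 2] * f0[k - l] for l in range(2, k + 1))
        rhs += sum(f0[l - 2] * f0[m] * f0[k - l - m]
                   for l in range(2, k + 1) for m in range(0, k - l + 1))
        f0.append(F(-1, k + 3) * rhs)
    f = [f0]
    for _ in range(N):
        prev = f[-1]
        # gamma_{n+1} = -f_{1,n} has been used to eliminate gamma_{n+1}
        f.append([(k + 2) * prev[k + 2] - prev[1] * f0[k]
                  for k in range(len(prev) - 2)])
    return f
```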
In , the first series converges for $|\lambda-1|<1$ and the second for $|\eta|<2\sqrt\pi$. Hence, although the $\lambda$-sum has the advantage that it is in terms of the original parameter, the $\eta$-sum has the advantage that it converges in a larger region.
Another way to compute the $c_n(\eta)$’s in a stable manner is to use the integral representation [@Dunster1998] $$c_n(\eta)=\frac{\im\left(-1\right)^n\Gamma(n+\frac12)}{\left(2\pi\right)^{\frac32}}
\oint_{\{1,\lambda\}}\frac{\,\d t}{(t-\lambda)\left(t-1-\log t\right)^{n+\frac12}},$$ where the contour of integration is a simple loop surrounding the points $1$ and $\lambda$ in the positive sense. Taking $\lambda$ close to $1$, we can choose for the contour a circle with centre $1$ and radius $r<1$. According to the paper [@TW14], the trapezoidal rule converges exponentially fast for this type of integral and we obtain a simple method to compute the coefficients via $$\label{eq19d}
c_n(\eta)\approx \frac{\Gamma(n+\frac12)}{2M\sqrt{2\pi}}
\sum_{m=1-M}^M\frac{\sqrt{\displaystyle\frac{\omega_m^2}{\omega_m-\log(\omega_m+1)}}}{(\lambda-\omega_m-1)
\left(\log(\omega_m+1)-\omega_m\right)^n},~~~\textrm{where}~~\omega_m=r\e^{\pi\im m/M}.$$ The introduction of the factor $\omega_m^2$ inside the square root makes it single-valued, and therefore computer algebra packages can work with without any problems.
S. Brassesco, M. A. Méndez, The asymptotic expansion for $n!$ and the Lagrange inversion formula, *Ramanujan J.* **24** (2011), no. 2, pp. 219–234.
R. B. Dingle, *Asymptotic Expansions: Their Derivation and Interpretation*, Academic Press, London, 1973.
T. M. Dunster, Asymptotics of the generalized exponential integral, and error bounds in the uniform asymptotic smoothing of its Stokes discontinuities, *Proc. Roy. Soc. London Ser. A* **452** (1996), no. 1949, pp. 1351–1367.
T. M. Dunster, R. B. Paris, S. Cang, On the high-order coefficients in the uniform asymptotic expansion for the incomplete gamma function, *Methods Appl. Anal.* **5** (1998), no. 3, pp. 223–247.
W. Gautschi, Exponential integral $\int_1^\infty \e^{-xt} t^{-n} \d t$ for large values of $n$, *J. Res. Nat. Bur. Standards* **62** (1959), pp. 123–125.
M. B. Giles, Algorithm 955: Approximation of the inverse Poisson cumulative distribution function, *ACM Trans. Math. Softw.* **42** (2016), no. 1, Article 7, 22 pp.
C. Ferreira, J. L. López, E. Pérez-Sinusía, Incomplete gamma functions for large values of their variables, *Adv. in Appl. Math.* **34** (2005), no. 3, pp. 467–485.
H. E. Fettis, An asymptotic expansion for the upper percentage points of the $\chi^2$-distribution, *Math. Comp.* **33** (1979), no. 147, pp. 1059–1064.
N. L. Johnson, S. Kotz, N. Balakrishnan, *Continuous Univariate Distributions. Vol. 1*, second ed., Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, John Wiley & Sons, Inc., New York, 1994.
K. S. Kölbig, On the zeros of the incomplete gamma function, *Math. Comp.* **26** (1972), no. 119, pp. 751–755.
M. A. Lukas, Performance criteria and discrimination of extreme undersmoothing in nonparametric regression, *J. Stat. Plan. Inference* **153** (2014), pp. 56–74.
K. Mahler, Über die Nullstellen der unvollständigen Gammafunktionen, *Rend. del Circ. Matem. Palermo* **54** (1930), pp. 1–41.
C. Mitschi, D. Sauzin, *Divergent Series, Summability and Resurgence. I*, Lect. Notes Math. **2153**, Springer, 2016.
A. Navarra, C. M. Pinotti, V. Ravelomanana, F. Betti Sorbelli, R. Ciotti, Cooperative training for high density sensor and actor networks, *J. Sel. Areas Commun.* **28** (2010), no. 5, pp. 753–763.
G. Nemes, An explicit formula for the coefficients in Laplace’s method, *Constr. Approx.* **38** (2013), no. 3, pp. 471–487.
G. Nemes, The resurgence properties of the incomplete gamma function II, *Stud. Appl. Math.* **135** (2015), no. 1, pp. 86–116.
G. Nemes, The resurgence properties of the incomplete gamma function, I, *Anal. Appl. (Singap.)* **14** (2016), no. 5, pp. 631–677.
*NIST Digital Library of Mathematical Functions*. <http://dlmf.nist.gov/>, Release 1.0.19 of 2018-06-22. F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, and B. V. Saunders, eds.
A. B. Olde Daalhuis, On the resurgence properties of the uniform asymptotic expansion of the incomplete gamma function, *Methods Appl. Anal.* **5** (1998), no. 4, pp. 425–438.
V. I. Pagurova, An asymptotic formula for the incomplete gamma function, *U.S.S.R. Comput. Math. and Math. Phys.* **5** (1965), no. 1, pp. 162–166.
G. Paillard, V. Ravelomanana, Limit theorems for degree of coverage and lifetime in large sensor networks, *IEEE INFOCOM 2008 - The 27th Conference on Computer Communications*, Phoenix, Arizona, USA, 2008, pp. 106–110.
P. Palffy-Muhoray, E. G. Virga, X. Zheng, Onsager’s missing steps retraced, *J. Phys.: Condens. Matter* **29** (2017), no. 47, Article 475102, 13 pp.
R. B. Paris, Error bounds for the uniform asymptotic expansion of the incomplete gamma function, *J. Comput. Appl. Math.* **147** (2002), no. 1, pp. 215–231.
R. B. Paris, A uniform asymptotic expansion for the incomplete gamma function, *J. Comput. Appl. Math.* **148** (2002), no. 2, pp. 323–339.
R. B. Paris, A uniform asymptotic expansion for the incomplete gamma functions revisited, preprint, [arXiv:1611.00548](https://arxiv.org/abs/1611.00548)
V. Ravelomanana, Extremal properties of three-dimensional sensor networks with applications, *IEEE Trans. Mobile Comput.* **3** (2004), no. 3, pp. 246–257.
N. M. Temme, The asymptotic expansion of the incomplete gamma functions, *SIAM J. Math. Anal.* **10** (1979), no. 4, pp. 757–766.
N. M. Temme, The uniform asymptotic expansion of a class of integrals related to cumulative distribution functions, *SIAM J. Math. Anal.* **13** (1982), no. 2, pp. 239–253.
N. M. Temme, Asymptotic inversion of incomplete gamma functions, *Math. Comp.* **58** (1992), no. 198, pp. 755–764.
N. M. Temme, Computational aspects of incomplete gamma functions with large complex parameters, *Approximation and Computation: A Festschrift in Honor of Walter Gautschi* (R. V. M. Zahar, ed.), Internat. Ser. of Numer. Math., Vol. 119, 1994, pp. 551–562.
N. M. Temme, Uniform asymptotics for the incomplete gamma functions starting from negative values of the parameters, *Methods Appl. Anal.* **3** (1996), no. 3, pp. 335–344.
I. Thompson, A note on the real zeros of the incomplete gamma function, *Integral Transforms Spec. Funct.* **23** (2012), no. 6, pp. 445–453.
L. Trailović, L.Y. Pao, Computing budget allocation for efficient ranking and selection of variances with application to target tracking algorithms, *IEEE Trans. Automat. Contr.* **49** (2004), no. 1, pp. 59–67.
L. N. Trefethen, J. A. C. Weideman, The exponentially convergent trapezoidal rule, *SIAM Rev.* **56** (2014), pp. 385–458.
F. G. Tricomi, Asymptotische Eigenschaften der unvollständigen Gammafunktion, *Math. Z.* **53** (1950), pp. 136–148.
[^1]: The authors’ research was supported by a research grant (GRANT11863412/70NANB15H221) from the National Institute of Standards and Technology.
|
---
abstract: 'Linear Programs (LPs) appear in a large number of applications, and offloading them to the GPU is a viable way to gain performance. Existing work on offloading and solving an LP on a GPU suggests that performance is gained only from large LPs (typically 500 constraints, 500 variables and above). In order to gain performance from the GPU for applications involving small to medium sized LPs, we propose batched solving of a large number of LPs in parallel. In this paper, we present the design and CUDA implementation of our batched LP solver library, with coalescent memory access, reduced CPU-GPU memory transfer latency and load balancing as the design goals. The performance of the batched LP solver is compared against sequential solving on the CPU using the open source solver GLPK (GNU Linear Programming Kit). The performance is evaluated for three types of LPs. The first type has a feasible initial basic solution, the second type has an infeasible initial basic solution, and the third type has a hyperbox as the feasible region. For the first type, we show a maximum speedup of $18.3\times$ when running a batch of $50k$ LPs of size $100$ ($100$ variables, $100$ constraints). For the second type, a maximum speedup of $12\times$ is obtained with a batch of $10k$ LPs of size $200$. For the third type, we show a significant speedup of $63\times$ in solving a batch of nearly $4$ million LPs of size $5$ and $34\times$ in solving $6$ million LPs of size $28$. In addition, we show that GLPK, the open source library for solving linear programs, can be easily extended to solve many LPs in parallel with multi-threading. The thread-parallel GLPK implementation runs $9.6\times$ faster in solving a batch of $10^5$ LPs of size $100$ on a $12$-core Intel Xeon processor. We demonstrate the application of our batched LP solver in the domain of state-space exploration of mathematical models of control systems design.'
address: 'Department of Computer Science & Engineering, National Institute of Technology Meghalaya, Shillong - 793003, India'
author:
- 'Amit Gurung\*'
- Rajarshi Ray
bibliography:
- 'mybibfile.bib'
title: Solving Batched Linear Programs on GPU and Multicore CPU
---
Linear programming, Batched linear programs, GPU, Simplex method, Pivot selection rules, GLPK library
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was supported by the National Institute of Technology Meghalaya, India, and by the DST-SERB, GoI, under project grant No. YSS/2014/000623.
|
---
abstract: 'We have observed the Kondo effect in strongly coupled semiconducting nanowire quantum dots. The devices are made from indium arsenide nanowires, grown by molecular beam epitaxy, and contacted by titanium leads. The device transparency can be tuned by changing the potential on a gate electrode; as the transparency increases, the effect dominating the transport changes from Coulomb Blockade to Universal Conductance Fluctuations, with Kondo physics appearing in the intermediate region.'
author:
- |
Thomas Sand Jespersen,$^{1\ast}$ Martin Aagesen$^{1}$, Claus Sørensen$^{1}$, Poul Erik Lindelof$^{1}$, Jesper Nygård$^{1}$
title: Kondo physics in tunable semiconductor nanowire quantum dots
---
As building blocks for quantum devices, semiconducting nanowires offer an appealing combination of properties not found in other systems. These properties include a wide range of material parameters of the nanowire itself (e.g. the vanishing nuclear spin of Si-nanowires or the high $g$-factor and strong spin-orbit coupling in InAs) and the wire geometry, which offers a wide choice of contact materials (e.g. superconductors or magnetic materials). Furthermore, nanowires offer the possibility of changing the crystal composition along the wire to facilitate e.g. tailored optical properties. In order to exploit these properties it is important to investigate basic quantum dot (QD) phenomena in nanowire devices. Already, single electron transport has been demonstrated in weakly coupled nanowire quantum dots in the Coulomb Blockade (CB) regime[@Defranceschi:2003; @Bjork:2004; @Zhong:2005], and Universal Conductance Fluctuations (UCF), Weak Localization and Anti-Localization[@Hansen:2005] have been observed in low resistance nanowire devices. As a key example of a many-body effect in solid state physics, the Kondo effect in quantum dots has received considerable theoretical and experimental attention, but has so far remained elusive in nanowires, partly because of the lack of barrier tunability. We here report on semiconducting nanowire devices with electrostatically tunable tunnel barriers, and the observation of the Kondo effect in the regime where the device acts as a strongly coupled quantum dot. We find that the Landé $g$-factor is significantly different from the bulk value for InAs due to the strong confinement of the wire. Moreover, indications of non-equilibrium Kondo effects are observed.\
Our devices are based on semiconducting InAs-nanowires grown by molecular beam epitaxy (MBE) from *in-situ* deposited Au-catalyst particles. The wire diameters are $\sim 70\,\mathrm{nm}$ and the lengths $\mathrm{2-5}\,\mu \mathrm m$. These are the first measurements on MBE-grown InAs-nanowires and we expect that this well-established technique for growing ultra high quality III-V materials will produce wires of higher crystal quality and lower impurity concentrations than wires grown by other methods, thus resulting in cleaner systems for low temperature transport. Details on the growth and characterization of the wires will be published elsewhere [@aagesen:2006]. The wires are transferred from the growth substrate to a highly doped Si-substrate with a $500\, \mathrm{nm}$ insulating $\mathrm{SiO}_2$ cap-layer by gently pressing the two wafers together. By optical microscopy the wires are located with respect to predefined alignment marks, and contacts are defined by E-beam lithography and subsequent evaporation of Ti/Au ($10\, \mathrm{nm}$/$60\,\mathrm{nm}$). To obtain clean metal-nanowire interfaces the devices are treated by a $10\,\mathrm{sec.}$ oxygen plasma etch followed by a $5\,\mathrm{sec.}$ wet etch in buffered hydrofluoric acid prior to metal evaporation. Omitting these last steps results in high resistance devices. Figure 1(a) shows a device schematic and Fig. 1(b) a scanning electron micrograph of a typical device. The substrate acts as a back-gate electrode and for a given temperature $T$ and external perpendicular magnetic field $B$ the differential conductance $dI/dV_{sd}$ is measured as a function of the applied source-drain voltage $V_{sd}$ and back-gate potential $V_g$ using standard lock-in methods.\
Fig. 1(c) shows measurements of the linear response conductance $G$ as a function of $V_g$ at different temperatures for a device with $L=200 \, \mathrm{nm}$ electrode separation. At room temperature (inset) the conductance varies monotonically from $0.9\,e^2/h$ at $V_g=-10\,\mathrm V$ to $2.8\,e^2/h$ at $V_g=10\,\mathrm V$, thereby identifying the carriers as *n*-type (here $e$ is the electron charge and $h$ Planck's constant). At lower temperatures the slope of the $G(V_g)$-trace increases but the conductance at $V_g = 10\,\mathrm V$ remains effectively unchanged. This indicates a low barrier at the contacts at $V_g = 10\,\mathrm V$, increasing with smaller gate-voltages. At $0.3\,\mathrm{K}$ (main panel), the behavior is as follows: For $V_g \lesssim -7.5\,\mathrm V$ the device behaves as a quantum dot in the CB regime with large tunnel barriers between the leads and the dot, with transport exhibiting sharp peaks separated by regions of zero conductance[@Kouwenhoven:1997]. For $V_g \gtrsim 2\,\mathrm V$ reproducible oscillations due to UCF are observed, indicating low barriers between the leads and the QD. In the intermediate region the CB peaks are broadened due to a stronger coupling of the QD to the leads, and Kondo physics emerges as discussed below. The periodic pattern of CB-peaks shows that the device behaves as a single dot rather than multiple dots in series[@Defranceschi:2003]. The average peak separation $\Delta V_g \approx 80\, \mathrm{mV}$ determines the capacitance from the dot to the back gate, $C_g = e/\Delta V_g \sim 2\,\mathrm{aF}$, and the linear dependence of $C_g$ on the contact separation (Fig. 1(d)) shows that the QD is defined by barriers at the contacts rather than by defects along the wire, which would result in random dot size fluctuations. The curve extrapolates to zero at $L_0 \approx 100 \,\mathrm{nm}$, suggesting a significant depleted region at the contacts. For metal contacts to bulk InAs it has been shown that the Fermi level pins in the conduction band (resulting in good electrical contact[@Bhargava:1997]), and assuming a similar behavior for InAs nanowires we believe that the QD is formed in the middle of the wire segment between the contacts due to the nonuniform electrical potential produced by the backgate and the grounded contacts.\
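The gate-capacitance arithmetic above can be checked directly. A minimal sketch, using only the peak spacing quoted in the text:

```python
# Estimate the dot-to-back-gate capacitance from the Coulomb-blockade
# peak spacing via C_g = e / dV_g, as stated in the text.
E = 1.602e-19   # elementary charge (C)

dV_g = 80e-3    # average CB peak separation (V), from the measurement
C_g = E / dV_g  # dot-to-back-gate capacitance (F)

print(f"C_g = {C_g * 1e18:.1f} aF")  # ~2 aF, matching the quoted value
```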
![(color online) (a) $G$ vs. $V_g$ for temperatures $T=300\, \mathrm{mK}$ (black) and $T\approx 800\, \mathrm{mK}$ (red) showing overlapping CB-peaks. Alternating high/low valley conductance with corresponding decreasing/increasing behavior for increasing sample temperature (arrows) is observed. The peak spacings $\Delta V_g$ (inset) and their magnetic field dependence \[panel (b)\] follow a similar even-odd pattern. (c) Corresponding stability diagram (white\[black\] corresponds to $0e^2/h$ \[$1.5e^2/h$\]) showing faintly visible CB-diamonds indicated by black lines and marked E/O according to even/odd electron occupation number $N$. A zero-bias Kondo resonance is observed through each odd-$N$ CB diamond. The truncation of diamonds (horizontal features) indicates the onset of inelastic co-tunneling.](figure2n.eps){width="8.5cm"}
We now focus on the intermediate gate-region of Fig. 1(c). The black and red traces of Fig. 2(a) show $G(V_g)$ for $T = 300\,\mathrm{mK}$ and $T \approx 800 \,\mathrm{mK}$, respectively, and a series of broadened CB peaks is observed, with each valley corresponding to a fixed number of electrons ($N$) on the dot. The non-zero valley conductance $G_v$ indicates a significant contribution to the conductance from elastic co-tunneling processes and exhibits an alternating pattern where high-$G_v$ valleys are followed by valleys of lower $G_v$ and vice versa. The even-odd pattern repeats in the addition energies (peak separations $\Delta V_g$, see inset), suggesting a twofold degeneracy of the dot levels. The electron spin is identified as the origin of this degeneracy by measuring the evolution of the peaks in a magnetic field as shown in Fig. 2(b). Valleys corresponding to odd(even) $N$ are expected to widen(shrink) by the Zeeman splitting $g\mu_BB$[@Cobden:1998] ($\mu_B$ is the Bohr magneton), and this pairing behavior is readily observed. Figure 2(c) shows $dI/dV_{sd}$ as a function of $V_{sd}$ and $V_g$ (stability diagram). A pattern of low conductance CB-diamonds (dashed lines) is visible, and the diamonds are marked E/O according to even/odd $N$ as determined from the magnetic field dependence. Pronounced conductance ridges at zero source-drain bias appear in every odd-$N$ diamond. These observations are consistent with the Kondo effect: Whenever the dot holds an unpaired electron (odd $N$), its spin is screened by the conduction electrons in the contacts through multiple spin-flip tunnel events, giving rise to a transport resonance at zero bias[@Glazman:2005; @GoldhaberNature:1998]. When an even number of electrons reside on the dot the net spin is zero and no Kondo screening takes place.\
In addition to the zero-bias resonances in the odd-$N$ diamonds, the Kondo effect is distinguished by unique magnetic field and temperature dependencies. The arrows in Fig. 2(a) indicate the temperature dependence of the valley conductance: In the absence of Kondo correlations (even $N$) $G_v$ increases upon heating of the sample, as for usual CB [@Kouwenhoven:1997], but in odd-$N$ valleys $G_v$ decreases. The temperature dependence of a Kondo peak is described by the interpolation function $G(T) = G_0\big(T_K'^2/(T^2 + T_K'^2)\big )^s$. Here $T_K' = T_K/(2^{1/s}-1)^{1/2}$, $s = 0.22$ is the value expected for a spin-half system[@GoldhaberGordon:1998], and $T_K$ is the so-called Kondo temperature. The inset to Fig. 3 shows a stability diagram measured in a different cool-down of the same device. A particularly strong Kondo resonance is observed, and Fig. 3 shows $dI/dV_{sd}$ vs. $V_{sd}$ through the middle of the ridge at $300\, \mathrm{mK}$ (lower curve) and the valley conductance at different temperatures (upper curve). The solid line shows a fit to the formula above; the agreement is excellent, yielding $T_K = 2.1\,\mathrm{K}$ and $s = 0.22$, supporting the Kondo nature of the ridge. The Kondo temperature can also be estimated from the full width at half maximum $\Gamma \approx 2 k_B T_K /e$ of the Kondo peak, and for the data in Fig. 3 we find $\Gamma = 0.34 \,\mathrm{mV}$ and thus $T_K = \Gamma e /2k_B = 2.2\,\mathrm K$, in good agreement with the estimate above.\
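The empirical interpolation formula and the width-based estimate can be evaluated in a few lines. A sketch using the fitted values quoted above ($T_K = 2.1\,\mathrm{K}$, $s = 0.22$):

```python
# Kondo interpolation formula:
#   G(T) = G0 * (T_K'^2 / (T^2 + T_K'^2))**s,  T_K' = T_K / sqrt(2**(1/s) - 1).
# By construction G(T_K) = G0/2, which is what defines T_K in this convention.
import math

def kondo_G(T, G0, T_K, s=0.22):
    T_Kp = T_K / math.sqrt(2.0 ** (1.0 / s) - 1.0)
    return G0 * (T_Kp ** 2 / (T ** 2 + T_Kp ** 2)) ** s

G0, T_K = 1.0, 2.1           # zero-temperature conductance (arb. units), fitted T_K (K)
print(kondo_G(T_K, G0, T_K))  # = G0/2 by construction

# Independent estimate from the peak width, T_K = Gamma * e / (2 k_B):
e, k_B = 1.602e-19, 1.381e-23
Gamma = 0.34e-3               # FWHM of the Kondo peak (V), from the text
print(Gamma * e / (2 * k_B))  # Kondo temperature in kelvin, of order 2 K
```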
In a magnetic field the spin-degenerate states of the quantum dot are split by the Zeeman splitting $g\mu_BB$, which for Kondo resonances leads to peaks at $V_{sd} = \pm g\mu_BB/e$, independent of gate voltage[@Glazman:2005]. The insets to Fig. 4(a) show stability diagrams of a Kondo ridge measured at zero field (left inset) and at $B = 0.5\,\mathrm{T}$ (right). A gate-independent splitting of the ridge is observed, and the vertical traces through the middle of the diamond for magnetic fields of $0 \mathrm{-} 0.9\,\mathrm T$ (Fig. 4(a)) show the evolution of the splitting. Different methods for determining $g$ from the splitting of the Kondo peak have been suggested in the literature. Refs. [@Cronenwett:1998; @Kogan:2004; @Hewson:2006] suggest using the distance $\Delta_K$ between the peaks in $dI/dV$; however, for the weak coupling regime theoretical work has suggested the use of the distance $\delta_K$ between peaks in $d^2I/dV^2$ (steepest points in $dI/dV_{sd}$)[@Paaske:2004]. Figure 4(b) shows $\delta_K$ (open squares) and $\Delta_K$ (solid squares) extracted from the data of Fig. 4(a) and measurements at higher fields. Both $\Delta_K$ and $\delta_K$ show a clear linear dependence, and the parameters of the linear fits are shown in the inset, with $g$ calculated from the slope and $\Delta_0$ the extrapolated splitting at $B=0\, \mathrm T$. Both methods give $g \approx 8$. The considerable downshift of the $g$-factor with respect to that of bulk InAs ($|g_{bulk}| = 15$) reflects the confinement of the nanowire geometry and agrees with measurements of $g$ in InAs nanowire QDs in the CB-regime[@Bjork:2005].\
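Since the Kondo peaks sit at $V_{sd} = \pm g\mu_B B/e$, their separation grows as $\Delta_K = 2g\mu_B B/e$ and the $g$-factor follows from the slope of $\Delta_K$ vs. $B$. A sketch of that extraction; the slope value below is a hypothetical example chosen to reproduce $g \approx 8$, not a number from the paper:

```python
# Extract g from the field dependence of the Kondo-peak splitting:
# Delta_K = 2 * g * mu_B * B / e, so g = slope / (2 * mu_B)
# with mu_B expressed in eV/T (the factor 1/e is then absorbed).
mu_B = 5.788e-5         # Bohr magneton (eV/T)

slope = 0.93e-3         # d(Delta_K)/dB in V/T -- hypothetical example value
g = slope / (2 * mu_B)  # splitting between the two peaks is 2*g*mu_B*B/e
print(f"g = {g:.1f}")
```

The cotunneling ridge discussed below splits by only $\Delta_c = g\mu_B B$, i.e. its slope lacks the factor of 2, which is why it gives an independent estimate of the same $g$.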
At a critical bias voltage $V_{sd} = V_c$ given by the excitation energy for the dot, inelastic cotunneling processes which leave the dot in an excited state set in, giving rise to horizontal ridges at finite bias within the Coulomb diamonds (Fig. 2(c) and Fig. 4). Figure 4(a) shows a sharp peak at zero field appearing at this onset ($V_c \approx 1.25\,\mathrm{mV}$), in addition to the zero bias Kondo peak (odd $N$). In a magnetic field the doublet excited state splits by $\Delta_c = g \mu_B B$, i.e. half that of the Kondo peak, allowing for an independent estimate of $g$. The splitting is readily observed in Fig. 4(a) and the linear dependence of $\Delta_c$ vs. $B$ (Fig. 4(b)) gives a $g$-factor of $8.5 \pm 0.7$ consistent with that measured from the Kondo peak.\
We note that non-equilibrium population of the excited states at finite bias[@Paaske:2004] may result in a broad cusp at the cotunneling onset rather than a simple finite-bias cotunneling step. This mechanism, however, cannot alone account for the peak in Fig. 4(a), which is considerably narrower than the threshold bias, and we propose that this peak is a signature of a Kondo resonance existing out of equilibrium. For carbon nanotubes, a detailed quantitative analysis recently showed that for an even-$N$ quantum dot inelastic cotunneling processes can result in a non-equilibrium singlet-triplet Kondo effect, accompanying transitions between the singlet ground state ($S=0$) and an excited triplet state ($S=1$)[@Paaske:2006]. The Kondo correlations are in this case indicated by peaks at the cotunneling onset being narrower than the threshold bias. We see similar sharp peaks in our devices in even-$N$ diamonds; however, for the odd-$N$ example in Fig. 4(a), the effect coexists with the zero-bias Kondo effect and the cotunneling peak is intensified by (spin-flipping) transitions promoting the dot into excited states with the same total spin $S=1/2$ as the ground state[^1]. No theoretical models have been solved to allow for a quantitative description of this scenario, but presumably the situation is readily achieved in other Kondo dot systems and it deserves theoretical attention.\
The data illustrate that MBE-grown InAs nanowires can constitute highly tunable quantum dots with a rich behavior, including strongly correlated electronic states and Kondo physics. Data from two devices are presented; however, the results are not unique to these: of the 9 devices investigated so far, the Kondo effect was observed in 4, while the remaining 5 exhibited only CB. Thus, given the recent advances in making superconducting contacts to nanowires[@Doh:2005], the present results show the promise of using nanowires for studying, in a semiconductor system, the interplay between two of the most pronounced many-body effects in solid state physics: superconductivity and the Kondo effect. This subject has so far been experimentally addressable only in carbon nanotube based systems[@Buitelaar:2002].\
For helpful discussions we acknowledge K. Grove-Rasmussen, J. Paaske, C. Marcus, K. Flensberg, C. Flint, and E. Johnson.
[10]{}
De Franceschi, S., van Dam, J. A., Bakkers, E. P. A. M., Feiner, L. F., Gurevich, L., and Kouwenhoven, L. P. , 344 (2003).
Bj[ö]{}rk, M. T., Thelander, C., Hansen, A. E., Jensen, L. E., Larsson, M. W., Wallenberg, L. R., and Samuelson, L. , 1621 (2004).
Zhong, Z., Fang, Y., Lu, W., and Lieber, C. , 1143 (2005).
Hansen, A. E., Bj[ö]{}rk, M., Fasth, C., Thelander, C., and Samuelson, L. , 205328 (2005).
Aagesen, M. .
Kouwenhoven, L., Marcus, C., McEuen, P., Tarucha, S., Westervelt, R., and Wingreen, N. In [*Mesoscopic Electron Transport,* ]{}Sohn, L., Kouwenhoven, L., and Sch[ö]{}n, G., editors, 105–214. Proceedings of the NATO Advanced Study Institute on Mesoscopic Electron Transport, (1997).
Bhargava, S., Blank, H., Narayanamurti, V., and Kroemer, H. , 759 (1997).
Cobden, D. H., Bockrath, M., Mceuen, P. L., Rinzler, A. G., and Smalley, R. E. , 681 (1998).
Glazman, L. and Pustilnik, M. *Low-temperature transport through a quantum dot*, H. Bouchiat *et al.*, eds. (Lectures notes of the Les Houches Summer School 2004), pp. 427-478. cond-mat/0501007.
Goldhaber-Gordon, D., Shtrikman, H., Mahalu, D., Abusch-Magder, D., Meirav, U., and Kastner, M. , 156 (1998).
Goldhaber-Gordon, D., Gores, J., Kastner, M. A., Shtrikman, H., Mahalu, D., and Meirav, U. , 5225 (1998).
Kogan, A., Amasha, S., Goldhaber-Gordon, D., Granger, G., Kastner, M. A., and Shtrikman, H. , 166602 (2004).
Cronenwett, S., Oosterkamp, T., and Kouwenhoven, L. , 540 (1998).
Hewson, A. C., Bauer, J., and Koller, W. , 045117 (2006).
Paaske, J., Rosch, A., and Wolfle, P. , 155330 (2004).
Bj[ö]{}rk, M. T., Fuhrer, A., Hansen, A. E., Larsson, M. W., Froberg, L. E., and Samuelson, L. , 201307(R) (2005).
Paaske, J., Rosch, A., Wolfle, P., Mason, N., Marcus, C., and Nyg[å]{}rd, J. , 460 (2006).
Doh, Y. J., van dam, J. A., Roest, A. L., Bakkers, E. P. A. M., Kouwenhoven, L. P., and De Franceschi, S. , 272 (2005).
Buitelaar, M. R., Nussbaumer, T., and Sch[ö]{}nenberger, C. , 256801 (2002).
[^1]: For a dot with nearly equidistantly spaced levels, such as the region analyzed in Fig. 2(a) (inset), two different excited orbital states can contribute to this resonance, cf. the schematic in Fig. 4(b)
---
abstract: 'Strategic decision making involves affective and cognitive functions like reasoning and cognitive and emotional empathy, which may be subject to age and gender differences. However, empathy-related changes in strategic decision-making and their relation to age, gender and neuropsychological functions have not been studied widely. In this article, we study a one-shot prisoner dilemma from a psychological game theory viewpoint. Forty seven participants (28 women and 19 men), aged 18 to 42 years, were tested with an empathy questionnaire and a one-shot prisoner dilemma questionnaire comprising a closeness option with the other participant. The percentage of cooperation and defection decisions was analyzed. A new empathetic payoff model was fitted to the observations to test whether multi-dimensional empathy levels matter in the outcome. A significant level of cooperation is observed in the experimental one-shot game. The collected data suggest that perspective taking, empathic concern and fantasy scale are strongly correlated and have an important effect on cooperative decisions. However, their effect on the payoff is not additive. Mixed scales as well as other non-classified subscales (25+8 out of 47) were observed in the data.'
author:
- 'Giulia Rossi, Alain Tcheukam and Hamidou Tembine [^1]'
title: ' Empathy in One-Shot Prisoner Dilemma'
---
[**Keywords:**]{} Empathy, other-regarding payoff, cooperation
Introduction
============
In recent years a growing field of behavioral game studies has started to emerge from several academic perspectives. Some of these approaches and disciplines, such as neuroscience, social psychology and artificial intelligence, have already produced major collections of studies and experiments on empathy. In the context of strategic interaction, empathy may play a key role in the decision-making and the outcome of the game.
Widely known results in repeated games {#widely-known-results-in-repeated-games .unnumbered}
---------------------------------------
For long-run interactions under suitable monitoring assumptions it has been shown that cooperative outcomes may emerge as time goes on. This is known as the “Folk Theorem" or general feasibility theorem (see [@folk1; @folk2]). For example, the Tit-For-Tat strategy, which consists of starting the game by cooperating and then doing whatever the other participant did on the previous iteration, leads to a partial cooperation between the players. While cooperation may emerge by means of repeated long-run interactions under observable plays, there is very little study on how cooperation can be possible in one-shot games.
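The repeated-game mechanism referred to above can be illustrated with a minimal simulation. The payoff values below are the standard textbook prisoner's dilemma numbers, not values from this study:

```python
# Iterated prisoner's dilemma with two Tit-For-Tat players: each starts by
# cooperating ('C') and then copies the opponent's previous move, so mutual
# cooperation is sustained for the whole run.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    return 'C' if not opponent_history else opponent_history[-1]

h1, h2, score1, score2 = [], [], 0, 0
for _ in range(100):
    a, b = tit_for_tat(h2), tit_for_tat(h1)
    p1, p2 = PAYOFF[(a, b)]
    h1.append(a); h2.append(b)
    score1 += p1; score2 += p2

print(score1, score2)  # 300 300: both cooperate on every round
```

In a one-shot game no such history exists, which is exactly the point made in the next subsection.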
How about cooperative behavior in one-shot games? {#how-about-cooperative-behavior-in-one-shot-games .unnumbered}
--------------------------------------------------
Unfortunately, the Folk Theorem does not apply to one-shot games: there is no previous iteration and no next iteration, and hence no opportunity to detect, learn or punish from experience. For the same reasons, the existing reputation-based schemes do not apply directly.
Is cooperation possible outcome in experimental one-shot games? {#is-cooperation-possible-outcome-in-experimental-one-shot-games .unnumbered}
---------------------------------------------------------------
Experimental results have revealed a strong mismatch between the outcome of the experiments and the outcome predicted by the classical game model. Is the observed mismatch due to some important factors that are not considered in the classical game formulation? Is it because the empathy of the players is neglected in the classical formulation?
This work conducts a basic experiment on the one-shot prisoner dilemma and establishes correlations between players' choices and their empathy levels. The prisoner’s dilemma is a canonical example of a game analyzed in classical game theory that shows why two individuals (without empathy consideration) might not cooperate, even if it appears to be in their best interests to do so in terms of collective decision. It was originally framed by Flood and Dresher working at RAND. In 1950, Tucker gave the name and interpretation of prisoner’s dilemma to Flood and Dresher’s model of cooperation and conflict, resulting in the most well-known game theoretic academic example.
Contribution {#contribution .unnumbered}
------------
The contribution of this work can be summarized as follows. We investigate how players behave and react in an experimental one-shot Prisoner Dilemma in relation to their levels of empathy. The experiment is conducted with several voluntary participants from different countries, cultures and educational backgrounds. For each participant in the project, the Interpersonal Reactivity Index (IRI), a multi-dimensional empathy measure, is used. In contrast to the classical empathy scales studied in the game theory literature, which are limited to perspective taking, this work goes one step further by investigating the effect of three other empathy subscales: empathic concern, fantasy scale and personal distress. The experiment reveals a strong mixture of the empathy scales across the population. In addition, each participant responds to a questionnaire that mimics a one-shot Prisoner Dilemma situation and records reaction time and closeness to the other participant. We observe that the empathic concern and fantasy scale dimensions may positively affect the other-regarding payoff. In contrast to the classical prisoner dilemma, in which Defection is known to be a dominant strategy, the experimental game exhibits a significant level of cooperation in the population (see Section \[newmodel1\]). In particular, Defection is no longer a dominant strategy when players’ psychology is involved. Based on these observations, an empathetic payoff model that better captures the preferences of the decision-makers is proposed in Section \[newmodel\]. With this empathetic payoff, the outcome of the game better captures the observed cooperation level in the one-shot prisoner dilemma. The experiment reveals not only a positive effect of empathy but also a dispositional negative (spiteful or malicious) effect on the decision-making of some of the participants. Spitefulness is observed at the personal distress scale in the population.
The personal distress scale is negatively correlated with the perspective taking scale. Taken together, these findings suggest that the empathy types of the participants play a key role in their payoffs and in their decisions in the one-shot prisoner dilemma. This also reduces the gap between game theory and game practice by closing the loop between model and observations. It provides experimental evidence that strengthens the model of empathetic payoff and its possible engineering applications [@t1].
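A simple way to see how other-regarding preferences can remove the dominance of Defection is to let each player weigh the opponent's material payoff. This is only a sketch: the linear weighting $u_i = (1-\lambda)\pi_i + \lambda\pi_j$ and the parameter $\lambda$ are illustrative assumptions, not the paper's empathetic payoff model from Section \[newmodel\], and the payoff numbers are the standard textbook ones:

```python
# Other-regarding payoff sketch: player 1 values u = (1-lam)*own + lam*other.
# For lam = 0 we recover the classical selfish game, where D dominates C;
# for a sufficiently empathetic player (larger lam), D is no longer dominant.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def empathetic(a, b, lam):
    p1, p2 = PAYOFF[(a, b)]
    return (1 - lam) * p1 + lam * p2

def defection_dominant(lam):
    # D dominates C iff it is at least as good against both opponent moves
    return all(empathetic('D', b, lam) >= empathetic('C', b, lam) for b in 'CD')

print(defection_dominant(0.0))  # True: classical selfish players
print(defection_dominant(0.5))  # False: an empathetic player may prefer C
```

Negative values of the hypothetical weight $\lambda$ would model the spiteful (malicious) empathy discussed above, where the other's loss enters the payoff positively.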
Structure {#stucture .unnumbered}
--------
The rest of the article is organized as follows. The next section presents some background and a literature overview on empathy. Section 3 presents the experimental study of the impact of individual psychology on human decision making in the one-shot prisoner dilemma. The analysis of the results of the experiment is presented in Section 4. An explanation of the results is given in Section 5. Section 6 concludes the paper.
Background on Empathy
=====================
From the field of relational care to the field of economics, the concept of empathy seems to occupy a ubiquitous position. Empathy is not only an important and longstanding issue, but a commonly used term in everyday life and situations. Even though it is easily approached and used, this concept still has different definitions and meanings today. Born in the aesthetic and philosophical field, it came to be an important operative concept in behavioral game theory, where once more it is used as an instrument to create relations between decision-makers. Public opinion and the world scientific scenario, however, do not always give the correct attention to what an empathetic reaction to others implies: being empathetic is not something simple, nor is it something given once and for all to a person's personality. It depends on the context of relations and on the social interaction dimension where people are involved as players, consumers, or agents.
Definition of Empathy {#definition-of-empathy .unnumbered}
----------------------
[ We present historical definitions and concepts of empathy. Rather than having to choose which of the ’definitions’ of empathy is correct, this work suggests a better appreciation for it as a multidimensional phenomenon, which at least allows a perspective and the ability to specify which aspect of empathy the experimentalist and the theorist are referring to when making particular investigations in behavioral games.]{}
- [Philosophy:]{} From a philosophical perspective, empathy derives from the Greek word $\acute{ \epsilon } \mu \pi \acute{\alpha} \theta \epsilon i \alpha$ (empatheia), which literally means physical affection. In particular, it is composed of $\acute{ \epsilon} v$ (en) “in, at" and $ \pi \acute{\alpha} \theta o \varsigma $ (pathos), “passion” or “suffering”. The work in [@c1] introduced the term Einfühlung to the aesthetic philosophical field in his main book “On the Optical Sense of Form: A Contribution to Aesthetics”, and many other authors (see [@c2] and the references therein) have extended the concept to feelings and quasi-perceptual acts.
- [Psychology:]{} From a psychological perspective, it corresponds to a cognitive awareness of the emotions, feelings and thoughts of the other persons. In this sense, the term primary significance is that of “an intellectual grasping of the affects of another, with the result of a mimic expression of the same feelings” [@c3].
- [Sociology: ]{}From a sociological perspective, empathy corresponds to an ability to be aware of the internal lives of others. It is related to the existence of language as a sort of personal awareness of ourselves as selves [@c4]. From a neuroscientific perspective, empathy has been studied as “two empathic sub-processes, explicitly considering those states and sharing other’s internal states” [@c5]. These two cognitive processes are represented in Figure \[fig1\].
![Cognitive and emotional empathy are the bases of Empathy. They are related to cognitive and affective Theory of Mind as suggested in [@c10].[]{data-label="fig1"}](empathyconcept2t){width="8cm" height="4cm"}
Empathy lies in structures that include the anterior insula cortex and the dorsal anterior cingulate cortex. In particular, empathy for others’ pain is considered to be located in the anterior insula cortex [@c7; @c6]. These areas are studied in relation to empathy and empathic concerns of collective actions (CA) [@c8]. Oxytocin (OT), an “evolutionarily ancient molecule that is a key part of the mammalian attachment system”, is considered in these studies as a sort of variable to be manipulated to increase or decrease CA in human beings.
Furthermore, empathy must be analyzed in relation to the concepts of compassion and sympathy. Compassion, from the ecclesiastical Latin compati (suffer with), literally means to have feelings together. Nowadays it is associated with the capacity of feeling another’s worries, even though it does not imply an automatic action. Sympathy, from the Greek $ \sigma u \mu \pi \acute{a} \theta \epsilon i a $, literally means fellow feeling. Its meaning lies in the capacity of understanding the internal feelings of the other with the intentional desire of relieving his/her worries. As indicated in [@c9], “the object of sympathy is the other person’s well being. The object of empathy is understanding”. It has been difficult to distinguish empathy from sympathy because they both involve the emotional state of one person in relation to the state of another. This problem is compounded by the fact that the mapping of the terms has recently reversed: what is now commonly called empathy was referred to before the middle of the twentieth century as sympathy [@c10ptreston]. Finally, the concept of empathy must also be understood in relation to Theory of Mind. Theory of Mind, also described as ToM, is the capability to understand others as mental beings with personal mental states, for example feelings, motives and thoughts. It is one of the most important developments in early childhood social cognition and it influences children’s lives at home as well as at school. Its development from birth to 5 years of age is now described in the research literature, making it possible to understand how infants and children behave in experimental and natural situations [@d4].
Different types of empathy {#different-types-of-empathy .unnumbered}
---------------------------
We are not restricting ourselves to the positive side of empathy. Empathy may have a dark or at least costly side, especially when the environment is strategic and interactive, as is the case in games. Can empathy be bad for the self? Empathy can be used, for example, by an attacker to identify the weak nodes in a network. Can empathy be bad for others? Empathetic users may use their ability to destroy their opponents. In strategic interaction between people, empathy may be used to derive an antipathetic response (distress at seeing others’ pleasure, or pleasure at seeing others’ distress). In both cases, it will influence the dynamics of the self- and other-regarding preferences.
The ability of empathy to generate moral behavior and determine cooperation is limited by three common occurrences: over-arousal, habituation and bias.
- Empathic over-arousal is an involuntary process that occurs when an observer’s empathic distress becomes so painful and intolerable that it is transformed into an intense feeling of personal distress, which may move the person out of the empathic mode entirely [@f13; @f14].
- Generally speaking, in a classical victim-observer relation, the greater the victim’s distress, the greater the observer’s empathic distress. If a person is exposed repeatedly to distress over time, the person’s empathic distress may diminish to the point where the person becomes indifferent to the distress of others. This is called habituation. This diminished empathic distress and corresponding indifference is very common in those who, for example, abuse and kill animals.
- Humans evolved in small groups. These groups sometimes competed for scarce resources, so it is not surprising that evolutionary psychologists have identified kin selection as a moral motivator with evolutionary roots. The forms of familiarity bias include in-group bias and similarity biases. In-group bias is simply the tendency to favour one’s own group. This is not one group in particular, but whatever group we are able to associate with at a particular time. In-group bias works on the self-esteem of the members. On the opposite side of these biases we have out-group biases, where people outside the group are considered in a negative way, with a different and, most of the time, worse treatment (e.g., racial inequality). The similarity bias derives from a psychological heuristic pertaining to how people make judgments based on similarity. More specifically, similarity biases are used to account for how people make judgments based on the similarity between current situations and other situations or prototypes of those situations. The goal of these biases is to maximize productivity through favourable experience while not repeating unfavourable experiences (adaptive behaviour).
Empathy main integrative theories {#empathy-main-integrative-theories .unnumbered}
----------------------------------
During the 1970s, in the public psychological scenario, empathy was conceived as a procedure with affective and cognitive implications. The work in [@d12] introduced the first multidimensional model of empathy in the psychological literature, where affective and cognitive procedures work together. According to her, although empathy is defined as a shared emotion between two persons, it depends on cognitive factors. In her integrative-affective model, the affective empathy reaction derives from three component factors, hereinafter described. The first is represented by the cognitive ability to discriminate affective cues in others, the second by the cognitive skills involved in assuming the perspective and role of the others, and the third, finally, by emotional responsiveness, the affective ability to experience emotions. On the other hand, one of the most comprehensive perspectives on empathy and its relation to moral development is provided in [@d13]. The author considered empathy as a biologically based disposition for altruistic behavior [@d13]. He conceives of empathy as being due to various modes of arousal, which allow us to respond empathically in light of a variety of distress cues from another person. The author mentions mimicry, classical conditioning, and direct association as fast-acting and automatic mechanisms producing an empathic response. The author lists mediated association and role taking in relation to more cognitively demanding modes, mediated by language, and proposed some of the limitations in our natural capacity to empathize or sympathize with others, particularly what he refers to as ’here and now’ biases. In other words, the main tendency, according to [@d13], is to empathize more with persons that are in some sense perceived to be closer to us.
The authors in [@prest02] defined empathy as a shared emotional experience occurring when one person (the subject) comes to feel a similar emotion to another (the object) as a result of perceiving the other’s state. This process results from the representations of the emotions activated when the subject pays attention to the emotional state of the object. The neural mechanism assumes that brain areas have processing domains based on their cellular composition and connectivity.
The main theories we discuss further are those most relevant to our use of the empathy concept in the game-theoretic analysis. Broadly speaking, we approach empathy as made up of two components, an affective and a cognitive one. The affective (or emotional) component develops from infancy, and its structure may be summarized as a progressive intertwinement of (a) emotion recognition, (b) empathic concern, (c) personal distress and (d) emotional contagion. The cognitive component of empathy develops progressively from childhood onward. It is based on the theory of mind, on imagination (of emotional future outcomes) and on perspective taking.
According to [@d5], two main approaches have been used to study empathy: the first focuses on cognitive empathy, the ability to take the perspective of another person and to infer his or her mental state (Theory of Mind). The second emphasizes emotional or affective empathy [@d6], defined as an observer’s emotional response to another person’s emotional state. Table \[fig3\] below highlights some of the principal features of affective and cognitive empathy [@c10].
  Emotional Empathy                                                                            Cognitive Empathy
  -------------------------------------------------------------------------------------------- --------------------------------------------------------------------------
  Core structure: simulation system                                                            Core structure: mentalizing system (Theory of Mind)
  Development: emotional contagion, personal distress, empathic concern, emotion recognition   Development: perspective taking, imagination of emotional future outcomes
A model of empathy-altruism was developed in [@b8]. In this model, the author assumes that empathy feelings for another person create an altruistic motivation to increase that person’s welfare. In particular, the work in [@c11ba] found that participants in a social dilemma experiment allocated some of their resources to a person for whom they felt empathy. The author also developed the model of empathy-joy [@b8]. This hypothesis underlines how a prosocial act is not completely explained by empathy alone but also by the positive emotion of joy a helper expects as a result of helping another person in need. In connection with this theory, empathy relies on an automatic process that immediately generates other types of behavior useful for predicting other-regarding actions. In relation to the one-trial prisoner’s dilemma, [@c12] underlined how empathy-altruism should increase the cooperation that then emerges in the situation.
The empathic-brain model proposed in [@c13] is a modulated model of empathy in which different factors occur in its development. These factors are four, corresponding to four conditions: (i) one is in an affective state, (ii) this state is isomorphic to another person’s affective state, (iii) this state is elicited by the observation or imagination of another person’s affective state, and (iv) one knows that the other person is the source of one’s own affective state. Condition (i) is particularly important as it helps to differentiate empathy from mentalizing. Mentalizing is the ability to represent others’ mental states without emotional involvement. In particular, the two authors underline the epistemological value of empathy: on one side it provides information about the future actions of other people, and on the other side it functions as the “origin of the motivation for cooperative and prosocial behavior”. The model we provide is based on both these theories and, in particular, takes into account the distinction between empathy itself and the cortical representations of the emotions. From a developmental point of view, empathy is also studied in relation to prosocial behavior. Two theoretical studies [@d1; @d2] are fundamental in this sense. In particular, the author considers that the development of a vicarious affective reaction to another’s distress begins in infancy. Individual patterns of behavior in response to distress are also tailored to the needs of the other. According to [@d3], although an empathic basis to altruistic behavior entails a net cost to the actor, cooperation and altruism require behavior tailored to the feelings and needs of others.
How to measure empathy? {#how-to-measure-empathy .unnumbered}
-----------------------
We overview empathy measurement in psychology and present existing models of empathy’s effect in game theory.
### Empathy measures in Psychology {#empathy-measures-in-psychology .unnumbered}
Psychologists study both situational and dispositional empathy. Situational empathy, i.e., empathic reactions in a specific situation, is measured by asking subjects about their experiences immediately after they were exposed to a particular situation, by studying the “facial, gestural, and vocal indices of empathy-related responding” [@d7], or by various physiological measures such as heart rate or skin conductance. Dispositional empathy, understood as a person’s stable character trait, has been measured either by relying on the reports of others (particularly in the case of children) or, most often (in research on empathy in adults), by administering various questionnaires associated with specific empathy scales.
- [ Measuring empathic ability:]{} The work in [@d8] proposes to test empathic ability by measuring the degree of correspondence between a person A’s and a person B’s ratings of each other on six personality traits (such as self-confidence, superior-inferior, selfish-unselfish, friendly-unfriendly, leader-follower, and sense of humor) after a short time of interacting with each other. More specifically, empathic ability is measured through a questionnaire that asks both persons (i) to rate themselves on those personality traits, (ii) to rate the other as they see them, (iii) to estimate how the other would rate himself, and (iv) to rate themselves according to how they think the other would rate them. Person A’s empathic ability is then determined by the degree to which A’s answers to (iii) and (iv) correspond to B’s answers to (i) and (ii). The less A’s answers diverge from B’s, the higher one judges A’s empathic ability and accuracy. The test aims to measure the level of empathy through the dimension of role-taking.
- [ Empathy Test:]{} The authors in [@d9] created the so-called Empathy Test, which was used in industry in the 1950s. The main purpose of the test is to measure a person’s ability to “anticipate” certain typical reactions, feelings and behaviour of other people. The test consists of three sections, which require persons to rank the popularity of 15 types of music, the national circulation of 15 magazines and the prevalence of 10 types of annoyance for a particular group of people.
- [ Measure of cognitive empathy:]{} The work in [@d10] is a cognitive empathy scale consisting of 64 questions selected from a variety of psychological personality tests such as the Minnesota Multiphasic Personality Inventory (MMPI) and the California Personality Inventory (CPI). Hogan chose questions on which two groups of people, independently identified as either low-empathy or high-empathy individuals, showed significant differences in their answers.
- [ Measure of emotional empathy:]{} The EETS, Emotional Empathy Tendency Scale, was developed in [@d11]. The questionnaire consists of 33 items divided into seven subcategories testing for “susceptibility to emotional contagion”, “appreciation of the feelings of unfamiliar and distant others”, “extreme emotional responsiveness”, “tendency to be moved by others’ positive emotional experiences”, “tendency to be moved by others’ negative emotional experiences”, “sympathetic tendency”, and “willingness to be in contact with others who have problems”. The questionnaire’s seven subscales together show high split-half reliability, indicating the presence of a single underlying factor thought to reflect affective or emotional empathy. The authors in [@f1] suggested more recently, however, that rather than measuring empathy per se, the scale more accurately reflects general emotional arousability. In response, a revised version of the measure, the Balanced Emotional Empathy Scale [@f2], captures respondents’ reactions to others’ mental states [@f3].
- [ Multidimensional measure of empathy:]{} The Interpersonal Reactivity Index was developed in [@d18] as an instrument whose aim is to measure individual differences in empathy. The test is made of 28 items spanning the cognitive and emotional domains, organized into four subscales: Perspective Taking, Empathic Concern, Fantasy and Personal Distress. Each subscale includes seven items answered on a 5-point scale ranging from 0 (Does not describe me well) to 4 (Describes me very well). The Perspective Taking subscale measures the capability to spontaneously adopt the views of others. The Empathic Concern subscale measures a tendency to experience the feelings of others and to feel sympathy and compassion for unfortunate people.
- [ Self-report empathy measure:]{} Regarding self-report empathy measures, we consider it important to mention the Scale of Ethnocultural Empathy [@f4], the Jefferson Scale of Physician Empathy [@f5], the Nursing Empathy Scale [@f6], the Autism Quotient [@f7] and the Japanese Adolescent Empathy Scale [@f8]. Although these instruments were designed for use with specific groups, aspects of these scales may be suitable for assessing a general capacity for empathic responding.
- [ Measuring deficit in theory of mind:]{} The Autism Quotient [@f7] was developed to measure Autism spectrum disorder symptoms. The authors viewed a deficit in theory of mind as the characteristic symptom of this disorder [@f9], and a number of items from this measure relate to broad deficits in social processing (e.g., “I find it difficult to work out people’s intentions.”). Thus, any measure of empathy should exhibit a negative correlation with this measure. The magnitude of this relation, however, will necessarily be attenuated by the other aspects of the Autism Quotient, which measure unrelated constructs (e.g., attentional focus and local processing biases). Additional self-report measures of social interchange appearing in the neuropsychological literature contain items tapping empathic responding, including the Dysexecutive Questionnaire [@f10] and a measure of emotion comprehension developed in [@f11]. These scales focus on the respondent’s ability to identify the emotional states expressed by another (e.g., “I recognize when others are feeling sad.”). Current theoretical notions of empathy emphasize that understanding another’s emotions is required to form an empathic response [@f12]. Only a small number of items on current measures of empathy, however, assess this ability. Table \[table:Empathymeasurement\] summarizes the empathy scales overviewed above.
  MEASURE                                   ITEMS   SCALE                      CORE CONCEPT                                                                                           AUTHORS
  ----------------------------------------- ------- -------------------------- ------------------------------------------------------------------------------------------------------ ----------------------------------------------------------
  Empathy Ability                           24      0-4 Likert scale           Imaginative transposing of oneself into the thinking of another                                        Dymond (1949)
  Empathy Test                              40      Ranking, multiple choice   Cognitive role-taking                                                                                  Kerr & Speroff (1954)
  Empathy Scale                             64      0-4 Likert scale           Apprehension of another’s condition or state of mind from an intellectual or imaginative point of view   Hogan (1969)
  EETS - Emotional Empathy Tendency Scale   33      -4 to 4 Likert scale       Emotional empathy                                                                                      Mehrabian & Epstein (1972)
  IRI - Interpersonal Reactivity Index      28      0-4 Likert scale           Reactions of one individual to the observed experiences of another                                     Davis (1980)
  Ethnocultural Empathy Scale               31                                 Culturally specific empathy patterns                                                                   Wang, Davidson, Yakushko, Savoy, Tan, Bleier (2003)
  Jefferson Scale of Physician Empathy      5       0-5 Likert scale           Patient-oriented vs technology-oriented empathy in physicians                                          Kane, Gotto, Mangione, West, Hojat (2001)
  Nursing Empathy Scale                     12      0-7 Likert scale           Nurses’ empathic behaviour in the context of interaction with the client                               Reynolds (2000)
  Autism Quotient                           50      0-4 Likert scale           Autism Spectrum Disorder symptoms                                                                      Baron-Cohen, Wheelwright, Skinner, Martin, Clubley (2001)
  Japanese Adolescent Empathy Scale         30      Likert scale               Empathy to feel or not to feel positive and negative feelings towards others                           Hashimoto, Shiomi (2002)
### Empathy Models in Game Theory {#empathy-models-in-game-theory .unnumbered}
In this subsection, we review some fundamental aspects of behavioral game theory. From a game-theoretic point of view, empathy and emotive intelligence are considered essential for the development of games between players. In particular, empathy is essential for the strategic evolution of games and foundational for Nash equilibrium [@b2; @b3]. In other words, empathy itself is the instrument that lets the dynamic process between n players happen, as well as the understanding and evaluation of their preferences and beliefs. Cooperative behavior patterns must be considered an important example of close relations between individuals, based on confiding and disclosure. Relations such as helping and assistance behavior, as well as mutual confiding, mutual communication and self-disclosure, are forms of cooperative behavior. To understand the possible role of friendship in cooperation or defection between n players in the one-shot prisoner’s dilemma, it is essential to reflect on how the evolutionary line of our species tends to create close relationships only between persons who consider themselves kin in terms of genes. The same kind of attitude also influences communal relationships. Cooperative behaviour in the one-shot prisoner’s dilemma between friends, in particular, leads to the activation of the so-called cooperative-parithetic system, which is activated only when there is the perception that the final goal may be reached through collaboration between the players of the group itself. Empathy has been approached in different ways in the game theory field. The author in [@b4] underlines how homo economicus must be empathetic to some degree, even if in a different meaning from the concept used in [@d19; @d20]. In particular, in relation to game theory, he introduces the concept of empathy in connection with the study of interpersonal comparison of utility in games.
More specifically, the model of empathy-altruism developed in [@b8] assumes that empathy feelings for another person create an altruistic motivation to increase that person’s welfare. Furthermore, the work in [@d21] relates to how participants in a social dilemma experiment allocated some of their resources to a person for whom they felt empathy. In the context of the one-trial prisoner’s dilemma, the authors underlined how empathy-altruism should increase the cooperation that then emerges in the situation. Secondly, the model of empathy-joy was developed in [@d22]. This hypothesis underlines how a prosocial act is not completely explained by empathy alone but also by the positive emotion of joy a helper expects as a result of helping (or, better, of having a beneficial impact on) another person in need. In connection with this theory, empathy relies on an automatic process that immediately generates other types of behaviour useful for predicting other-regarding actions [@book; @rmg].
The works in [@d15; @d16] proposed the exploration of more psychological and process-oriented models as a more productive framework in comparison with the classical ones in game theory (fairness, impure altruism, reciprocity). In light of this perspective, many concepts related to human behaviour are introduced to explain choice in the game-theoretic approach. Empathy as an operative concept must be understood as different from biases affecting belief formation and biases affecting utility: empathy operates through both belief and utility formation. Hence, in the presence of empathy, “beliefs and utility become intricately linked” [@d17]. Regarding empathy as a process of belief formation, the author proposed to analyze two mechanisms, imagine-self and imagine-other. Imagine-self players are able to imagine themselves in other people’s shoes; in other words, they try to imagine themselves in similar circumstances. Imagine-other is when a person tries to imagine how another person is feeling. The authors underlined how empathy refers to people’s capability to infer what others think or feel, the so-called mind reading. They underlined, at the same time, how empathy itself may also have consequences on each player’s evaluation. The main contribution of the authors lies in a critical analysis of altruistic behaviour in game theory. With three toy games, they demonstrated how empathy-altruism is not always linked with the imagine-other dimension (and the so-called belief formation), since players may use only the imagine-self dimension.
The authors in [@d14] criticize a common concept of given empathy present in public good experiments and demonstrate how empathy may be linked more to the context and social interaction itself in experimental game theory research. The work [@b6] states that a disposition for empathy does not influence behaviour across different games (in which a central role is played by Theory of Mind). In this regard, the same authors underline that individual differences related to empathy do not shape social preferences either. On the contrary, many other studies show how empathy may influence the structure of the games themselves. The work in [@b7] studies the Ultimatum Game in an evolutionary context and underlines how empathy can lead to the evolution of fairness. The work in [@b9] studied the correlation between empathy, anticipated guilt and prosocial behaviour, and found that empathy affects prosocial behaviour in a more complex way than the one represented by the model of social choices.
Recently, the concept of empathy has been introduced in mean-field-type games in [@t1; @tt1v; @tt2v; @tt3v] in relation to cognitively plausible explanation models of choices in wireless medium access channels and strategic interaction between mobile devices. The main results of this applied research, which lie in an operative and real-world use of the empathy concept, are the enforcement of mean-field equilibrium payoff equity and fairness between players.
Study
=====
Participants {#participants .unnumbered}
------------
The population of participants includes 47 persons between 18 and 42 years old. The population is composed of 19 men and 28 women chosen from different educational backgrounds, cultures and nationalities (see Table \[table:gender\]). The names of the participants are not revealed. Different numbers are generated and assigned to the participants.
Gender Number Frequency %
-------- -------- -------------
Men 19 40.42
Women 28 59.58
Total 47
: Composition: gender and frequency of the participants []{data-label="table:gender"}
All the subjects were asked to perform two different tests: an IRI test (Interpersonal Reactivity Index [@b11]) and a questionnaire that mimics, with an empathic and moral emphasis, a prisoner’s dilemma situation.
Empathy questionnaire {#empathy-questionnaire .unnumbered}
---------------------
The IRI is a 28-item, 5-point Likert-type scale that evaluates four dimensions of empathy: Perspective-Taking, Fantasy, Empathic Concern, and Personal Distress. Each of these four subscales counts 7 items. The Perspective-Taking subscale measures empathy in the form of an individual’s tendency to spontaneously adopt the other’s point of view. The Fantasy subscale of the IRI evaluates the subject’s ability to transpose themselves into the feelings and behaviours of fictional characters in books, movies, or plays. The Empathic Concern subscale assesses an individual’s feelings of concern, warmth, and sympathy toward others. The Personal Distress subscale measures self-oriented anxiety and distress feelings regarding the distress experienced by others. As pointed out by Baron-Cohen and colleagues [@d24], however, the Fantasy and Personal Distress subscales of this measure contain items that may more properly assess imagination (e.g., “I daydream and fantasize with some regularity about things that might happen to me”) and emotional self-control (e.g., “in emergency situations I feel apprehensive and ill at ease”), respectively, than theoretically derived notions of empathy. Indeed, the Personal Distress subscale appears to assess feelings of anxiety, discomfort, and a loss of control in negative environments. Factor-analytic and validity studies suggest that the Personal Distress subscale may not assess a central component of empathy [@d25]. Instead, Personal Distress may be more related to the personality trait of neuroticism, while the most robust components of empathy appear to be represented in the Empathic Concern and Perspective Taking subscales [@d26].
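Operationally, IRI scoring reduces to summing each subscale’s seven 0-4 ratings, with some items reverse-scored. The sketch below illustrates this; the item-to-subscale assignment and the reversed-item set are hypothetical placeholders, not Davis’s official scoring key.

```python
# Sketch of IRI subscale scoring: 28 items on a 0-4 Likert scale,
# 7 items per subscale, some items reverse-scored.
# NOTE: the subscale assignment and REVERSED set below are hypothetical,
# used only to show the scoring mechanics.
SUBSCALES = {
    "PT": [1, 2, 3, 4, 5, 6, 7],
    "FS": [8, 9, 10, 11, 12, 13, 14],
    "EC": [15, 16, 17, 18, 19, 20, 21],
    "PD": [22, 23, 24, 25, 26, 27, 28],
}
REVERSED = {3, 4, 12, 15}  # hypothetical reverse-scored items

def score_iri(answers):
    """answers: dict mapping item number (1-28) to a rating in 0..4."""
    return {
        name: sum((4 - answers[i]) if i in REVERSED else answers[i]
                  for i in items)
        for name, items in SUBSCALES.items()
    }

# A participant answering 2 ("neutral") everywhere scores 14 per subscale.
print(score_iri({i: 2 for i in range(1, 29)}))
```

Each subscale score thus ranges from 0 to 28, and reverse-scored items keep the scale direction consistent.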
The IRI (Davis) scale has been chosen for its measurement of individual differences in the empathy construct and, secondly, for its relation to measures of social functioning and the so-called superior psychological functions [@d23]. Table \[tab:table1\] summarizes the first questionnaire on multidimensional empathy measures.
----------------------------------------------- -------- -------- -------- --------- -------- ------- -------- --------
[PT]{} EC [FS]{} [PD]{} [PT]{} EC [FS]{} [PD]{}
\(1) Daydream and fantasize (FS)
\(2) Concerned with unfortunates (EC) 0.6
\(3) Can’t see others’ views$^{*}$ (PT)
\(4) Not sorry for others $^{*}$ (EC)
\(5) Get involved in novels (FS) 0.8
\(6) Not-at-ease in emergencies (PD) 0.7
\(7) Not caught-up in movies$^{*}$ (FS)
\(8) Look at all sides in a fight (PT) 0.9124 0.2444
\(9) Feel protective of others (EC) 0.3
\(10) Feel helpless when emotional (PD)
\(11) Imagine friend’s perspective (PT) 0.8393 0.824
\(12) Don’t get involved in books$^{*}$ (FS)
\(13) Remain calm if other’s hurt $^{*}$ (PD)
\(14) Others’ problems none mine$^{*}$ (EC)
\(15) If I’m right I won’t argue$^{*}$ (PT)
\(16) Feel like movie character (FS)
\(17) Tense emotions scare me (PD)
\(18) Don’t feel pity for others $^{*}$ (EC)
\(19) Effective in emergencies$^{*}$ (PD)
\(20) Touched by things I see (EC) -0.3452
\(21) Two sides to every question (PT)
\(22) Soft-hearted person (EC)
\(23) Feel like leading character (FS)
\(24) Lose control in emergencies (PD)
\(25) Put myself in others’ shoes (PT)
\(26) Image novels were about me (FS)
\(27) Other’s problems destroy me (PD)
\(28) Put myself in other’s place (PT) 0.42
----------------------------------------------- -------- -------- -------- --------- -------- ------- -------- --------
Game questionnaire {#game-questionnaire .unnumbered}
------------------
The second questionnaire is about a prisoner’s dilemma game. Each of the 47 participants is asked to answer yes or no to 4 questions (see Table \[table:questionaire0\]), each related to one outcome: cooperation-cooperation (CC), cooperation-defection (CD), defection-cooperation (DC), and defection-defection (DD). A virtual other participant is represented in each interaction, leading to 94 decision-makers in the whole process. The set of choices of each participant is $\{C,D\}$ where $D$ is also referred to as $N$ for non-cooperation.
  Player 1 \\ Player 2   Cooperate   Defect
  ---------------------- ----------- ----------
  Cooperate              $(A,A)$     $(B,C)$
  Defect                 $(C,B)$     $(D,D)$

  : Payoff matrix of the game questionnaire[]{data-label="table:questionaire0"}
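To make the payoff structure concrete, the sketch below encodes the 2x2 game in normal form. The numeric payoffs (5, 3, 1, 0) are illustrative values satisfying the prisoner’s-dilemma ordering $C > A > D > B$; they are an assumption, not values used in the study.

```python
# Sketch of the questionnaire's 2x2 game. The symbolic payoffs (A, B, C, D)
# from the table are instantiated with illustrative numbers obeying the
# prisoner's dilemma ordering C > A > D > B (assumed, not from the study).
payoff = {  # (row move, column move) -> (row payoff, column payoff)
    ("C", "C"): (3, 3),   # (A, A): mutual cooperation
    ("C", "D"): (0, 5),   # (B, C): row is exploited
    ("D", "C"): (5, 0),   # (C, B): row exploits
    ("D", "D"): (1, 1),   # (D, D): mutual defection
}

def best_response(opponent_move):
    """Row player's payoff-maximizing move given the column player's move."""
    return max("CD", key=lambda m: payoff[(m, opponent_move)][0])

# With these payoffs, defection is a dominant strategy for a purely
# self-regarding player, whatever the opponent does.
print(best_response("C"), best_response("D"))  # D D
```

This dominance of defection is exactly the baseline against which the empathy-driven cooperation observed in the study is measured.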
Data collection {#data-collection .unnumbered}
---------------
Regarding the approach to the test, the whole population showed complete comprehension of and adherence to the tasks. Only 2 questions were left unanswered by a participant in the IRI test. In total we have a 99.63% response rate across all the questions. In the next section, we analyze the results of the second questionnaire and study the impact of the four IRI scales on the decision making of the population.
Method and Analysis
===================
The analysis is divided into three steps. In the first step, the population is classified based on the results of the IRI scale. In the second step, we analyze the results of the cooperation game. Lastly, the level of cooperation is studied on the basis of the classification of the population in the IRI scale.
IRI scale and population classification {#iri-scale-and-population-classification .unnumbered}
---------------------------------------
The first step of the analysis concerns the results of the women and men populations respectively on the IRI scale. We depict the characteristics of each individual who participated in the test in Table \[table:IRIdistra\]. Table \[table:IRIdistr2b\] reports the number of people belonging to each subscale and those who belong to none.
  [**Scale Type**]{}   [**Women ID**]{}              [**Men ID**]{}
  -------------------- ----------------------------- -----------------------
  PT                   1, 3, 4, 10, 16, 19, 26, 27   12
  EC                   15                            -
  FS                   11, 25                        8
  PD                   -                             18
  PT + EC              7, 8, 20                      -
  PT + PD              -                             4, 15
  PT + FS              6, 12, 24                     -
  EC + FS              17                            14
  EC + PD              2                             -
  PT + EC + FS         9, 18, 21                     5, 19
  PT + EC + PD         23                            6, 11
  PT + FS + PD         -                             17
  EC + FS + PD         5                             -
  PT + EC + FS + PD    13, 14, 22                    9
  None of the scales   28                            1, 2, 3, 7, 10, 13, 16
: IRI scale and participant identification[]{data-label="table:IRIdistra"}
  [**Scale Type**]{}   [**Women**]{}   [**Men**]{}   [**Total**]{}   [**Freq**]{}
  -------------------- --------------- ------------- --------------- --------------
  PT                   8               1             9               19.14%
  EC                   1               -             1               2.12%
  FS                   2               1             3               6.38%
  PD                   -               1             1               2.12%
  PT + EC              3               -             3               6.38%
  PT + PD              -               2             2               4.25%
  PT + FS              3               -             3               6.38%
  EC + FS              1               1             2               4.25%
  EC + PD              1               -             1               2.12%
  PT + EC + FS         3               2             5               10.63%
  PT + EC + PD         1               2             3               6.38%
  PT + FS + PD         -               1             1               2.12%
  EC + FS + PD         1               -             1               2.12%
  PT + EC + FS + PD    3               1             4               8.51%
  None of the scales   1               7             8               17.02%
  Participants         28              19            47              
: IRI scale and population distribution[]{data-label="table:IRIdistr2b"}
The classification of the population based on the different empathy subscales is presented in Table \[table:IRIdistr2b\]. The results show that 14 people belong to a pure IRI scale, 25 people have mixed IRI characteristics, and 8 people do not belong to any IRI scale. In the next section we study the level of cooperation based on this classification.
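The pure/mixed/none breakdown follows mechanically from each participant’s set of subscale memberships. The sketch below shows the classification rule; the five membership sets are hypothetical examples, not the study’s raw data.

```python
from collections import Counter

# Sketch: classify participants as "pure" (exactly one IRI subscale),
# "mixed" (two or more), or "none" (no subscale).
# The membership sets below are hypothetical, not the study's data.
membership = {
    "p1": {"PT"},
    "p2": {"PT", "EC"},
    "p3": set(),
    "p4": {"EC", "FS", "PD"},
    "p5": {"FS"},
}

kind = Counter(
    "none" if not scales else "pure" if len(scales) == 1 else "mixed"
    for scales in membership.values()
)
print(kind["pure"], kind["mixed"], kind["none"])  # 2 2 1
```

Applied to the study’s 47 membership sets, this rule yields the reported 14 pure, 25 mixed and 8 none.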
Cooperation study: Prisoner Dilemma {#cooperation-study-prisoner-dilemma .unnumbered}
-----------------------------------
The analysis is based on the IRI test results and on the prisoner’s dilemma test results. The results of the prisoner’s dilemma suggest that 35.71% of the women population and 36.84% of the men population fully confessed. The results are depicted in Tables \[table:fem1\], \[table:mal1\] and \[table:pop1\]. Notice that 53.57% of the women population and 31.57% of the men population partially confessed. When considering the whole population, 44.68% of it partially confessed. Hence, it is necessary to classify the partial confessors by looking at the cooperation level within that population.
  Decision    positively     partially      deny
  ----------- -------------- -------------- -------------
  Result      10 out of 28   15 out of 28   3 out of 28
  Frequency   35.71%         53.57%         10.71%

  : Decisions of the women population[]{data-label="table:fem1"}
  Decision    positively    partially     deny
  ----------- ------------- ------------- -------------
  Result      7 out of 19   6 out of 19   6 out of 19
  Frequency   36.84%        31.57%        31.57%

  : Decisions of the men population[]{data-label="table:mal1"}
  Decision    positively     partially      deny
  ----------- -------------- -------------- -------------
  Result      17 out of 47   21 out of 47   9 out of 47
  Frequency   36.17%         44.68%         19.14%

  : Decisions of the whole population[]{data-label="table:pop1"}
A more refined view of cooperation among the women who partially confessed (15 out of 28) is given in Table \[table:par 1\]. We want to compute the level of cooperation within that population, and hence we consider all the answers of those participants to the questionnaire. We then derive the level of cooperation in that population by computing the marginal probability of confessing. More precisely, we consider two random variables $X =\{ c_1, d_1\}$ and $Y= \{ c_2, d_2\}$ for player 1 and player 2 respectively, where $c_i$ stands for cooperation of player $i$ and $d_i$ for defection. We then compute the marginal probability of cooperation of player 1 given that player 2 can cooperate or defect: $$p(c_1) = \sum_{y \in \{ c_2, d_2\}} p(X = c_1, Y = y).$$ We use the sample statistics to compute the probability that player 1 cooperates through the number of occurrences of $c_1$. The resulting marginal probability of cooperation equals $0.51$. Hence, on average, 7 people out of the 15 can be classified as positively confessed.
  Player 1 \\ Player 2   Cooperate   Defect
  ---------------------- ----------- --------
  Cooperate              10          1
  Defect                 5           13

  : Answers of the women who partially confessed[]{data-label="table:par 1"}
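The marginalization step is a direct sum over the joint counts. The sketch below illustrates it with hypothetical joint counts (not the study’s data), chosen so the marginal comes out to a round value.

```python
# Sketch of the marginal probability of cooperation for player 1,
# p(c1) = sum over player 2's choices y of p(X = c1, Y = y),
# estimated from a 2x2 table of joint counts.
# The counts below are hypothetical, not the study's data.
counts = {
    ("c1", "c2"): 10,
    ("c1", "d2"): 5,
    ("d1", "c2"): 4,
    ("d1", "d2"): 11,
}

total = sum(counts.values())
# Sum the joint frequencies over all of player 2's choices.
p_c1 = sum(n for (x, _), n in counts.items() if x == "c1") / total
print(p_c1)  # (10 + 5) / 30 = 0.5
```

The study applies the same computation to the count tables above, yielding $p(c_1) = 0.51$ for the women, $0.46$ for the men and $0.5$ for the whole population.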
In the case of the men, a more refined view of cooperation among those who partially confessed (6 out of 19) is given in Table \[table:par 2\]. The marginal probability of cooperation is $p(c_1) = 0.46$. Hence, on average, 2 people out of the 6 can be classified as positively confessed.
  Player 1 \\ Player 2   Cooperate   Defect
  ---------------------- ----------- --------
  Cooperate              5           2
  Defect                 1           5

  : Answers of the men who partially confessed[]{data-label="table:par 2"}
When considering the whole population, the fraction of people who partially confessed is 21/47 and the marginal probability of cooperation is $p(c_1) = 0.5$. Hence, on average, 10 people out of the 21 can be classified as positively confessed.
  Player 1 \\ Player 2   Cooperate   Defect
  ---------------------- ----------- --------
  Cooperate              15          3
  Defect                 6           18
Cooperation vs IRI scale {#cooperation-vs-iri-scale .unnumbered}
------------------------
In this subsection, we are interested in computing the level of cooperation in each ’pure’ IRI subscale. To this aim, we consider the subpopulation belonging to each scale. We then use the cooperation answers in the prisoner’s dilemma game to compute the probability of cooperation.\
\
[ **PT vs Cooperation**]{}
[**Women**]{}

  Coop\\PT   A    B     C     D     E
  ---------- ---- ----- ----- ----- ------
  p(c)       0%   50%   75%   66%   100%

[**Men**]{}

  Coop\\PT   A    B    C    D        E
  ---------- ---- ---- ---- -------- ----
  p(c)       0%   0%   0%   66.66%   0%

[**Population**]{}

  Coop\\PT   A    B     C     D     E
  ---------- ---- ----- ----- ----- ------
  p(c)       0%   50%   75%   66%   100%
The Pearson correlation coefficient between the level of cooperation and the PT scale is $r_{Women} = 0.7797$ $(p < .01)$ for women, $r_{men} = 1$ $(p < .01)$ for men and $r_{population} = 0.6347$ $(p < .01)$ for the population belonging to the PT scale. The interpretation is that there is a positive correlation for women. The fact that only one man was PT and positively cooperated leads to a trivially perfect positive correlation. The overall PT population cooperates positively.
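All the coefficients reported in this section follow the standard sample Pearson formula. The sketch below implements it from first principles; the score bins and cooperation rates in the example are hypothetical, not a reproduction of the study’s data or coefficients.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: subscale score bins (coded 1..5, i.e. A..E)
# against cooperation rates that rise almost linearly with the bin.
r = pearson_r([1, 2, 3, 4, 5], [0.2, 0.4, 0.5, 0.7, 0.9])
print(round(r, 4))  # 0.9949
```

Note that with a single observation (as for the one PT man) the denominator vanishes, so the reported $r = 1$ there is a degenerate case rather than a computed statistic.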
[ **PD vs Cooperation**]{}
  Cooperation\\PD   A    B    C    D    E
  ----------------- ---- ---- ---- ---- ----
  p(c)              0%   0%   0%   0%   0%
There is only one man on the PD scale, and his probability of cooperation is zero since he only denied.
[ **EC vs Cooperation: Women**]{}
  Cooperation\\EC   A    B    C      D    E
  ----------------- ---- ---- ------ ---- ----
  p(c)              0%   0%   100%   0%   0%
There is only one woman on the EC scale, and her probability of cooperation is 1 since she positively confessed. Therefore the Pearson correlation coefficient is one.
[ **FS vs Cooperation**]{}
[**Women**]{}

  Coop\\FS   A    B    C    D      E
  ---------- ---- ---- ---- ------ ----
  p(c)       0%   0%   0%   100%   0%

[**Men**]{}

  Coop\\FS   A    B    C    D    E
  ---------- ---- ---- ---- ---- --------
  p(c)       0%   0%   0%   0%   66.66%

[**Population**]{}

  Coop\\FS   A    B    C    D      E
  ---------- ---- ---- ---- ------ --------
  p(c)       0%   0%   0%   100%   66.66%
The Pearson correlation coefficient between the level of cooperation and the FS scale is $r_{women} = 1$ $(p < .01)$ for women, $r_{men} = 1$ $(p < .01)$ for men, and $r_{population} = 0.9891$ $(p < .01)$ for the population belonging to the FS scale. The strong positive correlation between FS and cooperation is due to the fact that two people positively confessed and one partially confessed.
Cooperation vs IRI mixed scale {#newmodel1 .unnumbered}
------------------------------
In this section we analyze the correlation between the components of each mixed scale of length two, and we compute the level of cooperation in each sub-population.
- Case [**PT + EC**]{}: the sub-population is composed of women only. The Pearson correlation coefficient between PT and EC is $r_{women} = r_{population} = 0.8108$ $(p < .01)$. The probability of cooperation was $p(c) = 0.5$.
- Case [**PT + FS**]{}: the sub-population is composed of women only. The Pearson correlation coefficient between PT and FS is $r_{women} = r_{population} = 0.9382$ $(p < .01)$. The probability of cooperation was $p(c) = 0.62$.
- Case [**EC + FS**]{}: the sub-population is composed of women and men. The Pearson correlation coefficients between EC and FS are $r_{women} = 0.7845$ $(p < .01)$, $r_{men} = 0.8709$ $(p < .01)$, and $r_{population} = 0.7148$ $(p < .01)$ for women, men, and the global population, respectively. The probability of cooperation was $p(c) = 0.75$.
- Case [**PT + PD**]{}: the sub-population is composed of men only. The Pearson correlation coefficient between PT and PD is $r_{men} = r_{population} = 0.2796$ $(p < .01)$. The probability of cooperation was $p(c) = 0.6$.
- Case [**EC + PD**]{}: the sub-population is composed of women only. The Pearson correlation coefficient between EC and PD is $r_{women} = r_{population} = -0.3462$ $(p < .01)$. The probability of cooperation was $p(c) = 0.5$.
  Pearson correlation   PT   EC               FS                PD
  --------------------- ---- ---------------- ----------------- ---------
  PT                    -    [ **0.81**]{}    [ **0.9382**]{}   0.2796
  EC                    -    -                [ **0.8709**]{}   -0.3462
  FS                    -    -                -                 -
  PD                    -    -                -                 -
: subscale correlation[]{data-label="ms"}
The level of cooperation corresponding to the mixed IRI scales of Table \[ms\] is given in Table \[mscoop\]. We observe that a high level of cooperation associated with a high correlation coefficient corresponds to Empathy-Altruism behaviour (namely PT + FS and EC + FS). A high level of cooperation associated with a low correlation coefficient corresponds to Empathy-Spitefulness (namely PT + PD and EC + PD).
  Mixed scale   Cooperation level
  ------------- -------------------
  PT + EC       50%
  PT + FS       62.5%
  PT + PD       66.66%
  EC + FS       75%
  EC + PD       50%
: Level of cooperation at mixed scales[]{data-label="mscoop"}
The effect of empathy on decisions {#empathyDecisions .unnumbered}
----------------------------------
In this section we study the effect of individual scales on the degree of cooperation. To this end, we compare the result for a pure scale with the group of individuals who do not belong to any IRI subscale. The motivation for this placebo test is that people who do not belong to any scale provide a valuable sample for assessing the impact of an IRI scale such as PT, EC, or FS on a user's decision making.
Since our dataset is not large, we rely on nonparametric linear regression using Theil's method [@Theil1950a; @Theil1950b; @Theil1950c] to compute the median slope value, given a dependent variable set $\{y\}$ and an independent variable set $\{x\}$.
The dataset of the independent variable $\{x\}$ represents an IRI scale and is obtained as follows: (i) we first select the scale we want to study (say, the PT scale); the cardinality of the dataset is then given by the number of people belonging to that scale (see Table \[table:IRIdistra\]). (ii) The value $x_i$ associated with individual $i$ is computed by choosing the answer (say, A) chosen most often in the questionnaire $\{A, B, C, D, E\}$. Based on this choice, an integer value is assigned to $x_i$ following $\{A = 1, B = 2, C = 3, D = 4, E = 5\}$.
The dataset of the dependent variable $\{y\}$ represents the result of the cooperation. An individual $i$ is assigned the value $y_i = 1$ if he fully cooperated, $y_i = 0.5$ if he partially cooperated, and $y_i = 0$ if he denied.
Based on the above definitions, we can now study the effect of the PT scale on the level of cooperation. There are 9 individuals belonging to the pure PT scale (see Table \[table:PTlr\]).
  $x$ (PT answer)     2     3     3   4   4     4   4   4   5
  ------------------- ----- ----- --- --- ----- --- --- --- ---
  $y$ (cooperation)   0.5   0.5   1   0   0.5   1   1   1   1
The result of the Theil’s slope median is given by $\beta_{PT} = \bf{0.2515}$, where $\beta_{PT} $ is the median slope value of the set
$\{ -1.0010, -0.5010, -0.4980, -0.2500, 0.001, 0.001, 0.001, \\ 0.001, 0.001, 0.002, 0.002, 0.002, 0.003, 0.003, \\
0.1680, 0.2502, 0.2506, 0.2510, { \color{black} \bf{0.2515} }, 0.4990, 0.4995, \\ 0.4995, 0.5000, 0.5025, 1, 1, 1, 1.004, \\
167, 250, 250.75, 334, 499, 499, \\ 500.5, 502 \}$
For the placebo test, the dataset for the individuals belonging to "none of the scales" is given in Table \[table:nonPTlr\].
  $x$ (answer)        1   2   2   2     2     2   4     4
  ------------------- --- --- --- ----- ----- --- ----- ---
  $y$ (cooperation)   0   0   0   0.5   0.5   1   0.5   1
The result of the Theil’s slope median is given by $\beta_{\mbox{non of the scale}} = \bf{0.4995}$, where $\beta_{\mbox{non of the scale}} $ is the median slope value of the set
$\{
-0.2495, 0.0005, 0.0005, 0.001 , 0.001, 0.002 , 0.1673, \\
0.2501, 0.2503, 0.2505, 0.2506, 0.3336, 0.499, { \color{black} \bf{ 0.4995 } }, \\
0.4995 , 0.4998, 0.996 , 1, 1, 166.6667, 249.5 , \\
249.5, 249.75, 250, 332.6667 , 498 , 499 , \\ 499
\}$
Interpretation of the results: the angular coefficient $\beta$ measures the effect of the independent variable $x$ on the dependent variable $y$; the larger the coefficient, the stronger the effect of $x$ on $y$. We obtained $\beta_{PT} = 0.2515$ and $\beta_{\text{none of the scales}} = 0.4995$, so $\beta_{PT} < \beta_{\text{none of the scales}}$. Since we are studying the effect of the PT scale on the level of cooperation, this result means we cannot conclude that the PT component is the only factor that increases the level of cooperation.
Similarly, we are interested in the influence of the Fantasy scale component on the level of cooperation, and we apply the approach described above to the FS component.
The result of the Theil’s slope median is given by $\beta_{FS} = \bf{0.5000 }$, where $\beta_{FS} $ is the median slope value (see Table \[table:FSlr\] ) of the set $\{ -0.0010, { \color{black} \bf{ 0.5000 } }, 501.0000 \}$
  $x$ (answer)        4     4   5
  ------------------- ----- --- ---
  $y$ (cooperation)   0.5   1   1
In the case of individuals belonging to "none of the scales", the Theil median slope is $\beta_{\text{none of the scales}} = 0.5000$, where $\beta_{\text{none of the scales}}$ is the median slope value (see Table \[table:nonFSlr\]) of the set
$\{ -0.5005, -0.499, 0.001, 0.0010, 0.001, 0.002, 0.002, 0.2504, 0.2508, 0.4995, 0.4995, 0.5, 0.5003, \mathbf{0.5005}, 0.5010, 0.5015, 0.9980, 0.9980, 0.999, 1, 1, 167, 250, 250, 498, 499, 499, 500 \}$.
  $x$ (answer)        1   1   1     1     2   2     2   3
  ------------------- --- --- ----- ----- --- ----- --- ---
  $y$ (cooperation)   0   0   0.5   0.5   0   0.5   1   1
The datasets for the EC and PD components are too small to perform the nonparametric linear regression on them.
Explanation {#newmodel .unnumbered}
-----------
### Game without empathy {#game-without-empathy .unnumbered}
We consider the one-shot game given in Table \[table:noEmpathy\]. [ Pareto efficiency, or Pareto optimality, describes an action profile in which it is not possible to make any one player better off without making at least one player worse off. A Nash equilibrium is a situation in which no player can improve her payoff by a unilateral deviation. The action profile $(D,D)$ is the unique Nash equilibrium, and $D$ is a dominant strategy for each player. But $(C,C)$ Pareto-dominates $(D,D)$. The three action profiles $(C,C)$, $(C,D)$, and $(D,C)$ are all Pareto optimal, and $(C,C)$ is the most socially efficient.]{}
               Cooperate     Defect
  ----------- ------------- ------------
  Cooperate    (-6, -6)      (-120, 0)
  Defect       (0, -120)     (-72, -72)
The classical game model fails to explain the experimental observations:
In the classical prisoner's dilemma (without empathy), cooperation is a dominated strategy and defection is dominant. Hence it is expected that, in the rational case, everyone decides to defect in the questionnaire of the first experiment (see Table \[table:noEmpathy\]). But this was not the case, since only 19.14% of the population defected (see Table \[table:pop1\]). This result may be due to psychological aspects of human behavior when taking part in the game. The idea is therefore to modify the classical payoffs and integrate empathy into the players' preferences. This leads to the empathetic payoff explained below.
### Game with empathy consideration {#game-with-empathy-consideration .unnumbered}
As observed from the data, a significant level of cooperation appears in the experimental game. This calls for a new design of the classical game and a better understanding of the participants' behavior. We propose a new payoff matrix that takes into consideration the effect of empathy on the outcome. Denote by $\lambda_{12}$ the degree of empathy that prisoner $1$ has toward prisoner $2$, and by $\lambda_{21}$ the converse. The payoffs of the classical prisoner's dilemma game (see Table \[table:noEmpathy\]) now depend on the empathy levels $\lambda_{12}$ and $\lambda_{21}$ of the prisoners (see Table \[table:Empathyconsideration\]). We are interested in finding all possible equilibria of the new game as functions of $\lambda_{12}$ and $\lambda_{21}$.
        C                                            D
  ----- ------------------------------------------ --------------------------------------------
  C     $(-6-6\lambda_{12},\ -6-6\lambda_{21})$     $(-120,\ -120\lambda_{21})$
  D     $(-120\lambda_{12},\ -120)$                 $(-72-72\lambda_{12},\ -72-72\lambda_{21})$
[ Equilibrium analysis of Table \[table:Empathyconsideration\] (N denotes the non-cooperative action D) ]{}
- CC is an equilibrium if $\lambda_{12} \geq \frac{6}{114};$ $\lambda_{21} \geq \frac{6}{114}$
- CN is an equilibrium if $\lambda_{12} \geq \frac{2}{3};$ $\lambda_{21} \leq \frac{6}{114}$
- NC is an equilibrium if $\lambda_{12} \leq \frac{6}{114};$ $\lambda_{21} \geq \frac{2}{3}$
- NN is an equilibrium if $\lambda_{12} \leq \frac{2}{3};$ $\lambda_{21} \leq \frac{2}{3}$
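The four conditions above can be verified mechanically by enumerating best responses in the bimatrix of Table \[table:Empathyconsideration\]. A minimal sketch:

```python
def empathetic_payoffs(l12, l21):
    """Payoff bimatrix of Table [table:Empathyconsideration]; actions C=0, N=1."""
    u1 = [[-6 - 6 * l12, -120],
          [-120 * l12, -72 - 72 * l12]]
    u2 = [[-6 - 6 * l21, -120 * l21],
          [-120, -72 - 72 * l21]]
    return u1, u2

def pure_equilibria(l12, l21):
    """Return the pure Nash equilibria as strings 'CC', 'CN', 'NC', 'NN'."""
    u1, u2 = empathetic_payoffs(l12, l21)
    names = "CN"
    eqs = []
    for a in range(2):        # action of player 1
        for b in range(2):    # action of player 2
            best1 = u1[a][b] >= u1[1 - a][b]   # player 1 cannot gain by deviating
            best2 = u2[a][b] >= u2[a][1 - b]   # player 2 cannot gain by deviating
            if best1 and best2:
                eqs.append(names[a] + names[b])
    return eqs
```

For instance, `pure_equilibria(0, 0)` recovers the classical game (only NN survives), while strong mutual empathy such as `pure_equilibria(1, 1)` makes CC the unique equilibrium, in agreement with the threshold conditions above.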
Since empathy can be positive, negative or null, we analyze the outcome of the game with different signs of the parameters $\lambda_{12}$ and $\lambda_{21}$.\
\
[ **Analysis 1: $\lambda_{12}, \lambda_{21} \geq 0$** ]{} We now consider $\lambda_{12}$ and $\lambda_{21}$ as two random variables with a distribution across the population, and we characterize the outcome of the game based on their values.
- [**case 1:**]{} [**(medium-medium)**]{}: if $\lambda_{12},\lambda_{21} \in \left[ \frac{6}{114}, \frac{2}{3} \right]$ then we have three equilibria: CC, NN, and a mixed equilibrium in which the players mix, $p_1C + (1-p_1)N$ and $p_2C + (1-p_2)N$, with $p_1 = \frac{8 - 12\lambda_{12} }{ 7 (1 + \lambda_{12})}$ and $p_2 = \frac{8 - 12\lambda_{21} }{ 7 (1 + \lambda_{21})}$.
- [**case 2 (high - high):**]{} if $\lambda_{12}, \lambda_{21}$ $ \in$ $\left[ \frac{2}{3},1 \right] $ then CC is the unique equilibrium.
- [**case 3$_{a}$ (high - low):**]{} if $\lambda_{12}$ $ \in$ $\left[ \frac{2}{3},1 \right] $ and $\lambda_{21} \in \left[ 0, \frac{6}{114} \right] $ then CN is the unique equilibrium.
- [**case 3$_{b}$ (low - high):**]{} if $\lambda_{12}$ $\in$ $\left[ 0, \frac{6}{114}\right] $ and $\lambda_{21} \in \left[ \frac{2}{3},1 \right] $ then NC is the unique equilibrium.
- [**case 4$_{a}$ (medium - low):**]{} if $\lambda_{12}$ $\in$ $\left[ \frac{6}{114}, \frac{2}{3} \right] $ and $\lambda_{21} \in \left[ 0, \frac{6}{114} \right] $ then NN is the unique equilibrium.
- [**case 4$_{b}$ ( low - medium ):**]{} if $\lambda_{12}$ $\in$ $ \left[ 0, \frac{6}{114} \right] $ and $\lambda_{21} \in$ $\left[ \frac{6}{114}, \frac{2}{3} \right] $ then NN is the unique equilibrium.
- [**case 5$_{a}$ ( $\lambda_{12}$ high ):**]{} if $\lambda_{12} > \frac{2}{3} $ then C is a dominating strategy (unconditional cooperation) for player 1.
- [**case 5$_{b}$ ( $\lambda_{21}$ high ):**]{} if $\lambda_{21} > \frac{2}{3} $ then C is a dominating strategy (unconditional cooperation) for player 2.
- [**case 6$_{a}$ ( $\lambda_{12}$ low ):**]{} if $\lambda_{12}$ $\in$ $\left[ 0, \frac{6}{114}\right] $ then N is a dominating strategy (unconditional non-cooperation) for player 1.
- [**case 6$_{b}$ ( $\lambda_{21}$ low ):**]{} if $\lambda_{21}$ $\in$ $\left[ 0, \frac{6}{114}\right] $ then N is a dominating strategy (unconditional non-cooperation) for player 2.
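The mixing probabilities of case 1 follow from indifference conditions: against the equilibrium mixture, each player's two pure actions yield the same expected payoff. A quick numerical check for the symmetric case $\lambda_{12} = \lambda_{21} = 0.3$, a sketch based on the payoffs of Table \[table:Empathyconsideration\]:

```python
def coop_prob(lam):
    """Equilibrium probability of playing C: p = (8 - 12*lam) / (7*(1 + lam))."""
    return (8 - 12 * lam) / (7 * (1 + lam))

def expected_payoffs_p1(l12, q):
    """Player 1's expected payoffs for C and for N when player 2 plays C w.p. q."""
    u_c = q * (-6 - 6 * l12) + (1 - q) * (-120)
    u_n = q * (-120 * l12) + (1 - q) * (-72 - 72 * l12)
    return u_c, u_n

lam = 0.3                      # a "medium" empathy level in [6/114, 2/3]
q = coop_prob(lam)             # opponent's mixture in the symmetric case
u_c, u_n = expected_payoffs_p1(lam, q)
# u_c == u_n up to rounding: player 1 is indifferent, as required in equilibrium.
```

At the boundaries of the medium interval the mixture degenerates: $p = 1$ at $\lambda = \frac{6}{114}$ and $p = 0$ at $\lambda = \frac{2}{3}$, which is consistent with the pure-equilibrium thresholds above.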
[ **Analysis 2: $\lambda_{12}, \lambda_{21} < 0$** ]{}
- if $\lambda_{12}, \lambda_{21} < 0$ then NN is the unique equilibrium.
[ **Analysis 3: $\lambda_{12}>0, \lambda_{21} < 0$** ]{}
- if $\lambda_{21} < 0$ then N is the dominating strategy (unconditional non-cooperation) for player 2.
- if $\lambda_{12} > \frac{2}{3} $ then CN is an equilibrium.
- if $\lambda_{12} < \frac{2}{3} $ then NN is an equilibrium.
[ **Analysis 4: $\lambda_{12}<0, \lambda_{21} > 0$** ]{}
- if $\lambda_{12} < 0$ then N is a dominating strategy (unconditional non-cooperation) for player 1.
- if $\lambda_{21} > \frac{2}{3} $ and $\lambda_{12}$ $\in$ $\left[ - \frac{6}{114}, 0 \right] $ then NC is an equilibrium.
- if $\lambda_{21} < \frac{2}{3} $ then NN is an equilibrium.
Figure \[fig:twoEmpathy\] summarizes the outcome of the two-player game.
![Equilibrium of the game with Empathy consideration[]{data-label="fig:twoEmpathy"}](empathy2D "fig:")\
The proposed empathetic payoff better captures the preferences of the players, since a non-negligible proportion of cooperators is obtained analytically in the new game. Thus, if one quantifies accurately the effect of empathy in the payoff, the resulting game fits the experimental results more closely. By iterating over several experiments and model adjustments, we obtain better game-theoretic models of real-life interaction. We believe that the generic approach developed here can be extended to other classes of games, as indicated in [@t1; @mimo].
![ Empathy scale distribution across the population of participants []{data-label="fig2:repartition"}](repartition2){width="10cm" height="10cm"}
Conclusion
==========
We have proposed a basic experiment on the role of empathy in the one-shot Prisoner's Dilemma. We analyzed the multidimensional components of empathy using the IRI scale. The field experiment, conducted at the Learning and Game Theory Lab at NYUAD, involved a population of 47 persons (28 women and 19 men). The experimental game provided interesting data. A non-negligible proportion of the participants (35.71% of the women and 36.84% of the men) fully confessed. Considering the whole population, 36.17% fully confessed and 19.14% fully denied. In terms of partial confession behavior, 46% of the women and 54% of the men partially confessed; over the whole population, 45% partially confessed in this reproduction of the Prisoner's Dilemma game. Regarding the distribution of the women and men populations on the Interpersonal Reactivity Index, the experimental results reveal that the dominated strategies of classical game theory are no longer dominated when users' psychology is involved, and a significant level of cooperation is observed among users who are positively partially empathetic.
Future lines of our work include the creation and implementation of a new model of empathy measurement that takes into account the multi-faceted presence of different variables (general attitude to risk, a general estimation of the different heuristics present in the person, how the Imagine Self/Imagine Other model that leads to reciprocity is internalized individually, and individual attitude to fairness). Our aim is to make this a valid measurement model for everyday-life situations where empathy plays a key role, not only in the engineering field but also in the social, economic, and institutional areas.
Everyday life seems quite different and far from laboratory situations, where all concepts are built around a crystallized idea of what empathy is or should be. Evolutionary lines are therefore taken into account in our research as an important way to obtain data from a longitudinal perspective. Our dissatisfaction with a single instrument for testing empathy leads us to rethink, first of all, our next step in empathy measurement. It would take into account three lines, essentially:
1. a combined series of measurement instruments that considers a multidimensional level of empathy.
2. a possibility to test and retest the variables at different moments and in different ways (construct validity; test and retest the person on the same cluster, e.g., affective empathy).
3. feedback from verbal and non-verbal communication analysis software.
It would be interesting to investigate (i) whether there is any relationship between age and strategic decision making, and (ii) how altruism and spitefulness evolve with increasing age. Furthermore, gender differences (if any) should be investigated in a larger population and in games with distribution-dependent payoffs [@temftg1; @temftg2; @temftg3; @temftg4].
Compliance with Ethical Standards {#compliance-with-ethical-standards .unnumbered}
=================================
Conflict of interest: The authors declare that they have no conflict of interest.
Ethical approval: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed consent: Informed consent was obtained from all individual participants included in the study.
Friedman, J. , A non-cooperative equilibrium for supergames, Review of Economic Studies 38 (1): 1-12, (1971)
Myerson, Roger B. Game Theory, Analysis of conflict, Cambridge, Harvard University Press (1991).
Robert Vischer, On the Optical Sense of Form: A Contribution to Aesthetics, Empathy, Form, and Space: Problems in German Aesthetics, 1873-1893. Eds. Harry Francis Mallgrave and Eleftherios Ikonomou. Santa Monica, CA: The Getty Center for the History of Art and Humanities, 89-122, 1993.
Theodor Lipps, Empathy, Inner Imitation and Sense Feelings, in A Modern Book of Aesthetics, 374-382, New York: Rinehart and Winston, 1979.
Arthur S. Reber, Rhianon Allen and Emily S. Reber, The Penguin Dictionary of Psychology, London: Penguin, 2009.
Allan G., The Blackwell Dictionary of Sociology: A User’ s guide to sociological language, Oxford (England): Malden, Mass: Blackwell Publishers, 2000.
Jamil Zaki & Kevin N Ochsner, The Neuroscience of empathy: progress, pitfalls and promise, Nature Neuroscience 15, 675-680, 2012.
Xiaosi Gu, Zhixian Gao, Xinghao Wang, Xun Liu, Robert T. Knight, Patrick R. Hof, Jin Fan, Anterior insular cortex is necessary for empathetic pain perception, Oxford University (England) 2012, 135, 2726-2735.
Jorge A. Barraza, Paul J. Zak, The neurobiology of collective action, Frontiers in Neuroscience, 2013.
Matthew Lieberman, Social: why our brains are wired to connect, New York: Crown Publishers, 2013
Lauren Wispe, The Distinction between Sympathy and Empathy. To Call Forth a Concept, A Word is Needed, Journal of Personality and Social Psychology, 50(2), 1986, 314- 321.
Stephanie Preston, A perception action model for empathy,Behavioral and Brain Sciences / Volume 25 / Issue 01 / February 2002, pp. 1- 20, Cambridge University Press
Janet Wilde Astington, Margaret J. Edward, The Development of Theory of Mind in Early Childhood, 2010.
Hoffman M.L., 1978. Toward a theory of empathic arousal and development. In The development of affect (pp. 227-256). Springer US.
Hoffman M.L., Empathy and moral development: Implications for caring and justice, Cambridge University Press, Cambridge, MA (2000).
Feshbach, N. (1979). Empathy training: a field study in affective education. In Aggression and Behavior Change: Biological and Social Processes, Seymour Feshbach and Adam Fraczek (Eds.). New York: Praeger, 1979.
Hoffman, M.L. (1982) Development of prosocial motivation: Empathy and guilt. In N. Eisenberg (Ed.), The development of prosocial behavior (pp. 281-338). New York: Academic Press.
Preston, S.D. and De Waal, F.B., 2002. Empathy: Its ultimate and proximate bases. Behavioral and brain sciences, 25(01), pp.1-20.
Dziobek, I., Rogers, K., Fleck, S., Bahnemann M., Heekeren, H. R., Wolf, O. T., & Convit, A. (2008). Dissociation of cognitive and emotional empathy in adults with Asperger syndrome using the Multifaceted Empathy Test (MET). Journal of Autism and Developmental Disorders, 38, 464-473.
Eisenberg, N., & Miller, P. (1987). The relation of empathy to prosocial and related behaviors. Psychological Bulletin, 101, 91-119.
Shamay-Tsoory SG, Tomer R, Berger BD, Goldsher D, Aharon Peretz J. Impaired affective theory of mind is associated with right ventromedial prefrontal damage. Cognitive and Behavioural Neurology 2005, 18(1): 55-67.
C. Daniel Batson, Bruce D. Duncan, Paula Ackerman, Terese Buckley and Kimberly Bich, Is Empathic Emotion a Source of Altruistic Motivation, Journal of Personality and Social Psychology, Vol. 40, N.2, 290-302, 1981.
C.D Batson, J.C. Todd, R.M. Brummett, B.H. Shaw, Aldeguer C.M.R, Empathy and the collective good: caring for one of the others in a social dilemma. Journal of Personality and Social Psychology, Journal of Personality and Social Psychology, 1995, Vol. 68, 619-631
C. Daniel Batson, Tecia Moran, Empathy induced altruism in a prisoner’s dilemma, European Journal of Social Psychology, 1999, Vol. 29, N.7, 909-924.
Frederique De Vignemont and Tania Singer, The empathic brain: how, when and why? Trends in Cognitive Science. 2006 Oct;10(10):435-41. Elsevier 2006 Sep 1.
Sagi, Abraham; Hoffman, Martin L., Empathic distress in the newborn., Developmental Psychology, Vol 12(2), 1976, 175-176.
Feshbach, N. D. (1975). Empathy in children: Some theoretical and empirical consideration. Counseling Psychologist, 5 (2), 25-30.
Marcus, R. F., Tellen, S., & Roke, E. J. (1979.) Relation between cooperation and empathy in young children. Developmental Psychology, 15, 346-347.
John Nash, Non - Cooperative Games, The Annals of Mathematics, Second Series, Vol. 54, Issue 2 (Sep., 1951), 286-295.
G. A. Cory, The consilient brain: the bioneurological basis of economic, society and Politics, Politics and the Life Science, 23 (2): 64-65, 2004.
Ken Binmore, Game Theory and the Social Contract, vol 1, Playing Fair, MIT Press, 1994.
David Hume, A treatise of Human Nature. Oxford University Press, Oxford, 1739.
Smith Adam,The Theory of Moral Sentiments, Millar, 1759.
Batson, C.D, Batson J.G, Todd, R.M., Brummett, B.H., Shaw, L.L. & Aldeguer, C.M.R. (1995). Empathy and the collective good: caring for one of the others in social dilemma. Journal of Personality and Social, l Psychology, 68, 619-631.
Batson, C.D., Batson J.C, Singlsby, J.K, Harrell, K.L., Peekna H.M. & Todd, R.M (1991). Empathy joy and the empathy-altruism hypothesis. Journal of Personality and Social Psychology, 61, 413-426.
Werner Guth. How ultimatum offers emerge: A study in bounded rationality. Mimeo, 2000.
Werner Guth and Hartmut Kliemt. Bounded rationality and theory absorption. Mimeo, 2004.
Groh Jan, Huck Steffen, Mattias Justin. A Note on Empathy in Games. Journal of Economic Behavior & Organization, vol. 108, Dec. 2014, pp. 383-388.
H. Tembine, Psychological mean-field-type games, Preprint, 2017.
H. Tembine, Mean-field-type games, Preprint, 2017.
Alan Kirman, Miriam Teschl, Selfish or Selfless? The role of empathy in economics, vol.: 365, Issue: 1538, 2010
F. Artinger, F. Exadaktylos, H. Koppel, S. Sääksvuori. In others' shoes: do individual differences in empathy and theory of mind shape social preferences? PloS one, 2014.
Karen M. Page, Martin Nowak, Empathy leads to Fairness, Bulletin of mathematical biology, pp. 1101-1116, 2002.
Vittorio Pelligra, Empathy, guilt-aversion, and patterns of reciprocity. Journal of Neuroscience, Psychology, and Economics, Educational Publishing Foundation (2011).
Giulia Rossi, Alain Tcheukam and Hamidou Tembine, How Much Does Users’ Psychology Matter in Engineering Mean-Field-Type Games, Workshop on Game Theory and Experimental Methods June 6-7, 2016, Second University of Naples, Department of Economics Convento delle Dame Monache, Capua (Italy)
Zhou, Q., Valiente, C., & Eisenberg, N. (2003). Empathy and its measurement. In S. Lopez & C.R. Snyder (Eds.), Positive psychological assessment: A handbook of models and measures (pp. 269-284). Washington DC: American Psychological Association.
Dymond, Rosalind F., A scale for the measurement of empathic ability, Journal of Consulting Psychology, Vol 13(2), Apr 1949, 127-133.
Willard A. Kerr & Boris J. Speroff (1954) Validation and Evaluation of the Empathy Test, The Journal of General Psychology, 50:2, 269-276
Hogan, R. (1969). Development of an empathy scale. Journal of Consulting and Clinical Psychology, 33, 307-316.
Mehrabian, A., & Epstein, N. (1972). A measure of emotional empathy. Journal of Personality, 40(4), 525-543
Albert Mehrabian, Andrew L. Young and Sharon Sato, Emotional empathy and associated individual differences, Current Psychology, 1988, Volume 7, Number 3, Page 221.
Mehrabian, A. (2000). Beyond IQ: Broad-based measurement of individual success potential or emotional intelligence. Genetic, Social, and General Psychology Monographs, 126, 133-239.
B. Djehiche, T. Basar, H. Tembine, Mean-Field-Type Game Theory, Springer, under preparation, 2017.
Alain Bensoussan, Boualem Djehiche, Hamidou Tembine, Phillip Yam, Risk-Sensitive Mean-Field-Type Control, Preprint, 2017, arXiv:1702.01369.
Lawrence EJ, Shaw P, Baker D, Baron-Cohen S, David A. Measuring empathy: Reliability and validity of the Empathy Quotient. Psy Med. 2004; 34: 911 - 924.
Davis, M. H. (1983). Measuring individual differences in empathy: evidence for a multidimensional approach. Journal of Personality and Social Psychology, 44, 113-126.
Wang, Y., Davidson, M. M., Yakushko, O. F., Savoy, H. B., Tan, J. A., & Bleier, J. K. (2003). The Scale of Ethnocultural Empathy: Development, validation, and reliability. Journal of Counseling Psychology, 50, 221-234.
Hojat, M., Mangione, S., Nasca, T. J., Cohen, M. J. M., Gonnella, J. S., Erdmann, J. B., et al. (2001). The Jefferson Scale of Physician Empathy: Development and preliminary psychometric data. Educational and Psychological Measurement, 61(2), 349-366.
Reynolds, W., P. A. Scott, and Wendy Austin. ”Nursing, Empathy and Perception of the Moral.” (2000).
Baron-Cohen, S. et al. (2001) The AutismSpectrum Quotient (AQ): evidence from Asperger Syndrome/high-functioning autism, males and females, scientists and mathematicians. J. Autism Dev. Disord. 31, 5-17
Hashimoto, Hidemi, and Kunio Shiomi. ” The structure of empathy in Japanese adolescents: Construction and examination of an empathy scale.” Social Behavior and Personality: an international journal 30.6 (2002): 593-601.
Baron-Cohen, S., et al. (1995) Are children with autism blind to the mentalistic significance of the eyes?. British Journal of Developmental Psychology 13.4: 379-398.
Burgess, P.W., Alderman, N., Evans, J., Emslie, H. and Wilson, B.A., 1998. The ecological validity of tests of executive function. Journal of the International Neuropsychological Society, 4(06), pp.547-558.
Hornak, J., Rolls, E.T. and Wade, D., 1996. Face and voice expression identification in patients with emotional and behavioural changes following ventral frontal lobe damage. Neuropsychologia, 34(4), pp. 247-261.
Hall, J.A. and Bernieri, F.J. eds., 2001. Interpersonal sensitivity: Theory and measurement. Psychology Press.
B. Djehiche, H. Tembine, Risk-Sensitive Mean-Field Type Control Under Partial Observation, Book chapter in Stochastics of Environmental and Financial Economics, pp 243-263, Springer International Publishing, 2016
H. Tembine, Nonasymptotic mean-field games, IEEE Transactions on Cybernetics 44 (12), pp. 2744-2756, 2014.
D. Bauso, B. M. Dia, B. Djehiche, H. Tembine, R. Tempone, Mean-field games for marriage, PloS one 9 (5), e94933.
Davis, M. H. (1980). A multidimensional approach to individual differences in empathy. JSAS Catalog of Selected Documents in Psychology, 10, 85.
Baron-Cohen S, Wheelwright S. The empathy quotient: an investigation of adults with Asperger syndrome or high functioning autism, and normal sex differences. Journal of Autism and Developmental Disorders. 2004;34:163-175.
Cliffordson C. Parents’ judgments and students’ self-judgments of empathy: The Structure of empathy and agreement of judgments based on the Interpersonal Reactivity Index (IRI) European Journal of Psychological Assessment. 2001;17:36-47.
Alterman AI, McDermott PA, Cacciola JS, Rutherford MJ. Latent structure of the Davis Interpersonal Reactivity Index in methadone maintenance patients. Journal of Psychopathology and Behavioral Assessment. 2003;25:257-265.
L.S. Vygotsky. Razvitie vysshikh psikhicheskikh funktsii. The development of higher mental functions, Moscow, 1960, pp. 182-223
Axelrod Robert, William D. Hamilton: The Evolution of Cooperation, Science, New Series, Vol. 211, No 4489, 1390-1396
H. Tembine: Distributed massive MIMO network games: Risk and Altruism. CDC 2015: 3481-3486
H. Theil, A rank-invariant method of linear and polynomial regression analysis, I, in Proc. Kon. Nederl. Akad. Wetensch. A. 53, 1950a, pp. 386-392;
H. Theil, A rank-invariant method of linear and polynomial regression analysis, II, in Proc. Kon. Nederl. Akad. Wetensch A. 53, 1950b, pp. 521-525;
H. Theil, A rank-invariant method of linear and polynomial regression analysis, III, in Proc. Kon. Nederl. Akad. Wetensch. A. 53, 1950c, pp. 1397-1412.
H. Tembine, Distributed Strategic Learning for Wireless Engineers, CRC Press, Taylor & Francis, 496 pages, ISBN: 1439876371, 2012.
M. A. Khan, H. Tembine, Random matrix games in wireless networks, IEEE Global High Tech Congress on Electronics (GHTCE 2012), November 18-20, 2012, Shenzhen, China
Statistical Data
=================
Below we provide statistical data from the experimental game conducted in the laboratory. Tables \[table:PTfemNNt\], \[table:ECfemNNsss\], \[table:FSfemNN\], and \[table:PDfemNN\] report the data for women in the four IRI scales (PT, EC, FS, PD).
Tables \[table:PTmenNN\], \[table:ECmenNN\], \[table:FSmenNN\], and \[table:PDmenNN\] report the corresponding statistical data for men.
  PT         A   B   C   D   E
  ---------- --- --- --- --- ---
  Woman 3    0   3   0   4   0
  Woman 5    0   3   3   1   0
  Woman 6    0   1   2   3   1
  Woman 11   0   2   4   1   0
  Woman 15   0   5   2   0   0
  Woman 16   1   2   1   3   0
  Woman 17   0   3   3   0   1
  Woman 19   1   1   0   1   4
  Woman 24   2   1   2   2   0
  Woman 27   0   2   3   1   0

  Legend: Who is PT / Not PT / Unclear.
[|c|c|c|c|c|c|]{} &\
[ PT]{} & A & B & C & D & E\
Woman 2 &1 &3 & 3 & 0 &0\
Woman 4 &1 & 1& 2& 2&1\
Woman 7 & 2& 0 & 0&0& 5\
Woman 8 &2& 0& 2 & 3&0\
Woman 9 & 0&2 & 3 & 2 & 0\
Woman 10 & 0& 3& 2 & 2 &0\
Woman 12 & 0& 1& 5& 1& 0\
Woman 14 & 0& 1& 4& 2& 0\
Woman 18 & 0& 2& 0& 1& 4\
Woman 20 & 1& 1& 1& 1& 3\
Woman 21 & 1& 1& 1& 4& 0\
Woman 22 & 0& 0& 7& 0& 0\
Woman 23 & 0& 2& 3& 1& 1\
Woman 25 & 1& 3& 1& 0& 2\
Woman 26 & 0& 1& 3& 1& 1\
\
Who is PT &\
Not PT &\
Unclear &\
[|c|c|c|c|c|c|]{} &\
[ PT]{} & A & B & C & D & E\
Woman 1 &0 &1 &2 &4 & 0\
Woman 13 &0 &1 &2 & 2 &2\
Woman 28 & 3 & 1 & 1 & 2 & 0\
\
Who is PT &\
Not PT &\
Unclear &\
| EC | A | B | C | D | E |
|---|---|---|---|---|---|
| Woman 3 | 2 | 3 | 1 | 1 | 0 |
| Woman 5 | 2 | 1 | 0 | 4 | 0 |
| Woman 6 | 0 | 4 | 1 | 1 | 1 |
| Woman 11 | 0 | 4 | 1 | 1 | 1 |
| Woman 15 | 3 | 0 | 2 | 2 | 0 |
| Woman 16 | 3 | 1 | 2 | 1 | 0 |
| Woman 17 | 2 | 1 | 1 | 3 | 0 |
| Woman 19 | 3 | 1 | 1 | 1 | 1 |
| Woman 24 | 0 | 4 | 1 | 1 | 1 |
| Woman 27 | 2 | 2 | 2 | 1 | 0 |
| Who is EC | | | | | |
| Not EC | | | | | |
| Unclear | | | | | |

| EC | A | B | C | D | E |
|---|---|---|---|---|---|
| Woman 2 | 1 | 2 | 0 | 3 | 1 |
| Woman 4 | 1 | 2 | 4 | 0 | 0 |
| Woman 7 | 1 | 2 | 0 | 1 | 3 |
| Woman 8 | 3 | 0 | 1 | 3 | 0 |
| Woman 9 | 1 | 2 | 0 | 4 | 0 |
| Woman 10 | 3 | 1 | 1 | 2 | 0 |
| Woman 12 | 0 | 4 | 2 | 1 | 0 |
| Woman 14 | 0 | 0 | 4 | 3 | 0 |
| Woman 18 | 1 | 0 | 2 | 1 | 3 |
| Woman 20 | 3 | 0 | 0 | 1 | 3 |
| Woman 21 | 2 | 1 | 0 | 3 | 1 |
| Woman 22 | 1 | 2 | 0 | 4 | 0 |
| Woman 23 | 0 | 2 | 2 | 3 | 0 |
| Woman 25 | 4 | 0 | 1 | 1 | 1 |
| Woman 26 | 2 | 1 | 2 | 0 | 2 |
| Who is EC | | | | | |
| Not EC | | | | | |
| Unclear | | | | | |

| EC | A | B | C | D | E |
|---|---|---|---|---|---|
| Woman 1 | 1 | 1 | 4 | 1 | 0 |
| Woman 13 | 0 | 2 | 2 | 3 | 0 |
| Woman 28 | 2 | 1 | 2 | 0 | 2 |
| Who is EC | | | | | |
| Not EC | | | | | |
| Unclear | | | | | |
| FS | A | B | C | D | E |
|---|---|---|---|---|---|
| Woman 3 | 4 | 1 | 2 | 0 | 0 |
| Woman 5 | 3 | 0 | 3 | 1 | 0 |
| Woman 6 | 0 | 2 | 4 | 1 | 0 |
| Woman 11 | 0 | 2 | 1 | 4 | 0 |
| Woman 15 | 3 | 2 | 2 | 0 | 0 |
| Woman 16 | 1 | 5 | 0 | 1 | 0 |
| Woman 17 | 2 | 1 | 2 | 2 | 0 |
| Woman 19 | 2 | 3 | 0 | 1 | 1 |
| Woman 24 | 0 | 1 | 5 | 1 | 0 |
| Woman 27 | 2 | 0 | 3 | 2 | 0 |
| Who is FS | | | | | |
| Not FS | | | | | |
| Unclear | | | | | |

| FS | A | B | C | D | E |
|---|---|---|---|---|---|
| Woman 2 | 2 | 1 | 1 | 1 | 2 |
| Woman 4 | 1 | 4 | 1 | 0 | 1 |
| Woman 7 | 0 | 3 | 1 | 3 | 0 |
| Woman 8 | 1 | 1 | 3 | 1 | 1 |
| Woman 9 | 2 | 0 | 0 | 5 | 0 |
| Woman 10 | 1 | 6 | 0 | 0 | 0 |
| Woman 12 | 0 | 1 | 4 | 2 | 0 |
| Woman 14 | 0 | 0 | 2 | 4 | 1 |
| Woman 18 | 1 | 1 | 2 | 2 | 1 |
| Woman 20 | 2 | 1 | 1 | 1 | 2 |
| Woman 21 | 1 | 1 | 3 | 2 | 0 |
| Woman 22 | 1 | 1 | 2 | 3 | 0 |
| Woman 23 | 0 | 0 | 1 | 1 | 5 |
| Woman 25 | 1 | 1 | 2 | 2 | 1 |
| Woman 26 | 0 | 4 | 3 | 0 | 0 |
| Who is FS | | | | | |
| Not FS | | | | | |
| Unclear | | | | | |

| FS | A | B | C | D | E |
|---|---|---|---|---|---|
| Woman 1 | 2 | 3 | 1 | 1 | 0 |
| Woman 13 | 0 | 0 | 0 | 3 | 4 |
| Woman 28 | 4 | 0 | 0 | 2 | 1 |
| Who is FS | | | | | |
| Not FS | | | | | |
| Unclear | | | | | |
| PD | A | B | C | D | E |
|---|---|---|---|---|---|
| Woman 3 | 1 | 2 | 2 | 2 | 0 |
| Woman 5 | 3 | 0 | 0 | 4 | 0 |
| Woman 6 | 2 | 3 | 0 | 4 | 0 |
| Woman 11 | 0 | 4 | 3 | 0 | 0 |
| Woman 15 | 0 | 4 | 2 | 1 | 0 |
| Woman 16 | 1 | 2 | 3 | 1 | 0 |
| Woman 17 | 3 | 3 | 1 | 0 | 0 |
| Woman 19 | 4 | 1 | 0 | 1 | 1 |
| Woman 24 | 1 | 2 | 3 | 1 | 0 |
| Woman 27 | 2 | 3 | 2 | 0 | 0 |
| Who is PD | | | | | |
| Not PD | | | | | |
| Unclear | | | | | |

| PD | A | B | C | D | E |
|---|---|---|---|---|---|
| Woman 2 | 3 | 0 | 1 | 1 | 2 |
| Woman 4 | 1 | 1 | 4 | 0 | 1 |
| Woman 7 | 1 | 2 | 1 | 2 | 1 |
| Woman 8 | 3 | 1 | 2 | 1 | 0 |
| Woman 9 | 0 | 4 | 3 | 0 | 0 |
| Woman 10 | 4 | 2 | 1 | 0 | 0 |
| Woman 12 | 0 | 4 | 2 | 1 | 0 |
| Woman 14 | 0 | 1 | 5 | 1 | 0 |
| Woman 18 | 2 | 2 | 1 | 2 | 0 |
| Woman 20 | 1 | 2 | 3 | 0 | 1 |
| Woman 21 | 1 | 5 | 0 | 1 | 0 |
| Woman 22 | 0 | 3 | 2 | 1 | 1 |
| Woman 23 | 0 | 0 | 5 | 2 | 0 |
| Woman 25 | 2 | 4 | 0 | 0 | 1 |
| Woman 26 | 2 | 3 | 1 | 0 | 1 |
| Who is PD | | | | | |
| Not PD | | | | | |
| Unclear | | | | | |

| PD | A | B | C | D | E |
|---|---|---|---|---|---|
| Woman 1 | 2 | 1 | 1 | 1 | 1 |
| Woman 13 | 0 | 0 | 3 | 4 | 0 |
| Woman 28 | 3 | 1 | 0 | 1 | 2 |
| Who is PD | | | | | |
| Not PD | | | | | |
| Unclear | | | | | |
| PT | A | B | C | D | E |
|---|---|---|---|---|---|
| Man 1 | 0 | 3 | 2 | 1 | 1 |
| Man 4 | 0 | 2 | 1 | 1 | 3 |
| Man 8 | 0 | 2 | 4 | 1 | 0 |
| Man 10 | 0 | 1 | 2 | 4 | 1 |
| Man 11 | 3 | 0 | 1 | 1 | 2 |
| Man 12 | 1 | 1 | 2 | 3 | 0 |
| Man 19 | 0 | 1 | 1 | 4 | 1 |
| Who is PT | | | | | |
| Not PT | | | | | |
| Unclear | | | | | |

| PT | A | B | C | D | E |
|---|---|---|---|---|---|
| Man 3 | 0 | 3 | 1 | 3 | 0 |
| Man 5 | 0 | 3 | 3 | 0 | 0 |
| Man 6 | 1 | 0 | 1 | 4 | 1 |
| Man 7 | 3 | 3 | 0 | 0 | 1 |
| Man 13 | 2 | 5 | 0 | 0 | 0 |
| Man 14 | 4 | 0 | 1 | 0 | 2 |
| Who is PT | | | | | |
| Not PT | | | | | |
| Unclear | | | | | |

| PT | A | B | C | D | E |
|---|---|---|---|---|---|
| Man 2 | 0 | 3 | 2 | 0 | 2 |
| Man 9 | 0 | 1 | 3 | 3 | 0 |
| Man 15 | 0 | 2 | 1 | 4 | 0 |
| Man 16 | 0 | 4 | 3 | 0 | 0 |
| Man 17 | 0 | 3 | 3 | 1 | 0 |
| Man 18 | 0 | 4 | 3 | 0 | 0 |
| Who is PT | | | | | |
| Not PT | | | | | |
| Unclear | | | | | |
| EC | A | B | C | D | E |
|---|---|---|---|---|---|
| Man 1 | 3 | 3 | 0 | 1 | 0 |
| Man 4 | 3 | 0 | 2 | 1 | 1 |
| Man 8 | 0 | 4 | 3 | 0 | 0 |
| Man 10 | 0 | 2 | 4 | 0 | 1 |
| Man 11 | 1 | 0 | 1 | 4 | 1 |
| Man 12 | 3 | 1 | 1 | 2 | 0 |
| Man 19 | 1 | 0 | 1 | 4 | 1 |
| Who is EC | | | | | |
| Not EC | | | | | |
| Unclear | | | | | |

| EC | A | B | C | D | E |
|---|---|---|---|---|---|
| Man 3 | 0 | 5 | 2 | 0 | 0 |
| Man 5 | 1 | 1 | 3 | 0 | 2 |
| Man 6 | 1 | 1 | 0 | 4 | 1 |
| Man 7 | 1 | 2 | 4 | 0 | 0 |
| Man 13 | 0 | 5 | 2 | 0 | 0 |
| Man 14 | 0 | 1 | 4 | 2 | 0 |
| Who is EC | | | | | |
| Not EC | | | | | |
| Unclear | | | | | |

| EC | A | B | C | D | E |
|---|---|---|---|---|---|
| Man 2 | 2 | 2 | 0 | 2 | 1 |
| Man 9 | 0 | 3 | 1 | 3 | 0 |
| Man 15 | 1 | 3 | 3 | 0 | 0 |
| Man 16 | 3 | 1 | 3 | 0 | 0 |
| Man 17 | 0 | 3 | 4 | 0 | 0 |
| Man 18 | 3 | 2 | 1 | 1 | 0 |
| Who is EC | | | | | |
| Not EC | | | | | |
| Unclear | | | | | |
| FS | A | B | C | D | E |
|---|---|---|---|---|---|
| Man 1 | 1 | 3 | 1 | 0 | 2 |
| Man 4 | 1 | 2 | 1 | 0 | 1 |
| Man 8 | 1 | 1 | 1 | 1 | 3 |
| Man 10 | 0 | 2 | 3 | 2 | 0 |
| Man 11 | 3 | 1 | 0 | 0 | 3 |
| Man 12 | 0 | 2 | 4 | 1 | 0 |
| Man 19 | 0 | 2 | 0 | 4 | 1 |
| Who is FS | | | | | |
| Not FS | | | | | |
| Unclear | | | | | |

| FS | A | B | C | D | E |
|---|---|---|---|---|---|
| Man 3 | 2 | 5 | 0 | 0 | 0 |
| Man 5 | 0 | 0 | 4 | 1 | 2 |
| Man 6 | 4 | 3 | 0 | 0 | 0 |
| Man 7 | 4 | 3 | 0 | 0 | 0 |
| Man 13 | 5 | 1 | 1 | 0 | 0 |
| Man 14 | 0 | 1 | 6 | 0 | 0 |
| Who is FS | | | | | |
| Not FS | | | | | |
| Unclear | | | | | |

| FS | A | B | C | D | E |
|---|---|---|---|---|---|
| Man 2 | 3 | 1 | 0 | 3 | 0 |
| Man 9 | 0 | 2 | 4 | 1 | 0 |
| Man 15 | 0 | 3 | 2 | 1 | 1 |
| Man 16 | 1 | 3 | 2 | 1 | 0 |
| Man 17 | 1 | 1 | 4 | 1 | 0 |
| Man 18 | 0 | 4 | 3 | 0 | 0 |
| Who is FS | | | | | |
| Not FS | | | | | |
| Unclear | | | | | |
| PD | A | B | C | D | E |
|---|---|---|---|---|---|
| Man 1 | 2 | 2 | 0 | 3 | 0 |
| Man 4 | 2 | 0 | 2 | 0 | 3 |
| Man 8 | 0 | 3 | 2 | 2 | 0 |
| Man 10 | 0 | 3 | 3 | 1 | 0 |
| Man 11 | 1 | 1 | 1 | 0 | 4 |
| Man 12 | 3 | 2 | 2 | 0 | 0 |
| Man 19 | 2 | 1 | 2 | 1 | 1 |
| Who is PD | | | | | |
| Not PD | | | | | |
| Unclear | | | | | |
| PD | A | B | C | D | E |
|---|---|---|---|---|---|
| Man 3 | 0 | 5 | 2 | 0 | 0 |
| Man 5 | 0 | 4 | 1 | 2 | 0 |
| Man 6 | 1 | 0 | 4 | 2 | 0 |
| Man 7 | 4 | 1 | 0 | 0 | 2 |
| Man 13 | 1 | 3 | 3 | 0 | 0 |
| Man 14 | 0 | 4 | 1 | 1 | 1 |
| Who is PD | | | | | |
| Not PD | | | | | |
| Unclear | | | | | |
| PD | A | B | C | D | E |
|---|---|---|---|---|---|
| Man 2 | 4 | 1 | 0 | 1 | 1 |
| Man 9 | 1 | 2 | 3 | 1 | 0 |
| Man 15 | 1 | 2 | 0 | 4 | 0 |
| Man 16 | 0 | 7 | 0 | 0 | 0 |
| Man 17 | 2 | 0 | 1 | 4 | 0 |
| Man 18 | 0 | 2 | 4 | 1 | 0 |
| Who is PD | | | | | |
| Not PD | | | | | |
| Unclear | | | | | |
Biography {#biography .unnumbered}
=========
[**Giulia Rossi**]{} received her Master’s degree summa cum laude in Clinical Psychology in 2009 from the University of Padova. She worked as an independent researcher on the analysis and prevention of psychopathological diseases and on the intercultural expression of mental diseases. Her research interests include behavioral game theory, social norms and the epistemic foundations of mean-field-type game theory. She is currently a research associate in the Learning & Game Theory Laboratory at New York University Abu Dhabi.
[**Alain Tcheukam**]{} received his PhD in Computer Science and Engineering in 2013 from the IMT Institute for Advanced Studies Lucca. His research interests include crowd flows, smart cities and mean-field-type optimization. He received the 2015 Federbim Valsecchi award for his contribution to the design, modelling and analysis of smarter cities, and a 2016 best paper award from the International Conference on Electrical Energy and Networks. He is currently a postdoctoral researcher with the Learning & Game Theory Laboratory at New York University Abu Dhabi.
[**Hamidou Tembine**]{} (S’06-M’10-SM’13) received his M.S. degree in Applied Mathematics from Ecole Polytechnique and his Ph.D. degree in Computer Science from the University of Avignon. His current research interests include evolutionary games, mean field stochastic games and their applications. In 2014, Tembine received the IEEE ComSoc Outstanding Young Researcher Award for his promising research activities for the benefit of the society. He is the recipient of 7 best paper awards in the applications of game theory. Tembine is a prolific researcher with numerous scientific publications in magazines, letters, journals and conferences. He is the author of the book “Distributed Strategic Learning for Wireless Engineers” (CRC Press, Taylor & Francis, 2012) and co-author of the book “Game Theory and Learning in Wireless Networks” (Elsevier Academic Press). Tembine has been a co-organizer of several scientific meetings on game theory in networking, wireless communications and smart energy systems. He is a senior member of IEEE.
[^1]: The authors are with Learning & Game Theory Laboratory, New York University Abu Dhabi, Email: tembine@nyu.edu
---
abstract: 'While deep learning strategies achieve outstanding results in computer vision tasks, one issue remains: the current strategies rely heavily on a huge amount of labeled data. In many real-world problems it is not feasible to create such an amount of labeled training data. Therefore, researchers try to incorporate unlabeled data into the training process to reach equal results with fewer labels. Due to a lot of concurrent research, it is difficult to keep track of recent developments. In this survey we provide an overview of often used techniques and methods in image classification with fewer labels. We compare 21 methods. In our analysis we identify three major trends. 1. State-of-the-art methods are scalable to real-world applications in terms of their accuracy. 2. The degree of supervision which is needed to achieve comparable results to the usage of all labels is decreasing. 3. All methods share common techniques while only few methods combine these techniques to achieve better performance. Based on these three trends we identify future research opportunities.'
author:
- |
Lars Schmarje, Monty Santarossa,Simon-Martin Schröder, Reinhard Koch\
Multimedia Information Processing Group, Kiel University, Germany\
[{las,msa,sms,rk}@informatik.uni-kiel.de]{}
bibliography:
- 'references.bib'
title: |
A survey on Semi-, Self- and Unsupervised Techniques in Image Classification\
[Similarities, Differences & Combinations]{}
---
Introduction {#sec:intro}
============
Deep learning strategies achieve outstanding successes in computer vision tasks. They reach the best performance in a diverse range of tasks such as image classification, object detection or semantic segmentation.
![ This image illustrates and simplifies the benefit of using unlabeled data during deep learning training. The red and dark blue circles represent labeled data points of different classes. The light grey circles represent unlabeled data points. If we have only a small number of labeled data available we can only make assumptions (dotted line) over the underlying true distribution (black line). This true distribution can only be determined if we also consider the unlabeled data points and clarify the decision boundary. []{data-label="fig:concept-semi-supervised-learning"}](concept-semi-supervised-learning){width="\linewidth"}
\
The quality of a deep neural network is strongly influenced by the number of labeled / supervised images. ImageNet [@imagenet] is a huge labeled dataset which allows the training of networks with impressive performance. Recent research shows that even larger datasets than ImageNet can improve these results [@limits-supervision]. However, in many real world applications it is not possible to create labeled datasets with millions of images. A common strategy for dealing with this problem is transfer learning. This strategy improves results even on small and specialized datasets like medical imaging [@schmarje2019]. While this might be a practical workaround for some applications, the fundamental issue remains: Unlike humans, supervised learning needs enormous amounts of labeled data.\
For a given problem we often have access to a large dataset of unlabeled data. Xie et al. were among the first to investigate unsupervised deep learning strategies to leverage this data [@dec]. Since then, the usage of unlabeled data has been researched in numerous ways and has created research fields like semi-supervised, self-supervised, weakly-supervised or metric learning [@metric-survey]. The idea that unifies these approaches is that using unlabeled data is beneficial during the training process (see for an illustration). It either makes training with few labels more robust or, in some rare cases, even surpasses supervised performance [@iic].\
Due to this benefit, many researchers and companies work in the field of semi-, self- and unsupervised learning. The main goal is to close the gap between semi-supervised and supervised learning or even to surpass these results. Considering presented methods like [@S4L; @uda] we believe that research is at the breaking point of achieving this goal. Hence, there is a lot of ongoing research in this field. This survey provides an overview to keep track of the major and recent developments in semi-, self- and unsupervised learning.\
Most investigated research topics share a variety of common ideas while differing in goal, application contexts and implementation details. This survey gives an overview over this wide range of research topics. The focus of this survey is on describing the similarities and differences between the methods. Moreover, we will look at combinations of different techniques.\
While we look at a broad range of learning strategies, we compare these methods only on the image classification task. The addressed audience of this survey consists of deep learning researchers or interested people with comparable preliminary knowledge who want to keep track of recent developments in the field of semi-, self- and unsupervised learning.
Related Work
------------
In this subsection we give a quick overview about previous works and reference topics we will not address further in order to maintain the focus of this survey.\
The research of semi- and unsupervised techniques in computer vision has a long history. There has been a variety of research and even surveys on this topic. Unsupervised cluster algorithms were researched before the breakthrough of deep learning and are still widely used [@k-means]. There are already extensive surveys that describe unsupervised and semi-supervised strategies without deep learning [@old-clustering; @old-semi-supervised]. We will focus only on techniques including deep neural networks.\
Many newer surveys focus only on self-, semi- or unsupervised learning [@deep-survey; @survey-self; @survey-semi].\
Min et al. wrote an overview about unsupervised deep learning strategies [@deep-survey]. They presented the beginnings of this field of research from a network architecture perspective. The authors looked at a broad range of architectures. We focus ourselves on only one architecture which Min et al. refer to as “Clustering deep neural network (CDNN)-based deep clustering” [@deep-survey]. Even though the work was published in 2018, it already misses the recent developments in deep learning of the last few years. We look at these more recent developments and show the connections to other research fields that Min et al. didn’t include.\
Van Engelen and Hoos give a broad overview about general and recent semi-supervised methods [@survey-semi]. While they cover some recent developments, the newest deep learning strategies are missing. Furthermore, the authors do not explicitly compare the presented methods based on their structure or performance. We provide such a comparison and also include self- and unsupervised methods.\
Jing and Tian concentrated their survey on recent developments in self-supervised learning [@survey-self]. Like us, the authors provide a performance comparison and a taxonomy. However, they do not compare the methods based on their underlying techniques. Jing and Tian look at different tasks apart from classification but ignore semi- and unsupervised methods.\
Qi and Luo are among the few who look at self-, semi- and unsupervised learning in one survey [@survey-semi-unsuper]. However, they look at the different learning strategies separately and provide comparisons only within the respective learning strategy. We distinguish between these strategies but also look at the similarities between them. We show that bridging these gaps leads to new insights, improved performance and future research approaches.
Some surveys focus not on the general overviews about semi-, self- and unsupervised learning but on special details. In their survey Cheplygina et al. present a variety of methods in the context of medical image analysis [@medicine-survey]. They include deep learning and older machine learning approaches but look at different strategies from a medical perspective. Mey and Loog focused on the underlying theoretical assumptions in semi-supervised learning [@semi-theory]. We keep our survey limited to general image classification tasks and focus on their practical application.\
Keeping the above-mentioned limitations in mind, the topic of self-, semi- and unsupervised learning still includes a broad range of research fields. In this survey we will focus on deep learning approaches for image classification. We will investigate the different learning strategies with a spotlight on loss functions. Therefore, topics like metric learning and generative adversarial networks will be excluded.
Underlying Concepts {#sec:pre}
===================
In this section we summarize general ideas about semi-, self- and unsupervised learning. We extend this summarization with our own definition and interpretation of certain terms. The focus lies on distinguishing the possible learning strategies and the most common methods to realize them. Throughout this survey we use the terms learning strategy, technique and method with a specific meaning. The *learning strategy* is the general type/approach of an algorithm. We call each individual algorithm proposed in a paper a *method*. A method can be classified into a learning strategy and consists of *techniques*. Techniques are the parts or ideas which make up the method/algorithm.
Learning strategies
-------------------
Terms like supervised, semi-supervised and self-supervised are often used in literature. A precise definition which clearly separates the terms is rarely given. In most cases a rough general consensus about the meaning is sufficient but we noticed a high variety of definitions in borderline cases. For the comparison of different methods we need a precise definition to distinguish between them. We will summarize the common consensus about the learning strategies and define how we view certain borderline cases. In general, we distinguish the methods based on the amount of used labeled data and at which stage of the training process supervision is introduced. Taken together, we call the semi-, self- and unsupervised (learning) strategies *reduced supervised* (learning) strategies. illustrates the four presented deep learning strategies.
### Supervised
Supervised learning is the most common strategy in image classification with deep neural networks. We have a set of images $X$ and corresponding labels or classes $Z$. Let $C$ be the number of classes and $f(x)$ the output of a certain neural network for $x \in X$.\
The goal is to minimize a loss function between the outputs and labels. A common loss function to measure the difference between $f(x)$ and the corresponding label $z$ is cross-entropy. $$\label{eq:ce}
\begin{split}
CE(f(x),z) &= -\sum_{c=1}^{C} P_z(c) log(P_{f(x)}(c)) \\
&= H(P_z) + KL(P_z|P_{f(x)})
\end{split}$$ $P$ is a probability distribution over all classes. $H$ is the entropy of a probability distribution and $KL$ is the Kullback-Leibler divergence. The distribution $P$ can be approximated with the output of the neural network $f(x)$ or the given label $z$. It is important to note that cross-entropy is the sum of the entropy of $z$ and a Kullback-Leibler divergence between $f(x)$ and $z$. In general the entropy $H(P_z)$ is zero due to the one-hot encoded label $z$.
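As a sanity check of this decomposition, consider a minimal numerical sketch in plain Python (the function names are our own):

```python
import math

def cross_entropy(p_pred, p_label):
    """CE between a predicted and a label distribution (lists of floats)."""
    return -sum(z * math.log(p) for p, z in zip(p_pred, p_label) if z > 0)

def entropy(p):
    # terms with probability 0 contribute 0 by convention
    return -sum(q * math.log(q) for q in p if q > 0)

def kl_divergence(p, q):
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

# softmax output of a network and a one-hot label for C = 3 classes
p_f = [0.7, 0.2, 0.1]
p_z = [1.0, 0.0, 0.0]

ce = cross_entropy(p_f, p_z)
# CE = H(P_z) + KL(P_z || P_f); with a one-hot label H(P_z) = 0,
# so CE reduces to the KL term, here -log(0.7)
assert math.isclose(ce, entropy(p_z) + kl_divergence(p_z, p_f))
assert math.isclose(ce, -math.log(0.7))
```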
#### Transfer Learning
\
A limiting factor in supervised learning is the availability of labels. The creation of these labels can be expensive and therefore limits their number. One method to overcome this limitation is to use transfer learning.\
Transfer learning describes a two stage process of training a neural network. The first stage is to train with or without supervision on a large and generic dataset like ImageNet [@imagenet]. The second stage is using the trained weights and fine-tune them on the target dataset. A great variety of papers have shown that transfer learning can improve and stabilize the training even on small domain-specific datasets [@schmarje2019].
### Unsupervised
In unsupervised learning we only have images $X$ and no further labels. A variety of loss functions exist in unsupervised learning [@dac; @iic; @dec]. In most cases the problem is rephrased in such a way that all inputs for the loss can be generated, e.g. reconstruction loss in auto encoders [@dec]. Despite this automation or self-supervision we do call these methods unsupervised. Please see below for our interpretation of self-supervised learning.
### Semi-Supervised
Semi-supervised learning is a mixture of unsupervised and supervised learning. We have labels $Z$ for a set of images $X_l$ like in supervised learning. The rest of the images $X_u$ have no corresponding label. Due to this mixture, a semi-supervised loss can have a variety of shapes. A common way is to add a supervised and an unsupervised loss. In contrast to other learning strategies $X_u$ and $X_l$ are used in parallel.
### Self-supervised
Self-supervised learning uses a pretext task to learn representations on unlabeled data. The pretext task is unsupervised but the learned representations are often not directly usable for image classification and have to be fine-tuned. Therefore, self-supervised learning can be interpreted either as an unsupervised strategy, a semi-supervised strategy or a strategy of its own. We see self-supervised learning as a special strategy. In the following, we will explain how we arrive at this conclusion. The strategy cannot be called unsupervised if we need to use any labels during the fine-tuning. There is also a clear difference to semi-supervised methods: the labels are not used simultaneously with unlabeled data, because the pretext task is unsupervised and only the fine-tuning uses labels. For us this separation of the usage of labeled data into two different subtasks characterizes a strategy of its own.
Techniques {#subsec:techniques}
----------
Different techniques can be used to train models in reduced supervised cases. In this section we present a selection of techniques that are used in multiple methods in the literature.
### Consistency regularization
A major line of research uses consistency regularization. In a semi-supervised learning process these regularizations are used as an additional loss to a supervised loss on the unsupervised part of the data. This constraint leads to improved results due to the ability of taking unlabeled data into account for defining the decision boundaries [@mean-teacher; @temporal-ensembling; @S4L]. Some self- or unsupervised methods take this approach even further by using only this consistency regularization for the training [@iic; @amdim].
#### Virtual Adversarial Training (VAT)
\
VAT [@vat] tries to make predictions invariant to small transformations by minimizing the distance between an image and a transformed version of the image. Miyato et al. showed how a transformation can be chosen and approximated in an adversarial way. This adversarial transformation maximizes the distance between an image and a transformed version of it over all possible transformations. illustrates the concept of VAT. The loss is defined as $$\begin{split}
VAT(f(x)) &= D(P_{f(x)} ,P_{f(x + r_{adv})}) \\
r_{adv} &= arg max_{r;||r|| \leq \epsilon} D(P_{f(x)} ,P_{f(x + r)})
\end{split}$$ In this equation $x$ is an image out of the dataset $X$ and $f(x)$ is the output of a given neural network. $P$ is the probability distribution over these outputs and $D$ is a non-negative function that measures the distance between two distributions. Two examples of used distance measures are cross-entropy [@vat] and Kullback-Leibler divergence [@S4L; @uda].
![Illustration of the VAT concept - The blue and red circles represent two different classes. The line is the decision boundary between these classes. The $\epsilon$ spheres around the circles define the area of possible transformations. The arrows represent the adversarial change $r$ which push the decision boundary away from any data point.[]{data-label="fig:vat"}](vat){width="0.7\linewidth"}
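Miyato et al. approximate $r_{adv}$ efficiently with a power-iteration scheme; the sketch below replaces that step with naive random search over candidate directions of norm $\epsilon$, which conveys the objective but not the efficient computation (the toy model and all names are our own):

```python
import math
import random

random.seed(0)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kl(p, q):
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

# toy "network": a fixed 3x4 linear layer followed by softmax
W = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
f = lambda x: softmax([sum(w * v for w, v in zip(row, x)) for row in W])

def vat_loss(x, eps=0.1, n_candidates=200):
    """Approximate r_adv by sampling random directions of norm eps and
    keeping the one that maximizes the output divergence D = KL."""
    p = f(x)
    best = 0.0
    for _ in range(n_candidates):
        r = [random.gauss(0, 1) for _ in range(4)]
        norm = math.sqrt(sum(v * v for v in r))
        r = [eps * v / norm for v in r]
        best = max(best, kl(p, f([a + b for a, b in zip(x, r)])))
    return best

x = [random.gauss(0, 1) for _ in range(4)]
assert vat_loss(x) >= 0.0          # a KL divergence is never negative
assert vat_loss(x, 0.0) == 0.0     # no perturbation, no divergence
```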
#### Mutual Information (MI)
\
MI is defined for two probability distributions as the Kullback-Leibler (KL) divergence between the joint distribution and the marginal distributions [@ig]. This measure is used as a loss function instead of CE in several methods [@imsat; @iic; @amdim]. The benefits are described below. For images $x,y$, certain neural network outputs $f(x), f(y)$ and the corresponding probability distributions $P_{f(x)},P_{f(y)}$, we can maximize the mutual information by minimizing the following: $$\begin{split}
-I(P_{f(x)},P_{f(y)}) & = -KL(P_{(f(x),f(y))}|P_{f(x)} * P_{f(y)}) \\
& = -H(P_{f(x)}) + H(P_{f(x)}|P_{f(y)})
\end{split}$$ An alternative representation of mutual information is the separation in entropy $H(P_{f(x)})$ and conditional entropy $H(P_{f(x)}|P_{f(y)})$.\
Ji et al. describe the benefits of using MI over CE in unsupervised cases [@iic]. One major benefit is the inherent property to avoid degeneration due to the separation in entropy and conditional entropy. MI balances the effects of maximizing the entropy with a uniform distribution for $P_{f(x)}$ and minimizing the conditional entropy by equalizing $P_{f(x)}$ and $P_{f(y)}$. Both cases are undesirable for the output of a neural network.
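The equality of the two representations can be verified on a small joint distribution; a plain-Python sketch (all names are ours):

```python
import math

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

# joint distribution P(f(x), f(y)) over C = 2 cluster assignments
P_joint = [[0.4, 0.1],
           [0.1, 0.4]]
P_x = [sum(row) for row in P_joint]            # marginal of f(x)
P_y = [sum(col) for col in zip(*P_joint)]      # marginal of f(y)

# I = KL(joint || product of marginals)
mi_kl = sum(p * math.log(p / (P_x[i] * P_y[j]))
            for i, row in enumerate(P_joint)
            for j, p in enumerate(row) if p > 0)

# I = H(P_x) - H(P_x | P_y), using H(X|Y) = H(X,Y) - H(Y)
flat = [p for row in P_joint for p in row]
mi_ent = entropy(P_x) - (entropy(flat) - entropy(P_y))
assert math.isclose(mi_kl, mi_ent)
assert mi_kl > 0                               # the two outputs are correlated
```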
#### Entropy Minimization (EntMin)
\
Grandvalet and Bengio proposed to sharpen the output predictions in semi-supervised learning by minimizing entropy [@entropy-min]. They minimized the entropy $H(P_{f(x)})$ for all probability distributions $P_{f(x)}$ based on a certain neural network output $f(x)$ for an image $x$. This minimization only sharpens the predictions of a neural network and cannot be used on its own.
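A tiny sketch of what the entropy term measures (the values are purely illustrative):

```python
import math

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

sharp = [0.9, 0.05, 0.05]      # confident prediction
flat = [1 / 3, 1 / 3, 1 / 3]   # maximally uncertain prediction
# minimizing H(P_f(x)) pushes predictions from `flat` towards `sharp`;
# the uniform distribution attains the maximum entropy log(C)
assert entropy(sharp) < entropy(flat)
assert math.isclose(entropy(flat), math.log(3))
```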
#### Mean Squared Error (MSE)
\
A common distance measure between two neural network outputs $f(x), f(y)$ for images $x,y$ is MSE. Instead of measuring the difference based on probability theory, it uses the euclidean distance of the output vectors $$MSE(f(x), f(y)) = || f(x) - f(y) || _2^2$$ Minimizing this measure pulls two outputs towards each other.
### Overclustering
Normally, if we have $k$ classes in a supervised case, we also use $k$ clusters in an unsupervised case. Research showed that it can be beneficial to use more clusters than the $k$ actual classes [@deep-cluster; @iic]. We call this idea *overclustering*.\
Overclustering can be beneficial in reduced supervised cases because neural networks can decide ’on their own’ how to split the data. This separation can be helpful for noisy data or for intermediate classes that were randomly assigned to adjacent classes.
### Pseudo-Labels
A simple approach for estimating labels of unknown data is Pseudo-Labels [@pseudolabel]. Lee proposed to predict the classification of unseen data with a neural network and use these predictions as labels. What sounds at first like a self-fulfilling assumption works reasonably well in real-world image classification tasks. Several modern methods are based on the same core idea of creating labels by predicting them on their own [@mean-teacher; @mixmatch].
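A minimal sketch of the core idea in plain Python; note that the confidence threshold is a common refinement and our own addition here, not part of Lee's original plain-argmax proposal:

```python
def pseudo_labels(probs, threshold=0.95):
    """Turn confident predictions on unlabeled data into labels.
    Returns (index, predicted class) pairs for images whose maximum
    class probability reaches the threshold."""
    result = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            result.append((i, p.index(conf)))
    return result

# predicted class probabilities for three unlabeled images, C = 2
probs = [[0.97, 0.03],
         [0.60, 0.40],
         [0.01, 0.99]]
# only the two confident predictions become labels
assert pseudo_labels(probs) == [(0, 0), (2, 1)]
# with a lower threshold every prediction is kept, as in Lee's argmax
assert pseudo_labels(probs, threshold=0.5) == [(0, 0), (1, 0), (2, 1)]
```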
Methods {#sec:methods}
=======
In the following, we give a short overview of all methods in this survey in alphabetical order, separated according to their learning strategy. Because the methods may reference each other, you may have to jump to the corresponding entry if you would like to know more. This list does not claim to be complete. We included methods which were referenced often in related work, which are comparable to the other methods and which are complementary to the presented methods.
Semi-Supervised
---------------
#### Fast-Stochastic Weight Averaging (fast-SWA)
\
In contrast to other semi-supervised methods, Athiwaratkun et al. do not change the loss but the optimization algorithm [@fast-SWA]. They analysed the learning process based on ideas and concepts of SWA [@SWA], the $\pi$-model [@temporal-ensembling] and Mean Teacher [@mean-teacher]. Athiwaratkun et al. show that averaging and cycling learning rates are beneficial in semi-supervised learning by stabilizing the training. They call their improved version of SWA fast-SWA due to its faster convergence and lower performance variance [@fast-SWA]. The architecture and loss are copied either from the $\pi$-model [@temporal-ensembling] or from Mean Teacher [@mean-teacher].
#### Mean Teacher
\
With Mean Teacher, Tarvainen & Valpola present a student-teacher approach for semi-supervised learning [@mean-teacher]. They develop their approach based on the $\pi$-model and Temporal Ensembling [@temporal-ensembling]. Therefore, they also use MSE as a consistency loss between two predictions but create these predictions differently. They argue that Temporal Ensembling incorporates new information too slowly into predictions, because the exponential moving average (EMA) is only updated once per epoch. Therefore, they propose to use a teacher based on the average weights of the student in each update step. For their model Tarvainen & Valpola show that the KL-divergence is an inferior consistency loss in comparison to MSE. An illustration of this method is given in .
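The per-step weight averaging can be sketched as follows (a toy sketch with a fixed student vector; names are ours):

```python
def ema_update(teacher, student, decay=0.99):
    """Teacher weights as an exponential moving average of the student
    weights, applied after every update step (not once per epoch)."""
    return [decay * t + (1 - decay) * s for t, s in zip(teacher, student)]

teacher = [0.0] * 4
student = [1.0] * 4              # pretend the student stays fixed
for _ in range(10):              # ten optimization steps
    teacher = ema_update(teacher, student)

# after n steps the teacher has moved a (1 - decay^n) fraction of the way
assert all(abs(t - (1 - 0.99 ** 10)) < 1e-9 for t in teacher)
```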
#### MixMatch
\
MixMatch [@mixmatch] uses a combination of a supervised and an unsupervised loss. Berthelot et al. use CE as the supervised loss and MSE between predictions and generated Pseudo-Labels as their unsupervised loss. These Pseudo-Labels are created from previous predictions of augmented images. They propose a novel sharpening method over multiple predictions to improve the quality of the Pseudo-Labels. Furthermore, they extend the algorithm mixup [@mixup] to semi-supervised learning by incorporating the generated labels. Mixup creates convex combinations of images by blending them into each other. An illustration of the concept is given in . Predicting the convex combination of the corresponding labels turned out to be beneficial for supervised learning in general [@mixup].
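The mixup combination itself fits in a few lines; the sketch below uses flattened toy "images" and omits MixMatch's additional label sharpening (all names are ours):

```python
import random

random.seed(0)

def mixup(x1, y1, x2, y2, alpha=0.75):
    """Convex combination of two flattened images and their one-hot labels;
    the mixing weight is drawn from a Beta(alpha, alpha) distribution."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

x_cat = [random.random() for _ in range(12)]   # stand-ins for real images
x_dog = [random.random() for _ in range(12)]
y_cat, y_dog = [1.0, 0.0], [0.0, 1.0]

x_mix, y_mix = mixup(x_cat, y_cat, x_dog, y_dog)
assert len(x_mix) == 12
assert abs(sum(y_mix) - 1.0) < 1e-9            # mixed label is a distribution
```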
#### $\pi$-model and Temporal Ensembling
\
Laine & Aila present two similar learning methods with the names $\pi$-model and Temporal Ensembling [@temporal-ensembling]. Both methods use a combination of the supervised CE loss and the unsupervised consistency loss MSE. The first input for the consistency loss in both cases is the output of their network for a randomly augmented input image. The second input is different for each method. In the $\pi$-model an augmentation of the same image is used. In Temporal Ensembling an exponential moving average of previous predictions is evaluated. Laine & Aila show that Temporal Ensembling is up to two times faster and more stable in comparison to the $\pi$-model [@temporal-ensembling]. Illustrations of these methods are given in .
![Illustration of mixup - The images of a cat and a dog are combined with a parametrized blending. The labels are also combined by the same parametrization. The shown images are taken from the dataset STL-10 [@stl-10][]{data-label="fig:mixup"}](mixup){width="\linewidth"}
#### Pseudo-Labels
\
Pseudo-Labels [@pseudolabel] describes both a common technique in deep learning and a learning method of its own. For the general technique see above in . In contrast to many other semi-supervised methods, Pseudo-Labels does not use a combination of an unsupervised and a supervised loss. Instead, it uses the predictions of a neural network as labels for unlabeled data, as described in the general technique, so labeled and unlabeled data are used in parallel to minimize the same CE loss. The use of a single loss distinguishes this method from other semi-supervised approaches, but the parallel utilization of labeled and unlabeled data still classifies it as semi-supervised.
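The combined objective can be sketched as CE on the labeled batch plus CE on the unlabeled batch against the network's own argmax predictions. The `weight` argument stands in for the time-dependent balancing coefficient of the original method; all names here are illustrative:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean CE between predicted class probabilities and integer labels."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def pseudo_label_loss(probs_l, y_l, probs_u, weight=1.0):
    """CE on labeled data plus CE on unlabeled data against the
    network's own argmax predictions, used as pseudo-labels."""
    pseudo = probs_u.argmax(axis=1)
    return cross_entropy(probs_l, y_l) + weight * cross_entropy(probs_u, pseudo)

# demo: one labeled and one unlabeled sample with two classes
probs_l = np.array([[0.9, 0.1]])
y_l = np.array([0])
probs_u = np.array([[0.8, 0.2]])
loss = pseudo_label_loss(probs_l, y_l, probs_u)
```

Confident unlabeled predictions contribute little extra loss, so the unsupervised term mainly pushes uncertain predictions towards one class.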
#### Self-Supervised Semi-Supervised Learning (S$^4$L)
\
S$^4$L [@S4L] is, as the name suggests, a combination of self-supervised and semi-supervised methods. Zhai et al. split the loss into a supervised and an unsupervised part. The supervised loss is CE while the unsupervised loss is based on the self-supervised techniques using rotation and exemplar prediction [@self-rotation; @self-exemplar]. The authors show that their method performs better than other self-supervised and semi-supervised techniques [@self-exemplar; @self-rotation; @vat; @entropy-min; @pseudolabel]. In their *Mix Of All Models* (MOAM) they combine self-supervised rotation prediction, VAT, entropy minimization, Pseudo-Labels and fine-tuning into a single model with multiple training steps. We count S$^4$L as a semi-supervised method due to this combination.
#### Unsupervised Data Augmentation (UDA)
\
Xie et al. present with UDA a semi-supervised learning algorithm which concentrates on the use of state-of-the-art augmentation [@uda]. They use a supervised and an unsupervised loss. The supervised loss is CE while the unsupervised loss is the Kullback-Leibler divergence between output predictions. These output predictions are based on an image and an augmented version of this image. For image classification they propose to use the augmentation scheme generated by AutoAugment [@autoaugment] in combination with Cutout [@cutout]. AutoAugment uses reinforcement learning to create useful augmentations automatically. Cutout is an augmentation scheme in which randomly selected regions of the image are masked out. Xie et al. show that this combined augmentation method achieves higher performance than previous methods such as Cutout, cropping or flipping on their own. In addition to the different augmentation they propose to use a variety of other regularization methods. They propose Training Signal Annealing, which restricts the influence of labeled examples during the training process in order to prevent overfitting. They also use EntMin [@entropy-min] and a kind of Pseudo-Labeling [@pseudolabel]. We say *a kind of* Pseudo-Labeling because they do not use the predictions as labels directly but use them to filter the unlabeled data for outliers. An illustration of this method is given in .
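The consistency part of such a loss can be sketched with a plain discrete KL divergence between the prediction on the clean image (the fixed target) and the prediction on the augmented image; this is a minimal illustration, not the paper's exact implementation:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    return np.sum(p * np.log((p + eps) / (q + eps)))

# demo: prediction on the clean image vs. on an augmented version
p_clean = np.array([0.7, 0.2, 0.1])
p_aug   = np.array([0.5, 0.3, 0.2])
loss = kl_divergence(p_clean, p_aug)
```

Minimizing this term pulls the augmented-image prediction towards the clean-image prediction, enforcing invariance to the augmentation.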
#### Virtual Adversarial Training (VAT)
\
VAT [@vat] is not just the name of a regularization technique but also a semi-supervised learning method. Miyato et al. use a combination of VAT on unlabeled data and CE on labeled data [@vat]. They show that the adversarial transformation leads to a lower error on image classification than random transformations. Furthermore, they show that adding EntMin [@entropy-min] to the loss increases accuracy even more.
Self-Supervised
---------------
#### Augmented Multiscale Deep InfoMax (AMDIM)
\
AMDIM [@amdim] maximizes the MI between inputs and outputs of a network. It is an extension of the method DIM [@dim]. DIM usually maximizes MI between local regions of an image and a representation of the image. AMDIM extends the idea of DIM in several ways. Firstly, the authors sample the local regions and representations from different augmentations of the same source image. Secondly, they maximize MI between multiple scales of the local region and the representation. They use a more powerful encoder and define mixture-based representations to achieve higher accuracies. Bachman et al. fine-tune the representations on labeled data to measure their quality. An illustration of this method is given in .
#### Contrastive Predictive Coding (CPC)
\
CPC [@cpc; @cpcv2] is a self-supervised method which predicts representations of local image regions based on previous image regions. The authors determine the quality of these predictions by identifying the correct prediction among randomly sampled negative ones. They use the standard CE loss and adapt it to sum over the complete image. This adaptation results in their loss InfoNCE [@cpc]. Van den Oord et al. showed that minimizing InfoNCE maximizes a lower bound on the MI between the previous image regions and the predicted image region [@cpc]. An illustration of this method is given in .
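For a single positive pair, InfoNCE reduces to the CE of picking the positive among the negatives. A minimal sketch with raw (unnormalized) similarity scores, under the simplifying assumption of scalar scores:

```python
import numpy as np

def info_nce(score_pos, scores_neg):
    """InfoNCE for one positive pair: cross-entropy of identifying the
    positive score among a set of negative scores (softmax over all)."""
    scores = np.concatenate(([score_pos], scores_neg))
    return -score_pos + np.log(np.sum(np.exp(scores)))

# demo: the loss drops as the positive score grows relative to the negatives
loss_weak = info_nce(1.0, np.array([0.0, 0.0]))
loss_strong = info_nce(3.0, np.array([0.0, 0.0]))
```

The loss is always positive and shrinks as the model separates the true prediction from the sampled negatives, which is what ties it to the MI lower bound.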
#### DeepCluster
\
DeepCluster [@deep-cluster] is a self-supervised technique which generates labels by k-means clustering. Caron et al. iterate between clustering the network's feature representations to generate Pseudo-Labels and training the network with cross-entropy on these labels. They show that it is beneficial to use overclustering in the pretext task. After the pretext task they fine-tune the network on all labels. An illustration of this method is given in .
#### Deep InfoMax (DIM)
\
DIM [@dim] maximizes the MI between local input regions and output representations. Hjelm et al. show that maximizing over local input regions rather than the complete image is beneficial for image classification. In addition, they use a discriminator to match the output representations to a given prior distribution. At the end they fine-tune the network with an additional small fully-connected neural network.
![Illustration of the Context pretext task - A central patch and an adjacent patch from the same image are given. The task is to predict one of the 8 possible relative positions of the second patch to the first one. In the example the correct answer is upper center. The illustration is inspired by [@self-context].[]{data-label="fig:context"}](context){width="\linewidth"}
#### Invariant Information Clustering (IIC)
\
IIC [@iic] maximizes the MI between augmented views of an image. The idea is that images should belong to the same class regardless of augmentation. The augmentation has to be a transformation to which the neural network should be invariant. The authors do not maximize directly over the output distributions but over the class distribution which is approximated for every batch. Ji et al. use auxiliary overclustering on a different output head to increase their performance in the unsupervised case. This idea allows the network to learn subclasses and handle noisy data. Ji et al. use Sobel filtered images as input instead of the original RGB images. Additionally, they show how to extend IIC to image segmentation. Up to this point the method is completely unsupervised. In order to be comparable to other semi-supervised methods they fine-tune their models on a subset of available labels. An illustration of this method is given in .
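The batch-wise MI objective of IIC can be sketched from paired softmax outputs: estimate the class joint distribution over the batch, symmetrize it, and compute its mutual information. This is a simplified NumPy illustration of the objective, not the authors' implementation:

```python
import numpy as np

def iic_mutual_info(z1, z2, eps=1e-12):
    """Mutual information of the class joint distribution estimated from
    a batch of paired softmax outputs (image, augmented image)."""
    P = z1.T @ z2 / len(z1)          # C x C joint distribution over the batch
    P = (P + P.T) / 2.0              # symmetrize
    Pi = P.sum(axis=1, keepdims=True)
    Pj = P.sum(axis=0, keepdims=True)
    return np.sum(P * (np.log(P + eps) - np.log(Pi + eps) - np.log(Pj + eps)))

# demo: perfectly confident and consistent pairs over 2 classes
z = np.eye(2)
mi_max = iic_mutual_info(z, z)  # reaches the maximum log(C) = log(2)
```

The maximum log(C) is reached only when predictions are both confident and balanced across classes, which is why the objective avoids the degenerate all-one-cluster solution.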
#### Representation Learning - Context
\
Doersch et al. propose to use context prediction as a pretext task for visual representation learning [@self-context]. A central patch and an adjacent patch from an image are used as input. The task is to predict one of the 8 possible relative positions of the second patch to the first one. An illustration of the pretext task is given in . Doersch et al. argue that this task becomes easier if the content of the patches is recognized. The authors fine-tune their representations for other tasks and show their superiority in comparison to random initialization. Aside from fine-tuning, Doersch et al. show how their method could be used for Visual Data Mining.
#### Representation Learning - Exemplar
\
Dosovitskiy et al. were among the first to propose a self-supervised pretext task with additional fine-tuning [@self-exemplar]. They randomly sample patches from different images and augment these patches heavily. Augmentations can be, for example, rotations, translations, color changes or contrast adjustments. The classification task is to map all augmented versions of a patch to the correct original patch.
#### Representation Learning - Jigsaw
\
Noroozi and Favaro propose to solve Jigsaw puzzles as a pretext task [@self-jigsaw]. The idea is that a network has to understand the concept of a presented object in order to solve the puzzle. They prevent simple solutions which only look at edges or corners by including small random margins between the puzzle patches. They fine-tune on supervised data for image classification tasks. Noroozi et al. extended the Jigsaw task by adding image parts of a different image [@self-jigsaw++]. They call the extension Jigsaw++. An example for Jigsaw and Jigsaw++ is given in .
#### Representation Learning - Rotation
\
Gidaris et al. use a pretext task based on image rotation prediction [@self-rotation]. They propose to randomly rotate the input image by 0, 90, 180 or 270 degrees and let the network predict the chosen rotation. In their work they also evaluate different numbers of rotations, with four rotations scoring the best result. For image classification they fine-tune on labeled data.
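Generating the self-supervised training pairs for this task is a one-liner per image: sample one of the four rotations and use its index as the label. A minimal sketch with an illustrative helper name:

```python
import numpy as np

def rotation_task(image, rng=np.random):
    """Sample one of four rotations (0, 90, 180, 270 degrees) and
    return the rotated image with the rotation index as the label."""
    label = rng.randint(4)
    return np.rot90(image, k=label), label

# demo: the transformation is invertible given the label
img = np.arange(4).reshape(2, 2)
rotated, label = rotation_task(img)
```

No human annotation is needed: the label is known by construction, yet predicting it forces the network to recognize object orientation and hence content.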
Unsupervised
------------
#### Deep Adaptive Image Clustering (DAC)
\
DAC [@dac] reformulates unsupervised clustering as pairwise classification. Similar to the idea of Pseudo-Labels, Chang et al. predict clusters and use these predictions to retrain the network. The twist is that they calculate the cosine distance between all cluster predictions. This distance is used to determine whether two input images are similar or dissimilar with a given certainty. The network is then trained with binary CE on these confidently similar and dissimilar image pairs. During the training process they lower the required certainty to include more images. As input Chang et al. use a combination of RGB and extracted HOG features. Additionally, they use an auxiliary loss in their source code which is not reported in the paper.[^1]
#### Invariant Information Clustering (IIC)
\
IIC [@iic] is described above as a self-supervised learning method. In comparison to other presented self-supervised methods IIC creates usable classifications without fine-tuning the model on labeled data. The reason for this is that the pretext task is constructed in such a way that label predictions can be extracted directly from the model. This leads to the conclusion that IIC can also be interpreted as an unsupervised learning method.
#### Information Maximizing Self-Augmented Training (IMSAT)
\
IMSAT [@imsat] maximizes MI between the input and output of the model. As a consistency regularization Hu et al. use CE between an image prediction and an augmented image prediction. They show that the best augmentation of the prediction can be calculated with VAT [@vat]. The maximization of MI directly on the image input leads to a problem. For datasets like CIFAR-10, CIFAR-100 [@cifar] and STL-10 [@stl-10] the color information is too dominant in comparison to the actual content or shape. As a workaround Hu et al. use the features generated by a pretrained CNN on ImageNet [@imagenet] as input.
Comparison {#sec:compare}
==========
In this chapter we will analyze which techniques are shared between methods and which differ. We will also compare the performance of all methods on common deep learning datasets.
Datasets {#subsec:datasets}
--------
In this survey we compare the presented methods on a variety of datasets. We selected four datasets that were used in multiple papers to allow a fair comparison. An overview of example images is given in .
#### CIFAR-10 and CIFAR-100
are large datasets of tiny color images of size 32x32 [@cifar]. Both datasets contain 60,000 images belonging to 10 or 100 classes respectively. The 100 classes in CIFAR-100 can be combined into 20 superclasses. Both sets provide 50,000 training labels and 10,000 validation labels. The presented results are trained with only 4,000 labels for CIFAR-10 and 10,000 labels for CIFAR-100 to represent a semi-supervised case. If a method uses all labels, this is marked separately.
#### STL-10
is a dataset designed for unsupervised and semi-supervised learning [@stl-10]. The dataset is inspired by CIFAR-10 [@cifar] but provides fewer labels. It consists of only 5,000 training labels and 8,000 validation labels. However, 100,000 unlabeled example images are also provided. These unlabeled examples belong to the training classes as well as to some different classes. The images are 96x96 color images and were acquired together with their labels from ImageNet [@imagenet].
#### ILSVRC-2012
is a subset of ImageNet [@imagenet]. The training set consists of 1.2 million images while the validation and the test set include 150,000 images. These images belong to 1000 object categories. Due to this large number of categories, it is common to report Top-5 and Top-1 accuracy. Top-1 accuracy is the classical accuracy where one prediction is compared to one ground truth label. Top-5 accuracy checks whether a ground truth label is within a set of at most five predictions. For further details on accuracy see . The presented results are trained with only 10% of the labels to represent a semi-supervised case. If a method uses all labels, this is marked separately.
Evaluation metrics {#subsec:metric}
------------------
We compare the performance of all methods based on their classification score. This score is defined differently for the unsupervised setting than for all other settings. We follow standard protocol and use the classification accuracy in most cases. For unsupervised strategies we use the cluster accuracy because the predicted clusters have no fixed assignment to the ground-truth classes: we need to find the best one-to-one permutation from the network's cluster predictions to the ground-truth classes.\
For vectors $x,y \in \mathds{Z}^N$ with $N \in \mathds{N}$ the accuracy is defined as follows: $$ACC(x, y) = \frac{\sum_{i=1}^{N} \mathds{1}_{y_i = x_i}}{N}$$ For the cluster accuracy we additionally maximize over all possible one-to-one permutations $\sigma$: $$ACC(x, y) = \max_{\sigma} \frac{\sum_{i=1}^{N} \mathds{1}_{y_i = \sigma (x_i)}}{N}$$
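Both metrics can be sketched directly from the definitions above. The brute-force maximization over permutations is only feasible for a small number of classes; larger problems typically use the Hungarian algorithm instead:

```python
import itertools
import numpy as np

def accuracy(x, y):
    """Fraction of matching entries in two label vectors."""
    return np.mean(np.asarray(x) == np.asarray(y))

def cluster_accuracy(pred, truth, n_classes):
    """Maximize plain accuracy over all one-to-one mappings from
    predicted cluster ids to ground-truth classes (small n_classes only)."""
    pred = np.asarray(pred)
    best = 0.0
    for perm in itertools.permutations(range(n_classes)):
        remapped = np.array([perm[c] for c in pred])
        best = max(best, accuracy(remapped, truth))
    return best

# demo: a perfect clustering whose cluster ids are merely swapped
pred = [1, 1, 0, 0]
truth = [0, 0, 1, 1]
```

Here the plain accuracy is 0 although the clustering is perfect; the cluster accuracy finds the swap and returns 1.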
Comparison of methods {#subsec:comparison}
---------------------
In this subsection we will compare the methods with regard to their used techniques and performance. We will summarize the presented results and discuss the underlying trends in the next subsection.\
#### Comparison with regard to used techniques
\
In we present all methods and their used techniques. We evaluate only techniques which were used frequently in different papers. Special details such as the mixup algorithm in MixMatch, the different optimizer for fast-SWA or the used distance measure for VAT are excluded. Please see for further details.\
Most methods share similar supervised and unsupervised techniques. All semi-supervised methods use the cross-entropy loss during training. All self-supervised methods use a pretext task and fine-tune in the end. All unsupervised methods do not use any technique which requires ground-truth labels. Due to our definition of the learning strategies this grouping is expected.\
The similarities and differences are not that clear for unsupervised techniques. We still step through all techniques based on the significance for differentiating the learning strategies and methods.\
MI is not used by any semi-supervised method and most semi-supervised methods do not use a pretext task. Self-supervised methods often rely on one or both of these techniques. Most unsupervised methods also use MI but do not use a pretext task. S$^4$L and IIC stand out due to the fact that they use a pretext task. S$^4$L is the only method which combines a self-supervised pretext task and semi-supervised learning. IIC uses a pretext task in the unsupervised case based on MI. The pretext task creates representations which can be interpreted as classifications without fine-tuning.\
The techniques VAT, EntMin and MSE are often used for semi-supervised methods. VAT and EntMin are more often used together than MSE with either of them. This correlation might be introduced by the selected methods. The methods fast-SWA, Mean Teacher, $\pi$-model and Temporal Ensembling are extensions or derivations of each other. Also VAT+EntMin is an extension of VAT.\
Pseudo-Labels are used in several different methods. Due to the simple and flexible idea of Pseudo-Labels, it can be used in a variety of different methods. UDA for example uses this technique to filter the unlabeled data for useful images.
#### Comparison with regard to performance
\
We compare the performance of the different methods based on their respective reported results or cross-references in other papers. For a better comparability we would have liked to recreate every method in a unified setup but this was not feasible. While using reported values might be the only possible approach, it leads to drawbacks in the analysis.\
Kolesnikov et al. showed that changes in the architecture can lead to significant performance boosts or drops [@revisiting-self]. They state that 'neither \[...\] the ranking of architectures \[is\] consistent across different methods, nor is the ranking of methods consistent across architectures' [@revisiting-self]. While most methods try to achieve comparability with previous ones through a similar setup, small differences aggregate over time and lead to a variety of used architectures. Some methods use only early convolutional networks such as AlexNet [@alexnet] while others use more modern architectures like the Wide ResNet architecture [@wide-resnet] or Shake-Shake regularization [@shake-shake].\
Oliver et al. proposed guidelines to ensure more comparable evaluations in semi-supervised learning [@realistic-semi-supervised]. They showed that not following these guidelines may lead to changes in the performance [@realistic-semi-supervised]. While some methods try to follow these guidelines, we cannot guarantee that all methods do so. This impacts the comparability further.\
shows the collected results for all presented methods. We also provide results for the respective supervised baselines reported by the authors. To keep a fair comparability we did not add state-of-the-art baselines with more complex architectures.\
Considering the above mentioned limitations, we do not focus on small differences but look for general trends and specialities instead.\
In general, the used architectures become more complex and the accuracies rise over time. This behavior is expected as new results are often improvements of earlier works. The changes in architecture may have led to the improvements. However, many papers include ablation studies and comparisons to only supervised methods to show the impact of their method. We believe that a combination of more modern architecture and more advanced methods leads to the improvement.\
For the CIFAR-10 dataset almost all semi- and self-supervised methods reach about or over 90% accuracy. The best method, MixMatch, reaches an accuracy of about 95% and is roughly two percent worse than the fully supervised baseline. For the CIFAR-100 dataset fewer results are reported. On this dataset MixMatch is the best method with about 74%, compared to the fully supervised baseline of about 80%.\
For the STL-10 dataset most methods report a better result than the supervised baseline. These results are possible due to the unlabeled part of the dataset. The unlabeled data can only be utilized by semi-, self- or unsupervised methods. MixMatch achieves the best results with about 94%.\
The ILSVRC-2012 dataset is the most difficult dataset based on the reported Top-1 accuracies. Most methods achieve only a Top-1 accuracy which is roughly 20% worse than the reported supervised baseline with around 79%. Only the methods S$^4$L and UDA achieve an accuracy which is less than 10% worse than the baseline. S$^4$L achieves the best accuracy with a Top-1 accuracy of about 73% and a Top-5 accuracy of around 92%.\
The unsupervised methods are separated from the supervised baseline by a clear margin of up to 50%. IIC achieves the best results of about 61% on CIFAR-10 and STL-10. IMSAT reports an accuracy of about 94% on STL-10. Due to the fact that IMSAT uses pretrained ImageNet features, a superset of STL-10, the results are not directly comparable.
Discussion
----------
In this subsection we discuss the presented results of the previous subsection. We divide our discussion into three major trends which we identified. All these trends lead to possible future research opportunities.
#### 1. Trend: Real World Applications
\
Previous methods were not scalable to real world images and applications and used workarounds, e.g. extracted features [@imsat], to process real world images. Many methods report a result of over 90% on CIFAR-10, a simple low-resolution dataset. Only two methods achieve a Top-5 accuracy of over 90% on ILSVRC-2012, a high-resolution dataset. We conclude that most methods are not scalable to real world image classification problems. However, the best reported methods like MixMatch and S$^4$L have surpassed the point of purely scientific usage and can be applied to real world applications.\
This conclusion applies to real world image classification tasks with balanced and clearly separated classes. It also indicates which real world issues need to be solved in future research: class imbalance and noisy labels are not treated by the presented methods, and datasets with only few unlabeled data points are not considered.
#### 2. Trend: Needed supervision is decreasing
\
We see that the gap between reduced supervised and supervised methods is shrinking. For CIFAR-10, CIFAR-100 and ILSVRC-2012 we have a gap of less than 5% left between fully supervised and reduced supervised learning. For STL-10 the reduced supervised methods even surpass the fully supervised case by about 30% due to the additional set of unlabeled data. We conclude that reduced supervised learning reaches comparable results while using only roughly 10% of the labels.\
Most newly proposed methods are semi- or self-supervised rather than unsupervised. Unsupervised methods like IIC still reach results of over 60% and show that this kind of training can be beneficial for semi-supervised learning [@iic]. However, their results are still surpassed by semi- or self-supervised methods by a large margin, e.g. over 30% on CIFAR-10. The integration of the knowledge of even some labels into the training process seems to be crucial.\
In general we considered a reduction from 100% to 10% of all labels. Some methods [@S4L; @mixmatch] even try to train their models with only 1% of all labels. For ILSVRC-2012 this is equivalent to about 13 images per class. We expect that future research will concentrate on achieving comparable results with only 1% or even fewer of all labels. In the end, research fields like few-shot, single-shot and semi-supervised learning might even merge.\
We assume that in parallel to the reduction of needed labels, the usage of total unsupervised methods will decrease further. The benefit of even some labels as guiding reference for learning methods is too valuable to ignore. We believe that for many real world applications a very low supervision in form of archetypes might be sufficient. This will lead to a shift in the corresponding research efforts.
#### 3. Trend: Combination of techniques
\
In the comparison we identified that semi-supervised and self-supervised methods share few unsupervised techniques.\
We believe there is only little overlap between semi- and self-supervised learning due to the different aims of the respective authors. Many self-supervised papers focus on creating good representations. They fine-tune their results only to be comparable. Semi-supervised papers aim for the best accuracy scores with as few labels as possible.\
The comparison showed that MixMatch and S$^4$L are the best methods or, considering the above-mentioned limitations due to architecture differences, among the best. Both methods combine several techniques. S$^4$L stands out as it combines many different techniques; the combined approach is even called “Mix of all models” [@S4L]. We assume that this combination is the reason for the improved performance. This assumption is supported by the comparisons included in the original papers: S$^4$L shows the impact of each technique separately as well as of the combination of all [@S4L].\
IIC is the only method which can be used as an unsupervised or self-supervised method with fine-tuning. This flexibility allows approaches with a smooth transition between no and small supervision.\
The comparison showed that the technique Pseudo-Labels can be applied to a variety of methods. However, only few methods use this technique.\
We believe that the combination of different basic techniques is a promising future research field due to the fact that many combinations are not yet explored.
Conclusion
==========
In this paper, we provided an overview of semi-, self- and unsupervised techniques. We analyzed their differences, similarities and combinations based on 21 different methods. This analysis led to the identification of several trends and possible research fields.\
We based our analysis on the definition of the different learning strategies (semi-, self- or unsupervised) and common techniques in these strategies. We showed how the methods work in general, which techniques they use and as what kind of strategy they could be classified. Despite the difficult comparison of the methods’ performances due to different architectures and implementations we identified three major trends.\
Results of over 90% Top-5 accuracy on ILSVRC-2012 with only 10% of the labels show that semi-supervised methods are applicable for real world problems. However, issues like class imbalance are not considered. Future research has to address these issues.\
The performance gap between supervised and semi- or self-supervised methods is closing. On one dataset the supervised baseline is even surpassed by about 30%. The number of labels needed to get results comparable to fully supervised learning is decreasing. Future research can lower the number of needed labels even further. We noticed that, as time progresses, unsupervised methods are used less often. These two conclusions lead us to the assumption that unsupervised methods will lose significance for real world image classification in the future.\
We concluded that semi- and self-supervised learning strategies mainly use different sets of techniques. In general, both strategies use a combination of different techniques, but there is little overlap between these techniques. S$^4$L is the only presented method which bridges this separation. We identified the trend that a combination of different techniques is beneficial to the overall performance. In combination with the small overlap between the techniques, we identified possible future research opportunities.
[^1]: https://github.com/vector-1127/DAC/blob/master/STL10/stl.py
---
abstract: 'A fast moving infrared excess source (G2) which is widely interpreted as a core-less gas and dust cloud approaches Sagittarius A\* (SgrA\*) on a presumably elliptical orbit. VLT K$_s$-band and Keck K’-band data result in clear continuum identifications and proper motions of this $\sim$19$^m$ Dusty S-cluster Object (DSO). In 2002-2007 it is confused with the star S63, but free of confusion again since 2007. Its near-infrared (NIR) colors and a comparison to other sources in the field speak in favor of the DSO being an IR excess star with photospheric continuum emission at 2 microns rather than a core-less gas and dust cloud. We also find very compact L’-band emission ($<$0.1”) contrasted by the reported extended (0.03” up to $\sim$0.2” for the tail) Br$\gamma$ emission. The presence of a star will change the expected accretion phenomena, since a stellar Roche lobe may retain a fraction of the material during and after the peri-bothron passage.'
title: 'The infrared K-band identification of the DSO/G2 source from VLT and Keck data'
---
SgrA\* at the center of our galaxy is associated with a $4 \times 10^{6}$[M$_{\odot}$ ]{}super-massive central black hole. It is a highly variable radio, near-infrared (NIR) and X-ray source. In early 2014 the dusty G2/DSO object (Gillessen et al. 2012ab, Eckart et al. 2013b) will pass by SgrA\* at a distance between 120 and 200 AU (1500 and 2400 Schwarzschild radii; Phifer et al. 2013, Gillessen 2013b) and is expected to lose matter or even be completely disrupted during the periapse section of its orbit - possibly resulting in quite luminous accretion events. We expect that the NIR/X-ray flux density of SgrA\* will increase substantially. The enhanced activity will be strong in the mm- and sub-mm part as well. To probe the accretion process and investigate geometrical aspects (outflow and disk orientation) of the enhanced activity, (sub-)millimeter/radio observations between June 2013 and (beyond) April 2014 in parallel with the NIR polarization observations will be essential. So far our NIR VLT, sub-mm APEX and mm ATCA monitoring runs in June, August and September did not show any exceptional flux density variations. The activity, however, expected in 2014 will also give an outstanding opportunity to improve the derivation of the spin and inclination of the SMBH from NIR/mm observations.
![[**Left:**]{} Identification of the DSO/G2 source in the raw and deconvolved Keck images from August 2010. Top: raw adaptive optics images; Bottom: deconvolved with a PSF extracted from the image and re-convolved with a Gaussian to an angular resolution close to that achieved during these observations. [**Right:**]{} Decomposition of the DSO spectrum including our K$_s$-band detection and H-band limit. A mixture of dust and stellar contribution is possible. The dashed ellipse highlights the 2$\mu$m limit (Gillessen et al. 2013ab) and our detection. []{data-label="fig1"}](eckart-fig1.eps){width="3.4in"}
Gillessen et al. (2012ab) interpret the G2/DSO source as a preferentially core-less gas and dust cloud approaching SgrA\* on an elliptical orbit. Eckart et al. (2013a) present the first K$_s$-band identifications and proper motions of the DSO. As further support, Eckart et al. (2013b) present the results of the analysis of 4 epochs of public Keck K’-band adaptive optics imaging data (2008 to 2011; see Tab.1 in Eckart et al. 2013b). Based on the comparison to VLT NACO L’- and K$_s$-band data (Eckart et al. 2013ab, Gillessen et al. 2013) we can clearly identify the DSO in its K’-band continuum emission as measured by the NIRC2 camera at the Keck telescope. The G2/DSO counterpart can even be seen in the direct (not deconvolved) Keck adaptive optics data (Fig.\[fig1\], left). For all four public Keck epochs (2008, 2009, 2010, 2011) we find very similar structures compared to those derived from the VLT data presented by Eckart et al. (2013a).
The NIR colors of the DSO imply that it could rather be an IR excess star. Very compact L’-band emission is found (pointing at the presence of a star), contrasted by the broad Br$\gamma$ emission (pointing at the presence of a very extended optically thin tail or envelope) reported by Gillessen et al. (2012ab) and modeled by Burkert et al. (2012) and Schartmann et al. (2012). The presence of a star will change the expected accretion phenomena (observable through expected excess mm- NIR and X-ray flux) since a stellar Roche lobe may retain much of the material (Eckart et al. 2013ab) during and after the peri-bothron passage.
![ [**a-c)**]{} The orientation of the G2-tail emission (Gillessen et al. 2013) with respect to other features at similar position angles, such as the interaction zone of the disk system associated with the mini-spiral (shown in [**b)**]{} after Vollmer & Duschl 2000) and a dust filament visible in high-pass filtered L-band images (in [**c)**]{}). The linear feature LF crosses the northern arm and the extended feature EF is associated with the eastern arm (Eckart et al. 2006). [**d)**]{} Mass input into the feeding region around the SgrA\* black hole. Using square-averaged wind velocities, the feeding is averaged over stellar orbits (Shcherbakov & Baganoff, 2010). The approximate distances of the DSO and the two cometary shaped sources X3 and X7 (Muzic, Eckart, Schödel et al. 2007, 2010) are shown. [**e)**]{} Sketch of the radius-dependent accretion onto the central black hole. Only a very small fraction of the matter that can be accreted reaches the center. A dominant fraction of it is blown away in an outward-bound wind component. []{data-label="fig2"}](eckart-fig2.eps){width="3.4in"}
**The Broadband spectrum of the DSO**
=====================================
In Fig. \[fig1\](right) we show a possible spectral decomposition (Eckart et al. 2013a) of the DSO using the M-band measurement by Gillessen et al. (2012a). This raises the question of the nature of the G2/DSO K-band emission: could the K$_s$-band flux density be due to emission from warm dust? Most of the mini-spiral dust filaments are externally heated and have dust temperatures of the order of 200 K (Cotera et al. 1999). Only in the immediate vicinity of embedded stars (i.e. IRS 3 and IRS 7 - at 0.5” angular resolution the hottest dust sources known to date) does the dust temperature reach 200-300 K (see Cotera et al. 1999). For the DSO, a temperature of 650 K (Gillessen et al. 2012a) is needed to explain the K-band emission by dust alone, so the dust would have to be exceptionally hot and/or have an exceptional overabundance of small grains. At a temperature of only 450 K this scenario can be excluded (Fig. \[fig1\] right). Applying the same surface brightness of more than 14$^m$/arcsec$^2$ derived for the DSO to the mini-spiral, the latter would be much brighter and would clearly stand out in its dust emission at 2$\mu$m, in contradiction to the observations. The alternative is internal heating - however, that would require an internal heating source, i.e. most likely a star. In Fig. \[fig1\](right) we show a decomposition in which we assume that 50% of the current K$_s$-band flux is due to a late dwarf (blue line) or an A/F giant or AGB star (magenta line). To this we added dust at a temperature of 450 K (plotted in red). The sum of the spectra is shown by the thick black line (dashed at short wavelengths for the A/F giant/AGB case). The points correspond to the L- and K$_s$-band magnitudes and the H-band upper limit from Eckart et al. (2013a), and to the M-band measurement and K-band upper limit of Gillessen et al. (2012ab). The red and blue dashed curves also show their 550 K and 650 K warm-dust fits.
The solid blue, red, and magenta lines show the emission from three different possible stellar types for the DSO core. Any of these stars embedded in 450 K dust (solid brown line) can produce the black line that fits all the NIR DSO photometric measurements. Black-body luminosities and the detection of photospheric emission imply possible stellar luminosities of up to 30 [L$_{\odot}$ ]{}; i.e. masses of 10-20 [M$_{\odot}$]{} are possible. Details are given by Eckart et al. (2013a).
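The temperature sensitivity of the K-band dust argument can be made quantitative with a pure blackbody comparison. The following sketch is a simplification of ours, not the fitting procedure actually used in the cited works: representative band wavelengths of 2.2$\mu$m (K$_s$) and 4.7$\mu$m (M) are assumed, and only the Planck law is evaluated.

```python
import numpy as np

# Blackbody comparison for the candidate DSO dust temperatures (450 K vs 650 K).
# Band wavelengths (2.2 and 4.7 micron) and the Planck-law-only treatment are
# illustrative assumptions, not the published SED fits.
H_PLANCK = 6.626e-34  # Planck constant [J s]
C_LIGHT = 2.998e8     # speed of light [m/s]
K_B = 1.381e-23       # Boltzmann constant [J/K]

def planck_lambda(lam_m, temp_k):
    """Spectral radiance B_lambda(T) [W m^-3 sr^-1]."""
    x = H_PLANCK * C_LIGHT / (lam_m * K_B * temp_k)
    return (2.0 * H_PLANCK * C_LIGHT**2 / lam_m**5) / np.expm1(x)

# How much brighter is 650 K dust than 450 K dust in each band?
ks_contrast = planck_lambda(2.2e-6, 650.0) / planck_lambda(2.2e-6, 450.0)
m_contrast = planck_lambda(4.7e-6, 650.0) / planck_lambda(4.7e-6, 450.0)
print(f"K_s contrast ~{ks_contrast:.0f}x, M contrast ~{m_contrast:.1f}x")
```

In the Wien regime the 2.2$\mu$m flux is extremely temperature sensitive: 650 K dust outshines 450 K dust by roughly two orders of magnitude at K$_s$, but by less than one order of magnitude at M band, which is why the K$_s$-band detection discriminates so strongly between the two dust temperatures.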
The simulations by Jalali et al. (2013, submitted) show that dusty sources like the DSO or the IRS13N cluster can actually be formed from small molecular cloud complexes if these are on elliptical orbits (e.g. originating from the CND). The gravitational focusing during the periapse passage close to the SgrA\* black hole is capable of triggering star formation on the time scales required for young massive stars. This process actually helps the formation of stars in the vicinity of super-massive black holes.
**The G2 tail**
===============
In Fig.2-3 by Eckart et al. (2013b) we compare the G2 tail emission to 8.6$\mu$m observations taken in 2004 with VISIR at the ESO VLT and in 1994 with the Palomar telescope (Stolovy et al. 1996). These MIR results, the comparably low proper motion, as well as the possible linkage to the two mini-spiral disk systems, suggest that the ’tail’ component (red in Fig \[fig2\]c) might not be associated with the G2 source but rather be a back/fore-ground dust feature associated with the mini-spiral within the central stellar cluster. The long dust-lane feature that crosses the mini-spiral to the south-east of the DSO (Fig.7 in Gillessen et al. 2013) may be a consequence of the interaction between the two rotating gas-disk components that are associated with the northern and eastern arms. Details are shown in our Fig \[fig2\]abc (see also Fig.21 in Zhao et al. 2009, and Fig.10 in Vollmer & Duschl 2000). If upcoming observations confirm that the ’tail’ component is not associated with the DSO, it will not need to be taken into account in future simulations.
The DSO is only marginally extended in its Br$\gamma$ line emission, with a size of about 30 mas (Gillessen et al. 2013b), and may be compared to other dusty sources in the field (see Eckart et al. 2013a). In fact, the DSO may be a compact source comparable to the cometary shaped sources X3 and X7 (Muzic et al. 2010, see also Sabha et al. 2014 in prep.). Its smaller size compared to X3 and X7 can be explained by the higher particle density within the accretion stream close to SgrA\* (e.g. Shcherbakov & Baganoff 2010; see our Fig \[fig2\]de). Its size also depends on how earlier passages close to SgrA\* may have influenced the distribution of gas and dust around a possible star at the center of the DSO.
Conclusions
===========
The observations planned for 2014 will be essential to investigate how the close flyby of the DSO/G2 object will alter the accretion characteristics of SgrA\*. The structural evolution of the DSO will also show whether the DSO harbors a star or is a pure gas and dust cloud. Further NIR imaging and spectroscopy data will also show whether the extended ’tail’ component is associated with the head of the DSO/G2 source. Theoretical investigations are required to study the formation process for DSO-like sources and how they are linked to the conditions of star formation in the central stellar cluster. These studies will also have to address the question of whether the DSO is comparable to other dusty sources in the cluster - like the infrared-excess IRS13N sources (Muzic et al. 2008, Eckart et al. 2012b) or the bow-shock sources X3 and X7 (Muzic et al. 2007, 2010). Independent of the answers to these questions, the DSO flyby in 2014 will undoubtedly be a spectacular event. It will reveal valuable information on the physics in the immediate vicinity of super-massive black holes.
---
abstract: 'This paper presents a Lie-Trotter splitting for inertial Langevin equations (Geometric Langevin Algorithm) and analyzes its long-time statistical properties. The splitting is defined as a composition of a variational integrator with an Ornstein-Uhlenbeck flow. Assuming the exact solution and the splitting are geometrically ergodic, the paper proves the discrete invariant measure of the splitting approximates the invariant measure of inertial Langevin to within the accuracy of the variational integrator in representing the Hamiltonian. In particular, if the variational integrator admits no energy error, then the method samples the invariant measure of inertial Langevin without error. Numerical validation is provided using explicit variational integrators with first, second, and fourth order accuracy.'
author:
- 'Nawaf Bou-Rabee[^1]'
- 'Houman Owhadi[^2]'
bibliography:
- 'nawaf.bib'
title: 'Long-Run Accuracy of Variational Integrators in the Stochastic Context'
---
[**Keywords**]{}
Lie-Trotter splitting, variational integrators, Ornstein-Uhlenbeck, inertial Langevin, Boltzmann-Gibbs measure, geometric ergodicity
[**AMS Subject Classification**]{}
65C30 (65C05, 60J05, 65P10)
Introduction
============
#### Overview
This paper analyzes equilibrium statistical accuracy of discretizations of inertial Langevin equations based on variational integrators. Variational integrators are time-integrators adapted to the structure of mechanical systems[@MaWe2001]. The theory of variational integrators includes discrete analogs of the Lagrangian, Noether’s theorem, the Euler-Lagrange equations, and the Legendre transform. Variational integrators can incorporate holonomic constraints (via, e.g., Lagrange multipliers) [@WeMa1997] and multiple time steps to obtain so-called asynchronous variational integrators [@LeMaOrWe2003].
The generalization of variational integrators that this paper analyzes is derived from a Lie-Trotter splitting of inertial Langevin equations into Hamiltonian and Ornstein-Uhlenbeck equations. The integrator is then defined by selecting a variational integrator to approximate the Hamiltonian flow and using the exact Ornstein-Uhlenbeck flow. Such a generalization of variational integrators to inertial Langevin equations will be called a Geometric Langevin Algorithm (GLA).
This type of splitting of inertial Langevin equations is natural, but seems to have been only recently introduced in the literature (for molecular dynamics see [@VaCi2006; @BuPa2007], for dissipative particle dynamics see [@Sh2003; @SeFaEsCo2006], and for inertial particles see [@PaStZy2008]). This paper is geared towards applications in molecular dynamics where inertial Langevin integrators (including the ones cited above) have been based on generalizations of the widely used Störmer-Verlet integrator. The Störmer-Verlet integrator is attractive for molecular dynamics because it is an explicit, symmetric, second-order accurate, variational integrator for Hamilton’s equations. In molecular dynamics it was popularized by Loup Verlet in 1967. Other popular generalizations of the Störmer-Verlet integrator to inertial Langevin equations include the Brünger-Brooks-Karplus (BBK) [@BrBrKa1984], van Gunsteren and Berendsen (vGB) [@GuBe1982], and Langevin-Impulse (LI) methods [@SkIz2002]. The LI method is also based on a splitting of inertial Langevin equations, but it is different from the splitting considered here. To our knowledge there are few results in the literature which quantify the long-time statistical accuracy of the Lie-Trotter splitting considered here.
GLA is not only quasi-symplectic as defined in RL1 and RL2 of [@MiTr2003], but also conformally symplectic, i.e., preserves the precise symplectic area change associated to the flow of inertial Langevin processes [@McPe2001]. One way to prove this property is by deriving the scheme from a variational principle and analyzing its boundary terms as done in the context of stochastic Hamiltonian systems without dissipation in [@BoOw2009A].
#### Organization of the Paper
In §\[MainResults\] the main results of the paper are presented. §\[Preliminaries\] states all of the hypotheses used in the paper. These hypotheses are invoked in §\[AnalysisOfGLA\] where it is proved that GLA is pathwise convergent on finite time intervals (Theorem \[GLAaccuracy\]), GLA is geometrically ergodic with respect to a nearby invariant measure on infinite time intervals (Theorem \[GLAgeometricergodicity\]), and the equilibrium statistical accuracy of GLA is governed by the order of accuracy of the variational integrator in representing the Hamiltonian (Theorem \[GLABGaccuracy\]). In §\[Validation\], numerical validation is provided. In the Appendix we review some basic facts on variational integrators for the reader’s convenience.
#### Limitations
In a nutshell the main result of the paper states that if GLA is geometrically ergodic with respect to a unique invariant measure, the error in sampling the invariant measure of the SDE is determined by the energy error in GLA’s variational integrator. Now if the inertial Langevin equations have nonglobally Lipschitz drift and the GLA is based on an explicit variational integrator, GLA may fail to be geometrically ergodic. In particular, for any step-size there will be regions in phase space where the Lipschitz constant of the drift is beyond the linear stability threshold of GLA’s underlying variational integrator. Hence, an explicit GLA will be stochastically unstable. Since our results rely on a strong form of stochastic stability of GLA (namely, geometric ergodicity), they may not hold in this case.
To stochastically stabilize GLA, one can use GLA as a proposal move in a Metropolis-Hastings method. For a numerical analysis of the Metropolis-adjusted scheme, the reader is referred to [@BoVa2009A]. A difficulty in Metropolizing inertial Langevin is that its solution is not reversible. However, the solution composed with a momentum flip is reversible. The role of momentum flips in Metropolizing Langevin integrators is qualitatively and computationally analyzed in [@ScLeStCaCa2006; @BuPa2007; @AkBoRe2009]. For a quantitative treatment of the role of momentum flips in pathwise accuracy the reader is referred to [@BoVa2009A].
#### Extension to manifolds
For the sake of clarity, the setting of this paper is inertial Langevin equations on a flat space, but we stress GLA and its properties generalize to manifolds. We refer to Remark \[RemarkGen\] and to [@BoOw2009B] for details.
#### Acknowledgements
We wish to thank Christof Schütte and Eric Vanden-Eijnden for valuable advice. Denis Talay and Nicolas Champagnat helped sharpen the main result of the paper and put the paper in a better context.
This work was supported in part by DARPA DSO under AFOSR contract FA9550-07-C-0024. N. B-R. would like to acknowledge the support of the Berlin Mathematical School (BMS) and the United States National Science Foundation through NSF Fellowship \# DMS-0803095.
Main Results of Paper {#MainResults}
=====================
#### Inertial Langevin
The setting of the paper is a dissipative stochastic Hamiltonian system (as in [@So1994; @Ta2002]) on $\mathbb{R}^n$, with phase space $\mathbb{R}^{2n}$ and smooth Hamiltonian $H \in C^{\infty}(\mathbb{R}^{2n}, \mathbb{R})$. In terms of $H$, consider the following inertial Langevin equations $$\begin{aligned}
\label{InertialLangevin}
\begin{cases}
d \mathbf{Y} &= \mathbb{J} \nabla H( \mathbf{Y} ) dt
- \gamma \boldsymbol{C} \nabla H( \mathbf{Y} ) dt
+ \sqrt{2 \gamma \beta^{-1}} \boldsymbol{C} d \mathbf{W} \\
\boldsymbol{Y}(0) &= \boldsymbol{x} \in \mathbb{R}^{2n}
\end{cases}\end{aligned}$$ where the following matrices have been introduced: $$\mathbb{J}=
\begin{bmatrix}
0 & \mathbf{I} \\
-\mathbf{I} & 0
\end{bmatrix},~~~
\boldsymbol{C} =
\begin{bmatrix}
0 & 0 \\
0 & \mathbf{I}
\end{bmatrix} \text{.}$$ Here $\boldsymbol{W}$ is a standard $2n$-dimensional Wiener process, or Brownian motion, $\beta>0$ is a parameter referred to as the inverse temperature, and $\gamma>0$ is referred to as the friction factor. We will often write the continuous solution in component form as $\mathbf{Y}(t)=(\boldsymbol{Q}(t),\boldsymbol{P}(t))$ where $\boldsymbol{Q}(t)$ and $\boldsymbol{P}(t)$ represent the instantaneous configuration and momentum of the system, respectively. We shall assume the Hamiltonian is separable and quadratic in momentum: $$H(\boldsymbol{q}, \boldsymbol{p}) =
\frac{1}{2} \boldsymbol{p}^T \boldsymbol{M}^{-1} \boldsymbol{p} + U(\boldsymbol{q}) \text{,}$$ where $\boldsymbol{M}$ is a symmetric positive definite mass matrix and $U$ is a potential energy function. Despite the degenerate diffusion in \[InertialLangevin\], under certain regularity conditions on $U$, the solution to this SDE is geometrically ergodic with respect to an invariant probability measure $\mu$ with the following density [@Ta2002]: $$\label{BGdistribution}
\pi(\boldsymbol{q}, \boldsymbol{p}) =
Z^{-1} \exp\left( - \beta H(\boldsymbol{q}, \boldsymbol{p}) \right) \text{,}$$ where $Z=
\int_{\mathbb{R}^{2n}}
\exp\left(- \beta H(\boldsymbol{q}, \boldsymbol{p}) \right)
d\boldsymbol{q} d\boldsymbol{p}$. The invariant measure $\mu$ is known as the [*Boltzmann-Gibbs measure*]{}.
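For the separable Hamiltonian above, the Boltzmann-Gibbs measure factorizes: the momentum marginal is Gaussian, $\mathcal{N}(\boldsymbol{0}, \beta^{-1}\boldsymbol{M})$, independently of $\boldsymbol{q}$. As a quick illustration (the harmonic potential $U(\boldsymbol{q})=|\boldsymbol{q}|^2/2$ and $\boldsymbol{M}=\boldsymbol{I}$ are our choices for this sketch, not assumptions of the paper), one can sample $\mu$ directly and check equipartition, $\E\{H\} = n\beta^{-1}$ in $n$ dimensions:

```python
import numpy as np

# Direct sampling of the Boltzmann-Gibbs measure pi ∝ exp(-beta H) for
# H(q, p) = |p|^2/2 + |q|^2/2 (harmonic U and M = I are illustrative choices):
# both marginals are then N(0, beta^{-1} I), and equipartition gives
# E[H] = n / beta for n configuration degrees of freedom.
rng = np.random.default_rng(1)
n, beta, n_samples = 3, 2.0, 100_000

q = rng.standard_normal((n_samples, n)) / np.sqrt(beta)
p = rng.standard_normal((n_samples, n)) / np.sqrt(beta)
H = 0.5 * (p ** 2).sum(axis=1) + 0.5 * (q ** 2).sum(axis=1)
print(H.mean())  # ≈ n / beta = 1.5
```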
#### Geometric Langevin Algorithm
Let $N$ and $h$ be given, set $T= N h$ and $t_k = h k $ for $k=0,...,N$. Observe that the conservative part of \[InertialLangevin\] defines Hamilton’s equations for the Hamiltonian $H$: $$d \boldsymbol{Y} = \mathbb{J} \nabla H( \boldsymbol{Y} ) dt$$ or, $$\label{HamiltonsEquations}
\begin{cases}
d \boldsymbol{Q} &= \boldsymbol{M}^{-1} \boldsymbol{P} dt \\
d \boldsymbol{P} &= - \nabla U(\boldsymbol{Q}) dt
\end{cases}$$ Let $h$ be a fixed step-size. We apply a $p$th-order accurate variational integrator, $\theta_h: \mathbb{R}^{2n} \to \mathbb{R}^{2n}$, to approximate the Hamiltonian flow of \[HamiltonsEquations\] ($p \ge 1$). The nonconservative part of the inertial Langevin equation defines an Ornstein-Uhlenbeck process in momentum governed by the following linear SDE: $$d \mathbf{Y} =
- \gamma \boldsymbol{C} \nabla H( \mathbf{Y} ) dt
+ \sqrt{2 \gamma \beta^{-1}} \boldsymbol{C} d \mathbf{W}$$ or, $$\label{OrnsteinUhlenbeck}
\begin{cases}
d \boldsymbol{Q} &= 0 \\
d \boldsymbol{P} &= - \gamma \boldsymbol{M}^{-1} \boldsymbol{P} dt
+ \sqrt{2 \beta^{-1} \gamma} d \boldsymbol{W}
\end{cases}$$ Reference [@PaStZy2008] aptly refers to \[OrnsteinUhlenbeck\] as a Gaussian SDE since its stationary distribution on $\mathbb{R}^{2n}$ is Gaussian in momentum.
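As a concrete choice for the variational integrator $\theta_h$, one may take the Störmer-Verlet scheme discussed in the introduction. The sketch below (with user-supplied $\nabla U$ and $\boldsymbol{M}^{-1}$; the harmonic-oscillator demo is our illustrative choice) implements the standard kick-drift-kick form, a second-order ($p=2$) variational integrator:

```python
import numpy as np

# Störmer-Verlet step theta_h for H(q, p) = p^T M^{-1} p / 2 + U(q),
# in kick-drift-kick form; grad_U and Minv are supplied by the user.
def verlet_step(q, p, h, grad_U, Minv):
    p = p - 0.5 * h * grad_U(q)      # half kick
    q = q + h * Minv @ p             # drift with the half-updated momentum
    p = p - 0.5 * h * grad_U(q)      # half kick
    return q, p

# Demo: on the harmonic oscillator U(q) = |q|^2/2 (our choice) the energy
# error of the symplectic map stays bounded instead of drifting.
q, p, Minv = np.array([1.0]), np.array([0.0]), np.eye(1)
grad_U = lambda x: x
energy = lambda q, p: 0.5 * p @ Minv @ p + 0.5 * q @ q
e0 = energy(q, p)
for _ in range(1000):
    q, p = verlet_step(q, p, 0.01, grad_U, Minv)
drift = abs(energy(q, p) - e0)
```

The bounded energy error reflects the fact that $\theta_h$ exactly conserves a modified (shadow) Hamiltonian for this quadratic problem.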
The following stochastic evolution map $\psi_{t_k+h, t_k} : \mathbb{R}^{2n} \to \mathbb{R}^{2n}$ defines the stochastic flow of \[OrnsteinUhlenbeck\]: $$\begin{aligned}
\label{exactpsi}
& \psi_{t_k+h,t_k}: \nonumber \\
& \qquad (\boldsymbol{q},\boldsymbol{p}) \mapsto \left(\boldsymbol{q}, e^{-\gamma \boldsymbol{M}^{-1} h} \boldsymbol{p}
+ \sqrt{2 \beta^{-1} \gamma} \int_{t_k}^{t_k+h} e^{-\gamma \boldsymbol{M}^{-1} (t_k+h-s)} d \boldsymbol{W}(s) \right) \text{,} \end{aligned}$$ with $\psi_{s,s}(\boldsymbol{x}) = \boldsymbol{x}$; for $0 \le r \le s \le t$, recall the Chapman-Kolmogorov identity $\psi_{t,s} \circ \psi_{s,r}(\boldsymbol{x}) = \psi_{t,r}(\boldsymbol{x})$ for all $\boldsymbol{x} \in \mathbb{R}^{2n}$. When only the distribution of the solution matters, the stochastic flow will be denoted simply by $\psi_{h}$. To make this map explicit, let $\boldsymbol{\xi} \sim \mathcal{N}(\boldsymbol{0},\boldsymbol{I})$ and set $$\begin{aligned}
\boldsymbol{\Sigma}_h :=&
2 \beta^{-1} \gamma \E\left\{ \left( \int_0^h e^{-\gamma \boldsymbol{M}^{-1} (h-s)} d \boldsymbol{W}(s) \right)
\left( \int_0^h e^{-\gamma \boldsymbol{M}^{-1} (h-s)} d \boldsymbol{W}(s) \right)^T \right\} \\
=& \beta^{-1} \left( \boldsymbol{I} - \exp(- 2 \gamma \boldsymbol{M}^{-1} h) \right) \boldsymbol{M} \end{aligned}$$ and define $A_h$ to be the decomposition matrix arising from the Cholesky factorization of $\boldsymbol{\Sigma}_{h}$, i.e., $\boldsymbol{A}_{h} \boldsymbol{A}_{h}^T = \boldsymbol{\Sigma}_{h}$. In terms of these, introduce the following flow map: $$\begin{aligned}
\label{psi}
\psi_h: (\boldsymbol{q},\boldsymbol{p}) \mapsto
\left(\boldsymbol{q}, e^{-\gamma \boldsymbol{M}^{-1} h} \boldsymbol{p} + \boldsymbol{A}_h \boldsymbol{\xi} \right) \text{.} \end{aligned}$$ In distribution, \[psi\] is identical to \[exactpsi\].
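The flow map can be implemented directly from $\boldsymbol{\Sigma}_h$ and its Cholesky factor. A minimal numpy-only sketch follows; computing the symmetric matrix exponential by eigendecomposition and symmetrizing $\boldsymbol{\Sigma}_h$ against round-off are our implementation choices:

```python
import numpy as np

def spd_expm(S):
    """exp(S) for a symmetric matrix S, via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def ou_step(q, p, h, gamma, beta, M, rng):
    """Exact OU update psi_h: q is unchanged, p is partially refreshed."""
    Minv = np.linalg.inv(M)
    decay = spd_expm(-gamma * h * Minv)               # exp(-gamma M^{-1} h)
    Sigma = (np.eye(len(p)) - spd_expm(-2.0 * gamma * h * Minv)) @ M / beta
    Sigma = 0.5 * (Sigma + Sigma.T)                   # kill round-off asymmetry
    A = np.linalg.cholesky(Sigma)                     # A_h A_h^T = Sigma_h
    xi = rng.standard_normal(len(p))
    return q, decay @ p + A @ xi

# Demo: in 1-D with gamma = beta = 1, h = 0.5, the one-step mean and variance
# should match the closed forms exp(-h) p and (1 - exp(-2h)) / beta.
rng = np.random.default_rng(0)
M = np.eye(1)
samples = np.array([ou_step(np.zeros(1), np.ones(1), 0.5, 1.0, 1.0, M, rng)[1][0]
                    for _ in range(20000)])
```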
Given $\boldsymbol{X}_k \in \mathbb{R}^{2n}$ and $h$, the [*Geometric Langevin Algorithm*]{} (GLA) is defined as the following Lie-Trotter splitting integrator for \[InertialLangevin\]: $$\label{GLA}
\boldsymbol{X}_{k+1} := \theta_h \circ \psi_{t_k+h,t_k} (\boldsymbol{X}_k)$$ for $k=0,...,N-1$ with $\boldsymbol{X}_0 = \boldsymbol{x}$.
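Composing the two pieces gives a one-step sketch of GLA. Here we specialize to one dimension with $\boldsymbol{M} = \boldsymbol{I}$ and take Störmer-Verlet as $\theta_h$; both are simplifying choices of ours, not requirements of the method:

```python
import numpy as np

def gla_step(q, p, h, gamma, beta, grad_U, rng):
    """One GLA step X_{k+1} = theta_h ∘ psi_h (X_k), 1-D with M = I."""
    # psi_h: exact Ornstein-Uhlenbeck refresh of the momentum
    c = np.exp(-gamma * h)
    p = c * p + np.sqrt((1.0 - c * c) / beta) * rng.standard_normal()
    # theta_h: Störmer-Verlet (second-order variational integrator)
    p -= 0.5 * h * grad_U(q)
    q += h * p
    p -= 0.5 * h * grad_U(q)
    return q, p

# Demo: for the harmonic potential U(q) = q^2/2 with beta = 1 (our choice),
# the long-run average of q^2 approaches the Boltzmann-Gibbs value 1/beta,
# up to a small step-size-dependent bias.
rng = np.random.default_rng(2)
q, p, h, gamma, beta = 0.0, 0.0, 0.2, 1.0, 1.0
acc, n_steps = 0.0, 200_000
for _ in range(n_steps):
    q, p = gla_step(q, p, h, gamma, beta, lambda x: x, rng)
    acc += q * q
var_q = acc / n_steps
```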
\[RemarkGen\] Observe that GLA generalizes to inertial Langevin equations on a manifold. This generalization is possible because its symplectic component can be defined as a variational integrator for Hamilton’s equations on a manifold and its Ornstein-Uhlenbeck component can be defined as the solution of an SDE on a vector space. This generalization is motivated by molecular systems with holonomic constraints. As mentioned in the introduction, variational integrators can incorporate holonomic constraints. In the special case that the configuration manifold of GLA is compact (e.g., $SO(3)$) and the potential energy is smooth, then the assumption on the geometric ergodicity of GLA is typically satisfied for sufficiently small time-step.
Given $\boldsymbol{Z}_k \in \mathbb{R}^{2n}$ and $h$, let $\vartheta_h: \mathbb{R}^{2n} \to \mathbb{R}^{2n}$ denote the exact time-$h$ flow of Hamilton’s equations \[HamiltonsEquations\]. The [*Exact Splitting*]{} is defined as $$\label{ExactSplitting}
\boldsymbol{Z}_{k+1} := \vartheta_h \circ \psi_{t_k+h, t_k} ( \boldsymbol{Z}_k )$$ for $k=0,...,N-1$ with $\boldsymbol{Z}_0 = \boldsymbol{x}$.
#### Properties of GLA
The assumptions that appear in the following theorems are provided in §\[Preliminaries\].
Let $\E^{\boldsymbol{x}}\{ \cdot \}$ denote the expectation conditioned on the initial condition being $\boldsymbol{x} \in \mathbb{R}^{2n}$. In terms of this notation, we can quantify the strong convergence of GLA to solution trajectories of inertial Langevin \[InertialLangevin\]. The precise statement follows.
\[Pathwise Accuracy\] \[GLAaccuracy\] Assume \[sa1\] and \[sa2\]. For any $T>0$, there exist $h_c>0$ and $C(T)>0$, such that for all $h<h_c$, $\boldsymbol{x} \in \mathbb{R}^{2n}$, and $t\in[0,T]$, GLA satisfies $$( \E^{\boldsymbol{x}} \{ | \boldsymbol{X}_{\lfloor t/h \rfloor} - \boldsymbol{Y}( \lfloor t/h \rfloor h) |^2 \} )^{1/2} \le
C(T) (1+|\boldsymbol{x}|^2)^{1/2} h \text{.}$$
This result is expected because a Lie-Trotter splitting is first-order for deterministic ODEs, and the noise in \[InertialLangevin\] is additive.
Using this pathwise convergence, it is shown that GLA is geometrically ergodic with respect to a discrete invariant measure $\mu_h$.
\[Geometric Ergodicity\] \[GLAgeometricergodicity\] Assume \[sa1\], \[sa2\], and \[sa3\]. Then GLA is geometrically ergodic with respect to a discrete invariant measure $\mu_h$ and the continuous Lyapunov function (cf. Assumption \[sa3\]). That is, there exist $h_c>0$, $\lambda>0$ , and $C_3 > 0$, such that for all $h<h_c$ and for all $k \ge 2$, $$| \E^{\boldsymbol{x}} \left\{ f( \boldsymbol{X}_k ) \right\} - \mu_h(f) |
\le C_3 V(\boldsymbol{x}) e^{-\lambda k h}, ~~\forall~\boldsymbol{x} \in \mathbb{R}^{2n},$$ and for all test functions $f$ satisfying $| f(\boldsymbol{y}) | \le C_3 V(\boldsymbol{y})$ for all $\boldsymbol{y} \in \mathbb{R}^{2n}$.
We stress this result is a consequence of strong convergence of GLA and the assumptions made on the potential energy and variational integrator. These assumptions are sufficient, but not necessary to guarantee this result.
Using geometric ergodicity we can quantify the equilibrium statistical accuracy of GLA. If $p$ denotes the global order of accuracy of GLA’s underlying variational integrator, then $\mu_h$ is within $O(h^p)$ of the Boltzmann-Gibbs measure $\mu$ in total variation (TV) distance. To be precise, the main result of the paper states
\[Long-Run Accuracy\] \[GLABGaccuracy\] Assume \[sa1\], \[sa2\], and \[sa3\]. Let $\mu_h$ denote the discrete invariant measure of GLA. Then, there exist $C>0$ and $h_c>0$, such that for all $h<h_c$, $$| \mu - \mu_h |_{TV} \le C h^{p} \text{.}$$
There is a stronger argument in [@Ta2002] based on the Feynman-Kac formula that can extend Theorem \[GLABGaccuracy\] to $$| \mu(f) - \mu_h(f) | \le C h^p \text{,}$$ for all test functions $f \in L^2_{\mu}(\mathbb{R}^{2n})$ that are smooth with polynomial growth at infinity. The paper proves Theorem \[GLABGaccuracy\] with a more direct strategy. An important point is that the proof is transparent since it involves a forward error analysis and does not rely on knowing the precise form of $\mu_h$. The proof relies on the existence of $\mu_h$ and the nature of the convergence of GLA from a nonequilibrium position. Indeed, a backward error analysis of this discretization of the SDE to characterize this invariant measure would be substantially more involved.
#### Implications
As a consequence of the TV error estimate derived in this paper, one can control the order of accuracy of $\mu_h$ by controlling the order of accuracy of GLA’s underlying variational integrator. This is the distinguishing feature of GLA. Existing theory would indicate that the accuracy of $\mu_h$ is of the same order as the weak or strong accuracy of GLA. Theorem \[GLAaccuracy\] states that GLA is just first-order accurate on solution trajectories. Hence, existing theory would suggest that the equilibrium statistical accuracy of GLA is first-order, rather than $p$th-order accurate (where $p$ is the order of accuracy of GLA’s underlying variational integrator).
Existing theory would indicate that, to obtain a higher-order approximation of the invariant measure, one would require a higher-order approximation of the SDE, which entails approximating multiple $n$-dimensional stochastic integrals per time-step. It is well-known that such higher-order discretizations of SDEs are computationally intensive. In contrast, a step of GLA requires the evaluation of a single $n$-dimensional stochastic integral. According to the main result of this paper, the order of accuracy of the variational integrator can be used to tune the TV distance in Theorem \[GLABGaccuracy\] to a desired tolerance.
Preliminaries {#Preliminaries}
=============
The following assumptions on the potential energy, $U: \mathbb{R}^n \to \mathbb{R}$, will be used in this paper. These hypotheses are the same as those made in §7 of [@MaStHi2002].
\[sa1\] The potential energy function $U \in C^{\infty}(\mathbb{R}^n, \mathbb{R})$ satisfies:
U1)
: there exists a real constant $A_0 > 0$ such that $$| \nabla U(\boldsymbol{q}_0) - \nabla U(\boldsymbol{q}_1)| \le
A_0 | \boldsymbol{q}_0 - \boldsymbol{q}_1 | \text{,}~~\forall~\boldsymbol{q}_0 , \boldsymbol{q}_1 \in \mathbb{R}^n \text{.}$$
U2)
: there exists a real constant $A_1 > 0$ such that $$U( \boldsymbol{q} ) \ge A_1 (1 + | \boldsymbol{q} | ^2 ) \text{,} ~~\forall~ \boldsymbol{q} \in \mathbb{R}^n \text{.}$$
By standard results in stochastic analysis, condition [**U1**]{} is sufficient to guarantee almost sure existence and pathwise uniqueness of a solution to \[InertialLangevin\]. The condition [**U2**]{} ensures that $e^{-\beta H}$ is integrable over $\mathbb{R}^{2n}$, and hence, that the Boltzmann-Gibbs measure is a well-defined probability measure. Assuming the solution to \[InertialLangevin\] is geometrically ergodic, we will prove in this paper that conditions [**U1**]{} - [**U2**]{} together with the following assumptions on the variational integrator, $\theta_h: \mathbb{R}^{2n} \to \mathbb{R}^{2n}$, are sufficient (but not necessary) to guarantee geometric ergodicity of GLA.
\[sa2\] For any $t>0$ let $\vartheta_t$ denote the exact Hamiltonian flow of \[HamiltonsEquations\]. The variational integrator $\theta_h : \mathbb{R}^{2n} \to \mathbb{R}^{2n}$ satisfies the following.
V1)
: $\theta_h$ is the discrete Hamiltonian map of a hyperregular discrete Lagrangian $L_d: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ (cf. Appendix I and [@MaWe2001]).
V2)
: there exist constants $B_0>0$ and $h_c>0$, such that for any $h<h_c$, $$| \theta_{h}(\boldsymbol{x}) - \vartheta_{h}(\boldsymbol{x}) |
\le B_0 (1 + | \boldsymbol{x} |^2 )^{1/2} h^{p+1},~~\forall~\boldsymbol{x} \in \mathbb{R}^{2n} \text{.}$$
As discussed in Appendix I, the condition [**V1**]{} implies that $\theta_h$ is symplectic, and hence, Lebesgue measure preserving. It will also be an important ingredient in proving Theorem \[GLAgeometricergodicity\] on the geometric ergodicity of GLA. The condition [**V2**]{} states that the integrator is locally $(p+1)$th-order accurate.
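Condition [**V2**]{} is easy to check numerically. For the harmonic oscillator $H(q,p) = (q^2 + p^2)/2$ (our illustrative choice) the exact flow $\vartheta_h$ is a rotation, so the one-step error of Störmer-Verlet ($p = 2$) should shrink by a factor of about $2^3$ when $h$ is halved:

```python
import numpy as np

# One-step local-error scaling of Störmer-Verlet on the harmonic oscillator,
# whose exact flow is a rotation. V2 predicts local error ~ h^{p+1} = h^3.
def verlet(q, p, h):
    p = p - 0.5 * h * q   # half kick (grad U = q)
    q = q + h * p         # drift
    p = p - 0.5 * h * q   # half kick
    return q, p

def exact_flow(q, p, h):
    c, s = np.cos(h), np.sin(h)
    return c * q + s * p, -s * q + c * p

def one_step_error(h):
    qv, pv = verlet(1.0, 0.0, h)
    qe, pe = exact_flow(1.0, 0.0, h)
    return np.hypot(qv - qe, pv - pe)

ratio = one_step_error(0.1) / one_step_error(0.05)
print(ratio)  # ≈ 8, i.e. local error ~ h^3
```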
Finally, we make the following structural assumption on \[InertialLangevin\].
\[sa3\] There exist $V \in C^{\infty}( \mathbb{R}^{2n} , \mathbb{R})$ and constants $C_i >0$ such that $$C_0 ( 1+ | \boldsymbol{x} |^2) \le V(\boldsymbol{x}) \le C_1 (1 + | \boldsymbol{x} |^2),~~
| \nabla V(\boldsymbol{x}) | \le C_2 (1+ | \boldsymbol{x} | ), ~~
\forall~\boldsymbol{x} \in \mathbb{R}^{2n},$$ with $\lim_{|\boldsymbol{x}| \to \infty} V(\boldsymbol{x}) = \infty$, and constants $a>0$ and $c>0$, such that for all $t>0$, $$\E^{\boldsymbol{x}} \{ V( \boldsymbol{Y}(t) ) \} \le
e^{-a t } V( \boldsymbol{x} ) + \frac{c}{a} (1 - e^{-a t} ),~~\forall~\boldsymbol{x} \in \mathbb{R}^{2n} \text{.}$$
Analysis of GLA {#AnalysisOfGLA}
===============
Pathwise Convergence
--------------------
Here GLA is shown to be first-order mean-squared convergent, which is a notion of pathwise convergence to solutions of \[InertialLangevin\] [@Ta1995; @MiTr2004]. The first-order accuracy of GLA on solution trajectories is not surprising because the method is derived from a Lie-Trotter splitting of \[InertialLangevin\]. It is simply a generalization of the well-known fact that Lie-Trotter splittings of deterministic ODEs yield first-order accurate methods. This generalization is possible despite the lack of regularity in solutions because the noise in \[InertialLangevin\] is additive. Since the proof is standard, it will be kept terse.
Assume \[sa1\] and \[sa2\]. For any $T>0$, there exist $h_c>0$ and $C(T)>0$, such that for all $h<h_c$, $\boldsymbol{x} \in \mathbb{R}^{2n}$, and $t\in[0,T]$, $$( \E^{\boldsymbol{x}} \{ | \boldsymbol{X}_{\lfloor t/h \rfloor} - \boldsymbol{Y}( \lfloor t/h \rfloor h ) |^2 \} )^{1/2}
\le C(T) (1+|\boldsymbol{x}|^2)^{1/2} h \text{.}$$
By standard results in stochastic analysis, condition [**U1**]{} guarantees that there almost surely exists a pathwise unique solution to \[InertialLangevin\]: $
\mathbf{Y}(t) \in \mathbb{R}^{2n}$ for $t \in [0, T]$ with $\mathbf{Y}(0) = \boldsymbol{x}$. Moreover, one can obtain the following bound on the second moment of the solution: for all $T>0$, there exists a $C(T)>0$ such that for all $t \in [0, T]$, $$\label{Ymomentbound}
\E^{\boldsymbol{x}} \left\{ | \boldsymbol{Y}(t) |^2 \right\} \le C(T) (1 + | \boldsymbol{x} |^2) \text{.}$$
We will use this bound to invoke Theorem 1.1 in [@MiTr2004], which enables one to deduce global mean-squared error estimates of a discretization from its local mean-squared error and local mean deviation. First, we establish this estimate for the exact splitting \[ExactSplitting\]. By using Assumption [**U1**]{}, it is straightforward to show (see Lemma \[singlesteperror\]) that there exists $C>0$ such that: $$| \E^{\boldsymbol{x}} \{ \boldsymbol{Y}(h) - \boldsymbol{Z}_{1} \} |
\le C \left( 1+ | \boldsymbol{x} |^2 \right)^{1/2} h^{2}$$ and $$\label{YZmserror}
\left( \E^{\boldsymbol{x}} \{ | \boldsymbol{Y}(h) - \boldsymbol{Z}_{1} |^2 \} \right)^{1/2}
\le C \left( 1+ | \boldsymbol{x} |^2 \right)^{1/2} h^{3/2} \text{.}$$ Together with \[Ymomentbound\], this implies there exist $h_c>0$ and $C(T)>0$, such that for all $h<h_c$, $t \in [0, T]$ and $\boldsymbol{x} \in \mathbb{R}^{2n}$: $$\label{Zmomentbound}
\E^{\boldsymbol{x}} \{ | \boldsymbol{Z}_{\lfloor t/h \rfloor} |^2 \}
\le C(T) ( 1 + | \boldsymbol{x} |^2 )$$ Hence, by Theorem 1.1 in [@MiTr2004], one can show that for all $T>0$, there exist $h_c>0$ and $C(T)>0$, such that for all $h<h_c$, $t \in [0, T]$ and $\boldsymbol{x} \in \mathbb{R}^{2n}$: $$\label{YZglobalmserror}
( \E^{\boldsymbol{x}} \{ | \boldsymbol{Z}_{\lfloor t/h \rfloor} - \boldsymbol{Y}(\lfloor t/h \rfloor h) |^2 \} )^{1/2}
\le C(T) \left( 1+ | \boldsymbol{x} |^2 \right)^{1/2} h
\text{.}$$
Observe that the difference between a single step of GLA and the exact splitting can be written as $$\boldsymbol{X}_{1} - \boldsymbol{Z}_{1} =
(\theta_h - \vartheta_h ) \circ \psi_{h,0}(\boldsymbol{x}) \text{.}$$ Using Assumption [**V2**]{} one can show there exists $C>0$ such that $$\label{XZmserror}
\left( \E^{\boldsymbol{x}} \{ | \boldsymbol{X}_{1} - \boldsymbol{Z}_{1} |^2 \} \right)^{1/2}
\le C \left( 1 + | \boldsymbol{x} |^2 \right)^{1/2} h^{p+1} \text{,}$$ and, by Jensen’s inequality: $$\label{XZmerror}
| \E^{\boldsymbol{x}} \{ \boldsymbol{X}_{1} - \boldsymbol{Z}_{1} \} |
\le C \left( 1+ | \boldsymbol{x} |^2 \right)^{1/2} h^{p+1}
\text{.}$$ Together with \[Zmomentbound\], this implies that there exist $h_c>0$ and $C(T)>0$, such that for all $h<h_c$, $t \in [0, T]$ and $\boldsymbol{x} \in \mathbb{R}^{2n}$: $$\label{GLAmomentbound}
\E^{\boldsymbol{x}} \{ | \boldsymbol{X}_{\lfloor t/h \rfloor} |^2 \}
\le C(T) ( 1 + | \boldsymbol{x} |^2 )$$ Using Assumption [**U1**]{} and Theorem 1.1 of [@MiTr2004], one can also show that for all $T>0$, there exist $h_c>0$ and $C(T)>0$, such that for all $h<h_c$, $t \in [0, T]$ and $\boldsymbol{x} \in \mathbb{R}^{2n}$: $$\label{XZglobalmserror}
( \E^{\boldsymbol{x}} \{ | \boldsymbol{X}_{\lfloor t/h \rfloor} - \boldsymbol{Z}_{\lfloor t/h \rfloor} |^2 \} )^{1/2}
\le C(T) \left( 1+ | \boldsymbol{x} |^2 \right)^{1/2} h^{p}
\text{.}$$ In other words, GLA is $O(h^p)$ strongly convergent to the exact splitting. One can then use the triangle inequality to obtain the estimate in the theorem from (\[XZglobalmserror\]) and (\[YZglobalmserror\]), i.e., $$\begin{aligned}
& ( \E^{\boldsymbol{x}} \{ | \boldsymbol{X}_{\lfloor t/h \rfloor} - \boldsymbol{Y}(\lfloor t/h \rfloor h) |^2 \} )^{1/2} \le \\
& \qquad \underset{\le K(T) \left( 1+ | \boldsymbol{x} |^2 \right)^{1/2} h^{p} }{\underbrace{
( \E^{\boldsymbol{x}} \{ | \boldsymbol{X}_{\lfloor t/h \rfloor} - \boldsymbol{Z}_{\lfloor t/h \rfloor} |^2 \} )^{1/2} }} +
\underset{ \le K(T) \left( 1+ | \boldsymbol{x} |^2 \right)^{1/2} h }{\underbrace{
( \E^{\boldsymbol{x}} \{ | \boldsymbol{Z}_{\lfloor t/h \rfloor} - \boldsymbol{Y}(\lfloor t/h \rfloor h) |^2 \} )^{1/2} }} \text{.}\end{aligned}$$ In sum, GLA is first-order strongly convergent to solutions of the inertial Langevin equations.
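This one-step scaling can be checked directly on the linear oscillator $U(q) = q^2/2$ with unit mass: driven by the same Gaussian increment $\xi$, one step of GLA and one step of the exact splitting differ only through $\theta_h - \vartheta_h$, so their gap is deterministic given $\xi$ and scales like $h^{p+1}$. A minimal sketch for the symplectic-Euler case ($p = 1$); the state and noise values are illustrative:

```python
import math

def ou_kick(p, h, gamma, beta, xi):
    # psi_h: exact Ornstein-Uhlenbeck update of the momentum (unit mass)
    return math.exp(-gamma * h) * p + math.sqrt((1 - math.exp(-2 * gamma * h)) / beta) * xi

def sympl_euler(q, p, h):
    # theta_h: first-order symplectic Euler for U(q) = q^2 / 2
    q1 = q + h * p
    return q1, p - h * q1

def exact_flow(q, p, h):
    # vartheta_h: exact Hamiltonian flow of the linear oscillator (a rotation)
    return (math.cos(h) * q + math.sin(h) * p,
            -math.sin(h) * q + math.cos(h) * p)

def one_step_gap(h, q0=1.0, p0=0.5, gamma=1.0, beta=2.0, xi=0.3):
    # X_1 - Z_1 = (theta_h - vartheta_h) o psi_h(x) for a shared noise xi
    phat = ou_kick(p0, h, gamma, beta, xi)
    qa, pa = sympl_euler(q0, phat, h)
    qb, pb = exact_flow(q0, phat, h)
    return math.hypot(qa - qb, pa - pb)

ratio = one_step_gap(0.01) / one_step_gap(0.005)
# ratio is close to 2^(p+1) = 4 for the symplectic Euler choice (p = 1)
```

Halving $h$ shrinks the one-step gap by a factor close to $2^{p+1} = 4$, consistent with the $O(h^{p+1})$ local error behind the global $O(h^p)$ estimate.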
Geometric Ergodicity
--------------------
Geometric ergodicity is a strong type of stochastic stability of a Markov chain [@MeTw2009]. In this section geometric ergodicity of GLA is established following the recipe provided in §7 of [@MaStHi2002]. In the context of this paper, geometric ergodicity means the following.
A Markov chain $\boldsymbol{X}_k$ is said to be geometrically ergodic if there exist a probability measure $\mu_{\infty}$, a constant $\rho < 1$, and $M \in C^{\infty}(\mathbb{R}^{2n}, \mathbb{R}^+)$, such that $$\label{eq:geometricergodicity}
| \E^{\boldsymbol{x}} \left\{ f( \boldsymbol{X}_k ) \right\} - \mu_{\infty}(f) |
\le M(\boldsymbol{x}) \rho^k,~~\forall~\boldsymbol{x} \in \mathbb{R}^{2n},~~\forall~k\in\mathbb{N} \text{,}$$ for all $f \in L^2_{\mu_{\infty}}(\mathbb{R}^{2n})$ satisfying $| f(\boldsymbol{y}) | \le M(\boldsymbol{y})$ for all $\boldsymbol{y} \in \mathbb{R}^{2n}$.
Under the hypotheses below, the Lyapunov function from Assumption \[sa3\] is inherited by GLA.
\[Geometric Ergodicity\] Assume \[sa1\], \[sa2\], and \[sa3\]. Then GLA is geometrically ergodic with respect to a discrete invariant measure $\mu_h$ and the continuous Lyapunov function (cf. Assumption \[sa3\]). That is, there exist $h_c>0$, $\lambda>0$, and $C_3 > 0$, such that for all $h<h_c$ and all $k \ge 2$, $$| \E^{\boldsymbol{x}} \left\{ f( \boldsymbol{X}_k ) \right\} - \mu_h(f) |
\le C_3 V(\boldsymbol{x}) e^{-\lambda k h}, ~~\forall~\boldsymbol{x} \in \mathbb{R}^{2n},$$ for all test functions $f$ satisfying $| f(\boldsymbol{y}) | \le C_3 V(\boldsymbol{y})$ for all $\boldsymbol{y} \in \mathbb{R}^{2n}$.
This proof is an application of Theorem 2.5 of [@MaStHi2002]. To invoke this theorem, we will show that GLA inherits the Lyapunov function $V:\mathbb{R}^{2n} \to \mathbb{R}$ of the continuous solution (cf. Assumption \[sa3\]) and satisfies a minorization condition when sampled every other step.
To prove that GLA inherits the Lyapunov function $V:\mathbb{R}^{2n} \to \mathbb{R}$ we use Theorem 7.2 of [@MaStHi2002]. This theorem assumes that the Lyapunov function of the SDE is essentially quadratic, which follows from Assumption \[sa3\], and that the discretization of the SDE satisfies Condition 7.1 of [@MaStHi2002]. Condition 7.1 (i) is a consequence of a single-step mean-squared error estimate of GLA, which can be derived from (\[XZmserror\]) and Lemma \[singlesteperror\]. Condition 7.1 (ii) is satisfied for the first and second moments of GLA due to the moment bound (\[GLAmomentbound\]). Hence all of the assumptions of Theorem 7.2 of [@MaStHi2002] are satisfied, and one can conclude that GLA inherits the Lyapunov function $V:\mathbb{R}^{2n} \to \mathbb{R}$ up to a constant pre-factor.
Next, we prove that GLA satisfies a minorization condition when sampled every other step. This property follows from Lemma 2.3 of [@MaStHi2002], because GLA sampled every other step admits a strictly positive, smooth transition probability function. In fact, this transition probability $q_h: \mathbb{R}^{2n} \times \mathbb{R}^{2n} \to [0, 1]$ can be explicitly characterized, and by inspection it is clear that it is smooth as a function of its arguments and strictly positive everywhere.
To derive this expression, let $o_h :\mathbb{R}^n \times \mathbb{R}^n \to [0,\infty)$ denote the transition density of the Ornstein-Uhlenbeck flow $\psi_{h}$. By a change of variables, it is given explicitly by: $$\begin{aligned}
\label{oh}
& o_h (\boldsymbol{p}_0, \boldsymbol{p}_1) = \nonumber \\
& \frac{1}{( 2 \pi )^{n/2} | \det( \boldsymbol{\Sigma}_h) |^{1/2} }
\exp\left(-\frac{1}{2} \left( \boldsymbol{p}_1 - e^{- \gamma \boldsymbol{M}^{-1} h} \boldsymbol{p}_0 \right)^T
\boldsymbol{\Sigma}_h^{-1} \left( \boldsymbol{p}_1 - e^{- \gamma \boldsymbol{M}^{-1} h} \boldsymbol{p}_0 \right) \right)
\text{,}\end{aligned}$$ where $$\boldsymbol{\Sigma}_h = \beta^{-1} \left( \boldsymbol{Id} - \exp(- 2 \gamma \boldsymbol{M}^{-1} h) \right) \boldsymbol{M} \text{.}$$ Let $\mathbf{D} = \mathbb{R}^{n} \times \mathbb{R}^{n} \times \mathbb{R}^{n} \times \mathbb{R}^n$. Since the maps $\theta_h$ and $\psi_h$ enjoy the Markov property, the transition probability of the composition $ \theta_h \circ \psi_h \circ \theta_h \circ \psi_h$ can be expressed as a product of the transition probabilities of its components: $$\begin{aligned}
&q_{h} \left( (\boldsymbol{q},\boldsymbol{p}), (\bar{\boldsymbol{q}},\bar{\boldsymbol{p}}) \right) = \\
& \int_{\mathbf{D}} o_h(\boldsymbol{p}, \boldsymbol{p}_1)
\delta((\boldsymbol{q}_1, \boldsymbol{p}_2)-\theta_h(\boldsymbol{q},\boldsymbol{p}_1))
o_h (\boldsymbol{p}_2 , \boldsymbol{p}_3)
\delta((\bar{\boldsymbol{q}}, \bar{\boldsymbol{p}})-\theta_h(\boldsymbol{q}_1,\boldsymbol{p}_3))
d \boldsymbol{p}_1 d \boldsymbol{p}_2 d \boldsymbol{p}_3 d \boldsymbol{q}_1 \text{.}\end{aligned}$$ The zero of the argument of the second Dirac-delta measure (from left) occurs at $(\boldsymbol{q}_1,\boldsymbol{p}_3) = \theta_h^{-1}(\bar{\boldsymbol{q}}, \bar{\boldsymbol{p}})$. Hence, the above expression simplifies to $$\begin{aligned}
& q_{h} \left( (\boldsymbol{q},\boldsymbol{p}), (\bar{\boldsymbol{q}},\bar{\boldsymbol{p}}) \right) =\\
& \int_{\mathbb{R}^{n} \times \mathbb{R}^{n} } o_h(\boldsymbol{p}, \boldsymbol{p}_1)
\delta((\boldsymbol{q}_1, \boldsymbol{p}_2)-\theta_h(\boldsymbol{q},\boldsymbol{p}_1))
o_h (\boldsymbol{p}_2 , \boldsymbol{p}_3)
d \boldsymbol{p}_1 d \boldsymbol{p}_2 \text{.}\end{aligned}$$ By condition [**V1**]{} on $\theta_h$, the zero of the argument of the remaining Dirac-delta measure above is uniquely determined by the discrete Hamiltonian flow of the discrete Lagrangian (cf. the Appendix). Hence, one obtains: $$\begin{aligned}
\label{qh}
& q_{h} \left( (\boldsymbol{q},\boldsymbol{p}), (\bar{\boldsymbol{q}},\bar{\boldsymbol{p}}) \right) =\nonumber \\
& \quad | \det(D_{12} L_d(\boldsymbol{q}, \boldsymbol{q}_1,h) )|
o_h(\boldsymbol{p}, -D_1 L_d(\boldsymbol{q}, \boldsymbol{q}_1, h))
o_h( D_2 L_d(\boldsymbol{q}, \boldsymbol{q}_1, h), \boldsymbol{p}_3) \end{aligned}$$ where $(\boldsymbol{q}_1,\boldsymbol{p}_3) = \theta_h^{-1}(\bar{\boldsymbol{q}}, \bar{\boldsymbol{p}})$. Using the hyperregularity assumption [**V1**]{} on the variational integrator together with (\[oh\]) and (\[qh\]), it is clear that $q_h$ is a smooth transition probability function that is everywhere strictly positive. Hence, by Lemma 2.3 of [@MaStHi2002], GLA sampled every other step satisfies a minorization condition.
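In the scalar, unit-mass case the density $o_h$ of (\[oh\]) reduces to an ordinary Gaussian, which makes the strict positivity and smoothness used above easy to see. A minimal sketch with illustrative parameter values, including a crude normalization check:

```python
import math

def o_h(p0, p1, h, gamma=1.0, beta=2.0):
    # Scalar, unit-mass instance of the OU transition density:
    # p1 | p0 ~ N( exp(-gamma*h) * p0 , Sigma_h ), with
    # Sigma_h = (1 - exp(-2*gamma*h)) / beta.
    var = (1.0 - math.exp(-2.0 * gamma * h)) / beta
    mean = math.exp(-gamma * h) * p0
    return math.exp(-0.5 * (p1 - mean) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

# crude normalization check on a wide uniform grid covering [-8, 8]
h, p0, dx = 0.5, 0.7, 0.004
mass = sum(o_h(p0, -8.0 + k * dx, h) for k in range(4001)) * dx
# mass should be close to 1
```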
In sum, we have shown that GLA satisfies a minorization condition and admits a Lyapunov function. The result follows from invoking Theorem 2.5 in [@MaStHi2002].
Long-Run Accuracy
-----------------
Now we quantify the accuracy of GLA in sampling from the equilibrium measure of . For this purpose recall the following definition.
A Markov chain $\boldsymbol{X}_k \in \mathbb{R}^{2n}$ is said to preserve a probability measure $\mu_{\infty}$ if for all $f \in L^2_{\mu_{\infty}}(\mathbb{R}^{2n})$ and $k \in \mathbb{N}$, $$\label{eq:muinvariancecondition}
\E_{\mu_{\infty}} \E^{\boldsymbol{x}} \{ f( \boldsymbol{X}_k ) \} = \mu_{\infty}(f)$$ where $\mu_{\infty}(f) = \int_{\mathbb{R}^{2n}} f d \mu_{\infty}$ and $\E_{\mu_{\infty}} \E^{\boldsymbol{x}}$ denotes expectation conditioned on the initial distribution being sampled from $\mu_{\infty}$, i.e., $$\mathbb{E}_{\mu_{\infty}} \mathbb{E}^{\boldsymbol{x}} \left\{ f( \boldsymbol{X}_k ) \right\} =
\int_{\mathbb{R}^{2n}} \mathbb{E}^{\boldsymbol{x}} \left\{ f( \boldsymbol{X}_k ) \right\}
\mu_{\infty} (d \boldsymbol{x}) \text{.}$$
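As a concrete instance of this invariance condition, the Ornstein-Uhlenbeck map preserves the Boltzmann-Gibbs momentum marginal: in the scalar, unit-mass case the variance recursion induced by $\psi_h$ has $1/\beta$ as a globally attracting fixed point. A minimal sketch (parameter values are illustrative):

```python
import math

def ou_var_update(v, h, gamma, beta):
    # If p ~ N(0, v), then after psi_h the momentum is
    # exp(-gamma*h)*p + sqrt((1 - exp(-2*gamma*h))/beta)*xi with xi ~ N(0,1),
    # whose variance is:
    a = math.exp(-2.0 * gamma * h)
    return a * v + (1.0 - a) / beta

gamma, beta, h = 1.0, 2.0, 0.1
v = 3.0                      # start far from equilibrium
for _ in range(500):
    v = ou_var_update(v, h, gamma, beta)
# v has converged to 1/beta: the BG momentum marginal is invariant under psi_h
```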
Given a step-size $h$, define the deviation GLA makes in preserving the Boltzmann-Gibbs measure, $\mu$, as $\Delta_h^k: L^2_{\mu}(\mathbb{R}^{2n}) \to \mathbb{R}$: $$\Delta_h^k(f) := \E_{\mu} \E^{\boldsymbol{x}} \{ f( \boldsymbol{X}_k ) \} - \mu(f)
\text{.}$$ Observe that if GLA exactly preserves $\mu$ then: $$\Delta_h^k(f) = 0, ~~~ \forall ~f \in L^2_{\mu}(\mathbb{R}^{2n}) \text{.}$$ The following local error result follows from the Ornstein-Uhlenbeck flow $\psi_h$ preserving $\mu$ and the variational integrator $\theta_h$ preserving Lebesgue measure.
\[Deltah1\] Suppose the potential energy satisfies [**U2**]{}. For a given $f \in L^2_{\mu}(\mathbb{R}^{2n})$, $$\Delta_h^1(f) = \int_{\mathbb{R}^{2n}} f(\boldsymbol{x})
\left( e^{-\beta \left( H((\theta_h)^{-1} (\boldsymbol{x})) - H(\boldsymbol{x}) \right)} - 1 \right)
\mu(d \boldsymbol{x}) \text{.}$$
The condition [**U2**]{} ensures that $\mu$ is a well-defined probability measure. According to the definition of GLA, $\boldsymbol{X}_1= \theta_h \circ \psi_h(\boldsymbol{x})$. Substitute this expression into $\Delta_h^1$ to obtain: $$\Delta_h^1(f) =
\int_{\mathbb{R}^{2n}} \E^{\boldsymbol{x}} \left\{ f (\theta_h \circ \psi_h(\boldsymbol{x}) ) \right\} \mu( d \boldsymbol{x} )
- \int_{\mathbb{R}^{2n}} f d \mu \text{.}$$ Since $\psi_h$ preserves $\mu$ and $\theta_h$ is deterministic it follows that, $$\Delta_h^1(f) =
\int_{\mathbb{R}^{2n}} f (\theta_h(\boldsymbol{x}) ) \mu(d \boldsymbol{x} ) - \int_{\mathbb{R}^{2n}} f d \mu \text{.}$$ Changing variables under the map $\theta_h$ in the first integral above, and using the volume-preserving property of the variational integrator $\theta_h$ (see the Appendix), one obtains the desired expression.
As a consequence of Lemma \[Deltah1\], if $\theta_h$ admits no energy error, then GLA preserves $\mu$. In particular, the exact splitting preserves $\mu$.
When GLA is geometrically ergodic, the following lemma quantifies the equilibrium error of GLA in preserving the BG measure.
\[Deltahinfty\] Assume \[sa1\], \[sa2\], and \[sa3\]. Then, there exist $C>0$ and $h_c>0$, such that for all $h<h_c$, $$\lim_{N \to \infty} \left| \Delta_h^N( f ) \right| \le C h^{p}$$ for all $f \in L^2_{\mu}(\mathbb{R}^{2n})$ satisfying $| f(\boldsymbol{y}) | \le C_3 V(\boldsymbol{y})$ for all $\boldsymbol{y} \in \mathbb{R}^{2n}$.
Let $f \in L^2_{\mu}(\mathbb{R}^{2n})$ be such that $| f(\boldsymbol{y}) | \le C_3 V(\boldsymbol{y})$ for all $\boldsymbol{y} \in \mathbb{R}^{2n}$. The term $\E_{\mu} \E^{\boldsymbol{x}} \{ f( \boldsymbol{X}_N ) \} $ can be written as a telescoping sum: $$\begin{aligned}
\E_{\mu} \E^{\boldsymbol{x}} \{ f( \boldsymbol{X}_N ) \} = \mu(f) +
\sum_{k=1}^{N} \left( \E_{\mu} \E^{\boldsymbol{x}} \{ f( \boldsymbol{X}_k ) \} -
\E_{\mu} \E^{\boldsymbol{x}} \{ f( \boldsymbol{X}_{k-1} ) \} \right) \text{.}\end{aligned}$$ By Lemma \[Deltah1\], one can rewrite and reindex this sum as: $$\Delta_h^N( f ) = \int_{\mathbb{R}^{2n}}\sum_{k=0}^{N-1} \E^{\boldsymbol{x}} \left\{ f( \boldsymbol{X}_k ) \right\} \left( e^{ - \beta
(H ( \theta_h^{-1}(\boldsymbol{x})) - H(\boldsymbol{x}) ) } - 1 \right) \mu(d \boldsymbol{x}) \text{.}$$ Since $\theta_h$ preserves Lebesgue measure, one can write this deviation as: $$\begin{aligned}
\label{DeltahN}
\Delta_h^N & ( f ) = \nonumber \\
& \int_{\mathbb{R}^{2n}} \sum_{k=0}^{N-1} \underset{\text{Deviation from Equilibrium}}{\underbrace{
\left( \E^{\boldsymbol{x}} \left\{ f( \boldsymbol{X}_k ) \right\} - \mu_h(f) \right) }} \cdot
\underset{\text{Energy Error of Variational Integrator}}{\underbrace{
\left( e^{ - \beta (H ( \theta_h^{-1}(\boldsymbol{x}) ) - H(\boldsymbol{x}) ) } - 1 \right) }} \mu(d \boldsymbol{x}) \text{.} \end{aligned}$$ From (\[DeltahN\]) it is clear that the equilibrium BG error is due to: 1) how fast GLA converges to equilibrium, and 2) the local accuracy with which $\theta_h$ preserves the Hamiltonian function $H$. The equality (\[DeltahN\]) is the crux of the proof, and what follows is an approach to bound $\Delta_h^N(f)$.
Since GLA is geometrically ergodic (cf. Theorem \[GLAgeometricergodicity\]), one can bound $\Delta_h^N(f)$ from above by $$\left| \Delta_h^N( f ) \right| \le \left( \sum_{k=0}^{N-1} e^{-\lambda h k} \right) C_3 \int_{\mathbb{R}^{2n}} V( \boldsymbol{x} )
\left| e^{ - \beta (H ( \theta_h^{-1}(\boldsymbol{x}) ) - H(\boldsymbol{x}) ) } - 1 \right| \mu(d \boldsymbol{x}) \text{.}$$ Changing variables in the right-hand-side under the map $\theta_h$, one can rewrite this bound as, $$\left| \Delta_h^N( f ) \right| \le \left( \sum_{k=0}^{N-1} e^{-\lambda h k} \right) C_3 \int_{\mathbb{R}^{2n}} V( \theta_h(\boldsymbol{x}) )
\left| e^{ - \beta (H ( \theta_h(\boldsymbol{x}) )- H(\boldsymbol{x}) ) } - 1 \right| \mu(d \boldsymbol{x}) \text{.}$$ In the limit as $N \to \infty$, the right-hand-side of the above can be written in terms of the formula for the geometric series for $e^{-\lambda h}$: $$\lim_{N \to \infty} \left| \Delta_h^N( f ) \right| \le \frac{C_3}{1 - e^{-\lambda h}} \int_{\mathbb{R}^{2n}} V( \theta_h(\boldsymbol{x}) )
\left| e^{ - \beta (H ( \theta_h(\boldsymbol{x}) ) - H(\boldsymbol{x}) ) } - 1 \right| \mu(d \boldsymbol{x}) \text{.}$$ Using the natural bound $| e^{x} - 1 | \le e^{| x |} - 1$ for all $x \in \mathbb{R}$, one can further bound $| \Delta_h^N(f) |$ by: $$\label{DeltahNieq}
\lim_{N \to \infty} \left| \Delta_h^N( f ) \right| \le \frac{C_3}{1 - e^{-\lambda h}} \int_{\mathbb{R}^{2n}} V( \theta_h(\boldsymbol{x}) )
\left( e^{ \beta | H ( \theta_h(\boldsymbol{x}) ) - H(\boldsymbol{x}) | } - 1 \right) \mu(d \boldsymbol{x}) \text{.}$$ Since the exact flow $\vartheta_h$ of Hamilton’s equations conserves energy, $H(\boldsymbol{x}) = H(\vartheta_h(\boldsymbol{x}))$, and one may introduce $\vartheta_h$ into this bound: $$
\lim_{N \to \infty} \left| \Delta_h^N( f ) \right| \le \frac{C_3}{1 - e^{-\lambda h}} \int_{\mathbb{R}^{2n}} V( \theta_h(\boldsymbol{x}) )
\left( e^{ \beta | H ( \theta_h(\boldsymbol{x}) )- H(\vartheta_h(\boldsymbol{x})) | } - 1 \right) \mu(d \boldsymbol{x}) \text{.}$$
Set $\boldsymbol{y}_0 = \theta_h(\boldsymbol{x})$ and $\boldsymbol{y}_1 = \vartheta_h(\boldsymbol{x})$. By the fundamental theorem of calculus, $$H(\boldsymbol{y}_1) - H(\boldsymbol{y}_0) =
\int_0^1 \nabla H(\boldsymbol{y}_0 + s (\boldsymbol{y}_1 - \boldsymbol{y}_0))
\cdot (\boldsymbol{y}_1 - \boldsymbol{y}_0) ds \text{.}$$ Using condition [**U1**]{} and the Cauchy-Schwarz inequality, it follows from the above that there exists $C > 0$ such that $$| H(\boldsymbol{y}_1) - H(\boldsymbol{y}_0) | \le
C (1+ | \boldsymbol{y}_1 | + | \boldsymbol{y}_0 | ) | \boldsymbol{y}_1 - \boldsymbol{y}_0 | \text{.}$$ Another application of the conditions [**U1**]{} and [**V2**]{} implies there exists $K > 0$ such that $$| H(\boldsymbol{y}_1) - H(\boldsymbol{y}_0) | \le
K (1+ | \boldsymbol{x} |^2 ) h^{p+1} \text{.}$$ Therefore, $$
\lim_{N \to \infty} \left| \Delta_h^N( f ) \right| \le \frac{C_3}{1 - e^{-\lambda h}} \int_{\mathbb{R}^{2n}} V( \theta_h(\boldsymbol{x}) )
\left( e^{ \beta K (1+ | \boldsymbol{x} |^2 ) h^{p+1} } - 1 \right) \mu(d \boldsymbol{x}) \text{.}$$ Now we show how the factor $V( \theta_h(\boldsymbol{x}) )$ above is handled.
Since the Lyapunov function is quadratically bounded, the variational integrator satisfies [**V2**]{}, and the Hamiltonian vector field is uniformly Lipschitz by condition [**U1**]{}, there exists $C > 0$ such that $$\lim_{N \to \infty} \left| \Delta_h^N( f ) \right| \le \frac{C}{1 - e^{-\lambda h}} \int_{\mathbb{R}^{2n}} (1 + |\boldsymbol{x} |^2 )
\left( e^{ \beta K (1+ | \boldsymbol{x} |^2 ) h^{p+1} } - 1 \right) \mu(d \boldsymbol{x}) \text{.}$$ By condition [**U2**]{} the total energy is quadratically bounded from below. Consequently one can bound $e^{- \beta H(\boldsymbol{x})}$ by $e^{- \beta D (1 + | \boldsymbol{x} |^2 ) }$ for some constant $D > 0$. Thus, $$\begin{aligned}
& \lim_{N \to \infty} \left| \Delta_h^N( f ) \right| \le \\
& \qquad \frac{C}{1 - e^{-\lambda h}} \int_{\mathbb{R}^{2n}} (1 + |\boldsymbol{x} |^2 )
\left( e^{ \beta K (1+ | \boldsymbol{x} |^2 ) h^{p+1} } - 1 \right) e^{- \beta D (1 + | \boldsymbol{x} |^2 ) } d \boldsymbol{x} \text{.}
\end{aligned}$$ When $h < h_c= (D/K)^{1/(p+1)}$ the above integral is finite and one obtains the desired error estimate.
A simple application of Lemma \[Deltahinfty\] yields an error estimate for $\mu_h$. For this purpose we introduce the total variation distance between measures $\mu$ and $\nu$: $$| \mu - \nu |_{TV} = \sup_{| f | \le 1} \left| \int_{\mathbb{R}^{2n}}
f(\boldsymbol{x}) (\mu(d \boldsymbol{x}) - \nu(d \boldsymbol{x})) \right| \text{.}$$ Since $\tilde{M}( \boldsymbol{y}) \ge 1$ for all $ \boldsymbol{y} \in \mathbb{R}^{2n}$, Lemma \[Deltahinfty\] applies for all $f \in L^2_{\mu}(\mathbb{R}^{2n})$ such that $| f(\boldsymbol{y}) | \le 1$ for all $\boldsymbol{y} \in \mathbb{R}^{2n}$. The TV distance can be written as: $$\begin{aligned}
&| \mu - \mu_h |_{TV} = \\
& \qquad \sup_{| f | \le 1} \left| \int_{\mathbb{R}^{2n}} f d \mu - \E_{\mu} \E^{\boldsymbol{x}} \{ f( \boldsymbol{X}_N ) \}
+ \E_{\mu} \E^{\boldsymbol{x}} \{ f( \boldsymbol{X}_N ) \} - \int_{\mathbb{R}^{2n}} f d\mu_h \right| \end{aligned}$$ By the triangle inequality, $$\begin{aligned}
| \mu - \mu_h |_{TV} \le \sup_{| f | \le 1} \left| \Delta_h^N(f) \right|
+ \sup_{| f | \le 1} \left| \E_{\mu} \E^{\boldsymbol{x}} \{ f( \boldsymbol{X}_N ) \} -\mu_h(f) \right| \text{.} \label{eq:tvprelimit}\end{aligned}$$ However, under the hypotheses of the theorem, GLA is geometrically ergodic with respect to $\mu_h$ and hence, $$\label{eq:decays}
\lim_{N \to \infty} \sup_{| f | \le 1}
\left| \E_{\mu} \E^{\boldsymbol{x}} \{ f( \boldsymbol{X}_N ) \} -\mu_h(f) \right| = 0 \text{,}$$ and hence, $$\begin{aligned}
\label{tvestimate}
| \mu - \mu_h |_{TV} \le \lim_{N \to \infty} \sup_{| f | \le 1} \left| \Delta_h^N(f) \right| \text{.}\end{aligned}$$ Lemma \[Deltahinfty\] can now be invoked to obtain from (\[tvestimate\]) an upper bound for the TV distance between $\mu$ and $\mu_h$. This concludes the proof of Theorem \[GLABGaccuracy\], which we restate:
Assume \[sa1\], \[sa2\], and \[sa3\]. Let $\mu_h$ denote the discrete invariant measure of GLA. Then, there exist $C>0$ and $h_c>0$, such that for all $h<h_c$, $$| \mu - \mu_h |_{TV} \le C h^{p} \text{.}$$
In summary, the preceding analysis showed that the TV error estimate in Theorem \[GLABGaccuracy\] relies on GLA’s variational integrator $\theta_h$ being volume-preserving and $p$th-order accurate, the Ornstein-Uhlenbeck map $\psi_h$ exactly preserving the Boltzmann-Gibbs measure, and GLA being geometrically ergodic. To establish the latter, we used the strategy adopted in [@MaStHi2002], which relates pathwise convergence of a discretization of an SDE to geometric ergodicity of the discretization. This strategy requires that the potential force be uniformly Lipschitz.
Validation {#Validation}
==========
This section tests three instances of GLA on simple mechanical systems governed by Langevin equations, in order to confirm the error estimates provided in the paper.
Let $h$ be a fixed step size and $\xi_k \sim \mathcal{N}(0,1)$ for $k \in \mathbb{N}$. The following update scheme is obtained by composing the explicit first-order, symplectic Euler method with $\psi_h$: $$\label{eq:bgse}
\begin{cases}
\begin{array}{rcl}
\hat{p}_k &=& e^{-\gamma h} p_k + \sqrt{\frac{1-e^{-2 \gamma h}}{\beta}} \xi_k \text{,} \\
q_{k+1} &=& q_k + h \hat{p}_k \text{,} \\
p_{k+1} &=& \hat{p}_k - h \frac{\partial U}{\partial q}(q_{k+1}) \text{,}
\end{array}
\end{cases}$$ for $k \in \mathbb{N}$. The following integrator is obtained by composing the second-order accurate explicit, symmetric, symplectic Störmer-Verlet method with $\psi_h$: $$\label{eq:bgsv}
\begin{cases}
\begin{array}{rcl}
\hat{p}_k &=& e^{-\gamma h} p_k + \sqrt{\frac{1-e^{-2 \gamma h}}{\beta}} \xi_k \text{,} \\
P_k^{1/2} &=& \hat{p}_k - \frac{h}{2} \frac{\partial U}{\partial q}(q_k) \text{,} \\
q_{k+1} &=& q_k + h P_k^{1/2} \text{,} \\
p_{k+1} &=& P_k^{1/2} - \frac{h}{2} \frac{\partial U}{\partial q}(q_{k+1}) \text{,}
\end{array}
\end{cases}$$ for $k \in \mathbb{N}$. The following integrator is obtained by composing a fourth-order accurate explicit, symmetric, symplectic method due to F. Neri (see, e.g., [@Yo1990]) with $\psi_h$: $$\label{eq:bgne}
\begin{cases}
\begin{array}{rcl}
Q_1 &=& q_k \text{,} \\
P_1 &=& e^{-\gamma h} p_k + \sqrt{\frac{1-e^{-2 \gamma h}}{\beta}} \xi_k \text{,} \\
\end{array} \\
\begin{cases}
\begin{array}{rcl}
P_{i+1} &=& P_i - c_i h \frac{\partial U}{\partial q}(Q_i) \text{,} \\
Q_{i+1} &=& Q_i + d_i h P_{i+1} \text{,}
\end{array}~~~ i =1,..., 4,
\end{cases} \\
\begin{array}{rcl}
q_{k+1} &=& Q_{5} \text{,} \\
p_{k+1} &=& P_{5} \text{,}
\end{array}
\end{cases}$$ for $k \in \mathbb{N}$, and where we have introduced the following constants: $$\begin{aligned}
c_1 = c_4 = \frac{1}{2 (2 -2^{1/3})}, &~~~ c_2 = c_3 = \frac{1- 2^{1/3}}{2 ( 2- 2^{1/3})}, ~~\\
d_1 = d_3 = \frac{1}{2 -2^{1/3}}, &~~~ d_2 = \frac{-2^{1/3}}{2- 2^{1/3}}, d_4 = 0 \text{.}\end{aligned}$$ The purpose of this fourth-order symplectic integrator is for validation. For “optimal” fourth and fifth-order accurate symplectic integrators that minimize the error in the Hamiltonian, the reader is referred to [@McAt1992].
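A minimal Python sketch of the first two updates, (\[eq:bgse\]) and (\[eq:bgsv\]), for unit mass; the function names are ours and the potential gradient `dU` is supplied by the caller:

```python
import math

def ou(p, h, gamma, beta, xi):
    # psi_h: exact Ornstein-Uhlenbeck update of the momentum (unit mass)
    return math.exp(-gamma * h) * p + math.sqrt((1 - math.exp(-2 * gamma * h)) / beta) * xi

def gla_euler(q, p, h, gamma, beta, xi, dU):
    # one step of the symplectic-Euler-based scheme: OU kick, then theta_h
    ph = ou(p, h, gamma, beta, xi)
    q1 = q + h * ph
    p1 = ph - h * dU(q1)
    return q1, p1

def gla_verlet(q, p, h, gamma, beta, xi, dU):
    # one step of the Stoermer-Verlet-based scheme: OU kick, then theta_h
    ph = ou(p, h, gamma, beta, xi)
    phalf = ph - 0.5 * h * dU(q)
    q1 = q + h * phalf
    p1 = phalf - 0.5 * h * dU(q1)
    return q1, p1
```

With $\gamma = 0$ the noise coefficient vanishes and the maps reduce to the underlying deterministic symplectic integrators, which provides a convenient sanity check.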
We will show that despite the fact that (\[eq:bgsv\]) and (\[eq:bgne\]) are only first-order pathwise convergent according to Theorem \[GLAaccuracy\], they approximate ensemble averages of $\mu$-integrable functions that satisfy $| f(q,p) | \le M(q,p)$ for all $(q,p) \in \mathbb{R}^{2n}$ to within second and fourth-order accuracy, respectively. This is consistent with Theorem \[GLABGaccuracy\].
#### Linear Oscillator
This section follows the analysis of numerical methods for linear oscillators governed by Langevin equations developed in [@MiTr2004; @BuLeLy2007]. The governing equations for a linear oscillator of unit mass at uniform temperature $1/\beta$ are obtained by evaluating the Langevin equations at $U(q) = q^2/2$: $$\begin{cases}
\begin{array}{rcl}
d q &=& p dt \text{,} \\
d p &= & -q dt - \gamma p dt + \sqrt{2 \beta^{-1} \gamma} d W \text{.}
\end{array}
\end{cases}$$ The resulting process is Gaussian with stationary distribution given by the BG distribution: $$P_{\infty}(q,p) = Z^{-1} \exp\left( - \beta \left( \frac{ p^2}{2} + \frac{q^2}{2} \right) \right)$$ and with $$\mu(q^2) = \lim_{t \to \infty} \E \{ q_t^2 \} = 1/\beta, ~~~ \mu(p^2) = \lim_{t \to \infty} \E \{ p_t^2 \} =
1/\beta,~~~ \mu(q p) = \lim_{t \to \infty} \E\{ q_t p_t \} = 0 \text{.}$$
The stationary distribution of the geometric Langevin integrators (\[eq:bgse\])-(\[eq:bgne\]) is also Gaussian, of the form: $$P_h (q, p) = \frac{1}{2 \pi \det( \Sigma )^{1/2} } \exp\left( -\frac{1}{2} \begin{pmatrix} q & p
\end{pmatrix} \Sigma^{-1} \begin{pmatrix} q \\ p \end{pmatrix} \right)$$ where $$\Sigma = \begin{bmatrix} \sigma_q^2 & \kappa \\ \kappa & \sigma_p^2 \end{bmatrix}, ~~~
\sigma_q^2 = \lim_{n \to \infty} \E \{ q_n^2 \} , ~~~ \sigma_p^2 = \lim_{n \to \infty} \E \{ p_n^2 \} ,
~~~ \kappa = \lim_{n \to \infty} \E \{ q_n p_n \} \text{.}$$ This stationary correlation matrix can be explicitly determined. For (\[eq:bgse\]) its entries are given by: $$\begin{aligned}
\sigma_q^2 &= \frac{ \left( 1+ e^{\gamma h} \right)^2 }{ \left(2 + 2 e^{\gamma h} - h^2 \right) \beta} = \frac{1}
{\beta} + \mathcal{O}(h) \\
\sigma_p^2 &= \frac{2 + 2 e^{\gamma h} - h^2 + e^{2 \gamma h} h^2}{\left( 2 + 2 e^{\gamma h} - h^2\right) \beta} =
\frac{1}{\beta} + \mathcal{O}(h^2) \\
\kappa &= - \frac{e^{\gamma h} \left(1 + e^{\gamma h} \right) h}{\left( 2 + 2 e^{\gamma h} - h^2\right) \beta} =
\mathcal{O}(h)\end{aligned}$$ Observe that the cumulative error made by (\[eq:bgse\]) is of $\mathcal{O}(h)$, i.e., $$| \sigma_q^2 - \mu(q^2 ) | +
| \sigma_p^2 - \mu(p^2) | +
| \kappa - \mu(q p ) | \le \mathcal{O}(h) \text{.}$$ Whereas for (\[eq:bgsv\]) its entries are given by: $$\begin{aligned}
\sigma_q^2 &= \frac{4}{\beta ( 4 - h^2 )} = \frac{1}{\beta} + \frac{h^2}{4 \beta} + \mathcal{O}
(h^4) \\
\sigma_p^2 &= \frac{1}{\beta} \\
\kappa &= 0\end{aligned}$$ and its cumulative error is of $\mathcal{O}(h^2)$, i.e., $$| \sigma_q^2 - \mu(q^2 ) | +
| \sigma_p^2 - \mu(p^2) | +
| \kappa - \mu(q p ) | \le \mathcal{O}(h^2) \text{.}$$ For (\[eq:bgne\]) its entries are given by: $$\begin{aligned}
\sigma_q^2 &= \frac{1}{\beta} + \frac{\left(-4 - 3 \cdot 2^{1/3} - 2 \cdot 2^{2/3}\right) h^4}{144 \beta } + \mathcal{O}(h^5) \\
\sigma_p^2 &= \frac{1}{\beta} \\
\kappa &= 0\end{aligned}$$ and its cumulative error is of $\mathcal{O}(h^4)$, i.e., $$| \sigma_q^2 - \mu(q^2 ) | +
| \sigma_p^2 - \mu(p^2) | +
| \kappa - \mu(q p ) | \le \mathcal{O}(h^4) \text{.}$$
Finally, consider the exact splitting applied to the linear oscillator at uniform temperature. Hamilton’s equations for a linear oscillator are: $$\begin{bmatrix} \dot{q} \\ \dot{p} \end{bmatrix}(t) = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}
\begin{bmatrix} q \\ p \end{bmatrix}(t), ~~~ \begin{bmatrix} q \\ p \end{bmatrix}(0) =
\begin{bmatrix} q_0 \\ p_0 \end{bmatrix} \text{,}$$ with explicit solution given by: $$\begin{bmatrix} q \\ p \end{bmatrix}(t) = \begin{bmatrix} \cos(t) & \sin(t) \\ -\sin(t) & \cos(t)
\end{bmatrix} \begin{bmatrix} q_0 \\ p_0 \end{bmatrix} \text{.}$$ Thus, applying the rotation after the Ornstein-Uhlenbeck update of the momentum, the exact splitting update is given by: $$\begin{bmatrix} q_1 \\ p_1 \end{bmatrix} = \begin{bmatrix} \cos(h) & e^{-\gamma h} \sin(h) \\ -\sin(h) & e^{-\gamma h} \cos(h)
\end{bmatrix} \begin{bmatrix} q_0 \\ p_0 \end{bmatrix} +
\sqrt{\frac{1 - e^{-2 \gamma h}}{\beta}} \begin{bmatrix} \sin(h) \\ \cos(h) \end{bmatrix} \xi_0 \text{.}$$ In this situation one can show there is no error made in the stationary correlation matrix. This follows from the fact that the exact solution of Hamilton’s equations is volume- and energy-preserving.
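The absence of stationary error can also be verified numerically: one step of the exact splitting is a linear map plus additive Gaussian noise, so it acts on the covariance of $(q, p)$ by a fixed affine recursion whose fixed point is $(1/\beta)\,\mathrm{Id}$. A minimal sketch (parameter values are illustrative):

```python
import math

def step_cov(S, h, gamma, beta):
    # One step of the exact splitting on the linear oscillator:
    # p is first contracted by exp(-gamma*h) and kicked by OU noise,
    # then (q, p) is rotated by the exact Hamiltonian flow.
    # The covariance S of (q, p) is mapped to M S M^T + var * b b^T.
    c, s = math.cos(h), math.sin(h)
    a = math.exp(-gamma * h)
    M = [[c, s * a], [-s, c * a]]
    b = [s, c]                                   # noise enters p, then rotates
    var = (1.0 - math.exp(-2.0 * gamma * h)) / beta
    return [[sum(M[i][k] * S[k][l] * M[j][l] for k in range(2) for l in range(2))
             + var * b[i] * b[j] for j in range(2)] for i in range(2)]

h, gamma, beta = 0.3, 1.0, 2.0
S = [[4.0, 1.0], [1.0, 3.0]]                     # arbitrary initial covariance
for _ in range(2000):
    S = step_cov(S, h, gamma, beta)
# S has converged to (1/beta) * Identity: no stationary error
```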
#### Nonglobally Lipschitz, Nonlinear Oscillator
The theory in this paper does not apply to this example since the potential force is nonglobally Lipschitz. With a nonglobally Lipschitz potential force, for any $h>0$ there will exist regions in phase space where the Lipschitz constant of the potential force is beyond the linear stability threshold of an explicit variational integrator $\theta_h$. Hence, a GLA based on an explicit variational integrator will be stochastically unstable; transient, to be precise. However, for the step-sizes and variational integrators employed, and for the duration of the numerical experiments, discrete orbits of GLA seem to be confined to a compact region of phase space where the variational integrator $\theta_h$ is linearly stable and Monte Carlo estimates are consistent with the error estimates in the paper.
The governing equations for a cubic oscillator of unit mass at uniform temperature $1/\beta$ are obtained by evaluating the Langevin equations at $U(q) = q^4/4 - q^2/2$: $$\begin{cases}
\begin{array}{rcl}
d q &=& p dt \text{,} \\
d p &= & (q - q^3) dt - \gamma p dt + \sqrt{2 \beta^{-1} \gamma} d W \text{.}
\end{array}
\end{cases}$$ The resulting potential force is only locally Lipschitz.
The estimates shown earlier predict that $$| \mu(q^2) - \mu_h(q^2) | \le \mathcal{O}(h^p)$$ where $p$ is the order of accuracy of $\theta_h$. Hence, one expects near fourth-order accuracy for (\[eq:bgne\]), near second-order accuracy for (\[eq:bgsv\]), and first-order accuracy for (\[eq:bgse\]), as shown in Table \[tab:coerrorstats\]. The tests will apply (\[eq:bgse\])-(\[eq:bgne\]) to estimate $$\lim_{t \to \infty} \E \{ q_t^2 \} = \mu(q^2) = \frac{\int_{-\infty}^{\infty} q^2 e^{- \beta U(q)} dq}
{ \int_{-\infty}^{\infty} e^{- \beta U(q)} dq}$$ by empirical averages of the form $$I^{h, N} := \frac{1}{N} \left( \sum_{i=1}^N q_i^{2} \right) \text{.}$$ As nicely discussed in [@Ta2002], in addition to the discretization error $| \mu(q^2) -
\mu_h(q^2) | $ one has to cope with the statistical error arising from the time-average being finite, i.e., $I^{h, N} \approx \mu_h(q^2)$. The computations were performed with $\gamma = 1$ and an inverse temperature value of $ \beta = 2$.
Time-Step   Number of Steps   (\[eq:bgse\])   (\[eq:bgsv\])   (\[eq:bgne\])
----------- ----------------- --------------- --------------- ---------------
h           N                 3.11e-02        8.03e-03        1.45e-02
h/2         2 N               1.49e-02        1.94e-03        9.80e-04
h/4         4 N               7.42e-03        4.83e-04        7.35e-05
h/8         8 N               3.74e-03        1.29e-04        5.79e-06
: The table estimates $\left| \mu_h(q^2) - \mu(q^2) \right|$ using empirical time-averages with $N=40 \times 10^9$ steps and $h=0.4$ with GLA as determined by (\[eq:bgse\])-(\[eq:bgne\]). For subsequent rows the time-steps are halved and the number of steps doubled, so that the time-interval of integration is fixed for all experiments. The results show that as the time-steps are halved the difference decreases linearly for (\[eq:bgse\]), nearly quadratically for (\[eq:bgsv\]), and nearly quartically for (\[eq:bgne\]). These results are consistent with the error estimates in the paper. []{data-label="tab:coerrorstats"}
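The reference value $\mu(q^2)$ entering these differences can be reproduced by one-dimensional quadrature of the Boltzmann-Gibbs marginal; a sketch, with an assumed truncation of the integration domain at $|q| \le 6$:

```python
import math

def mu_q2(beta, L=6.0, n=100001):
    # mu(q^2) = int q^2 exp(-beta*U) dq / int exp(-beta*U) dq
    # for the cubic-oscillator potential U(q) = q^4/4 - q^2/2,
    # approximated by a Riemann sum on [-L, L] (the tails are negligible).
    dq = 2.0 * L / (n - 1)
    num = den = 0.0
    for k in range(n):
        q = -L + k * dq
        w = math.exp(-beta * (q ** 4 / 4.0 - q ** 2 / 2.0))
        num += q * q * w
        den += w
    return num / den

ref = mu_q2(beta=2.0)   # roughly 0.9 for beta = 2
```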
Conclusion
==========
The analysis in this paper represents a first step towards a deeper analysis of GLA for molecular systems. In this paper we make assumptions on the Hamiltonian that ensure that the solutions of the inertial Langevin equations and of GLA are geometrically ergodic. In particular, we assume the Hamiltonian vector field is uniformly Lipschitz and the Hamiltonian is coercive. These hypotheses are sufficient to ensure GLA is geometrically ergodic whenever the solution process is. In particular, the former hypothesis is important to ensure GLA is stochastically stable [@MeTw2009]. If GLA’s underlying variational integrator is not globally linearly stable, one can show GLA defines a transient Markov chain. Still, one can use GLA as a proposal step within a Metropolis-Hastings algorithm to obtain a stochastically stable Metropolis-Adjusted Geometric Langevin Algorithm (MAGLA). A numerical analysis of MAGLA including pathwise convergence can be found in [@BoVa2009A].
A closer inspection of the proof of Theorem \[GLABGaccuracy\] reveals that the estimate relies on the following important ingredients:
1. GLA is geometrically ergodic with respect to a probability measure $\mu_h$;
2. the variational integrator is Lebesgue-measure preserving;
3. the Ornstein-Uhlenbeck flow preserves $\mu$; and,
4. the local energy error of the variational integrator is $(p+1)$th-order accurate.
Therefore, we stress that the result holds under more general conditions. The main point is:
> [*If GLA is geometrically ergodic with respect to a unique invariant measure, the error in sampling the invariant measure of the SDE is determined by the energy error in GLA’s variational integrator.*]{}
Appendix
========
Single-Step Error
-----------------
\[singlesteperror\] Assume \[sa1\] and \[sa2\]. For $h$ small enough, there exists a $C>0$ such that $$| \E^{\boldsymbol{x}} \{ \boldsymbol{Y}(h) - \boldsymbol{Z}_{1} \} |
\le C \left( 1+ | \boldsymbol{x} |^2 \right)^{1/2} h^{2}$$ and $$\left( \E^{\boldsymbol{x}} \{ | \boldsymbol{Y}(h) - \boldsymbol{Z}_{1} |^2 \} \right)^{1/2}
\le C \left( 1+ | \boldsymbol{x} |^2 \right)^{1/2} h^{3/2} \text{.}$$
Write $\mathbf{Y}(t)=(\boldsymbol{Q}(t),\boldsymbol{P}(t))$, where $\boldsymbol{Q}(t)$ and $\boldsymbol{P}(t)$ represent the instantaneous configuration and momentum of the system, respectively. In these variables the SDE reads: $$\label{langevin}
\begin{cases}
d \boldsymbol{Q} / dt &= \boldsymbol{M}^{-1} \boldsymbol{P} \\
d \boldsymbol{P} &= - \nabla U( \boldsymbol{Q} ) dt
- \gamma \boldsymbol{M}^{-1} \boldsymbol{P} dt + \sqrt{2 \gamma \beta^{-1}} d \boldsymbol{W}
\end{cases}$$ with $\boldsymbol{Q}(0) = \boldsymbol{Q}_0$ and $\boldsymbol{P}(0) = \boldsymbol{P}_0$. It will be useful to write out the solution of (\[langevin\]). For this purpose, integrate to obtain: $$\begin{aligned}
& \boldsymbol{Q}(h) = \boldsymbol{Q}_0
+ h \boldsymbol{M}^{-1} \boldsymbol{P}_0
+ \int_0^h \boldsymbol{M}^{-1} [ - \nabla U(\boldsymbol{Q}(s)) - \gamma \boldsymbol{M}^{-1} \boldsymbol{P}(s) ] ( h - s) ds \nonumber \\
& \qquad + \sqrt{2 \gamma \beta^{-1}} \int_{0}^{h} (h - s) \boldsymbol{M}^{-1}
d \boldsymbol{W}(s) \label{Qcontinuous}\end{aligned}$$ and $$\begin{aligned}
& \boldsymbol{P}(h) = e^{-\gamma \boldsymbol{M}^{-1} h}
\boldsymbol{P}_0 - h \nabla U( \boldsymbol{Q}_0) - \int_{0}^{h} (h-s) \frac{\partial^2 U}{\partial \boldsymbol{q}^2}(\boldsymbol{Q}(s)) \cdot \boldsymbol{M}^{-1} \boldsymbol{P}(s) ds \nonumber \\
& \qquad + \int_0^h (\boldsymbol{I} - e^{- \gamma \boldsymbol{M}^{-1} (h-s)} ) \nabla U(\boldsymbol{Q}(s)) ds + \boldsymbol{\eta} \label{Pcontinuous}\end{aligned}$$ where we have introduced: $$\boldsymbol{\eta} = \sqrt{2 \gamma \beta^{-1}} \int_0^h e^{-\gamma \boldsymbol{M}^{-1} (h - s) } d \boldsymbol{W}(s) \text{.}$$
Write $\mathbf{Z}(t)=(\hat{\boldsymbol{Q}}(t),\hat{\boldsymbol{P}}(t))$ where $\hat{\boldsymbol{Q}}(t)$ and $\hat{\boldsymbol{P}}(t)$ represent the instantaneous configuration and momentum of the exact splitting, respectively. The exact splitting after a single step solves $$\label{hamiltonseqns}
\begin{cases}
d \hat{\boldsymbol{Q}}/dt &= \boldsymbol{M}^{-1} \hat{\boldsymbol{P}} \\
d \hat{\boldsymbol{P}}/dt &= - \nabla U(\hat{\boldsymbol{Q}})
\end{cases}$$ where $\hat{\boldsymbol{Q}}(0) = \boldsymbol{Q}_0$ and $\hat{\boldsymbol{P}}(0) = e^{-\gamma \boldsymbol{M}^{-1} h} \boldsymbol{P}_0 + \boldsymbol{\eta}$. Integrating yields, $$\begin{aligned}
& \hat{\boldsymbol{Q}}(h) = \boldsymbol{Q}_0
+ h \boldsymbol{M}^{-1} e^{-\gamma \boldsymbol{M}^{-1} h} \boldsymbol{P}_0
- \int_0^h \boldsymbol{M}^{-1} \nabla U(\hat{\boldsymbol{Q}}(s)) ( h - s) ds + h \boldsymbol{M}^{-1} \boldsymbol{\eta} \label{Qdiscrete}\end{aligned}$$ and $$\begin{aligned}
& \hat{\boldsymbol{P}}(h) = e^{-\gamma \boldsymbol{M}^{-1} h}
\boldsymbol{P}_0 - h \nabla U( \boldsymbol{Q}_0) \nonumber \\
& \qquad - \int_{0}^{h} (h-s) \frac{\partial^2 U}{\partial \boldsymbol{q}^2}(\hat{\boldsymbol{Q}}(s)) \cdot \boldsymbol{M}^{-1} \hat{\boldsymbol{P}}(s) ds
+ \boldsymbol{\eta} \label{Pdiscrete} \text{.}\end{aligned}$$
To obtain the mean-squared and mean error estimates we will use the following bounds on the second moment of the continuous solution and the exact splitting. Namely, for all $t \in [0,h]$, there exists a $C>0$ such that $$\label{momentbounds}
\E^{\boldsymbol{x}} \left\{ | \boldsymbol{Z}(t) |^2 \right\}
\vee
\E^{\boldsymbol{x}} \left\{ | \boldsymbol{Y}(t) |^2 \right\} \le C (1 + | \boldsymbol{x} |^2)$$ where $\boldsymbol{x} = (\boldsymbol{Q}_0, \boldsymbol{P}_0)$. We will prove this estimate for the exact splitting, and omit the proof for the continuous solution since it is very similar. Let $\hat{\boldsymbol{x}} = (\hat{\boldsymbol{Q}}(0), \hat{\boldsymbol{P}}(0))$. By the fundamental theorem of calculus, $$\begin{aligned}
& | \boldsymbol{Z}(t) |^2 = | \hat{\boldsymbol{x}} |^2 + 2 \int_0^t \left\langle \hat{\boldsymbol{Q}}(s), \boldsymbol{M}^{-1} \hat{\boldsymbol{P}}(s) \right\rangle ds + 2 \int_0^t \left\langle \hat{\boldsymbol{P}}(s), - \nabla U( \hat{\boldsymbol{Q}}(s) ) \right\rangle ds \end{aligned}$$ By Young’s inequality, $$\begin{aligned}
& | \boldsymbol{Z}(t) |^2 \le | \hat{\boldsymbol{x}} |^2 \\
& \qquad + \int_0^t ( | \hat{\boldsymbol{Q}}(s) | ^2 + | \boldsymbol{M}^{-1} \hat{\boldsymbol{P}}(s) |^2 ) ds
+ \int_0^t ( | \hat{\boldsymbol{P}}(s) | ^2 + | \nabla U( \hat{\boldsymbol{Q}}(s) )|^2 ) ds \end{aligned}$$ The uniform Lipschitz condition [**U1**]{} implies a linear growth condition on the potential force. Hence, there exists a constant $C>0$ such that $$\begin{aligned}
| \boldsymbol{Z}(t) |^2 \le | \hat{\boldsymbol{x}} |^2
+ C \int_0^t \left( 1 + | \boldsymbol{Z}(s) | ^2 \right) ds \end{aligned}$$ By Gronwall’s lemma it follows that, $$\begin{aligned}
| \boldsymbol{Z}(t) |^2 \le \left( | \hat{\boldsymbol{x}} |^2 + C h \right) e^{C h} \end{aligned}$$ for $t \le h$. Hence, for $h$ small enough, we obtain the desired bound on the second moment of the exact splitting.
The difference between (\ref{Qcontinuous}) and (\ref{Qdiscrete}) is, $$\begin{aligned}
& \boldsymbol{Q}(h) - \hat{\boldsymbol{Q}}(h) = \nonumber \\
& \qquad h \boldsymbol{M}^{-1} ( \boldsymbol{I} - e^{-\gamma \boldsymbol{M}^{-1} h } ) \boldsymbol{P}_0 \nonumber \\
& \qquad + \int_0^h \boldsymbol{M}^{-1} [ \nabla U( \hat{\boldsymbol{Q}}(s) ) - \nabla U( \boldsymbol{Q}(s) ) ] (h - s) ds \nonumber \\
& \qquad + \int_0^h \boldsymbol{M}^{-1} [ - \gamma \boldsymbol{M}^{-1} \boldsymbol{P}(s) ds + \sqrt{2 \gamma \beta^{-1}} d \boldsymbol{W}(s) ] ( h - s) - h \boldsymbol{M}^{-1} \boldsymbol{\eta} \label{Qerror}\end{aligned}$$ Likewise, the difference between (\ref{Pcontinuous}) and (\ref{Pdiscrete}) is, $$\begin{aligned}
& \boldsymbol{P}(h) - \hat{\boldsymbol{P}}(h) = \nonumber \\
& \qquad \int_0^h (h-s) \left[ \frac{\partial^2 U}{\partial \boldsymbol{q}^2}(\hat{\boldsymbol{Q}}(s)) \cdot \boldsymbol{M}^{-1} \hat{\boldsymbol{P}}(s) - \frac{\partial^2 U}{\partial \boldsymbol{q}^2}(\boldsymbol{Q}(s)) \cdot \boldsymbol{M}^{-1} \boldsymbol{P}(s) \right] ds \nonumber \\
& \qquad + \int_0^h (e^{-\gamma \boldsymbol{M}^{-1} (h - s) } - \boldsymbol{I} ) \nabla U( \boldsymbol{Q}(s) ) ds
\label{Perror}\end{aligned}$$ From (\ref{Qerror}) and (\ref{Perror}), it is clear that the leading term in the expectation of these differences is $O(h^2)$, while the leading term in the mean-squared sense is $O(h^{3/2})$. To bound these terms one needs the bounds on the second moments of the solution and the exact splitting provided in (\ref{momentbounds}). To estimate (\ref{Perror}) one also needs control of the Hessian of $U$. The assumed smoothness of $U$ and the uniform Lipschitz condition [**U1**]{} on the potential force provide this control: since a differentiable function is Lipschitz continuous if and only if its differential is bounded, the operator norm of the Hessian of $U$ is bounded by the Lipschitz constant of the potential force.
Variational Integrators
-----------------------
Let $L: \mathbb{R}^{2n} \to \mathbb{R}$ denote the Lagrangian obtained from the Legendre transform of the Hamiltonian $H$, given by: $$L(\boldsymbol{q}, \boldsymbol{v}) =
\frac{1}{2} \boldsymbol{v}^T \boldsymbol{M} \boldsymbol{v} - U(\boldsymbol{q}) \text{.}$$ A variational integrator is defined by a discrete Lagrangian $L_d : \mathbb{R}^n \times \mathbb{R}^n
\times \mathbb{R}^+ \to \mathbb{R}$ which is an approximation to the so-called [*exact discrete Lagrangian*]{} which is defined as: $$L_d^E(\boldsymbol{q}_0, \boldsymbol{q}_1, h) = \int_0^h L( \boldsymbol{Q}, \dot{\boldsymbol{Q}} ) dt$$ where $\boldsymbol{Q}(t)$ solves the Euler-Lagrange equations for the Lagrangian $L$ with endpoint conditions $\boldsymbol{Q}(0) = \boldsymbol{q}_0$ and $\boldsymbol{Q}(h) = \boldsymbol{q}_1$.
By passing to the Hamiltonian description, a discrete Lagrangian determines a symplectic integrator on $\mathbb{R}^{2n}$ as follows. Given $(\boldsymbol{q}_0,\boldsymbol{p}_0) \in \mathbb{R}^{2n}$, a variational integrator defines an update $(\boldsymbol{q}_1, \boldsymbol{p}_1) \in \mathbb{R}^{2n}$ by the following system of equations: $$\label{del}
\begin{cases}
\begin{array}{rcl}
\boldsymbol{p}_0 &=& -D_1 L_d(\boldsymbol{q}_0, \boldsymbol{q}_1,h) \text{,} \\
\boldsymbol{p}_1 &=& D_2 L_d(\boldsymbol{q}_0, \boldsymbol{q}_1,h) \text{.}
\end{array}
\end{cases}$$ Denote this map by $\theta_h : \mathbb{R}^{2n} \to \mathbb{R}^{2n}$, i.e., $$\theta_h: ~~ (\boldsymbol{q}_0,\boldsymbol{p}_0) \mapsto (\boldsymbol{q}_1, \boldsymbol{p}_1) \text{,}$$ where $(\boldsymbol{q}_1, \boldsymbol{p}_1)$ solve (\ref{del}). One can show that $\theta_h$ preserves the canonical symplectic form on $\mathbb{R}^{2n}$, and hence, is Lebesgue measure preserving [@MaWe2001]. By appropriately constructing $L_d$, the map $\theta_h$ can define an approximation to the flow of Hamilton’s equations for the Hamiltonian $H$. Hyperregularity of the discrete Lagrangian means that, for all $h>0$, $$\label{HyperRegularity}
| \det D_{12} L_d(\boldsymbol{q}_0, \boldsymbol{q}_1, h) | > 0,
~~ \forall~ (\boldsymbol{q}_0, \boldsymbol{q}_1) \in Q \times Q \text{.}$$
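As a sketch, the trapezoidal discrete Lagrangian $L_d(\boldsymbol{q}_0,\boldsymbol{q}_1,h)=\tfrac{h}{2}\left[L(\boldsymbol{q}_0,\boldsymbol{v})+L(\boldsymbol{q}_1,\boldsymbol{v})\right]$ with $\boldsymbol{v}=(\boldsymbol{q}_1-\boldsymbol{q}_0)/h$ lets the discrete Euler-Lagrange equations (\ref{del}) be solved explicitly, recovering the familiar velocity-Verlet map; the quadratic test potential mentioned below is an illustrative assumption:

```python
import numpy as np

def theta_h(q0, p0, h, grad_U, M_inv):
    """Variational integrator from the trapezoidal discrete Lagrangian:
    solving p0 = -D1 L_d and p1 = D2 L_d gives an explicit update."""
    p_half = p0 - 0.5 * h * grad_U(q0)   # from p0 = -D1 L_d(q0, q1, h)
    q1 = q0 + h * (M_inv @ p_half)       # q1 = q0 + h v, with M v = p_half
    p1 = p_half - 0.5 * h * grad_U(q1)   # from p1 = D2 L_d(q0, q1, h)
    return q1, p1
```

Since $\theta_h$ preserves the symplectic form, the energy error along a trajectory stays bounded rather than drifting, e.g. over thousands of steps on a harmonic potential.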
[^1]: Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012-1185 ([nawaf@cims.nyu.edu]{}).
[^2]: Applied & Computational Mathematics and Control & Dynamical Systems, Caltech, Pasadena, CA 91125 ().
---
abstract: 'We review some quantum electrodynamical effects related to the uniform acceleration of atoms in vacuum. After discussing the energy level shifts of a uniformly accelerated atom in vacuum, we investigate the atom-wall Casimir-Polder force for accelerated atoms, and the van der Waals/Casimir-Polder interaction between two accelerated atoms. The possibility of detecting the Unruh effect through these phenomena is also discussed in detail.'
address:
- '$^1$ SISSA International School for Advanced Studies, Via Bonomea 265, 34136 Trieste, Italy'
- '$^2$ INFN, Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, Italy'
- '$^3$ Dipartimento di Fisica e Chimica, Università degli Studi di Palermo and CNISM, Via Archirafi 36, I-90123 Palermo, Italy'
- '$^4$ Université Montpellier 2, Laboratoire Charles Coulomb UMR 5221- F-34095, Montpellier, France'
author:
- 'Jamir Marino$^{1,2}$, Antonio Noto$^{3,4}$, Roberto Passante$^3$, Lucia Rizzuto$^3$ and Salvatore Spagnolo$^3$'
title: 'Effects of a uniform acceleration on atom-field interactions'
---
Introduction
============
One of the most interesting phenomena in quantum field theory is the so-called Unruh effect [@Unruh76; @CHM08], stating that a uniformly accelerated detector in the Minkowski quantum vacuum experiences a thermal bath at a temperature proportional to its acceleration. In qualitative terms, this phenomenon originates from time-dependent Doppler shifts of the field detected by the accelerated detector [@Milonni04]. Unfortunately, this effect is very tiny, and it has not yet been observed experimentally. In fact, an acceleration of the order of $10^{22}$ cm/s$^2$ would be necessary to obtain Unruh radiation corresponding to a temperature of 1 K [@CHM08]. The question of the perception of the quantum vacuum in accelerated frames remains a widely debated problem. In this paper we discuss and review some effects related to a uniform acceleration of atoms in the vacuum space, in the framework of quantum electrodynamics. After considering the radiative level shifts of a uniformly accelerated atom in vacuum, we focus on the atom-wall Casimir-Polder interaction for an accelerated atom, as well as on the van der Waals/Casimir-Polder interaction between two accelerating atoms. We are interested in investigating whether [*thermal*]{} effects due to the acceleration may produce observable consequences in such physical systems. This is indeed expected because the Lamb shift and the Casimir-Polder interactions are directly related to vacuum field fluctuations [@Welton; @CP48; @PT93]. This suggests the possibility of detecting the Unruh effect through a measurement of the Casimir-Polder interactions for atoms accelerating in the vacuum space.
The paper is organized as follows. In Section 2, we review the Lamb-shift of a uniformly accelerated hydrogen atom in vacuum, in terms of vacuum fluctuations and radiation reaction field. Section 3 is devoted to the Casimir-Polder force between an accelerated atom and a conducting wall, both in the case of the scalar and the electromagnetic field. Finally, in Section 4 we investigate the atom-atom van der Waals/Casimir-Polder force between two accelerating atoms.
Energy level shifts for an accelerated hydrogen atom {#sec 2}
====================================================
Let us consider a hydrogen atom moving with uniform acceleration and interacting with the quantum electromagnetic field in the vacuum state. The Hamiltonian describing the atom-field interaction in the instantaneous inertial frame of the atom, in the multipolar coupling scheme is [@Passante98; @YZ06]
$$H(\tau)=H_{\rm A}(\tau)+H_{\rm F}(\tau)+H_{\rm I}(\tau) \label{eq:2.1} \, ,$$
with $$\begin{aligned}
&\ & H_{\rm A}(\tau)=\hbar \sum_n\wn\sigma_{nn}(\tau),\label{eq:2.2}\\
&\ & H_{\rm F}(\tau)=\hbar \sum_{\bk\j} \wk \akd\ak\frac{\rmd t}{\rmd\tau},
\label{eq:2.3}\\
&\ &H_{\rm I}(\tau)=-e\sum_{mn}\mathbf{r}_{mn}\cdot\E(\xt)\sigma_{mn}(\tau) \, ,
\label{eq: 2.4}\end{aligned}$$ where $\tau$ is the proper time and $\sigma_{\ell m}=|\ell\rangle\langle m|$, $|n\rangle$ being a complete set of atomic states with energy $\hbar\omega_n$. ${\bf \mu}=e{\bf r}$ is the atomic electric dipole moment. Also, $x=(t,\bx)$ is the space-time coordinate of the atom, $\bk$ the wave vector and $j=1,2$ the polarization index. We are interested in investigating the energy level shifts of the uniformly accelerated atom. Exploiting the general procedure in [@DDC82; @AM95], we consider the contributions of vacuum fluctuations and of the radiation reaction field (indicated respectively with the subscripts *vf* and *rr*) to the atomic level shifts of the accelerated atom. These quantities, at second order in $e$, are [@AM95] $$\begin{aligned}
(\delta E_a)_{\rm vf}=- \frac{\rmi e^2}{\hbar}
\int_{\tau_0}^{\tau}\rmd\tau'C^{\rm F}_{\ell\m}(\xt,\xtp){(\chi^{\rm A}_{\ell\m})}_a(\tau,\tau'),
\label{eq:2.5}\\
(\delta E_a)_{\rm rr}=-\frac{\rmi e^2}{\hbar}
\int_{\tau_0}^{\tau}\rmd\tau'\chi^{\rm F}_{\ell\m}(\xt,\xtp){(C^{\rm A}_{\ell\m})}_a(\tau,\tau'),
\label{eq:2.6}\end{aligned}$$ where $C_{\ell m}^{\rm F (A)}$ and $\chi_{\ell m}^{\rm F(A)}$ are the symmetric correlation function and the linear susceptibility of the field (atom), respectively. Using the appropriate statistical functions of atom and field [@Takagi], after some algebra one obtains [@Passante98] $$\begin{aligned}
{(\delta E_a)}_{\rm vf}&=\frac{e^2}{3\pi c^3}\sum_b \left| \ad \br (0)|b \rangle \right|^2 \int_0^{\infty}\rmd\omega\,\omega^3 \left(1+\frac{a^2}{c^2\omega^2} \right)\nonumber \\
&\times \coth\left(\frac{\pi c\,\omega}{a}\right) P\left(\frac{1}{\omega+\omega_{ab}}-\frac{1}{\omega-\omega_{ab}}\right)\label{eq:2.7},\\
{(\delta E_a)}_{\rm rr}&= - \frac{e^2}{3\pi c^3}\sum_b \left| \ad \br (0)|b \rangle \right|^2 \int_0^{\infty}\rmd\omega\,\omega^3\nonumber \\
&\times P\left(\frac{1}{\omega+\omega_{ab}}+\frac{1}{\omega-\omega_{ab}}\right) \, ,
\label{eq:2.8}\end{aligned}$$ where the index $a$ and the corresponding ket indicate a generic atomic state, $\omega_{ab}=\omega_a-\omega_b$, $a$ is the acceleration, and the limit $\tau-\tau_0\to\infty$ has been taken.
We first note from (\[eq:2.8\]) that the radiation reaction contribution to the energy level shift does not depend on the atomic acceleration; it is identical to that obtained in the inertial case. This result is expected: the field emitted by the atom propagates at the speed of light, so the only instant at which it can act back on the atom is the very instant it is emitted. The radiation reaction contribution is therefore not influenced by the atomic motion. As we shall discuss in the next Section, the situation changes radically in the presence of boundaries, such as a reflecting mirror. On the other hand, the contribution of vacuum fluctuations depends explicitly on the acceleration, through the [*thermal*]{} factor $\coth(\pi c\,\omega/a)$, linked to the Unruh temperature $T=\hbar a/2\pi c k_{\rm B}$, and through an extra term proportional to $a^2$. This indicates that the atomic acceleration induces observable effects in the energy shifts and hence in the Lamb shift.
The appearance of a nonthermal term proportional to $a^2$ is related to a similar term appearing in the correlation function of the electric field in the accelerated frame. It is possible to show that, for a ground-state hydrogen atom, thermal and nonthermal terms are comparable for $a\sim 10^{25}$ $\rm{cm/s^2}$ [@Passante98]. This is also the typical acceleration required to detect the Unruh effect for such a system by measuring atomic level shifts.
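The orders of magnitude quoted in this Section follow directly from the Unruh temperature $T=\hbar a/2\pi c k_{\rm B}$; a minimal numerical check, using SI constants:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
C = 2.99792458e8         # speed of light, m/s
KB = 1.380649e-23        # Boltzmann constant, J/K

def unruh_temperature(a):
    """Unruh temperature T = hbar a / (2 pi c k_B), with a in m/s^2."""
    return HBAR * a / (2.0 * math.pi * C * KB)

# a = 10^22 cm/s^2 = 10^20 m/s^2 gives T ~ 0.4 K, consistent with the
# statement that accelerations of this order correspond to T ~ 1 K
```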
The atom-wall Casimir-Polder interaction for accelerated atoms {#sec 3}
==============================================================
The same physical arguments given in the previous Section indicate that the Casimir-Polder interaction between a uniformly accelerated atom and a perfectly reflecting plate could also make the Unruh effect manifest. Corrections to the atom-wall Casimir-Polder force due to the acceleration of the atom have been calculated in the scalar-field case [@Rizzuto07]. It has been shown that such corrections are relevant only for accelerations of the order of $10^{24}$ cm/s$^2$, confirming that extremely high accelerations are necessary for a detection of the Unruh effect. This calculation can be extended to the more realistic case of a uniformly accelerated hydrogen atom interacting with the quantum electromagnetic field in the presence of a perfectly reflecting mirror [@RS09; @ZY10]. Let us consider an atom moving with uniform acceleration in a direction parallel to the mirror. In analogy with the case of an atom at rest near a plate, the atom-wall Casimir-Polder interaction is obtained by considering only the $z$-dependent terms in the expression of the energy level shift. As before, we evaluate the contributions of vacuum fluctuations and of the radiation reaction field to the energy shift of the atomic level in the presence of the conducting plate. After some lengthy algebra, one finds [@RS09] $$\begin{aligned}
{(\delta E_a)}_{\rm{vf}}^{(\mathit b.c.)}&=-\frac{1}{8\pi^2 c^3}\sum_b\left(\mu_{\ell}^{ab}\mu_m^{ba}\right)\frac{1}{(2z_0)^3}P\int_0^{\infty}\rmd\omega
K_{\ell m}(\omega;z_0,a) \nonumber \\
&\times \coth\left(\frac{\pi c\,\omega}{a}\right)\left(\frac{1}{\omega+\omega_{ab}}-\frac{1}{\omega-\omega_{ab}}\right)
\label{eq:3.1}\end{aligned}$$ and $$\begin{aligned}
{(\delta E_a)}_{\rm{rr}}^{(b.c.)}&=
\frac{1}{8\pi^2 c^3}\sum_b\left(\mu_{\ell}^{ab}\mu_m^{ba}\right)\frac{1}{(2z_0)^3}P\int_0^{\infty}\rmd\omega \nonumber \\
&\times K_{\ell m}(\omega;z_0,a)
\left(\frac{1}{\omega+\omega_{ab}}+\frac{1}{\omega-\omega_{ab}}\right) \, ,
\label{eq:3.2}\end{aligned}$$ where the superscript (b.c.) stands for *boundary conditions*, $z_0$ is the atom-wall distance and $K_{\ell m}(\omega;z_0,a)$ is a function containing a combination of sinusoidal functions, which takes into account the presence of the conducting plate [@RS09].
We now briefly comment on the results. Equation (\[eq:3.1\]) shows that the contribution of vacuum fluctuations contains not only a thermal correction through the factor $\coth(\pi c\,\omega/a)$, but also an extra term proportional to the function $K_{\ell m}(\omega;z_0,a)$, which modulates the Casimir-Polder interaction as a function of the atom-plate distance $z_0$ and of the atomic acceleration $a$. Equation (\[eq:3.2\]) shows that the radiation reaction term is now sensitive to the atomic acceleration. This behavior is not surprising: when a boundary is present, the field emitted by the atom can act back on it after a reflection from the conducting plate. Since the atom accelerates, in the time interval between the emission and the subsequent absorption of the reflected field it has moved by a distance depending on its acceleration; this gives rise to a dependence of the radiation reaction contribution on the atomic acceleration. The total atom-wall Casimir-Polder interaction for the accelerated atom is obtained by summing (\[eq:3.1\]) and (\[eq:3.2\]). It is easy to show that accelerations of the order of $10^{24}$ cm/s$^2$ are necessary to reveal effects of the acceleration on the atom-wall Casimir-Polder interaction, as in the case of the energy shift of an atom in unbounded space. This makes it very difficult to observe the effects of the acceleration through the Lamb shift or the atom-wall Casimir-Polder interaction. As we shall show in the next Section, the situation seems different for the van der Waals/Casimir-Polder interaction between two accelerated atoms.
van der Waals/Casimir-Polder interaction energy between two accelerated atoms
=============================================================================
We now discuss the effect of the uniform acceleration on the van der Waals interaction (both in the non-retarded and in the Casimir-Polder regime) between two atoms in their ground state moving parallel to each other, with the same uniform acceleration perpendicular to their separation. A first approach to this problem is to extend a procedure developed in [@GW] for the Casimir-Polder interaction at finite temperature to the case of accelerated atoms, exploiting the equivalence between temperature and acceleration expressed by the Unruh effect. Following this method one finds, in the low-acceleration limit ($a\ll\omega_0 c$), a qualitative change of the distance dependence of the interaction energy in the far zone due to the acceleration, and a correction in the near zone [@MP10]. This result suggests that investigating the behavior of the van der Waals force could give evidence of the Unruh effect.
An alternative and more fundamental approach to this problem is obtained by generalizing a heuristic model for the van der Waals interactions, already used for atoms at rest [@PT93; @PPR03; @CP97; @Salam09] and in the presence of boundary conditions [@SPR06], to the case of accelerating atoms [@NP13]. In this model the interaction energy arises from the dipolar interaction between the (instantaneous) oscillating dipole moments of the atoms, induced and spatially correlated by the zero-point electromagnetic field fluctuations. The dipole fields are treated classically, while the quantum nature of the radiation enters through the spatial correlations of the electric field related to vacuum fluctuations.
We start by calculating the electric and magnetic fields of an accelerating dipole (atom $A$) at the position of the other dipole (atom $B$) at the retarded time [@PT01]. These fields are then Lorentz-transformed to the co-moving frame of the atoms, which is a locally inertial frame. The interaction of the field emitted by atom A with atom B, related to the dipole-moment correlation and thus to the spatial correlations of vacuum field fluctuations, is then calculated in this frame and expressed in terms of physical quantities in the laboratory frame. We finally obtain the following expression for the interaction energy at time $t$ (the atoms are assumed at rest at $t=0$) [@NP13]
$$\begin{aligned}
\langle \Delta \tilde{E}\rangle &= \Delta E^{\rm r}+\frac{a^2 t}{2 c^3}\frac{\hbar c}{\pi R^3} \int _0^\infty \alpha (A; \rmi u)\alpha(B; \rmi u) \nonumber \\
&\times \left( 3 + \frac{4}{uR} +\frac{2}{u^2R^2}\right)u^2\,\exp[-2uR]\, \rmd u\, \nonumber \\
&+\frac{a^2t^2}{6c^2}\frac{\hbar c}{\pi R^2}\,\int _0^\infty \alpha (A; \rmi u)\alpha(B; \rmi u) \, \left( -1+\frac{4}{uR}\right.\nonumber \\
&+\frac{8}{u^2R^2}\left.+\frac{8}{u^3R^3}+\frac{4}{u^4R^4}\right) u^4\,\exp[-2uR]\,\rmd u \, ,
\label{eq:32}\end{aligned}$$
where $$\begin{aligned}
\Delta E^{\rm r} &= - \frac{\hbar c}{\pi R^2} \int _0^\infty \alpha (A; \rmi u)\alpha(B; \rmi u) \left[ 1 + \frac{2}{uR} + \frac{5}{u^2R^2}\right.\nonumber \\
&+\left.\frac{6}{u^3R^3}+\frac{3}{u^4R^4}\right] u^4 \exp[-2uR]\, \rmd u
\label{eq:33}\end{aligned}$$ is the well-known van der Waals potential energy for two atoms at rest [@CP48; @PT93]. $\alpha$ is the atomic dynamical polarizability and $R$ the interatomic distance.
From (\[eq:32\]) it is possible to show that in the near zone an acceleration-dependent term proportional to $R^{-5}$ adds to the usual $R^{-6}$ term, while in the far zone a term proportional to $R^{-6}$ adds to the usual $R^{-7}$ behavior; there are also acceleration-dependent corrections to the usual $R^{-6}$ (near zone) and $R^{-7}$ (far zone) terms. These corrections depend on the product $at$; thus, by waiting sufficiently long times, significant changes to the interaction energy could be observed even for reasonable values of the acceleration. For example, for $a^2t^2/c^2\simeq 0.2$ the correction term is about 10% in the near zone and 1% in the far zone; taking $t\simeq 10^{-3}$ s, this corresponds to $a\sim 10^{13}$ cm/s$^2$, an acceleration several orders of magnitude smaller than that necessary to observe the Unruh effect in the cases discussed in the previous Sections.
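The estimate above can be checked directly: setting $a^2t^2/c^2=0.2$ and $t=10^{-3}$ s gives $a=\sqrt{0.2}\,c/t$:

```python
import math

c = 2.99792458e10   # speed of light, cm/s
t = 1.0e-3          # elapsed time, s
a = math.sqrt(0.2) * c / t   # acceleration with a^2 t^2 / c^2 = 0.2
# a ~ 1.3e13 cm/s^2, several orders of magnitude below the ~1e24 cm/s^2
# needed for the single-atom and atom-wall effects of the previous Sections
```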
A different formal approach to the problem can be based on an extension of the method used in Sections \[sec 2\] and \[sec 3\] to the fourth-order in the coupling, allowing separation of vacuum fluctuations and radiation reaction contributions to the Casimir-Polder interaction between two accelerated atoms. For atoms interacting with a scalar field, we obtain, after lengthy algebra, the following expression for the contribution of vacuum fluctuations to distance-dependent energy level shifts [@MNP13] $$\begin{aligned}
{(\Delta E)}_{\rm vf} &=4\rmi\mu^4\int_{\tau_0}^\tau \rmd\tau'\int_{\tau_0}^{\tau'} \rmd\tau''\int_{\tau_0}^{\tau''} \rmd\tau'''
C^{\rm F}\left(\phi^{\rm f}(x_{\rm A}(\tau)),\phi^{\rm f}(x_{\rm B}(\tau'''))\right)
\nonumber \\
&\times \chi^{\rm F}\left(\phi^{\rm f}(x_{\rm A}(\tau')),\phi^{\rm f}(x_{\rm B}(\tau''))\right)
\chi_b^{\rm B}(\tau'',\tau''')\chi_a^{\rm A}(\tau,\tau') \, ,\end{aligned}$$ where $a$ and $b$ are respectively the states of atom A and atom B, $\mu$ is the coupling constant, $\tau$ the proper time and $\phi^{\rm f}$ the free part of the scalar field. $C^{\rm F}$ and $\chi^{\rm F}$ indicate the symmetric correlation function and the susceptibility of the field, respectively, while $\chi^{\rm A(B)}$ is the susceptibility of atom A (B). An analogous expression can be obtained for the radiation reaction contribution. From the distance-dependent part of the energy shift $\delta E_a^{\rm A}$ obtained with this method, using the appropriate statistical functions of field and atoms, we can obtain the van der Waals/Casimir-Polder interaction energy. This method also has the advantage that it can be applied to several different cases (atoms at rest, atoms in a thermal field, or moving atoms, for example) by substituting the appropriate statistical functions.
Conclusions
===========
We have reviewed several different physical effects related to the uniform acceleration of atoms in the quantum vacuum, in the framework of quantum electrodynamics. Specifically, after discussing the energy level shifts of a uniformly accelerated atom in vacuum, we have investigated the atom-wall Casimir-Polder force for an accelerated atom, as well as the van der Waals/Casimir-Polder interaction between two accelerated atoms, using different approaches. We have shown that the atomic acceleration modifies both the Lamb shift and the Casimir-Polder or van der Waals interactions. In particular, our results suggest that significant changes to the interaction energy between the two accelerated atoms could be observed even for reasonable values of the acceleration, contrary to other known manifestations of the Unruh effect.
Acknowledgments
==============
The authors gratefully acknowledge financial support by the Julian Schwinger Foundation, by Ministero dell’Istruzione, dell’Università e della Ricerca and by Comitato Regionale di Ricerche Nucleari e di Struttura della Materia.
References {#references .unnumbered}
==========
[50]{}
Unruh W G 1976 [*Phys. Rev.*]{} D [**14**]{} 870
Crispino L C B, Higuchi A and Matsas G E A 2008 [*Rev. Mod. Phys.*]{} [**80**]{} 787
Alsing P M and Milonni P W 2004 [*Am. J. Phys.*]{} [**72**]{} 1524
Welton T A 1948 [*Phys. Rev.*]{} [**74**]{} 1157
Casimir H B G and Polder D 1948 [*Phys. Rev.*]{} [**73**]{} 360
Power E A and Thirunamachandran T 1993 [*Phys. Rev.*]{} A [**48**]{} 4761
Passante R 1998 [*Phys. Rev.*]{} A [**57**]{} 1590
Yu H and Zhu Z 2006 [*Phys. Rev.*]{} D [**74**]{} 044032
Dalibard J, Dupont-Roc J and Cohen-Tannoudji C 1982 [*J. Physique*]{} [**43**]{} 1617; Dalibard J, Dupont-Roc J and Cohen-Tannoudji C 1984 [*J. Physique*]{} [**45**]{} 637
Audretsch J and Muller R 1995 [*Phys. Rev.*]{} A [**52**]{} 629
Takagi S 1988 [*Prog. Theor. Phys. Suppl.*]{} [**88**]{} 1
Rizzuto L 2007 [*Phys. Rev.*]{} A [**76**]{} 062114
Rizzuto L and Spagnolo S 2009 [*Phys. Rev.*]{} A [**79**]{} 062110
Zhu Z and Yu H 2010 [*Phys. Rev.*]{} A [**82**]{} 042108
Goedecke G H and Wood R 1999 [*Phys. Rev.*]{} A [**60**]{} 2577
Marino J and Passante R 2010 [*Quantum Field Theory under the Influence of External Conditions (QFEXT09)*]{} edited by Milton K A and Bordag M (Singapore: World Scientific) p 328
Passante R, Persico F and Rizzuto L 2003 [*Phys. Lett.*]{} A [**316**]{} 29
Cirone M and Passante R 1997 [*J. Phys.*]{} B [**30**]{} 5579
Salam A 2009 [*J. Phys.: Conf. Ser.*]{} [**161**]{} 012040
Spagnolo S, Passante R and Rizzuto L 2006 [*Phys. Rev.*]{} A [**73**]{} 062117
Noto A and Passante R 2013 [*Phys. Rev.*]{} D [**88**]{} 025041
Power E A and Thirunamachandran T 2001 [*Proc. R. Soc. Lond.*]{} A [**457**]{} 2757
Marino J, Noto A and Passante R 2013 [*in preparation*]{}
---
abstract: |
The resistivities of the dilute, strongly-interacting 2D electron systems in the insulating phase of a silicon MOSFET are the same for unpolarized electrons in the absence of magnetic field and for electrons that are fully spin polarized by the presence of an in-plane magnetic field. In both cases the resistivity obeys Efros-Shklovskii variable range hopping $\rho(T) = \rho_0
\mbox{exp}[(T_{ES}/T)^{1/2}]$, with $T_{ES}$ and $1/\rho_0$ in the two cases mapping onto each other if one applies the shift of the critical density $n_c$ reported earlier. With and without magnetic field, the parameters $T_{ES}$ and $1/\rho_0 = \sigma_0$ exhibit scaling consistent with critical behavior approaching a metal-insulator transition.
author:
- Shiqi Li
- 'M. P. Sarachik'
title: 'Resistivity of the insulating phase approaching the 2D metal-insulator transition: the effect of spin polarization'
---
Two-dimensional (2D) electron systems realized in semiconductor heterostructures and on the surface of doped semiconductor devices such as silicon MOSFETs have been intensively studied for more than half a century [@Ando1982]. Significant advances in fabrication techniques in recent years have yielded samples with greatly increased electron mobility thereby allowing access to lower electron densities, a regime where the energy of interactions between the electrons is dominant and substantially larger than their kinetic energy. Rather than exhibiting a resistivity that increases logarithmically toward infinity as the temperature is reduced [@DolanDynes] as expected within the theory of localization [@Abrahams1979], the resistivity of these strongly-interacting dilute 2D electron systems displays metallic-like behavior at low temperatures and insulating behavior as the electron density is reduced below a material-dependent critical electron density $n_{\rm c}$ (see references [@Abrahams2001; @KravchenkoReports; @Spivak2009] for reviews). The apparent metal-insulator transition and metallic phase observed in high-mobility, strongly interacting 2D electron systems is widely regarded as one of the most important unresolved problems in condensed matter physics.
Exceptionally large magnetoresistances have been reported in response to in-plane magnetic field in the vicinity of the critical electron density, $n_c$. For electron densities $n_{\rm s} >
n_{\rm c}$ on the just-metallic side of the transition, increasing the parallel magnetic field causes the resistivity to increase by several orders of magnitude at low temperatures and reach saturation at a density-dependent field $B_{\rm sat}$; for $B > B_{\rm sat}$ the temperature dependence of the resistivity is that of an insulator. Measurements have confirmed that the saturation of the sample resistivity corresponds to full spin polarization [@Okamoto1999; @Vitkalov2000]. Since the parallel magnetic field does not couple to the orbital motion of electrons in sufficiently thin 2D systems, this suggests an important role for the electron spins.
The magnetoresistance has been thoroughly investigated on the metallic side of the transition, while considerably less information has been gathered in the insulating phase. In order to better understand the effect of spin, we embarked on a detailed comparison of the resistivity of the insulating state arrived at by: (1) reducing the electron density below the critical density $n_{\rm c}$ in zero field so that the electrons are unpolarized, or (2) applying an in-plane magnetic field beyond the saturation field $B_{\rm sat}$ where the metallic behavior is suppressed and the spins are fully polarized.
It has been established in a number of experiments that the application of in-plane magnetic field causes a shift of the critical density $n_c$ [@Simonian1997; @Pudalov1997; @Mertes1999; @Jaroszynski2004; @Shashkin2001]. As further detailed below, there have been conflicting reports on the behavior of the resistivity in the insulating phase of low-disorder 2D materials, with some claiming simply-activated behavior and others claiming Efros-Shklovskii variable-range hopping in the presence of a Coulomb gap due to electron interactions [@Shklovskii1984].
In this paper we report that both in zero field and in the presence of an in-plane magnetic field sufficient to polarize all the carriers, the resistivity obeys Efros-Shklovskii variable-range hopping. Moreover, we demonstrate that the unpolarized insulator and the fully spin-polarized insulator map onto each other if one simply shifts the critical density $n_c$. This implies that the transport properties of the insulating state are the same in an unpolarized and in a completely spin-polarized system.
Measurements were performed on silicon metal-oxide-semiconductor field-effect transistors (MOSFETs) between 0.25 K and 2 K in an Oxford Heliox He-3 refrigerator in the absence of magnetic field and in a parallel field of 5 T. Similar to those used in Ref. [@Mokashi2012], the high-mobility samples used in our studies ($\mu_{\rm peak} = 3 \times 10^4 $ cm$^2$/Vs) were fabricated in a Hall bar geometry of width 50 $\mu$m and distance 120 $\mu$m between the central potential probes; the oxide thickness was 150 nm. Contact resistance was minimized by using a split-gate geometry in which thin gaps are introduced in the gate metallization so that a high electron density can be maintained near the contacts independently of the value of the electron density in the main part of the sample. Electron densities were controlled by applying a positive dc gate voltage relative to the contacts.
![\[fig:nonlinearIV\] Nonlinear current-voltage (I-V) characteristics in zero field at several electron densities in the insulating regime; $T = 0.25$ K. As the electron density approaches the critical density from the insulating side, the nonlinearity of I-V gradually fades away [@Shashkin2001]. At density $0.738 \times 10^{11}$ cm$^{-2}$ which is very close to the critical density, the nonlinearity can only be seen when plotted on an expanded scale. The inset shows the temperature dependence of resistivity at a few electron densities near the critical density in zero field.](nonlinearIV.eps){width="50.00000%"}
As shown in Fig. \[fig:nonlinearIV\], the voltage is a strongly nonlinear function of the current: it exhibits linear (ohmic) behavior at low currents and bends to a lower slope above a current that depends on electron density and temperature. This is in agreement with numerous past measurements and has variously been attributed to non-ohmic Efros-Shklovskii variable-range hopping in strong electric fields [@Marianer1992], or to the depinning of a Wigner crystal or charge-density wave [@Goldman1990; @Kravchenko1991; @D'Iorio1992; @Pudalov1993; @Pudalov1994]. The resistivity plotted in the next few figures was deduced from the low-current, linear portion of the curves measured for each $n_{\rm s}$ and $T$ [@Mason1995].
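The procedure of extracting the ohmic resistivity from the low-current portion of an I-V curve can be sketched numerically. The functional form below (an arcsinh crossover) and all parameter values are illustrative assumptions, not the measured characteristics:

```python
import numpy as np

# Illustrative nonlinear I-V: ohmic at low current, bending to a lower slope
# above a crossover current I_c (hypothetical form; the text only states the shape).
I = np.linspace(1e-12, 1e-9, 200)        # current, A
R_ohmic, I_c = 1e8, 2e-10                # assumed ohmic resistance and crossover current
V = R_ohmic * I_c * np.arcsinh(I / I_c)  # linear for I << I_c, sub-linear above

# Deduce the resistance from the low-current, linear portion only
low = I < 0.2 * I_c
slope = np.polyfit(I[low], V[low], 1)[0]
print(f"fitted ohmic resistance: {slope:.3e} ohm")
```

Restricting the fit to currents well below the crossover recovers the ohmic slope to within about a percent here.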
Figure \[fig:rhoTB0\] shows the log of the resistivity measured in zero field plotted against $T^{-\frac{1}{2}}$ for electron densities $n_{\rm s} < n_{\rm c}$. For comparison, the inset shows the same data plotted as a function of $T^{-1}$. Quite clearly, the data obey the Efros-Shklovskii form of variable-range hopping in the temperature range of the measurements (0.25 K to 2 K). Small deviations at the lowest temperature may be due to poor thermal contact of the electron system to the lattice (and thermometer).
![\[fig:rhoTB0\] Resistivity vs $T^{-\frac{1}{2}}$ in zero field for electron densities between $0.449$ and $0.663 \times 10^{11}$ cm$^{-2}$. For comparison the inset shows the data for several densities plotted vs $T^{-1}$.](rhoTB0.eps){width="60.00000%"}
Similar data obtained in a parallel field $B_{||} = 5$ T are shown on a semilog plot in Fig. \[fig:rhoTB5\], as a function of $T^{-\frac{1}{2}}$ in the main figure and as a function of $T^{-1}$ in the inset. Care was taken to check that $B_{||} = 5$ T is beyond the saturation fields $B_{\rm sat}$ at the densities we used for the measurement, so that all the spins in the sample are fully polarized. The Efros-Shklovskii form of variable-range hopping, $\rho(T)=\rho_0\exp[(T_{ES}/T)^{\frac{1}{2}}]$, provides an excellent fit to the data in parallel magnetic field.
![\[fig:rhoTB5\] Resistivity vs $T^{-\frac{1}{2}}$ in 5 T parallel magnetic field for electron densities between $0.488$ and $0.654 \times 10^{11}$ cm$^{-2}$. The inset shows the data for several densities plotted vs $T^{-1}$.](rhoTB5.eps){width="57.00000%"}
In a doped semiconductor at relatively low temperature, where there is not enough thermal energy to activate electrons across the mobility edge to conducting states in the impurity band, charge transport occurs via thermally activated hopping between localized states, obeying the expression $$\rho(T) = \rho_0 \exp[(T_{ES}/T)^{\alpha}].$$ Here the exponent $\alpha = 1$ for nearest-neighbor hopping, $\alpha = \frac{1}{3}$ for Mott variable-range hopping in two dimensions [@Mott1969], and $\alpha = \frac{1}{2}$ for Efros-Shklovskii variable-range hopping in the presence of a Coulomb gap in the density of states associated with electron-electron interactions [@Efros1975; @Shklovskii1984].
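In practice the three exponents are distinguished by testing which power of $1/T$ linearizes $\ln\rho$, as in Figs. \[fig:rhoTB0\] and \[fig:rhoTB5\]. A minimal sketch with synthetic Efros-Shklovskii data (all parameter values are assumptions for illustration):

```python
import numpy as np

# Synthetic Efros-Shklovskii data: rho = rho0 * exp((T_ES/T)**0.5)
# (illustrative parameters, not the measured values)
rho0, T_ES = 1.0, 10.0
T = np.linspace(0.25, 2.0, 40)            # temperature range of the experiment, K
ln_rho = np.log(rho0) + (T_ES / T) ** 0.5

def linearity(alpha):
    """R^2 of a straight-line fit of ln(rho) vs T**(-alpha)."""
    x = T ** (-alpha)
    slope, intercept = np.polyfit(x, ln_rho, 1)
    resid = ln_rho - (slope * x + intercept)
    return 1.0 - resid.var() / ln_rho.var()

for alpha in (1.0, 0.5, 1.0 / 3.0):
    print(f"alpha = {alpha:.3f}: R^2 = {linearity(alpha):.5f}")
```

For data generated with $\alpha = \frac{1}{2}$, only the $T^{-\frac{1}{2}}$ axis gives a perfectly straight line; over a narrow temperature window, however, the competing exponents can be hard to separate, which is one source of the conflicting reports discussed below.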
There have been a number of studies of the resistivity of silicon MOSFETs in the insulating phase. The temperature dependence of the resistivity in zero field was attributed to Efros-Shklovskii variable-range hopping with $\alpha = \frac{1}{2}$ down to $\approx 0.3$ K by Mertes $et~al.$ [@Mertes1999] and Jaroszynski $et~al.$ [@Jaroszynski2004], and over a broader range extending to lower temperatures by Mason $et~al.$ [@Mason1995]. The zero-field resistivity was found to obey the same form in a single two-dimensional layer of $\delta$-doped GaAs/Al$_x$Ga$_{1-x}$As [@Shlimak2000]. By contrast, Arrhenius-type activated behavior ($\alpha = 1$) was claimed for silicon in zero field by Shashkin $et~al.$ [@Shashkin2001].
Measurements in the presence of an in-plane magnetic field have yielded various different results: Shashkin $et~al.$ fitted data for silicon at intermediate temperatures and densities near the transition to the activated form with $\alpha = 1$ [@Shashkin2001]; also in silicon, Mertes $et~al.$ applied a high parallel magnetic field $B_{||} = 10.8$ T and found a larger exponent $\alpha = 0.7$ [@Mertes2001]. For a single two-dimensional layer of $\delta$-doped GaAs/Al$_x$Ga$_{(1-x)}$As in parallel fields of 8 T and 6 T, Shlimak $et~al.$ reported an exponent of 0.8 [@Shlimak2000].
As shown in Fig. \[fig:rhoTB0\] and Fig. \[fig:rhoTB5\] above, the data reported here in zero field and in 5 T parallel field are both consistent with Efros-Shklovskii variable-range hopping, $\rho(T)=\rho_0\exp[(T_{ES}/T)^{\frac{1}{2}}]$ [@footnote]. The parameters $T_{ES}(0)$ in zero field and $T_{ES}(5\,{\rm T})$ in 5 T parallel field are shown in Fig. \[fig:TES\] as a function of electron density. Both $T_{ES}(0)$ and $T_{ES}(5\,{\rm T})$ decrease with increasing electron density and gradually approach zero. While $T_{ES}(0)$ extrapolates to zero at the critical density $n_{\rm c}$ in the absence of magnetic field, $T_{ES}(5\,{\rm T})$ obtained in a 5 T parallel field extrapolates to zero at a different electron density, which we designate as $n_{\rm c}(5\,{\rm T}) = n_{\rm c}(B_{\rm sat})$. The inset to Fig. \[fig:TES\] shows that the prefactor $\rho_0$ increases sharply with increasing electron density.
![\[fig:TES\] $T_{ES}(0)$ in zero field and $T_{ES}(5\,{\rm T}) = T_{ES}(B_{\rm sat})$ in an in-plane field of 5 T vs electron density. The inset shows the parameter $\rho_0$ as a function of electron density. Closed circles and open circles refer to data in zero field and $5$ T, respectively.](TES.eps){width="55.00000%"}
The critical density $n_{\rm c}$ in zero field is generally determined as the density at which the temperature derivative of the resistivity, $d\rho/dT$, changes sign; an example of such a “separatrix” between metallic and insulating behavior can be seen in the inset to Fig. \[fig:nonlinearIV\]. The authors of Ref. [@Shashkin2001] have shown that in zero field the density at which the nonlinearity of the current-voltage (I-V) characteristics vanishes yields the same critical density. Using both protocols, the critical density in zero field was estimated to be $n_{\rm c} = (0.782 \pm 0.014) \times 10^{11}$ cm$^{-2}$ for our sample. We note that this is consistent with the critical density inferred from thermoelectric power measurements on the same sample [@Mokashi2012].
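The sign-change protocol can be illustrated with a toy calculation; the linear density dependence of $d\rho/dT$ assumed below is a hypothetical model, not the measured data:

```python
import numpy as np

# Locate the separatrix: the density at which d(rho)/dT changes sign.
# Toy model: drho/dT proportional to (n_s - n_c); all numbers illustrative,
# with n_c set to the value estimated in the text.
n_c_true = 0.782                           # units of 1e11 cm^-2
densities = np.linspace(0.70, 0.86, 9)
T = np.linspace(0.25, 2.0, 20)             # K
curves = {n: 100.0 + 50.0 * (n - n_c_true) * T for n in densities}

# Sign of the fitted slope drho/dT for each density
signs = np.array([np.sign(np.polyfit(T, rho, 1)[0]) for rho in curves.values()])
flip = np.where(np.diff(signs) != 0)[0][0]   # critical density bracketed here
n_c_est = 0.5 * (densities[flip] + densities[flip + 1])
print(f"estimated n_c ~ {n_c_est:.3f} x 1e11 cm^-2")
```

The estimate is limited by the spacing of the density grid, which is why the complementary I-V-nonlinearity protocol is a useful cross-check.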
An estimate of the critical density in a $5$ T field can be obtained by determining the density at which the nonlinearity in the I-V characteristic vanishes [@Shashkin2001]. Using this procedure, we obtain an estimate for the critical electron density at $B_{\rm sat}$ of $n_{\rm c}(B_{\rm sat}) = (0.985 \pm 0.014) \times 10^{11}$ cm$^{-2}$ for our sample.
In Efros and Shklovskii’s theory of variable-range hopping conduction for strongly interacting electrons, the parameter $T_{ES}
\propto \frac{e^2}{\epsilon \xi}$, where $\epsilon$ and $\xi$ are the density-dependent dielectric constant and localization length, respectively [@Shklovskii1984]. Moreover, $\epsilon(n_{\rm s}) \propto [n_{\rm c}/(n_{\rm c}-n_{\rm s})]^{\rm \zeta}$ and $\xi(n_{\rm s}) \propto
[n_{\rm c}/(n_{\rm c}-n_{\rm s})]^{\rm \nu}$ as electron density $n_{\rm s}$ approaches the critical density $n_{\rm c}$ from the insulating side, so that $T_{ES}$ obeys the critical form:
$$T_{ES} = A[(n_{\rm c}-n_{\rm s})/n_{\rm c}]^{\beta}$$
where $\beta = (\zeta + \nu)$.
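Given the critical form above, $\beta$ follows from the slope of a log-log plot of $T_{ES}$ against the reduced density. A sketch with synthetic data (the amplitude $A$ and density grid are assumptions; $\beta = 2.5$ and $n_{\rm c} = 0.782$ match the values quoted in the text):

```python
import numpy as np

# T_ES = A * ((n_c - n_s)/n_c)**beta  -- extract beta from a log-log fit.
A, beta, n_c = 20.0, 2.5, 0.782          # T_ES scale (K), exponent, n_c in 1e11 cm^-2
n_s = np.linspace(0.45, 0.66, 10)        # insulating-side densities, 1e11 cm^-2
delta = (n_c - n_s) / n_c                # reduced density
T_ES = A * delta ** beta

# On a log-log scale the power law is a straight line of slope beta
beta_fit, lnA_fit = np.polyfit(np.log(delta), np.log(T_ES), 1)
print(f"fitted exponent beta = {beta_fit:.2f}")
```

Note how sensitive $\delta = (n_{\rm c}-n_{\rm s})/n_{\rm c}$ is to the assumed $n_{\rm c}$ for densities close to the transition; this is why an independent determination of $n_{\rm c}$ for each field is essential before fitting.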
![\[fig:compare\] The parameters $T_{ES}$ and $1/\rho_0 = \sigma_0$ versus $[(n_{\rm c}-n_{\rm s})/n_{\rm c}]$ on a log-log scale. The filled circles denote the measurements taken in zero field (for unpolarized electrons); the open circles denote measurements taken in a 5T in-plane magnetic field where the electrons are fully spin-polarized. The value of $n_{\rm c}$ is deduced for each case as described in the text: $n_{\rm c}(B=0) = (0.782 \pm 0.014) \times 10^{11}$ cm$^{-2}$, $n_{\rm c}(B_{\rm sat}) = (0.985 \pm 0.014) \times 10^{11}$ cm$^{-2}$.](compare.eps){width="50.00000%"}
Using the values estimated for $n_{\rm c}(0)$ and $n_{\rm c}(B_{\rm sat})$, we now plot $T_{ES}(0)$ in zero field (filled circles) and $T_{ES}(B_{\rm sat})$ in 5 T parallel field (open circles) as a function of $(n_{\rm c}-n_{\rm s})/n_{\rm c}$ and $(n_{\rm c}(B_{\rm sat})-n_{\rm s})/n_{\rm c}(B_{\rm sat})$, respectively, on a log-log scale in Fig. \[fig:compare\]. $T_{ES}(0)$ in the unpolarized state and $T_{ES}(B_{\rm sat})$ in the fully polarized state at 5 T lie on the same curve. Relative to the appropriate critical density for each case, the Efros-Shklovskii parameter $T_{ES} \propto \frac{1}{\epsilon \xi}$ approaches the metallic phase with the same critical exponent, $T_{ES} = A[(n_{\rm c}-n_{\rm s})/n_{\rm c}]^{\beta}$ with $\beta = 2.5 \pm 0.1$. Interestingly, the prefactor $1/\rho_0 = \sigma_0$ is also consistent with critical behavior.
Although our data and many past experiments are consistent with the occurrence of a transition, it is noteworthy and puzzling that the value we have obtained for the critical exponent $\beta$ is substantially different from the smaller exponents in the vicinity of $1.6$ reported in a number of earlier studies [@exp1.6]. Our analysis is based on data obtained quite far from the transition and entails two parameters, $\rho_0$ and $T_{ES}$, while earlier analyses using a single parameter $T_0$ were based on data obtained near, and on both sides of, the presumed critical point. A careful examination shows that the portion of our data that is near the transition is consistent with one-parameter scaling and a smaller exponent of $\beta = 1.7$, while the portion of the published data that is further from the transition can be scaled using two parameters, $T_{ES}$ and $\rho_0$, with $\beta$ much closer to our value of $2.5$ [@footnote2]. Thus, there is no discrepancy between data sets, and the difference may be due to the relative closeness to the critical regime.
As suggested in Ref. [@Mokashi2012], there may indeed be two quantum critical points in play: $n_{\rm c}$, driven by disorder, and a disorder-independent, universal interaction-driven critical point $n_i$. They are different in principle, but so close to each other in the low-disorder samples of our studies that they have not been separately identified experimentally. The scaling of $T_{ES}$ and $\rho_0$ is determined using data obtained in the insulating phase only, and it is by no means clear which (if either) critical point it refers to. In the range of densities just below the transition where the temperature derivative of the resistivity, $d\rho/dT$, is negative, indicating insulating behavior, the resistivity is a very weak function of temperature, so that it cannot be reliably fit to the Efros-Shklovskii form. It is thus possible that there is an intermediate phase in this region that gives rise to different exponents upon entering and leaving this phase as the density is increased. Additional careful studies, to lower temperatures and with samples of yet lower disorder, would be of great interest in this range of densities.
To summarize, we have measured the resistivity of the dilute, strongly interacting 2D electron system in a silicon MOSFET in the insulating phase arrived at by (1) reducing the electron density in the absence of magnetic field, which results in no spin polarization, or (2) applying a 5 T in-plane field, which results in complete spin polarization. For both cases, the resistivity obeys Efros-Shklovskii variable-range hopping with parameters $T_{ES}$ and $\rho_0$ that are consistent with critical behavior approaching a metal-insulator transition. The sole effect of spin polarization is a simple shift of the critical density. The fact that the transport properties of the insulating state are the same in two systems with different spin configurations is noteworthy and warrants further theoretical attention.
Useful comments were provided by Steve Kivelson, Boris Spivak, Vladimir Dobrosavljevic and Dragana Popovic. We thank Boris Shklovskii, Dietrich Belitz and Sergey Kravchenko for numerous discussions, valuable insights, and their critical reading of this manuscript. This work was supported by the National Science Foundation grant DMR-1309008 and the Binational Science Foundation Grant 2012210.
[99]{} For example, see T. Ando, A. B. Fowler and F. Stern, Rev. Mod. Phys. [**54**]{}, 437 (1982). G. J. Dolan and D. D. Osheroff, Phys. Rev. Lett. [**43**]{}, 721 (1979); D. J. Bishop, D. C. Tsui, and R. C. Dynes, Phys. Rev. Lett. [**44**]{}, 1153 (1980); M. J. Uren, R. A. Davies, and M. Pepper, J. Phys. C [**13**]{}, L985 (1980). E. Abrahams, P. W. Anderson, D. C. Licciardello, and T. V. Ramakrishnan, Phys. Rev. Lett. [**42**]{}, 673 (1979). E. Abrahams, S. V. Kravchenko, and M. P. Sarachik, Rev. Mod. Phys. [**73**]{}, 251 (2001). S. V. Kravchenko and M. P. Sarachik, Rep. Prog. Phys. [**67**]{}, 1 (2004). B. Spivak, S. V. Kravchenko, S. A. Kivelson, and X. P. A. Gao, Rev. Mod. Phys. [**82**]{}, 1743 (2009). Tohru Okamoto, Kunio Hosoya, Shinji Kawaji, and Atsuo Yagi, Phys. Rev. Lett. [**82**]{}, 3875 (1999). S. A. Vitkalov, Hairong Zheng, K. M. Mertes, M. P. Sarachik, and T. M. Klapwijk, Phys. Rev. Lett. [**85**]{}, 2164 (2000); S. A. Vitkalov, M. P. Sarachik and T. M. Klapwijk, Phys. Rev. B [**64**]{}, 073101 (2001). D. Simonian, S. V. Kravchenko, M. P. Sarachik, and V. M. Pudalov, Phys. Rev. Lett. [**79**]{}, 2304 (1997). V. M. Pudalov, G. Brunthaler, A. Prinz, and G. Bauer, JETP Lett. [**65**]{}, 932 (1997). K. M. Mertes, D. Simonian, M. P. Sarachik, S. V. Kravchenko, and T. M. Klapwijk, Phys. Rev. B [**60**]{}, R5093 (1999). A. A. Shashkin, S. V. Kravchenko, and T. M. Klapwijk, Phys. Rev. Lett. [**87**]{}, 266402 (2001). J. Jaroszynski, Dragana Popovic, and T. M. Klapwijk, Phys. Rev. Lett. [**92**]{}, 226403 (2004). B. I. Shklovskii and A. L. Efros, Electronic Properties of Doped Semiconductors, Solid State Series Vol. 45 (Springer-Verlag Berlin Heidelberg, 1984). A. Mokashi, S. Li, Bo Wen, S. V. Kravchenko, A. A. Shashkin, V. T. Dolgopolov, and M. P. Sarachik, Phys. Rev. Lett. [**109**]{}, 096405 (2012). S. Marianer and B. I. Shklovskii, Phys. Rev. B [**46**]{}, 13100 (1992). V. J. Goldman, M. Santos, M. Shayegan, and J. E. Cunningham, Phys. Rev. Lett. 
[**65**]{}, 2189 (1990). S. V. Kravchenko, V. M. Pudalov, J. Campbell, and M. D’Iorio, JETP Lett. [**54**]{}, 532 (1991). M. D’Iorio, V. M. Pudalov, and S. G. Semenchinsky, Phys. Rev. B [**46**]{}, 15992 (1992). V. M. Pudalov, M. D’Iorio, S. V. Kravchenko, and J. W. Campbell, Phys. Rev. Lett. [**70**]{}, 1866 (1993). V. M. Pudalov and S. T. Chui, Phys. Rev. B [**49**]{}, 14062 (1994). W. Mason, S. V. Kravchenko, G. E. Bowker, and J. E. Furneaux, Phys. Rev. B [**52**]{}, 7857 (1995). N. F. Mott, Philosophical Magazine [**19**]{}, 835 (1969). A. L. Efros and B. I. Shklovskii, J. Phys. C [**8**]{}, L49 (1975). I. Shlimak, S. I. Khondaker, M. Pepper, and D. A. Ritchie, Phys. Rev. B [**61**]{}, 7253 (2000). K. M. Mertes, Hairong Zheng, S. A. Vitkalov, M. P. Sarachik, and T. M. Klapwijk, Phys. Rev. B [**63**]{}, 041101(R) (2001). Acceptable fits to our data can be obtained to the expression $\rho = A T^x\exp[(T_{ES}/T)^p]$ using a temperature-dependent prefactor. This yields a strongly negative value of $x$ for $p=1$, a small positive value of $x$ consistent with $0$ for $p=1/2$, and $x \approx 2$ for $p=1/3$. Although little is known about the range of acceptable values for $x$, these numbers suggest that $p=1/2$ and $x \approx 0$ (a temperature-independent prefactor) is a reasonable choice. Data over a broader range of temperature could restrict the range of acceptable fits. S. V. Kravchenko, Whitney E. Mason, G. E. Bowker, J. E. Furneaux, V. M. Pudalov, and M. D’Iorio, Phys. Rev. B [**51**]{}, 7038 (1995); S. V. Kravchenko, D. Simonian, M. P. Sarachik, Whitney Mason, and J. E. Furneaux, Phys. Rev. Lett. [**77**]{}, 4938 (1996); Dragana Popovic, A. B. Fowler, and S. Washburn, Phys. Rev. Lett. [**79**]{}, 1543 (1997); X. G. Feng, Dragana Popovic, and S. Washburn, Phys. Rev. Lett. [**83**]{}, 368 (1999). The value $2.5$ is very close to the critical exponent $2.3$ expected for a classical percolation transition.
However, the disorder potential in Si is short-range, so the possibility of a percolation transition can be safely ruled out.
**Quantum Hall Transition in an Array of Quantum Dots**
K. Ziegler
Max-Planck-Institut für Physik Komplexer Systeme
Außenstelle Stuttgart, Postfach 800665, D-70506 Stuttgart, Germany
Abstract:
A two-dimensional array of quantum dots in a magnetic field is considered. The electrons in the quantum dots are described as unitary random matrix ensembles. The strength of the magnetic field is such that there is half a flux quantum per plaquette. This model exhibits the Integer Quantum Hall Effect. For $N$ electronic states per quantum dot the limit $N\to\infty$ can be solved by a saddle point integration of a supersymmetric field theory. The effect of level statistics on the density of states and the Hall conductivity is compared with the effect of temperature fluctuations.

PACS Nos.: 71.55Jv, 73.20Dx, 73.40Hm

We consider a two-dimensional array of quantum dots in a homogeneous magnetic field perpendicular to the array. A quantum dot in an array is a complex finite system of electrons, subject to strong Coulomb interaction and a confining potential. Even if the number of electrons is small, there is a large number of electronic states in a given energy interval. Therefore, we are forced to use a statistical description of the quantum dot. A typical feature of such a complicated non-integrable system is level repulsion. The latter, also found in other complex many-particle systems like atomic nuclei \[1\], atoms \[2\] or metallic particles \[3\], can be conveniently described by random matrix ensembles \[4\]. Since the magnetic field breaks the time-reversal invariance in the dot, an appropriate model is the Gaussian unitary ensemble (GUE). Electrons can travel in the array of quantum dots due to tunneling between neighboring dots. On the square array, which will be considered in this article, the tunneling rates are $t$ and $t'$ for nearest and next nearest neighbors, respectively (cf. Fig.1). The coupling between the individual quantum dots due to these tunneling processes is weak. This allows us to assume that the statistical occupation of the electronic states in each dot is uncorrelated between different dots. 
Thus the quantum dots can be represented by independent random matrix ensembles. Moreover, we also assume for simplicity that the tunneling processes are independent, i.e., the tunneling electrons do not interact with each other.
For very weak tunneling rates the array should behave like an insulator because of the fluctuations of the energy levels. One would expect that for increasing tunneling rates a metallic regime can be reached where the array becomes conducting. However, due to the statistical fluctuations of the energy levels in the dots, the effect of Anderson localization must play a crucial role in the array. Anderson localization prevents a two-dimensional system from becoming metallic, at least if no or only a weak magnetic field is present \[5\]. On the other hand, in the two-dimensional electron gas in a homogeneous magnetic field, quantum Hall transitions (QHT) have been observed which are accompanied by delocalized electronic states \[6\]. A QHT occurs if a gap opens in a band of electronic states. This phenomenon is known, for instance, from electrons which are subject to a homogeneous magnetic field [*and*]{} a periodic potential \[7\]. Depending on the magnetic field, the electrons form several subbands where each subband contributes $e^2/h$ to the Hall conductivity \[8,9,10\]. As an approximation of the periodic potential one can use a tight-binding model where the lattice constant is given by the period of the potential. In this article we will study the effect of the statistics of energy levels and the effect of thermal fluctuations on the QHT.
There are two different approaches to the transport in quantum dots. One is based on the S-matrix, the other one on the Hamiltonian. The former is very useful for numerical simulations because it describes directly the reflection and transmission of the electrons through the quantum dots \[11,12\]. The latter, however, requires the application of linear response theory to get a conductivity via Kubo’s formula. In this article the Hamiltonian representation will be used. The effective Hamiltonian of an array of quantum dots reads as a quadratic form $\sum{\hat H}_{r,r'}^{\a,\a'}c_{r}^\a c^{\a'\dagger}_{r'}$ in the fermion creation and annihilation operators $c^\dagger$, $c$ with the matrix elements $${\hat H}_{r,r'}^{\a,\a'}=H_{r}^{\a,\a'}\delta_{r,r'}+H'_{r,r'}
\delta_{\a,\a'}+V_r\delta_{r,r'}\delta_{\a,\a'},
\eqno (1)$$ where $\a,\a'=1,...,N$ label the $N$ electronic states in the quantum dots and $r$ and $r'$ label the positions of the quantum dots in the two-dimensional array. In general, tunneling between all $N$ states should be allowed with some probability, depending exponentially on the energy of the states $\a$ and $\a'$. Including this would require a detailed knowledge of these states. Therefore, we make the simplifying assumption that there is tunneling only between states with the same $\a$ at nearest or next nearest neighbor dots, with fixed tunneling rates. The distance between neighboring dots is measured in units of $(\phi_0/2B)^{1/2}$. Typical distances are $a=100\ldots500$ nm \[13\]. The magnetic field for the creation of one flux quantum per plaquette is $B=\phi_0/a^2\approx 0.016\ldots0.4$ T. This regime is accessible in natural crystals ($a\approx 0.5$ nm) only with astronomical magnetic fields.
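The quoted field scale can be checked directly from $B=\phi_0/a^2$ with the single-electron flux quantum $\phi_0 = h/e$:

```python
# Field giving one flux quantum per plaquette, B = phi_0 / a^2,
# for the dot spacings quoted in the text (a = 100...500 nm).
h = 6.62607015e-34      # Planck constant, J s
e = 1.602176634e-19     # elementary charge, C
phi_0 = h / e           # single-electron flux quantum, Wb

for a_nm in (100, 500):
    a = a_nm * 1e-9     # spacing in meters
    print(f"a = {a_nm} nm: B = {phi_0 / a**2:.3f} T")
```

This reproduces the stated range of roughly 0.016 T to 0.4 T, and with $a\approx 0.5$ nm the same formula gives fields of order $10^4$ T, i.e. the "astronomical" values mentioned for natural crystals.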
The electron can statistically occupy states inside the quantum dot, which are represented by the matrix elements $H^{\a,\a'}_r$. $H$ is the $N\times N$ Hermitian Hamiltonian ($H^\dagger=H$) of a quantum dot with $N^2$ statistically independent matrix elements. They are Gaussian distributed with zero mean and $\langle H_r^{\alpha \alpha'} H_{r'}^{\alpha''\alpha'''}
\rangle =(g/N)\delta^{\alpha\alpha'''}\delta^{\alpha'\alpha''}\delta_{r,r'}$. Here $g$ is the strength of the level fluctuations. It depends on the strength of the interaction between the electrons inside the dot. Therefore, $g$ increases with the number of electrons per dot and with increasing confinement.
The tunneling is represented by the Hamiltonian $H'$. This reads in Landau gauge (with $r=(x,y)$) for flux $\phi$ per plaquette $$H'_{r,r'}=te^{2i\pi y\phi/\phi_0}\delta_{r',r+e_x}+t\delta_{r',r+e_y}
\pm it'e^{2i\pi y\phi/\phi_0}\delta_{r',r+e_x\pm e_y}+h.c.
\eqno (2)$$ For the special case of half a flux quantum per plaquette ($\phi=\phi_0/2$) the phase factor in (2) is real and only changes sign between nearest neighbors in the $y$-direction. Finally, the potential term $V_r$ represents an additional external (e.g. electric) field. Here we consider a staggered chemical potential $V_r=(-1)^{x+y}\mu$ which opens a gap $2\mu$ in the spectrum of the electrons, as will be explained below \[14\]. It is probably difficult to implement a staggered potential in a real sample of quantum dots. However, the parameter $\mu$ plays only the role of a gap, which could also be created by other means.
We choose for the tunneling rate $t=1$. Therefore, $\mu$, $t'$ are measured in units of $t$ and $g$ is measured in units of $t^2$. If we identify fermions with the four corners of the unit cell (Fig.1) the tunneling matrix $H'$ can be diagonalized by a Fourier transformation. This gives a $4\times4$ matrix in Fourier space. $H$, the Hamiltonian of a dot, is a diagonal matrix with respect to the four corners in the sublattice representation $H=(H_{1}^{\a,\a'},H_{2}^{\a,\a'},H_3^{\a,\a'},H_{4}^{\a,\a'})$. A similar model with correlated randomness $H_1^{\a,\a'}=H_3^{\a,\a'}
=-H_2^{\a,\a'}=-H_4^{\a,\a'}$ was considered in Ref. \[15\].
We begin the discussion of the model with the analysis of an array where the interaction of the electrons inside the quantum dots is neglected. It can be understood as a tight-binding model for non-interacting electrons in a metal with some electronic bands in a magnetic field \[14,16\]. The Fourier components of $H'$ can be expanded around the four nodes $k=(\pm\pi,\pm\pi)$ for $k=(\pm\pi,\pm\pi)+ap$ with small $p$ vectors. After a global orthogonal transformation the Hamiltonian reads $$H''(p)=2\pmatrix{
\mu-t'&ip_x-p_y&-2t'(p_x+p_y)&0\cr
-ip_x-p_y&-\mu+t'&0&-2t'(p_x-p_y)\cr
-2t'(p_x+p_y)&0&\mu+t'&p_y+ip_x\cr
0&-2t'(p_x-p_y)&p_y-ip_x&-\mu-t'\cr
}$$ $$\equiv\pmatrix{
H''_{11}&H''_{12}\cr
H''_{21}&H''_{22}\cr
}.
\eqno (3)$$ The last equation combines the $4\times 4$–structure to a $2\times 2$–structure with $2\times2$ block matrices $H''_{ij}$. Neglecting terms $O(p^2)$ the Green’s function $({\hat H}+i\omega)^{-1}$ decays into a diagonal block structure $${\hat G}(i\omega)=\pmatrix{
(H''_{11}{\bf 1}_N+h_1+i\omega
)^{-1}&0\cr
0&(H''_{22}{\bf 1}_N+h_1+i\omega
)^{-1}\cr
}
\eqno (4)$$ with the diagonal matrix $h_1=(H_1+H_3,H_2+H_4)$. Thus the diagonal elements are statistically independent. ${\bf 1}_N$ is the $N\times N$–unit matrix. It is interesting to notice that the matrices $H''_{jj}=m_j\sigma_z+
i\nabla_x\sigma_x\mp i\nabla_y\sigma_y$ represent two independent two-dimensional Dirac Hamiltonians with masses $m_1=\mu-t'$ and $m_2=\mu+t'$, respectively.
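As a quick consistency check, the momentum-space form of such a block, $m\sigma_z + p_x\sigma_x - p_y\sigma_y$, indeed has the relativistic spectrum $\pm\sqrt{m^2+p^2}$ (the numerical values below are arbitrary):

```python
import numpy as np

# Eigenvalues of a 2x2 Dirac block m*sigma_z + p_x*sigma_x - p_y*sigma_y
# (momentum-space form; illustrative check of the +/- sqrt(m^2 + p^2) spectrum).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

m, px, py = 0.7, 0.3, 0.4
H = m * sz + px * sx - py * sy
eigs = np.sort(np.linalg.eigvalsh(H))   # Hermitian eigenvalues, ascending
print(eigs)
```

The gap of the block closes only when the corresponding mass vanishes, which is what drives the transition discussed below.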
The current density in a Dirac model can be calculated from the response to an external vector potential $q_y$ \[17\]. The introduction of this vector potential is equivalent to a change of the boundary conditions, a concept extensively used in numerical investigations of Anderson localization \[18\]. The response to the vector potential leads to the Hall conductivity $\sigma_{xy}$ in terms of Green’s functions. We obtain for $q_y\sim0$ the expression \[14,15\] $$\sigma_{xy}\approx i{\sum_{r,r',r''}}'\int Tr[\sigma_x{\hat G}_{rr'}(E-i\om)
{\hat G}_{r'r''}(E-i\om)\sigma_y{\hat G}_{r''r}(E-i\om)]{d\omega\over2\pi}.
\eqno (5)$$ Here $\sum'$ is the sum normalized with the number of quantum dots in the array and the number of energy levels $N$. If there is only one electron per dot the energy spectrum has discrete levels which are well–separated. For instance, with a harmonic oscillator potential for the dot we have $E_n=\hbar\omega_p(n+1/2)$. The separation of the energy levels in the single electron case allows us to neglect all levels with $n>0$. Consequently, there is no statistics of energy levels and we can write $\dm=0$. For the Hall conductivity in units of $e^2/h$ we find $$\sigma_{xy}=(1/2)[{\rm sign}(m_1)\Theta(|m_1|-|E|)
+{\rm sign}(m_2)\Theta(|m_2|-|E|)],
\eqno (6)$$ where $\Theta$ is the Heaviside step function. This result correctly reflects the qualitative behavior of the Hall conductivity at the QHT: the Hall conductivity of the original lattice fermion problem is the sum of the Hall conductivities from the light Dirac mass ($m_1$) and the heavy Dirac mass ($m_2$), such that the total $\sigma_{xy}$ has a jump from 0 to 1 if the light mass changes sign (i.e., exchange of particles and holes in the Dirac model). Thus the Dirac fermions, together with the Hall conductivity of Eq. (5), represent a simple picture for a Hall transition. Special cases are $\mu=0$, which gives $\sigma_{xy}=0$, and the (unrealistic) case $t'=0$ with $\sigma_{xy}=({\rm sign}(\mu)/2\pi)\Theta(|m_1|-|E|)$. The sharp step-like QHT is only possible in an ideal system of non-interacting lattice electrons at zero temperature. In order to compare with real systems we have to include the statistical fluctuations of the energy levels as well as thermal fluctuations. The latter are taken into account by replacing the integral over $\omega$ in (5) by a summation over discrete Matsubara frequencies $\omega_n=(2n+1)\pi T$ ($n=0,\pm1,\pm2,...$). This leads to a thermal broadening of the step-like behavior of $\sigma_{xy}$.
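Equation (6) can be evaluated directly; the sketch below (with illustrative parameter values) shows the jump of $\sigma_{xy}$ from 0 to 1 as the light mass $m_1=\mu-t'$ changes sign:

```python
import numpy as np

def sigma_xy(E, mu, tp):
    """Hall conductivity in units of e^2/h from Eq. (6): sum of the two mass terms."""
    m1, m2 = mu - tp, mu + tp                       # light and heavy Dirac masses
    step = lambda m: np.sign(m) * (abs(m) > abs(E))  # sign(m) * Theta(|m| - |E|)
    return 0.5 * (step(m1) + step(m2))

# At E = 0 with t' = 0.3, sigma_xy jumps when mu crosses t' (m1 changes sign)
E, tp = 0.0, 0.3
print(sigma_xy(E, 0.2, tp), sigma_xy(E, 0.4, tp))
```

For $\mu < t'$ the two mass contributions cancel ($\sigma_{xy}=0$), while for $\mu > t'$ they add to one quantum of Hall conductance.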
The effect of the level fluctuations is evaluated by averaging $\sigma_{xy}$ over the random matrix elements of $\dm$. In order to derive a simple expression for the limit of infinitely many energy levels per dot ($N\to\infty$) it is convenient to write the product of Green’s functions $G=(H_0+h_1+z\sigma_0)^{-1}$ ($H_0$ is either $H''_{11}$ or $H''_{22}$) in the expression of the Hall conductivity formally as a functional integral of a supersymmetric model \[15,19,20\] $$G_{rr'}^{\a\a'}G_{r'r''}^{\b\b'}G_{r''r}^{\g\a}=
\langle\bPsi_r^\a\Psi_{r'}^{\a'}\bchi_{r'}^\b\chi_{r''}^{\b'}
\bPsi_{r''}^\g\Psi_r^\a\rangle_S
-\langle\bchi_r^\a\chi_{r'}^{\a'}\bPsi_{r'}^\b\Psi_{r''}^{\b'}
\bchi_{r''}^\g\chi_r^\a\rangle_S
\eqno (7)$$ with $\langle ...\rangle_S=\int ... \exp(-S_1)\prod_r d\Phi_r d{\bar \Phi_r}$ and with the supersymmetric action (sum convention for $\a$) $$S_1=-i\se\sum_{r,r'\mu,j,j'}\Phi_{r,\mu,j}^\a(H_0+z\s_0)_{r,j;r',j'}
\bPhi_{r',\mu,j'}^\a
-i\se\sum_{r,\mu,j}(\Phi_{r,\mu,j}^{\a'}\dm_r^{\a'\a}\bPhi_{r,\mu,j}^{\a}),
\eqno (8)$$ where $\se ={\rm sign}({\rm Im}z)$ and the field $\Phi_{r,j}^\a=(\Psi_{r,j}^\a,\chi_{r,j}^\a)$. The first component is Grassmann and the second complex. $\mu=1,2$ labels the complex and the Grassmann components, and $j=1,2$ labels the two components of the Dirac model. This choice guarantees a normalized functional. Consequently, the averaging with respect to the Gaussian distributed matrix elements of $\dm$ can be performed in the functional integral as $\langle\exp(-S_1)\rangle_{h_1}=\exp(-S_2)$ with the effective action $S_2$. The latter is obtained from $S_1$ by replacing the second term with $(g/N)\sum_{r,\mu,j}(\Phi_{r,\mu,j}^\a
\bPhi_{r,\mu,j}^\a)^2$. Thus we have derived an effective field theory for $\Phi$ which serves as a generating functional for the average product of Green’s functions. It is important to notice that the interaction in $S_2$ is created [*not only*]{} by $\dm$ but also by other types of random terms in $S_1$. For instance, the interaction can also be created by a term $(N/g)(Q_{r,\mu,j})^2 -2i\se Q_{r,\mu,j}\Phi_{r,\mu,j}^{\a}
\bPhi_{r,\mu,j}^{\a}$ as the second term in $S_1$, followed by an integration over the matrix field $Q$. This field, in contrast to the random matrix $\dm$, does not depend on the index $\a$ of the electronic states inside the quantum dot. This means that the distribution $\dm$ can be transformed into another distribution with a new ‘random variable’ $Q$ (which does not have a probability measure but some generalized distribution including Grassmann variables). In other words, we can write, after integrating out the field $\Phi$, $$\langle[(H_0+\dm +z\s_0)^{-1}]^{\a\a}...\rangle_{\dm}=
\langle[(H_0+2Q+z\s_0)^{-1}]^{\a\a}...\rangle_Q.
\eqno (9)$$ The distribution which belongs to $\langle ...\rangle_Q$ was investigated in detail in \[20\]. Here we need only the result for leading order in $N$: $\langle ... \rangle_Q =\int ...\exp(-NS(Q,P))\prod_r dP_r dQ_r$ with diagonal matrix fields $Q_r$, $P_r$ and $$S(Q,P)={1\over g}\sum_r[Tr(Q_{r}^2)+Tr(P_{r}^2)]$$ $$+\log\det(H_0+2Q+z\s_0)-\log\det(H_0-2iP+z\s_0)
\eqno (10)$$ The number of levels $N$ appears in front of the action. Thus the effect of the statistics of the energy levels can be evaluated for $N\to\infty$ in saddle point (SP) approximation. The SP equation reads $${\delta\over\delta Q}\big\lbrack {1\over g}Tr(Q_{r}^2)+
\log\det(H_0+2Q+z\s_0)\big\rbrack=0.
\eqno (11)$$ A second SP equation appears from the variation of $P$ by replacing $Q\to -iP$. As an ansatz we take a uniform SP solution $Q_0 =-i P_0 =(1/2)[i\eta\s_0+M_s\s_3]$. Then (11) leads to the conditions $\eta =(\be-iE) g I$, $M_s=-m_1gI/(1 +gI)$ with the integral $I=\int\lbrack (m_1+M_s)^2 +(\be-iE)^2+k^2\rbrack^{-1}d^2k/2
\pi^2$. This result means that disorder shifts the frequency $\om\to\om+\eta$ and the Dirac mass $m_1\to\bm=m_1+M_s$, where $\eta(m_1,\om)$ and $M_s(m_1,\om)$ are solutions of the SP equation. For instance, with $\omega=0$ we have $\eta^2=(1/4)(M_c^2-m_1^2)\Theta(M_c^2-m_1^2)$ where $M_c=2e^{-\pi/g}$. The sign of $\eta$ is fixed by the condition that $\eta$ must be analytic in $\om$. This implies ${\rm sign}(\eta)={\rm sign}(\om)$. The average density of states (DOS) is proportional to $\eta$ in the $N\to\infty$–limit. Thus we have a narrow DOS for the array of quantum dots of width $2M_c$ in contrast to the isolated dot which has a semicircular density of width $2\sqrt{g}$. The DOS vanishes for $E=0$ in the absence of level fluctuations. The creation of a non–zero DOS due to level fluctuations is a non–perturbative effect.
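As a numerical sanity check of these saddle-point formulas (the value $g=1.36$ is taken from the caption of Fig. 2; the variable names are ours), one can evaluate $M_c=2e^{-\pi/g}$, the resulting $\eta$ at $\omega=0$, $E=0$, and compare the DOS widths quoted above:

```python
import math

g = 1.36                                 # level-fluctuation variance of Fig. 2
M_c = 2.0 * math.exp(-math.pi / g)       # band of non-zero average DOS

def eta(m1):
    """Saddle-point solution at omega = 0, E = 0: eta^2 = (M_c^2 - m1^2)/4."""
    return 0.5 * math.sqrt(max(M_c**2 - m1**2, 0.0))

dos_width_array = 2.0 * M_c              # width for the array of quantum dots
dos_width_isolated = 2.0 * math.sqrt(g)  # semicircle width of an isolated dot
```

For $g=1.36$ this gives $M_c\approx 0.199$, so the fluctuation-induced DOS of the array ($2M_c\approx 0.40$) is indeed much narrower than the semicircle of the isolated dot ($2\sqrt{g}\approx 2.33$).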
At $T=0$ and $E=0$ the Hall conductivity per fermion level reads in the limit $N\to\infty$ and with the approximation that $\bm$ and $\eta$ do not depend on $\om$ $$\s_{xy}\approx1/2+{\rm sign}(m_1)\Big[1/2-(1/\pi)
{\rm arctan}(\sqrt{M_c^2/m_1^2-1})\Theta(M_c^2-m_1^2)\Big].
\eqno (12)$$ The Hall conductivities are plotted in Fig.2 for $T=0.1$ with and without level fluctuations. It is remarkable that the Hall conductivity is enhanced by the level fluctuations for $\sigma_{xy}<1/2$ whereas it is suppressed for $\sigma_{xy}>1/2$. The effect of these fluctuations is strictly confined to an interval of width $2M_c$.
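A direct transcription of Eq. (12) makes these statements easy to verify numerically (again with $g=1.36$ from the caption of Fig. 2; function names are ours):

```python
import math

g = 1.36
M_c = 2.0 * math.exp(-math.pi / g)

def sigma_xy(m1):
    """Eq. (12): averaged Hall conductivity (units of e^2/h) at T = 0, E = 0."""
    if m1 == 0.0:
        return 0.5
    s = 0.5 * math.copysign(1.0, m1)
    if abs(m1) < M_c:
        # level fluctuations act only inside the window |m1| < M_c
        s -= math.copysign(1.0, m1) * math.atan(math.sqrt(M_c**2 / m1**2 - 1.0)) / math.pi
    return 0.5 + s
```

Outside the window, $|m_1|\geq M_c$, the clean step $\Theta(m_1)$ is recovered; inside it the conductivity is pulled towards $1/2$, i.e. enhanced on the $\sigma_{xy}<1/2$ side and suppressed on the $\sigma_{xy}>1/2$ side, consistent with Fig. 2.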
[*Conclusions*]{} In a square–array of quantum dots with $N$ electronic states per dot we have investigated the DOS and the Hall conductivity. Both quantities are significantly affected by the statistical fluctuations of the energy levels. In particular, the Hall conductivity, which is step–like at the QHT in the absence of fluctuations, has a more complicated behavior in the presence of level fluctuations. Thermal fluctuations have a different effect on the Hall conductivity; they lead to a simple broadening of the step–like behavior.
Only average quantities have been considered here. However, higher moments of these quantities can also be studied with the method described in this article.

E.P. Wigner, Ann.Math.[**53**]{}, 36 (1951), ibid. [**67**]{}, 325 (1958)
N. Rosenzweig and C.E. Porter, Phys.Rev. [**120**]{}, 1698 (1960)
L.P. Gorkov and G.M. Eliashberg, JETP [**21**]{}, 940 (1965)
M.L. Mehta, [*Random matrices*]{} (Academic Press, New York, 1967)
P.A. Lee and T.V. Ramakrishnan, Rev. Mod. Phys. [**57**]{}, 287 (1985)
K. von Klitzing, G. Dorda and M. Pepper, Phys.Rev.Lett. [**45**]{}, 494 (1980)
D.R. Hofstadter, Phys.Rev. [**B 14**]{}, 2239 (1976)
D. Thouless, [*The Quantum Hall Effect*]{}, edited by R.E. Prange and S.M. Girvin
[ ]{}(Springer-Verlag, New York, 1990)
D. Pfannkuche and R.R. Gerhardts, Phys.Rev. [**B 46**]{}, 12 606 (1992)
B.L. Johnson and G. Kirczenow, Phys.Rev.Lett. [**69**]{}, 672 (1992)
H.U. Baranger and P.A. Mello, Phys.Rev.Lett. [**73**]{}, 142 (1994)
R.A. Jalabert, J.-L. Pichard and C.W.J. Beenakker, Europhys.Lett. [**27**]{}, 255 (1994)
D. Weiss, [*Elektronen in “künstlichen” Kristallen*]{} (Verlag Harri Deutsch,
[ ]{}Frankfurt, 1994)
A.W.W. Ludwig, M.P.A. Fisher, R. Shankar and G. Grinstein, Phys.Rev.[**B50**]{}, 7526
[ ]{}(1994)
K. Ziegler, Europhys.Lett.[**28** ]{} (1), 49 (1994)
M.P.A. Fisher and E. Fradkin, Nucl.Phys.B251\[FS13\], 457 (1985)
J.D. Bjorken and S.D. Drell, [*Relativistic Quantum Mechanics*]{} (McGraw-Hill,
[ ]{}New York, 1964)
D.J. Thouless, Phys.Rep. [**13**]{}, 93 (1974)
K.B. Efetov, Adv. in Phys. [**32**]{}, 53 (1983)
K. Ziegler, Nucl.Phys. [**B344**]{}, 499 (1990)
Fig.1: Schematic picture of an array of quantum dots with nearest ($t$) and next nearest neighbor ($t'$) tunneling. The square denotes the unit cell of the translational invariant array with magnetic flux $\Phi=\Phi_0/2$. Fig.2: Hall conductivity $\sigma_{xy}$ in units of $e^2/h$ as a function of the effective chemical potential $m=\mu-t'$ at temperature $T=0.1$. The circles are without level fluctuations and the full curve is with level fluctuations with variance $g=1.36$.
---
abstract: 'The Interacting Boson plus broken-pairs Model has been used to describe high-spin bands in spherical and transitional nuclei. In the spherical nucleus $^{104}$Cd the model reproduces the structure of high-spin bands built on a vibrational structure. Model calculations performed with a single set of parameters reproduce ten bands in $^{136}$Nd, including the two dipole high-spin bands based on the $(\pi h_{11/2})^2$ $(\nu h_{11/2})^2$ configuration.'
author:
- |
[ ]{} G. Bonsignori, D. Vretenar, S. Cacciamani and L. Corradini\
INFN and Physics Department, University of Bologna, Italy
title: |
[ Broken Pairs and High-Spin States\
in Transitional Nuclei]{}
---
Introduction
============
Models that are based on the Interacting Boson Approximation provide a consistent description of low-spin nuclear structure in spherical, deformed and transitional nuclei. By including selective non-collective fermion degrees of freedom, through successive breaking of correlated S and D pairs, the IBM can be extended to describe the physics of high-spin states in nuclei. This extension of the model is especially relevant for transitional regions, where single-particle excitations and vibrational collectivity are dominant modes and no pronounced axis for cranking exists. We present a model that extends the IBM-1 to include two- and four-fermion noncollective states (one and two broken pairs). The model has been applied in the description of high-spin states in the Hg [@IV91; @VRE93; @VRE95], Sr-Zr [@CHO91; @LIS93; @CHI93; @caccia], and Nd-Sm [@DEA94; @RAV96; @PET97] regions.
The model is based on the IBM-1 [@IAC87]; the boson space consists of [*s*]{} and [*d*]{} bosons, with no distinction between protons and neutrons. To generate high-spin states, the model allows one or two bosons to be destroyed and non-collective fermion pairs formed, represented by two- and four-quasiparticle states which recouple to the boson core. High-spin states are described in terms of broken pairs. The model space for an even-even nucleus with $2N$ valence nucleons is $$\begin{aligned}
\mid N~bosons~> & \oplus & \mid (N-1)bosons \otimes 1~broken~pair> \nonumber \\
& \oplus & \mid (N-2)bosons \otimes 2~broken~pairs> \oplus~~... \nonumber\end{aligned}$$ The model Hamiltonian has four terms: the IBM-1 boson Hamiltonian, the fermion Hamiltonian, the boson-fermion interaction, and a pair breaking interaction that mixes states with different numbers of fermions. $$H=H_{B}+H_{F}+V_{BF}+V_{mix}.$$ The fermion Hamiltonian $H_F$ contains single-fermion energies and fermion-fermion interactions. The interaction between the unpaired fermions and the boson core contains the dynamical, exchange and monopole interactions of the IBFM-1 [@IAC79]. In order to describe dipole bands in transitional nuclei, we have modified the quadrupole-quadrupole dynamical interaction $$V_{dyn}=\Gamma _{0}\sum_{j_{1}j_{2}}(u_{j_{1}}u_{j_{2}}-v_{j_{1}}v_{j_{2}})
\langle j_{1}\parallel Y_{2}\parallel j_{2}\rangle \times
\left( [a^{\dagger}_{j_{1}} \times\tilde{a}_{j_{2}}]^{(2)}
\cdot Q^{B} \right).$$ The standard boson quadrupole operator $Q^{B}$ has been extended by the higher-order term $$\chi'\sum_{L_{1}L_{2}}\left[
\left[
d^{\dag} \times \tilde{d}
\right]^{\left( L_{1} \right)}
\times
\left[
d^{\dag} \times \tilde{d}
\right]^{\left( L_{2} \right)}
\right]^{\left( 2 \right)}\;.
\label{chi_prime}$$ The terms $H_B$, $H_F$ and $V_{BF}$ conserve the number of bosons and the number of fermions separately. In our model only the total number of nucleons is conserved: bosons can be destroyed and fermion pairs created, and vice versa. In the same order of approximation as for $V_{BF}$, the pair breaking interaction $V_{mix}$, which mixes states with different numbers of fermions while conserving only the total nucleon number, reads $$\begin{aligned}
V_{mix}&=&-U_{0}\left\{ \sum_{j_{1}j_{2}}u_{j_{1}}u_{j_{2}}(u_{j_{1}}v_{j_{2}}+
u_{j_{2}}v_{j_{1}})\langle j_{1}\parallel Y_{2}\parallel j_{2}\rangle ^{2}
\frac{1}{\sqrt{2j_{2}+1}}\left( [a^{\dagger}_{j_{2}}\times
a^{\dagger}_{j_{2}}]^{(0)}\cdot s\right)+\ h.c.\right\} \nonumber\\
&&-U_{2}\left\{ \sum_{j_{1}j_{2}}(u_{j_{1}}v_{j_{2}}+
u_{j_{2}}v_{j_{1}})\langle j_{1}\parallel Y_{2}\parallel j_{2}\rangle
\left( [a^{\dagger}_{j_{1}}\times a^{\dagger}_{j_{2}}]^{(2)}
\cdot \tilde{d}\right) +\ h.c.\right\}.
\label{mixing}\end{aligned}$$ If mixed proton-neutron configurations are included in the fermion model space, i.e. there can be both proton and neutron broken pairs, the full model Hamiltonian reads $$H = H_{B} + H_{\nu F} + H_{\pi F} + H_{\nu BF}+
H_{\pi BF} + H_{\nu}^{mix} + H_{\pi}^{mix} + H_{\nu \pi},$$ where the proton-neutron interaction term is defined as $$\begin{aligned}
H_{\nu \pi} & = & \sum_{nn'pp'} \sum_{J} (-)^{J}
h_{J}(nn'pp')
(u_{n}u_{n'}-v_{n}v_{n'})
(u_{p}u_{p'}-v_{p}v_{p'})
\times \nonumber \\
& & \times \left(
\left[
a^{\dag}_{n} \times \tilde{a}_{n'}
\right]^{(J)} \cdot
\left[
a^{\dag}_{p} \times \tilde{a}_{p'}
\right]^{(J)}
\right)\;.\end{aligned}$$ In the following two sections we present results for quadrupole bands in the spherical nucleus $^{104}$Cd, and for proton-neutron dipole bands in the transitional nucleus $^{136}$Nd.
The nucleus $^{104}_{~48}$Cd$_{56}$
=====================================
The Cd isotopes, with only two proton holes relative to the closed shell at Z=50, present an example of spherical vibrational nuclei in which high-spin quadrupole bands are also found. In particular, in the nucleus $^{104}$Cd quadrupole bands extend up to spin 26 $\hbar$. Here we present a description of the structure of the excitation spectrum in the framework of the IBM plus one broken pair.
In general most of the parameters of the Hamiltonian are taken from analyses of the low- and high-spin states in the neighboring even and odd nuclei. For $^{104}$Cd the parameters of the boson Hamiltonian are: $\epsilon=0.658$ MeV, $C_4=0.117$ MeV, the number of bosons is $N=4$. $\epsilon$ corresponds to the excitation energy of $2^+_1$, and $C_4$ is adjusted to reproduce the $6^+_1$ and $8^+_1$ states. Since only the yrast sequence of the collective vibrational structure is known experimentally, the remaining parameters of the boson Hamiltonian could not be determined, and are set to zero.
The resulting SU(5) vibrator spectrum displays very little anharmonicity. In the present calculation we only consider collective states and two-quasiparticle states based on configurations with two neutrons in the broken pair. States based on the proton $(g_{9/2})^2$ configuration are not included in the model space. The single-quasiparticle neutron energies and occupation probabilities are obtained by a simple BCS calculation. The quasiparticle energies and occupation probabilities are: $E (\nu d_{5/2})=1.113$ MeV, $E (\nu s_{1/2})=2.287$ MeV, $E (\nu h_{11/2})=2.691$ MeV, $E (\nu g_{7/2})=1.316$ MeV, $v^{2} (\nu d_{5/2})=0.57,$ $v^{2} (\nu s_{1/2})=0.06,$ $v^{2} (\nu h_{11/2})=0.04,$ $v^{2} (\nu g_{7/2})=0.23.$
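The quoted values can be checked for internal consistency under the standard BCS relations $E_j=\sqrt{(\epsilon_j-\lambda)^2+\Delta^2}$ and $v_j^2=\tfrac12\bigl(1-(\epsilon_j-\lambda)/E_j\bigr)$ (our assumption about the form of the "simple BCS calculation"): each orbital then implies a gap $\Delta=2E_j\sqrt{v_j^2(1-v_j^2)}$, and the four implied gaps should nearly coincide.

```python
import math

# quasiparticle energies E_j (MeV) and occupation probabilities v_j^2 as listed above
levels = {"d5/2": (1.113, 0.57), "s1/2": (2.287, 0.06),
          "h11/2": (2.691, 0.04), "g7/2": (1.316, 0.23)}

def implied_gap(E, v2):
    # invert E^2 = (eps - lam)^2 + Delta^2 with v^2 = (1 - (eps - lam)/E)/2
    eps_minus_lam = E * (1.0 - 2.0 * v2)
    return math.sqrt(E**2 - eps_minus_lam**2)  # equals 2 E sqrt(v2 (1 - v2))

gaps = {name: implied_gap(E, v2) for name, (E, v2) in levels.items()}
```

The implied gaps cluster around $\Delta\approx 1.1$ MeV with a spread below $0.1$ MeV, supporting the statement that a single BCS calculation underlies all four orbitals.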
The parameters of the boson-fermion interactions have been adjusted to reproduce the lowest positive and negative parity structures in the neighboring odd-N isotopes $^{103}$Cd and $^{105}$Cd. For neutron states the boson-fermion parameters are: $\Gamma_0=0.2$ MeV and $\chi=-0.9$ for the dynamical interaction, and $\Lambda_0=0.2$ MeV for the exchange interaction. The parameter of the boson quadrupole operator, $\chi=-0.9$, is adjusted to reproduce the experimental data for $^{106}$Cd: $B(E2, 2_1 \rightarrow 0_1) = 0.068~e^2b^2$ and $Q(2_1)=-0.25~eb$. In addition to the boson and boson-fermion parameters that have been already discussed, the strength parameter of the pair-breaking interaction is $U_2=0.1$ MeV. Results of model calculations for positive parity states in $^{104}$Cd, with a fermion space of one neutron broken pair, are shown in Fig. 1.
In this energy level diagram, only the few lowest calculated levels of each spin are compared to the experimental counterparts. The collective vibrational structure remains yrast up to angular momentum $I=6^+$. In the experimental spectrum one finds, below the collective $I=8^+$ (3211 keV) state, two other states of the same angular momentum: at 2902 keV and 3031 keV. They are probably based on two-neutron configurations. It is interesting that this triplet of $I=8^+$ states is also reproduced by our calculation. In fact, for three $\Delta I=2$ positive parity sequences, based on the $(d_{5/2})^2$, $(g_{7/2})^2$ and $(d_{5/2}, g_{7/2})$ neutron configurations, probable experimental counterparts are observed. The calculated energy spacings correspond to the collective vibrational sequence. The experimental energy spacings are somewhat smaller, indicating a stronger core polarization and/or change of deformation. This is probably caused by admixtures of 2p-2h proton configurations, an effect that could not be included in our model space. In heavier Cd isotopes experimental data exist on the neutron $(h_{11/2})^2$ structure. The $I=10^+$ band-head of the $\Delta I=2$ $(h_{11/2})^2$ sequence is at 4153 keV in $^{108}$Cd and 4816 keV in $^{106}$Cd. In $^{104}$Cd this state is not observed; our calculations place it at 5310 keV. $^{104}$Cd also has fewer particles outside the closed shell than the heavier isotopes, and therefore collective properties are less pronounced. In particular, with only four bosons we can only construct states up to angular momentum $I=16^+$, including the fermion space of two neutrons. States with higher angular momenta should be based on a different core, probably one that includes proton excitations across the $Z=50$ shell closure.
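The $I=16^+$ limit follows from a simple counting argument, sketched below (our own illustration; stretched coupling and the standard Pauli rule for identical-orbital pairs assumed): with one broken pair, the $N-1=3$ remaining bosons contribute at most $2\hbar$ each, and the largest two-neutron spin among the included orbitals is $J=10$ from $(h_{11/2})^2$.

```python
from itertools import combinations_with_replacement

# neutron orbitals included in the calculation, with their j values
orbitals = {"d5/2": 2.5, "s1/2": 0.5, "g7/2": 3.5, "h11/2": 5.5}

def max_pair_J(j1, j2):
    # two identical fermions in the same orbital: the Pauli principle excludes
    # J = 2j, so the largest (even) pair spin is 2j - 1; otherwise j1 + j2
    return 2 * j1 - 1 if j1 == j2 else j1 + j2

N = 4  # boson number of 104Cd
J_pair = max(max_pair_J(a, b)
             for a, b in combinations_with_replacement(orbitals.values(), 2))
I_max = 2 * (N - 1) + J_pair  # each of the N-1 remaining d-bosons carries 2 hbar
```

This reproduces $I_{\max}=6+10=16$, matching the limit quoted in the text.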
The lowest calculated negative-parity two-neutron states are compared with experimental levels in Fig. 2. The parameters of the Hamiltonian have the same values as in the calculation of positive-parity states. There are several negative parity sequences in the experimental spectrum which start at $\approx 4$ MeV with angular momenta $I=7^-~-~9^-$. The lowest two calculated levels of each spin $I\geq 7^-$ are displayed in Fig. 2. The levels are grouped into sequences according to the dominant fermion component in the wave function. The two-fermion angular momentum is approximately a good quantum number. The collective part of the wave functions corresponds to that of the low-lying collective vibrational sequence. Because the quasiparticle energies of $d_{5/2}$ and $g_{7/2}$ differ by only $\approx 200$ keV, both the $(h_{11/2},d_{5/2})$ and the $(h_{11/2},g_{7/2})$ configurations form sequences with states of the same angular momentum at almost the same excitation energies. Of course the density of negative parity states between 4 MeV and 7 MeV is very high. All four neutron orbitals are included in the calculation, and one finds many states with low angular momenta that are not observed in experiment. Therefore in Fig. 2 we only display states for which correspondence with experimental data can be established.
The nucleus $^{136}_{~60}$Nd$_{76}$
====================================
Nuclei in the A = 130-140 mass region are $\gamma$-soft and the polarizing effect of the aligned nucleons induces changes in the nuclear shape. Because of the different nature of the excitations (particles for proton, and holes for neutron configurations), the alignment of a pair of $h_{11/2}$ protons induces a prolate shape, whereas the alignment of a neutron pair in the $h_{11/2}$ orbital drives the nucleus towards a collective oblate shape. In $^{136}$Nd one therefore expects to observe different coexisting structures at similar excitation energies. The IBM model with broken pairs has been previously used in the description of low-spin and high-spin properties of $\gamma$-soft nuclei of this region (in the IBM language O(6) nuclei): $^{138}$Nd [@DEA94], $^{137}$Nd [@PET97] and $^{139}$Sm [@RAV96].
In the present work we use the IBM with proton and neutron broken pairs to describe the excitation spectrum of $^{136}$Nd. In particular, we want to obtain a correct description of high-spin dipole bands, which have been interpreted as two proton - two neutron structures. The experimental level scheme of positive-parity states is displayed in Fig. 3. The labels of bands are from Ref. [@SUN96]; in addition to the ground state band and the quasi $\gamma$-band, bands 3, 5, 7 and 8 result from the alignment of two protons or two neutrons in the $h_{11/2}$ orbital, the two four-quasiparticle dipole bands have labels 10 and 11.
There are 6 neutron valence [*holes*]{} and 10 proton valence [*particles*]{}. The resulting boson number is N=8. The set of parameters for the boson Hamiltonian is: $\epsilon$=0.36, $C_0$=0.16, $C_2$=-0.12, $C_4$=0.19, $V_2$=0.11 and $V_0$=-0.3 (all values in MeV). The boson parameters have values similar to those that have been used in the calculation of $^{138}$Nd, $^{137}$Nd and $^{139}$Sm.
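The boson numbers used here and in the previous section follow from the usual IBM-1 counting (half the number of valence particles or holes, whichever shell is nearer), which can be sketched as:

```python
MAGIC = [2, 8, 20, 28, 50, 82, 126]

def boson_number(Z, N):
    """IBM-1 boson number: valence proton pairs plus valence neutron pairs,
    counted from the nearest closed shell (particles or holes)."""
    def pairs(n):
        return min(abs(n - m) for m in MAGIC) // 2
    return pairs(Z) + pairs(N)
```

For $^{136}$Nd ($Z=60$, $N=76$) this gives $10/2+6/2=8$ bosons, and for $^{104}$Cd ($Z=48$, $N=56$) it gives $2/2+6/2=4$, in agreement with the values used in the calculations.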
In A $\approx$ 140 nuclei the structure of positive parity high-spin states close to the yrast line is characterized by the alignment of both proton and neutron pairs in the $h_{11/2}$ orbital. For positive-parity states we have only included the proton and neutron $h_{11/2}$ orbitals in the fermion model space. Additional single-nucleon states make the two broken-pairs bases prohibitively large. The single quasiparticle energies and occupation probabilities are obtained from a BCS calculation. Similar to our previous calculations for $^{138}$Nd and $^{137}$Nd, the resulting quasiparticle energies for the proton and neutron $h_{11/2}$ states had to be slightly renormalized. $E_{\nu}(h_{11/2}) = 1.75$ MeV, $v^2_{\nu}(h_{11/2}) = 0.83$, $E_{\pi}(h_{11/2}) = 1.60$ MeV, $v^2_{\pi}(h_{11/2}) = 0.07$. In order to further reduce the large size of the space with two broken pairs, we prediagonalized the boson Hamiltonian, and the fermion states were then coupled to the lowest eigenvectors, i.e. only to the collective ground state band. The parameters of the fermion-boson interactions are determined from IBFM calculations of low-lying negative-parity states in $^{137}$Nd and neighboring odd-proton nuclei. The parameters of the neutron dynamical fermion-boson interaction are $\Gamma_0$=0.3 MeV, $\chi$=-1 and $\chi'$=-0.2, and for protons: $\Gamma_0$= 0.22 MeV, $\chi$=+1 and $\chi'$=+0.2. For the exchange interaction $\Lambda_0^{\nu}=1.0$ and $\Lambda_0^{\pi}=1.5$ for neutrons and protons, respectively. The strength parameters of the pair-breaking interaction are $U_0=0$ and $U_2=0.2$ MeV, both for protons and neutrons in broken pairs. The residual interaction between unpaired fermions is a surface $\delta$-force with strength parameters: $ V_{\nu\nu} = -0.1 $, $ V_{\pi\pi} = -0.1$ and $ V_{\nu\pi} = -0.9 $ for neutron-neutron, proton-proton and proton-neutron, respectively.
In Fig. 4 we display the calculated spectrum of positive-parity states. According to the structure of wave functions, states are classified into bands labeled in such a way that a direct comparison can be made with their experimental counterparts. The calculated positive-parity structures 3, 5, 7, 8, 10 and 11, as well as the ground state band and the quasi $\gamma$-band, have to be compared with the experimental bands of Fig. 3. The bands 3, 5, and 7 result from the alignment of a pair of protons in the $h_{11/2}$ orbital. The band 8 corresponds to two $h_{11/2}$ neutrons coupled to the boson core. Finally, the two dipole bands 10 and 11 correspond to four-quasiparticle states, two protons and two neutrons in their respective $h_{11/2}$ orbitals, coupled to the ground state band of the core. The occurrence of regular dipole bands ($\Delta I=1$) in nearly spherical and transitional nuclei presents an interesting phenomenon. In the semiclassical picture of the cranked shell model, $\Delta I=1$ high-spin bands have been described as TAC (Tilted Axis Cranking) solutions [@fra93]. In our model such $\Delta I=1$ structures are produced by the fermion-boson interactions. However, in order to obtain the correct energy spacings for bands 10 and 11, it was necessary to include the additional term (\[chi\_prime\]) in the boson quadrupole operator. We have also found that a crucial role in the excitation spectrum of these bands is played by the proton-neutron delta-interaction.
It should be noted that bands 10 and 11 have been recently described in the framework of the projected shell model [@SUN96]. In order to obtain bands of dipole character, a permanent deformation had to be assumed, which in turn made it energetically more favorable for one of the neutrons to occupy the $\nu f_{7/2}$ orbital. Therefore the configuration $(\pi h_{11/2})^2$ $\nu f_{7/2}$ $\nu h_{11/2}$ was assigned to the bands 10 and 11. The dipole character of the bands results from the coupling of the neutron hole in $h_{11/2}$ and the neutron particle in $f_{7/2}$. We believe that this interpretation is not correct. In our calculations, structures based on the $\nu f_{7/2}$ orbital are found high above the yrast line. Using a single set of parameters, we have been able to reproduce the complete experimental spectrum of positive parity states, from the ground state band up to angular momentum 29 $\hbar$ in band 11, at more than 13 MeV excitation energy. With the same set of parameters we have also calculated negative parity states based on the neutron orbitals $s_{1/2}$, $d_{3/2}$ and $h_{11/2}$. The resulting bands reproduce the experimental data.
Conclusions
-----------
We have used an extension of the Interacting Boson Model to describe the physics of high-spin states in nuclei. Compared with traditional models based on the cranking approximation, the present approach provides several advantages. High-spin bands can be described not only in well deformed, but also in transitional and spherical nuclei. A single set of parameters and a well defined Hamiltonian are used to calculate both ground state collective bands and high-spin two- and four-quasiparticle bands. Polarization effects directly result from the model boson-fermion interactions. The model produces not only energy spectra, but also wave functions which can be used to calculate electromagnetic properties. All calculations are performed in the laboratory frame, and therefore produce results that can be directly compared with experimental data.
In the spherical nucleus $^{104}$Cd the model reproduces the structure of high-spin bands built on a vibrational structure. The bands result from the alignment of a pair of neutrons in the orbitals $\nu d_{5/2}$ and $\nu g_{7/2}$. For the transitional nucleus $^{136}$Nd, our calculation reproduces the complete experimental excitation spectrum of positive and negative parity states. In particular, we have been able to obtain a correct description of the two high-spin $(\pi h_{11/2})^2$ $(\nu h_{11/2})^2$ dipole bands.
[999]{} F. Iachello and D. Vretenar, Phys. Rev. C [**43**]{} (1991) 945. D. Vretenar, G. Bonsignori, and M. Savoia, Phys. Rev. C [**47**]{} (1993) 2019. D. Vretenar, G. Bonsignori, and M. Savoia, Z. Phys. A [**351**]{} (1995) 289. P. Chowdhury, C. J. Lister, D. Vretenar et al., Phys. Rev. Lett. [**67**]{} (1991) 2950. C.J. Lister, P. Chowdhury and D. Vretenar, Nucl. Phys. [**A557**]{} (1993) 361c. A. A. Chishti, P. Chowdhury, D. J. Blumenthal, P. J. Ennis, C. J. Lister, Ch. Winter, D. Vretenar, G. Bonsignori, and M. Savoia, Phys. Rev. C [**48**]{} (1993) 2607. S. Cacciamani, G. Bonsignori, F. Iachello, D. Vretenar, Phys. Rev. C [**53**]{} (1996) 1618. G. de Angelis, M. A. Cardona, M. De Poli, S. Lunardi, D. Bazzacco, F. Brandolini, D. Vretenar, G. Bonsignori, M. Savoia, R. Wyss, F. Terrasi, and V. Roca, Phys. Rev. C [**49**]{} (1994) 2990. C. Rossi Alvarez, D. Vretenar et al., Phys. Rev. [**C54**]{} (1996) 57. C.M. Petrache, R. Venturelli, D. Vretenar et al., Nucl. Phys. [**A617**]{} (1997) 228. A. Arima and F. Iachello, Phys. Rev. Lett. [**35**]{} (1975) 10; F. Iachello and A. Arima, [*The Interacting Boson Model*]{} (Cambridge University Press, Cambridge, 1987). F. Iachello and O. Scholten, Phys. Rev. Lett. [**43**]{} (1979) 679. S. Frauendorf, Nucl. Phys. [**A557**]{} (1993) 259c. C.M. Petrache, Y. Sun, D. Bazzacco, S. Lunardi et al., Phys. Rev. [**C53**]{} (1996) R2581.
[ Results of the IBM plus broken pair calculation for positive-parity states compared with experimental levels in $^{104}$Cd.]{}
[ Negative-parity states in $^{104}$Cd compared with results of the IBM plus broken pair calculation.]{}
[ Experimental excitation spectrum of positive-parity states in $^{136}$Nd.]{}
[ Results of IBM plus broken pairs calculation for positive parity bands in $^{136}$Nd.]{}
---
abstract: 'According to observations, in our Universe the Newtonian non-modified relations are valid for gravitational phenomena in the Newtonian approximation. The Friedmann equations of universe dynamics describe an infinite number of relativistic universe models in the Newtonian approximation, but only in one of them are the Newtonian non-modified relations valid. It follows from these facts that the Universe is described by just this one Friedmannian universe model with Newtonian non-modified relations.'
address: 'Faculty of Materials Science and Technology of the Slovak Technical University, 917 24 Trnava, Slovakia '
author:
- Vladimír Skalský
title: |
The Friedmannian model\
of our observed Universe
---
According to observations – carried out over the course of the last five centuries – in our [*expansive homogeneous and isotropic relativistic Universe*]{} the Newtonian non-modified relations are valid for gravitational phenomena in the Newtonian approximation.
[*The Friedmann general equations of homogeneous and isotropic relativistic universe dynamics*]{} \[1\] describe an infinite number of homogeneous and isotropic relativistic universe models in the Newtonian approximation, but only in one of them are the Newtonian non-modified relations valid \[2\].
It follows unambiguously from these facts that our observed Universe is described by just this one Friedmannian model of universe with Newtonian non-modified relations.
The deductive-reductive determination of the Friedmannian model of universe with Newtonian non-modified relations is given in \[3\].
The Friedmannian model of universe with Newtonian non-modified relations, i.e. Friedmannian model of [*the (flat) expansive non-decelerative (homogeneous and isotropic relativistic) Universe (with total zero and local non-zero energy)*]{} (ENU) \[4\] is a special partial solution of the Friedmann general equations of homogeneous and isotropic relativistic universe dynamics: $$\label{1a}
\dot{a}^2=\frac{8\pi G\rho a^2}{3}-kc^2 +
\frac{\Lambda a^2 c^2}{3}, \tag{$1a$}$$ $$\label{1b}
2a\ddot{a}+\dot{a}^2= -\frac{8\pi G p a^2}{c^2}-kc^2+
\Lambda a^2 c^2,\tag{1b}$$ with $k = 0$, $\Lambda = 0$ and the equation of state \[5\]: $$p=-\frac13 \varepsilon ,$$ where $a$ is the gauge factor, $\rho$ is the mass density, $k$ is the curvature index, $\Lambda$ is the cosmological constant, $p$ is the pressure, and $\varepsilon$ is the energy density.
The Friedmann equations (1a) and (1b) with $k = 0$, $\Lambda = 0$ and the state equation (2) determine the parameters of Friedmannian model of ENU, which we express – for better transparency – in all possible variants \[2\], \[3\]: $$a=ct=\frac{c}{H}=\frac{2Gm}{c^2}=\sqrt{\frac{3c^2}{8\pi G \rho}},$$$$t=\frac{a}{c}=\frac{1}{H}=\frac{2Gm}{c^3}=\sqrt{\frac{3}{8\pi G \rho}},$$$$H=\frac{c}{a}=\frac{1}{t}=\frac{c^3}{2Gm}=\sqrt{\frac{8\pi G \rho}{3}},$$$$m=\frac{c^2 a}{2G}=\frac{c^3 t}{2G}=\frac{c^3}{2GH}=\sqrt{\frac{3c^6}
{32\pi G^3\rho}},$$$$\rho=\frac{3c^2}{8\pi G a^2}=\frac{3}{8\pi G t^2}=\frac{3H^2}{8\pi G}=
\frac{3c^6}{32\pi G^3 m^2}=-\frac{3p}{c^2},$$ $$p=-\frac{c^4}{8\pi G a^2}=-\frac{c^2}{8\pi G t^2}=-\frac{c^2 H^2}{8\pi G}=
-\frac{c^8}{32\pi G^3 m^2}=-\frac{c^2 \rho}{3}=-\frac{1}{3} \varepsilon ,$$ where $t$ is the cosmological time, $H$ is the Hubble “constant", and $m$ is the mass of ENU.
From relations (3)–(8) it follows that the parameters of the ENU are mutually linearly linked. For the fundamental parameters of the ENU the following relations are valid \[2\], \[6\]: $$m=Ca=Dt,$$ where $C$ and $D$ are (total) constants: $$C=\frac{m}{a}=\frac{m}{ct}=\frac{Hm}{c}=\sqrt{\frac{8\pi G\rho m^2}{3c^2}}=
\frac{c^2}{2G}=6.734\,67(15)\times 10^{26}\hbox{kg\,m}^{-1},$$$$D=\frac{cm}{a}=\frac{m}{t}=Hm=\sqrt{\frac{8\pi G\rho m^2}{3}}=
\frac{c^3}{2G}=2.019\,00(37)\times 10^{35}\hbox{kg\,s}^{-1}.$$
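Relations (3)–(8) and the constants (10)–(11) are straightforward to verify numerically. The sketch below (our own check; the value of $G$ is a present-day one and may differ slightly from the value used in the paper, hence the loose tolerance) evaluates the parameters at an arbitrary cosmological time and confirms their mutual consistency, including $m=\frac{4\pi}{3}a^3\rho$:

```python
import math

c = 2.99792458e8        # speed of light, m s^-1
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2 (assumed value)

C = c**2 / (2 * G)      # Eq. (10)
D = c**3 / (2 * G)      # Eq. (11)

t = 4.0e17              # an arbitrary cosmological time, s
a = c * t                                   # Eq. (3)
H = 1.0 / t                                 # Eq. (5)
m = c**3 * t / (2 * G)                      # Eq. (6)
rho = 3.0 / (8 * math.pi * G * t**2)        # Eq. (7)
```

The computed $C\approx 6.73\times 10^{26}\,\mathrm{kg\,m^{-1}}$ and $D\approx 2.02\times 10^{35}\,\mathrm{kg\,s^{-1}}$ agree with the values quoted in Eqs. (10) and (11) to within the uncertainty of $G$.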
The matter-space-time properties of the ENU \[7\] are presented in \[6\].
[**References**]{}
---
abstract: 'We provide new insights into the a priori theory for a time-stepping scheme based on least-squares finite element methods for parabolic first-order systems. The elliptic part of the problem is of general reaction-convection-diffusion type. The new ingredient in the analysis is an elliptic projection operator defined via a non-symmetric bilinear form, although the main bilinear form corresponding to the least-squares functional is symmetric. This new operator allows us to prove optimal error estimates in the natural norm associated with the problem and, under additional regularity assumptions, in the $L^2$ norm. Numerical experiments are presented which confirm our theoretical findings.'
address:
- 'Facultad de Matemáticas, Pontificia Universidad Católica de Chile, Santiago, Chile'
- 'Departamento de Matemática, Universidad Técnica Federico Santa María, Valparaíso, Chile'
author:
- Thomas Führer
- Michael Karkulik
bibliography:
- 'literature.bib'
title: ' New a priori analysis of first-order system least-squares finite element methods for parabolic problems'
---
[^1]
Introduction
============
In this work we analyse a time-stepping scheme based on least-squares finite element methods for a first-order reformulation of the evolution problem
\[eq:model\] $$\begin{aligned}
{2}
\dot{u}-\div \Amat\nabla u -\bbeta\cdot\nabla u + \gamma u &=f &\quad&\text{in } (0,T)\times\Omega, \label{eq:model:pde} \\
u &= 0 &\quad&\text{in } (0,T)\times \partial\Omega, \\
u(0,\cdot) &= u^0 &\quad&\text{in } \Omega.\end{aligned}$$
The coefficients $\Amat\in L^\infty(\Omega)_\mathrm{sym}^{d\times d}$, $\bbeta\in L^\infty(\Omega)^d$, $\gamma\in L^\infty(\Omega)$ are assumed to satisfy $$\begin{aligned}
\label{eq:coeff}
L^\infty(\Omega)\ni\tfrac12\div\bbeta+\gamma \geq 0, \quad\text{and}\quad
\lambda_\mathrm{min}(\Amat) \geq \alpha_0 > 0 \quad\text{a.e. in } \Omega.\end{aligned}$$ This ensures that \[eq:model\] admits a unique solution. (Here, $\lambda_\mathrm{min}$ denotes the minimal eigenvalue.) We refer to [@evans] for the analytical treatment and to [@thomee] for an overview and analysis of different Galerkin methods. For a simpler presentation we suppose that the coefficients are independent of $t$; however, we stress that our analysis also applies in the case of time-dependent coefficients, provided that \[eq:coeff\] holds uniformly in time.
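For concrete coefficients, the sign condition \[eq:coeff\] can be checked numerically on a sample grid. The following sketch uses hypothetical choices $\Amat = I$, $\bbeta(x,y) = (y,-x)$ (divergence-free) and $\gamma \equiv 1$ on $\Omega=(0,1)^2$; none of these appear in the paper itself.

```python
import numpy as np

# Hypothetical coefficients on Omega = (0,1)^2 (illustration only):
# A = I, beta(x, y) = (y, -x), gamma(x, y) = 1.
# Verify (1/2)*div(beta) + gamma >= 0 on a sample grid.
def gamma(x, y):
    return np.ones_like(x)

def div_beta(x, y):
    # beta = (y, -x) is divergence-free
    return np.zeros_like(x)

xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
cond = 0.5 * div_beta(xs, ys) + gamma(xs, ys)
assert (cond >= 0).all()  # the assumption (eq:coeff) holds for this choice
```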
Discretizations by finite elements in space and finite differences in time (also called time-stepping) are attractive for the numerical approximation of solutions of evolution equations such as \[eq:model\]. This is due to the fact that such discretizations are much simpler to implement than discretizations of the space-time domain. In the context of least-squares finite element methods such time-stepping schemes have been analysed in, e.g., [@SplitLSQ; @YangI; @YangII]. The analysis in these works includes the Euler and Crank-Nicolson schemes in time. Another time-stepping approach is considered in [@starkeParabolicI; @starkeParabolicII]. We refer also to [@BochevGunzburgerLSQ Chapter 9] for an overview on the whole topic. In accordance with the last reference, cf. [@BochevGunzburgerLSQ Section 2.2], the first step to obtain a practical least-squares discretization is to work with a first-order reformulation of the PDE at hand. This way, high regularity conditions on the discrete space are avoided, and condition numbers of underlying systems are kept at a reasonable level. If the first-order reformulation of \[eq:model\] allows for the so-called *splitting property*, optimal a priori error estimates are known, cf. [@SplitLSQ; @BochevGunzburgerLSQ]. It can be easily shown [@SplitLSQ] that the method then reduces to a time-stepping Galerkin method in the primal variable, and optimal a priori error estimates can be extracted from the classical reference [@thomee Chapter 1]. We refer to \[section:decoupling\] for more details. This splitting property is quite specific and in general only holds if $\bbeta=0=\gamma$ in \[eq:model\]. The available a priori analysis of least-squares finite element time-stepping schemes does not extend to the general case $(\bbeta,\gamma)\neq (0,0)$.
In fact, if $(\bbeta,\gamma)\neq (0,0)$, current literature provides optimal a priori error estimates under severe and impractical restrictions, e.g., the use of different meshes for approximations of the variables of the first-order system.
The purpose of the work at hand is to provide optimal error estimates in the $L^2$ norm for the scalar variable and in the natural norm associated with the problem, in particular in the case $(\bbeta,\gamma)\neq (0,0)$. Since the pioneering work of Mary Wheeler [@wheeler], one of the main tools in the a priori analysis of time-stepping Galerkin schemes is the *elliptic projection operator*, defined as the discrete solution operator on the *spatial part* of the parabolic PDE. The main tool in the following analysis is a new *elliptic projection operator*, which is defined with respect to only part of the spatial terms of the overall bilinear form. In particular, these terms constitute a non-symmetric bilinear form. At first glance this seems somewhat counter-intuitive, since the overall bilinear form associated to the least-squares method is symmetric. Nevertheless, it turns out that this is the right choice, and it allows us to mimic some of the main ideas of well-known proofs for Galerkin finite element methods. Moreover, for the proof of optimal $L^2$ error estimates we show, under some regularity assumptions, that the $L^2$ error between a sufficiently regular function and its elliptic projection is of higher order. This is done by considering solutions corresponding to problems that involve the aforementioned non-symmetric bilinear form and by using duality arguments. We stress that everything has to be analyzed carefully due to the fact that the bilinear form depends on the (arbitrarily small) temporal step-size.
To keep the presentation simple, we consider a backward Euler scheme for the time discretization. We expect that our results extend correspondingly to higher-order discretizations in time (e.g., Crank-Nicolson). Likewise, the main ideas of our analysis are independent of the particular choice of (spatial) finite element spaces. However, to obtain convergence orders in terms of powers of the mesh-width $h$ we stick to standard conforming finite element spaces (piecewise polynomials for the scalar variable and Raviart-Thomas elements for the vector-valued variable) on the same simplicial mesh.
Outline
-------
The remainder of this work is organized as follows: In \[sec:main\], we introduce our model problem and discrete method and discuss necessary definitions and results needed throughout our work. In particular, in \[sec:duality\], we introduce and analyse the elliptic projection operator. \[sec:apriori\] contains the main results (stability of the discrete method and optimal convergence rates in the $L^2$ and energy norms) and their proofs. In \[sec:ext\] we show how to extend our ideas to a different problem. Finally, in \[sec:num\] we present numerical examples.
Least-squares methods {#sec:main}
=====================
Notation
--------
Let $\Omega\subset \R^d$ denote a bounded Lipschitz domain with polygonal boundary $\Gamma := \partial \Omega$. Let $\LL^2(\Omega) := L^2(\Omega)^d$. We use the usual notation for Sobolev spaces $H_0^1(\Omega)$, $H^s(\Omega)$, $\HH^s(\Omega):= H^s(\Omega)^d$, $\Hdivset\Omega : = \set{\ssigma \in\LL^2(\Omega)}{\div\ssigma\in L^2(\Omega)}$. Recall that $\norm{u}{} \leq \CF \norm{\nabla u}{}$ for all $u\in H_0^1(\Omega)$, where $\norm\cdot{}$ denotes the $L^2(\Omega)$ norm. Moreover, the $L^2(\Omega)$ scalar product is denoted by $\ip\cdot\cdot$.
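As a small numerical illustration of the Friedrichs inequality $\norm{u}{} \leq \CF \norm{\nabla u}{}$: on $\Omega=(0,1)$ the optimal constant is $1/\pi$, attained by $u(x)=\sin(\pi x)\in H_0^1(0,1)$. The sketch below checks this ratio with a composite trapezoidal rule (the grid size is an arbitrary choice).

```python
import numpy as np

# On (0,1), the optimal Friedrichs constant is 1/pi, attained by sin(pi*x).
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

def l2norm(f):
    # composite trapezoidal rule for the L^2(0,1) norm
    return np.sqrt(dx * (0.5*f[0]**2 + np.sum(f[1:-1]**2) + 0.5*f[-1]**2))

u = np.sin(np.pi * x)          # u in H_0^1(0,1)
du = np.pi * np.cos(np.pi * x)  # u'
ratio = l2norm(u) / l2norm(du)
assert abs(ratio - 1.0/np.pi) < 1e-6
```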
We work with the space $$\begin{aligned}
U := H_0^1(\Omega) \times \Hdivset\Omega\end{aligned}$$ equipped with the $\ts$-dependent norm $$\begin{aligned}
\norm{(u,\ssigma)}\ts^2 := \norm{\nabla u}{}^2 + \norm{\ssigma}{}^2 + \ts\norm{\div\ssigma}{}^2.\end{aligned}$$ The parameter $\ts>0$ corresponds to the time step later on.
Let $0=:t_0<t_1<\dots<t_N:=T$ denote an arbitrary partition of the time interval $[0,T]$. We call $\ts_n :=
t_n-t_{n-1}$ a time step. For a simpler notation, we use upper indices to indicate time evaluations, i.e., for a time-dependent function $v =
v(t;\cdot)$ we use the notation $v^n(\cdot) := v(t_n;\cdot)$. Moreover, we use similar notations for discrete quantities which however are not defined for all $t\in [0,T]$. For instance, later on $u_h^n$ will denote an approximation of $u^n = u(t_n;\cdot)$ where $u$ is the exact solution of \[eq:model\].
Operators involving derivatives with respect to the spatial variables are denoted by $\nabla$ and $\div$, whereas the first and second derivatives with respect to the time variable are denoted by $\dot{(\cdot)}$, $\ddot{(\cdot)}$. For vector-valued functions the time derivative is understood componentwise.
We write $A\lesssim B$ if there exists $C>0$ such that $A\leq C B$ and $C$ is independent of the quantities of interest. Analogously we define $A\gtrsim B$. If both $A\lesssim B$ and $B\lesssim A$ hold, then we write $A\simeq B$. In this work we use these notations if the involved constants only depend on the coefficients $\Amat,\bbeta,\gamma$, the final time $T$, $\Omega$, some fixed polynomial degree $p\in\N_0$ of the discrete spaces, and shape-regularity of the underlying triangulation.
First-order system and least-squares method
-------------------------------------------
Let $u$ denote the exact solution of \[eq:model\]. Introducing $\ssigma:= \Amat\nabla u$, we rewrite \[eq:model\] as the equivalent first-order system
\[eq:fo\] $$\begin{aligned}
{2}
\dot{u} - \div\ssigma -\bbeta\cdot\nabla u + \gamma u &= f &\quad&\text{in } (0,T)\times\Omega, \label{eq:fo:a} \\
\ssigma - \Amat\nabla u &= 0 &\quad&\text{in } (0,T)\times\Omega, \label{eq:fo:b} \\
u &= 0 &\quad&\text{on } (0,T)\times\Gamma, \label{eq:fo:c} \\
u(0,\cdot) &= u^0 &\quad&\text{in }\Omega. \label{eq:fo:d}\end{aligned}$$
We approximate $\dot{u}^n$ by a backward difference, i.e., $\dot{u}^n \approx (u^n-u^{n-1})/\ts_n$. This approach naturally leads to a time-stepping method: given an approximation $\widetilde u^{n-1}$ of $u^{n-1}$, we consider the problem of finding $(u,\ssigma)\in U = H_0^1(\Omega)\times\Hdivset\Omega$ such that
\[eq:model:euler\] $$\begin{aligned}
\frac{u-\widetilde u^{n-1}}{\ts_n} - \div\ssigma -\bbeta\cdot\nabla u + \gamma u &= f^n, \\
\ssigma - \Amat\nabla u &= 0\end{aligned}$$
for $n=1,\dots,N$. The last equations will be put into a variational scheme using a least-squares approach. For given data $g,w\in L^2(\Omega)$ we consider the functional $$\begin{aligned}
\label{eq:def:lsqfun}
J_n(\uu;g,w) := \ts_n \norm{\frac{u-w}{\ts_n}- \div\ssigma -\bbeta\cdot\nabla u + \gamma u-g}{}^2
+ \norm{\Amat^{1/2}\nabla u-\Amat^{-1/2}\ssigma}{}^2\end{aligned}$$ for $\uu = (u,\ssigma)\in U$. Note that \[eq:model:euler\] is an elliptic equation since $\ts_n>0$ and therefore admits a unique solution that can also be characterized as the unique minimizer of the problem $$\begin{aligned}
\label{eq:min:euler}
\min_{\vv\in U} J_n(\vv;f^n,\widetilde u^{n-1}).\end{aligned}$$ It is well-known [@CLMMcC1994] that the functional satisfies the norm equivalence $$\begin{aligned}
\norm{\nabla u}{}^2 + \norm{\ssigma}{}^2 + \norm{\div\ssigma}{}^2 \simeq J_n(\uu;0,0)\end{aligned}$$ for all $\uu = (u,\ssigma)\in U$. In particular, standard textbook knowledge of the least-squares methodology, see [@BochevGunzburgerLSQ], shows that there exists a unique minimizer if the space $U$ in \[eq:min:euler\] is replaced by some closed subspace. Note that the norm equivalence constants depend on the time step $\ts_n$ in general.
We introduce the bilinear forms $\blf_n,\blfTot_n : U\times U \to \R$, $$\begin{aligned}
\blf_n(\uu,\vv) &:= \ip{u}{-\div\ttau-\bbeta\cdot\nabla v+\gamma v} + \ip{-\div\ssigma-\bbeta\cdot\nabla u + \gamma u}{v}
\\ &\qquad+ \ts_n \ip{-\div\ssigma-\bbeta\cdot\nabla u + \gamma u}{-\div\ttau-\bbeta\cdot\nabla v+\gamma v}
\\ &\qquad + \ip{\Amat^{1/2}\nabla u-\Amat^{-1/2}\ssigma}{\Amat^{1/2}\nabla v-\Amat^{-1/2}\ttau}, \\
\blfTot_n(\uu,\vv) &:= \frac1{\ts_n}\ip{u}v + \blf_n(\uu,\vv)\end{aligned}$$ for all $\uu=(u,\ssigma),\vv=(v,\ttau)\in U$. Moreover, define for some given $w\in L^2(\Omega)$, the functional $F_n : U\to \R$ by $$\begin{aligned}
F_n(\vv;f^n,w) := \ts_n\ip{f^n+\frac{w}{\ts_n}}{\frac{v}{\ts_n}-\div\ttau-\bbeta\cdot\nabla v+\gamma v}\end{aligned}$$ for $\vv=(v,\ttau)\in U$.
The *backward Euler* scheme reads as follows: For all $n=1,\dots,N$ let $U_h^n \subseteq U$ denote a closed subspace. Let $u_h^0 \in L^2(\Omega)$ be some approximation of the initial data, i.e., $u_h^0\approx u^0$. For $n=1,\dots,N$ we seek solutions $\uu_h^n = (u_h^n,\ssigma_h^n)\in U_h^n$ of $$\begin{aligned}
\label{eq:euler}
\blfTot_n(\uu_h^n,\vv_h) = F_n(\vv_h;f^n,u_h^{n-1}) \quad\text{for all } \vv_h = (v_h,\ttau_h)\in U_h^n.\end{aligned}$$
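Structurally, the scheme above is one linear solve per time step. The following sketch mimics this with a 1-D finite-difference discretization of the elliptic part, i.e., it is a hypothetical stand-in for the least-squares FEM solve in \[eq:euler\], not the method itself; the coefficients $\bbeta=0.5$, $\gamma=1$, the data, and the mesh are arbitrary choices.

```python
import numpy as np

# One-dimensional stand-in for the abstract scheme: at each time step solve
# (I/tau + A_h) u^n = u^{n-1}/tau + f^n, with A_h a finite-difference
# discretization of -u'' - beta*u' + gamma*u on (0,1), homogeneous Dirichlet data.
def assemble(m, h, beta=0.5, gamma=1.0):
    A = np.zeros((m, m))
    for i in range(m):
        A[i, i] = 2.0/h**2 + gamma
        if i > 0:
            A[i, i-1] = -1.0/h**2 + beta/(2*h)   # from -u'' - beta*u'
        if i < m-1:
            A[i, i+1] = -1.0/h**2 - beta/(2*h)
    return A

def backward_euler(u0, f, A, tau, nsteps):
    B = np.eye(u0.size)/tau + A          # system matrix of one time step
    u = u0.copy()
    for _ in range(nsteps):
        u = np.linalg.solve(B, u/tau + f)
    return u

m = 99
h = 1.0/(m + 1)
x = np.linspace(h, 1 - h, m)
A = assemble(m, h)
u = backward_euler(np.sin(np.pi*x), np.ones(m), A, tau=0.02, nsteps=100)
```

For time-independent data the iterates approach the solution of the stationary elliptic problem, which serves as a simple consistency check.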
From our discussion above we conclude:
\[thm:euler\] For all $n=1,\dots,N$ Problem \[eq:euler\] admits a unique solution $\uu_h^n\in U_h^n$.
Problem decoupling for convection-reaction free problems {#section:decoupling}
--------------------------------------------------------
In the case of convection and reaction free problems, i.e., $(\bbeta,\gamma)=(0,0)$, one can show, see [@BochevGunzburgerLSQ; @SplitLSQ], that the backward Euler least-squares method reduces to a simplified problem: Suppose $(\bbeta,\gamma)=(0,0)$ and that $\Amat$ is the identity. The bilinear form then reads $$\begin{aligned}
\blfTot_n(\uu,\vv) &= \frac1{\ts_n}\ip{u}v + \ip{u}{-\div\ttau} + \ip{-\div\ssigma}{v}
+ \ts_n \ip{-\div\ssigma}{-\div\ttau} +
\ip{\nabla u-\ssigma}{\nabla v-\ttau} \\
&= \frac1{\ts_n}\ip{u}v + \ip{\nabla u}{\ttau} + \ip{\ssigma}{\nabla v} + \ts_n \ip{\div\ssigma}{\div\ttau} +
\ip{\nabla u-\ssigma}{\nabla v-\ttau} \\
&= \frac1{\ts_n}\ip{u}v + \ts_n \ip{\div\ssigma}{\div\ttau} + \ip{\nabla u}{\nabla v} + \ip{\ssigma}{\ttau},\end{aligned}$$ where we have only used integration by parts. Testing with $\vv_h = (v_h,0)$ in \[eq:euler\] leads to the variational problem $$\begin{aligned}
\frac1{\ts_n}\ip{u_h^n}{v_h} + \ip{\nabla u_h^n}{\nabla v_h} = \frac1{\ts_n} \ip{u_h^{n-1}}{v_h} +
\ip{f^n}{v_h},\end{aligned}$$ which is independent of $\ssigma_h^j$ for all $j=1,\dots,n$. Hence, this is the backward Euler method for the standard Galerkin scheme, where optimal error estimates are known [@thomee]. Therefore, the least-squares problem simplifies to a well-known method. However, the situation is completely different when $(\bbeta,\gamma)\neq (0,0)$ because the problem does not decouple anymore and therefore does not reduce to a simplified method. We stress that our analysis below is in particular also valid for $(\bbeta,\gamma)\neq (0,0)$.
Discrete spaces and approximation properties {#sec:discrete}
--------------------------------------------
The main ideas of our analysis are independent of specific discrete spaces. However, for the analysis of convergence orders we will fix the following discrete setting. By $\TT$ we denote a simplicial mesh on $\Omega$. As usual, $h$ denotes the biggest element diameter in $\TT$. We will only consider uniform sequences of meshes, and we assume that our meshes are shape-regular. The discrete spaces which we will use are the space $\cS_0^{p+1}(\TT)$ of globally continuous, $\TT$-piecewise polynomials of degree at most $p+1$ satisfying homogeneous boundary conditions, and the Raviart-Thomas space $\RT^p(\TT)$ of order $p$. We will also need the space $\PP^p(\TT)$ of piecewise polynomials of degree at most $p$. For the remainder of this work we consider the following discrete subspace of $U$, $$\begin{aligned}
U_h := U_{h,p} := \cS_0^{p+1}(\TT) \times \RT^p(\TT).\end{aligned}$$ We will need certain interpolation operators mapping into these spaces. By $I^{\rm SZ}_{h,p}:H^1_0(\Omega)\rightarrow\cS^{p+1}_0(\TT)$ we denote the Scott-Zhang interpolation operator from [@SZ_90], by $I^{\rm
RT}_{h,p}:\Hdivset\Omega\cap\HH^1(\Omega)\rightarrow \RT^p(\TT)$ the Raviart-Thomas interpolation operator [@RT_77], and by $\Pi_{h,p}:L^2(\Omega)\rightarrow \PP^p(\TT)$ the $L^2(\Omega)$-orthogonal projection. We keep in mind that the commutativity property $\div I^{\rm RT}_{h,p}\ssigma = \Pi_{h,p}\div\ssigma$ holds.
The mesh $\TT$ is called shape-regular if $$\begin{aligned}
\max_{T\in\TT} \frac{\diam(T)^d}{|T|} \leq \kappa < \infty,\end{aligned}$$ where $|T|$ denotes the volume of an element.
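The shape-regularity indicator $\diam(T)^d/|T|$ can be evaluated directly from the vertex coordinates. A small sketch for $d=2$; the triangles below are arbitrary examples.

```python
import numpy as np
from itertools import combinations

# Shape-regularity indicator diam(T)^d / |T| for a triangle (d = 2),
# vertices given as the rows of a 3x2 array.
def shape_ratio(verts):
    diam = max(np.linalg.norm(p - q) for p, q in combinations(verts, 2))
    # area via the 2x2 determinant of the edge vectors
    area = 0.5 * abs((verts[1, 0] - verts[0, 0]) * (verts[2, 1] - verts[0, 1])
                     - (verts[1, 1] - verts[0, 1]) * (verts[2, 0] - verts[0, 0]))
    return diam**2 / area

# unit right triangle: diam = sqrt(2), |T| = 1/2, hence ratio = 4
T = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
assert abs(shape_ratio(T) - 4.0) < 1e-12
```

Uniform refinement (bisecting all edges) produces similar triangles, so this ratio stays bounded along the mesh sequence.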
We make use of the spaces $$\begin{aligned}
C^1(\TT) &:= \set{v\in L^\infty(\Omega)}{v|_T \in C^1(\overline T) \text{ for all } T\in\TT}, \\
H^{p+1}(\TT) &:= \set{v\in L^2(\Omega)}{v|_T \in H^{p+1}(T) \text{ for all } T\in\TT},\end{aligned}$$ where $H^{p+1}(T)$ denotes the Sobolev space on $T$. Moreover, let $\pwnabla : H^1(\TT) \to \LL^2(\Omega)$ denote the elementwise gradient operator.
The following result is a simple consequence of the approximation properties of the interpolation operators mentioned above, respectively the commutativity property.
\[prop:approx\] Let $\uu = (u,\ssigma) \in U$, $p\in\N_0$. Suppose that $u\in H^{p+2}(\Omega)\cap H_0^1(\Omega)$, $\ssigma\in
\HH^{p+1}(\Omega)$, and $\div\ssigma\in H^{p+1}(\TT)$. Then, $$\begin{aligned}
\min_{\vv_h\in U_h} \norm{\uu-\vv_h}\ts \leq C_\mathrm{app} h^{p+1} (\norm{u}{H^{p+2}(\Omega)} +
\norm{\ssigma}{\HH^{p+1}(\Omega)} + \ts^{1/2}\norm{\div\ssigma}{H^{p+1}(\TT)})
\end{aligned}$$ The constant $C_\mathrm{app}>0$ only depends on shape-regularity of $\TT$ and $p\in \N_0$.
Regularity of solutions
-----------------------
For some given $g\in L^2(\Omega)$ we consider the second order elliptic problem $$\begin{aligned}
\begin{split}
-\div\Amat\nabla v -\bbeta\cdot\nabla v + \gamma v &= g, \\
v|_\Gamma &= 0
\end{split}\end{aligned}$$ and the equivalent first-order problem
$$\begin{aligned}
-\div\ttau - \bbeta\cdot\nabla v + \gamma v &= g, \\
\Amat\nabla v - \ttau &= 0, \\
v|_\Gamma &= 0.\end{aligned}$$
Moreover, we consider the (dual or adjoint) problem $$\begin{aligned}
-\div(\Amat\nabla w - \bbeta w) + \gamma w &= g, \\
w|_\Gamma &= 0.\end{aligned}$$ Throughout, we assume that $\bbeta$ and $\gamma$ satisfy
\[eq:regularity\] $$\begin{aligned}
\label{eq:regularity:b}
\bbeta\in C^1(\TT)^d \cap \Hdivset\Omega, \quad \gamma\in C^1(\TT).\end{aligned}$$ For the results concerning optimal rates of the $L^2$ error we additionally assume that $\Omega$ and the coefficients are such that (for the solutions above) $$\begin{aligned}
\label{eq:regularity:a}
\norm{v}{H^2(\Omega)} + \norm{\ttau}{\HH^1(\Omega)} + \norm{w}{H^2(\Omega)} \lesssim \norm{g}{}.\end{aligned}$$
Estimate \[eq:regularity:a\] is satisfied for $d=2$ if $\Omega$ is convex, $\Amat$ the identity, and $\bbeta,\gamma$ satisfy \[eq:regularity:b\], see [@grisvard] for details on the regularity of solutions on polygonal domains.
Elliptic projection and duality arguments {#sec:duality}
-----------------------------------------
In this section we introduce and analyse the so-called *elliptic projection operator* $\ep_h : U\to
U_h$, which will be used in the remainder of this work. As discussed in the introduction, the elliptic part of the problem is described by a non-symmetric bilinear form given by $$\begin{aligned}
\blfns_n(\uu,\vv) &:= \ip{-\div\ssigma-\bbeta\cdot\nabla u + \gamma u}{v}
\\ &\qquad+ \ts_n \ip{-\div\ssigma-\bbeta\cdot\nabla u + \gamma u}{-\div\ttau-\bbeta\cdot\nabla v+\gamma v}
\\ &\qquad + \ip{\Amat^{1/2}\nabla u-\Amat^{-1/2}\ssigma}{\Amat^{1/2}\nabla v-\Amat^{-1/2}\ttau}\end{aligned}$$ for all $\uu,\vv\in U$.
We analyse basic properties of $\blf$ and $\blfns$. For the remainder of this section we set $\ts:=\ts_n$ and skip the index $n$ in the bilinear forms.
\[lem:blf\] There exist constants $c_\blf,C_\blf>0$ independent of $\ts\in(0,T]$ such that $$\begin{aligned}
c_\blf\norm{\uu}\ts^2 &\leq \min\{ \blf(\uu,\uu),\blfns(\uu,\uu) \}, \label{eq:blf:coer}\\
\max\{ |\blf(\uu,\vv)|,|\blfns(\uu,\vv)|\} &\leq C_\blf \norm{\uu}\ts\norm{\vv}\ts \label{eq:blf:bound}
\end{aligned}$$ for all $\uu,\vv\in U$.
We start by proving boundedness \[eq:blf:bound\] of $\blfns$. The Cauchy-Schwarz inequality, the triangle inequality, and the Friedrichs inequality show that $$\begin{aligned}
&\ts |\ip{-\div\ssigma-\bbeta\cdot\nabla u + \gamma u}{-\div\ttau-\bbeta\cdot\nabla v+\gamma v}|
\\&\qquad \leq \ts^{1/2} \norm{-\div\ssigma-\bbeta\cdot\nabla u + \gamma u}{}
\ts^{1/2} \norm{-\div\ttau-\bbeta\cdot\nabla v+\gamma v}{} \\
&\qquad \lesssim (\ts^{1/2} \norm{\div\ssigma}{} + \norm{\nabla u}{} + \norm{u}{})
(\ts^{1/2} \norm{\div\ttau}{} + \norm{\nabla v}{} + \norm{v}{})
\lesssim \norm{\uu}\ts\norm{\vv}\ts .
\end{aligned}$$ Moreover, $$\begin{aligned}
|\ip{\Amat^{1/2}\nabla u-\Amat^{-1/2}\ssigma}{\Amat^{1/2}\nabla v-\Amat^{-1/2}\ttau}|
&\lesssim (\norm{\nabla u}{}+\norm{\ssigma}{})(\norm{\nabla v}{}+\norm{\ttau}{})
\lesssim \norm{\uu}\ts\norm{\vv}\ts.
\end{aligned}$$ Integration by parts yields $$\begin{aligned}
|\ip{-\div\ssigma-\bbeta\cdot\nabla u + \gamma u}{v}| &= |\ip{\ssigma}{\nabla v} - \ip{\bbeta\cdot\nabla u}v
+\ip{\gamma u}v| \lesssim \norm{\uu}\ts\norm{\vv}\ts.
\end{aligned}$$ Putting everything together shows boundedness of $\blfns$. The same arguments prove boundedness of $\blf$ and therefore \[eq:blf:bound\]. It remains to show \[eq:blf:coer\]. First, from \[eq:coeff\] it follows with integration by parts that $$\begin{aligned}
\ip{u}{-\bbeta\cdot\nabla u+\gamma u} = \ip{u}{(\tfrac12\div\bbeta+\gamma)u}\geq 0.
\end{aligned}$$ This implies that $$\begin{aligned}
\ip{u}{-\div\ssigma-\bbeta\cdot\nabla u+\gamma u} = \ip{\nabla u}{\ssigma} +
\ip{u}{-\bbeta\cdot\nabla u+\gamma u}\geq \ip{\nabla u}{\ssigma}.
\end{aligned}$$ Therefore, $$\begin{aligned}
&2\ip{u}{-\div\ssigma-\bbeta\cdot\nabla u+\gamma u} + \norm{\Amat^{1/2}\nabla u-\Amat^{-1/2}\ssigma}{}^2
\\&\qquad\geq 2\ip{\nabla u}{\ssigma} + \norm{\Amat^{1/2}\nabla u}{}^2 + \norm{\Amat^{-1/2}\ssigma}{}^2
- 2\ip{\nabla u}{\ssigma}
= \norm{\Amat^{1/2}\nabla u}{}^2 + \norm{\Amat^{-1/2}\ssigma}{}^2.
\end{aligned}$$ Together this yields $$\begin{aligned}
\norm{\uu}\ts^2 &= \norm{\nabla u}{}^2 + \norm{\ssigma}{}^2 + \ts\norm{\div\ssigma}{}^2
\lesssim \norm{\nabla u}{}^2 + \norm{\ssigma}{}^2 + \ts\norm{-\div\ssigma-\bbeta\cdot\nabla u +\gamma u}{}^2 \\
&\lesssim \norm{\Amat^{1/2}\nabla u}{}^2 + \norm{\Amat^{-1/2}\ssigma}{}^2 + \ts\norm{-\div\ssigma-\bbeta\cdot\nabla u +\gamma u}{}^2\\
&\leq 2\ip{u}{-\div\ssigma-\bbeta\cdot\nabla u+\gamma u} + \norm{\Amat^{1/2}\nabla u-\Amat^{-1/2}\ssigma}{}^2
+ \ts \norm{-\div\ssigma-\bbeta\cdot\nabla u+\gamma u}{}^2 = \blf(\uu,\uu).
\end{aligned}$$ Coercivity of $\blfns$ follows from that of $\blf$, since $$\begin{aligned}
\blfns(\uu,\uu) \geq \tfrac12 \blf(\uu,\uu).
\end{aligned}$$ This finishes the proof.
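The crucial step of the coercivity proof is the completion of the square after the cross term has been dropped: with $a = \Amat^{1/2}\nabla u$ and $s = \Amat^{-1/2}\ssigma$ it uses the exact algebraic identity $2\langle a,s\rangle + \|a-s\|^2 = \|a\|^2 + \|s\|^2$, which can be verified numerically for arbitrary vectors.

```python
import numpy as np

# The algebraic identity behind the coercivity estimate:
# 2<a, s> + ||a - s||^2 = ||a||^2 + ||s||^2,
# used with a = A^{1/2} grad(u) and s = A^{-1/2} sigma.
rng = np.random.default_rng(1)
a = rng.standard_normal(1000)
s = rng.standard_normal(1000)
lhs = 2*np.dot(a, s) + np.dot(a - s, a - s)
rhs = np.dot(a, a) + np.dot(s, s)
assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```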
For given $\uu=(u,\ssigma)\in U$ define the *elliptic projector* $\ep_h\uu = (\ep_h^\nabla \uu,\ep_h^{\div}\uu)\in U_h$ by $$\begin{aligned}
\label{eq:def:ep}
\blfns(\ep_h\uu,\vv_h) = \blfns(\uu,\vv_h) \quad\text{for all } \vv_h\in U_h.\end{aligned}$$
\[thm:ep\] If $\uu\in U$, then $\ep_h\uu$ is well-defined.
In particular, it holds that $$\begin{aligned}
\label{eq:ep:opt}
\norm{\uu-\ep_h\uu}\ts \leq C_\mathrm{opt} \inf_{\vv_h\in U_h} \norm{\uu-\vv_h}\ts.
\end{aligned}$$ The constant $C_\mathrm{opt}>0$ only depends on $c_\blf,C_\blf>0$.
Moreover, if \[eq:regularity\] holds, then $$\begin{aligned}
\label{eq:ep:duality}
\norm{u-\ep_h^\nabla \uu}{} \leq C_{L^2} h \norm{\uu-\ep_h\uu}\ts.
\end{aligned}$$ The constant $C_{L^2}>0$ only depends on $c_\blf,C_\blf>0$, shape-regularity of $\TT$, the constant in \[eq:regularity\], $\bbeta,\gamma$, $T$, and $\Omega$.
Note that \[lem:blf\] together with the Lax-Milgram theory shows that $\ep_h\uu$ is well-defined and that \[eq:ep:opt\] holds.
To prove \[eq:ep:duality\] we need to develop some duality arguments. For optimal $L^2$ error estimates for least-squares finite element methods we refer to [@LSQduality]. Note that in our case the bilinear form $\blfns$ is not symmetric and thus does not directly correspond to a least-squares method. We divide the proof into several steps. Throughout we use $\uu_d := (u_d,\ssigma_d) := \uu-\ep_h \uu$.
**Step 1.** Define the function $w\in H_0^1(\Omega)$ through the unique solution of the PDE $$\begin{aligned}
-\div(\Amat\nabla w -\bbeta w) + \gamma w = u_d.
\end{aligned}$$ By assumption \[eq:regularity:a\] there holds $w\in H^2(\Omega)$ and $$\begin{aligned}
\label{eq:reg}
\norm{w}{H^{2}(\Omega)} \lesssim \norm{u_d}{}.
\end{aligned}$$ In particular, we have that $$\begin{aligned}
\label{eq:dual}
\begin{split}
\norm{u_d}{}^2 &= \ip{u_d}{-\div(\Amat\nabla w -\bbeta w) + \gamma w} = \ip{\nabla u_d}{\Amat\nabla w-\bbeta w} +
\ip{u_d}{\gamma w} \\
&= \ip{\Amat^{1/2}\nabla u_d}{\Amat^{1/2}\nabla w} + \ip{-\bbeta\cdot\nabla u_d+\gamma u_d}{w}\\
&= \ip{\Amat^{1/2}\nabla u_d-\Amat^{-1/2}\ssigma_d}{\Amat^{1/2}\nabla w} + \ip{-\div\ssigma_d-\bbeta\cdot\nabla u_d+\gamma u_d}{w},
\end{split}
\end{aligned}$$ where we used integration by parts and $0=\ip{-\Amat^{-1/2}\ssigma_d}{\Amat^{1/2}\nabla w} - \ip{\div\ssigma_d}{w}$ in the last step.
**Step 2.** We will construct a function $\vv = (v,\ttau)\in U$ such that $$\begin{aligned}
\label{eq:dual:identity}
\norm{u_d}{}^2 = \blfns(\uu_d,\vv).
\end{aligned}$$ To that end, let $v\in H_0^1(\Omega)$ be the unique solution of $$\begin{aligned}
\label{eq:dual:v}
-\div\Amat\nabla v - \bbeta\cdot\nabla v + \gamma v + \frac1\ts v = -\div\Amat\nabla w + \frac1\ts w.
\end{aligned}$$ From the first step and \[eq:regularity:b\] we conclude that the right-hand side is in $L^2(\Omega)$. Then, by assumption \[eq:regularity:a\] there holds $v\in H^2(\Omega)$. If we define $\ttau$ by
\[eq:dual:fo\] $$\begin{aligned}
\label{eq:dual:fo:1}
\Amat^{1/2}\nabla v-\Amat^{-1/2}\ttau &= \Amat^{1/2}\nabla w,
\end{aligned}$$ then it follows from \[eq:dual:v\] that $$\begin{aligned}
-\div\ttau - \bbeta\cdot\nabla v + \gamma v + \frac1\ts v &= \frac1\ts w,
\end{aligned}$$
and hence $\ttau\in\Hdivset\Omega$. Using \[eq:dual:fo\] in \[eq:dual\] finally shows \[eq:dual:identity\].
**Step 3.** We will show that $$\begin{aligned}
\norm{v}{H^2(\Omega)} + \norm{\ttau}{\HH^1(\Omega)} \lesssim \norm{u_d}{}.
\end{aligned}$$ Note that because $\ts$ can be arbitrarily small, we cannot extract such a $\ts$-independent estimate directly from \[eq:dual:v\]. We therefore make the ansatz $v = \widetilde v + w$ with $\widetilde v\in H_0^1(\Omega)$. From \[eq:dual:v\] it follows that $$\begin{aligned}
-\div\Amat\nabla \widetilde v - \bbeta\cdot\nabla \widetilde v + \gamma \widetilde v + \frac1\ts \widetilde v
- \bbeta\cdot\nabla w + \gamma w = 0.
\end{aligned}$$ Multiplying by $\ts$ and simplifying yields $$\begin{aligned}
\label{eq:dual:vTilde}
\ts(-\div\Amat\nabla \widetilde v - \bbeta\cdot\nabla \widetilde v + \gamma \widetilde v) + \widetilde v
= \ts(\bbeta\cdot\nabla w - \gamma w).
\end{aligned}$$ Testing with $\widetilde v$, using that $\ip{-\bbeta\cdot\nabla \widetilde v+\gamma \widetilde v}{\widetilde v}\geq 0$, and employing the bound \[eq:reg\], we infer $$\begin{aligned}
\ts \norm{\nabla\widetilde v}{}^2 + \norm{\widetilde v}{}^2
\lesssim \ts\norm{\bbeta\cdot\nabla w - \gamma w}{}\norm{\widetilde v}{} \leq \ts\norm{u_d}{}
(\norm{\widetilde v}{} + \ts^{1/2}\norm{\nabla \widetilde v}{}).
\end{aligned}$$ This shows that $$\begin{aligned}
\label{eq:bound:ts}
\norm{\nabla \widetilde v}{} \lesssim \ts^{1/2}\norm{u_d}{} \quad\text{and}\quad \norm{\widetilde v}{} \lesssim \ts
\norm{u_d}{}.
\end{aligned}$$ With $v=w+\widetilde v$ we rewrite the first-order system \[eq:dual:fo\] as $$\begin{aligned}
\Amat^{1/2}\nabla\widetilde v - \Amat^{-1/2}\ttau &= 0, \\
-\div\ttau-\bbeta\cdot\nabla\widetilde v + \gamma \widetilde v
&= \bbeta\cdot\nabla w - \gamma w - \frac1\ts \widetilde v.
\end{aligned}$$ By our regularity assumptions \[eq:regularity\] and the bounds \[eq:reg\] and \[eq:bound:ts\] we conclude $$\begin{aligned}
\norm{\ttau}{\HH^1(\Omega)} + \norm{\widetilde v}{H^2(\Omega)} \lesssim \norm{-\bbeta\cdot\nabla w + \gamma w -
\frac1\ts \widetilde v}{} \lesssim \norm{u_d}{}.
\end{aligned}$$ With the triangle inequality and \[eq:reg\] we conclude this step.
**Step 4.** We prove that $$\begin{aligned}
\ts^{1/2}\norm{\div(1-I^{\rm RT}_{h,0})\ttau}{} \lesssim h \norm{u_d}{}.
\end{aligned}$$ Recall from Step 3 that $\div\ttau = -\bbeta\cdot\nabla v+\gamma v + \frac1\ts \widetilde v$. By assumption \[eq:regularity:b\] we have $\bbeta\in C^1(\TT)^d$, and $\gamma\in C^1(\TT)$. Therefore, $\div\ttau \in
H^1(\TT)$. Using the commutativity property of $I^{\rm RT}_{h,0}$ we infer $$\begin{aligned}
\ts^{1/2}\norm{\div(1-I^{\rm RT}_{h,0})\ttau}{} &= \ts^{1/2}\norm{(1-\Pi_{h,0})\div\ttau}{}
\lesssim \ts^{1/2} h \norm{\pwnabla \div\ttau}{} \\
&\lesssim \ts^{1/2} h (\norm{v}{H^2(\Omega)} + \frac1\ts\norm{\nabla \widetilde v}{}).
\end{aligned}$$ From Step 3 we know that $\norm{v}{H^2(\Omega)}\lesssim \norm{u_d}{}$ and $\norm{\nabla\widetilde v}{}\lesssim \ts^{1/2}\norm{u_d}{}$. Hence, $$\begin{aligned}
\ts^{1/2}\norm{\div(1-I^{\rm RT}_{h,0})\ttau}{}
\lesssim h \norm{u_d}{}.
\end{aligned}$$
**Step 5.** From the definition \[eq:def:ep\] of $\ep_h$, $\uu_d = \uu-\ep_h\uu$, and boundedness of $\blfns$, it follows that $$\begin{aligned}
\norm{u-\ep_h^\nabla \uu}{}^2 = \blfns(\uu-\ep_h\uu,\vv) = \blfns(\uu-\ep_h\uu,\vv-\vv_h)
\lesssim \norm{\uu-\ep_h\uu}\ts \norm{\vv-\vv_h}\ts
\end{aligned}$$ for any $\vv_h\in U_h$. We choose $\vv_h = (I^{\rm SZ}_{h,1} v,I^{\rm RT}_{h,0}\ttau)$. It remains to show $\norm{\vv-\vv_h}\ts \lesssim h \norm{u-\ep_h^\nabla \uu}{}$ to finish the proof. With the regularity estimates from Step 3 and the approximation properties of the operators from \[sec:discrete\] we infer $$\begin{aligned}
\norm{\nabla(v-v_h)}{} + \norm{\ttau-\ttau_h}{} \lesssim h (\norm{v}{H^2(\Omega)} + \norm{\ttau}{\HH^1(\Omega)})
\lesssim h\norm{u-\ep_h^\nabla\uu}{}.
\end{aligned}$$ Then, together with the estimate from Step 4 this shows $$\begin{aligned}
\norm{\vv-\vv_h}\ts \lesssim h \norm{u-\ep_h^\nabla \uu}{},
\end{aligned}$$ which finishes the proof.
The following is a simple consequence of \[thm:ep,prop:approx\].
\[cor:ep:rates\] Suppose $u\in H^{p+2}(\Omega)\cap H_0^1(\Omega)$, $\ssigma\in \HH^{p+1}(\Omega)$ such that $\div\ssigma\in
H^{p+1}(\TT)$ and define $\uu := (u,\ssigma) \in U$. Then, $$\begin{aligned}
\norm{\uu-\ep_h\uu}\ts \leq C h^{p+1}(\norm{u}{H^{p+2}(\Omega)} + \norm{\ssigma}{\HH^{p+1}(\Omega)} +
\ts^{1/2}\norm{\div\ssigma}{H^{p+1}(\TT)} )
\end{aligned}$$ If additionally \[eq:regularity\] holds true, then $$\begin{aligned}
\norm{u-\ep_h^\nabla \uu}{} \leq C h^{p+2} (\norm{u}{H^{p+2}(\Omega)} + \norm{\ssigma}{\HH^{p+1}(\Omega)} +
\ts^{1/2}\norm{\div\ssigma}{H^{p+1}(\TT)} ).
\end{aligned}$$ The constant $C>0$ depends only on the constants from \[thm:ep\] and \[prop:approx\].
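In numerical experiments, rates such as those in \[cor:ep:rates\] are typically verified by computing observed convergence orders from errors on a sequence of uniformly refined meshes. A small generic sketch; the error values below are synthetic, not computed by the method of this paper.

```python
import numpy as np

# From e_i ~ C h_i^r on successive meshes, the observed order is
# r ~ log(e_i / e_{i+1}) / log(h_i / h_{i+1}).
def observed_orders(h, e):
    h, e = np.asarray(h, float), np.asarray(e, float)
    return np.log(e[:-1]/e[1:]) / np.log(h[:-1]/h[1:])

# synthetic errors behaving like C*h^(p+2) with p = 1, i.e. order 3
h = np.array([1/4, 1/8, 1/16, 1/32])
e = 3.0 * h**3
assert np.allclose(observed_orders(h, e), 3.0)
```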
A priori analysis {#sec:apriori}
=================
Stability of discrete solutions
-------------------------------
Our first result treats stability of discrete solutions of \[eq:euler\].
\[thm:stability:euler\] Let $f\in C^0([0,T];L^2(\Omega))$ and let $u_h^0\in L^2(\Omega)$ be given. Let $\uu_h^n=(u_h^n,\ssigma_h^n)\in U_h^n$, $n=1,\dots,N$, denote the solutions of \[eq:euler\]. The $n$-th iteration satisfies $$\begin{aligned}
\norm{u_h^n}{} \leq \sum_{j=1}^n \ts_j \norm{f^j}{} + \norm{u_h^0}{}.
\end{aligned}$$
Apply the Cauchy-Schwarz estimate to get $$\begin{aligned}
|F_n(\vv_h;f^n,u_h^{n-1})| &\leq \ts_n^{1/2}\norm{\frac1{\ts_n}u_h^{n-1}+f^n}{} \ts_n^{1/2}\norm{\frac1{\ts_n}v_h
-\div\ttau_h-\bbeta\cdot\nabla v_h +\gamma v_h}{} \\
&\leq \ts_n^{1/2}\norm{\frac1{\ts_n}u_h^{n-1}+f^n}{} \, \blfTot_n(\vv_h,\vv_h)^{1/2}.
\end{aligned}$$ Thus, the choice $\vv_h = \uu_h^n$ and \[eq:euler\] lead to $$\begin{aligned}
\blfTot_n(\uu_h^n,\uu_h^n)^{1/2} \leq \ts_n^{1/2} \norm{f^n}{} + \ts_n^{-1/2}\norm{u_h^{n-1}}{}.
\end{aligned}$$ Multiplying by $\ts_n^{1/2}$ and using that $\blf(\cdot,\cdot)$ is coercive (see \[lem:blf\]) we infer $$\begin{aligned}
\norm{u_h^n}{} \leq \ts_n^{1/2}\blfTot_n(\uu_h^n,\uu_h^n)^{1/2} \leq \ts_n\norm{f^n}{} +
\norm{u_h^{n-1}}{}.
\end{aligned}$$ Iterating the same arguments yields the assertion.
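A bound of the form $\norm{u_h^n}{} \leq \sum_{j\leq n} \ts_j \norm{f^j}{} + \norm{u_h^0}{}$ can also be observed numerically. The sketch below uses a 1-D finite-difference backward Euler scheme as a hypothetical stand-in for the least-squares method (the bound holds here because the symmetric part of the discrete elliptic operator is positive semidefinite); coefficients and data are arbitrary choices.

```python
import numpy as np

# 1-D backward Euler for u_t - u'' - beta*u' + gamma*u = f, Dirichlet data;
# check ||u^n|| <= sum_j tau*||f|| + ||u^0|| along the iteration.
m, tau, nsteps = 99, 0.01, 200
h, beta, gamma = 1.0/(m + 1), 0.5, 1.0
A = np.zeros((m, m))
for i in range(m):
    A[i, i] = 2.0/h**2 + gamma
    if i > 0:
        A[i, i-1] = -1.0/h**2 + beta/(2*h)
    if i < m-1:
        A[i, i+1] = -1.0/h**2 - beta/(2*h)
B = np.eye(m)/tau + A                        # one-step system matrix

x = np.linspace(h, 1 - h, m)
u = np.sin(np.pi*x)                          # initial data
f = np.cos(3*np.pi*x)                        # time-independent right-hand side
l2 = lambda v: np.sqrt(h)*np.linalg.norm(v)  # discrete L^2(0,1) norm

bound = l2(u)
ok = True
for n in range(nsteps):
    u = np.linalg.solve(B, u/tau + f)
    bound += tau*l2(f)
    ok = ok and (l2(u) <= bound + 1e-12)
assert ok
```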
Optimal $L^2$ estimates {#sec:apriori:L2}
-----------------------
Our second main result concerns optimal estimates in the $L^2(\Omega)$ norm. We will assume that the initial data $u_h^0$ for problem \[eq:euler\] is chosen such that $$\begin{aligned}
\label{eq:choice:initDataL2}
\norm{\ep_h^\nabla \uu^0-u_h^0}{} \leq C_0 h^{p+2},\end{aligned}$$ where $C_0$ is some generic constant depending also on $\uu^0$ and $\ep_h=(\ep_h^\nabla,\ep_h^\div)$ is the elliptic projection operator from \[sec:duality\]. For a simpler representation of the error estimates, we use the following norms in the remainder of this article: $$\begin{aligned}
\enorm{(v,\ttau)}_p^2 := \norm{v}{H^{p+2}(\Omega)}^2 + \norm{\ttau}{\HH^{p+1}(\Omega)}^2 +
\norm{\div\ttau}{H^{p+1}(\TT)}^2.\end{aligned}$$ Whenever such a term appears, we assume that $$\begin{aligned}
\sup_{t\in[0,T]} \enorm{(v,\ttau)}_p < \infty.\end{aligned}$$ This means that we implicitly assume some regularity of the function. Similarly, when $\norm{\ddot{u}}{}$ appears we assume that $\sup_{t\in[0,T]} \norm{\ddot{u}}{} < \infty$.
\[thm:euler:L2\] Suppose $\ts_n = \ts>0$ for $n=1,\dots,N$ and let $\uu_h^n\in U_h^n=U_h$ denote the solution of \[eq:euler\] and let $\uu=(u,\ssigma)$ denote the solution of \[eq:fo\]. If \[eq:regularity\] and \[eq:choice:initDataL2\] hold true, then $$\begin{aligned}
\norm{u^n-u_h^n}{} \leq C(\uu)( h^{p+2} + \ts),
\end{aligned}$$ where the constant $C(\uu) = C(\enorm{\uu}_p,\enorm{\dot\uu}_p,\norm{\ddot u}{},C_0)$ and $C_0$ is the constant from \[eq:choice:initDataL2\].
We consider the error splitting $$\begin{aligned}
u^n-u_h^n = (u^n-\ep_h^\nabla \uu^n) + (\ep_h^\nabla\uu^n-u_h^n),
\end{aligned}$$ where $\ep_h\uu\in U_h$ is the elliptic projection defined and analysed in \[sec:duality\].
**Step 1.** By \[cor:ep:rates\] we have that $$\begin{aligned}
\norm{u^n-\ep_h^\nabla\uu^n}{} \lesssim h^{p+2} \enorm{\uu^n}_p.
\end{aligned}$$
**Step 2.** Observe that $\ep_h\uu^n$ satisfies the equation $$\begin{aligned}
\frac1\ts\ip{\ep_h^\nabla \uu^n}{v_h} + \blf(\ep_h\uu^n,\vv_h) &=
\frac1\ts\ip{\ep_h^\nabla \uu^n}{v_h} + \ip{\ep_h^\nabla\uu^n}{-\div\ttau_h-\bbeta\cdot\nabla v_h+\gamma v_h} +
\blfns(\ep_h\uu^n,\vv_h) \\
&= \frac1\ts\ip{\ep_h^\nabla \uu^n}{v_h} + \ip{\ep_h^\nabla\uu^n}{-\div\ttau_h-\bbeta\cdot\nabla v_h+\gamma v_h} +
\blfns(\uu^n,\vv_h) \\
&= \ip{\ep_h^\nabla \uu^n+\ts(f^n-\dot{u}^n)}{\frac1\ts v_h-\div\ttau_h-\bbeta\cdot\nabla v_h+\gamma v_h},
\end{aligned}$$ where the second identity uses the defining property of the elliptic projection and the third uses that $\uu$ solves \[eq:fo\]. By \[eq:euler\] the discrete solution satisfies $$\begin{aligned}
\frac1\ts\ip{u_h^n}{v_h} + \blf(\uu_h^n,\vv_h) =
\ts\ip{\frac{u_h^{n-1}}\ts+f^n}{\frac1\ts v_h-\div\ttau_h-\bbeta\cdot\nabla v_h+\gamma v_h}.
\end{aligned}$$ Then, the error equations read $$\begin{aligned}
\frac1\ts\ip{\ep_h^\nabla \uu^n-u_h^n}{v_h} + \blf(\ep_h\uu^n-\uu_h^n,\vv_h) =
\ip{\ep_h^\nabla \uu^n-u_h^{n-1}-\ts\dot{u}^n}{\frac1\ts v_h-\div\ttau_h-\bbeta\cdot\nabla v_h+\gamma v_h}.
\end{aligned}$$ The right-hand side is estimated with $$\begin{aligned}
&|\ip{\ep_h^\nabla \uu^n-u_h^{n-1}-\ts\dot{u}^n}{\frac1\ts v_h-\div\ttau_h-\bbeta\cdot\nabla v_h+\gamma v_h}| \\
&\qquad \leq \ts^{-1/2}\norm{\ep_h^\nabla \uu^n-u_h^{n-1}-\ts\dot{u}^n}{} \ts^{1/2}\norm{\frac1\ts
v_h-\div\ttau_h-\bbeta\cdot\nabla v_h+\gamma v_h}{} \\
&\qquad \leq \ts^{-1/2}\norm{\ep_h^\nabla \uu^n-u_h^{n-1}-\ts\dot{u}^n}{} \blfTot(\vv_h,\vv_h)^{1/2}.
\end{aligned}$$ Choosing $\vv_h = \ep_h\uu^n-\uu_h^n$, this shows that $$\begin{aligned}
\blfTot(\vv_h,\vv_h) \leq \ts^{-1/2}\norm{\ep_h^\nabla \uu^n-u_h^{n-1}-\ts\dot{u}^n}{}
\blfTot(\vv_h,\vv_h)^{1/2}.
\end{aligned}$$ Dividing by $\blfTot(\vv_h,\vv_h)^{1/2}$, using coercivity of $\blf$ and multiplying by $\ts^{1/2}$, yield $$\begin{aligned}
\norm{\ep_h^\nabla \uu^n-u_h^n}{} &\leq \norm{\ep_h^\nabla \uu^n-u_h^{n-1}-\ts\dot{u}^n}{}
\leq \norm{\ep_h^\nabla\uu^{n-1}-u_h^{n-1}}{} + \norm{\ep_h^\nabla (\uu^n-\uu^{n-1})-\ts\dot{u}^n}{}.
\end{aligned}$$
**Step 3.** We follow [@thomee] and rewrite $$\begin{aligned}
\ep_h^\nabla (\uu^n-\uu^{n-1})-\ts\dot{u}^n = [\ep_h^\nabla(\uu^n-\uu^{n-1})-(u^n-u^{n-1})] +
[u^n-u^{n-1}-\ts\dot{u}^n] =: e_h^{n,1}+e_h^{n,2}.
\end{aligned}$$
**Step 4.** We estimate $\norm{e_h^{n,1}}{}$: Writing $\uu^n-\uu^{n-1}=\int_{t_{n-1}}^{t_n} \dot{\uu}\,ds$ and applying \[cor:ep:rates\] give us $$\begin{aligned}
\norm{e_h^{n,1}}{} &\leq \int_{t_{n-1}}^{t_n} \norm{\ep_h^\nabla\dot{\uu}-\dot{u}}{} \,ds \lesssim
h^{p+2} \int_{t_{n-1}}^{t_n} (\norm{\dot{u}}{H^{p+2}(\Omega)} + \norm{\dot\ssigma}{\HH^{p+1}(\Omega)} +
\ts^{1/2}\norm{\div\dot{\ssigma}}{H^{p+1}(\TT)} ) \,ds.
\end{aligned}$$
**Step 5.** The representation $e_h^{n,2} = u^n-u^{n-1}-\ts \dot{u}^n = \int_{t_{n-1}}^{t_n}(t_{n-1}-s)\ddot{u}\,ds$ shows that $$\begin{aligned}
\norm{e_h^{n,2}}{} \leq \ts \int_{t_{n-1}}^{t_n} \norm{\ddot{u}}{} \,ds.
\end{aligned}$$
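As a sanity check — purely illustrative, with a hand-picked smooth function in place of the solution $u$ — the Taylor-remainder representation of Step 5 and the resulting bound can be verified numerically:

```python
import numpy as np

# Illustrative check of the representation from Step 5,
#   u(t_n) - u(t_{n-1}) - ts*u'(t_n) = int_{t_{n-1}}^{t_n} (t_{n-1}-s) u''(s) ds,
# for the sample function u(t) = exp(t) (so u' = u'' = exp).
def trapezoid(f, a, b, n=100000):
    s = np.linspace(a, b, n + 1)
    y = f(s)
    return (b - a) / n * (y.sum() - 0.5 * (y[0] + y[-1]))

t0, t1 = 0.3, 0.45                 # t_{n-1} and t_n
ts = t1 - t0
lhs = np.exp(t1) - np.exp(t0) - ts * np.exp(t1)      # e^{n,2}
rhs = trapezoid(lambda s: (t0 - s) * np.exp(s), t0, t1)
assert abs(lhs - rhs) < 1e-9       # Taylor-remainder identity
# The bound ||e^{n,2}|| <= ts * int ||u''|| ds follows from |t_{n-1}-s| <= ts:
assert abs(lhs) <= ts * trapezoid(lambda s: np.exp(s), t0, t1)
```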
**Step 6.** Putting everything together, we infer that $$\begin{aligned}
\norm{\ep_h^\nabla \uu^n-u_h^n}{} &\leq \norm{\ep_h^\nabla\uu^{n-1}-u_h^{n-1}}{} + \norm{e_h^{n,1}}{}
+ \norm{e_h^{n,2}}{} \\
&\leq \norm{\ep_h^\nabla\uu^{n-1}-u_h^{n-1}}{}
+ C h^{p+2} \int_{t_{n-1}}^{t_n} \enorm{\dot\uu}_p \,ds
+ \ts \int_{t_{n-1}}^{t_n} \norm{\ddot{u}}{} \,ds.
\end{aligned}$$ Iterating the above arguments and using estimate \[eq:choice:initDataL2\] finishes the proof.
Optimal error estimate in the natural norm
------------------------------------------
For the proof of our next main result we need the following version of the well-known discrete Gronwall inequality, cf. [@thomee Lemma 10.5].
\[lem:gronwall\] Let $\alpha_n,\beta_n,\gamma_n$ be sequences, where $\beta_n$ is non-decreasing and $\gamma_n\geq 0$. If $$\begin{aligned}
\alpha_n \leq \beta_n + \sum_{j=0}^{n-1} \gamma_j \alpha_j \quad\text{for all } n=1,\dots,N,
\end{aligned}$$ then $$\begin{aligned}
\alpha_n \leq \beta_n e^{\sum_{j=0}^{n-1}\gamma_j} \quad\text{for all }n=1,\dots,N.
\end{aligned}$$
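The lemma is easy to test numerically; the following sketch builds random sequences that satisfy the hypothesis with equality and checks the asserted bound:

```python
import numpy as np

# Numerical check of the discrete Gronwall inequality stated above.
rng = np.random.default_rng(0)
N = 50
gamma = rng.uniform(0.0, 0.2, N)            # gamma_n >= 0
beta = np.cumsum(rng.uniform(0.0, 1.0, N))  # non-decreasing beta_n
alpha = np.empty(N)
for n in range(N):
    # choose alpha_n so that the hypothesis holds with equality
    alpha[n] = beta[n] + np.dot(gamma[:n], alpha[:n])
# conclusion of the lemma
for n in range(N):
    assert alpha[n] <= beta[n] * np.exp(gamma[:n].sum()) + 1e-12
```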
We will assume that the initial data $u_h^0$ for problem \[eq:euler\] is chosen such that $$\begin{aligned}
\label{eq:choice:initDataEnergy}
\norm{\nabla(\ep_h^\nabla \uu^0-u_h^0)}{} \leq C_0 h^{p+1},\end{aligned}$$ where $C_0$ is some generic constant depending also on $\uu^0$ and $\ep_h=(\ep_h^\nabla,\ep_h^\div)$ is the elliptic projection operator from \[sec:duality\]. Recall the norm $\enorm\cdot_p$ from \[sec:apriori:L2\].
\[thm:euler:energy\] Suppose $\ts_n = \ts>0$ for $n=1,\dots,N$, let $\uu_h^n=(u_h^n,\ssigma_h^n)\in U_h^n=U_h$ denote the solution of \[eq:euler\] and let $\uu=(u,\ssigma)$ denote the solution of \[eq:fo\]. If $u_h^0$ satisfies \[eq:choice:initDataEnergy\], then $$\begin{aligned}
\norm{\uu^n-\uu_h^n}\ts \leq C(\uu)( h^{p+1} + \ts),
\end{aligned}$$ where $C(\uu) = C(\enorm{\uu}_p,\enorm{\dot\uu}_p,\norm{\ddot u}{},C_0)$ and $C_0$ is the constant from \[eq:choice:initDataEnergy\].
We consider the error splitting $$\begin{aligned}
\uu^n-\uu_h^n = (\uu^n-\ep_h\uu^n) + (\ep_h\uu^n-\uu_h^n),
\end{aligned}$$ where $\ep_h$ denotes the elliptic projection operator from \[sec:duality\].
**Step 1.** By \[cor:ep:rates\] we have the estimate $$\begin{aligned}
\norm{\uu^n-\ep_h\uu^n}\ts \lesssim h^{p+1} \enorm{\uu^n}_p.
\end{aligned}$$
**Step 2.** If we write $\ww^n := (w^n,\cchi^n) := \ep_h\uu^n-\uu_h^n$ and $e_h^n := \ep_h^\nabla(\uu^n-\uu^{n-1})-\ts\dot{u}^n$, then the error equations from the proof of \[thm:euler:L2\] read $$\begin{aligned}
\label{eq:euler:H1:a}
\begin{split}
\frac1\ts\ip{w^n}{v_h} + \blf(\ww^n,\vv_h)
= \frac1\ts \ip{e_h^n+w^{n-1}}{v_h+\ts(-\div\ttau_h-\bbeta\cdot\nabla v_h+\gamma v_h)}.
\end{split}
\end{aligned}$$ We test this identity with $\vv_h = (v_h,\ttau_h) = (v_h,0)$. First note that by integration by parts, $$\begin{aligned}
\blf(\ww^n,\vv_h) &= \ip{w^n}{-\bbeta\cdot\nabla v_h + \gamma v_h} + \ip{-\div\cchi^n-\bbeta\cdot\nabla w^n +\gamma w^n}{v_h}
\\ &\quad + \ts \ip{-\div\cchi^n-\bbeta\cdot\nabla w^n+\gamma w^n}{-\bbeta\cdot\nabla v_h + \gamma v_h}
\\ &\quad+ \ip{\Amat^{1/2}\nabla w^n-\Amat^{-1/2}\cchi^n}{\Amat^{1/2}\nabla v_h}
\\
& = \ip{w^n}{-\bbeta\cdot\nabla v_h + \gamma v_h} + \ip{-\bbeta\cdot\nabla w^n +\gamma w^n}{v_h}
\\
&\quad + \ts \ip{-\bbeta\cdot\nabla w^n+\gamma w^n}{-\bbeta\cdot\nabla v_h + \gamma v_h}
+ \ip{\Amat\nabla w^n}{\nabla v_h} - \ts\ip{\div\cchi^n}{-\bbeta\cdot\nabla v_h + \gamma v_h}.
\end{aligned}$$ If we move the term $\ts\ip{\div\cchi^n}{-\bbeta\cdot\nabla v_h + \gamma v_h}$ to the right-hand side and the term $\frac1\ts \ip{w^{n-1}}{v_h}$ to the left-hand side, we obtain $$\begin{aligned}
\label{eq:euler:H1:b}
\begin{split}
\frac1\ts \ip{w^n-w^{n-1}}{v_h} + c(w^n,v_h) &= \frac1\ts\ip{e_h^n}{v_h+\ts(-\bbeta\cdot\nabla v_h+\gamma v_h)}
\\
&\qquad + \ip{w^{n-1}}{-\bbeta\cdot\nabla v_h + \gamma v_h} + \ts\ip{\div\cchi^n}{-\bbeta\cdot\nabla v_h + \gamma v_h},
\end{split}
\end{aligned}$$ where we use $c: H_0^1(\Omega) \times H_0^1(\Omega)\to \R$ to denote the bilinear form defined by $$\begin{aligned}
c(u,v) &:= \ip{u}{-\bbeta\cdot\nabla v + \gamma v} + \ip{-\bbeta\cdot\nabla u +\gamma u}{v}
\\
&\qquad + \ts \ip{-\bbeta\cdot\nabla u+\gamma u}{-\bbeta\cdot\nabla v + \gamma v}
+ \ip{\Amat\nabla u}{\nabla v} \quad\text{for all } u,v \in H_0^1(\Omega).
\end{aligned}$$ Using $\ip{-\bbeta\cdot\nabla u+\gamma u}u \geq 0$, it is straightforward to check that $c(\cdot,\cdot)$ defines a coercive and bounded bilinear form on $H_0^1(\Omega)$. In particular, $c(u,u) \simeq \norm{\nabla u}{}^2$ for all $u\in H_0^1(\Omega)$, where the equivalence constants depend only on $T$ and the coefficients $\Amat$, $\bbeta$, $\gamma$ but are otherwise independent of $\ts$.
**Step 3.** The bilinear form $c(\cdot,\cdot)$ is symmetric and coercive on $H_0^1(\Omega)$. In particular, it defines a scalar product that induces the norm $\enorm\cdot$. Observe that for $w,v\in H_0^1(\Omega)$ we have $$\begin{aligned}
c(w,w-v) = \frac12( \enorm{w}^2-\enorm{v}^2 + \enorm{w-v}^2).
\end{aligned}$$
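The identity in Step 3 is $c(w,w-v)=c(w,w)-c(w,v)$ combined with the binomial expansion of $\enorm{w-v}^2$; a quick numerical check for a generic symmetric positive definite form (illustrative, with a random SPD matrix standing in for $c$):

```python
import numpy as np

# Check of the identity c(w, w-v) = (||w||^2 - ||v||^2 + ||w-v||^2)/2
# for a generic symmetric positive definite bilinear form c(u,v) = u^T C v.
rng = np.random.default_rng(1)
m = 6
R = rng.standard_normal((m, m))
C = R @ R.T + m * np.eye(m)       # SPD, so c defines a scalar product
c = lambda a, b: a @ C @ b
w, v = rng.standard_normal(m), rng.standard_normal(m)
lhs = c(w, w - v)
rhs = 0.5 * (c(w, w) - c(v, v) + c(w - v, w - v))
assert abs(lhs - rhs) < 1e-9
```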
**Step 4.** We estimate the three terms on the right-hand side of \[eq:euler:H1:b\] separately. Throughout this step, let $\varepsilon>0$ denote the parameter for Young’s inequality, which will be fixed later on. First, $$\begin{aligned}
\frac1\ts \ip{e_h^n}{v_h+\ts(-\bbeta\cdot\nabla v_h+\gamma v_h)} &\leq \frac{\varepsilon^{-1}}{2\ts}
\norm{e_h^n}{}^2 + \frac\varepsilon\ts\norm{v_h}{}^2 + \varepsilon\ts\norm{-\bbeta\cdot\nabla v_h+\gamma v_h}{}^2
\\ &\leq \frac{\varepsilon^{-1}}{2\ts} \norm{e_h^n}{}^2 + \varepsilon\left(\frac1\ts\norm{v_h}{}^2
+\enorm{v_h}^2\right).
\end{aligned}$$ Second, integration by parts and standard estimates show that $$\begin{aligned}
\ip{w^{n-1}}{-\bbeta\cdot\nabla v_h + \gamma v_h} &= \ip{(\div\bbeta+\gamma) w^{n-1}+\bbeta\cdot\nabla w^{n-1}}{v_h}
\leq C_1 \varepsilon^{-1}\ts \norm{\nabla w^{n-1}}{}^2 + \frac\varepsilon{2\ts} \norm{v_h}{}^2.
\end{aligned}$$ Third, $$\begin{aligned}
\ts\ip{\div\cchi^n}{-\bbeta\cdot\nabla v_h+\gamma v_h} \leq \varepsilon^{-1}\ts^2\norm{\div\cchi^n}{}^2
+ C_2 \varepsilon \enorm{v_h}^2.
\end{aligned}$$
**Step 5.** We estimate $\ts^2\norm{\div\cchi^n}{}^2$. To do so, we consider the error equations \[eq:euler:H1:a\] but now test with $\vv_h = (0,\ttau_h)$ which gives us $$\begin{aligned}
&\ip{w^n}{-\div\ttau_h} + \ts\ip{-\div\cchi^n-\bbeta\cdot\nabla w^n + \gamma w^n}{-\div\ttau_h}
+ \ip{\Amat^{1/2}\nabla w^n-\Amat^{-1/2}\cchi^n}{-\Amat^{-1/2}\ttau_h}
\\ &\qquad = \ip{e_h^n}{-\div\ttau_h} + \ip{w^{n-1}}{-\div\ttau_h}.
\end{aligned}$$ Integration by parts (on left-hand and right-hand side) and putting the term $\ts\ip{-\bbeta\cdot\nabla w^n+\gamma
w^n}{\div\ttau_h}$ on the right-hand side further gives $$\begin{aligned}
\ip{\Amat^{-1}\cchi^n}{\ttau_h} + \ts\ip{\div\cchi^n}{\div\ttau_h} = \ip{e_h^n}{-\div\ttau_h}
+ \ip{\nabla w^{n-1}}{\ttau_h} + \ts\ip{-\bbeta\cdot\nabla w^n+\gamma w^n}{\div\ttau_h}
\end{aligned}$$ Choosing $\ttau_h= \cchi^n$ and similar estimates as in the previous step prove $$\begin{aligned}
\norm{\cchi^n}{}^2 + \ts\norm{\div\cchi^n}{}^2 \lesssim \frac1\ts \norm{e_h^n}{}^2
+ \norm{\nabla w^{n-1}}{}^2 + \ts\norm{\nabla w^n}{}^2.
\end{aligned}$$ Hence, $$\begin{aligned}
\ts^2\norm{\div\cchi^n}{}^2 \lesssim \norm{e_h^n}{}^2
+\ts \norm{\nabla w^{n-1}}{}^2 + \ts^2\norm{\nabla w^n}{}^2.
\end{aligned}$$ The last term can be further estimated as follows: Note that using $\vv_h=\ww^n$ in \[eq:euler:H1:a\] and standard estimates, we obtain $$\begin{aligned}
\norm{\nabla w^n}{}^2 \lesssim \frac1\ts \norm{e_h^n}{}^2 + \frac1\ts \norm{w^{n-1}}{}^2
\lesssim \frac1\ts \norm{e_h^n}{}^2 + \frac1\ts \norm{\nabla w^{n-1}}{}^2.
\end{aligned}$$ Therefore, $$\begin{aligned}
\ts^2\norm{\div\cchi^n}{}^2 \lesssim \norm{e_h^n}{}^2
+\ts \norm{\nabla w^{n-1}}{}^2 + \ts^2\norm{\nabla w^n}{}^2 \lesssim \norm{e_h^n}{}^2 +\ts \norm{\nabla w^{n-1}}{}^2.
\end{aligned}$$
**Step 6.** Take $v_h = w^n-w^{n-1}$ in \[eq:euler:H1:b\]. Combining this with Steps 3–5 we obtain $$\begin{aligned}
\frac1\ts\norm{v_h}{}^2
+ \frac12\left(\enorm{w^n}^2 -\enorm{w^{n-1}}^2 + \enorm{v_h}^2\right)
&\leq \frac{\varepsilon^{-1}}{2\ts} \norm{e_h^n}{}^2
+ \varepsilon\left( \frac1\ts\norm{v_h}{}^2 + \enorm{v_h}^2 \right) \\
&\qquad + C_1{\varepsilon^{-1}}\ts \norm{\nabla w^{n-1}}{}^2 + \frac{\varepsilon}{2\ts} \norm{v_h}{}^2 \\
&\qquad + \varepsilon^{-1} C_3(\norm{e_h^n}{}^2 + \ts\norm{\nabla w^{n-1}}{}^2)
+ C_2\varepsilon \enorm{v_h}^2.
\end{aligned}$$ For sufficiently small $\varepsilon$, we absorb the corresponding terms of the right-hand side into the left-hand side, which after multiplication by $2$ leads us to $$\begin{aligned}
\enorm{w^n}^2-\enorm{w^{n-1}}^2 \leq \frac{C_4}\ts\norm{e_h^n}{}^2
+ C_5 \ts \norm{\nabla w^{n-1}}{}^2.
\end{aligned}$$ Then, iterating the above arguments and norm equivalence $\norm{\nabla(\cdot)}{}^2\leq C_6\enorm\cdot^2$ yield $$\begin{aligned}
\enorm{w^n}^2 \leq \sum_{j=1}^n \frac{C_4}\ts\norm{e_h^j}{}^2
+ \enorm{w^0}^2
+ \sum_{j=0}^{n-1} C_5\ts\norm{\nabla w^j}{}^2
\leq \sum_{j=1}^n \frac{C_4}\ts\norm{e_h^j}{}^2
+ \enorm{w^0}^2
+ \sum_{j=0}^{n-1} C_6\ts\enorm{w^j}^2.
\end{aligned}$$ We apply \[lem:gronwall\] with $$\begin{aligned}
\alpha_n := \enorm{w^n}^2, \quad \beta_n:= \sum_{j=1}^n \frac{C_4}\ts\norm{e_h^j}{}^2 + \enorm{w^0}^2,
\quad \text{and } \gamma_n := C_6 \ts,
\end{aligned}$$ which proves $$\begin{aligned}
\norm{\nabla w^n}{}^2 \simeq \enorm{w^n}^2 \lesssim \sum_{j=1}^n \frac1\ts\norm{e_h^j}{}^2 + \norm{\nabla
w^0}{}^2.
\end{aligned}$$ We estimate the terms $\norm{e_h^n}{}$: As in the proof of \[thm:euler:L2\] we write $$\begin{aligned}
e_h^n := e_h^{n,1}+e_h^{n,2} := [\ep_h^\nabla(\uu^n-\uu^{n-1})-(u^n-u^{n-1})]
+ [u^n-u^{n-1}-\ts\dot{u}^n].
\end{aligned}$$
**Step 7.** We estimate $\norm{e_h^{n,1}}{}$: \[cor:ep:rates\] together with the Cauchy-Schwarz inequality (for the time integral) gives us $$\begin{aligned}
\norm{(1-\ep_h^\nabla)(\uu^n-\uu^{n-1})}{} &\lesssim \int_{t_{n-1}}^{t_n} h^{p+1}
\enorm{\dot\uu}_p \,ds
\leq \ts^{1/2} \left( \int_{t_{n-1}}^{t_n} h^{2(p+1)} \enorm{\dot\uu}_p^2 \,ds \right)^{1/2}.
\end{aligned}$$ Thus, $$\begin{aligned}
\frac1\ts \norm{e_h^{n,1}}{}^2 \lesssim h^{2(p+1)}
\int_{t_{n-1}}^{t_n} \enorm{\dot\uu}_p^2 \,ds.
\end{aligned}$$ **Step 8.** To estimate $\norm{e_h^{n,2}}{}$ we proceed as in the proof of \[thm:euler:L2\] and similarly as in Step 7 to obtain $$\begin{aligned}
\norm{u^n-u^{n-1}-\ts\dot{u}^n}{} \leq \ts \int_{t_{n-1}}^{t_n} \norm{\ddot{u}}{} \,ds
\leq \ts^{3/2} \left(\int_{t_{n-1}}^{t_n} \norm{\ddot{u}(t)}{}^2 \,ds\right)^{1/2}
\end{aligned}$$ and thus, $$\begin{aligned}
\frac1{\ts}\norm{e_h^{n,2}}{}^2 \leq \ts^2 \int_{t_{n-1}}^{t_n} \norm{\ddot{u}(t)}{}^2 \,ds.
\end{aligned}$$ **Step 9.** Putting everything together, we infer $$\begin{aligned}
\norm{\nabla w^n}{}^2 \lesssim \sum_{j=1}^n \frac1\ts \norm{e_h^j}{}^2 + \norm{\nabla w^0}{}^2
&\lesssim h^{2(p+1)}
\int_0^{t_n} \enorm{\dot\uu}_p^2 \,ds + \ts^2 \int_0^{t_n} \norm{\ddot{u}}{}^2 \,ds + \norm{\nabla w^0}{}^2.
\end{aligned}$$ **Step 10.** To get an estimate for $\norm{\ww^n}\ts$ it remains to estimate $\norm{\cchi^n}{}^2
+\ts\norm{\div\cchi^n}{}^2$. With arguments similar to those in Step 5, combined with Step 9, we infer that $$\begin{aligned}
\norm{\cchi^n}{}^2 +\ts\norm{\div\cchi^n}{}^2 \lesssim \frac1\ts \norm{e_h^n}{}^2 + \norm{\nabla w^{n-1}}{}^2
\lesssim h^{2(p+1)} + \ts^2 + \norm{\nabla w^0}{}^2.
\end{aligned}$$ The estimate \[eq:choice:initDataEnergy\] for $\norm{\nabla w^0}{}$ finishes the proof.
Extension to a different problem {#sec:ext}
================================
In this section we show that the techniques developed previously also apply to other first-order systems. We consider
\[eq:fo:alt\] $$\begin{aligned}
\dot{u}-\div\ssigma+\gamma u &= f, \\
\ssigma-\Amat\nabla u + \bbeta u &= 0, \\
u|_\Gamma &= 0.\end{aligned}$$
Note that this problem admits a unique solution (under the same assumptions as used for \[eq:fo\] and \[eq:regularity:b\]).
Following \[sec:main\] we replace the time derivative by a finite difference approximation which leads to the time-stepping scheme $$\begin{aligned}
\frac{u-\widetilde u^{n-1}}{\ts_n} - \div\ssigma + \gamma u &= f^n, \\
\Amat^{-1/2}\ssigma - \Amat^{1/2}\nabla u + \Amat^{-1/2}\bbeta u &= 0, \\
u|_\Gamma &= 0,\end{aligned}$$ where $\widetilde u^{n-1}$ denotes an approximation of $u^{n-1}$.
In what follows we redefine the bilinear forms $\blf_n,\blfns_n,\blfTot_n$ and the functional $F_n$ from \[sec:main\] to account for the different first-order system \[eq:fo:alt\]. We show that the main results on the bilinear forms, i.e., \[lem:blf,thm:ep\] hold true. Moreover, \[thm:euler:L2\] and \[thm:euler:energy\] transfer to the present situation following the same lines of proof with some minor modifications.
Backward Euler method
---------------------
Define $\blfns_n,\blf_n,\blfTot_n : U\times U \to \R$, $F_n : U\to \R$ by
\[eq:def:blf:alt\] $$\begin{aligned}
\blfns_n(\uu,\vv) &:= \ip{-\div\ssigma+\gamma u}{v}
+ \ts_n \ip{-\div\ssigma+\gamma u}{-\div\ttau+\gamma v} \\
&\qquad + \ip{\Amat^{-1/2}\ssigma-\Amat^{1/2}\nabla u+\Amat^{-1/2}\bbeta u}{\Amat^{-1/2}\ttau-\Amat^{1/2}\nabla v+\Amat^{-1/2}\bbeta v}
\\
\blf_n(\uu,\vv) &:= \ip{u}{-\div\ttau+\gamma v} + \blfns_n(\uu,\vv), \\
\blfTot_n(\uu,\vv) &:= \frac1{\ts_n}\ip{u}v + \blf_n(\uu,\vv), \\
F_n(\vv;f^n,w) &:= \ip{f^n+\frac{w}{\ts_n}}{v+\ts_n(-\div\ttau+\gamma v)}.\end{aligned}$$
The *backward Euler* method then reads: Given $u_h^0\in L^2(\Omega)$, find $\uu_h^n\in U_h^n$ such that for all $n=1,\dots,N$ $$\begin{aligned}
\label{eq:euler:alt}
\blfTot_n(\uu_h^n,\vv) = F_n(\vv;f^n,u_h^{n-1}) \quad\text{for all } \vv\in U_h^n.\end{aligned}$$
Similarly as in \[sec:main\], we conclude:
\[thm:euler:alt\] For all $n=1,\dots,N$ Problem \[eq:euler:alt\] admits a unique solution $\uu_h^n\in U_h^n$.
Elliptic projection operator
----------------------------
In this section, we set $\ts_n:=\ts\in(0,T]$ and drop the indices of the bilinear forms. We define the elliptic projection operator $\ep_h : U \to U_h$ with respect to the elliptic part of the equations, which is given by the bilinear form $\blfns$, i.e., $$\begin{aligned}
\label{eq:def:ep:alt}
\blfns(\ep_h\uu,\vv_h) = \blfns(\uu,\vv_h) \quad\text{for all } \vv_h\in U_h.\end{aligned}$$
\[lem:blf\] holds true for the bilinear forms defined in \[eq:def:blf:alt\].
Boundedness with respect to the norm $\norm\cdot{\ts}$ follows with the same argumentation as in the proof of \[lem:blf\].
It remains to prove coercivity. We show the coercivity of $\blf(\cdot,\cdot)$ only. Coercivity of $\blfns(\cdot,\cdot)$ then follows since $\blfns(\uu,\uu)\geq \tfrac12\blf(\uu,\uu)$.
Observe that $$\begin{aligned}
&2\ip{u}{-\div\ssigma+\gamma u} + \norm{\Amat^{-1/2}\ssigma-\Amat^{1/2}\nabla u+\Amat^{-1/2}\bbeta u}{}^2 \\
&= 2\ip{\ssigma}{\nabla u} + 2\ip{\gamma u}{u} + \norm{\Amat^{-1/2}\ssigma-\Amat^{1/2}\nabla u}{}^2
+ 2\ip{\Amat^{-1/2}\ssigma-\Amat^{1/2}\nabla u}{\Amat^{-1/2}\bbeta u} + \norm{\Amat^{-1/2}\bbeta u}{}^2 \\
&= 2\ip{-\bbeta\cdot\nabla u+\gamma u}u + \norm{\Amat^{-1/2}\ssigma}{}^2
+ \norm{\Amat^{1/2}\nabla u}{}^2 + 2\ip{\Amat^{-1/2}\ssigma}{\Amat^{-1/2}\bbeta u} + \norm{\Amat^{-1/2}\bbeta u}{}^2.
\end{aligned}$$ Let $0<\mu\leq 1$. Young’s inequality with parameter $\varepsilon>0$ leads us to $$\begin{aligned}
2|\ip{\Amat^{-1/2}\ssigma}{\Amat^{-1/2}\bbeta u}| &\leq \varepsilon\norm{\Amat^{-1/2}\ssigma}{}^2
+ \varepsilon^{-1}\norm{\Amat^{-1/2}\bbeta u}{}^2 \\
&= \varepsilon\norm{\Amat^{-1/2}\ssigma}{}^2 + \varepsilon^{-1}\mu\norm{\Amat^{-1/2}\bbeta u}{}^2 +
\varepsilon^{-1}(1-\mu)\norm{\Amat^{-1/2}\bbeta u}{}^2 \\
&\leq \varepsilon\norm{\Amat^{-1/2}\ssigma}{}^2 + C\varepsilon^{-1}\mu\norm{\Amat^{1/2}\nabla u}{}^2 +
\varepsilon^{-1}(1-\mu)\norm{\Amat^{-1/2}\bbeta u}{}^2,
\end{aligned}$$ where $C>0$ depends on $\Amat$, $\bbeta$ and $\CF$. Together with $\ip{-\bbeta\cdot\nabla u+\gamma u}u\geq 0$ this further shows $$\begin{aligned}
&2\ip{u}{-\div\ssigma+\gamma u} + \norm{\Amat^{-1/2}\ssigma-\Amat^{1/2}\nabla u+\Amat^{-1/2}\bbeta u}{}^2 \\
&\qquad \geq (1-\varepsilon)\norm{\Amat^{-1/2}\ssigma}{}^2
+ (1-C\varepsilon^{-1}\mu)\norm{\Amat^{1/2}\nabla u}{}^2 + (1-\varepsilon^{-1}(1-\mu))\norm{\Amat^{-1/2}\bbeta u}{}^2.
\end{aligned}$$ Choose $\mu\in(0,1]$ sufficiently small such that $C\mu<1-\mu$ and choose $\varepsilon$ with $1-\mu<\varepsilon<1$. This implies that $$\begin{aligned}
1-\varepsilon>0, \quad 1-\varepsilon^{-1}\mu C > 0, \quad\text{and } 1-\varepsilon^{-1}(1-\mu)>0,
\end{aligned}$$ hence, $$\begin{aligned}
&2\ip{u}{-\div\ssigma+\gamma u} + \norm{\Amat^{-1/2}\ssigma-\Amat^{1/2}\nabla u+\Amat^{-1/2}\bbeta u}{}^2
\gtrsim \norm{\Amat^{-1/2}\ssigma}{}^2 + \norm{\Amat^{1/2}\nabla u}{}^2 + \norm{\Amat^{-1/2}\bbeta u}{}^2.
\end{aligned}$$ Finally, with the triangle inequality we infer that $$\begin{aligned}
\norm{\uu}\ts^2 \lesssim \norm{\Amat^{-1/2}\ssigma}{}^2 + \norm{\Amat^{1/2}\nabla u}{}^2 + \norm{\Amat^{-1/2}\bbeta u}{}^2
+ \ts \norm{-\div\ssigma+\gamma u}{}^2
\lesssim \blf(\uu,\uu),
\end{aligned}$$ which finishes the proof.
\[thm:ep\] holds for the operator defined in \[eq:def:ep:alt\].
It suffices to adapt the duality arguments from the proof of \[thm:ep\]. Let $\uu_d =(u_d,\ssigma_d) = \uu-\ep_h\uu$, where $\ep_h\uu$ is defined in \[eq:def:ep:alt\]. Define $w\in H_0^1(\Omega)$ as the solution of $$\begin{aligned}
-\div\Amat\nabla w -\bbeta\cdot\nabla w +\gamma w = u_d.
\end{aligned}$$ Then, $$\begin{aligned}
\norm{u_d}{}^2 &= \ip{u_d}{-\div\Amat\nabla w -\bbeta\cdot\nabla w +\gamma w}
= \ip{\Amat\nabla u_d -\bbeta u_d}{\nabla w} + \ip{\gamma u_d}{w}.
\end{aligned}$$ Adding $-\ip{\ssigma_d}{\nabla w} - \ip{\div\ssigma_d}{w}=0$ we infer $$\begin{aligned}
\norm{u_d}{}^2 &= \ip{\Amat^{1/2}\nabla u_d -\Amat^{-1/2}\bbeta u_d-\Amat^{-1/2}\ssigma_d}{\Amat^{1/2}\nabla w}
+ \ts \ip{-\div\ssigma_d+\gamma u_d}{\frac{w}\ts}.
\end{aligned}$$ Define $(v,\ttau)\in U$ via $$\begin{aligned}
-\div\ttau +\gamma v + \frac{v}\ts &= \frac{w}\ts, \\
\Amat^{1/2}\nabla v - \Amat^{-1/2}\bbeta v -\Amat^{-1/2}\ttau &= \Amat^{1/2}\nabla w.
\end{aligned}$$ Therefore, $v\in H_0^1(\Omega)$ satisfies $$\begin{aligned}
-\div\Amat\nabla v + \div(\bbeta v) + \gamma v + \frac{v}\ts &= -\div\Amat\nabla w + \frac{w}\ts.
\end{aligned}$$ Following the same argumentation as in the proof of \[thm:ep\] (Step 3–Step 5) we conclude the proof.
\[cor:ep:rates\] holds for the operator defined in \[eq:def:ep:alt\].
Error estimates
---------------
The main results from \[sec:apriori\] hold true for the present situation:
\[thm:euler:alt:main\] \[thm:stability:euler,thm:euler:L2,thm:euler:energy\] hold for solutions of \[eq:euler:alt\].
Following the same lines as in the proof of \[thm:stability:euler,thm:euler:L2\] we conclude the analogous results for solutions of \[eq:euler:alt\].
For the proof of the analogous result of \[thm:euler:energy\] we need some minor modifications of the proof of \[thm:euler:energy\], but the main ideas are the same: With the same notation as in the proof of \[thm:euler:energy\], the error equations read $$\begin{aligned}
\label{eq:euler:alt:H1:a}
\frac1\ts \ip{w^n}{v_h} + b(\ww^n,\vv_h) &= \frac1\ts\ip{e_h^n}{v_h+\ts(-\div\ttau_h + \gamma v_h)}
+ \frac1\ts\ip{w^{n-1}}{v_h+\ts(-\div\ttau_h+\gamma v_h)}.
\end{aligned}$$ Taking $\vv_h = (v_h,0)$ we obtain (using integration by parts) $$\begin{aligned}
b(\ww^n,\vv_h) &= \ip{-\div\cchi^n+\gamma w^n}{v_h} + \ip{w^n}{\gamma v_h} + \ts\ip{-\div\cchi^n+\gamma
w^n}{\gamma v_h}
\\ &\qquad + \ip{\Amat^{-1/2}\cchi^n-\Amat^{1/2}\nabla w^n+\Amat^{-1/2}\bbeta
w^n}{-\Amat^{1/2}\nabla v_h+\Amat^{-1/2}\bbeta v_h}
\\ &= 2\ip{\gamma w^n}{v_h} + \ts \ip{\gamma^2 w^n}{v_h} + \ip{\Amat\nabla w^n}{\nabla v_h}
-\ip{\nabla w^n}{\bbeta v_h} - \ip{\bbeta w^n}{\nabla v_h}
\\ &\qquad + \ip{\Amat^{-1}\bbeta w^n}{\bbeta v_h} + \ip{\Amat^{-1}\cchi^n}{\bbeta v_h} -\ts
\ip{\div\cchi^n}{\gamma v_h}.
\end{aligned}$$ Define the symmetric, bounded and coercive bilinear form $c : H_0^1(\Omega)\times H_0^1(\Omega) \to \R$ by $$\begin{aligned}
c(u,v) := 2\ip{\gamma u}{v} + \ts \ip{\gamma^2 u}{v} + \ip{\Amat\nabla u}{\nabla v}
-\ip{\nabla u}{\bbeta v} - \ip{\bbeta u}{\nabla v}
+ \ip{\Amat^{-1}\bbeta u}{\bbeta v}
\end{aligned}$$ for $u,v\in H_0^1(\Omega)$. Then, \[eq:euler:alt:H1:a\] with $\vv_h = (v_h,0)$ reads $$\begin{aligned}
\frac1\ts \ip{w^n-w^{n-1}}{v_h} + c(w^n,v_h) &= \frac1\ts \ip{e_h^n}{v_h+\ts\gamma v_h}
+ \ip{w^{n-1}}{\gamma v_h} \\
&\qquad -\ip{\Amat^{-1}\cchi^n}{\bbeta v_h} + \ts \ip{\div\cchi^n}{\gamma v_h}.
\end{aligned}$$ With $\enorm{\cdot}^2 := c(\cdot,\cdot)$ and $v_h = w^n-w^{n-1}$ we conclude with the same arguments as in the proof of \[thm:euler:energy\] that $$\begin{aligned}
\label{eq:euler:alt:H1:b}
\enorm{w^n}^2-\enorm{w^{n-1}}^2 \leq \frac{C_1}\ts \norm{e_h^n}{}^2 + C_2\ts \norm{\nabla w^{n-1}}{}^2
+ C_3( \ts\norm{\cchi^n}{}^2 +\ts^3\norm{\div\cchi^n}{}^2).
\end{aligned}$$ In the following we estimate $\ts\norm{\cchi^n}{}^2 +\ts^3\norm{\div\cchi^n}{}^2$. By testing \[eq:euler:alt:H1:a\] with $\vv_h= (0,\ttau_h)$ we get $$\begin{aligned}
\ip{\Amat^{-1}\cchi^n}{\ttau_h} + \ts\ip{\div\cchi^n}{\div\ttau_h}
&= \ip{e_h^n}{-\div\ttau_h} + \ip{w^{n-1}}{-\div\ttau_h}
\\
&\qquad -\ip{\Amat^{-1}\bbeta w^n}{\ttau_h}
+\ts \ip{\gamma w^n}{\div\ttau_h}
\\ &= \ip{e_h^n}{-\div\ttau_h} + \ip{\nabla w^{n-1}}{\ttau_h}
\\
&\qquad -\ip{\Amat^{-1}\bbeta w^n}{\ttau_h}
+\ts \ip{\gamma w^n}{\div\ttau_h}.
\end{aligned}$$ Then, standard arguments give $$\begin{aligned}
\norm{\cchi^n}{}^2 + \ts \norm{\div\cchi^n}{}^2 \lesssim \frac1\ts\norm{e_h^n}{}^2 + \norm{\nabla
w^{n-1}}{}^2 + \norm{w^n}{}^2.
\end{aligned}$$ Using $\norm{w^n}{}\leq \norm{w^{n-1}}{} + \norm{e_h^n}{} \lesssim \norm{\nabla w^{n-1}}{} + \norm{e_h^n}{}$ we further infer $$\begin{aligned}
\norm{\cchi^n}{}^2 + \ts \norm{\div\cchi^n}{}^2 \lesssim \frac1\ts\norm{e_h^n}{}^2 + \norm{\nabla w^{n-1}}{}^2
\end{aligned}$$ and get instead of \[eq:euler:alt:H1:b\] the estimate $$\begin{aligned}
\enorm{w^n}^2-\enorm{w^{n-1}}^2 \leq \frac{C_4}\ts \norm{e_h^n}{}^2 + C_5\ts \norm{\nabla w^{n-1}}{}^2.
\end{aligned}$$ Finally, the very same argumentation as in the proof of \[thm:euler:energy\] yields $$\begin{aligned}
\norm{\nabla w^n}{} \lesssim h^{p+1}+\ts \quad\text{and also}\quad
\norm{\ww^n}\ts \lesssim h^{p+1}+\ts.
\end{aligned}$$ This finishes the proof.
Numerical Examples {#sec:num}
==================
In this section we present some numerical examples that support the theoretical results established in this work. In particular, we are interested in the convergence rates predicted by \[thm:euler:L2,thm:euler:energy,thm:euler:alt:main\].
Throughout, we consider the convex domain $\Omega = (0,1)^2$ and the manufactured solution $$\begin{aligned}
u(t;x,y) := e^{-2\pi^2 t}\sin(\pi x) \sin(\pi y).\end{aligned}$$ Note that $u$ is smooth and its trace vanishes on $\Gamma = \partial \Omega$. Moreover, we choose $\Amat$ to be the identity matrix, $\bbeta(x,y) = (1,1)^T$, $\gamma(x,y) = 0$ for $(x,y)\in \Omega$. In \[sec:examples:1\] we consider the solutions of \[eq:euler\], where the right-hand side data $f$ is given through \[eq:fo\], i.e., $f := \dot{u}- \Delta u -\bbeta\cdot\nabla u$. Also note that by \[eq:fo\] we have $\ssigma = \nabla u$. In \[sec:examples:2\] we consider the solutions of \[eq:euler:alt\], where the right-hand side data $f$ is given through \[eq:fo:alt\], i.e., $f = \dot{u}-\Delta u + \bbeta\cdot\nabla u$. Furthermore, in this case $\ssigma = \nabla u -\bbeta u$.
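As a consistency check of the manufactured data (illustrative only, using finite differences in place of exact derivatives): since $\dot{u} = -2\pi^2 u = \Delta u$, the source in Example 1 reduces to $f = -\bbeta\cdot\nabla u = -(u_x+u_y)$:

```python
import numpy as np

# Finite-difference check of the manufactured solution of the examples.
pi = np.pi
u = lambda t, x, y: np.exp(-2 * pi**2 * t) * np.sin(pi * x) * np.sin(pi * y)

t, x, y, eps = 0.05, 0.3, 0.7, 1e-4
u_t = (u(t + eps, x, y) - u(t - eps, x, y)) / (2 * eps)
u_x = (u(t, x + eps, y) - u(t, x - eps, y)) / (2 * eps)
u_y = (u(t, x, y + eps) - u(t, x, y - eps)) / (2 * eps)
lap = (u(t, x + eps, y) - 2 * u(t, x, y) + u(t, x - eps, y)
       + u(t, x, y + eps) - 2 * u(t, x, y) + u(t, x, y - eps)) / eps**2
assert abs(u_t - lap) < 1e-4          # u_t = Delta u for this u
f = u_t - lap - (u_x + u_y)           # f = u_t - Delta u - beta.grad u, beta=(1,1)
assert abs(f + (u_x + u_y)) < 1e-4    # hence f = -(u_x + u_y)
```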
In the following examples we use a lowest-order discretization, i.e., $p=0$. In order to obtain the best possible rates we have to consider two cases: For optimal rates for the scalar variable in the $L^2(\Omega)$ norm we have to equilibrate the time step $\ts$ and the mesh-size $h$ so that $\ts \simeq h^2$. If we are only interested in optimal rates for the error in the natural norm $\norm{\cdot}\ts$, it suffices to choose $\ts \simeq h$.
The problem set-up is quite simple: We choose as end time $T=0.1$. For all computations we start with an initial triangulation of $\Omega$ into four (similar) triangles, see \[fig:Mesh\], and an initial time step $\ts = 0.1$. We then solve \[eq:euler\], see \[sec:examples:1\], resp. \[eq:euler:alt\], see \[sec:examples:2\]. In the next step we refine the triangulation uniformly, i.e., each triangle is divided into four son triangles (by Newest Vertex Bisection). In our configuration this leads to four triangles whose diameter is exactly half that of their father triangle, i.e., $h/2$. Depending on which case we consider we then halve or quarter the time step, which ensures that either $\ts\simeq h$ or $\ts\simeq h^2$.
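The resulting refinement schedules can be written down explicitly; in the following sketch $\ts_0=0.1$ matches the set-up above, while the initial mesh-size `h0` is a placeholder for the diameter of the four starting triangles:

```python
# Refinement schedules used in the experiments: uniform refinement halves h,
# and the time step is halved (ts ~ h) or quartered (ts ~ h^2) accordingly.
# h0 is a placeholder value for the diameter of the initial triangles.
ts0, h0, levels = 0.1, 1.0, 6
for l in range(levels):
    h = h0 / 2**l                # uniform refinement halves the mesh-size
    ts_energy = ts0 / 2**l       # halve the time step: ts ~ h
    ts_L2 = ts0 / 4**l           # quarter the time step: ts ~ h^2
    assert abs(ts_energy / h - ts0 / h0) < 1e-12
    assert abs(ts_L2 / h**2 - ts0 / h0**2) < 1e-12
```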
The initial data $u_h^0$ is chosen to be the $L^2$-projection of $u^0=u(0,\cdot)$ onto $\cS_0^1(\TT)$. Note that $u$ is smooth (for all times), so that the initial data satisfies \[eq:choice:initDataL2\] and \[eq:choice:initDataEnergy\].
In all figures, we plot the errors that correspond to approximations of $u^N=u(T,\cdot)$, resp., $\ssigma^N=\ssigma(T,\cdot)$ over the number of degrees of freedom in $U_h$. To that end define the following error quantities $$\begin{aligned}
\err(u_h^N) &:= \norm{u^N-u_h^N}{}, \\
\err(\nabla u_h^N) &:= \norm{\nabla(u^N-u_h^N)}{}, \\
\err(\ssigma_h^N) &:= \norm{\ssigma^N-\ssigma_h^N}{}, \\
\err(\div\ssigma_h^N) &:= \norm{\div(\ssigma^N-\ssigma_h^N)}{}.\end{aligned}$$ Triangles in the plots visualize the order of convergence, i.e., their negative slope $\alpha$ corresponds to $\OO(h^\alpha)$.
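The slopes of these triangles correspond to experimental orders of convergence, which under uniform refinement ($h \to h/2$) are computed as $\log_2(\err_\ell/\err_{\ell+1})$ between consecutive levels; a minimal sketch with synthetic $\OO(h^2)$ data:

```python
import numpy as np

# Experimental order of convergence (EOC) from a sequence of errors on
# uniformly refined meshes; synthetic data behaving like O(h^2), the rate
# predicted for err(u_h^N) when ts ~ h^2.
hs = 0.5 ** np.arange(1, 7)       # mesh-sizes under uniform refinement
errs = 3.0 * hs**2                # synthetic errors ~ C h^2
eoc = np.log2(errs[:-1] / errs[1:])
assert np.allclose(eoc, 2.0)
```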
Example 1 {#sec:examples:1}
---------
We consider the solutions $\uu_h^N=(u_h^N,\ssigma_h^N)\in U_h$ of \[eq:euler\]. \[fig:Example1L2\] shows the error quantities in the case where $\ts\simeq h^2$. By \[thm:euler:L2\] we expect that $\err(u_h^N)\lesssim h^2$, which is indeed observed in \[fig:Example1L2\]. Note that the other quantities converge at a lower rate. In particular, observe that $\err(\div\ssigma_h^N)$ converges at the same rate as $\err(\ssigma_h^N)$, although from our analysis (\[thm:euler:energy\]) we can only infer that $$\begin{aligned}
\norm{\uu^N-\uu_h^N}\ts \simeq \err(\nabla u_h^N) + \err(\ssigma_h^N) + \ts^{1/2}\err(\div\ssigma_h^N) =
\OO(h+\ts)= \OO(h+h^2) = \OO(h).\end{aligned}$$
The results for the case where $\ts\simeq h$ are displayed in \[fig:Example1Energy\]. We observe that all error quantities, including $\err(u_h^N) = \norm{u^N-u_h^N}{}$, converge at the same rate. We conclude that it is not only sufficient but in general also necessary to set $\ts\simeq h^2$ to obtain higher rates for the $L^2$ error $\err(u_h^N)$. As before we see that $\err(\div\ssigma_h^N)$ converges even at a higher rate than expected.
Example 2 {#sec:examples:2}
---------
We consider the solutions $\uu_h^N=(u_h^N,\ssigma_h^N)\in U_h$ from \[eq:euler:alt\]. The corresponding figures show the error quantities in the cases $\ts\simeq h^2$ and $\ts\simeq h$, respectively. We make the same observations as in \[sec:examples:1\].
[^1]: [**Acknowledgment.**]{} This work was supported by CONICYT through FONDECYT projects 11170050 “Least-squares methods for obstacle problems” (TF) and 1170672 “Fast space-time discretizations for fractional evolution equations” (MK).
|
---
abstract: 'We propose a method to generate 3D shapes using point clouds. Given a point-cloud representation of a 3D shape, our method builds a *kd-tree* to spatially partition the points. This orders them consistently across all shapes, resulting in reasonably good correspondences across all shapes. We then use PCA analysis to derive a linear shape basis across the spatially partitioned points, and optimize the point ordering by iteratively minimizing the PCA reconstruction error. Even with the spatial sorting, the point clouds are inherently noisy and the resulting distribution over the shape coefficients can be highly multi-modal. We propose to use the expressive power of neural networks to learn a distribution over the shape coefficients in a generative-adversarial framework. Compared to 3D shape generative models trained on voxel-representations, our point-based method is considerably more light-weight and scalable, with little loss of quality. It also outperforms simpler linear factor models such as Probabilistic PCA, both qualitatively and quantitatively, on a number of categories from the ShapeNet dataset. Furthermore, our method can easily incorporate other point attributes such as normal and color information, an additional advantage over voxel-based representations.'
bibliography:
- 'egbib.bib'
title: Shape Generation using Spatially Partitioned Point Clouds
---
---
author:
- Agnieszka Cichy
- Konrad Jerzy Kapcia
- Andrzej Ptok
bibliography:
- 'biblio.bib'
title: 'Phase separations induced by a trapping potential in one-dimensional fermionic systems as a source of core-shell structures'
---
Introduction {#introduction .unnumbered}
============
Recently, extensive progress has been made in experimental techniques for ultracold quantum gases in optical lattices. These systems can be regarded as prime examples of practical realizations of “quantum simulators” [@bloch.dalibard.08; @giorgini.pitaevskii.08; @guan.batchelor.13; @georgescu.ashhab.14; @dutta.gajda.15]. Because they are systems with fully controllable parameters, they open a new avenue for studying fundamental phenomena of condensed matter physics. Moreover, such simulators can provide information about properties of physical systems in the context of different effects and mechanisms which are difficult to observe in solid state materials due to their complexity. Experimental realizations of such systems with bosonic and fermionic gases in optical lattices have already been achieved [@greiner.mandel.02; @kohl.moritz.05; @stoferle.moritz.06; @jardens.strohmaier.08; @jordens.tarruell.10], which creates a very attractive field for further development. Additionally, in these systems the trapping potential and the lattice geometry can be modified to study new exotic phases. Hence, ultracold gases in optical lattices allow one to simulate the well-known Hubbard model in various regimes of parameters [@bloch.10].
The lattice geometry is a fundamental characteristic of many-body systems and has a large influence on their physical properties [@ptok.17]. Ultracold atomic gases give the opportunity to realize systems with different geometries. Experimentally, the geometry of the lattice can be changed by a different spatial arrangement of the laser beams [@bloch.dalibard.08]. Recently, the occurrence of antiferromagnetic spin correlations in the repulsive fermionic gas, in different lattice geometries of varying dimensionality, also including crossover configurations between different geometries, was investigated [@corcovilos.baur.10; @simon.bakr.11; @hart.duarte.15; @greif.jotzu.15; @cheuk.nichols.16a; @cheuk.nichols.16b; @azurenko.chiu.17].
Equally important is a better comprehension of the unconventional phases that can appear in fermionic superfluids with population imbalance, the latter being the effect of a magnetic (Zeeman) field or the result of preparing a mixture with the desired composition. In the weak coupling limit, states with nontrivial Cooper pairs can exist at large population imbalance [@machida.mizushima.06; @koponen.paananen.2008; @liu.hu.08; @cai.wang.11; @ptok.cichy.17]. One example of such pairing is the Fulde–Ferrell–Larkin–Ovchinnikov (FFLO) state [@FF; @LO], in which the Cooper pairs have non-zero total momentum as a result of pairing across the spin-split Fermi surface. The properties of this state have attracted a lot of theoretical and experimental attention [@casalbuoni.nardulli.04; @matsuda.shimahara.10; @beyer.wosnitza.13; @ptok.kapcia.17; @kinnunen.baarsma.18]. Another example of an unconventional coherent state is homogeneous spin-polarized superconductivity with a gapless spectrum for the majority spin species [@radzihovsky.sheehy.10]. This phase is characterized by the coexistence of the normal and the superfluid states in the isotropic phase. It was first proposed by Sarma [@sarma], who studied the case of a superconductor in an external magnetic field within the BCS theory, neglecting orbital effects. He showed that self-consistent mean-field solutions with a gapless spectrum \[$\Delta (h)$\] are energetically unstable at $T=0$, in contrast to the fully gapped BCS solutions. On the other hand, a spin-polarized superconducting state can be stabilized by a non-zero temperature.
Experimental studies of spin-imbalanced fermionic gases give new possibilities for research in the field of condensed matter systems with strong correlations. There is experimental evidence for the occurrence of a [*core-shell structure*]{} — an unpolarized superfluid core in the center of the trap surrounded by a polarized normal state [@zwierlein.aboshaeer.05; @zwierlein.schirotzek.06; @partridge.li.06]. The structure has been observed in the density profiles of trapped spin-imbalanced fermionic mixtures. In this system a phase separation between these two states appears [@liu.wilczek.03; @sheehy.radzihovsky.06; @diederix.gubbels.09; @shin.zwierlein.06; @partridge.li.06prl; @lobo.recati.06; @pilati.giorgini.08; @giorgini.pitaevskii.08; @bertaina.giorgini.09; @valtolina.scazza.17]. Additionally, in the case of two-component fermionic gases in a one-dimensional optical lattice, the exact thermodynamic Bethe ansatz solution indicates the occurrence of a mixed phase with a two-shell structure [@guan.batchelor.13]. In such a state, a partially polarized superfluid core (the FFLO phase) is surrounded by a fully paired (BCS-like) or a fully polarized (normal) phase [@guan.batchelor.07; @orso.07]. A similar observation has also been made in the case of a one-component trapped gas [@hu.xiaji.07]. It is important to emphasize that the one-dimensional system is a good candidate for observing the FFLO phase because of a nesting effect, which makes the state much more robust than in the three-dimensional case [@bloch.10; @ptok.17].
In this paper we show that the occurrence of core-shell structures is a consequence of the presence of inhomogeneity in the system, in particular, of the changes of the chemical potential or magnetic field (or, equivalently, the effective spin imbalance) resulting from the trapping potential and the short-range interactions between atoms. Moreover, we provide an analysis according to which, depending on the attractive interaction, multiple core-shell structures can appear in the system, involving different phases, such as the spatially homogeneous spin-unpolarized superconducting state, i.e., the BCS state, as well as the spatially inhomogeneous superconducting FFLO state. We show that it is possible to prepare the system in such a way that one can observe the two different phases separately in space, and we study the influence of the trapping potential on such spatially separated phases.
Model and method {#model-and-method .unnumbered}
================
In this paper, we study a one-dimensional system with $s$-wave superconductivity. The attractive Hubbard Hamiltonian (i.e., with on-site pairing, $U<0$) in the presence of a magnetic field ($h$) has the following form: $$\begin{aligned}
\label{eq.ham_real}
\mathcal{H} = \sum_{ \langle i,j \rangle \sigma } \left[ - t - ( \mu_{i} + \sigma h ) \delta_{ij} \right] c_{i\sigma}^{\dagger} c_{j\sigma} + U \sum_{i} n_{i\uparrow} n_{i\downarrow},\end{aligned}$$ where $c_{i\sigma}^{\dagger}$ ($c_{i\sigma}$) denotes the creation (annihilation) operator of the particles at site $i$ and spin $\sigma= \{ \uparrow ,\downarrow \}$. $t$ is the hopping integral between nearest-neighbor sites, $U < 0$ is the on-site pairing interaction, $h$ is the external magnetic Zeeman field, while $\mu_{i}$ is an effective on-site chemical potential. The interaction term is treated within the mean-field broken-symmetry approximation: $$\begin{aligned}
n_{i\uparrow} n_{i\downarrow} = \Delta_{i}^{\ast} c_{i\downarrow} c_{i\uparrow} + \Delta_{i} c_{i\uparrow}^{\dagger} c_{i\downarrow}^{\dagger} - | \Delta_{i} |^{2},\end{aligned}$$ where $\Delta_{i} = \langle c_{i\downarrow} c_{i\uparrow} \rangle$ is the [*s*]{}-wave superconducting order parameter (SOP). Hence, the mean-field Hamiltonian in real space has the form: $$\begin{aligned}
\mathcal{H}^{MF} = \sum_{ \langle i,j \rangle \sigma } \left[ - t - ( \mu_{i} + \sigma h ) \delta_{ij} \right] c_{i\sigma}^{\dagger} c_{j\sigma} + U \sum_{i} \left( \Delta_{i}^{\ast} c_{i\downarrow} c_{i\uparrow} + H.c. \right) - U \sum_{i} | \Delta_{i} |^{2}.\end{aligned}$$
For a general case (i.e., for any distribution of $\mu_{i}$), the mean-field Hamiltonian can be exactly diagonalized by means of the Bogoliubov–Valatin transformation: $$\begin{aligned}
\label{eq.bvtransform} c_{i\sigma} = \sum_{n} \left( u_{in\sigma} \gamma_{n\sigma} - \sigma v_{in\sigma}^{\ast} \gamma_{n\bar{\sigma}}^{\dagger} \right) ,\end{aligned}$$ where $\gamma_{n\sigma}$ and $\gamma_{n\sigma}^{\dagger}$ are the new [*quasi*]{}-particle fermionic operators, whereas ${\bm u}$ and ${\bm v}$ are the Bogoliubov–de Gennes (BdG) eigenvectors. One can obtain the self-consistent BdG equations in real space: $$\begin{aligned}
\label{eq.bdgeq}
\sum_{j}
\left(
\begin{array}{cc}
H_{ij\sigma} & \Delta_{ij} \\
\Delta_{ij}^{\ast} & -H_{ij\bar{\sigma}}^{\ast}
\end{array}
\right)
\left( \begin{array}{c}
u_{jn\sigma} \\
v_{jn\bar{\sigma}}
\end{array} \right)
=
\mathcal{E}_{n\sigma}
\left( \begin{array}{c}
u_{in\sigma} \\
v_{in\bar{\sigma}}
\end{array} \right)
\end{aligned}$$ where $H_{ij\sigma} = - t \delta_{\langle i,j \rangle} - ( \mu_{i} + \sigma h ) \delta_{ij}$ is the single-particle Hamiltonian and $\Delta_{ij} = U \Delta_{i} \delta_{ij}$ contains the on-site SOPs. Using transformation (\[eq.bvtransform\]), the SOPs can be found as: $$\begin{aligned}
\label{eq.sop_mom}
\Delta_{i} = \langle c_{i\downarrow} c_{i\uparrow} \rangle = \sum_{n} \left[ u_{in\uparrow} v_{in\downarrow}^{\ast} f( \mathcal{E}_{n\uparrow} ) - u_{in\downarrow} v_{in\uparrow}^{\ast} f ( - \mathcal{E}_{n\downarrow} ) \right] .\end{aligned}$$ Equations (\[eq.bdgeq\]) can be solved self-consistently with respect to the distribution of $\Delta_{i}$. In this case, one can find the grand canonical potential for a given state as: $$\begin{aligned}
\Omega \equiv - k_{B} T \sum_{n\sigma} \ln \left[ 1 + \exp \left( \frac{ - \mathcal{E}_{n\sigma} }{ k_{B} T } \right) \right] - \sum_{i} \left( \mu_{i} + h + U | \Delta_{i} |^{2} \right) .\end{aligned}$$ Among several solutions of the BdG equations, only the one with the minimal value of the grand canonical potential $\Omega$ (at fixed $\mu$ and $h$) corresponds to the thermodynamically stable state of the system. In the absence of a trap (i.e., $\mu_{i} = \mu$ at each site), the distribution of $\Delta_{i}$ can be rewritten in momentum space, using the Fourier transform: $$\begin{aligned}
\Delta_{i} = \frac{1}{N_{x}} \sum_{\bm q} \Delta_{\bm q} \exp \left( i {\bm R}_{i} \cdot {\bm q} \right) ,\end{aligned}$$ where ${\bm q}$, which are restricted to the first Brillouin zone, are total momenta of the Cooper pairs.
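As an illustration of the procedure described above, the sketch below (our own minimal Python implementation, not the authors' code; the function name, default parameters, and the simple linear-mixing update are our choices) assembles the $2N \times 2N$ BdG matrix for a homogeneous ring, diagonalizes it, and iterates the gap equation until self-consistency. It uses the equivalent single-matrix form in which the sum for $\Delta_i$ runs over all $2N$ eigenstates weighted by the Fermi function.

```python
import numpy as np

def bdg_selfconsistent(N=8, t=1.0, U=-3.0, mu=0.0, h=0.0, T=1e-4,
                       n_iter=500, mix=0.5, tol=1e-8):
    """Self-consistent BdG solve for an attractive Hubbard ring of N sites."""
    K = np.zeros((N, N))                          # nearest-neighbour hopping, PBC
    for i in range(N):
        K[i, (i + 1) % N] = K[(i + 1) % N, i] = -t
    fermi = lambda E: 1.0 / (np.exp(np.clip(E / T, -500.0, 500.0)) + 1.0)
    Delta = 0.1 * np.ones(N)                      # initial guess for <c_dn c_up>
    for _ in range(n_iter):
        Hup = K - (mu + h) * np.eye(N)            # single-particle blocks H_{ij,sigma}
        Hdn = K - (mu - h) * np.eye(N)
        M = np.block([[Hup, U * np.diag(Delta)],  # BdG matrix; pairing entry U*Delta_i
                      [U * np.diag(Delta), -Hdn]])
        E, W = np.linalg.eigh(M)
        u, v = W[:N, :], W[N:, :]                 # (u, v) amplitudes of all 2N states
        Delta_new = np.einsum('in,in,n->i', u, v.conj(), fermi(E)).real
        if np.max(np.abs(Delta_new - Delta)) < tol:
            return Delta_new, E
        Delta = mix * Delta_new + (1.0 - mix) * Delta
    return Delta, E
```

For a homogeneous ring at $h=0$ this reproduces a uniform BCS gap; an inhomogeneous $\mu_i$ profile would enter through the diagonal of the single-particle blocks.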
![ (a) The full $\mu$–$h$ ground state phase diagrams for the infinite homogeneous chain obtained for different values of the on-site attraction $U/t$ (as labeled). Labels indicate the regions of occurrence of the following phases: NO – normal phase; BCS – unpolarized superconducting phase with ${\bm Q} = 0$; FFLO – polarized superconducting phase with ${\bm Q} \neq 0$. Solid lines and dots denote the boundaries between the phases found from calculations in which, respectively, only the simplified FF solutions are considered (cf. Ref. and ) and the full FFLO solutions are taken into account. (b) A part of the full phase diagram in the vicinity of the BCS–FFLO transitions including only the full FFLO solutions. The stars connected by the dashed lines denote sets $\mathcal{A}$ and $\mathcal{B}$ of the model parameters used in further calculations in the present work (cf. Figs. \[fig.ps1\] and \[fig.ps2\]). \[fig.df\] ](df.eps){width="\sizeA"}
Numerical results {#numerical-results .unnumbered}
=================
In this section, the obtained numerical results are presented. First, the homogeneous system (without the trap) is investigated, and the magnetic field $h$ versus chemical potential $\mu$ phase diagram is determined. Next, we show that phase separation occurs in the presence of a harmonic trap. In what follows, we will call such a phase separation induced by the trapping potential an *(artificially) enforced* phase separation. For suitably chosen parameters (i.e., $\mu$ and the trapping potential), a phase separation between different phases (e.g., a state with a BCS core and an FFLO shell as well as one with an FFLO core and a BCS shell) can be realized in the system.
Phase diagram: homogeneous system {#phase-diagram-homogeneous-system .unnumbered}
---------------------------------
Numerical calculations presented in this section have been performed for a one-dimensional chain with periodic boundary conditions and $N = 200$ sites. For the homogeneous system, i.e., $\mu_{i} = \mu = \text{const.}$, the magnetic field $h$ versus chemical potential $\mu$ phase diagram is shown in Fig. \[fig.df\]. The results for different values of $U/t$ are presented (cf. also Ref. and ). In each case, they consist of three regions. For low values of the magnetic field, the conventional BCS phase is stable (with $\Delta_i=\text{const}$ and $\Delta_i\neq0$). With increasing $h$, one finds a discontinuous phase transition from the BCS to the FFLO phase (with $\Delta_i$ changing from site to site). In the third region the normal (non-ordered, NO) phase is stable (with $\Delta_i=0$). The transition from the BCS to the NO phase is continuous (for changing $h$ and fixed $\mu$) or discontinuous (at the vertical boundary in the phase diagram). Notice also that for small $\mu/t$ and large $h/t$, the so-called $\eta$ phase can exist [@yang.89] (the FFLO phase with maximal $q=\pi$, at least for the FF ansatz [@ptok.cichy.17; @ptok.cichy.17b]). With increasing $U/t$, the region of the BCS phase extends, whereas the continuous FFLO–NO boundary depends only weakly on $U/t$. In Fig. \[fig.df\](a), the solid lines correspond to the case in which only the FF state is considered, i.e., all Cooper pairs have the same momentum ${\bm q}$ [@FF]. Including the fact that generally, in the “full” FFLO state, Cooper pairs with different momenta can contribute (i.e., as given by Eq. (\[eq.sop\_mom\])), one finds that the region of the FFLO phase occurrence is slightly extended in comparison to the one obtained for the FF ansatz alone. This is an expected and well known relation between superconducting phases with different numbers of allowed ${\bm q}$, i.e., a phase with a larger number of ${\bm q}$’s is more stable than the FF phase with only one ${\bm q}$ [@casalbuoni.nardulli.04; @jiang.ye.07].
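To make the distinction between the FF ansatz (a single ${\bm q}$) and the full FFLO state (several contributing ${\bm q}$'s) concrete, one can Fourier-decompose a converged real-space gap profile and inspect which pair momenta carry weight. The helper below is our own illustrative sketch (the function name and normalization are our choices).

```python
import numpy as np

def pair_momenta(Delta, n_modes=3):
    """Return the n_modes pair momenta q with the largest weight |Delta_q|.

    Decomposes Delta_i = sum_q c_q exp(i q R_i), with R_i = i and lattice
    constant 1; c_q equals Delta_q / N_x in the text's normalization.
    """
    Delta = np.asarray(Delta, dtype=complex)
    N = len(Delta)
    c_q = np.fft.fft(Delta) / N              # coefficients of exp(+i q R_i)
    q = 2.0 * np.pi * np.fft.fftfreq(N)      # momenta in the first Brillouin zone
    order = np.argsort(np.abs(c_q))[::-1]    # sort by decreasing weight
    return q[order[:n_modes]], c_q[order[:n_modes]]
```

A BCS-like constant profile yields a single peak at $q=0$, while an LO-like profile $\Delta_i \propto \cos(Q i)$ yields two peaks at $q=\pm Q$.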
It should be also mentioned that for low filling (i.e., for $| \mu | \sim 2 t$), the BCS–BEC crossover can be realized. The detailed discussion of this issue has been presented in Ref. . In the context of the present work (particularly for the problem of the phase separation region in the system), the following property of the phase diagram is also very important. Namely, depending on values of the magnetic field $h$ and on-site attraction $U$, with increasing $|\mu|$ the transitions from the BCS phase to the FFLO phase or from the FFLO phase to the BCS phase can occur \[red and blue stars, respectively, in Fig. \[fig.df\](b)\].
However, it is important to keep in mind that the use of the mean-field approximation is generally restricted to the weak coupling limit and ground state properties. The limitations of the mean-field method affect the one-dimensional system the most, because the pair fluctuations become very important in this case [@liu.hu.07]. For such a system geometry, the nature of the phase transition between the BCS and FFLO phases may be predicted incorrectly. Nevertheless, the mean-field approximation can give a useful description at weak and intermediate couplings, with results comparable to those of the Bethe ansatz [@liu.hu.07]. While the mean-field FF-type calculations do not predict the correct type of the phase transition from the BCS phase to the FFLO phase, the self-consistent Bogoliubov–de Gennes results are in good agreement with those obtained from the Bethe ansatz. In the continuum model, the first order phase transition is simply an artifact of the FF-type calculation [@liu.hu.07]. This suggests a similar problem in the case of a lattice in the thermodynamic limit, when the average particle concentration and the lattice constant go to zero (i.e., $n\rightarrow0$ and $a\rightarrow0$ — the dilute limit).
Phase separation and its types {#phase-separation-and-its-types .unnumbered}
------------------------------
Generally, phase separation is a state of the system where two or more uniform phases (e.g., those which have been defined previously) occur in different parts (so-called domains) of the system. In this section, we distinguish two different types of phase separation which can emerge in the system, i.e.: (a) a spontaneous phase separation, which can occur in a homogeneous system, and (b) an artificially enforced phase separation, which can emerge in an inhomogeneous system. In the present work these cases correspond to the system with $\mu_i=\text{const}$ at each site and to the system with an inhomogeneous spatial distribution of $\mu_i$, respectively. Below, we briefly characterize these two types of phase separation.
### Spontaneous (macroscopic) phase separation {#spontaneous-macroscopic-phase-separation .unnumbered}
The discontinuous transitions between two (homogeneous) phases in the diagram, as a function of $\mu$ are usually related to the discontinuous change of particle concentration from value $n_+$ to value $n_-$ ($n_+>n_-$). Such a transition can be associated with the occurrence of the (macroscopic) phase separation in a defined range $n_-<n<n_+$ of particle concentration [@arrigoni.strinati.91]. In such a phase-separated state two domains with different particle concentrations $n_-$ and $n_+$ coexist (there can be also regions differing in the magnitude of the order parameter as well as thermodynamic phases). In this approach, because of neglecting the interface energy at the boundaries of the domains, such states can exist only in the thermodynamic limit (i.e., when $N\rightarrow\infty$) [@kujawa.micnas.11; @kapcia.robaszkiewicz.12; @cichy.micnas.14; @kapcia.czart.16; @kapcia.baranski.17]. In a finite system, the interface energy can lead to an occurrence of states with other textures [@coleman.yukalova.95; @lorenzana.castellani.01.a; @lorenzana.castellani.01.b; @yukalov.yukalova.04; @yukalov.yukalova.14], besides the homogeneous states and the phase separated states discussed above. Due to the fact that the BCS–FFLO, BCS–NO, and FFLO–NO boundaries in Fig. \[fig.df\](a) can be discontinuous in the diagram as a function of $n$, one expects an occurrence of the following (macroscopic) phase separation regions: BCS/FFLO, BCS/NO, and FFLO/NO [@ptok.cichy.17; @ptok.cichy.17b]. Notice that only the BCS–FFLO boundary is discontinuous for all model parameters. To distinguish the phase separation from other states discussed above, we will call it a spontaneous one, because it can occur spontaneously in the homogeneous system.
{width="\sizeA"}
{width="\sizeA"}
### Artificially enforced phase separation: a model example {#artificially-enforced-phase-separation-a-model-example .unnumbered}
From the analysis of the phase diagram and shapes of boundaries between the phases, it is clear that for $U/t=-1.5$ and $h/t=0.125$ (set $\mathcal{A}$ of the model parameters), the phase with the lowest energy is (a) the FFLO phase at $\mu=\mu_1=0.75t$ and (b) the BCS phase at $\mu=\mu_2=1.75t$ \[the red stars in Fig. \[fig.df\](b)\]. The SOP at each site for these two solutions is presented in Fig. \[fig.ps1\](a) and Fig. \[fig.ps1\](b). Analogously, we also choose the following parameters for $U/t=-3.0$ and $h/t=0.55$ \[set $\mathcal{B}$ of the parameters, the blue stars in Fig. \[fig.df\](b)\]: $\mu_1=0.75t$ (the BCS phase) and $\mu_2=1.5t$ (the FFLO phase), and present the local dependence of the SOP in Fig. \[fig.ps2\](a) and Fig. \[fig.ps2\](b). One can notice that in the FFLO phase $\Delta_i$ changes from site to site periodically, whereas for the BCS solution $\Delta_i$ is homogeneous in the whole system.
Now, we investigate the system with a particular distribution of the chemical potential: $\mu_i=\mu_1$ in one half of the system (favoring the FFLO phase or the BCS phase for set $\mathcal{A}$ or $\mathcal{B}$, respectively) and $\mu_i=\mu_2$ in the other half (favoring the BCS phase or the FFLO phase for set $\mathcal{A}$ or $\mathcal{B}$, respectively). After solving the self-consistent set of the BdG equations in real space for such a system with $N=200$ sites (and with open boundary conditions), one finds that the solutions obtained in the two parts of the system resemble the ones for the homogeneous system \[Fig. \[fig.ps1\](c) and Fig. \[fig.ps2\](c)\]. In such a case, we have a coexistence of two homogeneous solutions which are spatially separated. Notice that small changes (in comparison to the solutions for the homogeneous system in each part of the system) are distinguishable only in the neighborhood of the interface between the two domains defined in this way. Thus, one can conclude that the effects associated with the interfaces are almost irrelevant for the state under consideration. Moreover, the spin polarization (defined as the difference of the concentrations of particles with spin up and spin down, $m_i=n_{i\uparrow}-n_{i\downarrow}$) as a function of site $i$ in these inhomogeneous states is shown in Fig. \[fig.ps1\](d) and Fig. \[fig.ps2\](d), for sets $\mathcal{A}$ and $\mathcal{B}$, respectively. As can be expected, the spin polarization exhibits a modulation twice as fast as that of the SOP. Additionally, the maximal values of the spin polarization are located at the nodal points of the SOP [@yanase.sigrist.09; @loh.trivedi.10]. This is a consequence of the interplay between the unpaired polarized particles and those in the superconducting state (i.e., the Cooper pairs). A small number of pairs (at sites with $\Delta_i\approx 0$) supports the occurrence of polarized states (i.e., a large value of $m_i$).
Note also that there is no coexistence of the superconducting and magnetic ordering in the BCS phase.
The nature of this inhomogeneous state is, however, different from the origin of the spontaneous (macroscopic) phase separation discussed in the previous point. In the present setup, the system parameters are inhomogeneous (different values of the chemical potential in the two parts of the system). We will call such a separated state an (artificially) enforced phase separation, in contrast to the spontaneous phase separation, which could occur in the homogeneous system.
Note that even if $\mu_1=\mu_2=\mu_c$ (where $\mu_c$ is a critical value of the chemical potential at the FFLO–BCS boundary for particular $U/t$ and $h/t$), due to the fact that the interface has a small but still finite energy, the finite system cannot exhibit spontaneous phase separation. In such a case, the whole system is in the FFLO phase or in the BCS phase (both states have equal energy and both solutions correspond to local minima of the grand canonical potential).
System with a trap {#system-with-a-trap .unnumbered}
------------------
After studying the model systems, let us investigate more realistic situations, in which the on-site potential $\mu_i$ changes from site to site. An example of an experimental realization of such systems are ultracold atomic gases in optical lattices. In the following, we consider two types of traps: (i) a linear trap with $\mu_{i} = V_{0} | r_{i} - r_{c} |$ and (ii) a harmonic trap with $\mu_{i} = V_{0} ( r_{i} - r_{c} )^{2}$, where $r_{i}$ is the location of the $i$-th site, whereas $r_{c}$ is the location of the center of the trap (i.e., $i=N/2=500$). In this part, we performed the calculations for $N=1000$ sites with open boundary conditions. $V_0$ was chosen in such a way that the maximal value of $\mu_i$ is much larger than the chemical potential corresponding to the BCS–NO transition in the homogeneous system at $h=0$ and fixed $U$.
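With the parameter values quoted above (and the shifted harmonic trap used later in the text), the trap profiles can be generated as follows; this is a plain sketch and the variable names are ours.

```python
import numpy as np

N = 1000
t = 1.0
i = np.arange(N)
r_c = N / 2                                   # trap centre, site i = N/2 = 500

# (i) linear trap:    mu_i = V0 |r_i - r_c|,    V0 = 6.0 t / N
mu_linear = (6.0 * t / N) * np.abs(i - r_c)
# (ii) harmonic trap: mu_i = V0 (r_i - r_c)^2,  V0 = 4.0 t / (N/2)^2
mu_harmonic = (4.0 * t / (N / 2) ** 2) * (i - r_c) ** 2
# shifted harmonic trap (used later): mu_i = V0 (r_i - r_c)^2 + Vbar
mu_shifted = (2.0 * t / (N / 2) ** 2) * (i - r_c) ** 2 + 1.25 * t
```

Each array can be placed on the diagonal of the single-particle part of the BdG matrix in place of a constant $\mu$.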
![ SOP $U \Delta_{i}$, magnetization $m_i$, and particle concentration $n_i$ (as labeled, solid lines) in real space (as a function of lattice site $i$) in the presence of a linear trap, for two different values of on-site attraction and external magnetic field: (a) $U/t=-1.5$ and $h=0.125t$ (set $\mathcal{A}$ of the parameters); and (b) $U/t=-3.0$ and $h=0.55t$ (set $\mathcal{B}$ of the parameters). The dependence of (local) chemical potential $\mu_i$ is marked with the dotted line. The background colors (blue, yellow and pink) indicate the boundaries between the phases (NO, BCS and FFLO, respectively). \[fig.linear\] ](linear.eps){width="\sizeA"}
![ SOP $U \Delta_{i}$, magnetization $m_i$, and particle concentration $n_i$ (as labeled, solid lines) in real space (as a function of lattice site $i$) in the presence of a harmonic trap, for two different values of the on-site attraction and the external magnetic field: (a) $U/t=-1.5$ and $h=0.125t$ (set $\mathcal{A}$ of the parameters); and (b) $U/t=-3.0$ and $h=0.55t$ (set $\mathcal{B}$ of the parameters). The dependence of the (local) chemical potential $\mu_i$ is marked with the dotted line. The background colors (blue, yellow and pink) indicate the boundaries between the phases (NO, BCS and FFLO, respectively). \[fig.harmonic\] ](harmonic.eps){width="\sizeA"}
The self-consistent solutions of the BdG equations for the case of a linear trap are presented in Fig. \[fig.linear\]. In this case, we show the SOP $U \Delta_{i}$, magnetization $m_i$, particle concentration $n_i$, and the trapping potential $\mu_i \equiv V(i) = V_{0} | r_{i} - r_{c} |$ (with $V_0 = 6.0 t / N$) as a function of the lattice site $i$ (blue, green, red and dashed black lines, respectively). Due to the fact that $\mu_i$ changes from site to site, $n_i$ changes at each site and one does not observe solutions with, e.g., a constant value of $\Delta_i$ corresponding to the BCS phase (but the region without oscillations of the SOP can be identified as corresponding to this phase, see below). However, one can indicate the regions along the chain which correspond to the phases indicated in Fig. \[fig.df\](a). Namely, starting from the left side of Fig. \[fig.linear\](a) (for $U/t=-1.5$ and $h=0.125t$, set $\mathcal{A}$), one can find a region of the NO (empty) phase, where $\Delta_i=0$ (and $n_i=0$). Next, there is the BCS region, where $\Delta_i\neq0$ exhibits no oscillations (and $m_i=0$). The oscillations of $\Delta_i$ and a nonzero magnetization $m_i\neq0$ are clearly visible in the center of the system. Such a behaviour of $\Delta_i$ indicates the occurrence of the FFLO phase in this part of the system. Hence, in the linear trapping potential, the FFLO phase in the core (i.e., in the center of the system) is surrounded by a shell of BCS states. For $U/t=-3.0$ and $h=0.55t$ (set $\mathcal{B}$), the situation is analogous, but going towards the center of the trap, the following sequence of regions is found: the NO phase \[both with $n_i=0$ (vacuum state) and as a filled state with $n_i\neq0$\], the FFLO phase, and the BCS phase (as a core). The NO phase with $n_i\neq0$ is fully polarized (i.e., $m_i/n_{i}=1$).
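The visual identification of NO, BCS, and FFLO regions described above can be automated with a simple heuristic: a site belongs to an NO region if the local gap vanishes, to an FFLO region if the gap changes sign within a local window, and to a BCS region otherwise. The sketch below is our own rough tool; the threshold and window size are arbitrary choices, not values from the paper.

```python
import numpy as np

def classify_sites(Delta, eps=1e-3, window=10):
    """Label each site NO / BCS / FFLO from a real-space gap profile Delta_i."""
    Delta = np.asarray(Delta, dtype=float)
    N = len(Delta)
    labels = []
    for i in range(N):
        lo, hi = max(0, i - window), min(N, i + window + 1)
        seg = Delta[lo:hi]                    # local window around site i
        if np.max(np.abs(seg)) < eps:
            labels.append('NO')               # vanishing gap
        elif seg.min() < -eps and seg.max() > eps:
            labels.append('FFLO')             # sign-changing (oscillating) gap
        else:
            labels.append('BCS')              # nonzero gap of uniform sign
    return labels
```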
Note that the location $i$ of the boundaries between the regions along the chain corresponds, approximately, to the values of $\mu_i$ for which the phase transition occurs in the homogeneous system \[cf. Fig. \[fig.df\](b)\]. This is attributed to the fact that the interactions in the system are short-ranged and the interface energy is relatively small. Moreover, for set $\mathcal{A}$, only the NO phase with $n=0$ can exist, whereas for set $\mathcal{B}$ also the NO phase with $n\neq0$ is possible (cf. Ref. ).
The results for a harmonic trap, presented in Fig. \[fig.harmonic\], do not differ qualitatively from those obtained for a linear trap. We take $\mu_{i} \equiv V(i) = V_{0} ( r_{i} - r_{c} )^{2}$, with $V_{0} = 4.0 t / (N/2)^2$. The chosen values correspond to a relatively flat trap. The sequences of the regions for the two sets of the model parameters ($\mathcal{A}$ and $\mathcal{B}$) are NO–BCS–FFLO and NO–FFLO–BCS, respectively, with spatially extended regions in the center (cores) and spatially shrunken external regions (shells).
The results presented above show that in the system with a trap, at each site the state of a homogeneous system corresponding to $\mu=\mu_i$ is approximately realized. The trap shapes considered above are chosen in such a way that $\mu_i \equiv V(i)$ covers a range in which all phases possible in the system are present at fixed $h$ \[cf. Fig. \[fig.df\]\] (from the NO phase at large $|\mu|$ at the boundaries of the system, i.e., at $i=0,N-1$, to the half-filled phases at $\mu=0$ in the center, i.e., at $i=N/2$). In other words, all phases of a homogeneous system at fixed $\mu$ can be realized in the setup with the trap. However, by modifying the trap parameters, one can realize a system which corresponds to only a part of the phase diagram (i.e., some limited range of $\mu$). A sample of the results is shown in Fig. \[fig.harmonic\_shif\]. They are obtained for the system with a harmonic trap of the form $\mu_{i} = V_{0} ( r_{i} - r_{c} )^{2} + \bar{V}$, with $V_{0} = 2.0 t / (N/2)^2$ and $\bar{V} = 1.25 t$. In this case, the minimum value of the trapping potential is shifted to a finite value, i.e., $\mu_{N/2} = \bar{V}$ (black dotted line). The chosen parameters allow one to realize a part of the phase diagram in which two superconducting domains are separated by the NO state \[cf. Fig. \[fig.df\](b)\]. Depending on the fixed value of the magnetic field $h$, states with a BCS or BCS–FFLO core surrounded by the BCS shell, separated by an NO sub-shell, can be realized in the system.
![ SOP $U \Delta_{i}$, magnetization $m_i$, and particle concentration $n_i$ (as labeled, solid lines) in real space (as a function of lattice site $i$) in the presence of the shifted harmonic trap $\mu_{i} = V_{0} ( r_{i} - r_{c} )^{2} + \bar{V}$ \[cf. Fig. \[fig.df\](b)\]. The background colors (blue, yellow and pink) indicate the boundaries between the phases (NO, BCS and FFLO, respectively). \[fig.harmonic\_shif\] ](ps3.eps){width="\sizeA"}
The results shown in Fig. \[fig.harmonic\_shif\] correspond to a part of the phase diagram \[cf. Fig. \[fig.df\](b)\] where the BCS–BEC crossover is realized. One should notice that the system under consideration is in a trap, which can change the state at a site with $\mu_i$ in comparison to the homogeneous solution with $\mu=\mu_i$. Indeed, the numerical results show that this state can be realized outside the BCS core \[Fig. \[fig.harmonic\_shif\](a)\], for $h$ smaller than the value at which the FFLO phase occurs in a homogeneous system. The realization of two BCS-like domains, separated by the NO state, is possible by decreasing $h$ \[Fig. \[fig.harmonic\_shif\](b)\]. At the boundary of the internal BCS core, one observes an FFLO-type gap oscillation with a change of its sign [@castorina.grasso.05; @iskin.williams.08; @bakhtiari.leskinen.08]. It is important to emphasize that, as a consequence of the low filling at the external BCS shell, we expect a realization of the BEC (the regime of tightly bound local pairs) of the Cooper pairs. According to the [*Leggett criterion*]{} [@leggett.80], the BEC (i.e., Bose–Einstein condensation) begins when the effective chemical potential is smaller than the lower band edge.
Summary {#summary .unnumbered}
=======
In conclusion, we have shown that the properties of the system in the presence of a trap are strongly associated with the phase diagram of the homogeneous system (in the absence of a trap). This is clearly seen from the spatial dependence of different quantities in the systems with linear and harmonic traps (Figs. \[fig.linear\] and \[fig.harmonic\], respectively) and confirms the validity of the so-called local density approximation, which is very often employed in theoretical calculations for trapped systems that start from results for homogeneous systems [@heidrichmeisner.orso.10; @hu.xiaji.07; @liu.hu.08]. We have shown that in such a system core-shell structures can be created by the trapping potential. For the chosen parameters, the order of states realized in such structures corresponds to the sequence of phases occurring in the phase diagram of the homogeneous system (Fig. \[fig.df\]). The states occurring in a particular sequence depend mainly on the values of the model parameters $h$ and $U$. Generally, the shape of the trapping potential does not change the structure of the phase occurrence. Moreover, even in the same type of trap, for different values of the model parameters (i.e., the interaction $U$ or the magnetic field $h$), structures with a BCS core and an FFLO shell as well as with an FFLO core and a BCS shell can be realized.
An increase of the on-site interaction at low filling can lead to the BCS–BEC crossover in the system. By tuning the trap parameters, we can realize the part of the phase diagram in which this crossover occurs, and thus obtain more complex core-shell structures. In particular, one can obtain two BCS states separated by the NO state (e.g., Fig. \[fig.harmonic\_shif\]). Due to the low filling at the outer BCS shell, a BEC should emerge in the system.
In this work, we have shown that different core-shell structures can occur in a trapped system. These structures are examples of so-called (artificially) enforced phase separation, occurring in spatially inhomogeneous systems, whose origin differs from that of the macroscopic phase separation in homogeneous systems. It is important to emphasize that such an experimental setup with a trap makes it possible to investigate phase diagrams of homogeneous systems with short-range interactions. The theoretical predictions presented in this work should be realizable experimentally in a relatively simple way.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank Krzysztof Cichy for careful reading of the manuscript, valuable comments and discussions. A.P. is grateful to Laboratoire de Physique des Solides (CNRS, Université Paris-Sud) for hospitality during a part of the work. This work was supported by the National Science Centre (Narodowe Centrum Nauki, NCN, Poland) under grants nos.: UMO-2017/24/C/ST3/00357 (A.C.), UMO-2017/24/C/ST3/00276 (K.J.K.), and UMO-2017/25/B/ST3/02586 (A.P.).
Author contributions statement {#author-contributions-statement .unnumbered}
==============================
A.P. initiated and coordinated the project and performed the numerical calculations. All authors discussed the results. A.C. and K.J.K. prepared the first version of the manuscript. All authors contributed to the final version of the manuscript, and all of them reviewed and accepted it.
Competing interests {#competing-interests .unnumbered}
===================
The authors declare no competing interests.
---
abstract: 'We investigate how the Hausdorff dimensions of microsets are related to the dimensions of the original set. It is known that the maximal dimension of a microset is the Assouad dimension of the set. We prove that the lower dimension can analogously be obtained as the minimal dimension of a microset. In particular, the maximum and minimum exist. We also show that for an arbitrary $\mathcal{F}_\sigma$ set $\Delta \subseteq [0,d]$ containing its infimum and supremum there is a compact set in $[0,1]^d$ for which the set of Hausdorff dimensions attained by its microsets is exactly equal to the set $\Delta$. Our work is motivated by the general programme of determining what geometric information about a set can be determined at the level of tangents.'
address:
- |
Jonathan M. Fraser\
School of Mathematics & Statistics\
University of St Andrews\
St Andrews\
KY16 9SS\
UK
- |
Douglas C. Howroyd\
School of Mathematics & Statistics\
University of St Andrews\
St Andrews\
KY16 9SS\
UK
- |
Antti K[ä]{}enm[ä]{}ki\
Department of Physics and Mathematics\
University of Eastern Finland\
P.O. Box 111\
FI-80101 Joensuu\
Finland
- |
Han Yu\
School of Mathematics & Statistics\
University of St Andrews\
St Andrews\
KY16 9SS\
UK
author:
- 'Jonathan M. Fraser'
- 'Douglas C. Howroyd'
- 'Antti K[ä]{}enm[ä]{}ki'
- Han Yu
title: On the Hausdorff dimension of microsets
---
[^1]
Introduction
============
To calculate the dimension of a set it is often important to understand its infinitesimal structure. This leads us to the notion of microsets introduced by Furstenberg [@Fu]. They are sets that are obtained as limits of successive magnifications of the original set. From a dynamical point of view, the collection of all microsets together with the magnification action define a dynamical system. The study of this dynamical system is known as the theory of CP-chains. For more details in this direction, see also [@FFS; @Fu; @H10; @HS12; @KSS15]. In this paper, we want to study the collection of *all* microsets. This collection heuristically represents all possible fine structures of a set. For general compact sets the structure of this collection is very rich; see [@CR].
The Assouad dimension characterises how large the densest part of a set is. It is known that the greatest Hausdorff dimension of a microset of a set $F$ is equal to the Assouad dimension of $F$. In much the same way, the lower dimension reflects how sparse a set can be, and it is natural to expect that the microsets of smallest dimension reflect the lower dimension of $F$. This is our first result.
\[ThMain\] For any compact set $F\subset \mathbb{R}^d$ we have $${\dim_{\mathrm{L}}}F=\min_{E\in\mathcal{G}_F} {\dim_{\mathrm{H}}}E=\min_{E\in\mathcal{G}_F} {\overline{\dim}_\mathrm{B}}E.$$ In particular, this minimum exists.
Here ${\dim_{\mathrm{L}}}$ stands for the lower dimension, ${\dim_{\mathrm{H}}}$ for the Hausdorff dimension, ${\overline{\dim}_\mathrm{B}}$ for the upper box dimension, and $\mathcal{G}_F$ for the gallery of $F$; see Section \[prelim\] for the precise definitions. Combining this result with the analogous one for the Assouad dimension, we obtain the following corollary.
\[maincor\] For any compact set $F\subset \mathbb{R}^d$, all elements in $\mathcal{G}_F$ have the same Hausdorff dimension if and only if $${\dim_{\mathrm{L}}}F={\dim_{\mathrm{A}}}F.$$
Here ${\dim_{\mathrm{A}}}$ stands for the Assouad dimension; again see Section \[prelim\] for the definition.
We know that the Hausdorff dimensions of microsets attain both the lower and Assouad dimensions of a set. The question then becomes which other numbers *can* be attained as Hausdorff dimensions of microsets and which numbers are *guaranteed* to be attained. The next result shows that the collection of Hausdorff dimensions obtained can be rather complicated and rich.
\[Thmain2\] If $\Delta \subseteq [0,d]$ is an $\mathcal{F}_\sigma$ set which contains its infimum and supremum, then there exists a compact set $F \subseteq [0,1]^d$ such that $$\{ {\dim_{\mathrm{H}}}E : E \in \mathcal{G}_F \} = \Delta.$$
The gallery of a set is closed in the Hausdorff metric. However, the above theorem shows that the set of Hausdorff dimensions of microsets in the gallery need not be closed. We have not been able to construct compact sets $F$ for which $\{ {\dim_{\mathrm{H}}}E : E \in \mathcal{G}_F \} $ is not $\mathcal{F}_\sigma$ and wonder if this is always the case.
Note that the Hausdorff, packing, upper and lower box dimensions of the original set need not appear as Hausdorff dimensions of sets in the gallery if one insists on the microsets having unbounded scaling sequence. This is a natural assumption which guarantees that microsets genuinely reflect infinitesimal structure, that is, one genuinely zooms in to generate them. This is in stark contrast to the Assouad and lower dimensions which we have seen always appear. See Section \[dimappear\] for a full discussion of this observation.
In the opposite direction, there exist well-studied sets whose microsets attain all possible dimensions between the lower and Assouad dimensions. For instance, Bedford–McMullen carpets $F$ which do not have uniform fibers have the property that ${\dim_{\mathrm{L}}}F< {\dim_{\mathrm{A}}}F$ and $\{ {\dim_{\mathrm{H}}}E : E \in \mathcal{G}_F \} = [{\dim_{\mathrm{L}}}F, {\dim_{\mathrm{A}}}F]$. This can be seen by adapting the arguments in [@mackay; @F], which construct extremal microsets, to such carpets. The extremal microsets are of the form $\pi F \times C$ where $\pi F$ is the projection of $F$ onto the first coordinate and $C$ is a self-similar set corresponding to the minimal or maximal column. To obtain intermediate dimensions, one may construct microsets of the form $\pi F \times C_p$ where $C_p$ is a ‘random Cantor set’ in which the minimal column is chosen with probability $(1-p)$ and the maximal column with probability $p$. Varying $p \in (0,1)$ yields microsets with all possible dimensions. We do not pursue the details. Alternatively, the construction of Chen and Rossi [@CR] yields a set $F\subseteq [0,1]^d$ such that $\{ {\dim_{\mathrm{H}}}E : E \in \mathcal{G}_F \} = [0,d]$.
In addition to the dimension results above, we also have the following topological result which can be naturally viewed as a dual version of [@FY1 Theorem 2.4], which says that any set of full Assouad dimension has the unit cube as a microset.
\[Sing\] If $F\subset \mathbb{R}^d$ is compact with ${\dim_{\mathrm{L}}}F=0$, then there is a singleton in $\mathcal{G}_F$.
Preliminaries {#prelim}
=============
Dimensions
----------
Let $N_r(F)$ be the smallest number of cubes of side length $r>0$ needed to cover the compact set $F \subset \mathbb{R}^d$. The *upper* and *lower box dimensions* of $F$ are $${\overline{\dim}_\mathrm{B}}F=\limsup_{r\to 0} \frac{-\log N_r(F)}{\log r}$$ and $${\underline{\dim}_\mathrm{B}}F=\liminf_{r\to 0}\frac{-\log N_r(F)}{\log r},$$ respectively. When these two values coincide we simply talk about the *box dimension* of $F$, denoted by ${\dim_\mathrm{B}}F$.
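As a quick numerical illustration (not part of the paper), the box dimension of the middle third Cantor set can be estimated from covering counts at scales $r=3^{-n}$: the $2^n$ level-$n$ construction intervals themselves form a cover by cubes of side $3^{-n}$ which is optimal up to a bounded factor, so the ratio $\log N_r/\log(1/r)$ recovers $\log 2/\log 3$. A minimal Python sketch:

```python
import math

def cantor_level(n):
    """Left endpoints (as integers i, meaning the interval
    [i/3^n, (i+1)/3^n]) of the level-n intervals of the
    middle third Cantor set."""
    pts = [0]
    for _ in range(n):
        pts = [3 * p for p in pts] + [3 * p + 2 for p in pts]
    return pts

def box_dim_estimate(n):
    """log N_r / log(1/r) at scale r = 3**-n, taking the level-n
    intervals as the cover (optimal up to a bounded factor, which
    does not affect the limit)."""
    n_r = len(cantor_level(n))  # 2**n intervals
    return math.log(n_r) / (n * math.log(3))

print(box_dim_estimate(10))       # ≈ 0.6309…
print(math.log(2) / math.log(3))  # the true value log 2 / log 3
```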
Let $F\subset \mathbb{R}^d$ be a compact set and $s$ a non-negative real. For all $\delta>0$ we define $$\mathcal{H}^s_\delta(F)=\inf\left\{\sum_{i=1}^{\infty}\mathrm{diam} (U_i)^s: F \subset \bigcup_i U_i \text{ and } \mathrm{diam}(U_i)<\delta\right\}.$$ The $s$-dimensional Hausdorff measure of $F$ is $$\mathcal{H}^s(F)=\lim_{\delta\to 0} \mathcal{H}^s_{\delta}(F)$$ and the *Hausdorff dimension* of $F$ is $${\dim_{\mathrm{H}}}F=\inf\{s\geq 0:\mathcal{H}^s(F)=0\}=\sup\{s\geq 0: \mathcal{H}^s(F)=\infty \}.$$ For a more thorough treatment of the box and Hausdorff dimensions, see [@Fa Chapters 2 and 3] and [@Ma1 Chapters 4 and 5].
Finally we define the *Assouad* and *lower dimensions* of $F$ by $$\begin{aligned}
{\dim_{\mathrm{A}}}F = \inf \Bigg\{ s \ge 0 \, \, \colon \, (\exists \, C >0)\, (\forall & R>0)\, (\forall r \in (0,R))\, (\forall x \in F) \\
&N_r(B(x,R) \cap F) \le C \left( \frac{R}{r}\right)^s \Bigg\}
\end{aligned}$$ and $$\begin{aligned}
{\dim_{\mathrm{L}}}F = \sup \Bigg\{ s \ge 0 \, \, \colon \, (\exists \, C >0)\, (\forall &\, 0<R<1)\, (\forall r \in (0,R))\, (\forall x \in F) \\
&N_r(B(x,R) \cap F) \geq C \left( \frac{R}{r}\right)^s \Bigg\},
\end{aligned}$$ where $B(x,r)$ is the closed ball of centre $x$ and radius $r$. For basic properties of these dimensions, see [@F].
The main property we will use is the chain of inequalities $${\dim_{\mathrm{L}}}F\leq {\dim_{\mathrm{H}}}F\leq {\underline{\dim}_{\mathrm{B}}}F\leq {\overline{\dim}_{\mathrm{B}}}F\leq {\dim_{\mathrm{A}}}F$$ for all compact $F\subset\mathbb{R}^d$.
Microsets and galleries {#MG}
-----------------------
We now introduce the notion of microsets and galleries following [@Fu]. We start by defining the Hausdorff distance between two compact sets $A,B \subset \mathbb{R}^d$ by $$d_{\mathcal{H}}(A,B)=\inf\{\delta>0: A\subset B_{\delta} \text{ and } B\subset A_{\delta}\},$$ where $E_\delta$ is the closed $\delta$-neighbourhood of a compact set $E$.
Let $X=[0,1]^d$ for some $d\in \mathbb{N}$. Then $(\mathcal{K}(X),d_\mathcal{H})$, the space of compact subsets of $X$, is a compact metric space.
We call $D\in \mathcal{K}(X)$ a *miniset* of $F\in\mathcal{K}(\mathbb{R}^d)$ if $D = \left(\lambda F + t\right)\cap X$ for some scaling coefficient $\lambda \ge 1$ and a translation vector $t\in \mathbb{R}^d$. A set $E \in \mathcal{K}(X)$ is called a *microset* if it is a limit of a sequence $(D_n)_{n \in \mathbb{N}}$ of minisets under the Hausdorff metric. The sequence $(\lambda_n)_{n \in \mathbb{N}}$, where each $\lambda_n$ is a scaling coefficient of the miniset $D_n$, is called the *scaling sequence* of the microset $E$.
If a set is regular enough, for instance a self-similar set of positive dimension, then one could expect all microsets to be of the same dimension without appealing to Theorem \[maincor\]. However, it is easy to find microsets which are just singletons. Consider the middle third Cantor set $\mathcal{C}$: then $(4\mathcal{C} - 4/3) \cap [0,1]= \{0\}$ is a miniset and hence a microset. This example can easily be modified so that none of the defining minisets is a singleton but the limit microset is. Thus it is natural to discard all microsets which only contain points on the boundary of $X$. This is reflected in the next definition, which strays slightly from the formulation in [@Fu].
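The miniset computation above can be checked directly. Below is an illustrative Python sketch (not from the paper) using exact rational arithmetic on the level-$n$ interval approximation of $\mathcal{C}$: after applying $x\mapsto 4x-4/3$ and intersecting with $[0,1]$, only the single point $0$ survives.

```python
from fractions import Fraction as F

def cantor_intervals(n):
    """Level-n closed intervals of the middle third Cantor set."""
    ivs = [(F(0), F(1))]
    for _ in range(n):
        ivs = [piece for a, b in ivs
               for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return ivs

def miniset(ivs, lam, t):
    """Intersect the scaled copy lam*I + t of each interval with [0,1]."""
    out = []
    for a, b in ivs:
        lo, hi = max(lam * a + t, F(0)), min(lam * b + t, F(1))
        if lo <= hi:
            out.append((lo, hi))
    return out

# (4*C - 4/3) ∩ [0,1]: only the degenerate interval {0} survives.
result = miniset(cantor_intervals(8), F(4), F(-4, 3))
print(result)  # [(Fraction(0, 1), Fraction(0, 1))]
```

The point is that $\mathcal{C}\cap[1/3,7/12]=\{1/3\}$, and $4\cdot\tfrac13-\tfrac43=0$.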
\[def:gallery\] Let $F$ be a compact subset of $\mathbb{R}^d$. We consider only microsets which intersect the interior of $X$. Then the collection of all such microsets of $F$ is called the gallery of $F$, denoted by $\mathcal{G}_F$.
Due to [@MT] we know that for compact subsets $F\subset \mathbb{R}^d$, $$\label{eq:assouad-upper-micro}
{\dim_{\mathrm{A}}}F \ge \sup_{E\in\mathcal{G}_F} {\dim_{\mathrm{A}}}E.$$ The lower dimension case was considered in [@F Proposition 7.7] where the following proposition was obtained under some extra assumption but with the infimum taken over lower dimensions of microsets. We give a short proof to show that the extra assumption is not needed when one takes the infimum over Hausdorff dimensions.
\[prop:lower-lower-bound\] If $F \subseteq [0,1]^d$ is a compact set, then $${\dim_{\mathrm{L}}}F \le \inf_{E\in\mathcal{G}_F} {\dim_{\mathrm{H}}}E.$$
Let $F\subseteq [0,1]^d$ be compact. We may assume ${\dim_{\mathrm{L}}}F> 0$ since otherwise there is nothing to prove. If $0<s< {\dim_{\mathrm{L}}}F$, then for any sequence of homotheties $T_k \colon \mathbb{R}^d \rightarrow \mathbb{R}^d$ there exists a constant $C >0$ such that for any $k\in \mathbb{N}$, $y\in T_k(F)$ and $0<r<R< 1$ we have $$N_r(B(y,R)\cap T_k(F) \cap [0,1]^d) \ge C\left( \frac{R}{r}\right)^s.$$ If this was not true, then the lower dimension of $F$ would be strictly less than $s$, a contradiction. Note that $T_k(F) \cap [0,1]^d$ is a miniset of $F$.
Let $E\in \mathcal{G}_F$ and recall from Definition \[def:gallery\] that then $E \cap (0,1)^d \neq \emptyset$. Note that if $E$ has an isolated point on the boundary of $[0,1]^d$, then ${\dim_{\mathrm{L}}}E = 0 < {\dim_{\mathrm{L}}}F$. To show that ${\dim_{\mathrm{L}}}F \le {\dim_{\mathrm{L}}}(E \cap (0,1)^d)$ let us fix $0<r<R<1$ and $x \in E \cap (0,1)^d$. Choose $k\in \mathbb{N}$ so that $d_{\mathcal{H}}(T_k(F) \cap [0,1]^d, E) \le r/2$. Then there is $y \in T_k(F) \cap [0,1]^d$ such that $B(y,R/2) \subset B(x,R)$ and for every $r$-cover of $B(x,R) \cap E \cap (0,1)^d$ there is a $2r$-cover of $B(y,R/2) \cap T_k(F) \cap [0,1]^d$. Thus $$\begin{aligned}
N_r(B(x,R)\cap E \cap (0,1)^d) &\ge N_{2r}(B(y,R/2)\cap T_k(F) \cap [0,1]^d) \\ &\ge C4^{-s} \left(\frac{R}{r} \right)^s
\end{aligned}$$ yielding $${\dim_{\mathrm{L}}}F \le {\dim_{\mathrm{L}}}(E \cap (0,1)^d) \le {\dim_{\mathrm{H}}}E$$ as desired.
We are interested in whether the above inequalities are actually equalities and if the supremum and infimum can be attained. For the Assouad dimension, we have the following result.
\[MicroAssouad\] If $F \subset \mathbb{R}^d$ is a compact set, then $${\dim_{\mathrm{A}}}F=\max_{E\in\mathcal{G}_F} {\dim_{\mathrm{H}}}E=\max_{E\in\mathcal{G}_F} {\dim_{\mathrm{A}}}E.$$
Recalling \[eq:assouad-upper-micro\], the statement follows from [@Fu Theorem 5.1] and [@KR Proposition 3.13], or, alternatively, directly from [@KOR Proposition 5.7].
Thus, together with Theorem \[ThMain\], we obtain the following equivalent definitions of the Assouad and lower dimensions for compact subsets of Euclidean spaces $${\dim_{\mathrm{A}}}F=\max_{E\in\mathcal{G}_F} {\dim_{\mathrm{H}}}E$$ and $${\dim_{\mathrm{L}}}F=\min_{E\in\mathcal{G}_F} {\dim_{\mathrm{H}}}E.$$ We remark that in the literature weak tangents are often used in place of microsets. They differ from microsets by allowing rotations in the magnifications, and sometimes they are not restricted to the unit cube.
Global and local size of trees {#Tree}
==============================
Before proving Theorem \[ThMain\] we need some combinatorial results on the structure of trees. Here we only talk about binary trees (finite or infinite) but all definitions and results can be easily generalized to any $k$-ary trees with $k\geq 3$. Notation introduced in this section will only be used in this section and the next one.
We adopt standard graph theoretic notation and use $V(T)$ and $E(T)$ for the vertices and edges of $T$. We consider rooted trees, which are directed graphs with edges going away from a root vertex. We define the degree of a vertex to be the sum of its indegree and outdegree. A leaf of $T$ is a vertex of degree $1$, except when $T$ consists of just the root vertex, in which case the only leaf is the root, of degree $0$. We denote the set of leaves of $T$ by $L(T)$, so in particular $L(T)\subset V(T)$.
Given a binary tree $T$, the height $h(T)$ is the length of the longest path starting at the root vertex. When $h(T)<\infty$ we say that $T$ is finite. For a vertex $a\in V(T)$, $h(a)$ is the length of the unique path from the root to $a$. We will often use the term ‘level $n$’ to mean all vertices of height $n$. Given $a\in V(T)$, we use $T(a,n)$ to denote the largest subtree of $T$ with root $a$ of height at most $n$; formally, the vertex set of $T(a,n)$ is $$V(T(a,n))=\{b\in V(T): \text{there is a path in $T$ from $a$ to $b$ of length at most $n$}\}$$ and the edge set is $$E(T(a,n))=\{(b_1,b_2)\in E(T):b_1\in V(T(a,n)), b_2\in V(T(a,n)) \}.$$ In other words, $T(a,n)$ is the subgraph of $T$ induced by the vertices $V(T(a,n))$.
For any binary tree $T$ we use $\#T$ to denote the number of leaves and $\#_nT$ for the number of vertices of height $n$. A binary tree $T$ is *tidy* if $h(a)=h(T)$ for all leaves $a\in V(T)$. For example, every full tree is tidy but not conversely. If $T$ is tidy, then for any $a\in V(T)$ and any integer $n$ such that $h(a)+n\leq h(T)$, it is clear that $T(a,n)$ is tidy.
\[loc\] Let $T$ be a tidy binary tree, $s>0$ and $m\in \mathbb{N}$. We call $T$ locally $(s,m)$-large (or small) if for all $a\in V(T)$ with $h(a)+m\leq h(T)$, there exists $1\leq n\leq m$ such that $$\#T(a,n)\geq 2^{sn}
\quad\left( \text{or } \#T(a,n)\leq 2^{sn} \right).$$
Note that when $T$ is infinite then this must simply hold for all $a\in V(T)$.
An extreme example would be to take $s=m=1$ in the definition: then $T$ is in fact a full binary tree if it is locally $(1,1)$-large. With this local property at hand we can also define the following global property.
\[glo\] Let $T$ be a tidy binary tree, $s>0$ and $C>0$. We call $T$ globally $(s,C)$-large (or small) if for all $n\in [1,h(T)]$ $$\#_n T\geq C2^{sn}
\quad\left( \text{or } \#_n T\leq C2^{sn}\right).$$
Again note that if $T$ is infinite then this must hold for all $n\in [1,\infty)$. We state and prove our regularity lemma in terms of largeness. Note that it is also possible to obtain an analogous lemma with largeness being replaced by smallness. The proof is similar and we omit the details.
\[reg\] Let $T$ be a tidy locally $(s,m)$-large tree with height larger than $m$, then it is globally $(s,2^{-sm})$-large as well.
As $T$ is locally large in the above sense we can use the following algorithm to find large subtrees.
**Step 1**: let $T_0$ be the tree whose vertex set contains only the root of $T$.
**Step 2**: If $T_k$ is defined for an integer $k$, then $T_k$ has leaves. Take a leaf $a$ of $T_k$, then there is an integer $1\leq n\leq m$ such that $$\#T(a,n)\geq 2^{sn}.$$ Then we join $T(a,n)$ to $T_k$ at $a$ and call the tree obtained $T_{k+1}$.
We can repeat Step 2 for all leaves of $T_k$. Notice that the above algorithm is not deterministic as there are multiple choices of leaves in Step 2. However, the algorithm could easily be made deterministic by picking leaves “from left to right”.
Let $N$ be an integer larger than $m$ and not greater than the height of $T$. Let $T^N$ be a subtree of $T$ obtained by repeating Step 2 with the restriction that all leaves of $T_k$ have height at most $N$. Suppose $T^N$ is maximal in the sense that we cannot enlarge $T^N$ by applying Step 2. Then it is clear that all the leaves of $T^N$ have height at least $N-m$ for otherwise the local largeness can help us enlarge $T^N$.
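The algorithm above can be sketched in code. The following Python fragment is an illustrative implementation (not from the paper): it represents a tidy binary tree of height $H$ by its set of level-$H$ vertices, encoded as binary words, and grows $T^N$ greedily as in Steps 1–2. On a locally $(s,m)$-large tree, every leaf of the resulting maximal $T^N$ has height at least $N-m$.

```python
def count(words, prefix, n):
    """#T(a, n): number of level-(h(a)+n) descendants of the vertex a,
    where a is encoded by `prefix` and the tree by its deepest level `words`."""
    d = len(prefix) + n
    return len({w[:d] for w in words if w.startswith(prefix)})

def grow(words, s, m, N):
    """Steps 1-2: starting from the root, repeatedly replace each leaf a
    by a subtree T(a, n) with #T(a, n) >= 2**(s*n) and 1 <= n <= m,
    stopping once height N would be exceeded. Returns the leaves of T^N."""
    frontier, done = [""], []
    while frontier:
        a = frontier.pop()
        for n in range(1, m + 1):
            if len(a) + n <= N and count(words, a, n) >= 2 ** (s * n):
                d = len(a) + n
                frontier += sorted({w[:d] for w in words if w.startswith(a)})
                break
        else:  # no admissible extension: a is a leaf of T^N
            done.append(a)
    return done

# On the full binary tree of height 6 (all 6-bit words), every vertex has
# two children, so the algorithm descends level by level and all leaves
# of T^N reach height N.
full = {format(i, "06b") for i in range(64)}
leaves = grow(full, s=1, m=2, N=6)
print(len(leaves), min(len(a) for a in leaves))  # 64 6
```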
We define the $s$-weight of $a\in V(T)$ to be $W(a) = 2^{-sh(a)}$ for $s\in [0,1]$. Note that the root always has weight 1. Consider the total $s$-weight of the leaves of $T^N$: $$W(T^N)=\sum_{a \in L(T^N)} 2^{-sh(a)}.$$ We can group the leaves of $T^N$ according to their ancestors. Namely, for each leaf $a$ of $T^N$, there is a unique $b\in V(T^N)$ such that $a$ is a leaf of $T(b,n)$ for some $n\in [1,m]$ and $T(b,n)$ is a tree joined to the main tree in Step 2 of our algorithm. We shall denote $b=b(a)$ to imply this dependence.
We now compute the total weight of the leaves of $T^N$. First, we find the set $L$ of leaves with maximum height. This is possible because there are only finitely many leaves in $L(T^N)$. Then if $a\in L$ and $b=b(a)$, the subtree $T(b,h(a)-h(b))$ is contained in $T^N$ and $L(T(b,h(a)-h(b)))\subset L$. As there are only finitely many leaves in $L$, we can find a finite collection $B$ of vertices $b\in V(T^N)$ and integers $\{n_b\}_{b\in B}$ such that $L$ is the disjoint union of the sets $L(T(b,n_b)), b\in B$. Therefore $$\sum_{a\in L} 2^{-sh(a)}=\sum_{b\in B}\sum_{a\in L(T(b,n_b))} 2^{-sh(a)}.$$ Each subtree $T(b,n_b)$ was attached in Step 2 of the algorithm, so $\#T(b,n_b)\geq 2^{sn_b}$, and every leaf $a\in L(T(b,n_b))$ has height $h(a)=h(b)+n_b$. Hence for each $b\in B$ $$\sum_{a\in L(T(b,n_b))} 2^{-sh(a)}\geq 2^{sn_b}\,2^{-s(h(b)+n_b)}=2^{-sh(b)}.$$ This implies that $$\sum_{a\in L} 2^{-sh(a)}\geq \sum_{b\in B} 2^{-sh(b)}.$$ Now we construct a subtree $T^N_1$ of $T^N$ by replacing the subtrees $T(b,n_b), b\in B$, with the single vertices $b\in B$. From the above we see that $$\sum_{a\in L(T^N_1)} 2^{-sh(a)}\leq \sum_{a'\in L(T^N)} 2^{-sh(a')},$$ because each $a\in L(T^N_1)$ is either in $L(T^N)$ or else it is in $B$.
We can then perform the above procedure on the tree $T^N_1$ instead of $T^N$ and we obtain a subtree $T^N_2$ whose leaves have weight no greater than that of $T^N_1$. Moreover, the height of $T^N_2$ is strictly smaller than the height of $T^N_1$. This means that after performing the above procedure at most finitely many times we arrive at the tree with only one vertex, the root. This implies that $$W(T^N)\geq 1.$$
As we just observed, the leaves of $T^N$ have height at least $N-m$, so their weights are at most $2^{-s(N-m)}$ and the number of leaves is therefore at least $$2^{s(N-m)}.$$ However, observe that $\#_N T\geq \# T^N$ because $T$ is tidy. Therefore we see that $$\#_N T \geq 2^{s(N-m)}=2^{-sm}2^{sN}.$$ As $N\in(m,h(T)]$ was arbitrary (and the bound is trivial for $N\leq m$, since then $2^{-sm}2^{sN}\leq 1\leq \#_N T$), $T$ is globally $(s,2^{-sm})$-large, as required.
Lower dimension and microsets {#Microset}
=============================
Returning to the Euclidean space, we now prove Theorem \[ThMain\]. We shall show that there exists a microset $E \in \mathcal{G}_F$ such that $${\overline{\dim}_\mathrm{B}}E \le {\dim_{\mathrm{L}}}F$$ whenever $F$ is a compact subset of $[0,1]$. The result easily generalizes to higher dimensions and Theorem \[ThMain\] then follows from Proposition \[prop:lower-lower-bound\]. The main idea behind the proof is to represent subsets of the unit interval as dyadic trees and then use the previous regularity lemma to determine the covering number of a microset.
For $n\in \mathbb{N}$ and $i\in \left\{0,1,\ldots, 2^n-1 \right\}$ we define the $i^\textrm{th}$ dyadic interval of height $n$ to be $D_n(i) = \left[\frac{i}{2^n}, \frac{i+1}{2^n}\right]$. This interval is then associated with the $i^\textrm{th}$ vertex of level $n$ in the full binary tree. We can then associate a subtree $T(F)$ of the full binary tree to a compact set $F\subseteq [0,1]$ by removing the $j^\textrm{th}$ vertex of level $k$ (as well as all of its descendants) if $D_k(j) \cap F = \emptyset$. Note that if $D_k(j) \cap F = \emptyset$ for some $k$ and $j$ then any smaller dyadic interval inside $D_k(j)$ must also not intersect $F$. So $T(F)$ is indeed a subtree of the full dyadic tree. This association is morally 1-1, only failing to be so because dyadic rationals can be represented by two infinite paths. We get around this problem, and at the same time explain the ‘tree analogue’ of the convention used above to avoid unwanted singletons arising as microsets, as follows. If there is some $n\in \mathbb{N}$ and $i\in \{1,\ldots,2^n-1\}$ such that $\frac{i}{2^n}\in F$ then we need to check whether $\left(\frac{i-1}{2^n}, \frac{i}{2^n}\right)\cap F = \emptyset$ or $\left(\frac{i}{2^n}, \frac{i+1}{2^n}\right)\cap F = \emptyset$. If both intersections are empty then without loss of generality we remove the vertex associated to $D_n(i)$ and keep the vertex associated to $D_n(i-1)$. If both are non-empty then we keep both vertices. Finally, and most importantly, if only one of the two intersections is empty then we remove the vertex associated with the dyadic interval forming the empty intersection. Thus we have a 1-1 relationship between a subcollection of tidy infinite subtrees of the full binary tree and compact subsets of $[0,1]$.
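The correspondence between dyadic covers and tree levels, $N_{2^{-n}}(F)=\#_n T(F)$, can be made concrete. Below is a small illustrative Python sketch (not from the paper) that counts vertices per level for a set specified by a rule saying which binary words survive. The hypothetical example rule keeps words whose digits in odd positions are $0$; the corresponding compact set is self-similar (two homotheties of ratio $1/4$) with all dimensions equal to $1/2$, and the level counts indeed grow like $2^{n/2}$.

```python
def level_counts(keep, depth):
    """#_n T(F) for n = 1..depth, where the tree keeps the dyadic
    interval coded by the binary word w whenever keep(w) is True."""
    level, counts = [()], []
    for _ in range(depth):
        level = [w + (b,) for w in level for b in (0, 1) if keep(w + (b,))]
        counts.append(len(level))
    return counts

# Hypothetical rule: digits at odd positions must be 0.
keep = lambda w: all(w[i] == 0 for i in range(1, len(w), 2))
print(level_counts(keep, 8))  # [2, 2, 4, 4, 8, 8, 16, 16]
```

Taking $\log_2$ of the counts and dividing by $n$ recovers the box dimension $1/2$ in the limit.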
Since we wish to somehow compare microsets and trees, we must also have a suitable notion of convergence of trees. Let $(T_i)$ be a sequence of binary trees with roots denoted by $a$. We say that $T_i$ converges if there exists a sequence of tidy binary trees $\{K_n\}_{n\in\mathbb{N}}$, with $K_n$ of height $n$, such that for all $n\in\mathbb{N}$ there exists an $I\in \mathbb{N}$ such that $$T_i(a,n) = K_n \quad \text{for all } i\ge I.$$ The limit $\lim_{i\to\infty} T_i=T$ is defined to be the binary tree with root $a$ and $T(a,n)=K_n$ for all $n$. Notice that if the above holds then it is necessary that $K_{n_1}$ is a subtree of $K_{n_2}$ for integers $n_1\leq n_2$.
For any sequence of binary trees with unbounded heights, there exists a convergent subsequence. To see this we note that if a tree has some number of vertices at some level then there are only finitely many configurations for the vertices on the next level. Therefore for any sequence of trees of unbounded height, there will always be at least one configuration of the first $n$ levels that repeats infinitely often for all $n\in \mathbb{N}$.
Let $T=T(F)$. For any $a\in V(T)$ and integer $n$, the subtree $T(a,n)$ corresponds to a finite approximation of a miniset $E$ of $F$, obtained by blowing a dyadic interval of length $2^{-h(a)}$ up to the unit interval. Writing $S(T')$ for the compact set associated to an infinite tidy tree $T'$, given a convergent sequence $T(a_i,n_i)$ with $h(a_i)\to\infty$ we can find a convergent sequence of minisets $E_i=S(T(a_i,\infty))$ such that the binary tree associated with the limit $E_{\infty}$ is precisely $\lim_{i\to\infty} T(a_i,n_i)$. To see that $E_i$ indeed converges to $S(\lim_{i\to\infty} T(a_i,n_i))$ we need only note that $n_i\to\infty$ and that the Hausdorff distance between $E_i$ and $S(\lim_{i\to\infty} T(a_i,n_i))$ is bounded above by $2^{-n_i}$.
Due to the construction of our tree, such microsets intersect the boundary of $X$ only if there is a genuine isolated point, in which case there exist microsets in the gallery containing the isolated point. Thus, without loss of generality, we may assume that the microsets obtained satisfy our extra condition.
If $F \subseteq [0,1]$ is a compact set and $\varepsilon > 0$, then there is a microset $E \in \mathcal{G}_F$ such that ${\overline{\dim}_{\mathrm{B}}}E \le {\dim_{\mathrm{L}}}F+\varepsilon$.
We associate a binary tree $T(F)$ to $F$. Such a tree $T(F)$ is tidy by construction. Write $s = {\dim_{\mathrm{L}}}F$ and observe that, for any $\varepsilon>0$, we can find tidy subtrees $T_i=T(a_i,n_i)$ of $T(F)$ with height $n_i$ and $\#_{n_i} T_i\leq 2^{n_i(s+\varepsilon)}$. Moreover, we can assume that $n_i\to\infty$ as $i\to\infty$. We can also assume that $h(a_i)\to\infty$. Indeed, if it is not possible to find such a sequence of $a_i$, then there is an integer $N_0$ such that $\#_{N} T(a, N)\geq 2^{(s+\varepsilon)N}$ whenever $N\geq N_0$ and $h(a)\geq N_0$. This implies that ${\dim_{\mathrm{L}}}F\geq s+\varepsilon$ which is not possible.
Let us show that, for any integer $m\geq 1$, we can find $T'_m=T(a'_m,n'_m)$ such that $a'_m\in\bigcup_{i\in\mathbb{N}} V(T_i)$ and $$\label{eq:subtree}
\#_n T'_m\leq 2^{(s+2\varepsilon)n}$$ for all $n\in [1,m]$. If we cannot find such a collection of subtrees, then all the trees $T_i$ are locally $(s+2\varepsilon,m)$-large for an integer $m$ which does not depend on $i$. By dropping finitely many $T_i$ we may assume that $h(T_i)\geq m$ for all $i$. Take a $T_i$ with height large compared to $m$; then by Lemma \[reg\] we see that $\#_{n_i}(T_i)\geq 2^{(s+2\varepsilon)(n_i-m)}$. So for all $i$ with large enough $n_i$, $$2^{(s+2\varepsilon)(n_i-m)}\leq 2^{(s+\varepsilon)n_i}.$$ Therefore we see that $$n_i\leq (s+2\varepsilon)\frac{m}{\varepsilon}.$$ This is a contradiction as $n_i$ can be arbitrarily large, and hence such subtrees exist.
Let $T'_m$ be a sequence of subtrees satisfying \[eq:subtree\]. By taking a subsequence of $T'_m$ if necessary we may assume that the sequence converges; the limit tree corresponds to a compact set $E\subseteq [0,1]$ with $T(E)=\lim_{m\to\infty} T'_m$. It is clear that $N_{2^{-n}} (E)=\#_n T(E)\leq 2^{(s+2\varepsilon)n}$ for all integers $n$, and therefore the upper box dimension (and so the Hausdorff dimension) of $E$ is at most $s+2\varepsilon$. Now $\sup_{m}h(a'_m)$ must be infinite because $h(a_i)\to\infty$ and there are only finitely many vertices in each $T_i=T(a_i,n_i)$. So we see that $E$ is a microset.
\[LowerHaus\] If $F\subseteq [0,1]$ is a compact set, then there is a microset $E \in \mathcal{G}_F$ such that ${\overline{\dim}_{\mathrm{B}}}E \le {\dim_{\mathrm{L}}}F$.
The above lemma says we can find a microset whose upper box dimension approximates the lower dimension of the original set arbitrarily well. To obtain the desired equality we will use a Cantor diagonal argument to find a sequence of minisets which converge to a set satisfying the equality. From the previous lemma we know that, given $\varepsilon > 0$, there exists a subsequence of $T'_{m,\varepsilon}$ which converges to $T(E_\varepsilon)$ with ${\overline{\dim}_{\mathrm{B}}}E_\varepsilon \le s + \varepsilon$, where $s={\dim_{\mathrm{L}}}F$. We actually have the following stronger inequality: for all $n\in [1,m]$, $$\#_n T'_{m,\varepsilon}\leq 2^{n(s+\varepsilon)}.$$
We construct an algorithm which will give us the desired sequence.

**Step 1**: Let $j=1$ and $n_1 = 1$.

**Step 2**: Consider the subsequence of trees $T'_{m,2^{-j}}$ which converges to $T(E_{2^{-j}})$. Let $T_{n_j,j} = T'_{n_j+k,2^{-j}}$ where $k\geq 0$ is the smallest integer such that $T'_{n_j+k,2^{-j}}$ is in the convergent subsequence.

**Step 3**: Set $n_{j+1} = n_j + k + 1$ and $j=j+1$, then repeat the previous step.

We thus obtain a sequence of strictly increasing integers $\left\{n_j \right\}$ and a sequence of trees $\left\{T_{n_j,j} \right\}$. There is therefore a subsequence which converges to the tree $T(E)$, and $E$ is such that $$N_{2^{-n}}(E) = \#_n T(E) \le 2^{(s+2^{-j})n}$$ for all $j$ and $n$. Hence ${\overline{\dim}_{\mathrm{B}}}E \leq s$ as required.
Obtainable Hausdorff dimensions in a gallery
============================================
In this section we prove Theorem \[Thmain2\]. Let $\Delta \subseteq [0,d]$ be an arbitrary $\mathcal{F}_\sigma$ set which contains its infimum and supremum, denoted by $\inf \Delta$ and $\sup \Delta$, respectively. Further assume that $\Delta$ is infinite; otherwise the proof is much simpler and we leave the details to the reader.
We use self-similar sets with particular dimensions as building blocks for $F$. We assume a working knowledge of self-similar sets and refer the unfamiliar reader to [@Fa Chapter 9]. In what follows, we assume that the convex hull of each self-similar set we consider is $[0,1]^d$. Let $Q_\infty \subseteq [0,1]^d$ be a self-similar set generated by equicontractive homotheties satisfying the open set condition which has Hausdorff dimension $\sup \Delta$.
\[constructionlemma\] For each $n \in \mathbb{N}$, there is a collection $\{ K(s,n) : s \in [0,\sup \Delta] \}$ of self-similar sets in $[0,1]^d$ satisfying the open set condition such that ${\dim_{\mathrm{H}}}K(s,n)=s$ and $$\label{1-over-n}
d_\mathcal{H}(K(s,n), Q_\infty) \leq \sqrt{d}/n$$ and, for a fixed $n$, $$\label{ensureclosed}
d_\mathcal{H}(K(s,n), K(t,n) ) \to 0$$ as $|s-t| \to 0$.
To see that such a collection of sets exists, fix $n\in \mathbb{N}$ and assume $Q_\infty$ is generated by the iterated function system (IFS) $\left\{f_{i}\right\}_{i=1}^a$ where each $f_i$ is a contracting homothety with common contraction ratio $c$. Define $k$ to be the smallest integer such that $c^k \le \frac{1}{n}$. Since $Q_\infty$ has the maximal dimension $\sup \Delta$, for any $s \in (0,\sup \Delta)$, the set $K(s,n)$ can be defined to be the self-similar set satisfying ${\dim_{\mathrm{H}}}K(s,n) = s$ and generated by $a^k$ homotheties $\{g_{n,j}^s\}_{j=1}^{a^k}$ of contraction ratio $a^{-k/s} < a^{-k/ \sup \Delta }= c^k$ such that for any $j\in\{1,\ldots,a^k\}$, the image of $[0,1]^d$ under $g_{n,j}^s$ lies in the corner of the image of $[0,1]^d$ under $f_{i_1} \circ f_{i_2} \circ \cdots \circ f_{i_k}$ for some $i_1,i_2,\ldots,i_k \in \{1,2,\ldots,a \}$. To do this in a canonical way it is enough to assume that $f_{i_1} \circ f_{i_2} \circ \cdots \circ f_{i_k}(0) =g_{n,j}^s(0)$. The set $K(0,n)$ can be defined similarly where we allow contraction ratio $0$ and the set $K(\sup \Delta, n)$ is simply defined to be $Q_\infty$. This guarantees that $$d_\mathcal{H}(K(s,n), Q_\infty) \leq \sqrt{d} c^k \leq \sqrt{d}/n$$
which shows \[1-over-n\]. Also, since the images $g_{n,j}^s([0,1]^d)$ are placed in the same corner for each $s$, we also get, for fixed $n$ (and therefore fixed $k$) and for $s>t$, $$d_\mathcal{H}(K(s,n), K(t,n) ) \leq \sqrt{d} (a^{-k/s } - a^{-k/t}) \to 0$$ as $|s-t| \to 0$. This proves \[ensureclosed\].
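The dimension bookkeeping in this proof can be illustrated numerically. The sketch below (not part of the paper; the parameters `a`, `c`, `n` and the test values of `s` are arbitrary illustrative choices) checks that the contraction ratio $a^{-k/s}$ yields a self-similar set of dimension exactly $s$ whose generating pieces fit inside the level-$k$ pieces of $Q_\infty$, and that the Hausdorff-distance bound $\sqrt{d}\,c^k\le\sqrt{d}/n$ holds.

```python
from math import log, sqrt, ceil

# Q_infty: a equicontractive homotheties of ratio c, so dim = log a / log(1/c).
a, c = 4, 1/3                      # illustrative choices (dim ~ 1.26)
sup_dim = log(a) / log(1/c)
d, n = 2, 10
k = ceil(log(n) / log(1/c))        # smallest k with c^k <= 1/n
assert c**k <= 1/n < c**(k-1)

for s in [0.3, 0.8, 1.2]:
    assert s < sup_dim
    r = a**(-k/s)                  # common ratio of the a^k maps generating K(s,n)
    # a^k equicontractive maps of ratio r give Hausdorff dimension log(a^k)/log(1/r)
    assert abs(log(a**k) / log(1/r) - s) < 1e-9
    assert r < c**k                # each piece fits inside a level-k piece of Q_infty
    assert sqrt(d) * c**k <= sqrt(d) / n   # the Hausdorff-distance bound
```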
Since $\Delta$ is $\mathcal{F}_\sigma$ we may write it as $\Delta= \bigcup_n \Delta_n$ where each $\Delta_n$ is closed. Let $$\Omega_n = \{ K(s,n) : s \in \Delta_n \} \subset \mathcal{K}([0,1]^d)$$ and note that each $\Omega_n$ is closed by \[ensureclosed\] and the fact that each $\Delta_n$ is closed. Since $\mathcal{K}([0,1]^d)$ is separable in the Hausdorff metric, for each $n$ we can find a countable subset $\Omega_{n,0} \subseteq \Omega_n$ such that $\overline{\Omega_{n,0}} = \Omega_n$. Let $\Omega^0 = \{ Q_1, Q_2, Q_3, \dots\}$ be an enumeration of $\bigcup_n\Omega_{n,0}$.
\[closed!\] We have $$\overline{\Omega^0} = \{Q_\infty\} \cup \bigcup_n \Omega_n.$$
First note that $\overline{\Omega^0} \supseteq \bigcup_n \overline{\Omega_{n,0}} = \bigcup_n \Omega_n$ and $ Q_\infty \in \overline{\Omega^0} $ by construction (recall \[1-over-n\]) and so one direction is obvious. The other direction is more difficult, but follows from the way we defined the sets $K(s,n)$. It suffices to argue that $\{Q_\infty\} \cup \bigcup_n \Omega_n$ is a closed set. Let $K(s_i, n_i) \in \bigcup_n \Omega_n$ be a convergent sequence of sets, where for each $i$, $K(s_i, n_i) \in \Omega_{n_i}$. By taking a subsequence if necessary we may assume that either $n_i \to \infty$, in which case $K(s_i, n_i) \to Q_\infty$ by \[1-over-n\], or $n_i=n$ is constant. In this second case, we may take a further subsequence (using compactness of $\Delta_n$) where $s_i \to s \in \Delta_n$. Therefore $K(s_i, n_i) \to K(s,n) \in \Omega_n$ by \[ensureclosed\]. This proves the claim.
We are now ready to build $F$, but there are two slightly different cases depending on whether or not $\inf \Delta = 0$. The basic idea is to arrange shrinking copies of the sets $Q_i$ in such a way that a given miniset only sees a significant proportion of one of the sets $Q_i$, thus making the microsets easier to understand. Using Lemma \[closed!\] we then argue that microsets generated this way are essentially restricted to $\overline{\Omega^0}$ plus another set $Q_0$ which has dimension $\inf \Delta$.
First, suppose that $\inf \Delta > 0$ and let $Q_0 \subseteq [0,1]^d$ be a self-similar set generated by homotheties and satisfying the strong separation condition which has Hausdorff dimension $\inf \Delta$. We now construct $F$ based on the structure of $Q_0$, see Figure \[fig:construction-of-F\] for an illustration. Suppose $Q_0$ is generated by $b\geq 2$ similarity maps $\left\{h_{u}\right\}_{u=1}^b$ with common contraction ratio $c_0$ and let $\mathcal{I}=\{1,\ldots, b\}$. Let $(\alpha_i)_{i\in \mathbb{N}}$ be a sequence of distinct integers which increases superexponentially, and set $\alpha_0=0$. Let $$\begin{aligned}
\mathcal{I}_{\alpha_i}=\{(u_1,\ldots,u_{\alpha_i})\in \mathcal{I}^{\alpha_i} : \;&(u_1,\ldots,u_{\alpha_{i-1}}) = (1,\ldots,1) \text{ and} \\ &(u_{\alpha_{i-1}+1},\ldots,u_{\alpha_i}) \ne (1,\ldots,1) \}.
\end{aligned}$$ Then for all $i\in \mathbb{N}$ define $$Q^*_i= \bigcup_{(u_1,\ldots,u_{\alpha_i})\in \mathcal{I}_{\alpha_i}}h_{u_1}\circ h_{u_2}\circ \cdots \circ h_{u_{\alpha_i}}(Q_i)$$ and let $$F = \overline{\bigcup_{i \in \mathbb{N}} Q_{i}^*}.$$
*(Figure \[fig:construction-of-F\]: a schematic of the construction of $F$, showing scaled copies of $Q_1$ and $Q_2$ placed inside successive construction levels of $Q_0$.)*
We claim that $F$ has the desired properties. Since each $Q_i$ is a miniset of $F$, we clearly have $$\overline{\Omega^0} \subseteq \mathcal{G}_F.$$ Furthermore, since $d_{\mathcal{H}}(h_1^{-1} \circ \cdots \circ h_1^{-1}(F) \cap [0,1]^d, Q_0) \le c_0^{\alpha_{i+1}-\alpha_i}$ where $h_1^{-1}$ is composed $\alpha_i$ times, we see that $Q_0 \in \mathcal{G}_F$. Therefore, by Lemma \[closed!\], $$\Delta = \{ {\dim_{\mathrm{H}}}E : E \in \{Q_0\} \cup \overline{\Omega^0} \} \subseteq \{ {\dim_{\mathrm{H}}}E : E \in \mathcal{G}_F \}.$$ It remains to show that we do not get any ‘unwanted’ microsets appearing whose Hausdorff dimension is outside of $\Delta$. Let $E \in \mathcal{G}_F$; then we can find cubes $J_k$ and vectors $$t_k=\left(\min_{(x_1,\ldots,x_d)\in J_k}x_1,\ldots,\min_{(x_1,\ldots,x_d)\in J_k}x_d\right)$$ such that $$\frac{\sqrt{d}}{\text{diam }J_k}(F \cap J_k - t_k) \to E.$$ If for all large enough $k$ we have that $J_k$ intersects more than one of the sets $Q_{i}^*$, then for all but at most one of the sets $Q_{i}^*$, the constituent pieces $h_{u_1}\circ h_{u_2}\circ \cdots \circ h_{u_{\alpha_i}}(Q_i)$ become arbitrarily small compared with the diameter of $J_k$ and for such $Q_i^*$ the portion intersecting $J_k$ will get arbitrarily close to a subset of $Q_0$. Therefore we may assume that $J_k$ intersects only one of the sets $Q_i^*$ for all large enough $k$, since the contribution from other sets either approaches a subset of $Q_0$, a singleton, or disappears completely.
Either the set of $i$ such that $Q_i^*$ intersects $J_k$ for some $k$ is bounded, in which case $E$ is a microset of one of the sets $Q_i$, and therefore ${\dim_{\mathrm{H}}}E \in \Delta$, or the set of $i$ such that $Q_i^*$ intersects $J_k$ for some $k$ is unbounded, in which case $E$ is either a microset of $Q_0$ or a microset of a set from $\overline{\Omega^0}$, depending on how large the constituent pieces of $Q_i^*$ are with respect to each $J_k$. Lemma \[closed!\] implies that in all cases ${\dim_{\mathrm{H}}}E \in \Delta$. This completes the proof in the case $\inf \Delta>0$.
The proof in the case $\inf \Delta=0$ is similar, but actually more straightforward, and so we only sketch the idea. We let $$Q_i^* = 2^{-i^i} Q_i +(2^{-i}, 0, \dots, 0)$$ and $$F =\overline{ \bigcup_{i \in \mathbb{N}} Q_i^*} ,$$ that is we scale the sets $Q_i$ by a superexponential factor and then arrange them in an exponentially decreasing sequence accumulating at 0. Arguing as above, a microset of $F$ is either a microset of the set $\{0\} \cup \{2^{-i} : i \in \mathbb{N}\}$ (which plays the role of $Q_0$ above) or a microset of a set from $\overline{\Omega^0}$.
Small microsets
===============
In this section, we will prove Theorem \[Sing\]. If $F$ has a genuine microset of zero Hausdorff dimension then it is clear that $F$ has zero lower dimension. Therefore we just need to show the other direction; this will be done by proving the contrapositive. Let $F$ be a compact subset of $\mathbb{R}^d$ such that the gallery of $F$ contains only microsets of cardinality at least two. To prove our result we just need to show ${\dim_{\mathrm{L}}}F >0$.
Let $k> 1$ be an integer. In what follows cubes are assumed to be oriented with the coordinate axes. We say that $F$ satisfies property $P(k)$ if for every $x\in F$ and $R\in (0,1)$ the following statement is satisfied:
- If $Q(x,R)$ is the closed cube centred at $x$ with side length $R$, then there exist two cubes with disjoint interiors and with centres in $F \cap Q(x,R)$ and side lengths $2^{-k}R$.
If $F$ fails property $P(k)$ for all integers $k$ then for all $k$ there is a cube $Q_k$ such that $Q_k\cap F$ can be covered by one cube of side length $2^{-k}$ times that of $Q_k$. It follows that a subsequence of $T_k(F) \cap [0,1]^d$ converges in the Hausdorff metric to a singleton as $k\to\infty$, where $T_k$ is the unique homothety mapping $Q_k$ to $[0,1]^d$. Therefore by our assumption we see that $F$ satisfies $P(k)$ for some integer $k>1$, which we fix from now on.
Let $x\in F$ be arbitrarily chosen and fix $0<r<R \leq 1$. Consider $Q(x,R)$: since $F$ satisfies $P(k)$, there exist two cubes with disjoint interiors, with centres in $F \cap Q(x,R)$ and side lengths $2^{-k}R$. Repeat the argument inside each of these cubes and then inside each of the four cubes at the next level and so on. Run this argument $m$ times, where $m$ is chosen to be the largest integer such that $2^{-km}R > r$. It follows that there are $$2^m \geq 2^{-1} \left( \frac{R}{r} \right)^{1/k}$$ disjoint cubes of side length at least $r$ contained in $Q(x,2R)$. It follows that ${\dim_{\mathrm{L}}}F\geq 1/k>0$, as desired.
Further remarks and problems
============================
Dimensions which need not appear as dimensions of microsets {#dimappear}
-----------------------------------------------------------
Here we elaborate on the following question: given a compact set $F \subset \mathbb{R}^d$, which dimensions of $F$ necessarily appear as the Hausdorff dimension of a microset of $F$ with unbounded scaling sequence? The answer is: the lower and Assouad dimensions necessarily do, but the Hausdorff, packing and upper and lower box dimensions need not. Note that it is vital to include the requirement that the scaling sequences are unbounded as otherwise the set itself appears as a microset and the question is trivial. Concluding that the upper and lower box dimensions do not necessarily appear even in the closure of $\{ {\dim_{\mathrm{H}}}E : E \in \mathcal{G}_F \text{ has unbounded scaling sequence} \}$ is straightforward. For example the set $\{1/n : n\in \mathbb{N} \}$ has box dimension $1/2$, but all its microsets with unbounded scaling sequence are either an interval or a singleton.
Concluding that the Hausdorff and packing dimensions do not necessarily appear as Hausdorff dimensions of microsets with unbounded scaling sequence is a little more subtle and relies on our proof of Theorem \[Thmain2\]. Let $\Delta = \{1, 1/2-1/(n+1) : n = 1, 2, \dots\}$, which is clearly $\mathcal{F}_\sigma$, and let $F\subset \mathbb{R}$ be the set constructed in the proof of Theorem \[Thmain2\] given this $\Delta$. Note that we may assume $\Omega_0 = \{Q_1,Q_2, \dots\}$ where $Q_n$ has dimension $1/2-1/(n+1)$ and importantly $Q_\infty \notin \Omega_0$. It follows that ${\dim_{\mathrm{H}}}F ={\dim_{\mathrm{P}}}F = \sup_n {\dim_{\mathrm{H}}}Q_n = 1/2 \notin \Delta$ as required. The important point is that the $Q_n$ are chosen such that ${\dim_{\mathrm{H}}}Q_n \to 1/2$, but $Q_n \to [0,1]$ in the Hausdorff metric.
Note that in the above example, the Hausdorff and packing dimensions do appear as accumulation points of the set of Hausdorff dimensions of microsets and so we pose the following question.
Is it true that if $F \subset \mathbb{R}^d$ is compact, then ${\dim_{\mathrm{H}}}F$ appears in the closure of $\{ {\dim_{\mathrm{H}}}E : E \in \mathcal{G}_F \text{ has unbounded scaling sequence} \}$?
Hausdorff measures of microsets
-------------------------------
Let us first recall that it is possible to obtain the following slightly stronger version of Theorem \[MicroAssouad\]; see [@F2 Theorem 1.3].
Let $F$ be a compact set; then $${\dim_{\mathrm{A}}}F=\max \{s\geq 0 : \text{there exists } E\in\mathcal{G}_F \text{ with } \mathcal{H}^s(E)>0\}.$$
Thus we can find a microset of $F$ whose $s$-Hausdorff measure is positive, where $s$ is the Assouad dimension of $F$. It is very natural to ask whether the following dual result for the lower dimension holds.
Is it true that if $F$ is compact, then $${\dim_{\mathrm{L}}}F=\min \{s\geq 0 : \text{there exists } E\in\mathcal{G}_F \text{ with } \mathcal{H}^s(E)<\infty\}?$$
Set theoretic complexity of $\{ {\dim_{\mathrm{H}}}E : E \in \mathcal{G}_F \} $
-------------------------------------------------------------------------------
We proved that the set of dimensions attained by a gallery can be surprisingly complicated: it can be any $\mathcal{F}_\sigma$ set containing its infimum and supremum, despite the gallery itself being a closed subset of $\mathcal{K}([0,1]^d)$. However, we are unaware if the set of attained dimensions can be any more complicated than $\mathcal{F}_\sigma$. We remark that $$\Big\{ \{ {\dim_{\mathrm{H}}}E : E \in \mathcal{G}_F \} \ : \ \text{$F \subseteq [0,1]^d$ compact }\Big\}$$ has cardinality at most that of the continuum, since we have a natural surjection from the set of compact sets $F \subseteq [0,1]^d$ onto this set. In particular, since there are strictly more subsets of $[0,d]$ which contain their infimum and supremum, there must be such sets which cannot be obtained as the set of dimensions attained by a gallery.
If $F \subseteq [0,1]^d$ is compact, then is $\{ {\dim_{\mathrm{H}}}E : E \in \mathcal{G}_F \} \subseteq [0,d]$ an $\mathcal{F}_\sigma$ set? If not, does it belong to a finite Borel class?
Acknowledgments {#acknowledgments .unnumbered}
===============
JMF was financially supported by a Leverhulme Trust Research Fellowship (RF-2016-500) and an EPSRC Standard Grant (EP/R015104/1). DCH was financially supported by an EPSRC Doctoral Training Grant (EP/N509759/1). AK was financially supported by the Finnish Center of Excellence in Analysis and Dynamics Research, the Finnish Academy of Science and Letters, and the Väisälä Foundation. HY was financially supported by the University of St Andrews. This research project began while all four authors were in attendance at the Institut Mittag-Leffler during the 2017 semester programme *Fractal Geometry and Dynamics*.
The authors thank Tamas Keleti and Yuval Peres for pointing out an error in an earlier version of this paper and Richard Balka and Michael Hochman for helpful discussions on the topic.
CH. Chen and E. Rossi, *Locally rich compact sets*, Illinois J. Math., **58**(3), (2014), 779–806.
K. J. Falconer, *Fractal geometry: Mathematical foundations and applications*, 3rd Ed., Wiley, (2014).
A. Ferguson, J. M. Fraser and T. Sahlsten, *Scaling scenery of $(\times m,\times n)$ invariant measures*, Adv. Math., **268**, (2015), 564–602.
J. M. Fraser, *Assouad type dimensions and homogeneity of fractals*, Trans. Amer. Math. Soc., **366**, (2014), 6687–6733.
J. M. Fraser, *Distance sets, orthogonal projections, and passing to weak tangents*, Israel J. Math., **226**, (2018), 851–875.
J. M. Fraser and H. Yu, *Arithmetic patches, weak tangents, and dimension*, Bull. Lond. Math. Soc. **50**, (2018), 85–95.
H. Furstenberg, *Ergodic fractal measures and dimension conservation*, Ergodic Theory Dynam. Systems, **28**, (2008), 405–422.
M. Hochman, *Dynamics on fractals and fractal distributions*, Preprint, available at arXiv:1008.3731
M. Hochman and P. Shmerkin, *Local entropy averages and projections of fractal measures*, Ann. of Math. (2), **175**, (2012), 1001–1059.
A. K[ä]{}enm[ä]{}ki, T. Ojala, and E. Rossi, *Rigidity of quasisymmetric mappings on self-affine carpets*, Int. Math. Res. Not. IMRN, **12**, (2018), 3769–3799.
A. K[ä]{}enm[ä]{}ki and E. Rossi, *Weak separation condition, Assouad dimension, and Furstenberg homogeneity*, Ann. Acad. Sci. Fenn. Math., **41**, (2016), 465–490.
A. K[ä]{}enm[ä]{}ki, T. Sahlsten and P. Shmerkin, *Dynamics of the scenery flow and geometry of measures*, Proc. Lond. Math. Soc., **110**, (2015), 1248–1280.
J. M. Mackay, *Assouad dimension of self-affine carpets*, Conform. Geom. Dyn., [**15**]{}, (2011), 177–187.
J. Mackay and J. Tyson, *Conformal dimension. Theory and application*, University Lecture Series, 54. American Mathematical Society, Providence, RI, (2010).
P. Mattila, *Geometry of sets and measures in Euclidean spaces: Fractals and rectifiability*, Cambridge Studies in Advanced Mathematics, Cambridge University Press, (1995).
---
abstract: 'In this paper, we derive formulas for the translated Whitney-Lah numbers and show that they are generalizations of already-existing identities of the classical Lah numbers. $q$-analogues of the said formulas are also obtained for the case of the translated $q$-Whitney numbers.'
---
Mahid M. Mangontarum\
Department of Mathematics\
Mindanao State University-Main Campus\
Marawi City 9700\
Philippines\
<mmangontarum@yahoo.com>\
<mangontarum.mahid@msumain.edu.ph>
Introduction
============
The (unsigned) Lah numbers, denoted by $L(n,k)$, count the number of partitions of a set $X$ with $n$ elements into $k$ nonempty linearly ordered subsets. These numbers are known to satisfy the following basic combinatorial properties:
- explicit formula $$L(n,k)=\frac{n!}{k!}\binom{n-1}{k-1};\label{lah1}$$
- recurrence relation $$L(n+1,k)=L(n,k-1)+(n+k)L(n,k);\label{lah2}$$
- exponential generating function $$\sum_{n=0}^{\infty}L(n,k)\frac{t^n}{n!}=\frac{1}{k!}\left(\frac{t}{1-t}\right)^k.\label{lah3}$$
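As a quick numerical check (not part of the original paper), the following Python sketch implements the explicit formula above and verifies the recurrence relation against it for small $n$:

```python
from math import comb, factorial

def lah(n, k):
    """Unsigned Lah number via the explicit formula n!/k! * C(n-1, k-1)."""
    if n == 0 and k == 0:
        return 1
    if k == 0 or k > n:
        return 0
    return factorial(n) // factorial(k) * comb(n - 1, k - 1)

# Recurrence check: L(n+1, k) = L(n, k-1) + (n + k) L(n, k)
for n in range(1, 12):
    for k in range(1, n + 2):
        assert lah(n + 1, k) == lah(n, k - 1) + (n + k) * lah(n, k)
```

For example, `lah(4, 2) == 36`: the set $\{1,2,3,4\}$ can be split into two nonempty ordered lists in $36$ ways.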
The numbers $L(n,k)$ are also known to be coefficients of rising factorials in terms of falling factorials. That is, $$\left\langle t\right\rangle_n=\sum_{k=0}^nL(n,k)(t)_k,\label{LahDef}$$ where $$\left\langle t\right\rangle_n=t(t+1)(t+2)\cdots(t+n-1)$$ is the rising factorial of $t$ of order $n$ and $$(t)_k=t(t-1)(t-2)\cdots(t-k+1)$$ is the falling factorial of $t$ of order $k$ with $\left\langle t\right\rangle_0=(t)_0=1$ and $(-t)_n=(-1)^n\left\langle t\right\rangle_n$. The Lah numbers $L(n,k)$ are actually closely related to the well-known Stirling numbers. To illustrate, we first recall that the Stirling numbers of the first and second kinds, denoted by ${\genfrac[]{0pt}{}{n}{k}}$ and ${\genfrac\{\}{0pt}{}{n}{k}}$, respectively, are defined as coefficients in the expansions of the relations $$(t)_n=\sum_{k=0}^n(-1)^{n-k}{\genfrac[]{0pt}{}{n}{k}}t^k\label{sn1}$$ and $$t^n=\sum_{k=0}^n{\genfrac\{\}{0pt}{}{n}{k}}(t)_k.\label{sn2}$$ Notice that putting $-t$ in place of $t$ in \[sn1\] yields $$\left\langle t\right\rangle_n=\sum_{k=0}^n{\genfrac[]{0pt}{}{n}{k}}t^k.\label{sn1.1}$$ By substituting \[sn2\] into the right-hand side of \[sn1.1\], $$\begin{aligned}
\left\langle t\right\rangle_n&=&\sum_{k=0}^n{\genfrac[]{0pt}{}{n}{k}}\sum_{j=0}^k{\genfrac\{\}{0pt}{}{k}{j}}(t)_j\\
&=&\sum_{j=0}^n\left(\sum_{k=j}^n{\genfrac[]{0pt}{}{n}{k}}{\genfrac\{\}{0pt}{}{k}{j}}\right)(t)_j.\end{aligned}$$ Combining this with \[LahDef\] and comparing the coefficients of $(t)_j$, we are able to write $$L(n,k)=\sum_{j=k}^n{\genfrac[]{0pt}{}{n}{j}}{\genfrac\{\}{0pt}{}{j}{k}}.\label{lahsS}$$ It is important to note that here, the numbers ${\genfrac[]{0pt}{}{n}{k}}$ particularly refer to the “unsigned” Stirling numbers of the first kind which count the number of permutations of the $n$-element set $X$ into $k$ disjoint cycles. Similarly, the Stirling numbers of the second kind ${\genfrac\{\}{0pt}{}{n}{k}}$ can be combinatorially interpreted as the number of partitions of $X$ into $k$ nonempty blocks. With this, the Bell numbers $B_n$ are defined as the total number of partitions of the $n$-element set $X$. That is, $$B_n=\sum_{k=0}^n{\genfrac\{\}{0pt}{}{n}{k}}.$$ The paper of Petkovšek and Pisanski [@Pet], and the books of Comtet [@Comt] and Chen and Kho [@Chen] contain detailed discussions on the Lah, Stirling and Bell numbers, especially their respective combinatorial properties and interpretations. In addition to these, Qi [@Qi] recently obtained an explicit formula for the Bell numbers expressed in terms of both the Lah numbers and the Stirling numbers of the second kind, viz. $$B_n=\sum_{k=1}^n(-1)^{n-k}\left(\sum_{\ell=1}^kL(k,\ell)\right){\genfrac\{\}{0pt}{}{n}{k}}.\label{QiF}$$
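The relations between the Lah, Stirling, and Bell numbers can likewise be verified numerically. This sketch (illustrative, not from the paper) computes both kinds of Stirling numbers from their standard recurrences and checks the product formula for $L(n,k)$ together with Qi's formula for $B_n$:

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def s1(n, k):  # unsigned Stirling numbers of the first kind
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return s1(n - 1, k - 1) + (n - 1) * s1(n - 1, k)

@lru_cache(maxsize=None)
def s2(n, k):  # Stirling numbers of the second kind
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return s2(n - 1, k - 1) + k * s2(n - 1, k)

def lah(n, k):
    if n == 0 and k == 0:
        return 1
    if k == 0 or k > n:
        return 0
    return factorial(n) // factorial(k) * comb(n - 1, k - 1)

# L(n, k) = sum_j s1(n, j) s2(j, k)
for n in range(8):
    for k in range(n + 1):
        assert lah(n, k) == sum(s1(n, j) * s2(j, k) for j in range(k, n + 1))

# Qi's formula: B_n = sum_k (-1)^(n-k) (sum_l L(k, l)) S(n, k)
for n in range(1, 8):
    bell = sum(s2(n, k) for k in range(n + 1))
    qi = sum((-1) ** (n - k) * sum(lah(k, l) for l in range(1, k + 1)) * s2(n, k)
             for k in range(1, n + 1))
    assert bell == qi
```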
The results of this paper are organized as follows. In Section \[sec1\], we present the translated Whitney numbers and derive some formulas which generalize already-existing identities for the classical Lah numbers, including one that will generalize . In Section \[sec2\], we establish the $q$-analogues of some of the results in Section \[sec1\] using as framework the translated $q$-Whitney numbers.
Translated Whitney numbers {#sec1}
==========================
In 2013, Belbachir and Bousbaa [@Bel] introduced the translated Whitney numbers using a combinatorial approach which involves “mutations” of some elements. To be more precise, the translated Whitney numbers of the first kind, denoted by $\widetilde{w}_{(\alpha)}(n,k)$, were defined as the number of permutations of $n$ elements with $k$ cycles such that the elements of each cycle can mutate in $\alpha$ ways, except the dominant one, while the translated Whitney numbers of the second kind, denoted by $\widetilde{W}_{(\alpha)}(n,k)$, were defined as the number of partitions of an $n$-element set into $k$ subsets such that the elements of each subset can mutate in $\alpha$ ways, except the dominant one. These numbers were shown to satisfy the recurrence relations [@Bel Theorems 2 and 8] $$\widetilde{w}_{(\alpha)}(n,k)=\widetilde{w}_{(\alpha)}(n-1,k-1)+\alpha(n-1)\widetilde{w}_{(\alpha)}(n-1,k)$$ and $$\widetilde{W}_{(\alpha)}(n,k)=\widetilde{W}_{(\alpha)}(n-1,k-1)+\alpha k\widetilde{W}_{(\alpha)}(n-1,k),$$ and the horizontal generating functions [@Bel Theorems 4 and 10] $$(t|-\alpha)_n=\sum_{k=0}^n\widetilde{w}_{(\alpha)}(n,k)t^k\label{wHGF1}$$ and $$t^n=\sum_{k=0}^n\widetilde{W}_{(\alpha)}(n,k)(t|\alpha)_k.\label{wHGF2}$$ Here, we used $(t|\alpha)_n$ to denote the generalized factorial of $t$ of increment $\alpha$ defined by $$(t|\alpha)_n=\prod_{i=0}^{n-1}(t-i\alpha),\ (t|\alpha)_0=1.$$ In the same paper, Belbachir and Bousbaa [@Bel] also defined the translated Whitney-Lah numbers, denoted by $\widehat{w}_{(\alpha)}(n,k)$, as the number of ways to distribute the set $\{1,2,\ldots,n\}$ into $k$ ordered lists such that the elements of each list can mutate in $\alpha$ ways, except the dominant one.
The values of the numbers $\widehat{w}_{(\alpha)}(n,k)$ can be computed using the recurrence relation [@Bel Theorem 13] $$\widehat{w}_{(\alpha)}(n,k)=\widehat{w}_{(\alpha)}(n-1,k-1)+\alpha(n+k-1)\widehat{w}_{(\alpha)}(n-1,k)\label{recwl}$$ and are generated using [@Bel Corollary 15] $$(t|-\alpha)_n=\sum_{k=0}^n\widehat{w}_{(\alpha)}(n,k)(t|\alpha)_k.\label{twlHGF}$$ Similar to what is seen in \[lahsS\], the translated Whitney-Lah numbers may be expressed as a sum of products of $\widetilde{w}_{(\alpha)}(n,k)$ and $\widetilde{W}_{(\alpha)}(n,k)$ as follows [@Bel Corollary 14]: $$\widehat{w}_{(\alpha)}(n,j)=\sum_{k=j}^n\widetilde{w}_{(\alpha)}(n,k)\widetilde{W}_{(\alpha)}(k,j).\label{wlahwW}$$ It appears that the translated Whitney and Whitney-Lah numbers are generalizations of the Stirling and Lah numbers, respectively. This may be verified by simply setting $\alpha=1$ in the defining relations of the former.
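As a sanity check on the recurrences quoted above (not part of the paper; `ALPHA` is an arbitrary illustrative value), the following sketch verifies that the translated Whitney-Lah numbers factor through the two kinds of translated Whitney numbers:

```python
from functools import lru_cache

ALPHA = 3  # illustrative weight; any positive integer works here

@lru_cache(maxsize=None)
def w1(n, k):  # translated Whitney numbers of the first kind (unsigned)
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return w1(n - 1, k - 1) + ALPHA * (n - 1) * w1(n - 1, k)

@lru_cache(maxsize=None)
def w2(n, k):  # translated Whitney numbers of the second kind
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return w2(n - 1, k - 1) + ALPHA * k * w2(n - 1, k)

@lru_cache(maxsize=None)
def wlah(n, k):  # translated Whitney-Lah numbers via the recurrence
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return wlah(n - 1, k - 1) + ALPHA * (n + k - 1) * wlah(n - 1, k)

# \widehat{w}(n, j) = sum_k w1(n, k) w2(k, j)
for n in range(9):
    for j in range(n + 1):
        assert wlah(n, j) == sum(w1(n, k) * w2(k, j) for k in range(j, n + 1))
```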
Recently, Mansour et al. [@Mansour] defined the recurrence relation $$u(n,k)=u(n-1,k-1)+(a_{n-1}+b_k)u(n-1,k)$$ for two sequences $(a_i)_{i\geq 0}$ and $(b_i)_{i\geq 0}$ with boundary conditions given by $u(n,0)=\prod_{i=0}^{n-1}(a_i+b_0)$ and $u(0,k)=\delta_{0,k}$, where $$\delta_{i,j}=\left\{
\begin{array}{ll}
0, & \text{if } i\neq j, \\
1, & \text{if } i=j
\end{array}\right.$$ is the Kronecker delta. Notice that if $a_{n-1}=\alpha(n-1)$ and $b_k=\alpha k$, the above recurrence relation becomes \[recwl\]. Hence, for $a_i=\alpha i$ and $b_j=\alpha j$, the formula [@Mansour Theorem 1.1] $$u(n,k)=\sum_{j=0}^k\left(\frac{\prod_{i=0}^{n-1}(b_j+a_i)}{\prod_{i=0,i\neq j}^{k}(b_j-b_i)}\right)$$ can be utilized to obtain an explicit formula for $\widehat{w}_{(\alpha)}(n,k)$, given in the next theorem.
The translated Whitney-Lah numbers satisfy the following explicit formula: $$\widehat{w}_{(\alpha)}(n,k)=\frac{\alpha^{n-k}}{k!}\sum_{j=0}^k(-1)^{k-j}\binom{k}{j}\left\langle j\right\rangle_n.\label{r1}$$
This theorem enables us to write the numbers $\widehat{w}_{(\alpha)}(n,k)$ in a closed form similar to \[lah1\]; this is made explicit in the proof of the succeeding corollary.
The translated Whitney-Lah numbers satisfy the following relation: $$\widehat{w}_{(\alpha)}(n,k)=\alpha^{n-k}L(n,k).\label{r2}$$
Since $\left\langle j\right\rangle_n=(j+n-1)_n$, then $$\begin{aligned}
\widehat{w}_{(\alpha)}(n,k)&=&\frac{\alpha^{n-k}}{k!}\sum_{j=0}^k(-1)^{k-j}\binom{k}{j}(j+n-1)_n\\
&=&\alpha^{n-k}\frac{n!}{k!}\sum_{j=0}^k(-1)^{k-j}\binom{k}{j}\binom{j+n-1}{n}.\end{aligned}$$ From [@Graham Identity 5.24], it is known that the binomial coefficients satisfy the following useful identity: $$\sum_{j}\binom{\ell}{m+j}\binom{s+j}{n}(-1)^j=(-1)^{\ell+m}\binom{s-m}{n-\ell}.\label{graham}$$ Hence, with $m=0$, $\ell=k$ and $s=n-1$, we obtain $$\widehat{w}_{(\alpha)}(n,k)=\alpha^{n-k}\frac{n!}{k!}\binom{n-1}{n-k}.\label{r2.1}$$ This completes the proof.
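The explicit formula and the closed form just proved can be checked against the defining recurrence; the sketch below (illustrative `ALPHA`, not part of the paper) does so for small $n$:

```python
from functools import lru_cache
from math import comb, factorial

ALPHA = 3  # illustrative weight

@lru_cache(maxsize=None)
def wlah(n, k):  # recurrence for the translated Whitney-Lah numbers
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return wlah(n - 1, k - 1) + ALPHA * (n + k - 1) * wlah(n - 1, k)

def rising(j, n):  # <j>_n = j (j+1) ... (j+n-1)
    out = 1
    for i in range(n):
        out *= j + i
    return out

for n in range(1, 9):
    for k in range(1, n + 1):
        # Explicit formula: (alpha^(n-k)/k!) sum_j (-1)^(k-j) C(k,j) <j>_n
        s = sum((-1) ** (k - j) * comb(k, j) * rising(j, n) for j in range(k + 1))
        assert wlah(n, k) * factorial(k) == ALPHA ** (n - k) * s
        # Closed form: alpha^(n-k) L(n, k)
        L = factorial(n) // factorial(k) * comb(n - 1, k - 1)
        assert wlah(n, k) == ALPHA ** (n - k) * L
```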
The translated Whitney-Lah numbers satisfy the following exponential generating function: $$\sum_{n=k}^{\infty}\widehat{w}_{(\alpha)}(n,k)\frac{t^n}{n!}=\frac{1}{k!}\left(\frac{t}{1-\alpha t}\right)^k.\label{r3}$$
Applying \[r1\] and the binomial and negative binomial expansions, $$\begin{aligned}
\sum_{n=k}^{\infty}\widehat{w}_{(\alpha)}(n,k)\frac{t^n}{n!}&=&\frac{1}{\alpha^kk!}\sum_{j=0}^k(-1)^{k-j}\binom{k}{j}\sum_{n=k}^{\infty}(\alpha t)^n\binom{j+n-1}{n}\\
&=&\frac{1}{\alpha^kk!}\sum_{j=0}^k(-1)^{k-j}\binom{k}{j}(1-\alpha t)^{-j}\\
&=&\frac{1}{\alpha^kk!}\left[(1-\alpha t)^{-1}-1\right]^k\\
&=&\frac{1}{k!}\left(\frac{t}{1-\alpha t}\right)^k.\end{aligned}$$
Clearly, the results shown in the previous corollaries generalize identities \[lah1\] and \[lah3\] for the classical Lah numbers when $\alpha=1$. The binomial identity in \[graham\] can also be utilized to derive another interesting formula for the translated Whitney-Lah numbers. By setting $s=n$, $\ell=k-1$ and $m=-1$, $$\sum_{j=1}^k\binom{k-1}{j-1}\binom{n+j}{n}(-1)^j=(-1)^{k-2}\binom{n+1}{n-k+1}.$$ Multiplying both sides by $k!$, the left-hand side becomes, after using \[r2\], $$k!\sum_{j=1}^k\binom{k-1}{j-1}\binom{n+j}{n}(-1)^j=\sum_{j=1}^k\widehat{w}_{(\alpha)}(k,j)\frac{(n+j)!(-1)^j}{n!\alpha^{k-j}},$$ while the right-hand side simply becomes $$k!(-1)^{k-2}\binom{n+1}{n-k+1}=(-1)^k\frac{(n+1)!}{(n-k+1)!}.$$ Thus, we have derived the following theorem:
\[thm1\] For $k\geq2$ and $n\geq k-1$, the translated Whitney-Lah numbers satisfy $$\sum_{j=1}^k(-\alpha)^j\widehat{w}_{(\alpha)}(k,j)(n+j)!=(-\alpha)^k\frac{n!(n+1)!}{(n-k+1)!}.\label{r4}$$
When $\alpha=1$, we immediately recognize $$\sum_{j=1}^k(-1)^jL(k,j)(n+j)!=(-1)^k\frac{n!(n+1)!}{(n-k+1)!},\label{gouqi}$$ an identity for the classical Lah numbers which was proved using six different methods by Guo and Qi [@Guo]. A more direct approach to establishing \[r4\] is as follows.
The generating function in \[twlHGF\] may be rewritten as $$(-\alpha)^k(-t)_k=\sum_{j=0}^k\alpha^j\widehat{w}_{(\alpha)}(k,j)(t)_j.$$ Since $(-n-1)_jn!=(-1)^j(n+j)!$, then replacing $t$ with $-n-1$ in the previous equation gives $$(-\alpha)^kn!(n+1)_k=\sum_{j=0}^k(-\alpha)^j\widehat{w}_{(\alpha)}(k,j)(n+j)!$$ as desired.
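The identity in Theorem \[thm1\] can also be confirmed numerically; the following sketch (illustrative `ALPHA`, not part of the paper) tests it over a small range of $k$ and $n$:

```python
from math import comb, factorial

ALPHA = 3  # illustrative weight

def wlah(n, k):  # closed form alpha^(n-k) L(n, k)
    if n == 0 and k == 0:
        return 1
    if k == 0 or k > n:
        return 0
    return ALPHA ** (n - k) * factorial(n) // factorial(k) * comb(n - 1, k - 1)

# sum_j (-alpha)^j wlah(k, j) (n+j)! = (-alpha)^k n!(n+1)!/(n-k+1)!
for k in range(2, 8):
    for n in range(k - 1, 10):
        lhs = sum((-ALPHA) ** j * wlah(k, j) * factorial(n + j)
                  for j in range(1, k + 1))
        rhs = (-ALPHA) ** k * factorial(n) * factorial(n + 1) // factorial(n - k + 1)
        assert lhs == rhs
```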
Now, to derive a generalization of \[QiF\], there are two known methods presented in the paper of Qi [@Qi] to choose from. The first one employs Faà di Bruno's formula and the $n$-th derivative of the exponential function $e^{\pm 1/x}$ given by $$\left(e^{\pm 1/x}\right)^{(n)}=(-1)^ne^{\pm 1/x}\sum_{k=1}^n(\pm 1)^kL(n,k)\frac{1}{x^{n+k}}$$ found in the paper of Daboul et al. [@Dab]. The second is less complicated and requires only the use of the inverse relation $$f_n=\sum_{j=0}^n{\genfrac[]{0pt}{}{n}{j}}g_j\Longleftrightarrow g_n=\sum_{j=0}^n(-1)^{n-j}{\genfrac\{\}{0pt}{}{n}{j}}f_j.$$ To obtain our next objective, we choose a process similar to the latter, since by using the orthogonality relations [@Mangontarum1 Corollary 4.2] $$\sum_{j=m}^n(-1)^{j-m}\widetilde{W}_{(\alpha)}(n,j)\widetilde{w}_{(\alpha)}(j,m)=\sum_{j=m}^n(-1)^{n-j}\widetilde{w}_{(\alpha)}(n,j)\widetilde{W}_{(\alpha)}(j,m)=\delta_{m,n},$$ it can be easily shown that the following inverse relation is valid: $$f_n=\sum_{j=0}^n\widetilde{w}_{(\alpha)}(n,j)g_j\Longleftrightarrow g_n=\sum_{j=0}^n(-1)^{n-j}\widetilde{W}_{(\alpha)}(n,j)f_j.$$ Next, we rewrite \[wlahwW\] as $$\widehat{w}_{(\alpha)}(n,k)=\sum_{j=0}^n\widetilde{w}_{(\alpha)}(n,j)\widetilde{W}_{(\alpha)}(j,k),\label{wlahwW2}$$ and take $g_j=\widetilde{W}_{(\alpha)}(j,k)$ and $f_n=\widehat{w}_{(\alpha)}(n,k)$ so that when the above inverse relation is applied, we get $$\widetilde{W}_{(\alpha)}(n,k)=\sum_{j=0}^n(-1)^{n-j}\widetilde{W}_{(\alpha)}(n,j)\widehat{w}_{(\alpha)}(j,k).\label{wlahwW3}$$ We then recall that the translated Dowling numbers [@Mangontarum2], denoted by $D_{(\alpha)}(n)$, are defined as the sum of the translated Whitney numbers of the second kind, i.e.
$$D_{(\alpha)}(n)=\sum_{k=0}^n\widetilde{W}_{(\alpha)}(n,k),\label{translatedDN}$$ and are known to satisfy the explicit formula [@Mangontarum2 Equation 26] $$D_{(\alpha)}(n)=\left(\frac{1}{e}\right)^{1/\alpha}\sum_{i=0}^{\infty}\frac{(i\alpha)^n}{i!\alpha^i}.$$ Summing both sides of \[wlahwW3\] over $k$ from $0$ to $n$ and applying \[translatedDN\] gives $$D_{(\alpha)}(n)=\sum_{k=0}^n\sum_{j=0}^n(-1)^{n-j}\widetilde{W}_{(\alpha)}(n,j)\widehat{w}_{(\alpha)}(j,k).$$ Thus, we have proved the result in the next theorem.
The translated Dowling numbers satisfy the explicit formula given by $$D_{(\alpha)}(n)=\sum_{j=0}^n(-1)^{n-j}\left(\sum_{k=0}^j\widehat{w}_{(\alpha)}(j,k)\right)\widetilde{W}_{(\alpha)}(n,j).\label{GQiF1}$$
To close this section, notice that by \[r2\], we may write $$D_{(\alpha)}(n)=\sum_{j=0}^n(-1)^{n-j}\left(\sum_{k=0}^j\alpha^{j-k}L(j,k)\right)\widetilde{W}_{(\alpha)}(n,j).$$ Since it is known that [@Mangontarum1; @Mangontarum2] $\widetilde{W}_{(1)}(n,j)={\genfrac\{\}{0pt}{}{n}{j}}$ and $D_{(1)}(n)=B_n$, the formula in \[GQiF1\] generalizes Qi’s formula in \[QiF\] when $\alpha=1$.
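The explicit formula for the translated Dowling numbers in the theorem above can be tested against their definition as a sum of translated Whitney numbers of the second kind; below is a small numerical check (illustrative `ALPHA`, not part of the paper):

```python
from functools import lru_cache
from math import comb, factorial

ALPHA = 2  # illustrative weight

@lru_cache(maxsize=None)
def w2(n, k):  # translated Whitney numbers of the second kind
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return w2(n - 1, k - 1) + ALPHA * k * w2(n - 1, k)

def wlah(n, k):  # translated Whitney-Lah numbers: alpha^(n-k) L(n, k)
    if n == 0 and k == 0:
        return 1
    if k == 0 or k > n:
        return 0
    return ALPHA ** (n - k) * factorial(n) // factorial(k) * comb(n - 1, k - 1)

for n in range(9):
    dowling = sum(w2(n, k) for k in range(n + 1))          # definition of D(n)
    formula = sum((-1) ** (n - j) * sum(wlah(j, k) for k in range(j + 1)) * w2(n, j)
                  for j in range(n + 1))                   # the theorem's formula
    assert dowling == formula
```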
Translated $q$-Whitney-Lah numbers {#sec2}
==================================
The translated $q$-Whitney numbers of the first and second kind [@Mah4], denoted by $w^1_{(\alpha)}[n,k]_q$ and $w^2_{(\alpha)}[n,k]_q$, respectively, are defined in terms of the following horizontal generating functions: $$[t|\alpha]_n=\sum_{k=0}^nw^1_{(\alpha)}[n,k]_q[t]_q^k\label{def1}$$ and $$[t]_q^n=\sum_{k=0}^nw^2_{(\alpha)}[n,k]_q[t|\alpha]_k,\label{def2}$$ where $$[t|\alpha]_n=\prod_{i=0}^{n-1}[t-i\alpha]_q.$$ Here, $[n]_q$ is used to denote the $q$-analogue of the integer $n$ defined by $$[n]_q=\frac{q^n-1}{q-1}=1+q+q^2+\cdots+q^{n-1}.$$ Various combinatorial properties of the numbers $w^1_{(\alpha)}[n,k]_q$ and $w^2_{(\alpha)}[n,k]_q$ and a certain combinatorial interpretation in the context of $A$-tableaux have already been established in the same paper. The properties include the inverse relation [@Mah4 Corollary 2.10] $$f_n=\sum_{j=0}^nw^1_{(\alpha)}[n,j]_qg_j\Longleftrightarrow g_n=\sum_{j=0}^nw^2_{(\alpha)}[n,j]_qf_j. \label{invqTW}$$ Moreover, the said numbers have been shown to be proper $q$-analogues of the translated Whitney numbers. In general, the term “$q$-analogue” refers to a mathematical expression in terms of a parameter $q$ such that as $q\rightarrow 1$, it reduces to a known identity or formula. For instance, it is clear that $$\lim_{q\rightarrow 1}[n]_q=n.$$ Other examples are the $q$-binomial coefficient $$\binom{n}{k}_q=\prod_{j=1}^k\frac{q^{n-j+1}-1}{q^j-1}=\frac{[n]_q!}{[k]_q![n-k]_q!}$$ and the $q$-falling factorial of $n$ of order $k$ $$[n]_{q,k}=\prod_{j=0}^{k-1}\frac{q^{n-j}-1}{q-1}=\frac{[n]_q!}{[n-k]_q!},$$ where $[n]_q!=\prod_{i=1}^n[i]_q$ is the $q$-factorial of $n$. The following limits are easy to verify: $$\lim_{q\rightarrow 1}[n]_q!=n!,\ \ \lim_{q\rightarrow 1}\binom{n}{k}_q=\binom{n}{k},\ \ \lim_{q\rightarrow 1}[n]_{q,k}=(n)_k.$$ The book of Kac and Cheung [@Kac] is a rich source for further discussions on $q$-analogues.
The study of $q$-analogues of mathematical identities has been the interest of many mathematicians over a long period of time. For the case of the Lah numbers, Garsia and Remmel [@Gar] defined the $q$-Lah numbers, denoted by $L_q(n,k)$, by $$[t]_q[t+1]_q\cdots[t+n-1]_q=\sum_{k=0}^nL_q(n,k)[t]_q[t-1]_q\cdots[t-k+1]_q$$ with the recurrence relation $$L_q(n+1,k)=q^{n+k-1}L_q(n,k-1)+[n+k]_qL_q(n,k)$$ and explicit formula $$L_q(n,k)=\binom{n}{k}_q\frac{[n-1]_q!}{[k-1]_q!}q^{k(k-1)}.$$ A more general notion was also introduced in [@Mah4]. The translated $q$-Whitney numbers of the third kind, denoted by $L_{(\alpha)}[n,k]_q$, are defined as coefficients in the expansion of [@Mah4 Equation 15] $$[t|-\alpha]_n=\sum_{k=0}^nL_{(\alpha)}[n,k]_q[t|\alpha]_k\label{def3}$$ which can be computed recursively using the formula [@Mah4 Equation 31] $$L_{(\alpha)}[n+1,k]_q=q^{\alpha(n+k-1)}L_{(\alpha)}[n,k-1]_q+[\alpha(n+k)]_qL_{(\alpha)}[n,k]_q.$$ It is easy to see that $L_{(1)}[n,k]_q=L_q(n,k)$.
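As a numerical sanity check (our code; the paper gives none), the Garsia–Remmel recurrence and the explicit formula for $L_q(n,k)$ can be compared with exact rational arithmetic, here at $q=2$; at $q=1$ the explicit formula reduces to the classical Lah numbers.

```python
from fractions import Fraction

def q_int(n, q):
    return sum(q**i for i in range(n))

def q_factorial(n, q):
    p = Fraction(1)
    for i in range(1, n + 1):
        p *= q_int(i, q)
    return p

def q_binomial(n, k, q):
    return q_factorial(n, q) / (q_factorial(k, q) * q_factorial(n - k, q))

def q_lah_recurrence(nmax, q):
    """Table of L_q(n,k) built from
    L_q(n+1,k) = q^(n+k-1) L_q(n,k-1) + [n+k]_q L_q(n,k), L_q(0,0) = 1."""
    L = {(0, 0): Fraction(1)}
    for n in range(nmax):
        for k in range(n + 2):
            L[(n + 1, k)] = (q**(n + k - 1) * L.get((n, k - 1), Fraction(0))
                             + q_int(n + k, q) * L.get((n, k), Fraction(0)))
    return L

def q_lah_explicit(n, k, q):
    """L_q(n,k) = binom(n,k)_q [n-1]_q!/[k-1]_q! q^(k(k-1)), for 1 <= k <= n."""
    return (q_binomial(n, k, q) * q_factorial(n - 1, q)
            / q_factorial(k - 1, q) * q**(k * (k - 1)))
```

Both routes give, e.g., $L_q(2,1)=[2]_q$ and $L_q(2,2)=q^2$, and at $q=1$ the explicit formula returns the classical value $L(4,2)=36$.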
The numbers $L_{(\alpha)}[n,k]_q$ satisfy the following: $$L_{(\alpha)}[n,k]_q=\sum_{j=0}^nw^1_{(-\alpha)}[n,j]_qw^2_{(\alpha)}[j,k]_q.\label{qw1w2}$$
Putting $-\alpha$ in place of $\alpha$ in and then applying , we obtain $$\begin{aligned}
[t|-\alpha]_n&=&\sum_{k=0}^nw^1_{(-\alpha)}[n,k]_q[t]_q^k\\
&=&\sum_{j=0}^n\left\{\sum_{k=j}^nw^1_{(-\alpha)}[n,k]_qw^2_{(\alpha)}[k,j]_q\right\}[t|\alpha]_j.\end{aligned}$$ By comparing the coefficients of $[t|\alpha]_j$ in the last equation with that of , we get the desired result.
The identity in the previous theorem suggests that the numbers $L_{(\alpha)}[n,k]_q$ may be referred to as the translated $q$-Whitney-Lah numbers. To establish an explicit formula, we will use a method different from the one used in the previous section. We start by rewriting into the form $$\begin{aligned}
[\alpha k|-\alpha]_n&=&\sum_{j=0}^nL_{(\alpha)}[n,j]_q[\alpha k|\alpha]_j\\
&=&\sum_{j=0}^k\binom{k}{j}_{q^{\alpha}}\left\{\frac{L_{(\alpha)}[n,j]_q[\alpha k|\alpha]_j}{\binom{k}{j}_{q^{\alpha}}}\right\}.\end{aligned}$$ Since the well-known $q$-binomial inversion formula can be expressed as $$f_k=\sum_{j=0}^k\binom{k}{j}_{q^{\alpha}}g_j\Longleftrightarrow g_k=\sum_{j=0}^k(-1)^{k-j}q^{\alpha\binom{k-j}{2}}\binom{k}{j}_{q^{\alpha}}f_j,$$ then with $f_k=[\alpha k|-\alpha]_n$ and $g_j=\frac{L_{(\alpha)}[n,j]_q[\alpha k|\alpha]_j}{\binom{k}{j}_{q^{\alpha}}}$ (observe that $[\alpha k|\alpha]_j/\binom{k}{j}_{q^{\alpha}}=[\alpha]_q^j[j]_{q^{\alpha}}!$ does not actually depend on $k$, so the inversion formula applies), we get $$[\alpha k|\alpha]_kL_{(\alpha)}[n,k]_q=\sum_{j=0}^k(-1)^{k-j}q^{\alpha\binom{k-j}{2}}\binom{k}{j}_{q^{\alpha}}[\alpha j|-\alpha]_n,$$ the result in the next theorem.
The translated $q$-Whitney-Lah numbers satisfy the following explicit formula: $$L_{(\alpha)}[n,k]_q=\frac{1}{[k]_{q^{\alpha}}![\alpha]_q^k}\sum_{j=0}^k(-1)^{k-j}q^{\alpha\binom{k-j}{2}}\binom{k}{j}_{q^{\alpha}}[\alpha j|-\alpha]_n.\label{qr1}$$
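The explicit formula can be checked against the recurrence for $L_{(\alpha)}[n,k]_q$ quoted above; the sketch below (ours, not from the paper) does this with exact rational arithmetic for small $n$, $k$ and integer $\alpha$.

```python
from fractions import Fraction

def q_int(n, q):
    return sum(q**i for i in range(n))

def q_factorial(n, q):
    p = Fraction(1)
    for i in range(1, n + 1):
        p *= q_int(i, q)
    return p

def q_binomial(n, k, q):
    return q_factorial(n, q) / (q_factorial(k, q) * q_factorial(n - k, q))

def whitney_lah_recurrence(nmax, alpha, q):
    """L_{(alpha)}[n,k]_q from
    L[n+1,k] = q^(alpha(n+k-1)) L[n,k-1] + [alpha(n+k)]_q L[n,k]."""
    L = {(0, 0): Fraction(1)}
    for n in range(nmax):
        for k in range(n + 2):
            L[(n + 1, k)] = (q**(alpha * (n + k - 1)) * L.get((n, k - 1), Fraction(0))
                             + q_int(alpha * (n + k), q) * L.get((n, k), Fraction(0)))
    return L

def whitney_lah_explicit(n, k, alpha, q):
    """Explicit formula obtained from q-binomial inversion."""
    qa = q**alpha
    total = Fraction(0)
    for j in range(k + 1):
        rising = Fraction(1)      # [alpha j | -alpha]_n = prod_i [alpha(j+i)]_q
        for i in range(n):
            rising *= q_int(alpha * (j + i), q)
        total += ((-1)**(k - j) * q**(alpha * ((k - j) * (k - j - 1) // 2))
                  * q_binomial(k, j, qa) * rising)
    return total / (q_factorial(k, qa) * q_int(alpha, q)**k)
```

For example, at $\alpha=1$, $q=2$ both routes give $L_{(1)}[3,2]_q=84$, and at $\alpha=2$, $q=2$ they give $L_{(2)}[2,1]_q=[4]_q=15$.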
Formula is a $q$-analogue of the explicit formula in since $$\lim_{q\rightarrow1}[k]_{q^{\alpha}}!=k!,\ \ \lim_{q\rightarrow1}[\alpha j|-\alpha]_n=\alpha^n\left\langle j\right\rangle_n$$ and $$\begin{aligned}
\lim_{q\rightarrow1}L_{(\alpha)}[n,k]_q&=&\lim_{q\rightarrow1}\left(\frac{1}{[k]_{q^{\alpha}}![\alpha]_q^k}\sum_{j=0}^k(-1)^{k-j}q^{\alpha\binom{k-j}{2}}\binom{k}{j}_{q^{\alpha}}[\alpha j|-\alpha]_n\right)\\
&=&\frac{\alpha^{n-k}}{k!}\sum_{j=0}^k(-1)^{k-j}\binom{k}{j}\left\langle j\right\rangle_n.\end{aligned}$$ Furthermore, we may use the above explicit formula in establishing a kind of exponential generating function for the numbers $L_{(\alpha)}[n,k]_q$. But before proceeding, we first mention the following important identities: $$[\alpha j|-\alpha]_n=[\alpha]_q^n[j+n-1]_{q^{\alpha},n},\ \ \frac{[j+n-1]_{q^{\alpha},n}}{[n]_{q^{\alpha}}!}=\binom{j+n-1}{n}_{q^{\alpha}}\label{pe1}$$ and $$\prod_{k=0}^{n-1}\frac{1}{1-q^kt}=\sum_{k=0}^{\infty}\binom{n+k-1}{k}_qt^k.\label{pe2}$$
The translated $q$-Whitney-Lah numbers satisfy the following exponential generating function: $$\sum_{n=0}^{\infty}L_{(\alpha)}[n,k]_q\frac{t^n}{[n]_{q^{\alpha}}!}=\frac{1}{[k]_{q^{\alpha}}![\alpha]_q^k}\sum_{j=0}^k(-1)^{k-j}q^{\alpha\binom{k-j}{2}}\binom{k}{j}_{q^{\alpha}}\prod_{n=0}^{j-1}(1-q^{\alpha n}[\alpha]_qt)^{-1}.\label{qr1.1}$$
From and , $$\sum_{n=0}^{\infty}L_{(\alpha)}[n,k]_q\frac{t^n}{[n]_{q^{\alpha}}!}=\frac{1}{[k]_{q^{\alpha}}![\alpha]_q^k}\sum_{j=0}^k(-1)^{k-j}q^{\alpha\binom{k-j}{2}}\binom{k}{j}_{q^{\alpha}}\sum_{n=0}^{\infty}\binom{j+n-1}{n}_{q^{\alpha}}([\alpha]_qt)^n.$$ The result is obtained by applying in the second summation.
By taking the limit of as $q\rightarrow1$, $$\lim_{q\rightarrow1}\sum_{n=0}^{\infty}L_{(\alpha)}[n,k]_q\frac{t^n}{[n]_{q^{\alpha}}!}=\frac{1}{\alpha^kk!}\sum_{j=0}^k(-1)^{k-j}\binom{k}{j}\left(\frac{1}{1-\alpha t}\right)^j$$ which in turn simplifies to . On the other hand, the next theorem contains a $q$-analogue of .
The translated $q$-Whitney-Lah numbers satisfy the following: $$\sum_{j=0}^k(-[\alpha]_q)^jq^{-nj-\binom{j+1}{2}}L_{(\alpha)}[k,j]_q[n+j]_{q^{\alpha}}!=\frac{(-[\alpha]_q)^k[n]_{q^{\alpha}}![n+1]_{q^{\alpha}}!}{[n-k+1]_{q^{\alpha}}!}.\label{qr2}$$
The proof is somewhat parallel to the alternative proof of Theorem \[thm1\]. We proceed by rewriting as $$[-\alpha]_q^k\prod_{i=0}^{k-1}[-t-i]_{q^{\alpha}}=\sum_{j=0}^k[\alpha]_q^jL_{(\alpha)}[k,j]_q\prod_{i=0}^{j-1}[t-i]_{q^{\alpha}}.$$ We put $-n-1$ in place of $t$ and multiply both sides by $[n]_{q^{\alpha}}!$ so that the left-hand side becomes $$\begin{aligned}
[-\alpha]_q^k\prod_{i=0}^{k-1}[n+1-i]_{q^{\alpha}}[n]_{q^{\alpha}}!&=&[-\alpha]_q^k[n]_{q^{\alpha}}![n+1]_{q^{\alpha},k}\\
&=&\frac{[-\alpha]_q^k[n]_{q^{\alpha}}![n+1]_{q^{\alpha}}!}{[n-k+1]_{q^{\alpha}}!}\end{aligned}$$ while the right-hand side is $$\sum_{j=0}^k[\alpha]_q^jL_{(\alpha)}[k,j]_q[n]_{q^{\alpha}}!\prod_{i=0}^{j-1}[-n-1-i]_{q^{\alpha}}=\sum_{j=0}^k(-[\alpha]_q)^jq^{-nj-\binom{j+1}{2}}L_{(\alpha)}[k,j]_q[n+j]_{q^{\alpha}}!,$$ where the identity $j(n+1)+\binom{j}{2}=nj+\binom{j+1}{2}$ is used. Combining these equations gives the desired result.
The corollary below is a direct consequence of when $\alpha=1$. This formula is a $q$-analogue of Guo and Qi’s [@Guo] identity in which can easily be verified by taking the limit as $q\rightarrow1$.
The $q$-Lah numbers satisfy $$\sum_{j=0}^k(-1)^jq^{-nj-\binom{j+1}{2}}L_q(k,j)[n+j]_q!=\frac{(-1)^k[n]_q![n+1]_q!}{[n-k+1]_q!}.\label{qr2.1}$$
The translated $q$-Dowling numbers [@Mah4], denoted by $D_{(\alpha)}[n]_q$, are defined by the following sum: $$D_{(\alpha)}[n]_q=\sum_{k=0}^nw^2_{(\alpha)}[n,k]_q.$$ The last theorem presents a $q$-analogue of the explicit formula in .
The translated $q$-Dowling numbers satisfy the following explicit formula $$D_{(\alpha)}[n]_q=\sum_{j=0}^n\left(\sum_{k=0}^jL_{(\alpha)}[j,k]_q\right)w_{(-\alpha)}^2[n,j]_q.\label{qGQiF1}$$
We put $-\alpha$ in place of $\alpha$, and set $g_j=w^2_{(\alpha)}[j,k]_q$ and $f_n=L_{(\alpha)}[n,k]_q$ in the inverse relation in so that when the resulting relation is applied to , $$w^2_{(\alpha)}[n,k]_q=\sum_{j=0}^nw^2_{(-\alpha)}[n,j]_qL_{(\alpha)}[j,k]_q.$$ The desired result is obtained by summing both sides over $k$ from $0$ to $n$.
The explicit formula [@Mangontarum2 Equation 10] $$\widetilde{W}_{(\alpha)}(n,k)=\frac{1}{\alpha^kk!}\sum_{j=0}^k(-1)^{k-j}\binom{k}{j}(\alpha j)^n$$ shows that $\widetilde{W}_{(-\alpha)}(n,k)=(-1)^{n-k}\widetilde{W}_{(\alpha)}(n,k)$. Hence, $$\begin{aligned}
\lim_{q\rightarrow1}D_{(\alpha)}[n]_q&=&\lim_{q\rightarrow1}\sum_{j=0}^n\left(\sum_{k=0}^jL_{(\alpha)}[j,k]_q\right)w_{(-\alpha)}^2[n,j]_q\\
&=&\sum_{j=0}^n(-1)^{n-j}\left(\sum_{k=0}^j\widehat{w}_{(\alpha)}(j,k)\right)\widetilde{W}_{(\alpha)}(n,j)\end{aligned}$$ which is precisely .
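The explicit formula for $\widetilde{W}_{(\alpha)}(n,k)$ admits a direct numerical check (the code below is ours): at $\alpha=1$ it reproduces the Stirling numbers of the second kind, its row sums at $\alpha=1$ give the Bell numbers, and the stated sign relation can be verified term by term.

```python
from fractions import Fraction
from math import comb, factorial

def translated_whitney2(n, k, alpha):
    """Explicit formula for the translated Whitney numbers of the second kind:
    (1/(alpha^k k!)) sum_j (-1)^(k-j) C(k,j) (alpha j)^n, as an exact rational."""
    s = sum((-1)**(k - j) * comb(k, j) * (alpha * j)**n for j in range(k + 1))
    return Fraction(s, alpha**k * factorial(k))
```

For instance, $\widetilde{W}_{(1)}(4,k)$ for $k=0,\dots,4$ sums to the Bell number $B_4=15$.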
As we end, it may be worthwhile to say that the present paper was not able to express the explicit formula of $L_{(\alpha)}[n,k]_q$ in a way similar to that of for the case of $\widehat{w}_{(\alpha)}(n,k)$. Perhaps this can be done by establishing a $q$-analogue of the binomial identity in .
[99]{}
M. M. Mangontarum, O. I. Cauntongan, and A. M. Dibagulun, A note on the translated Whitney numbers and their $q$-analogues, [*Turkish J. of Analysis and Number Theory*]{} [**4**]{} (2016), 74–81.
|
---
abstract: 'One potential star-planet interaction mechanism for hot Jupiters involves planetary heating via currents set up by interactions between the stellar wind and planetary magnetosphere. Early modeling results indicate that such currents, which are analogous to the terrestrial global electric circuit (GEC), have the potential to provide sufficient heating to account for the additional radius inflation seen in some hot Jupiters. Here we present a more detailed model of this phenomenon, exploring the scale of the effect, the circumstances under which it is likely to be significant, implications for the planetary magnetospheric structure, and observational signatures.'
---
Introduction
============
Hot Jupiters frequently have radii which are “inflated” relative to the expected mass-radius relation [@Fortney10 (Fortney & Nettelmann 2010)]. The degree of inflation is correlated with orbital semimajor axis (Demory & Seager 2011) and typically becomes significant for $F > 2 \times 10^8 \rm ~erg~cm^{-2}~s^{-1}$, which corresponds to approximately the Alfvén radius $a = r_{A}$, and there is some indication that it exhibits a dependence on stellar activity level [@Buzasi13 (Buzasi 2013)]. Proposed models for this behavior include tidal heating [@Gu04 (Gu et al. 2004)] and “Ohmic heating” due to interactions between atmospheric flows and the planetary magnetic field [@Batygin11 (Batygin et al. 2011)]. [@Buzasi13 Buzasi (2013)] proposed interactions between the stellar wind and the planetary magnetic field as an alternate mechanism; in this work we explore the ramifications of that suggestion in the context of a more complete planetary model.
Model and Results
=================
The model calculates currents internal to the planet generated by the electric potential difference produced by the stellar wind/magnetosphere interaction and mapped down to the outer layers of the planet, analogous to the “global electric circuit” (GEC; [@Tinsley07 Tinsley et al. 2007]). A solar wind model [@Suzuki06 (Suzuki 2006)] is taken as input, and the conductivity is calculated for the planetary interior, which in turn is calculated using the MESA code [@Paxton11 (Paxton et al. 2011)]. The presence of a $\sim 10 \rm G$ planetary magnetic field renders conductivity inhomogeneous. The magnetic dipole moment is presumed coaligned with the rotation axis and ecliptic poles, and the potential difference drives Pedersen and parallel (field-aligned) currents, resulting in Joule heating of the planetary interior.
A series of models with solar composition were calculated for $M_{PL} = 10^{-3} M_{\odot}$ ($\sim M_J$) using MESA [@Paxton11 (Paxton et al. 2011)] and evolved to $t = 5 \rm~Gyr$. Model electron densities were combined with the adopted $10 \rm~G$ dipole planetary magnetic field to calculate the classical ($\sigma_0$), Pedersen ($\sigma_P$), and Hall ($\sigma_H$) conductivities. The ionospheric electric potential was taken as zero except at two regions in each hemisphere located at invariant latitude $\Lambda = \cos^{-1} \sqrt{\frac{R_{PL}}{R_M}}$ and separated by $\pi / 2$ in longitude. Regions were circular with diameter $\Lambda$. The potential adopted was the minimum of the wind-induced field $E = v_w B_w R_M$ and the value leading to the maximum energy deposition possible [@Akasofu81 (Akasofu 1981)], $\epsilon = v_w B_w^2 R_M^2$.
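The invariant latitude above is purely geometric; a one-line sketch (our code, and the magnetopause distance $R_M = 10\,R_{PL}$ in the usage note is an illustrative value of ours, not taken from the text):

```python
import math

def invariant_latitude_deg(R_pl, R_m):
    """Footprint latitude Lambda = arccos(sqrt(R_PL / R_M)) of the dipole
    field line that closes at magnetopause distance R_M, in degrees."""
    return math.degrees(math.acos(math.sqrt(R_pl / R_m)))
```

For example, $R_M = 10\,R_{PL}$ gives $\Lambda \approx 71.6^{\circ}$, i.e. the mapped potential is confined to small high-latitude regions.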
Heating resulting from these potential/conductivity combinations was calculated and incorporated into the MESA models. The resulting energy deposition profiles for two typical models are shown in Figure 1a. Total heating in both cases is limited by the power available from the wind rather than by the wind-induced field strength. Figure 1b illustrates planetary interior models resulting from the two cases; note the growth of the 1-bar planetary radius from $\approx 1.05 R_J$ at 1 AU to $\approx 1.58 R_J$ at 0.014 AU. The latter corresponds to a stellar irradiance of $F = 7 \times 10^9 \rm~erg~cm^{-2}~s^{-1}$ and an orbital period of 0.6 days.
Discussion
==========
Successful models which account for the observed radius excess in hot Jupiters by additional heating must be capable of supplying $>10^{27} \rm~erg~s^{-1}$ to the convective portion of the planetary interior. The proposed GEC model is capable of such heating, and produces planetary radii broadly in accord with observations (Figure 2). Variations in planetary mass, planetary composition (leading to changes in conductivity), planetary magnetic field, and stellar magnetic field may account for the range of radii observed in the sample. In addition, we note that it is likely that multiple heating models coexist, including tidal heating (for noncircular orbits) and Ohmic heating ([@Batygin10 Batygin & Stevenson 2010], [@Batygin11 Batygin et al. 2011]).
Observational testing of the model is possible. In particular, predictions include
1. [The correlation of radius inflation with stellar magnetic field, potentially derivable from accurate stellar activity proxies such as Ca II h+k and/or starspot coverage.]{}
2. [Planetary temperatures in excess of those possible based solely on radiative equilibrium calculations. Precise photometric transit observations and comparisons of atmospheric spectra to models should enable such testing.]{}
Figure 2a summarizes model results over a range of semimajor axes, and shows that the effect of GEC heating on planetary radius becomes important for $a < 0.1 \rm~AU$, and is capable of inflating planets to radii consistent with those observed. Note that variations in (a) stellar magnetic field, (b) planetary magnetic field, and (c) planetary composition will have potentially significant impacts which are not explored in detail here (though see [@Buzasi13 Buzasi 2013] for limited discussion). Further improvements to the model are planned, and include examining the effects of variations in composition and magnetic field, as well as GEC model interactions with other heating mechanisms, such as Ohmic ([@Batygin11 Batygin et al. 2011]) and tidal heating.
Acknowledgments
===============
I am grateful to the Whitaker Foundation for its support of the Whitaker Chair at Florida Gulf Coast University. Funding for the Kepler mission is provided by the NASA Science Mission Directorate, and this work was in part funded by the Kepler Participating Science Program grant NNH09CE70C.
1981, *Space Sci. Revs*, 28, 121
2010, *ApJ*, 714, 238
2011, *ApJ*, 738, 1
2013, *ApJ*, 765, 25
2011, *ApJS*, 197, 12
2010, *Space Sci. Revs*, 152, 423
2004, *ApJ*, 608, 1076
2012, *ApJ*, 754, 9
2012, *ApJS*, 192, 3
2006, *ApJ*, 640, 75
2007, *Adv. Sp. Res.*, 40, 1126
|
---
abstract: 'With the aid of a computer, we provide a motion picture of the twist-spun trefoil which exhibits the periodicity well.'
address: 'Department of Mathematics, Tokyo Institute of Technology, Ookayama, Meguro–ku, Tokyo, 152–8551 Japan'
author:
- Ayumu Inoue
title: |
A symmetric motion picture\
of the twist-spun trefoil
---
Introduction {#sec:introduction}
============
Preliminaries {#sec:preliminaries}
=============
Spun $2$-knot, twist-spun $2$-knot, and their known motion pictures {#sec:spun_two_knot_twist_spun_two_knot_and_their_known_motion_pictures}
===================================================================
Diagram of the $2$-twist-spun trefoil {#sec:diagram_of_the_two_twist_spun_trefoil}
=====================================
Proof of Theorem \[thm:main\] {#sec:proof_of_main_theorem}
=============================
Acknowledgements {#acknowledgements .unnumbered}
----------------
The author would like to express his sincere gratitude to Professor Sadayoshi Kojima for encouraging him. He is also grateful to Professor Shin Satoh for his invaluable comments. He has used blender[^1], which is a free open source and quite powerful 3D content creation suite, to construct, slice, and render $2$-knot diagrams. This research has been supported in part by JSPS Global COE program “Computationism as a Foundation for the Sciences”.
[99]{}
E. Artin, [*Zur Isotopie zweidimensionaler Flächen im $R_{4}$*]{}, Abh. Math. Sem. Univ. Hamburg [**4**]{} (1926), 174–177.
S. Carter and M. Saito, [*Knotted surfaces and their diagrams*]{}, Mathematical Surveys and Monographs [**55**]{}, American Mathematical Society, 1998.
S. Kamada, [*Braid and knot theory in dimension four*]{}, Mathematical Surveys and Monographs [**95**]{}, American Mathematical Society, 2002.
S. Satoh, [*Surface diagrams of twist-spun 2-knots*]{}, Knots 2000 Korea, Vol. 1 (Yongpyong), J. Knot Theory Ramifications [**11**]{} (2002), no. 3, 413–430.
S. Satoh, [*Sheet numbers and cocycle invariants of surface-knots*]{}, Intelligence of low dimensional topology 2006, 287–291, Ser. Knots Everything [**40**]{}, World Sci. Publ., Hackensack, NJ, 2007.
S. Satoh and A. Shima, [*The $2$-twist-spun trefoil has the triple point number four*]{}, Trans. Amer. Math. Soc. [**356**]{} (2004), no. 3, 1007–1024.
E. C. Zeeman, [*Twisting spun knots*]{}, Trans. Amer. Math. Soc. [**115**]{} (1965), 471–495.
[^1]: 0.1em <http://www.blender.org/>
|
---
abstract: 'We have used the Fisher matrix formalism to quantify the prospects of detecting the $z = 3.35$ redshifted 21-cm HI power spectrum with the upcoming radio-interferometric array OWFA. OWFA’s frequency and baseline coverage spans comoving Fourier modes in the range $1.8 \times 10^{-2} \le k \le 2.7 \, {\rm Mpc}^{-1}$. The OWFA HI signal, however, is predominantly from the range $k \le 0.2 \, {\rm Mpc}^{-1}$. The larger modes, though abundant, do not contribute much to the HI signal. In this work we have focused on combining the entire signal to achieve a detection. We find that a $5-\sigma$ detection of $A_{HI}$ is possible with $\sim 150 \, {\rm hr}$ of observations, where $A_{HI}^2$ is the amplitude of the HI power spectrum. We have also carried out a joint analysis for $A_{HI}$ and $\beta$ the redshift space distortion parameter. Our study shows that OWFA is very sensitive to the amplitude of the HI power spectrum. However, the anisotropic distribution of the $\k$ modes does not make it very suitable for measuring $\beta$.'
author:
- |
Somnath Bharadwaj$^{1,2}$[^1], Anjan Kumar Sarkar$^{2}$[^2] and Sk. Saiyad Ali$^{3}$[^3]\
$^{1}$ Department of Physics, IIT Kharagpur, 721302, India\
$^{2}$ Centre for Theoretical Studies, IIT Kharagpur, 721302 , India\
$^{3}$ Department of Physics, Jadavpur University, Kolkata 700032, India
title: 'Fisher matrix predictions for detecting the cosmological $21\, {\rm cm}$ signal with the Ooty Wide Field Array (OWFA)'
---
[cosmology: large scale structure of universe - intergalactic medium - diffuse radiation ]{}
Introduction
============
Work is currently in progress to upgrade the cylindrical Ooty Radio Telescope (ORT[^4]) so that it functions as a linear interferometric array, the Ooty Wide Field Array (OWFA; Prasad & Subrahmanya 2011a,b; Ram Marthi & Chengalur 2014). This telescope operates at a nominal frequency of $\nu_o= 326.5 \,{\rm MHz}$ which corresponds to the neutral hydrogen (HI) $1,420 \, {\rm MHz}$ radiation from a redshift $z=3.35$. Observations of the fluctuations in the contribution from the HI to the diffuse background radiation are a very interesting probe of the large-scale structures in the high-$z$ universe (Bharadwaj, Nath & Sethi 2001; Bharadwaj & Sethi 2001). In addition to the power spectrum (Bharadwaj & Pandey 2003; Bharadwaj & Srikant 2004) this is also a sensitive probe of the bispectrum (Ali, Bharadwaj and Pandey 2006; Guha Sarkar & Hazra 2013). There has been a continued, growing interest towards the detection of the 21 cm signal from the lower redshifts $(0 < z < 4)$ to probe the post-reionization era (Chang et al. 2008; Visbal et al. 2009; Bharadwaj et al. 2009; Wyithe & Loeb 2009; Bagla, Khandai & Datta 2010; Seo et al. 2010; Mao 2012; Ansari et al. 2012; Bull et al. 2014; Villaescusa-Navarro et al. 2014). Recently, Ali & Bharadwaj (2014) (henceforth, Paper I) have studied the prospects for detecting the HI signal from redshift $z=3.35$ using OWFA. The OWFA provides a unique opportunity to study the large scale structures at $z=3.35$.
A number of similar upcoming packed radio interferometers (CHIME[^5], Bandura et al. 2014; BAOBAB[^6] and the KZN array[^7]) have been proposed to probe the expansion history of the low-redshift universe ($z \le 2.55$) with an unprecedented precision using BAO measurements from the large-scale HI fluctuations. Even more innovative designs are being planned for the future low frequency telescope SKA[^8]. This promises to yield highly significant measurements of the HI power spectrum over a large redshift range spanning nearly the entire post-reionization era $(z < 6)$. However, the detection of the faint $21\,\rm cm$ HI signal ($ \sim 1\, {\rm mK}$) is extremely challenging due to the presence of different astrophysical foregrounds. The foregrounds are four to five orders of magnitude brighter than the post-reionization HI signal (Ghosh et al. 2011a, 2011b).
In this paper, we have considered the visibility correlation (Bharadwaj, & Sethi 2001, Bharadwaj, & Ali 2005) which essentially is the data covariance matrix that is necessary to calculate the Fisher matrix. We have employed the Fisher matrix technique to predict the expected signal-to-noise ratios (SNR) for detecting the HI signal. In our analysis we have assumed that the HI traces the total matter with a linear bias, and the matter power spectrum is precisely as predicted by the standard LCDM model with the parameter values mentioned later. The HI power spectrum is then completely specified by two parameters $A_{HI}$, which sets the overall amplitude of the power spectrum, and $\beta$ the redshift distortion parameter. The parameter $A_{HI}$ here is the product of the mean neutral hydrogen fraction ($\bar{x}_{\HI}$) and the linear bias parameter ($ b_{HI}$). For a detection, we focus on measuring the amplitude $A_{HI}$, marginalizing over $\beta$. We also consider the joint estimation of $A_{HI}$ and $\beta$. Our entire analysis is based on the assumption that the visibility data contains only the signal and the noise, and the foregrounds and radio-frequency interference have been completely removed from the data.
The BAO feature is within the baseline range covered by OWFA (Paper I). However, the frequency coverage ($\sim 30 \, {\rm MHz}$) is rather small. Further, for the present analysis we have only considered observations in a single field of view. All of these result in having very few Fourier modes across the $k$ range relevant for the BAO, and we do not consider this here.
The rest of the paper is organized as follows. Section 2 briefly discusses some relevant system parameters for OWFA. In Section 3, we present the theoretical model for calculating the signal and noise covariance, and predict their respective contributions. Here we also estimate the range of k-modes which are probed by the OWFA. In Section 4 we use the Fisher matrix analysis to make predictions for the SNR as a function of the observing time. Finally, we present summary and conclusions in Section 5.
In this paper, we have used the (Planck $+$ WMAP) best-fit $\Lambda$CDM cosmology with cosmological parameters (Ade et al. 2013): $\Omega_{m}=0.318, \Omega_bh^2=0.022,\Omega_{\Lambda}=0.682,
n_s=0.961, \sigma_8=0.834, h=0.67$. We have used the matter transfer function from the fitting formula of Eisenstein & Hu (1998) incorporating the effect of baryonic features.
Telescope parameters
====================
\[tab:array\]
  ------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Parameter                                Phase I                                Phase II                               Phase III                              Phase IV
  ---------------------------------------- -------------------------------------- -------------------------------------- -------------------------------------- --------------------------------------
  No. of antennas ($N_A$)                  40                                     264                                    528                                    1056

  No. of dipoles ($N_d$)                   24                                     4                                      2                                      1

  Aperture area ($b \times d$)             $30 \,{\rm m} \times 11.5 \,{\rm m}$   $30 \,{\rm m} \times 1.92 \,{\rm m}$   $30 \,{\rm m} \times 0.96 \,{\rm m}$   $30 \,{\rm m} \times 0.48 \,{\rm m}$

  Field of view (FoV)                      $1.75^{\circ} \times 4.6^{\circ}$      $1.75^{\circ} \times 27.4^{\circ}$     $1.75^{\circ} \times 54.8^{\circ}$     $1.75^{\circ} \times 109.6^{\circ}$

  Smallest baseline ($d_{min}$)            $11.5 \,{\rm m}$                       $1.9 \,{\rm m}$                        $0.96 \,{\rm m}$                       $0.48 \,{\rm m}$

  Largest baseline ($d_{max}$)             $448.5 \,{\rm m}$                      $505.0 \,{\rm m}$                      $506.0 \,{\rm m}$                      $506.5 \,{\rm m}$

  Total bandwidth ($B$)                    $18 \,{\rm MHz}$                       $30 \,{\rm MHz}$                       $60 \,{\rm MHz}$                       $120 \,{\rm MHz}$

  Single visibility rms noise ($\sigma$)   $1.12$ Jy                              $6.69$ Jy                              $13.38$ Jy                             $26.76$ Jy
  ------------------------------------------------------------------------------------------------------------------------------------------------------------------
: System parameters for Phases I, II, III and IV of the OWFA .
The ORT is a 530 m long and 30 m wide parabolic cylindrical reflector placed in the north-south direction on a hill with the same slope as the latitude $(11^{\circ})$ of the station (Swarup et al. 1971; Sarma et al. 1975). It thus becomes possible to observe the same part of the sky by rotating the parabolic cylinder along its long axis. The telescope has 1056 half-wavelength $ (0.5 \lambda_0 \approx 0.5 {\rm m})$ dipoles placed nearly end to end along the focal line of the cylinder. Work is underway to implement electronics that combines the digitized signal from every $N_d$ successive dipoles so that we have a linear array of $N_A$ antennas located along the length of the cylinder. The OWFA will, at present, have the ability to operate in two different modes, one with $N_d=24$ and another with $N_d=4$, referred to as Phase I and Phase II respectively. For our theoretical analysis we have also considered two hypothetical (possibly future) upgrades Phases III and IV with $N_d=2$ and $N_d=1$ respectively. Table \[tab:array\] summarizes various parameters for different phases of the array. The individual antennas get more compact, and the field of view increases from Phase I to IV. The number of antennas and the frequency bandwidth also increases from Phase I to IV.
For any phase, each antenna has a rectangular aperture of dimensions $b\times d$, and is distributed at an interval ${\bf d}=d \, {\bf \hat{i}}$ along the length of the cylinder. The value of $b(=30 {\rm m})$, which corresponds to the width of the parabolic reflector, remains fixed for all the phases. The value of $d$ varies for the different phases (Table \[tab:array\] ). The baseline $\u$ quantifies the antenna pair separation projected perpendicular to the line of sight measured in the units of the observing wavelength $\lambda$. Assuming observations vertically overhead, we have the baselines $$\u_a = a \frac{{\bf d}}{\lambda} \hspace{2.0cm} (1 \le a \le N_A-1) \,.$$
In reality $\u_1,\u_2,...$ vary across the observing bandwidth as frequency changes. However, for the present purpose of the paper we keep $\u_a$ fixed at the value corresponding to the nominal frequency.
A schematic view of the OWFA array layout is presented in Paper I. The OWFA has a significant number of redundant baselines. For any baseline $\u_a$ we have $M_a = (N_A - a)$ times sampling redundancy of the baseline.
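The baseline layout and its redundancy can be enumerated directly; the sketch below (our code) assumes the Phase II numbers from Table \[tab:array\] ($N_A=264$ antennas at spacing $d=1.92\,{\rm m}$, $\lambda = c/326.5\,{\rm MHz} \approx 0.92\,{\rm m}$), while the pair counting itself is generic.

```python
from collections import Counter

N_A, d, lam = 264, 1.92, 299792458.0 / 326.5e6   # Phase II values (assumed)

# Distinct baselines u_a = a d / lambda for 1 <= a <= N_A - 1
baselines = [a * d / lam for a in range(1, N_A)]

# An antenna pair (i, j) with j - i = a measures baseline u_a, so each u_a
# is sampled M_a = N_A - a times -- the redundancy quoted in the text.
M = Counter(j - i for i in range(N_A) for j in range(i + 1, N_A))
```

The count confirms $M_a = N_A - a$: the shortest baseline is measured $263$ times, the longest only once.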
OWFA visibility covariance and the Fisher matrix {#sec:vc}
================================================
The OWFA measures visibilities $\V(\u_a,\nu_m)$ at a finite number of baselines $\u_a$ and frequency channels $\nu_m$ with frequency channel width $\Delta \nu_c$ spanning a frequency bandwidth $B$. The measured visibilities can be expressed as a combination of the HI signal and the noise $$\V(\u_a,\nu_m)=\S(\u_a,\nu_m)+\N(\u_a,\nu_m)
\label{eq:b1}$$ assuming that the foregrounds have been removed. The correlation expected between the HI signal at two different baselines and frequencies can be calculated (Paper I and references therein) using $$\begin{aligned}
\langle \S(\u_a,\nu_n) \S^{*}(\u_b,\nu_m) \rangle &=&
\left(\frac{2 k_B}{\lambda^2}\right)^2
\int d^2 U^{'} \tilde{A}(\u_a-\u^{'})
\tilde{A}^{*}(\u_b-\u^{'}) \nonumber \\
&\times& \frac{1}{2 \pi r_{\nu}^2}
\int d \kp \cos(\kp r_{\nu}^{'} \Delta \nu )
P_{\rm HI}(\frac{2 \pi \u^{'}}{r_{\nu}},\kp)
\label{eq:a3}\end{aligned}$$ where $P_{\rm HI}(\kpr,\kp)$ is the power spectrum of the 21-cm brightness temperature fluctuation in redshift space, $\left(\frac{2 k_B}{\lambda^2}\right)$ is the conversion from brightness temperature to specific intensity, $r_{\nu}$ is the comoving distance from the observer to the region where the HI radiation originated, $r_{\nu}^{'}=dr/d \nu $ is the radial conversion factor from frequency interval to comoving separation ($r_{\nu}=6.85$ Gpc and $r_{\nu}^{'}=11.5$ Mpc MHz$^{-1}$ for OWFA), $\Delta \nu=\nu_m-\nu_n$ and $\tilde{A}(\u)$ is the Fourier transform of the OWFA primary beam pattern.
The real and imaginary parts of the noise $\N(\u_a,\nu_n)$ both have equal variance $\sigma^2$ with $$\sigma
=\frac{\sqrt{2}k_BT_{sys}}{\eta A\sqrt{\Delta \nu_c t}}
\label{eq:a4}$$ where $T_{sys}$ is the system Temperature, $\eta$ and $A=b \times d$ are respectively the efficiency and the geometrical area of the antenna aperture and $t$ is the observing time. We have used the values $ T_{sys}=150 \, {\rm K}$, $\eta=0.6$ and $\Delta \nu_c=0.1 \, {\rm MHz}$ which are the same as in Paper I.
The noise in the visibilities measured at different baselines and frequency channels are uncorrelated. We then have $$\langle \N(\u_a,\nu_n) \N^{*}(\u_b,\nu_m) \rangle =\delta_{a,b}
\delta_{n,m} 2 \sigma^2 \,.
\label{eq:a5a}$$
Earlier studies (Paper I) have shown that for a fixed baseline ($U_a=U_b$) the HI signal (eq. \[eq:a3\]) is correlated out to frequency separations $\mid \nu_n -\nu_m \mid \sim 0.5 \, {\rm MHz}$ which spans several frequency channels. This implies that the data covariance matrix $\langle \V(\u_a,\nu_n) \V^{*}(\u_a,\nu_m) \rangle$ has considerable off-diagonal terms, a feature that is not very convenient for the Fisher matrix analysis.
For the Fisher Matrix analysis it is convenient to use the delay channels $\tau_m$ (Morales 2005) instead of the frequency channels $\nu_c$. We define $$v(\u_a,\tau_m)=\Delta \nu_c \sum_n e^{2 \pi i \tau_m (\nu_n-\nu_0)} \V(\u_a,\nu_n)
\label{eq:b4}$$ where $N_c$ is the number of frequency channels, $B= N_c \Delta \nu_c$ and $$\tau_m=\frac{m}{B} \hspace{1.3cm} \frac{-N_c}{2} < m \le \frac{N_c}{2}\, .$$ The covariance matrix $\langle v(\u_a,\tau_m) v^*(\u_b,\tau_n) \rangle $ is zero if $n \neq m$, and we need only consider the diagonal terms $n=m$. Defining $C_{ab}(m)=\langle v(\u_a,\tau_m) v^*(\u_b,\tau_m) \rangle $ we have
$$\begin{aligned}
C_{ab}(m) &=& \frac{B}{
r_{\nu}^2 r_{\nu}^{'}} \left(\frac{2 k_B}{\lambda^2}\right)^2
\int d^2 U^{'}
\tilde{A}(\u_a-\u^{'}) \tilde{A}^{*}(\u_b-\u^{'})
P_{\rm HI}(\frac{2 \pi \u^{'}}{r_{\nu}},\frac{2 \pi \tau_m}{r_{\nu}^{'}})
\nonumber \\
&+& \delta_{a,b} \, 2 \, \Delta \nu_c \, B \, \frac{\sigma^2}{(N_A - a)} \,.
\label{eq:a5}\end{aligned}$$
The factor $(N_A - a)^{-1}$ in the noise contribution accounts for the redundancy in the baseline distribution. The functions $\tilde{A}(\u_a-\u^{'})$ and $\tilde{A}^{*}(\u_b-\u^{'})$ have an overlap only if $a=b$ or $a=b \pm 1$ (Paper I). The visibilities at two baselines $\u_a$ and $\u_b$ are uncorrelated $(C_{ab}(m)=0)$ if $\mid a -b \mid > 1$ [*i.e.*]{} the visibility at a particular baseline $\u_a$ is only correlated with the other visibility measurements at the same baseline or the adjacent baselines $\u_{a \pm 1}$. Thus, for a fixed $m$, $C_{ab}(m)$ is a symmetric, tridiagonal matrix. Further, the noise only contributes to the diagonal terms, and it does not figure in the off-diagonal terms.
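The delay transform of eq. (\[eq:b4\]) is, up to the factor $\Delta\nu_c$, an unnormalised inverse DFT over the frequency channels: with $\nu_n-\nu_0=n\Delta\nu_c$ and $\tau_m=m/B$, the phase is $2\pi mn/N_c$. A sketch (our code, with synthetic visibilities rather than the OWFA signal model):

```python
import numpy as np

rng = np.random.default_rng(0)
N_c, dnu = 64, 0.1                      # channels and channel width (MHz)
V = rng.normal(size=N_c) + 1j * rng.normal(size=N_c)   # stand-in for V(u_a, nu_n)

# Direct evaluation of v(u_a, tau_m) = dnu * sum_n exp(2 pi i m n / N_c) V_n
n = np.arange(N_c)
v_direct = dnu * np.array([np.sum(np.exp(2j * np.pi * m * n / N_c) * V)
                           for m in range(N_c)])

# Same transform via the FFT; numpy's ifft carries a 1/N_c normalisation
v_fft = dnu * N_c * np.fft.ifft(V)
```

The two evaluations agree to machine precision, which is why the transform can be done cheaply channel by channel.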
We use the data covariance $C_{ab}(m)$ to calculate the Fisher Matrix using $$F_{\alpha \beta}=\frac{1}{2} \sum_m C^{-1}_{ab}(m)
[C_{bc}(m)]_{,\alpha} C^{-1}_{cd}(m) [C_{da}(m)]_{,\beta}
\label{eq:a6}$$ where the indices $a,b,c,d$ are to be summed over all baselines, and $\alpha,\beta$ refer to the various parameters which are to be estimated from the OWFA data.
It is possible to get further insight into the cosmological information contained in the data covariance $C_{ab}(m)$ by considering large baselines $U_a \gg d/\lambda$ where it is reasonable to assume that the function $\tilde{A}(\u_a-\u^{'}) \tilde{A}^{*}(\u_b-\u^{'}) $ in eq. (\[eq:a5\]) falls sharply in comparison to the slowly changing HI power spectrum as $\u^{'}$ is varied. The integral in eq. (\[eq:a5\]) can then be approximated as $$\approx P_{\rm HI}(\k)
\int d^2 U^{'}
\tilde{A}(\u_a-\u^{'}) \tilde{A}^{*}(\u_b-\u^{'})
\label{eq:a7}$$ where $$\k \equiv (\kpr,\kp)
\equiv (\pi[\u_a+\u_b]/r_{\nu},2 \pi \tau_m/r_{\nu}^{'})\,.
\label{eq:a8}$$ The integral in eq. (\[eq:a7\]) can be evaluated analytically, and we have the approximate formula $$\begin{aligned}
C_{ab}(m) = B \left[
\frac{(2 k_B)^2 (4 \delta_{a,b}+\delta_{a,b \pm 1})}
{ 9 \lambda^2 b d r_{\nu}^2 r_{\nu}^{'}}
P_{\rm HI}(\k)+ \frac{\delta_{a,b} \, 2 \, \Delta \nu_c
\sigma^2}{(N_A - a)} \,\right] \,.
\label{eq:a9}\end{aligned}$$
Figure \[fig:convo2\] shows a comparison of the signal contribution to the covariance matrix calculated using eq. (\[eq:a5\]) and the approximate formula eq. (\[eq:a9\]). We find that the results are in reasonably good agreement over the entire $U$ range for $m=1$. The agreement is better at large baselines $U \ge 30$ where the two curves are nearly indistinguishable. For $m>1$ (not shown here) the results are indistinguishable over the entire $U$ range. Although we use the approximate formula (eq. \[eq:a9\]) to interpret $C_{ab}(m)$ in the subsequent discussion, we have used eq. (\[eq:a5\]) to compute $C_{ab}(m)$ throughout the entire analysis.
Returning to eq. (\[eq:a9\]), first, the signal contribution to the diagonal terms is found to be $4$ times larger than that to the off-diagonal terms. Next, we see that each non-zero element of the covariance matrix $C_{ab}(m)$ corresponds to the HI power spectrum at a particular comoving Fourier mode $\k$ given by eq. (\[eq:a8\]). Each delay channel $\tau_m$ corresponds to a $k_{\parallel m}=2 \pi \tau_m/r_{\nu}^{'}$ which spans the values $$k_{\parallel m}=m \left( \frac{2 \pi }{B r_{\nu}^{'}} \right)
\hspace{1cm} \frac{-N_c}{2} < m \le \frac{N_c}{2}\, .$$ For a fixed $\tau_m$, the diagonal terms of $C_{ab}(m)$ with $\u_a=\u_b$ correspond to $k_{\perp a}= 2 \pi U_a/r_{\nu}$ which spans the values $$k_{\perp a}= a \left( \frac{2 \pi d}{\lambda r_{\nu}} \right) \hspace{1cm}
1 \le a \le N_A-1 \,,$$ and the off-diagonal terms of $C_{ab}(m)$ with $\u_b=\u_{a+1}$ correspond to $k_{\perp a}= \pi [U_a+U_b]/r_{\nu}$ which spans the values $$k_{\perp a}= (a \pm 0.5) \left( \frac{2 \pi d}{\lambda r_{\nu}} \right) \hspace{1cm}
1 \le a \le N_A-2 \,.$$ We see that the $k_{\perp}$ value probed by any off-diagonal term lies mid-way between the $k_{\perp}$ values probed by the two nearest diagonal terms. Considering both the diagonal and the off-diagonal terms, we find that the different $k_{\perp}$ values that will be probed by OWFA are spaced at an interval of $\Delta k_{\perp}= \pi d/(\lambda r_{\nu})$.
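The interleaving of the diagonal and off-diagonal $k_{\perp}$ values can be checked directly; the quantity $d/(\lambda r_{\nu})$ is set to an arbitrary placeholder value here:

```python
import numpy as np

# Interleaving of the k_perp values probed by the diagonal (a = b) and
# off-diagonal (b = a + 1) terms.  d/(lambda*r_nu) is a placeholder.
d_over_lr = 1.0                      # d / (lambda * r_nu), assumed
N_A = 6
a = np.arange(1, N_A)                          # 1 <= a <= N_A - 1
k_diag = a * 2 * np.pi * d_over_lr             # diagonal terms
a_off = np.arange(1, N_A - 1)                  # 1 <= a <= N_A - 2
k_off = (a_off + 0.5) * 2 * np.pi * d_over_lr  # off-diagonal terms

grid = np.sort(np.concatenate([k_diag, k_off]))
spacing = np.diff(grid)              # uniform spacing pi*d/(lambda*r_nu)
```

Merging the two sets gives a uniform grid with exactly half the spacing of the diagonal terms alone.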
\[tab:kperp\]
${\rm Mpc^{-1}}$ Phase I Phase II Phase III Phase IV
------------------------- --------------------- ----------------------- ----------------------- -----------------------
${k_{\perp}[min]}$ $1.1\times 10^{-2}$ $ 1.9\times 10^{-3} $ $ 9.5\times 10^{-4} $ $4.8\times 10^{-4} $
${k_{\perp}[max]}$ $4.8\times 10^{-1}$ $ 5.0\times 10^{-1} $ $ 5.1\times 10^{-1} $ $5.1\times 10^{-1} $
${k_{\parallel}[min]}$ $3.0\times 10^{-2}$ $ 1.8\times 10^{-2} $ $ 9.1\times 10^{-3} $ $4.6\times 10^{-3} $
${k_{\parallel}[max]} $ $2.73 $ $2.73 $ $ 2.73$ $2.73 $
: The $k_{\perp}$ and $k_{\parallel}$ range that will be probed by the different Phases of OWFA.
In addition to the HI signal and the noise considered in this paper, the OWFA visibilities will also contain a foreground contribution. For the purpose of this work we make the most optimistic assumption that the foregrounds are constant across the different frequency channels, and hence contribute only to the $\kp=0$ mode, which we exclude from the subsequent analysis. In reality the foreground contamination will possibly extend to other modes as well. Table \[tab:kperp\] shows the $k_{\perp},k_{\parallel}$ range that will be probed by the different Phases of OWFA. We see that for all the phases (except Phase I) the minimum value of $\kp$ is approximately $10$ times larger than the corresponding $\kperp[min]$. The sampling along $\kp$, which is set by $1/B$, has a spacing $\Delta \kp=\kp[min]$ which is also $\sim 5$ times larger than $\Delta \kperp=\kperp[min]/2$, which is set by the antenna spacing $d$. The maximum $\kp$ values are also approximately $4$ times larger than the corresponding $\kperp[max]$. It is thus clear that the sampling in $\kperp$ is quite different from the $\kpar$ sampling, and the $\kpar$ values are several times larger than the $\kperp$ values. This disparity in the $\kp$ and $\kperp$ coverage and sampling poses a problem for using OWFA to quantify redshift space distortion. We shall return to this in Section \[sec:sum\] where we discuss the results of our analysis.
Figure \[fig:pkHI1\] shows the $k=\mid \k \mid =\sqrt{\kp^2 + \kperp^2}$ range that will be probed by $C_{ab}(m)$ for different values of $m$. We see that the range $k \sim \kpar[min]$ to $k \sim \kperp[max]$ is probed for $m=1$. The $k$ range shifts to larger $k$ values as $m$ is increased, and the entire $k$ range lies beyond $1 \, {\rm Mpc}^{-1}$ for $m \ge 64$. Figure \[fig:pkHI2\] shows a histogram of all the different $k$ modes that will be probed by OWFA Phase II. We expect the number of modes $\Delta N_k$ in bins of constant width $\Delta k$ to scale as $\Delta N_k \sim k^2 \, \Delta k$ if the $\k$ modes are uniformly distributed in three dimensions (3D). The modes $\k$ have a 2D distribution for OWFA, and we expect $\Delta N_k \sim k \, \Delta k$ if the modes are uniformly distributed. However, we have seen that the distribution is not uniform ($\Delta \kpar$ and $\Delta \kperp$ have different values) and the histogram does not show the expected linear behaviour. The increase in $\Delta N_k$ is faster than linear; it peaks at $k \sim 1 \, {\rm Mpc}^{-1}$ and is nearly constant at $\sim 60 \, \%$ of the peak value for larger modes out to $k \le \kpar[max] \sim \, 3 \, {\rm Mpc}^{-1}$. It is clear that a very large fraction of the Fourier modes $k$ that will be probed by OWFA are in the range $ 1 -3 \, {\rm Mpc}^{-1}$. We see that the Fourier modes all lie in this range for $m \ge 64$ (Figure \[fig:pkHI1\]). The range $ k < 1 \, {\rm Mpc}^{-1}$ will be sampled by a relatively small fraction of the modes, and the range $ k < 0.1 \, {\rm Mpc}^{-1}$ will only be sampled for $m \le 5$.
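A quick numerical check of the counting argument, for an idealized uniform 2D grid of modes (unlike the actual non-uniform OWFA sampling); all grid parameters below are assumptions:

```python
import numpy as np

# For k-modes distributed uniformly on a 2D grid, the number of modes in
# bins of constant width Delta k grows linearly with k.  The grid here is
# illustrative, not the actual OWFA (k_perp, k_par) sampling.
kx = np.linspace(0.0, 1.0, 401)
kperp, kpar = np.meshgrid(kx, kx)
k = np.hypot(kperp, kpar).ravel()

dk = 0.05
edges = np.arange(dk, 0.75, dk)      # bins that stay inside the unit square
counts, _ = np.histogram(k, bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])

# least-squares slope of N(k) = c*k; the linear law should fit the counts
c = np.sum(counts * centers) / np.sum(centers**2)
```

For the non-uniform OWFA sampling this linear law breaks down, exactly as described above.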
Figure \[fig:convo1\] shows the diagonal and the off-diagonal elements of the signal contribution to the covariance matrix $C_{ab}(m)$ (eq. \[eq:a5\]). The noise contribution is also shown for reference. The noise contribution is independent of $m$, and it increases at the larger baselines which have a lower redundancy $N_{A}-a$. The power spectrum $P_{HI}(k)$ is a decreasing function of $k$ for $k \ge 0.1 \, {\rm Mpc}^{-1}$, and most of the modes that will be probed by OWFA lie in this range. For a fixed $m$, the signal contribution is nearly flat for $U < r_{\nu} m/(B r^{'}_{\nu})$ and then decreases if $U$ is increased further. For $m=1$, the signal at small baselines $U \le 10$ is comparable to the noise for $T=100 \, {\rm hr}$. The signal is smaller than the noise at larger baselines. The overall amplitude of the signal contribution decreases for larger values of $m$: the signal covariance falls by a factor of $\sim 10$ from $m=1$ to $m=8$, where it is comparable to the noise for $T=1,000 \, {\rm hr}$. The signal falls by another factor of $\sim 20$ from $m=8$ to $m=32$. We see that the HI signal is relatively more dominant at the small delay channels and the small baselines. The HI signal is considerably weaker at larger $m$ and $U$, and the noise also is considerably higher at the larger baselines.
Results {#fma}
=======
We have assumed that the HI gas, which is believed to be associated with galaxies, traces the underlying matter distribution with a constant scale independent large-scale linear HI bias $b_{HI}$. Incorporating redshift space distortion, we have the HI power spectrum $$P_{\HI}(\k)= A_{HI}^2 \, {\bar{T}^2} \, \left[ 1+ \beta\, {\mu^2} \right]^2
\,P(k) \,,
\label{eq:pk}$$ where $P(k)$ is the matter power spectrum, $\mu= \kp/k$, and $$\bar{T}(z) = 4.66 \, {\rm mK}
\, (1+z)^2\, \left(\frac{\Omega_b h^2}{0.022} \right) \,
\left(\frac{0.67}{h} \right) \, \left( \frac{H_0}{H(z)} \right) \,.$$
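Evaluating $\bar{T}(z)$ at $z=3.35$ requires $H(z)/H_0$, and hence a background cosmology; the flat $\Lambda$CDM parameter values below are assumed for illustration and are not quoted from this section:

```python
import numpy as np

# Numerically evaluating \bar{T}(z) at z = 3.35.  Omega_m, Omega_b h^2
# and h below are assumed fiducial choices, not values from the text.
z = 3.35
h, Ob_h2, Om = 0.67, 0.022, 0.3
H_over_H0 = np.sqrt(Om * (1 + z)**3 + (1 - Om))   # flat LCDM
Tbar = (4.66 * (1 + z)**2 * (Ob_h2 / 0.022)
        * (0.67 / h) / H_over_H0)                 # in mK
```

With these assumed parameters $\bar{T}$ comes out at the level of a few tens of mK; the brightness temperature of the fluctuations is further suppressed by $A_{HI}$.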
The parameter $A_{HI}$ in eq. (\[eq:pk\]) sets the overall amplitude of the HI power spectrum, and $A_{HI}= \bar{x}_{\HI} \, b_{HI}$ where $\bar{x}_{\HI}$ is the mean neutral hydrogen fraction. The parameter $\beta=f(\Omega)/ b_{HI}$ is the linear redshift distortion parameter. Note that the various terms in eq. (\[eq:pk\]) are all evaluated at the redshift where the HI radiation originated, which is $z=3.35$ for OWFA.
We have used the value $\bar{x}_{\HI} =0.02$ which corresponds to $\Omega_{gas}=10^{-3}$ from DLA observations (Prochaska & Wolfe 2009; Noterdaeme et al. 2012; Zafar et al. 2013) in the redshift range of our interest. N-body simulations (Bagla, Khandai & Datta 2010; Guha Sarkar et al. 2012) indicate that it is reasonably well justified to assume a constant HI bias $b_{HI}=2$ at wave numbers $k \le 1 \, {\rm Mpc}^{-1}$, and we have used this value for our entire analysis. This is also consistent with the semi-empirical simulations of Mar[í]{}n et al. (2010). Using these values and the cosmological parameter values assumed earlier, we have $A_{HI}=4.0\times 10^{-2}$ and $\beta=4.93 \times10^{-1}$, which serve as the fiducial values for our analysis.
We have assumed that $\bar{T}$ and the $\Lambda$CDM matter power spectrum $P(k)$ are precisely known, and we have used the Fisher matrix analysis to determine the accuracy with which it will be possible to measure the parameters $A_{HI}$ and $\beta$ using OWFA observations. The Fisher matrix analysis (eq. \[eq:a6\]) was carried out with the two parameters $q_1=\ln(A_{HI})$ and $q_2=\ln(\beta)$.
We first focus on estimating $A_{HI}$, the amplitude of the HI signal. The Fisher matrix element $\sqrt{F_{11}}$ gives the signal to noise ratio (SNR) for a detection of the HI signal ($A_{HI}$) provided the value of $\beta$ is precisely known a priori (Conditional SNR). The left panel of Figure \[fig:snr\] shows the expected Conditional SNR as a function of the observing time, and $t_C$ in Table \[tab:error\] summarizes the time requirements for $3-\sigma$ and $5-\sigma$ detections. In reality, the value of $\beta$ is not known a priori, and one hopes to measure this from HI observations. While the cosmological parameters which determine $f(\Omega)$ are known to a relatively high level of accuracy, there is no direct observational handle on the value of $b_{HI}$ at present. It is therefore necessary to allow for the possibility that $b_{HI}$ can actually have a value different from the $b_{HI}=2$ assumed here. A recent compilation of the results from several studies (Padmanabhan, Roy Choudhury & Refregier 2015) has constrained $b_{HI}$ to be in the range $1.090 \leq b_{HI} \leq 2.06$ in the redshift range $3.25 \leq z \leq 3.4$. In our analysis we have allowed $b_{HI}$ to have a value in the larger interval $1.0 \leq b_{HI} \leq 3.0$, and we have marginalized $\beta$ over the corresponding interval $0.329 \leq \beta \leq 0.986$. The right panel of Figure \[fig:snr\] shows the expected Marginalized SNR as a function of the observing time, and $t_M$ in Table \[tab:error\] summarizes the time requirements for $3-\sigma$ and $5-\sigma$ detections.
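The distinction between the Conditional and Marginalized SNR can be made concrete with a $2\times 2$ Fisher matrix for $q_1=\ln(A_{HI})$ and $q_2=\ln(\beta)$: the conditional error on $q_1$ is $1/\sqrt{F_{11}}$, while the marginalized error is $\sqrt{(F^{-1})_{11}}$. The matrix entries below are illustrative, not OWFA values:

```python
import numpy as np

# Conditional vs marginalized errors from a 2x2 Fisher matrix for
# q1 = ln(A_HI) and q2 = ln(beta).  The entries below are illustrative.
F = np.array([[25.0, 6.0],
              [6.0, 4.0]])

snr_conditional = np.sqrt(F[0, 0])            # beta known a priori
cov = np.linalg.inv(F)                        # parameter covariance
snr_marginalized = 1.0 / np.sqrt(cov[0, 0])   # beta marginalized over
```

Marginalizing over $\beta$ can only inflate the error on $A_{HI}$, so the Marginalized SNR never exceeds the Conditional one.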
\[tab:error\]
Phase SNR $t_C\,({\rm hr})$ $ t_M\, ({\rm hr})$
----------- ------ -------------------- ---------------------- --
Phase I 5, 3 $800,350 $ $ 1190,390 $
Phase II 5, 3 $110, 60 $ $150,70 $
Phase III 5, 3 $50, 20 $ $50,20 $
Phase IV 5, 3 $20, 10 $ $ 25,15 $
: Here $t_C$ ($t_M$) is the observing time required for the Conditional (Marginalized) SNR to reach the values $5$ and $3$ indicated in the Table.
We find (Figure \[fig:snr\]) that for small observing times $(t \le 50 \, {\rm hr})$, where the visibilities are dominated by the system noise, the Conditional and the Marginalized SNR both increase as ${\rm SNR} \propto t$. The increase in the SNR is slower for larger observing times, and at very large observing times (not shown here) it is expected to saturate at a limiting value set by the cosmic variance. We see (Table \[tab:error\]) that $\sim 1190 \, {\rm hr}$ of observation are needed for a $5-\sigma$ detection with Phase I. The corresponding observing time for Phase II falls drastically to $110 \, {\rm hr}$ and $150 \, {\rm hr}$ for the Conditional and the Marginalized cases respectively. For Phase II, the HI signal is largely dominated by the low wave numbers $k\leq 0.2 \, {\rm Mpc}^{-1}$ (discussed later). Phase I, which has a larger antenna spacing and a smaller frequency bandwidth, does not cover many of the low $k$ modes which dominate the signal contribution for Phase II. The required observing times for Phases III and IV are $\sim 0.5$ and $\sim 0.25$ times those for Phase II respectively. The Marginalized SNR is somewhat smaller than the Conditional one; the difference, however, is not very large. The required observing time does not differ very much except for Phase II, where it increases from $110 \, {\rm hr}$ to $150 \, {\rm hr}$ for a $5-\sigma$ detection.
We have considered the joint estimation of the two parameters $A_{HI}$ and $\beta$ using OWFA. Figure \[fig:contour\] shows the expected $1 \sigma$ confidence intervals for $\Delta \beta/\beta$ and $\Delta A_{HI}/A_{HI}$ with three different observing times (630, 1600 and 4000 hr) for Phases II, III and IV. For Phase II, a joint estimation of the parameters $A_{HI}$ and $\beta$ is possible with 15% and 60% errors respectively using 1600 hr of observation. The errors on the parameters $A_{HI}$ and $\beta$ for 4000 hr are $\sim 2$ times smaller as compared to 1600 hr. The constraints are tighter for Phases III and IV. A joint detection of $A_{HI}$ and $\beta$ with 3% and 15% errors respectively is feasible with 1600 hr of observation with Phase IV.
It is interesting to investigate the $k$ range that contributes most to the HI signal at OWFA. We have seen that the Fourier modes $k$ sampled by OWFA are predominantly in the range $1 \le k \le 3 \, {\rm Mpc}^{-1}$, and there are relatively few modes in the range $ k \le 1 \, {\rm Mpc}^{-1}$ (Figure \[fig:pkHI2\]). However, the HI signal (Figure \[fig:convo1\]) is much stronger at the smaller modes, whereas the larger modes have a weaker HI signal and are dominated by the noise. It is therefore not evident which $k$ range contributes the most to the OWFA HI signal detection. Figure \[fig:pkbin\] shows the relative contributions to the Fisher matrix from the different $k$ modes. We see that for $t=150 \, {\rm hr}$, which corresponds to a $5-\sigma$ detection, the bulk of the contribution is from the range $ k \le 0.1 \, {\rm Mpc}^{-1}$; the larger modes do not contribute much to the signal. For $t=800 \, {\rm hr}$, the contributing range is slightly larger, $k \le 0.2 \, {\rm Mpc}^{-1}$, and the contribution peaks around $k \approx 0.1 \, {\rm Mpc}^{-1}$. In a nutshell, the OWFA HI signal is predominantly from the $k$ range $0.018 \le k \le 0.2 \, {\rm Mpc}^{-1}$; the larger modes, though abundant, do not contribute much to the HI signal.
Summary and conclusions {#sec:sum}
=======================
We have considered four different Phases of OWFA, and studied the prospects of detecting the redshifted 21-cm HI signal at $326.5 \, {\rm MHz}$ which corresponds to redshift $z = 3.35$. Phases I and II are currently under development and are expected to be functional in the near future. Phases III and IV are two hypothetical configurations which have been considered as possible future expansions. We have used the Fisher matrix analysis to predict the accuracy with which it will be possible to estimate the two parameters $A_{HI}$ and $\beta$ using OWFA. Here $A_{HI}$ is the amplitude of the 21-cm brightness temperature power spectrum and $\beta$ is the linear redshift space distortion parameter. For the purpose of this work we make the most optimistic assumption that the foreground contributions do not change across different frequency channels, and hence they only contribute to the $\kp=0$ mode. In reality the foreground contamination will extend to other modes also. Further, the chromatic response of the interferometer, calibration errors, systematics in the receivers and radio-frequency interference (RFI) have not been considered in this paper.
Focusing first on just detecting the HI signal, we have marginalized over $\beta$ and considered the error estimates on $A_{HI}$ alone. We find that a $5-\sigma$ detection of the HI signal is possible with $1190$ and $150 \, {\rm hr}$ of observation for Phases I and II respectively. The required observing time falls to $\sim 0.5$ and $\sim 0.25$ times the Phase II value for Phases III and IV respectively. We find that there is a significant improvement in the prospects of a detection using Phase II as compared to Phase I, and we have mainly considered Phase II for much of the discussion in the paper.
We have also considered the joint estimation of the parameters $A_{HI}$ and $\beta$. For Phase II, a joint estimation of the parameters $A_{HI}$ and $\beta$ is possible with 15% and 60% errors respectively using 1600 hr of observation. To estimate $\beta$ it is necessary to sample Fourier modes $\k$ of a fixed magnitude $k$ which are oriented in different directions with respect to the line of sight. In other words, $\mu=\kpar/k$ should uniformly span the entire range $-1 \le \mu \le 1$. However, the $\kpar$ values are much larger than $\kperp$, and the Fourier modes are largely concentrated around $\mu \sim 1$ for Phase II (Section \[sec:vc\]). The restriction arises from the limited OWFA frequency bandwidth (Table \[tab:array\]), which is set by the anti-aliasing filter.
Multi-field observations and a larger bandwidth ($> 30 \,{\rm MHz}$) of the OWFA hold the potential to probe the expansion history and constrain cosmological parameters using BAO measurements from the large-scale HI fluctuations at $z = 3.35$. Anisotropies in the clustering pattern of redshifted 21-cm maps at this redshift, produced by the Alcock–Paczynski effect, offer the possibility of probing cosmology and structure formation. It is also possible to constrain neutrino masses using OWFA and compare the constraints with those from different fields of cosmology (LSS, CMBR, BBN). Thus the OWFA could provide highly complementary constraints on neutrino masses. We leave the investigation of such issues for future studies.
The present work has assumed that the shape of the HI power spectrum is exactly determined by the $\Lambda$CDM model, and has only focused on estimating the overall amplitude $A_{HI}$ from OWFA observations. The OWFA HI signal is predominantly from the $k$ range $0.02 \le k \le 0.2 \, {\rm Mpc}^{-1}$. It is possible to use OWFA observations to estimate $P_{HI}(k)$ in several separate bins over this $k$ range, without assuming anything about the shape of the HI power spectrum. In a forthcoming paper, we plan to calculate Fisher matrix estimates for the binned power spectrum.
Acknowledgment {#acknowledgment .unnumbered}
==============
The authors acknowledge Jayaram N. Chengalur, Jasjeet S. Bagla, Tirthankar Roy Choudhury, C.R. Subrahmanya, P.K. Manoharan and Visweshwar Ram Marthi for useful discussions. AKS would like to acknowledge Rajesh Mondal and Suman Chatterjee for their help. SSA would like to acknowledge CTS, IIT Kharagpur for the use of its facilities and the support by DST, India, under Project No. SR/FTP/PS-088/2010. SSA would also like to thank the authorities of IUCAA, Pune, India for providing the Visiting Associateship programme.
Ali, S. S., Bharadwaj, S. 2014, Journal of Astrophysics and Astronomy, 35, 157
Ali, S. S., Bharadwaj, S., & Pandey, S. K. 2006, , 366, 213
Ade, P. A. R., Aghanim, N., et al. 2014, A&A, 571, A16
Ansari, R., Campagne, J. E., Colom, P., et al. 2012, , 540, A129
Bagla J. S., Khandai N., Datta K. K., 2010, MNRAS, 407, 567
Bandura, K., Addison, G. E., Amiri, M., et al. 2014, arXiv:1406.2288
Bharadwaj, S., Nath, B., Sethi, S. K., 2001, JApA, 22, 21
Bharadwaj, S., Sethi, S. K., 2001, JApA, 22, 293
Bharadwaj S., Pandey S. K., 2003, JApA, 24, 23
Bharadwaj, S., & Srikant, P. S. 2004,JApA. 25, 67
Bharadwaj S., Ali S. S., 2005, , 356, 1519
Bharadwaj S., Sethi S., Saini T. D., 2009, Phys Rev. D, 79, 083538
Bull et al. 2015, ApJ, 803, 21
Chang, T.-C., Pen, U.-L., Peterson, J. B., McDonald, P., 2008, Phys Rev. Lett., 100, 091303
Eisenstein D. J., Hu W., 1998, ApJ, 496, 605
Ghosh, A., Bharadwaj, S., Ali, S. S., & Chengalur, J. N. 2011a, , 411, 2426
Ghosh, A., Bharadwaj, S., Ali, S. S., & Chengalur, J. N. 2011b, , 418, 2584
Guha Sarkar, T., Mitra, S., Majumdar, S., Choudhury, T. R., 2012, MNRAS, 421, 3570
Guha Sarkar, T., & Hazra, D. K. 2013, , 4, 2
Marin F. A., Gnedin, N. Y., Seo, H.-J., Vallinotto, A. 2010, Ap.J, 718, 972
Mao, X.-C. 2012, , 744, 29
Morales, M. F. 2005, ApJ, 619, 678
Noterdaeme, P., Petitjean, P., Carithers, W. C., et al. 2012, , 547, L1
Padmanabhan, H., Roy Choudhury, T., & Refregier, A. 2015, MNRAS, 447, 3745
Prasad, P., Subrahmanya, C. R. 2011a, arXiv:1102.0148.
Prasad, P., Subrahmanya, C. R. 2011b, Experimental Astronomy, 31, 1
Prochaska, J. X., Wolfe, A. M. 2009, ApJ, 696, 1543
Ram Marthi, V., Chengalur, J., 2014, MNRAS, 437 (1), 524
Sarma, N. V. G., Joshi, M. N., Bagri, D. S., Ananthakrishnan, S. 1975, J. Instn Electronics Telecommun. Engr., 21, 110.
Seo, H.-J., Dodelson, S., Marriner, J., et al. 2010, ApJ, 721, 164
Swarup, G., Sarma, N. V. G., Joshi, M. N., Kapahi, V. K., Bagri, D. S., Damle, S. H., Ananthakrishnan, S., Balasubramanian, V., Bhave, S. S., Sinha, R. P. 1971, Nature Phys. Sci., 230, 185.
Villaescusa-Navarro, F., Viel, M., Datta, K. K., & Choudhury, T. R. 2014, , 9, 050
Visbal, E., Loeb, A., Wyithe, S., 2009, JCAP, 10, 30
Wyithe, J. S. B., & Loeb, A. 2009, , 397, 1926
Zafar, T., P[é]{}roux, C., Popping, A., et al. 2013, , 556, A141
[^1]: Email:somnath@phy.iitkgp.ernet.in
[^2]: Email:anjan@cts.iitkgp.ernet.in
[^3]: Email:saiyad@phys.jdvu.ac.in
[^4]: http://rac.ncra.tifr.res.in/
[^5]: http://chime.phas.ubc.ca/
[^6]: http://bao.berkeley.edu/
[^7]: A compact array of 1225 dishes of diameter 5 m each, based on BAOBAB and sited in South Africa
[^8]: http://www.skatelescope.org/
---
abstract: |
We start from a finite dimensional Higgs bundle description of a result of Burns on negative curvature property of the space of complex structures, then we apply the corresponding infinite dimensional Higgs bundle picture and obtain a precise curvature formula of a Weil–Petersson type metric for general relative Kähler fibrations. In particular, our curvature formula implies a Burns type negative curvature property of the base manifold for a special class of maximal variation Kähler fibrations (named Poisson–Kähler fibrations).
[[Mathematics Subject Classification]{} (2010): 32G20, 53C55, 53D20]{}
[[Keywords]{}: Space of complex structures, Higgs bundle, Kodaira–Spencer map, Kähler fibration, horizontal lift, Weil–Petersson metric, negative curvature]{}
address:
- 'XUEYUAN WAN: SCHOOL OF MATHEMATICS, KIAS, HEOGIRO 85, DONGDAEMUN-GU SEOUL, 02455, REPUBLIC OF KOREA'
- 'XU WANG: DEPARTMENT OF MATHEMATICAL SCIENCES, NORWEGIAN UNIVERSITY OF SCIENCE AND TECHNOLOGY, NO-7491 TRONDHEIM, NORWAY'
author:
- Xueyuan Wan and Xu Wang
title: 'Poisson–Kähler fibration I: curvature of the base manifold'
---
Introduction
============
Our original aim is to study the following problem in Kähler geometry:
*\[Negative curvature problem (NCP)\]: Let $p: {\ensuremath{\mathcal{X}}}\to {\ensuremath{\mathcal{B}}}$ be a proper holomorphic submersion between two Kähler manifolds. Assume that the Kodaira–Spencer map is injective. Does there exist a Kähler metric, say $\omega$, on ${\ensuremath{\mathcal{B}}}$ satisfying the following NC property ?*
*\[NC property\] — The holomorphic sectional curvature of $\omega$ is bounded above by a negative constant and the holomorphic bisectional curvature of $\omega$ is non-positive*.
It is known that the answer to NCP is “Yes” in the following cases:
1\. Each fiber is a compact Riemann surface: by the Ahlfors theorem (see [@A61] for the Kähler part, [@A61b; @Royden; @Wol] for NC, see also [@Bern4] for a very recent new proof), one may choose $\omega$ to be the classical Weil–Petersson metric;
2\. The canonical line bundle of each fiber is Hermitian flat: follows from the standard theory of variations of Hodge structure [@G84; @GS69] (in the trivial canonical line bundle case) and the Higgs bundle package [@Lu99; @LS04; @Lu01; @Wang_Higgs] (for the general case); in both cases $\omega$ is of Hodge type;
3\. The canonical line bundle of each fiber is positive and the base manifold is one dimensional: follows from the main theorem in [@To-Yeung] and [@BPW].
**Remark**: There is also a weak algebraic version of NCP, called the “Viehweg–Zuo conjecture”, which has been proved by Popa and Schnell [@PS] recently in the case where each fiber has a good minimal model; see [@VZ03; @BPW; @CKT; @Deng] for related results.
Our approach to NCP is based on the following well known fact: the space $\mathcal J(V, \omega)$ of compatible complex structures on a (finite dimensional) symplectic vector space $(V, \omega)$ has a natural bounded symmetric domain structure (thus satisfies NC); the proof can be found in section 3.2. The recent Donaldson–Fujiki moment map picture gives a similar infinite dimensional space $\mathcal J(V, \omega)$. Formally, the proof of the NC property for $\mathcal J(V, \omega)$ generalizes to the infinite dimensional Donaldson–Fujiki space as well (see [@Fujiki] for the construction without proof). That is the reason why we believe that the answer to NCP should be “Yes” in general.
Another result related to NCP is Burns’ NC property [@Burns82; @CT][^1] along the leaves of a Monge–Ampère foliation. The key notion is the following:
A proper holomorphic submersion $p: ({\ensuremath{\mathcal{X}}}, \omega_{\ensuremath{\mathcal{X}}}) \to ({\ensuremath{\mathcal{B}}}, \omega_{\ensuremath{\mathcal{B}}})$ between two Kähler manifolds is said to be Poisson–Kähler if $$\label{eq1.1}
(\omega_{\ensuremath{\mathcal{X}}}- p^* \omega_{\ensuremath{\mathcal{B}}})^{n+1}\equiv 0$$ on ${\ensuremath{\mathcal{X}}}$, where $n$ denotes the dimension of the fibers. (We will say that $p$ is a Poisson–Kähler fibration and that $\omega_{\ensuremath{\mathcal{X}}}- p^* \omega_{\ensuremath{\mathcal{B}}}$ is a Poisson–Kähler form.)
Our main theorem is the following:
**Theorem A** \[Theorem \[th:wang-1\]\]: The answer to NCP is “Yes” for every Poisson–Kähler fibration.
**Remark**: Theorem A is also proved independently without using Higgs bundles by Berndtsson in [@Bern19]. A general curvature formula for *arbitrary* relative Kähler fibrations that implies Theorem A is given in Theorem \[th:main-r\].
Our main theorem suggests to study the following problem:
*Problem: For a proper holomorphic submersion $p: ({\ensuremath{\mathcal{X}}}, \omega_{\ensuremath{\mathcal{X}}}) \to ({\ensuremath{\mathcal{B}}}, \omega_{\ensuremath{\mathcal{B}}})$ between two Kähler manifolds, find a natural condition under which $p$ is Poisson–Kähler.*
In general, we do not know how to solve the above problem; it seems to be related to a degenerate Donaldson $J$-equation since, in case ${\ensuremath{\mathcal{B}}}$ is one dimensional, (\[eq1.1\]) is equivalent to $$\omega_{\ensuremath{\mathcal{X}}}^{n+1}=(n+1)\omega_{{\ensuremath{\mathcal{X}}}}^{n} \wedge p^* \omega_{\ensuremath{\mathcal{B}}}$$ (it is degenerate because $p^* \omega_{\ensuremath{\mathcal{B}}}$ is only positive along the horizontal direction). Without any assumption, a Kähler fibration is in general not Poisson–Kähler. In fact, we are able to prove the following result (see [@Aikou] for related results).
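To spell out the one dimensional base reduction used above: when $\dim {\ensuremath{\mathcal{B}}}=1$, the wedge square of $p^*\omega_{\ensuremath{\mathcal{B}}}$ vanishes, so the binomial expansion of the Poisson–Kähler condition truncates after two terms:

```latex
(\omega_{\mathcal X}- p^* \omega_{\mathcal B})^{n+1}
  = \sum_{j=0}^{n+1} \binom{n+1}{j}\,
    \omega_{\mathcal X}^{\,n+1-j} \wedge \left(-p^*\omega_{\mathcal B}\right)^{j}
  = \omega_{\mathcal X}^{\,n+1} - (n+1)\,\omega_{\mathcal X}^{\,n} \wedge p^*\omega_{\mathcal B},
```

since $(p^*\omega_{\mathcal B})^{j}=0$ for $j\ge 2$; setting the right hand side to zero gives the stated equivalence.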
**Theorem B** \[Theorem \[thm6.1\]\]: Let $E$ be a holomorphic vector bundle over a compact Kähler manifold ${\ensuremath{\mathcal{B}}}$. Let $P (E):= (E\setminus\{0\})/\mathbb C^*$ be the projectivization of $E$. Then the following are equivalent:
- There exists a hermitian metric $h$ on $E$ such that $\Theta(E, h)=\alpha\otimes \text{Id}_E$, where $\alpha$ is a $(1,1)$-form on ${\ensuremath{\mathcal{B}}}$ and $\text{Id}_E$ denotes the identity map on $E$;
- There exists a Poisson–Kähler structure on $P(E)\to {\ensuremath{\mathcal{B}}}$.
In case $\dim {\ensuremath{\mathcal{B}}}=1$, both are equivalent to polystability of $E$ (in the sense of Mumford).
The above theorem suggests looking for a certain stability criterion (or Hermitian–Einstein condition) for the Poisson–Kähler property. As an attempt, we obtain the following result:
**Theorem C**: A proper holomorphic submersion $p: ({\ensuremath{\mathcal{X}}}, \omega_{\ensuremath{\mathcal{X}}}) \to ({\ensuremath{\mathcal{B}}}, \omega_{\ensuremath{\mathcal{B}}})$ between two Kähler manifolds is Poisson–Kähler if and only if the following infinite rank quasi vector bundle $$\mathcal A:=\{\mathcal A_t\}_{t\in{\ensuremath{\mathcal{B}}}}$$ is Higgs flat, where each fiber $\mathcal A_t$ denotes the space of smooth differential forms on $X_t$.
**Remark 1**: It is known that a finite dimensional flat bundle is Higgs flat if and only if it is semi-simple (see Theorem 1, page 19 in [@Simpson] for the precise statement). In general we do not know how to generalize this criterion to the above infinite rank bundle $\mathcal A$.
**Remark 2**: The Poisson–Kähler condition is in general stronger than the geodesic–Einstein condition in [@FLW; @Wan2]. It is known that (see [@Arezzo-Tian; @Sun10]) every relative Kähler fibration is Poisson–Kähler locally in the following sense: for every $t\in {\ensuremath{\mathcal{B}}}$, there exists a small open neighborhood $U$ of $t$ such that the fibration from $p^{-1}(U)$ to $U$ possesses a Poisson–Kähler structure. In [@WW] we will continue the study of the existence theory of Poisson–Kähler structure and related results.
The paper is organized as follows:
In section 2, we recall several basic notions on Kähler fibrations and prove a few basic properties of Poisson–Kähler fibrations. In section 3, we give a detailed introduction of the bounded symmetric space structure on the complex structure space $\mathcal J(V, \omega)$ and introduce the Higgs bundle approach to Burns’ result. The main section is section 4, which contains two proofs of Theorem A, a generalization of Schumacher’s formula and an explicit curvature formula for $\omega_{DF}$ for general relative Kähler fibrations. Examples of Poisson–Kähler fibrations are given in section 5. Theorem B is proved in section 6. The proof of Theorem C and the third proof of Theorem A are given in the appendix (section 7).
**Acknowledgments**: We would like to thank Bo Berndtsson and Ya Deng for several useful discussions about the topics of this paper.
Preliminaries
=============
Relative Kähler fibration
-------------------------
\[de:rkf\] We call a proper holomorphic submersion, $p: (\mathcal X, \omega)\to\mathcal B$, between two complex manifolds a relative Kähler fibration if $\omega$ is a real, smooth, $d$-closed $(1,1)$-form on ${\ensuremath{\mathcal{X}}}$ and $\omega$ is positive on each fiber, $X_t:=p^{-1}(t)$, of $p$.
\[de:horizontal-vector-field\] Let $p: (\mathcal X, \omega)\to\mathcal B$ be a relative Kähler fibration. By vertical vector fields, we mean vector fields on $\mathcal X$ that are tangent to the fibers. A vector field $V$ on $\mathcal X$ is said to be horizontal with respect to $\omega$ if $$\omega(V, W)=0,$$ for every vertical $W$.
**Remark**: $\omega$ defines a natural inner product (not positive semi-definite in general) by $$\label{eq:hermitian}
\langle V, W\rangle_{\omega}=\omega (V, J\overline W),$$ where $J$ is the complex structure on $\mathcal X$. Since $\omega$ has degree $(1,1)$, we have $$\langle V^{1,0}, W^{1,0} \rangle_{\omega}= -i \, \omega(V^{1,0}, \overline{W^{1,0}}), \ \ \langle V^{1,0}, W^{0,1} \rangle_{\omega}=0,$$ for all $(1,0)$-vector fields $ V^{1,0}, W^{1,0} $ and every $(0,1)$-vector field $W^{0,1}$. Moreover, since $\omega$ is real, we have $$\langle V, W\rangle_{\omega}= \overline{\langle W, V\rangle_{\omega}}.$$ Positivity of $\omega$ on fibers is equivalent to the inner product being positive definite on the space of vertical vector fields. We say that $V$ is *orthogonal* to $W$ with respect to $\omega$ if $$\langle V, W\rangle_{\omega}=0.$$ Thus a vector field is horizontal if and only if it is orthogonal to all vertical vector fields.
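In local holomorphic coordinates this inner product has the expected coordinate expression; here is the short computation from the definition (our notation, with $z^A$ running over all coordinates on ${\ensuremath{\mathcal{X}}}$):

```latex
% Write \omega = i \sum g_{A\bar B}\, dz^A \wedge d\bar z^B in local holomorphic
% coordinates z^A on \mathcal X.  For (1,0)-vector fields
% V = \sum v^A \partial/\partial z^A and W = \sum w^B \partial/\partial z^B,
\langle V, W \rangle_{\omega}
  = -i\,\omega\bigl(V, \overline{W}\bigr)
  = \sum_{A,B} g_{A\bar B}\, v^A\, \overline{w^B},
% so on vertical fields \langle\cdot,\cdot\rangle_{\omega} is governed by the
% fiberwise coefficient matrix (g_{\alpha\bar\beta}), which is positive
% definite exactly when \omega is positive on the fibers.
```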
\[de:horizontal-lift\] Let $p: (\mathcal X, \omega)\to\mathcal B$ be a relative Kähler fibration. Let $v$ be a vector field on $\mathcal B$. A vector field $V$ on $\mathcal X$ is said to be a horizontal lift of $v$ with respect to $\omega$ if $V$ is horizontal and $p_* V=v$.
The following proposition is a generalization of Proposition 4.1 in [@BPW].
Every vector field on $\mathcal B$ has a unique horizontal lift. The horizontal lift of a $(1,0)$-vector field (resp. $(0,1)$-vector field) is again a $(1,0)$-vector field (resp. $(0,1)$-vector field).
Assume that $v$ on ${\ensuremath{\mathcal{B}}}$ has two horizontal lifts, say $V^1, V^2$. Then we have that $p_*(V^1-V^2)=0$. Thus $V^1-V^2$ is vertical. Since $V^1-V^2$ is also horizontal, we have $$\langle V^1-V^2, V^1-V^2\rangle_{\omega}=0,$$ which gives $V^1=V^2$ since $\omega$ is positive on fibers. It now suffices to prove that every $(1,0)$-vector field possesses a horizontal $(1,0)$-lift. Let $\{t^j\}$ be a holomorphic local coordinate system on $\mathcal B$. Since $p$ is a holomorphic fibration, we can find ${\zeta^\alpha}$ such that $\{t^j, \zeta^\alpha\}$ is a holomorphic local coordinate system on $\mathcal X$. Let us write $$\omega=i \sum g_{\alpha\bar\beta} \ d\zeta^\alpha \wedge d\bar\zeta^\beta + i\sum g_{j \bar\beta} \ dt^j \wedge d\bar\zeta^\beta +
i \sum g_{\alpha\bar k} \ d\zeta^\alpha \wedge d\bar t^k + i \sum g_{j \bar k} \ dt^j \wedge d\bar t^k.$$ Then we know that each $$\label{eq:Vj}
V_j:=\frac{\partial}{\partial t^j}-\sum g_{j \bar\beta} g^{\bar \beta\alpha } \frac{\partial}{\partial \zeta^\alpha},$$ is a horizontal lift of $\frac{\partial}{\partial t^j}$, where $(g^{\bar \beta\alpha})$ is the inverse matrix of $(g_{\alpha\bar\beta})$, i.e. $$\sum g^{\bar \beta\alpha }g_{\alpha\bar \mu}=\delta_{\beta\mu}.$$ The proposition follows since the space of vector fields on $\mathcal B$ is generated by $\{\frac{\partial}{\partial t^j}, \frac{\partial}{\partial \bar t^k}\}$.
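For later use, the horizontality of $V_j$ can be made explicit by contracting it into $\omega$; this is a routine check in the notation of the proof (our computation):

```latex
% Since V_j has type (1,0), contracting into \omega produces only (0,1)-forms:
V_j \,\rfloor\, \omega
  = i\sum_\beta \Bigl(g_{j\bar\beta}
      - \sum_{\mu,\alpha} g_{j\bar\mu}\, g^{\bar\mu\alpha}\, g_{\alpha\bar\beta}\Bigr) d\bar\zeta^\beta
  + i\sum_k \Bigl(g_{j\bar k}
      - \sum_{\beta,\alpha} g_{j\bar\beta}\, g^{\bar\beta\alpha}\, g_{\alpha\bar k}\Bigr) d\bar t^k .
% The d\bar\zeta^\beta coefficients vanish by the inverse relation
% \sum g^{\bar\mu\alpha} g_{\alpha\bar\beta} = \delta_{\mu\beta}, so
% V_j \,\rfloor\, \omega is a combination of the d\bar t^k alone; hence V_j is
% horizontal, and by uniqueness it is the horizontal lift of \partial/\partial t^j.
```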
We shall also use the following definition (from [@BPW]), which is dual to Definition \[de:horizontal-vector-field\].
\[de:vertical-form\] A differential one-form on $\mathcal X$ is said to be horizontal if it vanishes on the space of vertical vector fields. A differential one-form on $\mathcal X$ is said to be vertical if it vanishes on the space of horizontal vector fields.
The following proposition is a generalization of Lemma 6.1 in [@Wang15].
\[pr:rkf\] Let $\{V_j\}$ be the horizontal lift vector fields defined in the proof above. Then we have
1. $[V_j, V_k]=0$;
2. Let $n$ be the complex dimension of the fibers. Put $$\label{eq: cjk}
c_{j\bar k}=\langle V_j, V_k\rangle_{\omega}, \
c(\omega)=i \sum c_{j\bar k}\, dt^j\wedge d\bar t^k, \ \omega':=\omega-c(\omega).$$ Then $(\omega')^{n+1}\equiv 0$;
3. $[V_j, \bar V_k] \,\rfloor\, (\omega|_{X_t})=i (dc_{j\bar k})|_{X_t}$;
4. $[V_j, \bar V_k]\equiv 0$ for all $j,k$ if and only if $d\omega'=0$.
*(1)*: A direct computation shows that $[V_j, V_k]$ is vertical. Since $\omega$ is non-degenerate on the fibers, it is enough to prove that $[V_j, V_k] \, \rfloor \, \omega=0$ on fibers. Notice that $$[V_j, V_k] \, \rfloor \, \omega=(L_{V_j}V_k) \, \rfloor \, \omega=L_{V_j}(V_k\, \rfloor \, \omega)-V_k \, \rfloor \, L_{V_j}\omega.$$ Contracting $V_j$ into the expression of $\omega$ above, we have $$\label{eq:vj-omega}
V_j\, \rfloor \, \omega=i \sum c_{j\bar l} \, d\bar t^l.$$ Applying the Cartan formula, we get $$\label{eq:wang-1}
[V_j, V_k] \, \rfloor \, \omega=i \sum (V_j \, \rfloor\, d c_{k\bar l} )\, d\bar t^l- i \sum (V_k \, \rfloor\, d c_{j\bar l} )\, d\bar t^l.$$ Thus $[V_j, V_k] \, \rfloor \, \omega=0$ on fibers.
*(2)*: Notice that $$\langle V_j, V_k\rangle_{\omega'}\equiv 0, \ \langle V_j, V\rangle_{\omega}=\langle V_j, V\rangle_{\omega'} \equiv 0,$$ for every vertical vector field $V$. Thus we know that $\omega'$ is zero on the horizontal distribution and the horizontal distribution is orthogonal to the vertical distribution with respect to $\omega'$. Since the vertical distribution is $n$-dimensional, we know $(\omega')^{n+1}\equiv 0$.
*(3)*: Notice that $$[V_j, \bar V_k] \, \rfloor \, \omega=(L_{V_j}\bar V_k) \, \rfloor \, \omega=L_{V_j}(\bar V_k\, \rfloor \, \omega)-\bar V_k \, \rfloor \, L_{V_j}\omega.$$ By , we have $$[V_j, \bar V_k] \, \rfloor \, \omega= i \ d c_{j\bar k}-i \sum (V_j \, \rfloor\, d c_{l \bar k} )\, d t^l- i \sum (\bar V_k \, \rfloor\, d c_{j\bar l} )\, d\bar t^l,$$ which gives $(3)$.
*(4)*: Since $d\omega=0$, by $(3)$, we know that $d\omega'=0$ gives $[V_j, \bar V_k]\equiv 0$. For the opposite direction, assume that $[V_j, \bar V_k]\equiv 0$ for all $j,k$; then by $(3)$, we know that $c_{j\bar k}$ depends only on $t\in {\ensuremath{\mathcal{B}}}$, thus by $(1)$ and the formula for $[V_j, V_k] \, \rfloor \, \omega$ above, we have $$0=[V_j, V_k] \, \rfloor \, \omega=i \sum \frac{\partial c_{k\bar l}}{\partial t^j} \, d\bar t^l- i \sum\frac{\partial c_{j\bar l}}{\partial t^k} \, d\bar t^l,$$ which implies that $c(\omega)$ is $d$-closed. Thus $d\omega'=0$.
We call $c_{j\bar k}$ the geodesic curvatures and $c(\omega)$ the geodesic curvature form.
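In the local coordinates of the proof above, the geodesic curvatures admit an explicit expression; the following is our direct computation from $c_{j\bar k}=\langle V_j, V_k\rangle_{\omega}$ and the formula for $V_j$:

```latex
% Expanding \langle V_j, V_k\rangle_\omega = \sum g_{A\bar B} V_j^A \overline{V_k^B}
% with V_j = \partial/\partial t^j
%   - \sum g_{j\bar\beta}\, g^{\bar\beta\alpha}\, \partial/\partial\zeta^\alpha,
% the three cross terms combine into a single correction term:
c_{j\bar k}
  = g_{j\bar k}
  - \sum_{\alpha,\beta} g_{j\bar\beta}\, g^{\bar\beta\alpha}\, g_{\alpha\bar k}.
% In particular the matrix (c_{j\bar k}) is the Schur complement of the fiber
% block (g_{\alpha\bar\beta}) in the full coefficient matrix of \omega.
```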
The above Proposition gives
\[pr:2018-12-17\] The horizontal distribution of a relative Kähler fibration is integrable if and only if each geodesic curvature $c_{j\bar k}$ is constant on fibers.
Poisson–Kähler fibration
------------------------
A relative Kähler fibration $p: (\mathcal X, \omega)\to\mathcal B$ is said to be Poisson–Kähler (we say that $\omega$ is Poisson–Kähler) if $\omega$ solves the homogeneous complex Monge–Ampère equation, i.e. $$\omega^{n+1}\equiv 0,$$ where $n$ denotes the complex dimension of the fibers. In general, a proper holomorphic submersion $p: ({\ensuremath{\mathcal{X}}}, \omega_{\ensuremath{\mathcal{X}}}) \to ({\ensuremath{\mathcal{B}}}, \omega_{\ensuremath{\mathcal{B}}})$ between two Kähler manifolds is said to be Poisson–Kähler if $$(\omega_{\ensuremath{\mathcal{X}}}- p^* \omega_{\ensuremath{\mathcal{B}}})^{n+1}\equiv 0,$$ (in which case we call $\omega_{\ensuremath{\mathcal{X}}}- p^* \omega_{\ensuremath{\mathcal{B}}}$ a Poisson–Kähler form).
**Remark 1**: By Proposition \[pr:rkf\], for a relative Kähler fibration $p: (\mathcal X, \omega)\to\mathcal B$, $d\omega'=0$ if and only if $[V_j, \bar V_k]\equiv 0$ for all $j,k$. Thus $\omega'$ is Poisson–Kähler if and only if the horizontal distribution associated to $\omega$ is integrable.
**Remark 2**: A relative Kähler fibration $p: (\mathcal X, \omega)\to\mathcal B$ is Poisson–Kähler if and only if $\omega$ has precisely $n$ positive eigenvalues and all the other eigenvalues are zero, which is equivalent to $c_{j\bar k}\equiv 0$ for all $j,k$ or $\omega'=\omega$.
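Remark 2 can be illustrated numerically at a single point when $\dim \mathcal B=1$: there $\omega^{n+1}$ is a multiple of the determinant of the full coefficient matrix of $\omega$, while the geodesic curvature $c=\langle V_1, V_1\rangle_{\omega}$ is the Schur complement of the fiber block. The following sketch uses made-up pointwise data; all names and the setup are ours, not from the paper:

```python
import numpy as np

# Pointwise sanity check over a 1-dimensional base: omega^{n+1} at a point is
# a multiple of det G, where
#   G = [[g_{alpha beta-bar}, g_{alpha t-bar}],
#        [g_{t beta-bar},     g_{t t-bar}   ]]
# is the full (n+1)x(n+1) Hermitian coefficient matrix of omega.  The geodesic
# curvature c is the Schur complement of the fiber block, so c = 0 at the
# point iff omega^{n+1} = 0 there.
rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
g_ff = A @ A.conj().T + n * np.eye(n)                        # fiber block, positive definite
g_tf = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # mixed terms g_{t beta-bar}

def omega_top_power_vanishes(g_tt):
    G = np.zeros((n + 1, n + 1), dtype=complex)
    G[:n, :n] = g_ff
    G[:n, n] = g_tf.conj()   # g_{alpha t-bar}
    G[n, :n] = g_tf          # g_{t beta-bar}
    G[n, n] = g_tt
    return abs(np.linalg.det(G)) < 1e-8

# Schur complement: c = g_{t t-bar} - sum g_{t beta-bar} g^{beta-bar alpha} g_{alpha t-bar}
schur = (g_tf @ np.linalg.solve(g_ff, g_tf.conj())).real
print(omega_top_power_vanishes(schur))        # c = 0: the top power vanishes
print(omega_top_power_vanishes(schur + 1.0))  # c = 1: it does not
```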
The Poisson–Kähler condition is closely related to the notion of a *Poisson map* in symplectic geometry.
\[pr:pk\] A relative Kähler fibration $p: (\mathcal X, \omega)\to\mathcal B$ is Poisson–Kähler if and only if for every open Kähler submanifold $(U, \omega_U)\subset \mathcal B$, $\omega_{p^{-1}(U)}:=\omega+p^*\omega_U$ is positive on $p^{-1}(U)$ and $$p: (p^{-1}(U), \omega_{p^{-1}(U)}) \to (U, \omega_U),$$ is a Poisson map, i.e. $$\label{eq:pk}
\langle p^* d f, p^* d g \rangle_{\omega_{p^{-1}(U)}}= p^* \langle d f , d g \rangle _{\omega_U},$$ for every pair of smooth functions $f,g$ on $U$, where $\langle \cdot, \cdot \rangle$ denotes the dual metric on the space of one-forms.
**Remark**: Notice that a Poisson–Kähler form is semi-positive. Thus if $\omega$ is Poisson–Kähler and $\omega_U$ is Kähler then $\omega_{p^{-1}(U)}:=\omega+p^*\omega_U$ is Kähler.
*Poisson–Kähler implies Poisson*: Let us define vector fields $V^f, V^g$ on ${\ensuremath{\mathcal{X}}}$ such that $$V^f \, \rfloor \, \omega_{p^{-1}(U)}= p^* df, \ V^g \, \rfloor \, \omega_{p^{-1}(U)}= p^* dg.$$ We know that $V^f, V^g$ are horizontal with respect to $\omega_{p^{-1}(U)}$ and $\omega$. Thus Poisson–Kählerness of $\omega$ gives $V^f \, \rfloor \, \omega =0$, which implies $$V^f \, \rfloor \, \omega_{p^{-1}(U)}= V^f \, \rfloor \, p^* \omega_U \ = p^* ( p_* V^f \, \rfloor \, \omega_U).$$ Comparing with $V^f \, \rfloor \, \omega_{p^{-1}(U)}= p^* df$, we get $$p_* V^f \, \rfloor \, \omega_U =df.$$ Hence we have $$p^* \langle d f , d g \rangle _{\omega_U}= p^* \langle p_* V^f , p_* V^g \rangle _{\omega_U}=\langle V^f , V^g \rangle _{p^* \omega_U}.$$ Notice that Poisson–Kählerness of $\omega$ implies $\langle V^f , V^g \rangle _{\omega}=0$. Thus we can replace $\langle V^f , V^g \rangle _{p^* \omega_U}$ by $\langle V^f , V^g \rangle _{\omega_{p^{-1}(U)}}=\langle p^*d f, p^* dg \rangle_{\omega_{p^{-1}(U)}}$, which gives the Poisson identity.
*Poisson implies Poisson–Kähler*: For every $t\in \mathcal B$, let us choose a Kähler neighborhood, say $(U, \omega_U)$, of $t$. Let $\{df^j\}$ be an orthonormal basis of $T_t^*\mathcal B$ with respect to $\omega_U$. The Poisson identity gives that the $p^* df^j$ are orthonormal with respect to $\omega_{p^{-1}(U)}$. For every $\zeta\in X_t$, let $\{u^\alpha\}$ be an orthonormal basis of the vertical part of $T^*_\zeta \mathcal X$ (see Definition \[de:vertical-form\]). Since the $p^*df^j$ are horizontal, we know $\{p^*df^j, u^\alpha\}$ defines an orthonormal basis of $T^*_\zeta \mathcal X$, thus $$\omega_{p^{-1}(U)} (\zeta)= i\sum p^*df^j \wedge \overline{p^*df^j} + i \sum u^\alpha \wedge \overline{u^\alpha}.$$ Notice that $p^*\omega_U=i\sum p^*df^j \wedge \overline{p^*df^j} $ at $\zeta$, hence $\omega (\zeta)= i \sum u^\alpha \wedge \overline{u^\alpha}$ is a vertical $(1,1)$-form, which implies $\omega^{n+1}(\zeta)= 0$ since the vertical dimension is $n$.
**Remark**: In general, we call a proper smooth submersion $p: (\mathcal X, \omega_{\mathcal X}) \to (\mathcal B, \omega_{\mathcal B})$ between two symplectic manifolds a *Poisson map* if $$\omega_{\mathcal X}(p^* df, p^* dg)= p^*\left(\omega_{\mathcal B}(df, dg) \right),$$ for every pair of smooth functions $f,g$ on $\mathcal B$, where $$\omega(\alpha, \beta):=-\omega(V, W), \qquad \text{if} \ \ V\rfloor \,\omega=\alpha, \ W \rfloor\, \omega=\beta.$$ Proposition \[pr:pk\] implies:
A proper holomorphic submersion $p: (\mathcal X, \omega_{\mathcal X}) \to (\mathcal B, \omega_{\mathcal B})$ between two Kähler manifolds is Poisson if and only if it is Poisson–Kähler.
Two types of Weil–Petersson metrics
-----------------------------------
Based on Definition 5.6 in [@Wang17] we shall use a relative Kähler form to define two types of Weil–Petersson metrics on the base manifold.
\[de:KST\] Let $p: (\mathcal X, \omega)\to \mathcal B$ be a relative Kähler fibration. Let $V_j$ be the **horizontal lift** of $\frac{\partial}{\partial t^j}$ with respect to $\omega$, as constructed above. We call $$\kappa_j:=({\ensuremath{\overline\partial}}V_j)|_{X_t},$$ the $\omega$-Kodaira–Spencer tensors on $X_t$.
**Remark**: From the above definition, we know that the $\omega$-Kodaira–Spencer tensors $\kappa_j$ are ${\ensuremath{\overline\partial}}$-closed $T_{X_t}$-valued $(0,1)$-forms on $X_t$.
\[de:DFWP\] Let $p: (\mathcal X, \omega)\to \mathcal B$ be a relative Kähler fibration. We call the metric on $\mathcal B$ defined by $$\langle \frac{\partial}{\partial t^j},\frac{\partial}{\partial t^k} \rangle_{\rm DF}(t):=\int_{X_t} \langle \kappa_j, \kappa_k \rangle_{\omega_t} \,\frac{\omega_t^n}{n!}, \ \ \omega_t:= \omega|_{X_t},$$ where the $\kappa_j$ are the $\omega$-Kodaira–Spencer tensors, the non-harmonic Weil–Petersson metric on $\mathcal B$.
**Remark**: The *non-harmonic* Weil–Petersson metric is defined by the $L^2$–inner product of the *$\omega$-Kodaira–Spencer tensors*. In general, it is *different* from the following *harmonic* Weil–Petersson metric defined by the *harmonic* Kodaira–Spencer tensors. We use the notation “DF” since in the Poisson–Kähler case our non-harmonic Weil–Petersson metric is precisely the Donaldson–Fujiki metric.
\[de:HWP\] Let $p: (\mathcal X, \omega)\to \mathcal B$ be a relative Kähler fibration. We call the metric on $\mathcal B$ defined by $$\langle \frac{\partial}{\partial t^j},\frac{\partial}{\partial t^k} \rangle_{\rm H}(t):=\int_{X_t} \langle \kappa_j^h , \kappa_k^h \rangle_{\omega_t} \,\frac{\omega_t^n}{n!}, \ \ \omega_t:= \omega|_{X_t},$$ the harmonic Weil–Petersson metric on $\mathcal B$, where $\kappa_j^h$ denotes the $\omega_t$-harmonic representative of the Kodaira–Spencer class $[\kappa_j]$.
**Remark 1**: The non-harmonic Weil–Petersson metric $\omega_{DF}$ is never smaller than the harmonic Weil–Petersson metric $\omega_H$ (they are equal if $\omega$ is Kähler–Einstein on fibers). In particular, if the Kodaira–Spencer map is injective then $\omega_{DF}$ must be non-degenerate.
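The comparison in Remark 1 is just the Pythagorean theorem for the Hodge decomposition; spelled out (our one-line argument):

```latex
% Each \kappa_j is \bar\partial-closed, so \kappa_j = \kappa_j^h + \bar\partial u_j
% with \kappa_j^h harmonic, and harmonic forms are L^2(\omega_t)-orthogonal to
% \bar\partial-exact ones.  Hence for every tangent vector \sum c^j \partial/\partial t^j,
\Bigl\| \sum c^j \kappa_j \Bigr\|^2_{L^2(\omega_t)}
  = \Bigl\| \sum c^j \kappa_j^h \Bigr\|^2_{L^2(\omega_t)}
  + \Bigl\| {\ensuremath{\overline\partial}} \bigl( \sum c^j u_j \bigr) \Bigr\|^2_{L^2(\omega_t)}
  \;\geq\; \Bigl\| \sum c^j \kappa_j^h \Bigr\|^2_{L^2(\omega_t)} ,
% so \omega_{DF} \geq \omega_H, with equality at t precisely when every
% \kappa_j is itself harmonic.
```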
**Remark 2**: It is proved in [@Wang16] that if the relative cotangent bundle is $(n-1)$-semi-positive then the bisectional curvature of the *harmonic Weil–Petersson metric* is semi-negative. But in general it is not easy to find fibrations with $(n-1)$-semi-positive relative cotangent bundle. The main theme of this paper is to use the *non-harmonic Weil–Petersson metric* to study the curvature properties of the base manifold.
Finite dimensional Higgs bundles and Burns’ result
==================================================
Our motivation comes from a natural finite dimensional Higgs bundle picture for the space of *compatible* complex structures on a $2n$-dimensional symplectic Euclidean space.
*List of notations*:
1\. $(V, \omega)$: a $2n$-dimensional real vector space $V$ with a symplectic form $\omega$;
2\. $\mathcal J(V, \omega)$: the space of $\omega$-compatible complex structures on $V$. Recall that a (linear) complex structure on $V$ means a real linear endomorphism, say $J$, of $V$ such that $J^2=-1$; $J$ is said to be compatible with $\omega$ if $\omega(u, Jv)$ defines an inner product on $V$;
3\. Fix a complex structure $J$ on $V$; *we shall use the same letter $J$ to denote the induced complex structure on $V^*$ and on its complexification $\mathbb C\otimes V^*$*. We call the $i$ (resp. $-i$) eigenspace of $J$ on $\mathbb C\otimes V^*$ the space of $(1,0)$ (resp. $(0,1)$) forms, and denote it by $\wedge_J^{1,0}$ (resp. $\wedge_J^{0,1}$).
4\. We will consider the product bundle $\mathcal H^1:=\mathcal J(V, \omega) \times (\mathbb C\otimes V^*)$ over $\mathcal J(V, \omega)$; notice that $$\mathcal H^1= \mathcal H^{0,1}\oplus \mathcal H^{1,0}, \ \ \mathcal H^{0,1}:=\{\wedge_J^{0,1}\}_{J\in\mathcal J(V, \omega)}, \ \ \mathcal H^{1,0}:=\{\wedge_J^{1,0}\}_{J\in\mathcal J(V, \omega)}.$$
5\. In general, we shall denote by $\wedge_J^{p,q}$ the space of $J$-$(p,q)$-forms and define $$\mathcal H^k= \oplus_{p+q=k} \mathcal H^{p,q}, \ \ \mathcal H^{p,q}:=\{\wedge_J^{p,q}\}_{J\in\mathcal J(V, \omega)}.$$
6\. The Kodaira–Nirenberg–Spencer tensor $\Phi$ of a complex structure $J'$ with respect to another complex structure $J$ is defined in Definition \[de3.1\]. With respect to a fixed frame $\{\xi^j\}$ of $\wedge_J^{1,0}$, one may write $\Phi$ as $$\Phi= \sum_{j,\, k=1}^n \Phi_{\bar j}^k \, \bar\xi^j \otimes \xi_k,$$ where $\{\xi_k\}$ denotes the dual of $\{\xi^j\}$. We call $\bm \Phi:=(\Phi_{\bar j}^k)$ the associated matrix of $\Phi$.
With the notations above, we are able to prove the following:
“Fact 1”: $\mathcal J(V, \omega)$ has a natural bounded symmetric domain structure;
“Fact 2”: Each $\mathcal H^k$ possesses a natural Higgs bundle structure and the associated Lu’s Hodge metric, up to a constant, is equal to the canonical negatively curved metric on $\mathcal J(V, \omega)$.
**Remark**: “Fact 1” is a well known result; the usual proof (as in Theorem 7.1 in [@Kempf]) is to look at the action of the symplectic group. But we shall introduce another (hopefully more explicit) proof in the following section by using the Kodaira–Nirenberg–Spencer tensor (KNS tensor). “Fact 2” might also be a known result, but we cannot find any directly related literature.
Kodaira–Nirenberg–Spencer description of $\mathcal J(V, \omega)$
----------------------------------------------------------------
The KNS description is a way to realise $\mathcal J(V, \omega)$ as an open set in $\mathbb C^N$. Let us start from the following lemma.
\[le3.1\] $\wedge_{J'}^{1,0} \cap \wedge_{J}^{0,1}=\{0\}$ for all $J, J'\in \mathcal J(V, \omega)$.
Just notice that if $u\in \wedge_{J'}^{1,0} \cap \wedge_{J}^{0,1}$ then $$-i \omega(u, \bar u)=\omega(u, J'\bar u)
\geq 0 \ (\text{since} \ u\in \wedge_{J'}^{1,0})$$ and $$i \omega(u, \bar u)=\omega(u, J\bar u)
\geq 0 \ (\text{since} \ u\in \wedge_{J}^{0,1}),$$ thus $u$ must be zero.
Since both $\wedge_{J'}^{1,0}$ and $\wedge_{J}^{0,1}$ are $n$ dimensional, the above lemma gives $$\label{eq:3.1}
\mathbb C\otimes V^*=\wedge_{J'}^{1,0} \oplus \wedge_{J}^{0,1}.$$ Compare it with $$\label{eq:3.2}
\mathbb C\otimes V^*=\wedge_J^{1,0} \oplus \wedge_{J}^{0,1}.$$
\[de3.1\] We call the mapping from $\wedge_J^{1,0}$ to $\wedge_{J}^{0,1}$ defined by the natural projection with respect to the decomposition $\mathbb C\otimes V^*=\wedge_{J'}^{1,0} \oplus \wedge_{J}^{0,1}$ the KNS map and denote it by $-\Phi$ (sometimes we write $\Phi$ as $\Phi(J';J)$ and think of it as a tensor in $\wedge_J^{-1,1}:= (\wedge_J^{1,0})^*\otimes \wedge_{J}^{0,1}$).
Every tensor $\sigma= u\otimes W\in \wedge_J^{-1,1}$ acts naturally on forms by $$\sigma \cdot v:=u \wedge (W\rfloor \,v).$$ In particular, if we choose a basis, say $\{\xi^j\}$, of $\wedge_J^{1,0}$ such that $$\omega= i\sum_{j=1}^n \xi^j\wedge \bar\xi^j$$ and $$\Phi= \sum_{j,\, k=1}^n \Phi_{\bar j}^k \, \bar\xi^j \otimes \xi_k,$$ where $\{\xi_k\}$ denotes the dual of $\{\xi^j\}$, then we have $$\Phi\cdot \omega=i \sum_{j,\, k=1}^n \Phi_{\bar j}^k \, \bar\xi^j \wedge \bar\xi^k.$$ Thus $$\Phi\cdot \omega =0 \Leftrightarrow \Phi_{\bar j}^k= \Phi_{\bar k}^j, \ \forall \ 1\leq j,k\leq n$$ Denote by $\mathfrak{gl}_n(\mathbb C)$ the space of $n$ by $n$ complex matrices and write the transpose of the matrix $\bm\Phi=(\Phi_{\bar j}^k)\in \mathfrak{gl}_n(\mathbb C)$ as $\bm \Phi^{\rm T}$, then we have $$\Phi\cdot \omega =0 \Leftrightarrow \bm \Phi=\bm \Phi^{\rm T}.$$
If $J, J'\in \mathcal J(V, \omega)$ then $\Phi= \Phi(J';J)$ satisfies $$\bm\Phi=\bm \Phi^{\rm T}$$ and $\bm \Phi \bar{\bm \Phi} <1$ (i.e. all eigenvalues of $\bm \Phi \bar{\bm \Phi}$ are less than $1$).
By the definition of $\Phi$, we know that $(1+\Phi) (\xi^k)$ is the projection of $\xi^k$ to $\wedge_{J'}^{1,0}$ with respect to . Thus $$\label{eq3.3}
(1+\Phi) (\xi^k) =\xi^k +\sum_{j=1}^n \Phi_{\bar j}^k \,\bar \xi^j \in \wedge_{J'}^{1,0}.$$ Now $ J'\in \mathcal J(V, \omega)$ implies that $\omega$ is a $J'$-$(1,1)$-form, thus the vector defined below is $J'$-$(0,1)$ $$V_{c} \,\rfloor\, \omega= \sum_{k=1}^n c_k (1+\Phi) (\xi^k).$$ Compute $$V_{c}=i \sum c_k (\bar\xi_k -\Phi_{\bar j}^k \xi_j).$$ Since $ J'\in \mathcal J(V, \omega)$ and each $V_c$ is $J'$-$(0,1)$, we get $$\omega (V_{c}, V_{c'}) \equiv 0,$$ (which is equivalent to $\bm \Phi=\bm\Phi^{\rm T}$) and for all $c=(c_1,\cdots, c_n) \neq 0$, $$i \omega(V_c, \bar V_c) >0, \ \ \text{notice that} \ V_c\in \wedge_{J'}^{(0,1)},$$ (which is equivalent to $\bm \Phi \bar{\bm \Phi} <1$).
**Remark**: The above proposition implies that $$\tau_{\xi} \circ \Phi_J: J'\mapsto \bm \Phi, \ \ \Phi_J(J')=\Phi(J'; J), \ \ \tau_{
\xi} (\Phi)= \bm \Phi,$$ maps $\mathcal J(V, \omega)$ to the following open set $${\rm BSD_{III}}:=\{\bm \Phi \in \mathfrak{gl}_n (\mathbb C): \bm \Phi=\bm\Phi^{\rm T}, \ \bm \Phi \bar{\bm \Phi} <1\}$$ in $\{\bm \Phi \in \mathfrak{gl}_n (\mathbb C): \bm \Phi=\bm\Phi^{\rm T}\} \simeq \mathbb C^{\frac{n(n+1)}{2}}$. On the other hand, if $
\bm \Phi \in {\rm BSD_{III}}$ then the associated tensor $\Phi$ naturally defines a complex structure, say $J'$, such that $$\wedge^{1,0}_{J'}={\rm Im} \, (1+\Phi),$$ and one may directly check that $J' \in \mathcal J(V, \omega)$. To summarize, we have:
Fix $J\in \mathcal J(V, \omega)$ and an orthonormal basis, say $\xi=\{\xi^j\}$, of $\wedge^{1,0}_J$ with respect to the $(\omega, J)$-metric. Then $$\tau_{\xi} \circ \Phi_J : \mathcal J(V, \omega) \to {\rm BSD_{III}}$$ is bijective.
Berndtsson approach
-------------------
This part comes from a discussion with Bo Berndtsson. The aim is to give a more accessible definition of the KNS tensor. Notice that a complex structure $J$ on $V$ naturally defines a $\mathbb C$-vector space structure on $V$ as follows: $$\label{eq:bo-1}
(a+bi)\cdot u:=au+bJu, \ \ \forall \ a,b\in\mathbb R, \ u\in V.$$ Thus any associated $\mathbb C$-basis, say $\{\xi_j\}$, gives a $\mathbb C^n$ realization of $(V, J)$. For any other complex structure $J'$ with an associated $\mathbb C$-basis $\{\xi'_j\}$, the $\mathbb R$-linear isomorphism defined by $$T': \xi_j \mapsto \xi'_j, \ \ \ J(\xi_j)\mapsto J'(\xi'_j),$$ is $\mathbb C$-linear as a map from $(V, J)$ to $(V, J')$. Two such maps give the same complex structure if and only if $$T''=T'S,$$ where $S$ is a $\mathbb C$-linear isomorphism on $(V, J)$. Hence one may write the space of complex structures as $GL(2n,\mathbb R)/GL(n,\mathbb C)$.
\[de3.2\] For $T\in GL(2n,\mathbb R)$ acting on $\mathbb C^n\simeq\mathbb R^{2n}$, we shall write $$T(z)=T_1 (z)+T_2 (z), \ \ T_1(z)=Az, \ \ T_2(z)=B\bar z, \ \ \forall \ z\in\mathbb C^n,$$ where $
T_1$ (resp. $T_2$) denotes the $\mathbb C$-linear (resp. anti-$\mathbb C$-linear) part of $T$, $A$ and $B$ are matrices. We say that $T$ is admissible if $T_1$ is invertible (i.e. $\det A\neq 0$), in which case we call the associated tensor in $(\mathbb C^n)^*\otimes \overline{\mathbb C^n}$ of $$\Phi(T):=T_1^{-1} T_2$$ the Berndtsson tensor (also denoted by $\Phi(T)$) of $T$.
**Remark 1**: Since $$\Phi(TS)=S^{-1}T_1^{-1} T_2 \bar S, \ \ \forall \ S\in GL(n,\mathbb C),$$ we know $\Phi(TS)=\Phi(T)$ as tensors in $(\mathbb C^n)^*\otimes \overline{\mathbb C^n}$. Thus $\Phi$ is well defined on $$\mathcal J_A:=AGL(2n,\mathbb R)/GL(n,\mathbb C),$$ where $AGL(2n,\mathbb R)$ denotes the space of admissible matrices in $GL(2n,\mathbb R)$. Notice that $\mathcal J(V, \omega)$ lies in $\mathcal J_A$ (see Lemma \[le3.1\]): in fact if $T^* (i\partial{\ensuremath{\overline\partial}}|z|^2)=i\partial{\ensuremath{\overline\partial}}|z|^2$ then $T_1$ must be invertible (otherwise $T^* (i\partial{\ensuremath{\overline\partial}}|z|^2)$ would be negative along a complex line). One may further show that $$\mathcal J(V, \omega)\simeq Sp(2n,\mathbb R)/U(n,\mathbb C).$$
**Remark 2**: We claim that the Berndtsson tensor is essentially equivalent to the KNS tensor in Definition \[de3.1\]. In fact, let $V_J^*$ be the $\mathbb C$-vector space $(V^*, J)$. One may verify that $$u\mapsto \frac{u-i\otimes Ju}{2},$$ is a $\mathbb C$-linear isomorphism from $V_{J}^*$ onto $\wedge_J^{1,0}$. Replacing $J$ by $-J$ one also gets a $\mathbb C$-linear isomorphism from $V_{-J}^*$ onto $\wedge_J^{0,1}$. This implies our claim.
Canonical homogeneous space structure on $\mathcal J(V, \omega)$
----------------------------------------------------------------
As a domain in $\mathbb C^{\frac{n(n+1)}{2}}$, ${\rm BSD_{III}}$ has a natural complex structure, whose pull back along $\tau_{\xi} \circ \Phi_J$ thus gives a complex structure on $\mathcal J(V, \omega)$. In this section, we shall prove that all the $\tau_{\xi} \circ \Phi_J$ pull back complex structures on $\mathcal J(V, \omega)$ are equivalent! In particular, it implies that the group of biholomorphic mappings of $\mathcal J(V, \omega)$ (the so-called automorphism group) contains $$(\tau_{\xi} \circ \Phi_J)^{-1} \circ (\tau_{\xi'} \circ \Phi_{J'}),$$ which maps $J'$ to $J$; thus the automorphism group of $\mathcal J(V, \omega)$ acts transitively. Hence $\mathcal J(V, \omega)$ is a special bounded homogeneous domain (usually called a *bounded symmetric domain of the third type*).
*Main idea*: Let $J(t)$, $|t|<1$, be a smooth curve in $\mathcal J(V, \omega)$. Applying the differential to $J(t)^2=-1$ and $\omega(u, Jv)=\omega(v, Ju)$ (here we look at $J$ as a complex structure on $V^*$ and $u, v\in V^*$), we get $$J_tJ=-JJ_t, \ \ \omega(u, J_t v)=\omega(v, J_t u).$$ Thus the tangent space of $\mathcal J(V, \omega)$ at $J(0)$ can be written as $$T_{J(0)}:=\{A\in {\rm End}(V^*): AJ(0)=-J(0)A, \ \ \omega(u, A v)=\omega(v, A u), \ \forall \ u, v\in V^* \}.$$ Notice that $A\in T_{J(0)}$ implies $AJ(0)\in T_{J(0)}$, hence the mapping $$A\mapsto AJ(0)$$ defines an almost complex structure, say $\bm J$, on $\mathcal J(V, \omega)$.
**Remark**: If we use the original definition of the complex structure $J$ as an endomorphism on $V$, then of course the associated tangent space will be a subspace of ${\rm End}(V)$. But, in contrast to the above $V^*$ formulation, the corresponding almost complex structure $\bm J$ will be defined by $$\label{eq:bmJ}
\bm J(A):= J(0)A, \ \ \forall \ A\in T_{J(0)}\subset {\rm End}(V).$$ The reason is as follows: fix $T\in {\rm End}(V)$; the associated $\# T\in {\rm End}(V^*)$ is then given by $$\# T (u):=u\circ T, \ \ \forall u \in V^*,$$ thus $\# (TS) (u)=u\circ T\circ S=\# S(u\circ T)$ gives $$\# (TS) =\# S \# T.$$
We call $\bm J$ the canonical almost complex structure on $\mathcal J(V, \omega)$.
The theorem below implies that $\bm J$ is integrable.
\[th3.4\] Each $\tau_{\xi} \circ \Phi_J : \mathcal J(V, \omega) \to {\rm BSD_{III}}$ is $\bm J$ holomorphic.
Since $\tau_{\xi}$ is $\mathbb C$ linear, it is enough to prove that $\Phi_J$ is $\bm J$ holomorphic. By the lemma below, we have $$\Phi_J(J(t))=2S(t)^{-1}-1, \ \ S(t):=1-JJ(t)$$ Thus the differential of $\Phi_J$ at $J(0)$ can be written as $$T: A \mapsto 2S(0)^{-1} JA S(0)^{-1}.$$ What we need to prove is $$T(\bm J A)=T(A) J$$ (notice that if $T(A)\in \wedge_J^{-1,1}$ then $T(A)J=i T(A)$, thus the natural complex structure on the image of $\Phi_J$ is given by $T(A) \mapsto T(A) J$), i.e. $$\label{eq3.4}
2S(0)^{-1} JAJ(0) S(0)^{-1}= 2S(0)^{-1} JA S(0)^{-1} J.$$ Computing $$S(0)^{-1} J=J-JS(0)^{-1}=(J S(0)-J)S(0)^{-1},$$ the identity above reduces to $$AJ(0)=A (J S(0)-J),$$ which follows from $J S(0)-J=J(0)$.
$\Phi_J(J')=(1+JJ')(1-JJ')^{-1}$.
Put $$\Psi= \frac{1-iJ}2 \frac{1-iJ'}2 + \frac{1+iJ}2 \frac{1+iJ'}2=\frac{1-JJ'}{2},$$ it suffices to show $$\Phi=(1-\Psi)\Psi^{-1},$$ i.e. we need to check that if $u\in \wedge^{1,0}_J$ then $$(1+\Phi) u \in \wedge^{1,0}_{J'} , \ \
-\Phi u \in \wedge^{0,1}_{J},$$ which follows since the projection to $\wedge^{1,0}_J$ with respect to $\mathbb C\otimes V^*=\wedge^{1,0}_J \oplus \wedge^{0,1}_J$ can be written as $(1-iJ)/2$ and $1+
\Phi=\Psi^{-1}$.
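The lemma can also be tested numerically. The following sketch is ours: it builds two random complex structures on $\mathbb R^{2n}$ (compatibility with a symplectic form is not needed for this particular identity), forms $\Phi=(1+JJ')(1-JJ')^{-1}$ on the complexification, and checks the two memberships used in the proof:

```python
import numpy as np

# For complex structures J, J' (random conjugates of the standard J0), the
# operator Phi = (1 + J J')(1 - J J')^{-1} should satisfy, for u in the
# (+i)-eigenspace of J:
#   (1 + Phi) u  lies in the (+i)-eigenspace of J', and
#   Phi u        lies in the (-i)-eigenspace of J.
rng = np.random.default_rng(1)
n = 2
J0 = np.block([[np.zeros((n, n)), -np.eye(n)], [np.eye(n), np.zeros((n, n))]])
A, B = (rng.standard_normal((2 * n, 2 * n)) for _ in range(2))
J = A @ J0 @ np.linalg.inv(A)
Jp = B @ J0 @ np.linalg.inv(B)          # J'
Phi = (np.eye(2 * n) + J @ Jp) @ np.linalg.inv(np.eye(2 * n) - J @ Jp)

u = A @ (np.eye(2 * n)[0] - 1j * np.eye(2 * n)[n])   # eigenvector: J u = i u
assert np.allclose(J @ u, 1j * u)
v = u + Phi @ u                                       # (1 + Phi) u
print(np.allclose(Jp @ v, 1j * v))                    # v is J'-(1,0)
print(np.allclose(J @ (Phi @ u), -1j * (Phi @ u)))    # Phi u is J-(0,1)
```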
**Remark 1**: One may also prove Theorem \[th3.4\] using the Berndtsson tensor. In fact, the complex structure $J_T$ determined by $$Tz=z+B\bar z$$ satisfies $J_T\cdot T=T\cdot i$. Taking the derivative, we get $$J'_T\cdot T+ J_T\cdot T'=T'\cdot i.$$ Notice that $T' z= B'\bar z$ gives $T'\cdot i =-i \cdot T'$, thus we have $$\Phi_* (J'_T) =T'=-(J_T+i)^{-1}\cdot J'_T\cdot T$$ and $$i \cdot T'=T'\cdot (-i)=(J_T+i)^{-1}\cdot J'_T\cdot T\cdot i=(J_T+i)^{-1}\cdot J'_T
\cdot J_T\cdot T.$$ Together with $J'_T
\cdot J_T = - J_T\cdot J'_T$ the above identity gives $$i \cdot T'=-(J_T+i)^{-1}\cdot J_T\cdot J'_T\cdot T=\Phi_* (J_T\cdot J'_T),$$ which implies that $\Phi$ is holomorphic (by the definition of $\bm J$).
**Remark 2**: The Berndtsson tensor approach also naturally gives a *holomorphic motion* of $\mathbb C^n$: $$F:{\rm BSD_{III}} \times \mathbb C^n \to {\rm BSD_{III}} \times \mathbb C^n; \ \ F(B, z)=(B, \zeta), \ \ \zeta:=z+B\bar z.$$ We claim that $$\Omega:=(F^{-1})^* (i\partial{\ensuremath{\overline\partial}}|z|^2)$$ is $(1,1)$ with respect to the $(B, \zeta)$ coordinates. One approach is to compute $\Omega$ directly using $$z=(1-B\bar B)^{-1} (\zeta-B\bar \zeta).$$ Here we shall introduce another approach: Fix any Kähler metric $\omega_{\ensuremath{\mathcal{B}}}$ on $ {\rm BSD_{III}}$; it suffices to show that the symplectic form $\tilde \Omega:=\Omega+ \omega_{\ensuremath{\mathcal{B}}}$ is $(1,1)$. Notice that $$d\zeta=dz+Bd\bar z+dB\wedge \bar z$$ has no $d\bar B$ part, which gives $$\tilde \Omega(d\zeta, dB)=0.$$ Thus it is enough to show that $\tilde \Omega$ has no horizontal $(2,0)$-part, i.e. $\tilde \Omega(d\zeta^j, d\zeta^k)=0$, which follows directly from the fact that $B$ is symmetric. Our claim implies the following
\[th:bo-1\] Put ${\ensuremath{\mathcal{X}}}:={\rm BSD_{III}} \times \mathbb C^n$, then the natural projection $$p: (B,\zeta) \to B,$$ defines a (non-proper) Poisson–Kähler fibration $p:({\ensuremath{\mathcal{X}}}, \Omega)\to{\rm BSD_{III}} $.
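The inversion formula $z=(1-B\bar B)^{-1}(\zeta-B\bar\zeta)$ quoted in the first approach follows from $\zeta - B\bar\zeta = z + B\bar z - B(\bar z+\bar Bz)=(1-B\bar B)z$; here is a quick numerical confirmation (our sketch, with random data):

```python
import numpy as np

# Check the inverse of the holomorphic motion zeta = z + B z-bar for a
# symmetric B with all eigenvalues of B B-bar of modulus < 1, namely
#   z = (1 - B B-bar)^{-1} (zeta - B zeta-bar).
rng = np.random.default_rng(2)
n = 3
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = 0.1 * (C + C.T)                    # complex symmetric, scaled to keep B B-bar < 1
assert np.all(np.abs(np.linalg.eigvals(B @ B.conj())) < 1)

z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
zeta = z + B @ z.conj()
z_back = np.linalg.solve(np.eye(n) - B @ B.conj(), zeta - B @ zeta.conj())
print(np.allclose(z_back, z))          # True: the motion is inverted
```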
### Higgs bundles over ${\rm BSD_{III}}$
Let $t=\{t^j\}$ be the canonical coordinates on ${\rm BSD_{III}}$. For each $t\in {\rm BSD_{III}}$, let us denote by $\mathcal A^k_t$ the space of translation invariant $k$-forms on $p^{-1}(t)=\mathbb C^n$. Then we have the following finite rank vector bundle $$\mathcal A^k:=\{\mathcal A^k_t\}_{t\in {\rm BSD_{III}}}.$$ Notice that our holomorphic motion $F$ defines a flat connection $$\label{eq:Bo-nabla}
\nabla:=\sum dt^j \otimes L_{V_j} +\sum d\bar t^j\otimes L_{\bar V_j}, \ \ V_j:=F_*\left(\frac{\partial}{\partial t^j}\right),$$ on $\mathcal A^k$ (since $F$ is linear on fibers!). By the Cartan formula for the Lie derivative, we have $$\label{eq:Bo3.8}
L_{V_j}=[d, \delta_{V_j}]=[\partial, \delta_{V_j}]+[{\ensuremath{\overline\partial}}, \delta_{V_j}].$$ Denote by $$\mathcal A^{p,q}:=\{\mathcal A^{p,q}_t\}_{t\in {\rm BSD_{III}}}$$ each $(p,q)$ component of $\mathcal A^k$, i.e. each $\mathcal A^{p,q}_t$ is the space of translation invariant $(p,q)$-forms on $p^{-1}(t)$. For bidegree reasons, $\nabla$ induces the following connection $$D=\sum dt^j \otimes D_{\partial/\partial t^j} + \sum d\bar t^k \otimes D_{\partial/\partial \bar t^k}, \ \ D_{\partial/\partial t^j}=[\partial, \delta_{V_j}], \ D_{\partial/\partial \bar t^k}=[{\ensuremath{\overline\partial}}, \delta_{\bar V_k}],$$ on each $\mathcal A^{p,q}$. Moreover, we have $$\nabla-D=\theta+\bar \theta, \ \ \theta:=\sum dt^j \otimes [{\ensuremath{\overline\partial}}, \delta_{V_j}].$$ We call $\theta$ the *Higgs field* on $\mathcal A^k$. We also need the following lemma (a special case of Theorem \[th:122\]; see the next section for a simple proof when $k=1$).
$D$ defines a Chern connection on each $\mathcal A^{p,q}$ with respect to the metric defined by $\Omega$, moreover $[{\ensuremath{\overline\partial}}, \delta_{V_j}]^*=[\partial, \delta_{\bar V_j}]$.
**Remark**: The above lemma implies that each $(\mathcal A^k, \theta, D)$ is a flat Hermitian Higgs bundle. In the next section, we shall introduce a “coordinate free” approach to this Higgs bundle structure.
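The Cartan formula $L_V=[d,\delta_V]=d\,\delta_V+\delta_V\,d$ used above can be checked symbolically in the simplest nontrivial case: a $1$-form $\alpha=a\,dx+b\,dy$ on $\mathbb R^2$. A minimal sketch in Python/sympy, with both sides written out in components by hand; the function names are generic symbols, not objects from the text.

```python
import sympy as sp

x, y = sp.symbols('x y')
a, b, V1, V2 = (sp.Function(f)(x, y) for f in ('a', 'b', 'V1', 'V2'))

# Lie derivative of alpha = a dx + b dy along V = V1 d/dx + V2 d/dy,
# from the coordinate formula (L_V alpha)_i = V^j d_j a_i + a_j d_i V^j.
L_dx = V1*sp.diff(a, x) + V2*sp.diff(a, y) + a*sp.diff(V1, x) + b*sp.diff(V2, x)
L_dy = V1*sp.diff(b, x) + V2*sp.diff(b, y) + a*sp.diff(V1, y) + b*sp.diff(V2, y)

# Cartan formula: d(delta_V alpha) + delta_V (d alpha).
iVa = a*V1 + b*V2                      # contraction delta_V alpha (a function)
curl = sp.diff(b, x) - sp.diff(a, y)   # d alpha = curl dx ^ dy
C_dx = sp.diff(iVa, x) - curl*V2       # delta_V (dx^dy) = V1 dy - V2 dx
C_dy = sp.diff(iVa, y) + curl*V1

assert sp.simplify(L_dx - C_dx) == 0 and sp.simplify(L_dy - C_dy) == 0
```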
Higgs bundles over $\mathcal J(V, \omega)$
------------------------------------------
Consider the following trivial vector bundle $$\mathcal H^1:=\mathcal J(V, \omega) \times (\mathbb C\otimes V^*)$$ over our symmetric domain $\mathcal J(V, \omega)$. With respect to a global holomorphic coordinate system, say $\{t^j\}$ (coming from an arbitrary realization $\tau_{\xi}\circ \Phi_J$), on $\mathcal J(V, \omega)$, the natural trivial flat connection on $\mathcal H^1$ can be written as $$\label{eq:nabla}
\nabla:=\sum dt^j \otimes \frac{\partial}{\partial t^j} +\sum d\bar t^j\otimes \frac{\partial}{\partial \bar t^j}.$$ Another structure on $\mathcal H^1$ is the following non-trivial decomposition $$\mathcal H^1= \mathcal H^{0,1}\oplus \mathcal H^{1,0},$$ where $$\mathcal H^{0,1}:=\{\wedge_J^{0,1}\}_{J\in\mathcal J(V, \omega)}, \ \ \mathcal H^{1,0}:=\{\wedge_J^{1,0}\}_{J\in\mathcal J(V, \omega)}.$$ Denote by $\pi^{0,1}$ and $\pi^{1,0}$ the natural projections to $ \mathcal H^{0,1}$ and $ \mathcal H^{1,0}$ respectively; then the induced connection on $ \mathcal H^{0,1}$ can be written as $$D=\sum dt^j \otimes \left( \pi^{0,1}\frac{\partial}{\partial t^j} \pi^{0,1}\right) +\sum d\bar t^j\otimes \left(\pi^{0,1}\frac{\partial}{\partial \bar t^j} \pi^{0,1}\right).$$ We shall use the same letter $D$ to denote the induced connection on $ \mathcal H^{1,0}$, which can be written as $$D=\sum dt^j \otimes \left( \pi^{1,0}\frac{\partial}{\partial t^j} \pi^{1,0}\right) +\sum d\bar t^j\otimes \left(\pi^{1,0}\frac{\partial}{\partial \bar t^j}\pi^{1,0}\right).$$ Now we think of $D$ as a connection on $\mathcal H^1=\mathcal H^{0,1}\oplus \mathcal H^{1,0}$; a crucial observation is
\[pr3.6\] There exists a bundle map, say $\theta$, from $\mathcal H^{1,0}$ to $\mathcal H^{0,1} \otimes \wedge^{1,0} \,T^*\mathcal J(V, \omega)$ such that $$\nabla=D+\theta+\bar \theta.$$
Fix $J\in \mathcal J(V, \omega)$ and $u\in \wedge^{1,0}_{J}$. By Theorem \[th3.4\], we know that $\tau_\xi\circ \Phi_J$ gives a global holomorphic coordinate, say $t\in {\rm BSD}_{III}$, on $\mathcal J(V, \omega)$. Denote by $J(t)$ the associated complex structure in $\mathcal J(V, \omega)$; then we have $J(0)=J$ and $$\label{eq3.5}
\tilde u: t\mapsto (1+\Phi(J(t); J)) u\in \wedge^{1,0}_{J(t)}$$ is holomorphic in $t$. Thus $$\nabla \tilde u = \sum dt^j \otimes \frac{\partial}{\partial t^j} \Phi(J(t); J) u.$$ Since $ \Phi(J(t); J) u \in \wedge_J^{0,1}$ and $J=J(0)$, we know that $\Phi(J(t); J) u$ has no degree $(1,0)$ part at $t=0$, thus $$D \tilde u|_{t=0}=0,$$ which gives (since $(\nabla- D)$ is a tensor) $$(\nabla- D)(0) u = (\nabla- D)(u)|_{t=0}= \nabla \tilde u|_{t=0}= \sum dt^j \otimes \frac{\partial}{\partial t^j}\big|_{t=0} \Phi(J(t); J) u, \ \ \forall \ u\in \wedge_J^{1,0}.$$ In particular, it implies that the tensor $(\nabla- D)$ has no $d\bar t^j$ components, thus $\nabla- D$ is of pure degree $(1,0)$. Let us write it as $\theta$, i.e. $$\label{eq3.6}
\theta u= \sum dt^j \otimes \frac{\partial}{\partial t^j}\big|_{t=0} \Phi(J(t); J) u, \ \ \forall \ u\in \wedge_J^{1,0}.$$ To summarize, we have proved that $$\nabla = D+\theta \ \ \text{on smooth sections of} \ \mathcal H^{1,0}.$$ Considering $\bar u$ and $\overline{\tilde u}$ instead of $u$ and $\tilde u$, a similar argument gives $$\nabla = D+\bar \theta \ \ \text{on smooth sections of} \ \mathcal H^{0,1}.$$ Thus the proposition follows.
**Remark**: Notice that each component of $\theta$ maps $\mathcal H^{1,0}$ to $\mathcal H^{0,1}$, thus $\theta$ is of degree $(-1,1)$, while $D$ has degree $(0,0)$; thus, for bidegree reasons, flatness of $\nabla$ gives $$(D^{1,0}\theta+\theta D^{1,0})=\theta^2=(D^{1,0})^2=0, \ \ (D^{0,1}\bar\theta+\bar \theta D^{0,1})=\bar\theta^2=(D^{0,1})^2=0,$$ and $$(D^{1,0}\bar\theta+\bar \theta D^{1,0})=(D^{0,1}\theta+\theta D^{0,1})=0, \ \ \ D^2+\theta\bar \theta+\bar\theta\theta=0.$$ In particular, we know that $D^{0,1}$ defines a holomorphic vector bundle structure, with respect to which $\theta$ is holomorphic and satisfies $\theta^2=0$; moreover $$(D+\theta+\bar \theta)^2=0.$$ To summarize, we have (see [@Wang_Higgs] for notations on Higgs bundles)
\[th3.7\] $(\mathcal H^1, \theta, D)$ is a flat Higgs bundle over $\mathcal J(V,\omega)$.
**Remark**: Recall that a Higgs bundle is said to be *admissible* if the associated bundle map of $\theta$, still denoted by $$\theta: \partial/ \partial t^j \mapsto \theta_j,$$ is injective, where $\theta_j$ is defined by $\theta=\sum dt^j \otimes \theta_j$. In our case, since each $\Phi_J$ is biholomorphic, implies that $(\mathcal H^1, \theta, D)$ is admissible.
In order to study the geometry of the base manifold, it is also necessary to find a Hermitian metric on the Higgs bundle such that $D$ is the associated Chern connection. In our case, the natural metric on $\mathcal H^{1,0}$ can be defined by $$(u^t, v^t):= \frac{i\, u^t\wedge \bar v^t\wedge \omega_{n-1}}{\omega_n}, \ \ \omega_p:=\frac{\omega^p}{p!},$$ where $\{u^t, v^t\}_{t\in\mathcal J(V, \omega)}$ are smooth sections of $\mathcal H^{1,0}$. Applying the partial derivative, we get $$\frac{\partial}{\partial {t^j}} (u^t, v^t) = \frac{i\, \nabla_{\partial/\partial {t^j}} u^t\wedge \bar v^t\wedge \omega_{n-1}}{\omega_n} +\frac{i\, u^t\wedge \overline{\nabla_{\partial/\partial {\bar t^j}} v^t}\wedge \omega_{n-1}}{\omega_n}.$$ Proposition \[pr3.6\] implies $$\nabla_{\partial/\partial{t^j}} u^t= D_{\partial/\partial{t^j}} u^t + \theta_j u^t, \ \ \
\nabla_{\partial/\partial{\bar t^j}} v^t= D_{\partial/\partial{\bar t^j}} v^t,$$ where $\theta_j u^t$ is the degree $(0,1)$ part; thus, for bidegree reasons, we have $$\theta_j u^t \wedge \bar v^t\wedge \omega_{n-1} =0,$$ which gives $$\frac{\partial}{\partial {t^j}}(u^t, v^t) = (D_{\partial/\partial{t^j}} u^t, v^t)+ (u^t, D_{\partial/\partial{\bar t^j}} v^t),$$ thus $D$ is a Chern connection on $\mathcal H^{1,0}$. A similar argument also works for $\mathcal H^{0,1}$. In general, one may consider the following trivial vector bundle $$\mathcal H^k:=\mathcal J(V, \omega) \times \wedge^k (\mathbb C\otimes V^*).$$ We can write $$\mathcal H^k=\oplus_{p+q=k} \mathcal H^{p,q}, \qquad p,q\geq 0$$ where $$\mathcal H^{p,q}:=\{\wedge_J^{p,q}\}_{J\in\mathcal J(V, \omega)}.$$ As in the $k=1$ case, we can also define $\nabla, D, h, \theta$ for general $\mathcal H^k$; in particular the Higgs field $\theta$ is given by the Kodaira–Spencer action (i.e. the degree $(-1,1)$ action of the Kodaira–Spencer tensor) and satisfies $$\label{eq3.7}
(\theta_j u, v)=(u, \overline{\theta_j} v), \ \ i.e. \ \theta_j^*=\overline{\theta_j}.$$ One may prove the following result (for general $k$, we also need the pointwise hard Lefschetz decomposition to check that $D$ is a Chern connection, see [@Wang17]).
\[th:higgs\] Each $(\mathcal H^k, \theta, D)$ ($0\leq k \leq 2n$) is a flat Higgs bundle over $\mathcal J(V,\omega)$ with Chern connection $D$. $(\mathcal H^k, \theta)$ is admissible if $1\leq k\leq 2n-1$.
**Remark**: For each $1\leq k\leq 2n-1$, since $(\mathcal H^k, \theta)$ is admissible, we know the bundle map $$\theta: \partial/\partial t^j \mapsto \theta_j \in {\rm End} (\mathcal H^k)$$ is an injective holomorphic map. The pull-back of the metric on $ {\rm End} (\mathcal H^k)$ defines a natural Hermitian metric on $\mathcal J(V, \omega)$, which is called *Lu’s Hodge metric* in [@Wang_Higgs]; we shall denote the associated fundamental form by $\omega_{DF, k}$. One may verify that all $\omega_{DF, k}$ are equal up to positive constants, i.e. $$\omega_{DF, k} =c(k,n) \omega_{DF, 1},$$ where $c(k,n)$ depends only on $k$ and $n$. In fact, $\omega_{DF, 1}$ is just a “linear or pointwise version” of the non-harmonic Weil–Petersson metric in Definition \[de:DFWP\].
We call $\omega_{DF,1}$ the canonical Weil–Petersson metric on $\mathcal J(V, \omega)$.
Burns’ result on negativity of $\mathcal J(V, \omega)$
------------------------------------------------------
The following curvature property of the canonical Weil–Petersson metric $\omega_{DF,1}$ is essentially contained in Burns’ paper [@Burns82].
$\omega_{DF,1}$ is Kähler on $\mathcal J(V, \omega)$ with non-positive holomorphic bisectional curvature; moreover, its holomorphic sectional curvature is bounded above by $-2/n$.
**Remark**: Burns’ theorem is a direct consequence of Theorem \[th:higgs\] (see the main theorem in [@Wang_Higgs]). The main idea in this paper is based on the above Higgs bundle approach to Burns’ theorem: instead of one single complex structure space $\mathcal J(V, \omega)$, we consider a family $$\{\mathcal J(T_x X, \omega_x)\}_{x\in X}$$ indexed by the points in a fixed compact symplectic manifold $(X, \omega)$. We already know that on each $\mathcal J(T_x X, \omega_x)$ there is an associated Higgs bundle. Applying Burns’ theorem to each $\mathcal J(T_x X, \omega_x)$, we get one proof of Theorem A. The precise formulation is based on the so-called *Donaldson–Fujiki picture* (see [@D97] and [@Fujiki]).
We call the space, say $\mathcal J_\omega$, of all compatible almost complex structures on a compact symplectic manifold $(X, \omega)$ the Donaldson–Fujiki space.
**Remark**: Recall that a compatible almost complex structure on $(X, \omega)$ is defined to be a smooth family of linear complex structures $$J:=\{J_x\}_{x\in X},$$ where each $J_x$ is a linear complex structure on the tangent space $T_x X$. Moreover, “compatible” means that each $J_x$ is compatible with the value, say $\omega_x$, of $\omega$ at $x$. Using the notation in section 3, one may write $$J_x\in \mathcal J(T_x X, \omega_x),$$ thus $\mathcal J_\omega$ can be interpreted as the space of smooth sections of the following fiber space over $X$ $$\{\mathcal J(T_x X, \omega_x)\}_{x\in X}.$$ The following definition tells us how to look at the complex structure on $\mathcal J_\omega$.
\[de:complex-structure\] Let ${\ensuremath{\mathcal{B}}}$ be a complex manifold. We say that a mapping $$\tau : {\ensuremath{\mathcal{B}}}\to \mathcal J_\omega$$ is holomorphic if it satisfies the following three conditions:
- for every $t\in {\ensuremath{\mathcal{B}}}$, the almost complex structure $\tau(t)$ is integrable;
- for every $x\in X$, the mapping from ${\ensuremath{\mathcal{B}}}$ to $\mathcal J(T_x X, \omega_x)$ defined by $$\tau_x: t\mapsto \tau(t)_x\in \mathcal J(T_x X, \omega_x)$$ is holomorphic with respect to the complex structure (see section 3) on $\mathcal J(T_x X, \omega_x)$;
- $\tau(t)_x$ depends smoothly both on $t$ and $x$.
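Pointwise, compatibility (as in the remark before the definition) is a linear-algebra condition on $(J_x, \omega_x)$: $J_x^2=-\mathrm{id}$, $\omega_x(J_xu, J_xv)=\omega_x(u,v)$, and $g(u,v):=\omega_x(u, J_xv)$ is a positive definite inner product. A minimal numerical check for the standard model on $\mathbb R^{2n}$ (Python/NumPy sketch; the block matrices below are the standard ones, introduced only for illustration):

```python
import numpy as np

n = 2
# Standard symplectic form and complex structure on R^{2n},
# in block form  omega = [[0, I], [-I, 0]],  J = [[0, -I], [I, 0]].
I = np.eye(n)
Z = np.zeros((n, n))
omega = np.block([[Z, I], [-I, Z]])
J = np.block([[Z, -I], [I, Z]])

# J is a linear complex structure: J^2 = -Id.
assert np.allclose(J @ J, -np.eye(2*n))

# Compatibility: omega(Ju, Jv) = omega(u, v), i.e. J^T omega J = omega ...
assert np.allclose(J.T @ omega @ J, omega)

# ... and g(u, v) := omega(u, Jv) is a positive definite inner product.
g = omega @ J
assert np.allclose(g, g.T)
assert np.all(np.linalg.eigvalsh(g) > 0)
```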
Poisson–Kähler fibration in Donaldson–Fujiki picture
----------------------------------------------------
Let $\tau : {\ensuremath{\mathcal{B}}}\to \mathcal J_\omega$ be a holomorphic mapping. Then for each $t\in {\ensuremath{\mathcal{B}}}$, we have a compact Kähler manifold $$X_t:= (X, \omega, \tau(t)).$$ It is natural to ask whether $$\mathcal X:=\{X_t\}_{t\in{\ensuremath{\mathcal{B}}}}$$ is a holomorphic family or not. In [@WW], we shall prove the following result:
If $\tau : {\ensuremath{\mathcal{B}}}\to \mathcal J_\omega$ is a holomorphic mapping, then $\omega$ is of degree $(1,1)$ with respect to the non-trivial integrable complex structure $\{\tau(t)\}_{t\in {\ensuremath{\mathcal{B}}}}$ on ${\ensuremath{\mathcal{X}}}$. In particular, the natural projection defines a *canonical Poisson–Kähler fibration* $$p: ({\ensuremath{\mathcal{X}}}, \omega)\to {\ensuremath{\mathcal{B}}},$$ and the associated non-harmonic Weil–Petersson metric is equal to $\omega_{DF}$ in Definition \[de:DFWP\].
**Remark 1**: $\omega_{DF}$ is equal (possibly up to a constant) to the restriction to ${\ensuremath{\mathcal{B}}}$ of the canonical Kähler metric on $\mathcal J_\omega$ defined by Donaldson [@D97] and Fujiki [@Fujiki].
**Remark 2**: Using integral curves generated by the horizontal distribution, one may prove that (see [@WW] or [@Bern19]) every Poisson–Kähler fibration comes from the above canonical Poisson–Kähler fibration “locally” (in the following sense):
$\star$: For every Poisson–Kähler fibration $p$ whose base manifold ${\ensuremath{\mathcal{B}}}$ is simply connected, there is a holomorphic mapping $\tau: {\ensuremath{\mathcal{B}}}\to \mathcal J_\omega$ such that the associated canonical Poisson–Kähler fibration is equal to $p$.
**Remark 3**: Intuitively, the negative curvature property (NC) of each $\mathcal J(T_xX, \omega_x)$ gives NC of $\mathcal J_\omega$, which in turn implies NC of an arbitrary Poisson–Kähler fibration. In the next section, we shall also compute the curvature of the non-harmonic Weil–Petersson metric without using integral curves (i.e. without pulling it back to the symplectically trivial fibration). Essentially the only difference is that we replace the usual derivative by the Lie derivative. But the Lie derivative method has the following advantage: *it also works for general relative Kähler fibrations*.
Curvature of the non-harmonic Weil–Petersson metric
===================================================
Proof of Theorem A
------------------
We rewrite Theorem A in the following form:
\[th:wang-1\] Let $p:({\ensuremath{\mathcal{X}}},\omega) \to {\ensuremath{\mathcal{B}}}$ be a Poisson–Kähler fibration with injective Kodaira–Spencer map. Denote by $\omega_{DF}$ the non-harmonic Weil–Petersson metric on ${\ensuremath{\mathcal{B}}}$ (see Definition \[de:DFWP\]). Denote by $|X_t|:=\int_{X_t}\frac{\omega_t^n}{n!}$ (a constant) the volume of the fibers $(X_t,\omega_t:=\omega|_{X_t})$. Then
- $\omega_{DF}$ is Kähler;
- Holomorphic sectional curvature of $\omega_{DF}$ is bounded above by $-\frac2n |X_t|^{-1}$;
- Holomorphic bisectional curvature of $\omega_{DF}$ is non-positive.
**Idea of the proof**: We shall follow the proof of the main theorem in [@Wang_Higgs]. The main idea is to use the Higgs bundle structure on the infinite rank bundle $\mathcal A^k:=\{C^\infty(X_t, \wedge^k(T^*X_t \otimes \mathbb C))\}_{t\in{\ensuremath{\mathcal{B}}}}$ (see the appendix; compare with the $\mathcal H^k$ bundle in Theorem \[th:higgs\]). To define the Higgs connection $\nabla$ on $\mathcal A^k$, one simply replaces the partial derivatives in by Lie derivatives: $$\nabla=\sum dt^j \otimes [d, \delta_{V_j}] + \sum d\bar t^j \otimes [d, \delta_{\bar V_j}],$$ where $V_j$ are the horizontal lifts of $\partial/\partial t^j$ with respect to $\omega$. The main observation is that:
*\[Key fact\]: $\nabla$ is flat if $\omega$ is Poisson–Kähler*.
Indeed, if $\omega^{n+1}=0$ then $[V_j , \bar V_k]\equiv 0$ (see Proposition \[pr:rkf\]), thus the Lie-derivative identity $$[[d, \delta_{V_j}], [d, \delta_{\bar V_k}]]
= [d, \delta_{[V_j , \bar V_k]}]=0$$ implies that $(\nabla)^2\equiv 0$. As in section 3.3, the associated connection $$D=\sum dt^j \otimes D_{\partial/\partial t^j} + \sum d\bar t^k \otimes D_{\partial/\partial \bar t^k},$$ on each $(p,q)$ component $$\mathcal A^{p,q}:=\{C^\infty(X_t, \wedge^{p,q}(T^*X_t \otimes \mathbb C))\}_{t\in{\ensuremath{\mathcal{B}}}}$$ of $\mathcal A^k$ is determined by $$D_{\partial/\partial t^j}= [\partial, \delta_{V_j}], \ \ D_{\partial/\partial \bar t^k}= [{\ensuremath{\overline\partial}}, \delta_{\bar V_k}]$$ for bidegree reasons. Thus the associated Higgs field $\theta$ defined by $\nabla-D=\theta+\bar \theta$ can be written as $$\theta=\sum dt^j\otimes \kappa_j,$$ where each $\kappa_j$ denotes the action of the Lie–derivative $[{\ensuremath{\overline\partial}}, \delta_{V_j}]$. As in Theorem \[th:higgs\], we can prove that $D$ is a Chern connection and its curvature $\Theta=D^2$ satisfies $$\label{eq:0305-1}
\Theta_{j\bar k}:= [D_{\partial/\partial t^j}, D_{\partial/\partial \bar t^k}]=-[\kappa_j, \overline{\kappa_k}].$$ Now, since our non-harmonic Weil–Petersson metric $\omega_{DF}$ $$\langle \frac{\partial}{\partial t^l},\frac{\partial}{\partial t^m} \rangle_{\rm DF} = \langle \kappa_l, \kappa_m \rangle$$ is defined as the $\theta$-pull-back metric (or a Hodge type metric, see the remark after Proposition \[pr:DF-DF\]), we can write it as (see Proposition \[pr:DF-DF\] for more details) $$\omega_{DF}= i \langle \theta, \theta \rangle.$$
The above formula for $\omega_{DF}$ gives $$d \omega_{DF}=i \langle \bm D\theta, \theta \rangle - i \langle \theta, \bm D \theta \rangle,$$ where we use $\bm D$ to denote the Chern connection on ${\rm End} (\mathcal A)$, which satisfies $$\label{eq4.2}
\bm D \theta= [D, \theta]=D\theta +\theta D,$$ where $D$ in $[D, \theta]=D\theta +\theta D$ means the Chern connection on $\mathcal A$. Poisson–Kählerness of $\omega$ implies flatness of $\nabla=D+\theta+\bar\theta$, which gives $$D\theta +\theta D=0.$$ Thus we have $d \omega_{DF}=0$. Moreover, injectivity of the Kodaira–Spencer map gives strict positivity of $\omega_{DF}$. Thus $\omega_{DF}$ is Kähler.
**Remark**: is well known for finite rank vector bundles and its proof also applies to our infinite rank case, see the proof of Proposition \[pr: flat-higgs\] $iii)$.
Similar to the Chern connection, the Chern curvature $\bm\Theta$ on ${\rm End} (\mathcal A)$ also satisfies $\bm\Theta \kappa_l=[\Theta, \kappa_l]$, thus we have $$\label{eq:0305-2}
\frac{\partial^2}{\partial t^j \partial \bar t^k} \langle \kappa_l, \kappa_m \rangle = -\langle [\Theta_{j\bar k}, \kappa_l], \kappa_m \rangle + \langle [D_{\partial/\partial t^j}, \kappa_l], [D_{\partial/\partial t^k}, \kappa_m] \rangle.$$ By , we have (since $\kappa_j\kappa_l =0$ on $\mathcal A^1$) $$-[\Theta_{j\bar k}, \kappa_l]=[[\kappa_j, \overline{\kappa_k}], \kappa_l]=\kappa_j \overline{\kappa_k} \kappa_l + \kappa_l \overline{\kappa_k} \kappa_j,$$ thus $ \kappa_l^*=\overline{\kappa_l}$ gives $$\label{eq:wang-theta-ak}
-\langle [\Theta_{j\bar k}, \kappa_l], \kappa_m \rangle= \langle \overline{\kappa_k} \kappa_l , \overline{\kappa_j}\kappa_m \rangle + \langle \overline{\kappa_k} \kappa_j , \overline{\kappa_l}\kappa_m \rangle.$$ Now we have $$\label{eq4.5}
\frac{\partial^2}{\partial t^j \partial \bar t^j} \langle \kappa_j, \kappa_j \rangle \geq 2 \, || \overline{\kappa_j} \kappa_j||^2.$$ Thus it suffices to show $ || \overline{\kappa_j} \kappa_j||^2 \geq ||\kappa_j||^4/(n|X_t|)$. The main trick is the following pointwise estimate $$| \overline{\kappa_j} \kappa_j |^2_{\omega_t} (x)=\sum_{k=1}^n | \overline{\kappa_j} \kappa_j \cdot e_k |^2_{\omega_t(x)} \geq \sum_{k=1}^n \big| \langle\overline{\kappa_j} \kappa_j \cdot e_k, e_k \rangle_{\omega_t(x)} \big|^2 , \qquad \ \forall \ x\in X_t ,$$ where $\{e_k\}_{1\leq k \leq n}$ denotes an orthonormal basis of $T^*_x X_t$. Since $$\langle \overline{\kappa_j} \kappa_j \cdot e_k, e_k \rangle_{\omega_t(x)}=|\kappa_j \cdot e_k|^2_{\omega_t(x)},$$ $\sum_{k=1}^n |a_k|^4 \geq (\sum |a_k|^2)^2 /n $, $a_k:=|\kappa_j \cdot e_k|_{\omega_t(x)}$, gives $$|\overline{\kappa_j} \kappa_j|^2_{\omega_t} (x) \geq \frac{1}{n} \left(\sum_{k=1}^n |\kappa_j \cdot e_k|^2_{\omega_t(x)} \right)^2= \frac1n |\kappa_j|^4_{\omega_t}(x).$$ Integrating the above inequality over $X_t$, we get $$|| \overline{\kappa_j} \kappa_j||^2 \geq \frac 1n \int_{X_t} |\kappa_j|^4_{\omega_t}\, \frac{\omega_t^n}{n!} \geq \frac1n\left( \int_{X_t} \frac{\omega_t^n}{n!}\right)^{-1} ||\kappa_j||^4=\frac{||\kappa_j||^4}{n} |X_t|^{-1},$$ where we use the Hölder inequality in the second step. Thus gives Theorem \[th:wang-1\] $ii)$.
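The elementary inequality used in the pointwise estimate, $\sum_{k=1}^n |a_k|^4 \geq (\sum_k |a_k|^2)^2/n$, is the Cauchy–Schwarz inequality applied to $(a_k^2)_k$ and $(1,\dots,1)$; it can be spot-checked numerically. A minimal Python/NumPy sketch with arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Cauchy-Schwarz: for any reals a_1..a_n,  n * sum(a_k^4) >= (sum a_k^2)^2,
# applied with a_k = |kappa_j . e_k| in the pointwise estimate.
for _ in range(1000):
    n = rng.integers(1, 10)
    a = rng.standard_normal(n)
    assert n * np.sum(a**4) >= np.sum(a**2)**2 - 1e-12
```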
We shall use the argument in the proof of the main theorem in [@Wang_Higgs]. To prove non-positivity of the holomorphic bisectional curvature, it is enough to show that each $|\partial/\partial t^l|^2_{DF}=||\kappa_l||^2$ is plurisubharmonic on ${\ensuremath{\mathcal{B}}}$. More precisely, we need to show $$I(\xi):=\sum \xi^j\bar\xi^k \frac{\partial^2}{\partial t^j \partial \bar t^k} ||\kappa_l||^2 \geq 0.$$ Notice that and $\Theta_{j\bar k}=-[\kappa_j, \overline{\kappa_k}]$ imply that $$I(\xi) \geq \langle [[\kappa, \bar\kappa], \kappa_l], \kappa_l \rangle, \ \ \kappa := \sum \xi^j \kappa_j.$$ By the super Jacobi identity, we have $$[[\kappa, \bar\kappa], \kappa_l]=[\kappa, [\bar\kappa, \kappa_l]]-[\bar\kappa, [\kappa, \kappa_l]].$$ Since $\theta^2=0$ implies $[\kappa, \kappa_l]=0$, the above identity reduces to $[[\kappa, \bar\kappa], \kappa_l]=[\kappa, [\bar\kappa, \kappa_l]]$. Hence $$\langle [[\kappa, \bar\kappa], \kappa_l], \kappa_l \rangle= \langle [\kappa, [\bar\kappa, \kappa_l]], \kappa_l \rangle= \langle \kappa[\bar\kappa, \kappa_l]-[\bar\kappa, \kappa_l] \kappa, \kappa_l \rangle,$$ now $\kappa^*=\bar\kappa$ gives $\langle \kappa[\bar\kappa, \kappa_l], \kappa_l \rangle = \langle [\bar\kappa, \kappa_l], \bar\kappa \kappa_l \rangle$ and $\langle [\bar\kappa, \kappa_l]\kappa, \kappa_l \rangle = \langle [\bar\kappa, \kappa_l], \kappa_l \bar\kappa \rangle$, thus we have $$\langle [[\kappa, \bar\kappa], \kappa_l], \kappa_l \rangle= || [\bar\kappa, \kappa_l]||^2\geq 0,$$ which gives $I(\xi) \geq 0$.
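The final identity $\langle[[\kappa,\bar\kappa],\kappa_l],\kappa_l\rangle=\|[\bar\kappa,\kappa_l]\|^2$ can be tested in a finite-dimensional toy model: take $\kappa_l=\kappa$ a random complex matrix (so $[\kappa,\kappa_l]=0$ holds trivially) and model $\bar\kappa$ by the adjoint $\kappa^*$, using the Hilbert–Schmidt inner product. A minimal Python/NumPy sketch; the matrix is arbitrary test data, not an operator from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 5

def hs(X, Y):
    """Hilbert-Schmidt inner product <X, Y> = tr(Y^* X)."""
    return np.trace(Y.conj().T @ X)

def comm(X, Y):
    return X @ Y - Y @ X

# Toy model: kappa_l = kappa, and the adjoint kappa^* plays the role
# of bar kappa (matching kappa^* = bar kappa in the proof).
k = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
kbar = k.conj().T

lhs = hs(comm(comm(k, kbar), k), k)
rhs = hs(comm(kbar, k), comm(kbar, k))   # = ||[bar kappa, kappa]||^2

assert np.isclose(lhs, rhs)
```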
Generalized Schumacher formula
------------------------------
The notation in this section is as follows:
1\) $p:({\ensuremath{\mathcal{X}}},\omega) \to {\ensuremath{\mathcal{B}}}$: a general relative Kähler fibration;
2\) $\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}$: Chern curvature of the relative canonical line bundle $K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}$ with respect to the metric defined by $\omega$;
3\) Let $\{t^j\}$ be a holomorphic local coordinate system on ${\ensuremath{\mathcal{B}}}$. Denote by $V_j$ the horizontal lift of $\partial/\partial t^j$ with respect to $\omega$, and put $$\kappa_j:=({\ensuremath{\overline\partial}}V_j)|_{X_t}, \ \ \ c_{j\bar k}:=\langle V_j , V_k\rangle_{\omega};$$
4\) $\Box:={\ensuremath{\overline\partial}}{\ensuremath{\overline\partial}}^*+{\ensuremath{\overline\partial}}^*{\ensuremath{\overline\partial}}$: the ${\ensuremath{\overline\partial}}$-Laplacian on the fiber $X_t$ with respect to the metric $\omega_t:=\omega|_{X_t}$;
5\) For an arbitrary smooth function $\phi$ on ${\ensuremath{\mathcal{X}}}$, define the *vertical* vector field $V^\phi$ on ${\ensuremath{\mathcal{X}}}$ such that $$(V^\phi\,\rfloor\, \omega)|_{X_t}=i({\ensuremath{\overline\partial}}\phi)|_{X_t}$$ (if we write $\omega$ as $i\partial{\ensuremath{\overline\partial}}g$ locally, then $V^\phi=\sum \phi_{\bar\alpha} g^{\bar\alpha\beta} \partial/\partial \zeta^\beta$). Put $$\label{eq:kappa-phi}
\kappa^\phi:=({\ensuremath{\overline\partial}}V^\phi)|_{X_t},$$ (we know that the cohomology class of $\kappa^\phi$ is trivial since it is ${\ensuremath{\overline\partial}}$-exact).
\[th:sch\] With the notation above, we have $$\label{eq:sch1}
\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V_j ,\overline{V_k})=\langle \kappa_j , \kappa_k\rangle_{\omega_t}-\Box c_{j\bar k}$$ $$\label{eq:sch2}
\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V_j ,\overline{V^\phi})=\langle \kappa_j , \kappa^\phi\rangle_{\omega_t}+{\ensuremath{\overline\partial}}^*(\kappa_j\cdot \partial\bar \phi)$$ and $$\label{eq:sch3}
\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V^\phi ,\overline{V^\psi})=\langle \kappa^\phi , \kappa^\psi\rangle_{\omega_t}+{\ensuremath{\overline\partial}}^*(\kappa^\phi\cdot \partial\bar \psi)-\langle {\ensuremath{\overline\partial}}\Box\phi, {\ensuremath{\overline\partial}}\psi\rangle_{\omega_t}.$$
**Remark 1**: In case $p$ is a canonically polarized fibration, the Aubin–Yau theorem gives a canonical Kähler metric $\omega$ such that $$i\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}= \omega.$$ Then we have $$\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V_j ,\overline{V_k})= c_{j\bar k}.$$ Thus is equivalent to the classical Schumacher formula [@Sch12] $$(\Box+1)c_{j\bar k} =\langle \kappa_j , \kappa_k\rangle_{\omega_t}.$$ In general, is equivalent to the generalized Schumacher formula proved by Paun (see formula (35) in [@Paun]; see also an early version of [@BPW] for another proof).
**Remark 2**: If the fibration is Poisson–Kähler, then $c_{j\bar k}\equiv 0$; thus gives $$\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V_j ,\overline{V_k})=\langle \kappa_j , \kappa_k\rangle_{\omega_t}.$$ In particular, we know that the non-harmonic Weil–Petersson metric is fully determined by the curvature of the relative canonical line bundle.
The idea is to consider the canonical section $$\mathbf 1: t\mapsto dz\otimes (dz)^{-1}$$ of $$\mathcal A^{n,0}:=\{\mathcal A^{n,0}(-K_{X_t})\}_{t\in{\ensuremath{\mathcal{B}}}}$$ (here $dz:=dz^1\wedge\cdots\wedge dz^n$ denotes a local frame of $K_{X_t}$). Notice that both the holomorphic structure and the metric structure of $\mathcal A^{n,0}$ are isomorphic to those of $\{C^\infty(X_t)\}_{t\in{\ensuremath{\mathcal{B}}}}$, thus the canonical section $1$ of $\{C^\infty(X_t)\}_{t\in{\ensuremath{\mathcal{B}}}}$ defines a flat section, say $\mathbf 1$, of $\mathcal A^{n,0}$. As a flat section, it satisfies $$\label{eq:0305-4}
D_{\partial/\partial t^j} \mathbf 1=0=D_{\partial/\partial \bar t^k} \mathbf 1,$$ where $D_{\partial/\partial t^j}$ and $D_{\partial/\partial \bar t^k}$ are components of the Chern connection on $\mathcal A^{n,0}$ (see Theorem \[th:122\]). Now on one hand, the Chern curvature operators $\Theta_{j\bar k}$ of $\mathcal A^{n,0}$ vanish on $\mathbf 1$; on the other hand, Theorem \[th:curvature\] gives $$\Theta_{j\bar k} \mathbf 1=[d^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_{[V_j, \bar V_k]}] \mathbf 1+\Theta^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V_j ,\bar V_k) \mathbf 1+\langle \kappa_j , \kappa_k\rangle_{\omega_t}\mathbf 1.$$ Thus we have $$0=[d^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_{[V_j, \bar V_k]}] \mathbf 1+\Theta^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V_j ,\bar V_k) \mathbf 1+ \langle \kappa_j , \kappa_k\rangle_{\omega_t}\mathbf 1$$ and follows from the following lemma.
\[le:sch1\] $[d^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_{[V_j, \bar V_k]}] \mathbf 1=-\Box c_{j\bar k} \mathbf 1$.
One may verify this lemma using local coordinates directly; the following is a coordinate-free proof. For bidegree reasons, we have $$[d^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_{[V_j, \bar V_k]}] \mathbf 1= [\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_{[V_j, \bar V_k]_{1,0}}] \mathbf 1.$$ Notice that Proposition \[pr:rkf\] (3) implies $$\delta_{[V_j, \bar V_k]_{1,0}}=[i{\ensuremath{\overline\partial}}c_{j\bar k}, \Lambda_{\omega_t}],$$ where $\Lambda_{\omega_t}$ denotes the adjoint of $\omega_t\wedge$; thus we have $$[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_{[V_j, \bar V_k]_{1,0}}] =[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}},[i{\ensuremath{\overline\partial}}c_{j\bar k}, \Lambda_{\omega_t}]].$$ The super Jacobi identity gives $$[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}},[i{\ensuremath{\overline\partial}}c_{j\bar k}, \Lambda_{\omega_t}]]=[[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}},i{\ensuremath{\overline\partial}}c_{j\bar k}], \Lambda_{\omega_t}]- [i{\ensuremath{\overline\partial}}c_{j\bar k}, [\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \Lambda_{\omega_t}]].$$ Noticing that $[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}},i{\ensuremath{\overline\partial}}c_{j\bar k}]= i\partial{\ensuremath{\overline\partial}}c_{j\bar k}$ and using the Kähler identity $[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \Lambda_{\omega_t}]=-i{\ensuremath{\overline\partial}}^*$, we get $$[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}},[i{\ensuremath{\overline\partial}}c_{j\bar k}, \Lambda_{\omega_t}]]=[i\partial{\ensuremath{\overline\partial}}c_{j\bar k}, \Lambda_{\omega_t}]-[{\ensuremath{\overline\partial}}c_{j\bar k}, {\ensuremath{\overline\partial}}^*].$$ Thus the lemma follows from 
$$[i\partial{\ensuremath{\overline\partial}}c_{j\bar k}, \Lambda_{\omega_t}] \mathbf 1=0$$ and $$
[{\ensuremath{\overline\partial}}c_{j\bar k}, {\ensuremath{\overline\partial}}^*] \mathbf 1= {\ensuremath{\overline\partial}}^* ({\ensuremath{\overline\partial}}c_{j\bar k} \wedge \mathbf 1)= {\ensuremath{\overline\partial}}^* {\ensuremath{\overline\partial}}c_{j\bar k}\, \mathbf 1=\Box c_{j\bar k}\,\mathbf 1.$$
The main idea is to use the Lie-derivative formulation of the curvature $$[[d^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_V], [d^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_W]]=[d^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_{[V, W]}] +\Theta^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V, W).$$ If $V=V_j$ and $W=\overline{V^\phi}$, then the degree $(0,0)$-component of the left hand side is equal to $$[D_{V_j}, D_{\overline{V^\phi}}]+[\kappa_j, \overline{\kappa^\phi}], \ \ \ \ \text{where}\ D_{V_j}:=[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_{V_j}], \ D_{\overline{V^\phi}}=[{\ensuremath{\overline\partial}}, \delta_{\overline{V^\phi}}].$$ Thus we get $$[D_{V_j}, D_{\overline{V^\phi}}] \mathbf 1+[\kappa_j, \overline{\kappa^\phi}]\mathbf 1=[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_{[V_j, \overline{V^\phi}]_{1,0}}]\mathbf 1+ \Theta^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V_j , \overline{V^\phi})\mathbf 1.$$ Notice that $D_{\overline{V^\phi}}\mathbf 1=0$; thus gives $$[D_{V_j}, D_{\overline{V^\phi}}] \mathbf 1=0.$$ Now we know that follows from the following two lemmas.
\[le:sch-21\] $[\kappa_j, \overline{\kappa^\phi}]\mathbf 1=-\langle \kappa_j , \kappa^\phi\rangle_{\omega_t} \mathbf 1$.
For bidegree reasons, we have $[\kappa_j, \overline{\kappa^\phi}]\mathbf 1=-\overline{\kappa^\phi} \kappa_j \mathbf 1$ and $$\langle \kappa_j \mathbf 1 , \mathbf 1\rangle_{\omega_t}=0.$$ Applying the Lie-derivative $L_{\overline{V^\phi}}:=[d, \delta_{\overline{V^\phi}}]$ to the above identity, we get (again for bidegree reasons) $$0=\langle \overline{\kappa^\phi}\kappa_j \mathbf 1 , \mathbf 1\rangle_{\omega_t}+\langle \kappa_j \mathbf 1 , \star^{-1}\kappa^\phi\star\mathbf 1\rangle_{\omega_t},$$ where $\star$ denotes the Hodge star operator. The proof of Proposition 5.5 in [@Wang17] implies $\star^{-1}\kappa^\phi\star=-\kappa^\phi$. Thus $\overline{\kappa^\phi}\kappa_j \mathbf 1=\langle \kappa_j , \kappa^\phi\rangle_{\omega_t} \mathbf 1$ and the lemma follows.
\[le:sch21\] $[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_{[V_j, \overline{V^\phi}]_{1,0}}]\mathbf 1={\ensuremath{\overline\partial}}^*(\kappa_j\cdot \partial\bar \phi)\mathbf 1$.
The proof is very similar to that of Lemma \[le:sch1\]. Since $$[V_j, \overline{V^\phi}] \, \rfloor\, \omega=L_{V_j}(\overline{V^\phi} \, \rfloor\, \omega)- \overline{V^\phi}\, \rfloor\, L_{V_j} \omega,$$ $(L_{V_j} \omega)|_{X_t}=0$ and $(\overline{V^\phi} \, \rfloor\, \omega)|_{X_t}=-i\partial\bar\phi$, we get $$[V_j, \overline{V^\phi}]_{1,0} \, \rfloor\, \omega_t=-i \kappa_j \cdot \partial\bar \phi,$$ which implies $$\delta_{[V_j, \overline{V^\phi}]_{1,0}}=[-i\kappa_j \cdot \partial\bar \phi, \Lambda_{\omega_t}].$$ Thus the super Jacobi identity gives $$[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}},\delta_{[V_j, \overline{V^\phi}]_{1,0}}]=[[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, -i\kappa_j \cdot \partial\bar \phi], \Lambda_{\omega_t}]- [-i\kappa_j \cdot \partial\bar \phi, [\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \Lambda_{\omega_t}]].$$ A similar argument as in the proof of Lemma \[le:sch1\] gives the lemma.
As in the proof of , put $$D_{V^\phi}:=[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_{V^\phi}], \ D_{\overline{V^\psi}}:=[{\ensuremath{\overline\partial}}, \delta_{\overline{V^\psi}}].$$ Then $$[D_{V^\phi}, D_{\overline{V^\psi}}] \mathbf 1+[\kappa^\phi, \overline{\kappa^\psi}]\mathbf 1=[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_{[V^\phi, \overline{V^\psi}]_{1,0}}]\mathbf 1+ \Theta^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V^\phi , \overline{V^\psi})\mathbf 1.$$ As in Lemma \[le:sch-21\], we have $$[\kappa^\phi, \overline{\kappa^\psi}]\mathbf 1= -\langle \kappa^\phi , \kappa^\psi\rangle_{\omega_t} \mathbf 1.$$ Also, as in Lemma \[le:sch21\], $$[V^\phi, \overline{V^\psi}]_{1,0} \, \rfloor\, \omega_{X_t}=-i \kappa^\phi(\partial\bar \psi)$$ gives $$[\partial^{-K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}, \delta_{[V^\phi, \overline{V^\psi}]_{1,0}}]\mathbf 1={\ensuremath{\overline\partial}}^* (\kappa^\phi\cdot \partial \bar\psi).$$ Thus follows from the following lemma.
$[D_{V^\phi}, D_{\overline{V^\psi}}] \mathbf 1=\langle {\ensuremath{\overline\partial}}\Box\phi, {\ensuremath{\overline\partial}}\psi\rangle_{\omega_t} \mathbf 1$.
As in Lemma \[le:sch21\], the identities $$V^\phi\, \rfloor\, \omega_{X_t}=i{\ensuremath{\overline\partial}}\phi, \ \ \ V^\psi\, \rfloor\, \omega_{X_t}=i{\ensuremath{\overline\partial}}\psi$$ together give $$D_{V^\phi} \mathbf 1=-{\ensuremath{\overline\partial}}^*{\ensuremath{\overline\partial}}\phi \mathbf 1=-\Box \phi \mathbf 1, \ \ \ \delta_{\overline{V^\psi}}=[-i\partial\bar\psi, \Lambda_{\omega_t}].$$ Thus $$[D_{V^\phi}, D_{\overline{V^\psi}}] \mathbf 1=D_{\overline{V^\psi}} (\Box \phi\mathbf 1)=[{\ensuremath{\overline\partial}}, [-i\partial\bar\psi, \Lambda_{\omega_t}]] (\Box \phi\mathbf 1).$$ Applying the super Jacobi identity $$[{\ensuremath{\overline\partial}}, [-i\partial\bar\psi, \Lambda_{\omega_t}]] =[[{\ensuremath{\overline\partial}},-i\partial\bar\psi], \Lambda_{\omega_t} ]-[-i\partial\bar\psi, [{\ensuremath{\overline\partial}}, \Lambda_{\omega_t}]],$$ we get $$[{\ensuremath{\overline\partial}}, [-i\partial\bar\psi, \Lambda_{\omega_t}]] (\Box \phi\mathbf 1)=-i\partial\bar\psi \wedge \Lambda_{\omega_t} ({\ensuremath{\overline\partial}}\Box\phi \mathbf 1)=\langle {\ensuremath{\overline\partial}}\Box\phi, {\ensuremath{\overline\partial}}\psi\rangle_{\omega_t} \mathbf 1.$$ Thus the lemma follows.
Curvature of the relative canonical bundle
------------------------------------------
A direct consequence of is the following *average horizontal positivity of the relative canonical line bundle*:
For any relative Kähler fibration, we have $$\int_{X_t} \Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V_j ,\bar V_j) \frac{\omega_t^n}{n!}= ||\kappa_j||^2=|\partial/\partial t^j|^2_{\rm DF} \geq 0.$$ Assume further that the fibration is Poisson–Kähler. Then pointwise positivity also holds: $$\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V_j ,\bar V_j)=|\kappa_j|^2_{\omega_t}\geq 0.$$ Moreover, we have strict positivity if the Kodaira–Spencer map is injective.
Another consequence of is the following formula proved by Fujiki and Schumacher (see [@FS] or Lemma 3.8 (3.43) in [@Wan1] for another proof).
The fundamental form of the non-harmonic Weil–Petersson metric $$\omega_{\rm DF}:=i \sum \langle \kappa_j, \kappa_k\rangle \, dt^j \wedge d\bar t^k$$ can be written as $$\label{eq:FS-great}
\omega_{\rm DF}=p_*(i \Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}} \wedge \omega_n)+ p_*(S\cdot \omega_{n+1}), \ \ \omega_k:=\omega^k/ k!,$$ where $S$ denotes the scalar curvature of the fibers defined by $$S|_{X_t}\cdot (\omega_t)_n= -i\Theta^{K_{X_t}}\wedge (\omega_t)_{n-1}.$$
By , we have $$\langle \kappa_j, \kappa_k\rangle=p_*\left(\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V_j ,\bar V_k) \omega_n\right)= p_*\left((\delta_{\bar V_k} \delta_{V_j} \Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}) \omega_n\right).$$ Notice that $\delta_{V_j} \omega=i \sum c_{j\bar k} \, d\bar t^k$ gives $$(\delta_{V_j} \Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}) \omega_n=\delta_{V_j} (\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}\wedge\omega_n)
-i\sum c_{j\bar k} \, d\bar t^k \wedge \Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}} \wedge \omega_{n-1}.$$ Thus we have $$p_*\left((\delta_{\bar V_k} \delta_{V_j} \Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}) \omega_n\right)=p_*\left(\delta_{\bar V_k} \delta_{V_j}( \Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}} \wedge\omega_n)\right)+p_*( c_{j\bar k} \cdot S\cdot \omega_n).$$ Hence the theorem follows from $$i\sum dt^j \wedge d\bar t^k \wedge p_*\left(\delta_{\bar V_k} \delta_{V_j}( \Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}} \wedge\omega_n)\right)= p_*\left( i\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}} \wedge\omega_n\right)$$ and $i\sum c_{j\bar k} \, dt^j \wedge d\bar t^k \wedge\omega_n =\omega_{n+1}$.
Integrating along the fibers gives $$\label{eq:0306-1}
p_*(\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V_j ,\bar V^\phi) \omega_n)= \langle \kappa_j, \kappa^\phi\rangle,$$ which can be used to prove the following variational formula for the scalar curvature.
\[pr:do\] $ \langle \kappa_j, \kappa^\phi\rangle= \langle L_{V_j} S, \phi\rangle$.
Since $\overline{V^\phi}$ is vertical, we have $\delta_{\overline{V^\phi}} (\omega_n \wedge \delta_{V_j}\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}})=0$ on fibers, which gives $$p_*(\omega_n \wedge \delta_{\overline{V^\phi}} \delta_{V_j}\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}} )=-p_*((\delta_{\overline{V^\phi}} \omega_n)\wedge \delta_{V_j}\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}})=p_*(i\partial\bar\phi \wedge \omega_{n-1}\wedge \delta_{V_j}\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}).$$ Thus gives $$\langle \kappa_j, \kappa^\phi\rangle= p_*(i\partial\bar\phi \wedge \omega_{n-1}\wedge \delta_{V_j}\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}})=p_*(-i\bar\phi\, \partial\delta_{V_j}\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}} \wedge \omega_{n-1}),$$ where the second identity follows from Stokes' formula. Since $$p_*(-i\bar\phi\, \partial\delta_{V_j}\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}} \wedge \omega_{n-1})=p_*(-i\bar\phi \, L_{V_j}(\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}} \wedge \omega_{n-1}))=\langle L_{V_j} S, \phi\rangle,$$ our lemma follows.
**Remark**: The Poisson–Kähler fibration case of the above formula is equivalent to the fact that the “*scalar curvature can be seen as a moment map on the space of compatible complex structures*” (see [@D97] for the details).
Our last remark is the following: if we integrate along the fibers then we get $$\label{eq:0306-3}
p_*(\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V^\phi ,\bar V^\psi) \omega_n)= \langle \kappa^\phi, \kappa^\psi\rangle-\langle \Box\phi, \Box\psi\rangle.$$ Since both $V^\phi$ and $\bar V^\psi$ are vertical, we have $$\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}}(V^\phi ,\bar V^\psi)= \Theta^{K_{X_t}}(V^\phi ,\bar V^\psi).$$ In particular, we get the following Bochner–Kodaira–Nakano type formulas:
\[pr4.10\] If $i\Theta^{K_{X_t}}=\pm \omega_t$ then $$\pm||{\ensuremath{\overline\partial}}\phi||^2=||\kappa^\phi||^2-||\Box\phi||^2.$$ If $i\Theta^{K_{X_t}}=0$ then $||\kappa^\phi||=||\Box\phi||$.
General relative Kähler fibration case
--------------------------------------
Our main result in this section is a curvature formula for the non-harmonic Weil–Petersson metric associated to a general relative Kähler fibration. We shall use the following notation $$f_{t^j}:=\frac{\partial f}{\partial t^j}, \ \ f_{\bar t^k}:=\frac{\partial f}{\partial \bar t^k}, \ \ f_{t^j\bar t^k}=\frac{\partial^2 f}{\partial t^j \partial \bar t^k}.$$ Denote by $\Theta_{j\bar k}^{{\ensuremath{\mathcal{B}}}} $ the curvature operators of the non-harmonic Weil–Petersson metric $\omega_{\rm DF}$ on ${\ensuremath{\mathcal{B}}}$. We want a formula for $$\langle \Theta^{\ensuremath{\mathcal{B}}}_{j\bar k}e_l, e_m \rangle_{\rm DF}, \ \ e_l:=\partial/ \partial t^l.$$ By the following Chern curvature formula $$\left( \langle e_l, e_m \rangle_{\rm DF}\right)_{t^j \bar t^k}=\langle D_j^{\ensuremath{\mathcal{B}}}e_l, D_k^{\ensuremath{\mathcal{B}}}e_m \rangle_{\rm DF} -\langle \Theta^{\ensuremath{\mathcal{B}}}_{j\bar k}e_l, e_m \rangle_{\rm DF},$$ where $D_j^{\ensuremath{\mathcal{B}}}$ denote components of the $(1,0)$-part of the Chern connection on $(T_{\ensuremath{\mathcal{B}}}, \omega_{\rm DF})$ and $$\langle e_l, e_m \rangle_{\rm DF}=\langle \kappa_l, \kappa_m \rangle,$$ we know it suffices to find a nice formula for $$\langle \kappa_l, \kappa_m \rangle_{j\bar k}.$$ For the first order derivative, by Theorem \[th:chern\] and , we have $$\label{eq4.14}
\langle \kappa_l, \kappa_m \rangle_j= \langle [D_{V_j},\kappa_l], \kappa_m \rangle + \langle \kappa_l, [D_{\bar V_j},\kappa_m] \rangle,$$ where $$D_{V_j}=[\partial, \delta_{V_j}], \ \ D_{\bar V_j}:=[{\ensuremath{\overline\partial}}, \delta_{\bar V_j}].$$
The second term in is a “bad” term; to handle it we need the following lemma.
\[le4.11\] $[D_{\bar V_j},\kappa_m] = -\kappa^{c_{m\bar j}}$, where $\kappa^{c_{m\bar j}}$ is defined in .
By Proposition \[pr:rkf\], we have $$[V_m, \bar V_{j}]_{1,0} \, \rfloor\, \omega|_{X_t}= i\,{\ensuremath{\overline\partial}}c_{m\bar j}.$$ Notice that $[D_{\bar V_j},\kappa_m] $ is equal to the degree $(-1,1)$ part of $[L_{\bar V_j}, L_{V_m}]=L_{[\bar V_j, V_m]}$, thus $$[D_{\bar V_j},\kappa_m] =-[{\ensuremath{\overline\partial}}, \delta_{[V_m, \bar V_{j}]_{1,0} }]=-\kappa^{c_{m\bar j}}.$$
Applying partial derivatives to and using the above lemma, we have $$\langle \kappa_l, \kappa_m \rangle_{j\bar k}= \langle [D_{V_j},\kappa_l], \kappa_m \rangle_{\bar k} - \langle \kappa_l, \kappa^{c_{m\bar j}} \rangle_{\bar k}.$$
\[le:312\] $ \langle [D_{V_j},\kappa_l], \kappa_m \rangle_{\bar k} $ can be written as $$- \langle [\Theta_{j\bar k},\kappa_l], \kappa_m \rangle- \langle [D_{V_j}, \kappa^{c_{l\bar k}}], \kappa_m \rangle+ \langle [D_{V_j},\kappa_l], [D_{V_k},\kappa_m] \rangle ,$$ where $$\Theta_{j\bar k}:=[D_{V_j}, D_{\bar V_k}]=[\partial, \delta_{V^{c_{j\bar k}}}]- [{\ensuremath{\overline\partial}}, \delta_{\bar V^{c_{k\bar j}}}]+[\kappa^*_k , \kappa_j].$$
By Theorem \[th:chern\], we have $$\langle [D_{V_j},\kappa_l], \kappa_m \rangle_{\bar k}= \langle [D_{\bar V_k}, [D_{V_j},\kappa_l]], \kappa_m \rangle +\langle [D_{V_j},\kappa_l], [D_{V_k},\kappa_m] \rangle.$$ Applying the super Jacobi identity, we get $$[D_{\bar V_k}, [D_{V_j},\kappa_l]]=-[\Theta_{j\bar k}, \kappa_l]+ [D_{V_j}, [ D_{\bar V_k},\kappa_l]].$$ By Lemma \[le4.11\], we have $[ D_{\bar V_k},\kappa_l]=-\kappa^{c_{l\bar k}}$; moreover, Theorem \[th:curvature\] gives (for bidegree reasons) $$\Theta_{j\bar k}=[\partial, \delta_{[V_j, \bar V_k]_{1,0}}]+ [{\ensuremath{\overline\partial}}, \delta_{[V_j, \bar V_k]_{0,1}}]+ [\kappa_k^*, \kappa_j].$$ Notice that Proposition \[pr:rkf\] gives $[V_j, \bar V_k]=V^{c_{j\bar k}}-\bar V^{c_{k\bar j}}$. Thus the lemma follows.
\[le:313\] $\langle \kappa_l, \kappa^{c_{m\bar j}} \rangle_{\bar k}= -\langle \kappa^{c_{l\bar k}}, \kappa^{c_{m\bar j}} \rangle +\langle \kappa_l, [D_{V_k},\kappa^{c_{m\bar j}}] \rangle$ and $$\langle \kappa_l, [D_{V_k},\kappa^{c_{m\bar j}}] \rangle= \langle L_{V_l} S, c_{m\bar j} \rangle_{\bar k}+ \langle \kappa^{c_{l\bar k}},\kappa^{c_{m\bar j}} \rangle.$$
The first formula follows from Theorem \[th:chern\] and Lemma \[le4.11\]. The second formula is equivalent to the first since by Proposition \[pr:do\], we can replace $\langle \kappa_l, \kappa^{c_{m\bar j}} \rangle$ by $\langle L_{V_l} S, c_{m\bar j} \rangle$.
The above two lemmas imply that $$\langle \kappa_l, \kappa_m \rangle_{j\bar k} = -\, \langle [\Theta_{j\bar k},\kappa_l], \kappa_m \rangle- \langle [D_{V_j}, \kappa^{c_{l\bar k}}], \kappa_m \rangle+ \langle [D_{V_j},\kappa_l], [D_{V_k},\kappa_m] \rangle -\langle L_{V_l} S, c_{m\bar j} \rangle_{\bar k}.$$ Notice that the second formula in Lemma \[le:313\] gives $$\langle [D_{V_j}, \kappa^{c_{l\bar k}}], \kappa_m \rangle=\langle c_{l\bar k}, L_{V_m} S \rangle_j + \langle \kappa^{c_{l\bar k}},\kappa^{c_{m\bar j}} \rangle.$$ Thus we get $$\begin{aligned}
\langle \kappa_l, \kappa_m \rangle_{j\bar k} & = & -\, \langle [\Theta_{j\bar k},\kappa_l], \kappa_m \rangle- \langle c_{l\bar k}, L_{V_m} S \rangle_j+ \langle [D_{V_j},\kappa_l], [D_{V_k},\kappa_m] \rangle \\
& & -\, \langle L_{V_l} S, c_{m\bar j} \rangle_{\bar k}- \langle \kappa^{c_{l\bar k}},\kappa^{c_{m\bar j}} \rangle . \end{aligned}$$ The key lemma is the following
\[le:314\] $\langle [\Theta_{j\bar k},\kappa_l], \kappa_m \rangle$ can be written as $$-\langle [ \kappa_k^*, \kappa_j], [ \kappa_l^*, \kappa_m] \rangle + \langle c_{j\bar k}, L_{V_m} S \rangle_l + \langle \kappa^{c_{j\bar k}},\kappa^{c_{m\bar l}} \rangle -\langle L_{V_l} c_{j\bar k}, L_{V_m} S \rangle.$$
By Lemma \[le:312\], we have $$[\Theta_{j\bar k},\kappa_l]= [D_{V^{c_{j\bar k}}}, \kappa_l] - [D_{\bar V^{c_{k\bar j}}}, \kappa_l] + [[ \kappa_k^*, \kappa_j], \kappa_l].$$ Notice that the degree $(-1,1)$ part of $[L_{V^{c_{j\bar k}}}, L_{V_l}]=L_{[V^{c_{j\bar k}}, V_l]}$ gives $$[D_{V^{c_{j\bar k}}}, \kappa_l]= -[\kappa^{c_{j\bar k}}, D_{V_l}] +[{\ensuremath{\overline\partial}}, \delta_{[V^{c_{j\bar k}}, V_l]}].$$ A similar argument gives $$[D_{\bar V^{c_{k\bar j}}}, \kappa_l]= -[{\ensuremath{\overline\partial}}, \delta_{[V_l, \bar V^{c_{k\bar j}}]_{1,0}}].$$ Thus we have $$[\Theta_{j\bar k},\kappa_l]= [D_{V_l}, \kappa^{c_{j\bar k}}] +[{\ensuremath{\overline\partial}}, \delta_{[V^{c_{j\bar k}}, V_l]}]+ [{\ensuremath{\overline\partial}}, \delta_{[V_l, \bar V^{c_{k\bar j}}]_{1,0}}]+[[ \kappa_k^*, \kappa_j], \kappa_l].$$ Notice that $$\langle [[ \kappa_k^*, \kappa_j], \kappa_l], \kappa_m \rangle=-\langle [ \kappa_k^*, \kappa_j], [\kappa^*_l, \kappa_m] \rangle,$$ and Lemma \[le:313\] gives $$\langle [D_{V_l}, \kappa^{c_{j\bar k}}] ,\kappa_m \rangle=\langle c_{j\bar k}, L_{V_m} S \rangle_l + \langle \kappa^{c_{j\bar k}},\kappa^{c_{m\bar l}} \rangle.$$ Thus by Proposition \[pr:do\] it suffices to show that $$\label{eq:keykey}
[{\ensuremath{\overline\partial}}, \delta_{[V^{c_{j\bar k}}, V_l]}]+ [{\ensuremath{\overline\partial}}, \delta_{[V_l, \bar V^{c_{k\bar j}}]_{1,0}}]=-\kappa^{L_{V_l} c_{j\bar k}}.$$ Recall that the proof of Lemma \[le:sch21\] gives $$[V_l, \bar V^{c_{k\bar j}}]_{1,0} \, \rfloor \, \omega_t=-i \kappa_l \cdot \partial c_{j\bar k}.$$ Since $$[V^{c_{j\bar k}}, V_l]\, \rfloor \, \omega=L_{V^{c_{j\bar k}}} ( V_l\, \rfloor \, \omega )- V_l \, \rfloor \, L_{V^{c_{j\bar k}}} \omega$$ and each $V_l$ is horizontal, we have $$[V^{c_{j\bar k}}, V_l] \, \rfloor \, \omega_t=i \delta_{V_l} {\ensuremath{\overline\partial}}\partial c_{j\bar k}=i \kappa_l \cdot \partial c_{j\bar k} -i {\ensuremath{\overline\partial}}( L_{V_l} c_{j\bar k}).$$ Thus follows.
The above lemma implies
\[th:main-r\] For any relative Kähler fibration, the following second-order variation formula for the non-harmonic Weil–Petersson metric holds: $$\begin{aligned}
\langle \kappa_l, \kappa_m \rangle_{j\bar k} & = & \langle [ \kappa_k^*, \kappa_j], [ \kappa_l^*, \kappa_m] \rangle -\langle c_{j\bar k}, L_{V_m} S \rangle_l -\langle \kappa^{c_{j\bar k}},\kappa^{c_{m\bar l}} \rangle + \langle L_{V_l} c_{j\bar k}, L_{V_m} S \rangle \\
& & -\, \langle c_{l\bar k}, L_{V_m} S \rangle_j+ \langle [D_{V_j},\kappa_l], [D_{V_k},\kappa_m] \rangle \\
& & -\, \langle L_{V_l} S, c_{m\bar j} \rangle_{\bar k}- \langle \kappa^{c_{l\bar k}},\kappa^{c_{m\bar j}} \rangle.\end{aligned}$$ Assume further that $dS\equiv 0$. Then $$\langle \kappa_l, \kappa_m \rangle_{j\bar k}=\langle [ \kappa_k^*, \kappa_j], [ \kappa_l^*, \kappa_m] \rangle+ \langle [D_{V_j},\kappa_l], [D_{V_k},\kappa_m] \rangle-\langle \kappa^{c_{j\bar k}},\kappa^{c_{m\bar l}} \rangle -\langle \kappa^{c_{l\bar k}},\kappa^{c_{m\bar j}} \rangle.$$
**Remark 1**: In case the fibration is Poisson–Kähler, we have $c_{j\bar k}\equiv 0$, thus the above theorem gives $$\langle \kappa_l, \kappa_m \rangle_{j\bar k} =\langle [ \kappa_k^*, \kappa_j], [ \kappa_l^*, \kappa_m] \rangle+ \langle [D_{V_j},\kappa_l], [D_{V_k},\kappa_m] \rangle,$$ which implies Theorem A.
**Remark 2**: In case the canonical line bundle of each fiber is positive, by the Aubin–Yau theorem, one may choose $\omega$ such that $$\omega=i\Theta^{K_{{\ensuremath{\mathcal{X}}}/{\ensuremath{\mathcal{B}}}}},$$ which implies $S\equiv -n$. Thus the above theorem gives (for simplicity’s sake, let us assume that $\dim {\ensuremath{\mathcal{B}}}=1$, write $\kappa_1$ as $\kappa$, $D_{V_1}$ as $D_t$ and $c_{1\bar 1}$ as $c$) $$\left(||\kappa||^2\right)_{t\bar t}=||[\kappa^*, \kappa]||^2 +||[D_t, \kappa]||^2-2||\kappa^c||^2 \geq \frac{2}{n} ||\kappa||^4-2||\kappa^c||^2,$$ where we use the proof of Theorem \[th:wang-1\] ii) in the last inequality. By Proposition \[pr4.10\], we have $$||\kappa^c||^2=||{\ensuremath{\overline\partial}}c||^2+||\Box c||^2=((\Box+1) c, \Box c).$$ Applying the Schumacher formula $(\Box+1) c=|\kappa|^2$, we get $$||\kappa^c||^2=||\kappa||^4-(|\kappa|^2, c)=||\kappa||^4-(|\kappa|^2, (\Box+1)^{-1} |\kappa|^2).$$ In particular, in case $n=1$ (each fiber will be a compact Riemann surface with genus no less than two) we get $$\left(||\kappa||^2\right)_{t\bar t}\geq 2 (|\kappa|^2, c)= 2 (|\kappa|^2, (\Box+1)^{-1} |\kappa|^2) \geq 0,$$ which gives another proof of the Ahlfors negative curvature theorem for the Teichmüller space.
Examples of Poisson–Kähler fibrations
=====================================
Family of elliptic curves
-------------------------
For each $t\in \mathbb H=\{t\in\mathbb C: {\rm Im}\, t>0\}$, $$X_t:=\mathbb C/{(\mathbb Z+t\mathbb Z)},$$ is an elliptic curve. Consider the following $\mathbb R$-linear quasi-conformal mapping $$f^t: \mathbb C\to \mathbb C$$ defined by $$\label{eq:5.1}
f^t(1)=1, \ \ f^t(t)=i.$$ Since $f^t$ is $\mathbb R$-linear, implies that each $f^t, \ t\in\mathbb H$, also induces the corresponding mapping on the quotient space. We will still denote it by $f^t$: $$f^t: X_t\to X_i.$$ Moreover, by a direct computation, gives $$f^t(\zeta)=z=\frac{i-\bar t}{t-\bar t} \,\zeta+\frac{t-i}{t-\bar t}\, \overline{\zeta}.$$ Now $\{f^t\}_{t\in\mathbb H}$ defines a smooth trivialization of $${\ensuremath{\mathcal{X}}}:=\{X_t\}_{t\in\mathbb H} \simeq (\mathbb H\times \mathbb C)/\mathbb Z^2$$ as follows $$f: {\ensuremath{\mathcal{X}}}\to \mathbb H\times X_i, \ \ \ f(t,\zeta):=(t, f^t(\zeta)).$$ It is trivial that $$\omega_i:= i \,dz \wedge d\bar z$$ defines a relative Kähler form on $\mathbb H\times X_i$. Putting $$\omega:=f^* \omega_i,$$ we have the following (a similar result also holds for Abelian varieties):
$\omega$ defines a Poisson–Kähler structure on the following canonical fibration $$p: \mathcal X \to \mathbb H, \ \ \ p(X_t)=t.$$
$\omega_i^2=0$ gives $\omega^2=0$. Thus it suffices to show that $\omega$ is of degree $(1,1)$ and positive on each fiber, which follows by a direct computation.
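The explicit formula for $f^t$ above can be sanity-checked numerically; the snippet below (ours, not part of the source) verifies the normalization $f^t(1)=1$, $f^t(t)=i$ and the $\mathbb R$-linearity.

```python
# Check (a sketch, not from the paper) that the R-linear map
#   f^t(zeta) = ((i - conj(t))/(t - conj(t))) zeta + ((t - i)/(t - conj(t))) conj(zeta)
# satisfies the normalization f^t(1) = 1 and f^t(t) = i.
def f(t, zeta):
    tb = t.conjugate()
    return (1j - tb) / (t - tb) * zeta + (t - 1j) / (t - tb) * zeta.conjugate()

t = 0.3 + 1.7j  # an arbitrary point of the upper half plane
assert abs(f(t, 1 + 0j) - 1) < 1e-12
assert abs(f(t, t) - 1j) < 1e-12
# R-linearity: f^t(a*u + b*v) = a*f^t(u) + b*f^t(v) for real a, b
u, v, a, b = 0.4 - 2.1j, 1.3 + 0.2j, 2.0, -0.7
assert abs(f(t, a * u + b * v) - (a * f(t, u) + b * f(t, v))) < 1e-12
```

Since $f^t$ maps the lattice generators $1, t$ to $1, i$, it descends to the quotient tori as claimed.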
**Remark**: One may locally write $$\omega=i\partial{\ensuremath{\overline\partial}}\phi,$$ where $$\phi:= \frac{i\cdot|\zeta-\bar \zeta|^2 }{ t-\bar t}$$ is a locally defined function on ${\ensuremath{\mathcal{X}}}$ (up to a constant, it is equal to the local weight of the canonical metric on the theta line bundle). But the corresponding curvature form $$\omega:= i\partial{\ensuremath{\overline\partial}}\phi$$ is a globally defined relative Kähler form on ${\ensuremath{\mathcal{X}}}$. Notice that $$\phi= 2x^2/s, \ \ x:={\rm Im}\, \zeta, \ \ s:={\rm Im}\, t.$$ For each $s>0$, think of $ 2x^2/s$ as a convex function of $x\in \mathbb R$. Since the Legendre transform of $2x^2/s$ defined by $$y\mapsto \sup_{x\in\mathbb R} \left(xy-2x^2/s\right)$$ is $ sy^2/8$, which is a linear function of $s$, we say that $$\{2x^2/s\}_{s>0}$$ is a *geodesic ray* in the space of convex functions on $\mathbb R$ (see section 5.3 for generalizations).
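The Legendre-transform computation in the remark is easy to confirm numerically; the brute-force sketch below (ours, with grid bounds chosen ad hoc) reproduces the closed form $sy^2/8$.

```python
import numpy as np

def legendre(y, s, xs):
    # sup_x (x*y - 2*x^2/s), approximated by a maximum over the grid xs
    return np.max(xs * y - 2 * xs**2 / s)

xs = np.linspace(-50.0, 50.0, 400_001)
for s in (0.5, 1.0, 3.0):
    for y in (-2.0, 0.0, 1.5):
        assert abs(legendre(y, s, xs) - s * y**2 / 8) < 1e-6
```

The linearity of the transform in $s$ is exactly the geodesic-ray statement of the remark.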
Kähler metric geodesics
-----------------------
Let $(X, \omega)$ be a fixed $n$-dimensional compact Kähler manifold. Consider the following Mabuchi space of Kähler potentials $$\mathcal K:=\{\phi\in C^\infty (X, \mathbb R): \omega+i\partial{\ensuremath{\overline\partial}}\phi >0 \}$$ on $X$. Fix $\phi_0, \phi_1$ in $\mathcal K$. If there exists a smooth function $\phi$ on a neighborhood of the closure of $${\ensuremath{\mathcal{X}}}:=\mathbb H_{0,1} \times X, \ \ \ \mathbb H_{0,1}:= \{ \tau \in \mathbb C: 0<{\rm Re} \, \tau<1 \},$$ such that $\phi(0,x)=\phi_0(x)$, $\phi(1,x)=\phi_1(x)$, $\phi$ does not depend on the imaginary part of $\tau$ and $$(\omega+i\partial{\ensuremath{\overline\partial}}\phi)^{n+1} \equiv 0 \ \text{on}\ {\ensuremath{\mathcal{X}}}, \ \ \ \phi(t, \cdot)\in \mathcal K,$$ then we say that $\{\phi(t, \cdot)\}_{t\in [0,1]}$ is a smooth geodesic in $\mathcal K$ connecting $\phi_0, \phi_1$. Associated to a smooth geodesic, the trivial fibration $$p: ({\ensuremath{\mathcal{X}}}, \omega+i\partial{\ensuremath{\overline\partial}}\phi) \to \mathbb H_{0,1}$$ is Poisson–Kähler.
Convex function geodesics
-------------------------
If $\phi$ is a smooth strictly convex function on $\mathbb R^n$ then we know that its gradient map $$\nabla \phi : x\mapsto (\phi_{x_1}(x), \cdots, \phi_{x_n}(x)), \ \ \phi_{x_j}:=\partial\phi/\partial x_j,$$ defines a diffeomorphism from $\mathbb R^n$ onto an open set $$A_\phi:= \nabla \phi(\mathbb R^n)$$ in $\mathbb R^n$. Moreover, we have the following
$A_\phi$ is convex.
Assume that $y_0, y_1$ lie in $A_\phi$; we need to prove that $y_t:= ty_1 + (1-t) y_0$ lies in $A_\phi$ for every $t$ in $[0,1]$. Consider $$\phi^t(x):= \phi(x)-x\cdot y_t;$$ then $$\phi^t= t\phi^1 + (1-t) \phi^0.$$ Notice that both $\phi^1$ and $\phi^0$ are proper (since they are smooth strictly convex functions with critical points); thus each $\phi^t$ is proper and attains its minimum at a critical point, say $x_t$, which implies that $\nabla\phi(x_t)=y_t$. Hence $y_t\in A_\phi$.
**Remark**: The above proof also implies that $$A_{\phi+\psi}=\{x+y \in \mathbb R^n: x\in A_\phi, \ y\in A_\psi\};$$ we call the right-hand side the Minkowski sum of $A_\phi$ and $A_\psi$ and write it as $A_{\phi}+A_{\psi}$.
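A concrete one-dimensional illustration of the gradient-image statement (our example, not from the source): $\phi(x)=\sqrt{1+x^2}$ is smooth and strictly convex, its gradient image is the bounded convex set $A_\phi=(-1,1)$, and every $y\in(-1,1)$ is attained at the critical point $x=y/\sqrt{1-y^2}$ of $\phi(x)-xy$.

```python
import math

# phi(x) = sqrt(1 + x^2): strictly convex since phi''(x) = (1 + x^2)^(-3/2) > 0,
# with gradient phi'(x) = x / sqrt(1 + x^2) taking values exactly in (-1, 1).
def phi_prime(x):
    return x / math.sqrt(1 + x * x)

for y in (-0.99, -0.3, 0.0, 0.5, 0.9):
    x = y / math.sqrt(1 - y * y)   # the critical point of phi(x) - x*y
    assert abs(phi_prime(x) - y) < 1e-12
    assert -1 < phi_prime(x) < 1
```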
Let $A$ be a bounded open convex set in $\mathbb R^n$. A smooth strictly convex function $\phi$ on $\mathbb R^n$ is said to be of type $A$ if $A_\phi=A$. We denote by $\mathcal C_A$ the space of type $A$ functions.
**Remark 1**: $\mathcal C_A$ is not empty. In fact, if $\psi$ is a smooth strictly convex function on $A$ that tends to infinity at the boundary of $A$, then its Legendre transform $$\psi^*(x):=\sup_{y\in A} x\cdot y-\psi(y), \ \ \forall \ x\in \mathbb R^n,$$ lies in $\mathcal C_A$. The identity $A_{\phi+\psi}=A_\phi+A_\psi$ implies that $\mathcal C_A$ is a convex set.
**Remark 2**: The Legendre transform of $\phi \in \mathcal C_A$ is defined by $$\phi^*(y):= \sup_{x\in \mathbb R^n} x\cdot y-\phi(x), \ \ \forall \ y\in A.$$ We know that $\phi^*$ is smooth strictly convex on $A$. Moreover, if $\phi_0, \phi_1 \in \mathcal C_A$, then $$\label{eq:c-geo}
\phi: (t,x) \mapsto (t\phi^*_1+(1-t)\phi^*_0)^*(x)$$ satisfies $$MA (\phi)=0$$ on $[0,1] \times \mathbb R^n$, where $MA(\phi)$ denotes the determinant of the full Hessian of $\phi$.
We call $\phi$ defined in the geodesic between $\phi_0, \phi_1 \in \mathcal C_A$.
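For quadratics the geodesic can be written down explicitly, and the degeneracy $MA(\phi)=0$ of the full $(t,x)$-Hessian can be checked by finite differences. The example below is ours (the functions and step size are ad hoc): take $\phi_0=x^2/2$, $\phi_1=x^2$, whose Legendre duals are $y^2/2$ and $y^2/4$, so the geodesic is $\phi(t,x)=x^2/(2c(t))$ with $c(t)=1-t/2$.

```python
def phi(t, x):
    # geodesic between phi0 = x^2/2 and phi1 = x^2 via Legendre duals:
    # phi(t, .) is the Legendre transform of t*y^2/4 + (1-t)*y^2/2
    return x * x / (2.0 * (1.0 - t / 2.0))

def full_hessian_det(t, x, h=1e-4):
    # determinant of the (t, x)-Hessian by central second differences
    ptt = (phi(t + h, x) - 2 * phi(t, x) + phi(t - h, x)) / h**2
    pxx = (phi(t, x + h) - 2 * phi(t, x) + phi(t, x - h)) / h**2
    ptx = (phi(t + h, x + h) - phi(t + h, x - h)
           - phi(t - h, x + h) + phi(t - h, x - h)) / (4 * h**2)
    return ptt * pxx - ptx**2

for t in (0.1, 0.5, 0.9):
    for x in (-1.0, 0.3, 2.0):
        assert abs(full_hessian_det(t, x)) < 1e-4
```

Each $\phi(t,\cdot)$ is strictly convex in $x$, yet the full Hessian over $(t,x)$ is everywhere degenerate, which is the homogeneous Monge–Ampère equation $MA(\phi)=0$.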
Let ${\ensuremath{\mathcal{X}}}:= [0,1] \times \mathbb R^n \times \mathbb R^{n+1}\subset \mathbb C^{n+1}$ be the natural complexification of $[0,1] \times \mathbb R^n$. Thinking of $\phi$ as a function on ${\ensuremath{\mathcal{X}}}$, we see that $$p: ({\ensuremath{\mathcal{X}}}, i\partial{\ensuremath{\overline\partial}}\phi) \to {\ensuremath{\mathcal{B}}}, \ \ {\ensuremath{\mathcal{B}}}:=[0,1] \times \mathbb R \subset \mathbb C,$$ is a (non-proper) Poisson–Kähler fibration.
Hermitian form geodesics
------------------------
Denote by $\mathcal H$ the space of hermitian forms on $\mathbb C^n$. Let $\{e_j\}$ be the canonical basis of $\mathbb C^n$ then a hermitian form, say $\omega\in \mathcal H$, can be written as $$\omega=i\sum_{j,k=1}^n a_{j\bar k} \, e_j^* \wedge \overline{e_k^*},$$ where $A:=(a_{j\bar k})$ satisfies $$a_{j\bar k}=\overline{a_{k\bar j}}$$ and $\sum a_{j\bar k} \xi^j \bar \xi^k >0$ if $\xi\neq 0$. Thus we can identify $\omega$ with a hermitian matrix $A$. Now let $$\mathbb A:=\{A_t\}_{t\in [0,1]}$$ be a smooth family (smooth on a neighborhood of $[0,1]$) of hermitian matrices. We know that $\mathbb A$ defines a smooth metric on the trivial bundle $$p: {\ensuremath{\mathcal{X}}}\to {\ensuremath{\mathcal{B}}}, \ \ {\ensuremath{\mathcal{X}}}:=[0,1]\times \mathbb R \times \mathbb C^n, \ \ {\ensuremath{\mathcal{B}}}:=[0,1]\times \mathbb R\subset \mathbb C,$$ with Chern curvature $$\Theta_{tt}(\mathbb A) e_j=\sum (a_{j\bar k, t} a^{\bar k l})_t e_l= \sum (a_{j\bar k, tt} a^{\bar k l} - a_{j\bar k, t} a_{p\bar q, t}a^{\bar k p} a^{\bar q l})e_l,$$ where $(a^{\bar k l})$ denotes the inverse matrix of $(a_{j\bar k})$ and $f_{,t}$ denotes the derivative of $f$ with respect to $t$. Think of $$\phi(t,z):= \sum a_{j\bar k}(t) z^j\bar z^k$$ as a function on ${\ensuremath{\mathcal{X}}}$. Then $i\partial{\ensuremath{\overline\partial}}\phi$ defines a relative Kähler form on ${\ensuremath{\mathcal{X}}}$. A direct computation gives
\[pr5.3\] $\Theta_{tt}(\mathbb A) \equiv 0$ if and only if $(i\partial{\ensuremath{\overline\partial}}\phi)^{n+1} \equiv 0$.
Now we know that if $\mathbb A$ is flat then $$p: ({\ensuremath{\mathcal{X}}}, i\partial{\ensuremath{\overline\partial}}\phi) \to {\ensuremath{\mathcal{B}}}$$ is a (non-proper) Poisson–Kähler fibration (a further study will be given in section 6).
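The second equality in the curvature formula for $\Theta_{tt}(\mathbb A)$ above combines the product rule with the standard derivative-of-inverse identity; spelled out in the same index conventions (our elaboration of the step), $$\partial_t\, a^{\bar k l}=-\sum_{p,q} a^{\bar k p}\, a_{p\bar q, t}\, a^{\bar q l}, \ \ \text{so} \ \ \left( a_{j\bar k, t}\, a^{\bar k l}\right)_t= a_{j\bar k, tt}\, a^{\bar k l}- a_{j\bar k, t}\, a_{p\bar q, t}\, a^{\bar k p}\, a^{\bar q l}.$$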
We say that $\mathbb A$ is the geodesic between $A_0$ and $A_1$ if $\Theta_{tt}(\mathbb A) \equiv 0$.
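In matrix form the flatness equation $\Theta_{tt}(\mathbb A)\equiv 0$ reads $A''=A'A^{-1}A'$, and the family $A(t)=A_0^{1/2}\big(A_0^{-1/2}A_1A_0^{-1/2}\big)^t A_0^{1/2}$ solves it, since $A'(t)A(t)^{-1}$ is then constant in $t$. This closed form is the well-known geodesic of the cone of positive definite Hermitian matrices; we bring it in for illustration (it is not stated in the source), and the sketch below checks the flatness numerically.

```python
import numpy as np

def funm_h(A, f):
    # apply a scalar function to a Hermitian matrix via its eigendecomposition
    w, U = np.linalg.eigh(A)
    return (U * f(w)) @ U.conj().T

def geo(A0, A1, t):
    # candidate geodesic A(t) = A0^{1/2} (A0^{-1/2} A1 A0^{-1/2})^t A0^{1/2}
    R = funm_h(A0, np.sqrt)
    Ri = np.linalg.inv(R)
    return R @ funm_h(Ri @ A1 @ Ri, lambda w: w**t) @ R

rng = np.random.default_rng(0)
def rand_pd(n):
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return B @ B.conj().T + n * np.eye(n)   # Hermitian positive definite

A0, A1, t, h = rand_pd(3), rand_pd(3), 0.3, 1e-4
A   = geo(A0, A1, t)
Ad  = (geo(A0, A1, t + h) - geo(A0, A1, t - h)) / (2 * h)
Add = (geo(A0, A1, t + h) - 2 * A + geo(A0, A1, t - h)) / h**2
# Theta_tt = (A' A^{-1})' = A'' A^{-1} - A' A^{-1} A' A^{-1} should vanish
Theta = (Add - Ad @ np.linalg.inv(A) @ Ad) @ np.linalg.inv(A)
assert np.abs(Theta).max() < 1e-4
```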
**Remark**: Consider the partial (complex) Legendre transform of $\phi$ defined by $$\phi^*(t,w):=\sup_{z\in \mathbb C^n} 2 \,{\rm Re }\, z\cdot\bar w- \phi(t,z), \ \ z\cdot\bar w:=\sum z^k \bar w_k.$$ The supremum is attained at $z^j=\sum w_k a^{\bar k j}$; thus $$\phi^*(t,w)= \sum a^{\bar j k}(t) w_j \bar w_k$$ and we have (a consequence of Theorem 7.2 (i) in [@BCKR], or by a direct computation)
$(i\partial{\ensuremath{\overline\partial}}\phi)^{n+1} \equiv 0$ if and only if $(i\partial{\ensuremath{\overline\partial}}\phi^*)^{n+1} \equiv 0$.
**Remark**: In the real case, one may consider the space $\mathcal H_R$ of positive definite $n$ by $n$ real matrices (positive definite means symmetric with positive eigenvalues). Consider $$\psi(t,x):= \sum a_{jk}(t) x^j x^k;$$ we say that $\{(a_{jk}(t))\}_{t\in [0,1]} \subset \mathcal H_R$ is a geodesic between $(a_{jk}(0))$ and $(a_{jk}(1))$ in $\mathcal H_R$ if $\psi$ is smooth up to the boundary and $MA (\psi)\equiv 0$. This geodesic structure on $\mathcal H_R$ is *quite different* from the one on $\mathcal H$. In fact, consider the partial (real) Legendre transform of $\psi$: $$\psi^*(t,y):=\sup_{x\in \mathbb R^n} x\cdot y- \psi(t,x);$$ in contrast to the complex case, we have (see [@B-Toulouse])
$MA(\psi)\equiv 0$ if and only if $\psi^*$ is linear in $t$, i.e. $(\psi^*)_{tt}\equiv 0$.
**Remark 1**: The associated Poisson–Kähler fibration for a geodesic in $\mathcal H_R$ is $$p: ({\ensuremath{\mathcal{X}}}, i\partial{\ensuremath{\overline\partial}}\psi)\to {\ensuremath{\mathcal{B}}}, \ \ {\ensuremath{\mathcal{X}}}:= [0,1]\times \mathbb R^n\times \mathbb R^{n+1}, \ \ {\ensuremath{\mathcal{B}}}:= [0,1] \times \mathbb R,$$ where we think of $\psi$ as a function on ${\ensuremath{\mathcal{X}}}$.
**Remark 2**: Since the Legendre transform maps geodesics in $\mathcal H_R$ to lines, it is also natural to look at $$A(t):= \sum_{j=1}^N t^j A_j,$$ where $\{A_j\} \subset \mathcal H_R$ is a basis of the space of symmetric matrices. Then one may identify $\mathcal H_R$ with a convex cone $$T:= \{ t\in \mathbb R^N: A(t)\in \mathcal H_R\}.$$ Notice that the following function $$\phi_{BM}: t\mapsto -\log \det A(t)$$ is well defined on $T$. One version of the matrix form of the Brunn–Minkowski inequality is the following
$\phi_{BM}$ is strictly convex on $T$.
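A numerical illustration of the strict convexity (our sketch; the matrices and step size are ad hoc): along the segment $M(t)=(1-t)A+tB$ between two symmetric positive definite matrices $A\neq B$, the second derivative of $-\log\det$ equals $\mathrm{tr}\big((M^{-1}M')^2\big)>0$, which finite differences confirm.

```python
import numpy as np

rng = np.random.default_rng(1)
def rand_spd(n):
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)   # symmetric positive definite

A, B = rand_spd(4), rand_spd(4)      # the segment between them stays in the cone
f = lambda t: -np.log(np.linalg.det((1 - t) * A + t * B))

h = 1e-4
for t in (0.1, 0.5, 0.9):
    second = (f(t + h) - 2 * f(t) + f(t - h)) / h**2
    M, dM = (1 - t) * A + t * B, B - A
    X = np.linalg.solve(M, dM)       # X = M^{-1} M'
    assert abs(second - np.trace(X @ X)) < 1e-5
    assert second > 0
```

The trace is a sum of squares of real eigenvalues (X is similar to a symmetric matrix), so it vanishes only when $A=B$, which is the strictness.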
**Remark**: Think of $\phi_{BM}$ as a function on $$T_{\mathbb C}:=T\times \mathbb R^N\subset \mathbb C^N;$$ then $i\partial{\ensuremath{\overline\partial}}\phi_{BM}$ defines a Kähler metric on $T_{\mathbb C}$; we call it the *Weil–Petersson metric* on $T_{\mathbb C}$. A special case of [@Wang_flat] is the following (thanks to Berndtsson [@Bern-p], who introduced the result to us)
\[th:neg-convex\] $i\partial{\ensuremath{\overline\partial}}\phi_{BM}$ has negative curvature property.
Notice that up to a constant $i\partial{\ensuremath{\overline\partial}}\phi_{BM}$ is equal to the Hodge metric in [@Wang_flat] in this linear case, thus the main result in [@Wang_flat] applies. Another way of looking at the above theorem is to use the fact that $T_{\mathbb C}$ is isomorphic to a classical bounded symmetric domain and $i\partial{\ensuremath{\overline\partial}}\phi_{BM}$ is just the associated canonical metric, thus has the negative curvature property we need.
**Remark**: From [@D02], we know that the non-linear term of the Mabuchi functional in the toric variety case can be written as $$\mathcal M(u):=-\int_P \log \det (u_{j\bar k}) ,$$ where (we omit the Lebesgue measure) $P$ is a fixed polytope and $u$ is a smooth strictly convex function on $P$ such that $\nabla u (P)=\mathbb R^n$ and $\int_P \log \det (u_{j\bar k})$ is well defined (see section 3.3 in [@D02] for the details). Since under the moment map, Kähler metric geodesics correspond to linear combinations of $u$, we know that the matrix form of the Brunn–Minkowski inequality implies that $\mathcal M$ is convex along geodesics. In particular, $$\frac{d^2 \mathcal M(u^t)}{dt^2}= \int_P \left(- \log \det (u^t_{j\bar k})\right)_{tt}, \ \ u^t:=tu^1+(1-t) u^0.$$ Writing $$\rho(t, x)= \left(- \log \det (u^t_{j\bar k})\right)_{tt},$$ Theorem \[th:neg-convex\] gives $$(\log\rho)_{tt} \geq c \rho ,$$ where $c$ is a positive constant. Thus the Hölder inequality implies that $\log\frac{d^2 \mathcal M(u^t)}{dt^2} $ is also a convex function of $t$ and $$\left( \log\frac{d^2 \mathcal M(u^t)}{dt^2} \right)_{tt} \geq \frac{c}{|P|} \frac{d^2 \mathcal M(u^t)}{dt^2},$$ where $|P|$ denotes the Lebesgue measure of $P$.
Relation to stable vector bundles
=================================
We will prove the following theorem
\[thm6.1\] Let $E$ be a holomorphic vector bundle over a compact Kähler manifold ${\ensuremath{\mathcal{B}}}$. Let $P (E):= (E\setminus\{0\})/\mathbb C^*$ be the projectivization of $E$. Then TFAE:
- There exists a hermitian metric $h$ on $E$ such that $\Theta(E, h)=\alpha \text{Id}_E$ (i.e. $(E, h)$ is projectively flat; sometimes we write $\alpha \text{Id}_E$ as $\alpha \otimes \text{Id}_E$);
- There exists a relative Kähler form on $P(E)$ such that the natural projection $P(E)\to {\ensuremath{\mathcal{B}}}$ is Poisson–Kähler.
In case $\dim {\ensuremath{\mathcal{B}}}=1$, both are equivalent to polystability of $E$.
**Remark 1**: By [@Ko3 (2.3.4), (2.3.5) and Proposition 2.3.1 (b)], Theorem \[thm6.1\] implies:
If $p:P(E)\to \mc{B}$ is Poisson–Kähler and $\mc{B}$ is compact Kähler, then
- $c(E)=\left(1+\frac{c_1(E)}{r}\right)^r$;
- $\text{ch}(\text{End}(E))=r^2$.
**Remark 2**: In [@Aikou], T. Aikou considered the projectively flat holomorphic vector bundle from the view of complex Finsler geometry and proved that $E$ admits a projectively flat Hermitian metric if and only if $P(E)\to \mc{B}$ is a flat Kähler fibration (see [@Aikou Definition 1.2, Theorem 3.2]).
To prove Theorem \[thm6.1\], first let us recall the definition of projectively flat vector bundle. From [@Ko3 Corollary 1.2.7, Proposition 1.2.8], a complex vector bundle $$\pi: E\to {\ensuremath{\mathcal{B}}}$$ is said to be projectively flat if it admits a projectively flat connection, i.e. the associated curvature satisfies $$\begin{aligned}
\label{3.1}
\Theta^E=\alpha \text{Id}_E \end{aligned}$$ for some $2$-form $\alpha$. Moreover, given a smooth hermitian metric $h$ on $E$, we say that $(E,h)$ is projectively flat if the *Chern curvature* of $(E,h)$ satisfies (\[3.1\]) for some $(1,1)$-form $\alpha$ (see e.g. [@Ko3 Proposition 4.1.11]).
Let $\{s_\alpha\}_{\alpha=1}^r$ be a local holomorphic frame of $E$, denote the corresponding dual frame by $\{s_{\alpha}^*\}$. Then the hermitian metric $h$ is fully determined by $$h_{\alpha\b{\beta}}:=h(s_{\alpha}, s_{\beta}).$$ Denote by $(h^{\b{\beta}\alpha})$ the inverse matrix of $(h_{\alpha\b{\beta}})$. It is known that the Chern curvature of $(E,h)$ satisfies (*sometimes the summation sign is omitted*) $$\begin{aligned}
\Theta^E&=R^{\alpha}_{\beta j\b{k}}\,s_{\alpha}\otimes s_{\beta}^*\otimes dt^j\wedge d\b{t}^k\\
&= h^{\b{\gamma}\alpha}R_{\beta\b{\gamma}j\b{k}}s_{\alpha}\,\otimes s_{\beta}^*\otimes dt^j\wedge d\b{t}^k\\
&=h^{\b{\gamma}\alpha}(-\p_j\p_{\b{k}}h_{\beta\b{\gamma}}+\p_jh_{\beta\b{\sigma}}\p_{\b{k}}h_{\tau\b{\gamma}}h^{\b{\sigma}\tau})s_{\alpha}\otimes \,s_{\beta}^*\otimes dt^j\wedge d\b{t}^k.\end{aligned}$$ The associated Ricci curvature is $$\begin{aligned}
\text{Ric}:=\text{Tr}\,\Theta^E=\b{\p}\p\log\det h,\end{aligned}$$ which is a $d$-closed $(1,1)$-form on $\mc{B}$. Assume that $(E,h)$ is projectively flat; taking the trace of both sides of (\[3.1\]) we get $
\alpha=\frac{1}{r}\text{Ric}.
$ Thus, $(E,h)$ is projectively flat if and only if $$\begin{aligned}
\label{3.2}
\Theta^E=\frac{1}{r}\text{Ric}\cdot\text{Id}_E. \end{aligned}$$ In the case $\text{Ric}\equiv 0$, Proposition \[pr5.3\] implies the $1) \Rightarrow 2)$ part of Theorem \[thm6.1\]. In general, denoting by $p: P(E)\to\mc{B}$ the associated $\mathbb P^{r-1}$-fibration, we shall prove:
\[prop2\] If $(E,h)$ is projectively flat, then $p:P(E)\to \mc{B}$ is Poisson–Kähler.
With respect to the holomorphic local frame $\{s_{\alpha}\}_{\alpha=1}^r$ of $E$, we denote by $$\begin{aligned}
(t; v)=(t^1,\cdots, t^{\dim \mc{B}}; v^1,\cdots, v^r) \end{aligned}$$ the local holomorphic coordinates of the complex manifold $E$, which represent the point $v^{\alpha}s_{\alpha}\in E$. The hermitian metric $h$ can now be seen as a function on $E$ defined by $$\begin{aligned}
H(v):=h(v^{\alpha}s_{\alpha},v^{\beta}s_{\beta})=h_{\alpha\b{\beta}}v^{\alpha}\b{v}^{\beta}. \end{aligned}$$ By a simple calculation, one has $$\begin{aligned}
\label{3.3}
\p\b{\p}\log H=-R_{\alpha\b{\beta}j\b{k}}\frac{v^{\alpha}\b{v}^{\beta}}{H}dt^j\wedge d\b{t}^k+\frac{\p^2\log H}{\p v^{\alpha}\p\b{v}^{\beta}}\delta v^{\alpha}\wedge \delta \b{v}^{\beta},
\end{aligned}$$ where $\delta v^{\alpha}:=dv^{\alpha}+v^{\beta}h^{\b{\gamma}\alpha}\p_j h_{\beta\b{\gamma}}dt^j$. Notice that $ \p\b{\p}\log H$ is invariant under the natural $\mathbb C^*$ action on fibers of $E$. We know that $\p\b{\p}\log H$ defines a smooth form on $P(E)$. Since $(E,h)$ is projectively flat, gives $$\begin{aligned}
\label{3.4}
R_{\alpha\b{\beta}j\b{k}}dt^j\wedge d\b{t}^k=\frac{1}{r}\text{Ric}\cdot h_{\alpha\b{\beta}}. \end{aligned}$$ Substituting (\[3.4\]) into (\[3.3\]), one has $$\begin{aligned}
\label{3.5}
\p\b{\p}\log H=-\frac{1}{r}p^*\text{Ric}+\frac{\p^2\log H}{\p v^{\alpha}\p\b{v}^{\beta}}\delta v^{\alpha}\wedge \delta \b{v}^{\beta}.\end{aligned}$$ Now we define the following $d$-closed $(1,1)$-form on $P(E)$, $$\begin{aligned}
\label{eq6.6}
\omega:=i\left(\p\b{\p}\log H+\frac{1}{r} p^*\text{Ric}\right)= i \, \frac{\p^2\log H}{\p v^{\alpha}\p\b{v}^{\beta}}\delta v^{\alpha}\wedge \delta \b{v}^{\beta}.\end{aligned}$$ It suffices to show that $\omega$ is Poisson–Kähler. In fact, fix $\hat t\in {\ensuremath{\mathcal{B}}}$; we can choose a holomorphic frame $\{s_\alpha\}$ near $\hat t$ such that $$h_{\alpha\bar\beta}=\delta_{\alpha\beta}, \ \ \ \p_j h_{\beta\b{\gamma}}\equiv 0.$$ By (\[eq6.6\]), we have $$\omega(z)=i\p\b{\p}\log \sum_{\alpha=1}^r|v^{\alpha}|^2, \ \ \forall \ z\in p^{-1}(\hat t),$$ which gives $$\omega(z)^r\equiv 0$$ and that $\omega$ restricts to the Fubini–Study metric on fibers. Thus $p: (P(E),\omega)\to \mc{B}$ is Poisson–Kähler.
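To make the last two assertions explicit, here is a routine check (with our normalization convention for the Fubini–Study form) that the fibrewise restriction is Fubini–Study and that its $r$-th power vanishes:

```latex
% On the chart {v^r \neq 0} of the fibre P^{r-1}, set w^a := v^a/v^r, a=1,...,r-1.
% Since \partial\bar\partial \log|v^r|^2 = 0 away from v^r = 0,
i\p\b{\p}\log\sum_{\alpha=1}^r|v^{\alpha}|^2
  = i\p\b{\p}\log\Big(1+\sum_{a=1}^{r-1}|w^{a}|^2\Big)=\omega_{\rm FS},
% a (1,1)-form involving only the r-1 fibre variables w^1,...,w^{r-1}, hence
\omega(z)^r=\omega_{\rm FS}^{\,r}=0 \quad \text{on } p^{-1}(\hat t).
```

Any $r$-fold wedge of a $(1,1)$-form built from $r-1$ complex variables vanishes, which is exactly the Poisson–Kähler condition along this fibre.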
Now let us prove the $2) \Rightarrow 1)$ part of Theorem \[thm6.1\]. Assume that $p:(P(E),\omega)\to \mc{B}$ is Poisson–Kähler for some $\omega$. Then $\omega+p^*\omega_{\mc{B}}$ is Kähler on $P(E)$ for every Kähler form $\omega_{{\ensuremath{\mathcal{B}}}}$ on ${\ensuremath{\mathcal{B}}}$. In particular, $P(E)$ is a compact Kähler manifold, so its complexified de Rham cohomology decomposes into Dolbeault cohomology groups. Thus the Leray–Hirsch Theorem (see [@BT], page 50 and 270) implies that the Dolbeault cohomology ring of $P(E)$ is generated by the Dolbeault cohomology ring of ${\ensuremath{\mathcal{B}}}$ and $c_1(\mc{O}_{P(E)}(1))$, which gives the following proposition:
There exists a constant $k\in \mb{R}$ and a $d$-closed real $(1,1)$-form $\alpha$ on $\mc{B}$ such that $$\begin{aligned}
[\omega]=kc_1(\mc{O}_{P(E)}(1))+[p^*\alpha].\end{aligned}$$ Here $[\cdot]$ denotes the de Rham cohomology class.
Notice that $k>0$ since $\omega$ is relative Kähler. Moreover, the $\p\b{\p}$-lemma for compact Kähler manifolds (see e.g. [@Ko3 Proposition 1.7.24]) gives a smooth metric, say $e^{-\psi}$, on $\mc{O}_{P(E)}(1)$ such that $[i\p\b{\p}\psi / (2\pi)]= c_1(\mc{O}_{P(E)}(1))$ and $$\begin{aligned}
\frac{i\p\b{\p}\psi}{2\pi}=\frac{1}{k}(\omega-p^*\alpha).\end{aligned}$$ By our assumption $\omega^r=0$, we know that the geodesic curvature $c(\psi)$ satisfies $$\begin{aligned}
\label{3.7}
c(\psi):=c(i\p\b{\p}\psi)=-\frac{2\pi}{k}p^*\alpha.
\end{aligned}$$ Put $$\begin{aligned}
\label{3.6}
L:=\mc{O}_{P(E)}(1)\otimes K_{P(E)/\mc{B}}^{-1}=\mc{O}_{P(E)}(r+1)\otimes p^*\det E,\end{aligned}$$ where the second equality follows from [@Ko3 Proposition 3.6.20]. It is known that $$\begin{aligned}
c_1(\det E)=-p_*[c_1(\mc{O}_{P(E)}(1))^r]\end{aligned}$$ (see e.g. [@Fulton Section 3.2] or Lemma 2.3 in [@FLW0]), thus there exists a smooth metric $h_1$ on $\det E$ such that $$\begin{aligned}
\label{3.8}
c_1(\det E, h_1)=-\int_{P(E)/\mc{B}}\left(\frac{i}{2\pi}\p\b{\p}\psi \right)^r=-\frac{r}{(2\pi)^r}\int_{X_t}c(\psi)\wedge(i\p\b{\p}\psi)^{r-1}_{|X_t}=\frac{r}{k}\alpha,\end{aligned}$$ where the last equality follows from (\[3.7\]) and the fact $\int_{X_t}(\frac{i}{2\pi}\p\b{\p}\psi)^{r-1}_{|_{X_t}}=1$. From (\[3.6\]), the induced metric on $L$ is $$\begin{aligned}
e^{-\phi}=e^{-(r+1)\psi}\cdot p^*h_1. \end{aligned}$$ The curvature of $e^{-\phi}$ is $$\begin{aligned}
\label{3.9}
\p\b{\p}\phi=(r+1)\p\b{\p}\psi+p^*\b{\p}\p\log h_1. \end{aligned}$$ By (\[3.7\]), (\[3.8\]) and (\[3.9\]), one has
$$\begin{aligned}
\label{3.10}
c(\phi) &=(r+1)\,c(\psi)+i\,p^*\b{\p}\p\log h_1\\
&=(r+1)\left(-\frac{2\pi}{k}\,p^*\alpha\right)+2\pi\, p^*c_1(\det E, h_1)\\
&=-\frac{2\pi}{k}\,p^*\alpha.
\end{aligned}$$
It is known that (see section 7 in [@Bern2] or [@Shiff Lemma 5.37]) $$\begin{aligned}
E^*=p_*(\mc{O}_{P(E)}(1))=p_*(L\otimes K_{P(E)/\mc{B}}). \end{aligned}$$ Following [@Bern2; @Bern4], we shall consider the following $L^2$-metric on the direct image bundle $E^*$: for any $u\in E_{t}^*\equiv H^0(X_t, (L\otimes K_{P(E)/\mc{B}})|_{X_t})$, $t\in \mc{B}$, the square norm of $u$ is defined by $$\begin{aligned}
\label{L2 metric}
\|u\|^2=\int_{X_t}|u|^2e^{-\phi}, \end{aligned}$$ where the volume form $|u|^2e^{-\phi}$ is defined by $$|u|^2e^{-\phi}:=i^{n^2}|f|^2 e^{-\phi}dv\wedge \overline{dv}, \ \ u=f dv\otimes e,$$ (here $dv$ denotes a local frame for $K_{P(E_t)}$ and $e$ a local frame for $L|_{X_t}$ such that $h(e,e)=e^{-\phi}$).
\[thm4\] For any $t\in \mc{B}$ and $u\in E_{t}^*$, one has $$\begin{aligned}
\label{cur}
\langle i\Theta^{E^*}u,u\rangle=\int_{X_t}c(\phi)|u|^2e^{-\phi}+\langle(1+\Box')^{-1}\k_j\cdot u,\k_k\cdot u\rangle \,i dt^j\wedge d\b{t}^k,\end{aligned}$$ where $\Theta^{E^*}$ denotes the curvature of the Chern connection on $E^*$ with respect to the $L^2$ metric defined above, and $\Box'=\n'\n'^*+\n'^*\n'$ is the Laplacian on $L|_{X_t}$-valued forms on $X_t$ defined by the $(1,0)$-part of the Chern connection on $L|_{X_t}$.
Let $\{s^*_\alpha\}, 1\leq \alpha\leq r$, be a local holomorphic frame of $E^*$, and set $$h^*_{\alpha\b{\beta}}=\langle s^*_\alpha, s^*_\beta\rangle=\int_{X_t}s^*_\alpha \o{s^*_\beta}e^{-\phi}.$$ Taking the trace of both sides of (\[cur\]) and using (\[3.10\]), we have $$\begin{aligned}
\label{3.11}
i\text{Ric}^{E^*}=-\frac{2\pi r}{k}\alpha+\sum \langle(1+\Box')^{-1}\k_j\cdot s^*_\alpha,\k_k\cdot s^*_\beta\rangle (h^*)^{\alpha\b{\beta}} i dt^j\wedge d\b{t}^k \geq -\frac{2\pi r}{k}\alpha\end{aligned}$$ and the equality holds if and only if $\k_j=0$ for all $1\leq j\leq \dim\mc{B}$. From (\[3.8\]), one has $$\begin{aligned}
\label{3.12}
[i\text{Ric}^{E^*}]=2\pi c_1(E^*)=\left[-\frac{2\pi r}{k}\alpha\right].
\end{aligned}$$ Thus (\[3.11\]) and (\[3.12\]) together give $$\begin{aligned}
\label{3.13}
\text{Ric}^{E^*}= -\frac{2\pi r}{k}\alpha, \ \ \ \k_j\equiv 0.\end{aligned}$$ Notice that the non-harmonic Weil–Petersson metrics associated to $\omega$ and $i\p\b{\p}\phi$ are equal (up to a constant factor); since $\k_j\equiv 0$, we get $\omega_{\rm DF}\equiv 0$ on $\mc{B}$. Substituting (\[3.13\]) into (\[cur\]), we get $$\begin{aligned}
\langle i \Theta^{E^*}u,u\rangle=\int_{X_t}c(\phi)|u|^2e^{-\phi}=-\frac{2\pi}{k}\|u\|^2 \alpha,\end{aligned}$$ which is equivalent to $$\Theta^{E^*}=\frac{2\pi i}{k}\alpha\text{Id}_{E^*}.$$ Thus, with respect to the dual metric of the $L^2$-metric (\[L2 metric\]), the Chern curvature $\Theta^E$ satisfies $$\begin{aligned}
\Theta^E=-\frac{2\pi i}{k}\alpha\text{Id}_{E}. \end{aligned}$$ To summarize we get:
\[thm2\] If $p:(P(E),\omega)\to \mc{B}$ is a Poisson–Kähler fibration over a compact Kähler manifold $\mc{B}$ then there exists a Hermitian metric $h$ on $E$ such that $(E,h)$ is projectively flat.
The above proposition gives the $2) \Rightarrow 1)$ part of Theorem \[thm6.1\].
Now it suffices to prove the last part. Assume that $\dim \mc{B}=1$, i.e. $\mc{B}$ is a compact Riemann surface. Put $$\begin{aligned}
\mu(E)=\frac{\int_{\mc{B}}c_1(E)}{\text{rank}(E)}. \end{aligned}$$ Recall that $E$ is said to be stable (resp. semi-stable) in the sense of Mumford if for every proper subbundle $E'$ of $E$, $0<\text{rank}(E')<\text{rank}(E)$, we have $$\begin{aligned}
\mu(E')<\mu(E),\quad (resp. \ \ \mu(E')\leq \mu(E)). \end{aligned}$$ $E$ is called polystable if $E=\oplus E_i$ with $E_i$ stable vector bundles all of the same slope $\mu(E)=\mu(E_i)$, see [@Huy Section 4.B].
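As a simple illustration of these notions (an example added here, not taken from the references above): over a compact Riemann surface, a direct sum of line bundles of equal degree is polystable but not stable.

```latex
% Line bundles are stable: there is no subbundle of intermediate rank.
E = L_1\oplus L_2 \ \text{ over } \mc{B}, \qquad \deg L_1=\deg L_2=d
\ \Longrightarrow\ \mu(E)=\frac{2d}{2}=d=\mu(L_i),
% so E is polystable. E is not stable, since each L_i is a proper
% subbundle with \mu(L_i)=\mu(E), violating the strict inequality.
```

This also shows that polystability is strictly weaker than stability as soon as the rank is at least two.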
By [@Ko3 Proposition 5.2.3], $(E,h)$ is projectively flat if and only if $(E,h)$ is weak Hermitian-Einstein, i.e. $\Lambda_{\omega_{\mc{B}}}R^E=\varphi \text{Id}_E$ for some function $\varphi$. By a conformal change (see e.g. [@Ko3 Proposition 4.2.4]), $E$ admits a weak Hermitian-Einstein metric if and only if $E$ admits a Hermitian-Einstein metric. Thus, $E$ admits a Hermitian-Einstein metric if and only if $E$ admits a projectively flat Hermitian metric, which is equivalent to $p: P(E)\to \mc{B}$ being Poisson–Kähler, and all of these are equivalent to the polystability of $E$ (see e.g. [@Huy Theorem 4.B.9]). The proof is complete.
Appendix
========
Quasi-vector bundle
-------------------
The notion of quasi-vector bundle that we will use comes from an early version of [@BPW].
Let $A:=\{A_t\}_{t\in {\ensuremath{\mathcal{B}}}}$ be a family of $\mathbb C$-vector spaces over a smooth manifold ${\ensuremath{\mathcal{B}}}$. Let $\Gamma$ be a $C^{\infty}({\ensuremath{\mathcal{B}}})$-submodule of the space of all sections of $A$. We call $\Gamma$ a smooth quasi-vector bundle structure on $A$ if each vector of the fiber $A_t$ extends to a section in $\Gamma$ locally near $t$.
### Lie-derivative connection
Let $p:({\ensuremath{\mathcal{X}}},\omega) \to {\ensuremath{\mathcal{B}}}$ be a relative Kähler fibration. Let $E$ be a holomorphic vector bundle over ${\ensuremath{\mathcal{X}}}$ with smooth hermitian metric $h_E$. We write $$X_t:=p^{-1}(t), \ \ E_t:= E_{X_t}, \ \ h_{E_t}:=h_E|_{E_t}.$$ For each $t\in {\ensuremath{\mathcal{B}}}$, denote by $\mathcal A^{p,q}(E_t)$ the space of all smooth $E_t$-valued $(p,q)$-forms on $X_t$. Put $$\mathcal A^{p,q}:=\{\mathcal A^{p,q}(E_t)\}_{t\in {\ensuremath{\mathcal{B}}}}.$$ Denote by $\mathcal A^{p,q}(E)$ the space of smooth $E$-valued $(p,q)$-forms on ${\ensuremath{\mathcal{X}}}$. Let us define $$\Gamma^{p,q}:=\{u: t\mapsto u^t \in \mathcal A^{p,q}(E_t): \exists \ \mathbf{u} \in \mathcal A^{p,q}(E), \ \mathbf u |_{X_t}=u^t, \ \forall \ t\in {\ensuremath{\mathcal{B}}}\}.$$ We call $\mathbf u$ a *smooth representative* of $u\in \Gamma^{p,q}$. Since $p$ is a proper smooth submersion, we know that each $\Gamma^{p,q}$ defines a quasi-vector bundle structure on $\mathcal A^{p,q}$. Consider $$(\mathcal A^k, \Gamma^k):=\oplus_{p+q=k} (\mathcal A^{p,q}, \Gamma^{p,q}).$$ We know that the fiber of $\mathcal A^k$ can be written as $$\mathcal A^k(E_t)=\oplus_{p+q=k} \mathcal A^{p,q}(E_t),$$ which is the space of all $E$-valued smooth $k$-forms on $X_t$. For every $u\in \Gamma^k$, let us define $$\nabla u:=\sum dt^j \otimes [d^E, \delta_{V_j}] \mathbf u+ \sum d\bar t^j \otimes [d^E, \delta_{\bar V_j}] \mathbf u,$$ where each $V_j$ denotes the horizontal lift of $\partial /\partial t^j$ with respect to $\omega$ and $$d^E:={\ensuremath{\overline\partial}}+\partial^E,$$ denotes the Chern connection on $(E, h_E)$.
In this paper we shall identify $u$ with its smooth representative $\mathbf u$. We call $\nabla$ the Lie-derivative connection on $(\mathcal A^k, \Gamma^k)$ with respect to $\omega$.
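As a consistency check (a sketch, not spelled out in the text), $\nabla$ is indeed a connection: it satisfies the Leibniz rule over the base, since for $f\in C^{\infty}({\ensuremath{\mathcal{B}}})$ (identified with $p^*f$) and $u\in \Gamma^{k}$,

```latex
[d^E,\delta_{V_j}](f\mathbf u)
  = d^E\big(f\,\delta_{V_j}\mathbf u\big)
    +\delta_{V_j}\big(df\wedge \mathbf u+f\,d^E\mathbf u\big)
  = (V_j f)\,\mathbf u+f\,[d^E,\delta_{V_j}]\mathbf u ,
% the df-terms cancel: d^E(f \delta_{V_j} u) = df \wedge \delta_{V_j} u + f d^E \delta_{V_j} u,
% while \delta_{V_j}(df \wedge u) = (V_j f) u - df \wedge \delta_{V_j} u.
```

and $V_jf=\p f/\p t^j$ because $V_j$ lifts $\p/\p t^j$; hence $\nabla(fu)=df\otimes u+f\,\nabla u$.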
### Chern connection and Higgs field
For each $p,q$ with $p+q=k$, $\nabla$ induces a connection, say $D$, on $(\mathcal A^{p,q}, \Gamma^{p,q})$. For bidegree reasons, we have $$D u:=\sum dt^j \otimes [\partial^E, \delta_{V_j}] \mathbf u+ \sum d\bar t^j \otimes [{\ensuremath{\overline\partial}}, \delta_{\bar V_j}] \mathbf u, \qquad \forall \ u\in \Gamma^{p,q}.$$ The associated second fundamental form can be written as $$(\nabla-D) u=\sum dt^j \otimes \kappa_j \cdot \mathbf u+ \sum d\bar t^j \otimes \overline{\kappa_j}\cdot \mathbf u,$$ where each $$\kappa_j : \mathbf u\mapsto \kappa_j \cdot \mathbf u,$$ denotes the action of the Kodaira–Spencer tensor $\kappa_j$ on $u$.
We call $$\theta:= \sum dt^j \otimes \kappa_j,$$ the Higgs field associated to $(\mathcal A^k, \Gamma^k, \omega)$.
By Theorem 5.6 in [@Wang17] (or an early version of [@BPW]), we know that
\[th:122\] $D$ defines a Chern connection on each $(\mathcal A^{p,q}, \Gamma^{p,q})$ and each $\overline{\kappa_j}=\kappa_j^*$.
### Chern Curvature formula
The curvature of the Lie-derivative connection is $$\label{eq:curvature-A}
\nabla^2 u =\sum (dt^j \wedge d\bar t^k)\otimes [ [d^E, \delta_{V_j}], [d^E, \delta_{\bar V_k}]] \mathbf u.$$ For bidegree reasons, this gives the following curvature formula for the induced Chern connection $$\label{eq:curvature-C}
D^2 u= \nabla^2 u- \sum (dt^j \wedge d\bar t^k)\otimes [\kappa_j, \overline{\kappa_{k}}]\cdot \mathbf u.$$ Together with the following Lie-derivative identity (see Proposition 4.2 in [@Wang15]) $$\label{eq:curvature-B}
[ [d^E, \delta_{V_j}], [d^E, \delta_{\bar V_k}]] \mathbf u= [d^E, \delta_{[V_j, \bar V_k]}] \mathbf u +\Theta^E(V_j ,\bar V_k) \mathbf u,$$ where $
\Theta^E:=(d^E)^2$ denotes the Chern curvature of $(E, h_E)$, equations (\[eq:curvature-A\]) and (\[eq:curvature-C\]) imply
\[th:curvature\] For every $u\in \Gamma^{p,q}$, write $$D^2 u=\sum (dt^j \wedge d\bar t^k)\otimes \Theta_{j\bar k} u,$$ then the Chern curvature operators $\Theta_{j\bar k}$ satisfy $$(\Theta_{j\bar k} u, u)=( [d^E, \delta_{[V_j, \bar V_k]}] \mathbf u, u)+(\Theta^E(V_j ,\bar V_k) \mathbf u, u)+ (\kappa_j u, \kappa_k u)-(\overline{\kappa_{k}} u, \overline{\kappa_{j}}u).$$
Infinite rank Higgs bundle
--------------------------
### Admissible subbundle of the endomorphism bundle
Recall that each Kodaira–Spencer tensor $\kappa_j$ defines a map $$\kappa_j: \Gamma^{p,q}\to \Gamma^{p-1,q+1}.$$ Thus we can look at $\kappa_j$ as an endomorphism of $(\mathcal A^k, \Gamma^k)$. Denote by $T^{1,0}_{X_t}$ the holomorphic tangent bundle of $X_t$ and by $\mathcal A_t^{-1,1}$ the space of all smooth $T^{1,0}_{X_t}$-valued $(0,1)$-forms on $X_t$. Put $$\mathcal A^{-1,1}:= \{\mathcal A_t^{-1,1}\}_{t\in {\ensuremath{\mathcal{B}}}}.$$ We shall define $\Gamma^{-1,1}$ as the space of all maps, say $$\Phi: t\mapsto \Phi^t\in \mathcal A_t^{-1,1},$$ such that $\Phi(\Gamma^{p,q}) \subset \Gamma^{p-1,q+1}$. Then we know that $(\mathcal A^{-1,1}, \Gamma^{-1,1})$ is a quasi-vector bundle. It is clear that $(\mathcal A^{-1,1}, \Gamma^{-1,1})$ is a subbundle of the endomorphism bundle of $(\mathcal A^k, \Gamma^k)$.
We call $(\mathcal A^{-1,1}, \Gamma^{-1,1})$ the admissible subbundle of the endomorphism bundle of $(\mathcal A^k, \Gamma^k)$.
Notice that $$\kappa : \frac{\partial}{\partial t^j} \mapsto \kappa_j$$ defines a natural bundle map from the holomorphic tangent bundle, say $T_{\ensuremath{\mathcal{B}}}$, of ${\ensuremath{\mathcal{B}}}$ to $(\mathcal A^{-1,1}, \Gamma^{-1,1})$. Fiberwise integration of the pointwise inner product of the tensors, say $$\langle \Phi^t, \Psi^t \rangle:=\int_{X_t} \langle \Phi^t, \Psi^t \rangle_{\omega_t} \,\frac{\omega_t^n}{n!}, \ \ \omega_t:= \omega|_{X_t},$$ defines a natural Hermitian inner product structure, say $h_1$, on $\Gamma^{-1,1}$.
\[de:def7.5\] We call $h_1$ the Donaldson–Fujiki metric on $\Gamma^{-1,1}$.
From the definition, we know that the following proposition holds:
\[pr:DF-DF\] The pull back to $T_{\ensuremath{\mathcal{B}}}$ of the Donaldson–Fujiki metric on $\Gamma^{-1,1}$ is precisely the non-harmonic Weil–Petersson metric on ${\ensuremath{\mathcal{B}}}$.
**Remark**: In general, for every $0< k <2n$, since we look at $\Gamma^{-1,1}$ as an admissible subbundle of the endomorphism bundle of $(\mathcal A^k, \Gamma^k)$, it is also natural to look at the average of the pointwise *endomorphism norm*; we call it the $k$-th Hodge metric on $\Gamma^{-1,1}$: $$\langle \Phi^t, \Psi^t \rangle_k:=\int_{X_t} \langle \Phi^t, \Psi^t \rangle_{\omega_t, k} \,\frac{\omega_t^n}{n!}, \ \ \omega_t:= \omega|_{X_t},$$ where $$\langle \Phi^t, \Psi^t \rangle_{\omega_t, k} (z):=\sum \langle \Phi^t \cdot e_j , \Psi^t \cdot e_j \rangle_{\omega_t(z)}, \qquad \ \forall \ z\in X_t,$$ and $\{e_j\}$ denotes an orthonormal basis of $\mathbb C\otimes \wedge^k (T^*_z X_t)$. We know that $$\langle \Phi^t, \Psi^t \rangle_k= c_{n,k} \langle \Phi^t, \Psi^t \rangle, \ c_{n,1}=1,$$ where $c_{n,k}$, $0<k<2n$, are constants that depend only on $n, k$. Thus the first Hodge metric is equal to the non-harmonic Weil–Petersson metric, and the general $k$-th Hodge metric equals a constant times the non-harmonic Weil–Petersson metric.
### Chern connection on the admissible subbundle $(\mathcal A^{-1,1}, \Gamma^{-1,1})$
The Chern connection $D$ on $(\mathcal A^k, \Gamma^k)$ clearly defines a connection, say $\bm D$, on $(\mathcal A^{-1,1}, \Gamma^{-1,1})$ as follows $$\label{eq:d-end}
\bm D \Phi:=[D , \Phi] , \qquad \forall \ \Phi\in \Gamma^{-1,1},$$ where we identify $\Phi$ with an endomorphism that maps $\Gamma^{1,0}$ to $\Gamma^{0,1}$. It is known that $\bm D$ gives the Chern connection on the endomorphism bundle of $(\mathcal A^k, \Gamma^k)$ *with respect to the natural endomorphism norm*. In our case, the more natural norm on $(\mathcal A^{-1,1}, \Gamma^{-1,1})$ is the Donaldson–Fujiki $h_1$ norm in Definition \[de:def7.5\]. We shall show that $\bm D $ also defines the Chern connection on $(\mathcal A^{-1,1}, \Gamma^{-1,1})$ with respect to the $h_1$ norm; more precisely, we shall prove the following result:
\[th:chern\] $\bm D$ defined in satisfies
- $d\langle \Phi, \Psi \rangle=\langle \bm D \Phi, \Psi \rangle+\langle \Phi, \bm D \Psi \rangle$ for every $\Phi, \Psi\in \Gamma^{-1,1}$;
- The square of the $(0,1)$-part of $\bm D$ is zero.
$i)$: Consider a partition of unity $1=\sum \lambda_l$ on ${\ensuremath{\mathcal{X}}}$ such that the support of each function $\lambda_l$ is relatively compact in a coordinate open set, say $U_l$, in ${\ensuremath{\mathcal{X}}}$. Let us choose smooth $(1,0)$-forms, say $e_j$, $1\leq j\leq n$, on $U_l$, such that for every $z\in U_l$, $\{e_j|_{X_{p(z)}} (z)\}_{1\leq j\leq n}$ defines an orthonormal basis of $(T^{1,0}_z X_{p(z)})^*$. Since $\Phi$ and $\Psi$ are smooth tensors, we can find smooth forms, say $\mathbf{\Phi}\cdot e_j$ and $\mathbf{\Psi}\cdot e_j$, on $U_l$ such that $
(\mathbf{\Phi}\cdot e_j )|_{X_t}=\Phi^t \cdot (e_j)|_{X_t}, \ (\mathbf{\Psi}\cdot e_j )|_{X_t}=\Psi^t \cdot (e_j)|_{X_t}$. Now we have $$\langle \Phi, \Psi \rangle =p_* G, \ \ G:= (-i) \sum \lambda_l \, (\mathbf{\Phi}\cdot e_j) \wedge \overline{(\mathbf{\Psi}\cdot e_j)} \wedge \omega_{n-1}, \ \omega_{n-1}:=\frac{\omega^{n-1}}{(n-1)!}.$$ By (\[eq:d-end\]), what we need to prove is $$\frac{\partial}{\partial t^k} \langle \Phi, \Psi \rangle= \langle [D_{\partial/\partial t^k}, \Phi], \Psi \rangle+\langle \Phi, [D_{\partial/\partial \bar t^k}, \Psi] \rangle.$$ Since $
\frac{\partial}{\partial t^k} \langle \Phi, \Psi \rangle =p_* [d, \delta_{V_k}] G$ and each $V_k$ is horizontal, for bidegree reasons we have $$\frac{\partial}{\partial t^k} \langle \Phi, \Psi \rangle= (-i)p_*(I_2+I_3),$$ where $$I_2:= \sum \lambda_l \, [\partial, \delta_{V_k}](\mathbf{\Phi}\cdot e_j) \wedge \overline{(\mathbf{\Psi}\cdot e_j)} \wedge \omega_{n-1}, \ I_3:= \sum \lambda_l \, (\mathbf{\Phi}\cdot e_j) \wedge \overline{[{\ensuremath{\overline\partial}}, \delta_{\bar V_k}](\mathbf{\Psi}\cdot e_j)} \wedge \omega_{n-1}.$$ Notice that $\langle [D_{\partial/\partial t^k}, \Phi], \Psi \rangle- (-i)p_* I_2=i p_*I'_2$, where $$I'_2:= \sum \lambda_l \, (\mathbf{\Phi}\cdot [\partial, \delta_{V_k}] e_j) \wedge \overline{(\mathbf{\Psi}\cdot e_j)} \wedge \omega_{n-1};$$ $\langle \Phi, [D_{\partial/\partial \bar t^k}, \Psi] \rangle- (-i)p_* I_3=i p_*I'_3$, where $$I'_3:= \sum \lambda_l \, (\mathbf{\Phi}\cdot e_j) \wedge \overline{(\mathbf{\Psi}\cdot [{\ensuremath{\overline\partial}}, \delta_{\bar V_k}]e_j)} \wedge \omega_{n-1}.$$ Thus it suffices to show $p_*(I'_2+I'_3)=0$, which will be proved in Lemma \[le7.5\].
$ii)$: It suffices to show $$[D_{\partial/\partial \bar t^j}, [D_{\partial/\partial \bar t^k}, \Phi]]= [D_{\partial/\partial \bar t^k}, [D_{\partial/\partial \bar t^j}, \Phi]].$$ Notice that the super Jacobi identity gives $$[D_{\partial/\partial \bar t^j}, [D_{\partial/\partial \bar t^k}, \Phi]]- [D_{\partial/\partial \bar t^k}, [D_{\partial/\partial \bar t^j}, \Phi]]=[[D_{\partial/\partial \bar t^j}, D_{\partial/\partial \bar t^k}], \Phi].$$ By Proposition \[pr:rkf\], we have $[\overline{V_j}, \overline{V_k}]=0$, which implies that $[L_{\overline{V_j}}, L_{\overline{V_k}}]=L_{[\overline{V_j}, \overline{V_k}]}=0$, thus $[D_{\partial/\partial \bar t^j}, D_{\partial/\partial \bar t^k}]=0$ and $ii)$ follows.
\[le7.5\] $p_*(I'_2+I'_3)=0$.
Since $\{e_j\}$ is an orthonormal frame, for every $j$ and $m$, we have $$\label{eq7.5}
i \, e_j \wedge \overline{e_m} \wedge \omega_{n-1}=\delta_{jm} \,\omega_n, \ \ \omega_n:=\frac{\omega^n}{n!}, \ \delta_{jj}=1, \delta_{jm}=0 \ \ \text{if} \ j\neq m,$$ on fibers, which implies that (since $\omega$ is $d$-closed and $V_k$ are horizontal) $$L_{V_k} (e_j \wedge \overline{e_m} \wedge \omega_{n-1})=0,$$ on fibers. For bidegree reasons, the above identity gives $$\label{eq7.6}
([\partial, \delta_{V_k}] e_j) \wedge \overline{e_m} \wedge \omega_{n-1} + e_j \wedge \overline{([{\ensuremath{\overline\partial}}, \delta_{\bar V_k}]e_m)} \wedge \omega_{n-1} =0,$$ on fibers. Assume that $$[\partial, \delta_{V_k}] e_j =\sum a_{kj}^p e_p, \ \ [{\ensuremath{\overline\partial}}, \delta_{\bar V_k}]e_m= \sum b_{k m}^q e_q.$$ Then (thanks to (\[eq7.5\])) one may rewrite (\[eq7.6\]) as $$a_{kj}^m +\overline{b_{km}^j}=0,$$ which implies that (since $\Phi$ and $\Psi$ are tensors, they commute with smooth functions) $
I'_2+I'_3=0$ on fibers. Thus $p_*(I'_2+I'_3)=0$.
### Infinite dimensional flat Higgs bundle
The following proposition is an infinite dimensional version of Theorem \[th:higgs\].
\[pr: flat-higgs\] Let $p:({\ensuremath{\mathcal{X}}},\omega) \to {\ensuremath{\mathcal{B}}}$ be a Poisson–Kähler fibration. If $\Theta^E\equiv 0$ then
- $\nabla^2=0$;
- $\theta^2=0$;
- $D\theta+\theta D=0$.
In particular, each $(\mathcal A^k, \Gamma^k, D, \theta)$ is an infinite rank flat Higgs bundle.
Since the total degree of the Kodaira–Spencer tensor is zero, $\theta^2=0$ is always true. Moreover $$D^{1,0}\theta+\theta D^{1,0}=0$$ follows from $[V_j, V_k]\equiv 0$, which is true for every relative Kähler fibration. Assume further that $\omega$ is Poisson–Kähler; then we have $$[V_j, \overline{V_k}]\equiv 0$$ by Proposition \[pr:rkf\], which gives $$D^{0,1}\theta+\theta D^{0,1}=0 \ \ \text{i.e.} \ \theta \ \text{is holomorphic},$$ and (by (\[eq:curvature-A\]) and (\[eq:curvature-B\])) $$\nabla^2= \sum (dt^j \wedge d\bar t^k)\otimes \Theta^E(V_j, \overline{V_k}).$$ Thus $\nabla^2=0$ if one further assumes that $\Theta^E\equiv 0$.
**Remark**: In the finite dimensional case, we can always define Lu’s Hodge metric associated to a flat Higgs bundle (see [@Wang_Higgs]). In our case, the definition in [@Wang_Higgs] does not work directly since the Higgs bundle has infinite rank. But by Theorem \[th:chern\], we know that the Chern connection $\bm D$ on ${\rm End} (\mathcal A)$ is also well defined on the subbundle $(\mathcal A^{-1,1}, \Gamma^{-1,1})$ of ${\rm End} (\mathcal A)$. The *key point* here is that the natural fiber integral metric is well defined on $(\mathcal A^{-1,1}, \Gamma^{-1,1})$, which allows one to define the associated Lu’s Hodge metric $h_1$ (see Definition \[de:def7.5\]) for the above special infinite rank Higgs bundle. Since $h_1$ is precisely our non-harmonic Weil–Petersson metric, the finite dimensional Higgs bundle computations also apply to $h_1$. This is the main idea of our first proof of Theorem \[th:wang-1\]. Our second proof of Theorem \[th:wang-1\] is based on Theorem \[th:chern\], which also gives a precise curvature formula for the non-harmonic Weil–Petersson metric for general relative Kähler fibrations (see section 4.4). Our third proof of Theorem \[th:wang-1\], based on Schumacher’s method [@Sch12], will be given in section 7.4.
Proof of Theorem C
------------------
The bundle $\mathcal A$ in Theorem C is precisely $\oplus_{k=0}^{2n} \mathcal A^k$ (with $E$ being trivial). Thus if $p$ is Poisson–Kähler then Proposition \[pr: flat-higgs\] implies that $\mathcal A$ is Higgs flat. On the other hand, since $$\nabla^2 =\sum (dt^j \wedge d\bar t^k)\otimes [d, \delta_{[V_j, \overline{V_k}]}],$$ we know that if $\mathcal A$ is Higgs flat then $\nabla^2 \equiv 0$ gives $$[d, \delta_{[V_j, \overline{V_k}]}] u\equiv 0$$ on fibers for all smooth forms $u$ on ${\ensuremath{\mathcal{X}}}$. Taking $u$ to be an arbitrary smooth function, we get $$[d, \delta_{[V_j, \overline{V_k}]}] u=[V_j, \overline{V_k}] u=0,$$ which implies $[V_j, \overline{V_k}] \equiv 0$. Thus $\omega$ is Poisson–Kähler by Proposition \[pr:rkf\]. The proof is complete.
The third proof of Theorem A
----------------------------
In this subsection, we will give the third proof of Theorem \[th:wang-1\]. Let $p: (\mc{X},\omega)\to \mc{B}$ be a relative Kähler fibration, i.e. $\omega=i\p\b{\p}g$ is a real and smooth $d$-closed $(1,1)$-form on $\mc{X}$ and is positive on each fiber $X_t:=p^{-1}(t)$. From Definition \[de:DFWP\], the non-harmonic Weil-Petersson metric is defined by $$\begin{aligned}
\omega_{\rm DF}=iG_{j\b{k}}dt^j\wedge d\b{t}^k,\quad G_{j\b{k}}:=\langle \frac{\p}{\p t^j},\frac{\p}{\p t^k}\rangle_{\rm DF}=\int_{X_t}\langle \k_j,\k_k\rangle_{\omega_t}\frac{\omega^n_t}{n!}.\end{aligned}$$ Let $T_{X_t}$ denote the holomorphic tangent bundle of $X_t$, and denote by $T_{X_t}^{\mb{C}}=T_{X_t}\oplus\o{T_{X_t}}$ the complexified tangent bundle. For any two tensors $$\Phi=\Phi^{A}_Bdx^B\otimes \frac{\p}{\p x^A},\quad \Psi=\Psi^A_Bdx^B\otimes\frac{\p}{\p x^A}\in A^{1}(X_t, T_{X_t}^{\mb{C}})\simeq A^0(X_t, \text{End}(T^{\mb{C}}_{X_t})),$$ where $x^A, x^B$ range over $\{\zeta^{\alpha}, \b{\zeta}^{\beta}\}$, we define $$\begin{aligned}
\Phi\cdot\Psi:=\text{Tr}(\Phi\Psi)=\Phi^A_B\Psi^B_A.\end{aligned}$$ For any vector field $V$, we denote by $L_V$ the Lie derivative along $V$. And for any $\Phi=\Phi^{A}_Bdx^B\otimes \frac{\p}{\p x^A}\in A^{1}(X_t, T_{X_t}^{\mb{C}})$, one has $$\label{L1}
L_V\Phi=\left(L_V\Phi^{A}_{B}\right)\frac{\p}{\p x^{A}}\otimes dx^{B},$$ where $$\begin{aligned}
\label{L2}
\begin{split}
L_V\Phi^{A}_{B}&=V(\Phi^{A}_{B})-\Phi^C_B\frac{\p V^{A}}{\p x^C}+\Phi^A_C\frac{\p V^{C}}{\p x^{B}}\\
&=\n_V(\Phi^{A}_{B})-\Phi^{C}_{B}\n_C V^{A}+\Phi^{A}_{C}\n_{B} V^{C}.
\end{split}\end{aligned}$$ Here $\n_C$ denotes the Chern connection along $\p/\p x^{C}$ with respect to some Hermitian metric. Since the Lie derivative commutes with contraction and satisfies the Leibniz rule for tensors, we have $$\begin{aligned}
L_V (\Phi\cdot\Psi)=(L_V\Phi)\cdot\Psi+\Phi\cdot(L_V\Psi). \end{aligned}$$ Denote $$\begin{aligned}
\k_j=A^{\alpha}_{j\b{\beta}}d\b{\zeta}^{\beta}\otimes \frac{\p}{\p \zeta^{\alpha}},\quad A^{\alpha}_{j\b{\beta}}=-\p_{\b{\beta}}(g_{j\b{\gamma}}g^{\b{\gamma}\alpha}).\end{aligned}$$ By a direct calculation, one has $$\begin{aligned}
\label{2.1}
A^{\alpha}_{j\b{\beta}}=A^{\sigma}_{j\b{\gamma}}g^{\b{\gamma}\alpha}g_{\sigma\b{\beta}},\end{aligned}$$ (see e.g. [@Wan1 (3.12)]). Thus $$\begin{aligned}
\langle \k_j,\k_k\rangle_{\omega_t}=A^{\alpha}_{j\b{\beta}}\o{A^{\sigma}_{k\b{\gamma}}}g^{\gamma\b{\beta}}g_{\alpha\b{\sigma}}=A^{\alpha}_{j\b{\beta}}\o{A^\beta_{k\b{\alpha}}}=\k_j\cdot \o{\k_{k}}.\end{aligned}$$ The first variation of the non-harmonic Weil–Petersson metric is given by
$$\begin{aligned}
\label{2.4}
\frac{\p G_{j\b{k}}}{\p t^l}&=\frac{\p}{\p t^l}\int_{X_t}\k_j\cdot\o{\k_{k}}\,\frac{\omega^n_t}{n!}\\
&=\int_{X_t} (L_{V_l}\k_j)\cdot\o{\k_{k}}\,\frac{\omega^n_t}{n!}+\int_{X_t}\k_j\cdot (L_{V_l}\o{\k_{k}})\,\frac{\omega^n_t}{n!}+\int_{X_t}\k_j\cdot\o{\k_{k}}\, L_{V_l}\frac{\omega^n_t}{n!}\\
&=\int_{X_t} (L_{V_l}\k_j)\cdot\o{\k_{k}}\,\frac{\omega^n_t}{n!}+\int_{X_t}\k_j\cdot (L_{V_l}\o{\k_{k}})\,\frac{\omega^n_t}{n!},
\end{aligned}$$
where the second equality follows from [@Sch12 Lemma 1] and the last equality holds by [@Sch Lemma 2.2 (2)]. From [@Sch Lemma 2.3] or (\[L1\]) and (\[L2\]), one has
$$\begin{aligned}
\label{2.6}
L_{V_l}\o{\k_{k}}&=L_{V_l}\left(\o{A^{\alpha}_{k\b{\beta}}}\, d\zeta^{\beta}\otimes \frac{\p}{\p \b{\zeta}^{\alpha}}\right)\\
&=-(c_{l\b{k}})^{;\alpha}_{\ \b{\beta}}\, d\b{\zeta}^{\beta}\otimes\frac{\p}{\p \zeta^{\alpha}}-A^{\sigma}_{l\b{\alpha}}\o{A^{\alpha}_{k\b{\beta}}}\, d\zeta^{\beta}\otimes\frac{\p}{\p \zeta^{\sigma}}+\o{A^{\sigma}_{k\b{\alpha}}}A^{\alpha}_{l\b{\beta}}\, d\b{\zeta}^{\beta}\otimes\frac{\p}{\p \b{\zeta}^{\sigma}}\\
&=-\k_{l}\o{\k_{k}}+\o{\k_{k}}\k_{l},
\end{aligned}$$

since $c_{l\b{k}}\equiv 0$. Thus
$$\begin{aligned}
\label{2.2}
\int_{X_t}\k_j\cdot L_{V_l}\o{\k_{k}}\,\frac{\omega^n_t}{n!}=0.
\end{aligned}$$
Substituting (\[2.2\]) into (\[2.4\]) one gets $$\begin{aligned}
\label{2.5}
\frac{\p G_{j\b{k}}}{\p t^l}=\int_{X_t} (L_{V_l}\k_j)\cdot\o{\k_{k}}\frac{\omega^n_t}{n!}.
\end{aligned}$$ From [@Sch Lemma 2.5], $(L_{V_l}\k_j)^{\alpha}_{\b{\beta}}=(L_{V_j}\k_l)^\alpha_{\b{\beta}}$, which implies that $$\begin{aligned}
\frac{\p G_{j\b{k}}}{\p t^l}=\frac{\p G_{l\b{k}}}{\p t^j}.
\end{aligned}$$ Thus, $\omega_{\rm DF}$ is Kähler.
Now we compute the second variation of the non-harmonic Weil–Petersson metric. Using $[L_{\b{V}_{m}}, L_{V_l}]=L_{[\b{V}_{m}, V_l]}$ and (\[2.5\]), we have
$$\begin{aligned}
\label{2.10}
\frac{\p^2 G_{j\b{k}}}{\p t^l\p \b{t}^m}&= \frac{\p}{\p \b{t}^m}\int_{X_t} (L_{V_l}\k_j)\cdot\o{\k_{k}}\,\frac{\omega^n_t}{n!}\\
&=\int_{X_t} (L_{\b{V}_m}L_{V_l}\k_j)\cdot\o{\k_{k}}\,\frac{\omega^n_t}{n!}+\int_{X_t} (L_{V_l}\k_j)\cdot (L_{\b{V}_m}\o{\k_{k}})\,\frac{\omega^n_t}{n!}\\
&=\int_{X_t} (L_{[\b{V}_m, V_l]}\k_j)\cdot\o{\k_{k}}\,\frac{\omega^n_t}{n!}+\frac{\p}{\p t^l}\int_{X_t}(L_{\b{V}_m}\k_j)\cdot\o{\k_{k}}\,\frac{\omega^n_t}{n!}\\
&\quad-\int_{X_t}(L_{\b{V}_m}\k_j)\cdot (L_{V_l}\o{\k_{k}})\,\frac{\omega^n_t}{n!}+\int_{X_t} (L_{V_l}\k_j)\cdot (L_{\b{V}_m}\o{\k_{k}})\,\frac{\omega^n_t}{n!}\\
&=-\int_{X_t}(L_{\b{V}_m}\k_j)\cdot (L_{V_l}\o{\k_{k}})\,\frac{\omega^n_t}{n!}+\int_{X_t} (L_{V_l}\k_j)\cdot (L_{\b{V}_m}\o{\k_{k}})\,\frac{\omega^n_t}{n!},
\end{aligned}$$
where the last equality holds by (\[2.2\]) and Proposition \[pr:rkf\] (4). From (\[2.6\]), one has
$$\begin{aligned}
\label{2.8}
\int_{X_t}(L_{\b{V}_m}\k_j)\cdot (L_{V_l}\o{\k_{k}})\,\frac{\omega^n_t}{n!}&=\int_{X_t}(-\o{\k_{m}}\k_j+\k_j\o{\k_{m}})\cdot(-\k_{l}\o{\k_{k}}+\o{\k_{k}}\k_{l})\,\frac{\omega^n_t}{n!}\\
&=-\int_{X_t} \left(\text{Tr}(\o{\k_{m}}\k_j\,\o{\k_{k}}\k_{l})+\text{Tr}(\k_j\o{\k_{m}}\,\k_{l}\o{\k_{k}})\right)\frac{\omega^n_t}{n!}\\
&=-(\o{\k_{m}}\k_j,\ \o{\k_{l}}\k_k)-(\k_j\o{\k_{m}},\ \k_k\o{\k_l}).
\end{aligned}$$
Here $$\begin{aligned}
(\cdot,\cdot):=\int_{X_t}\langle\cdot,\cdot\rangle\frac{\omega^n_{t}}{n!}\end{aligned}$$ denotes the global $L^2$-inner product. On the other hand, by (\[L1\]) and (\[L2\]), one has $$\begin{aligned}
\label{2.7}
L_{V_l}\k_j=(L_{V_l}\k_j)^{\alpha}_{\b{\beta}}d\b{\zeta}^{\beta}\otimes\frac{\p}{\p \zeta^{\alpha}}=\left(\p_l(A^{\alpha}_{j\b{\beta}})-g_{l\b{\gamma}}g^{\b{\gamma}\sigma}A^{\alpha}_{j\b{\beta};\sigma}+A^{\sigma}_{j\b{\beta}}g_{l\sigma\b{\gamma}}g^{\b{\gamma}\alpha}\right)d\b{\zeta}^{\beta}\otimes\frac{\p}{\p \zeta^{\alpha}}.\end{aligned}$$ By a direct calculation, one has $$\begin{aligned}
\label{2.13}
(L_{V_l}\k_j)^{\alpha}_{\b{\beta}}=(L_{V_l}\k_j)^{\tau}_{\b{\delta}}g^{\b{\delta}\alpha}g_{\tau\b{\beta}}.\end{aligned}$$ In fact, by (\[2.1\]), one has $$\begin{aligned}
(L_{V_l}\k_j)^{\alpha}_{\b{\beta}}&=\p_l(A^{\alpha}_{j\b{\beta}})-g_{l\b{\gamma}}g^{\b{\gamma}\sigma}A^{\alpha}_{j\b{\beta};\sigma}+A^{\sigma}_{j\b{\beta}}g_{l\sigma\b{\gamma}}g^{\b{\gamma}\alpha}\\
&=\p_l(A^{\alpha}_{j\b{\beta}})-A^{\sigma}_{j\b{\gamma}}g_{\sigma\b{\beta}}\p_lg^{\b{\gamma}\alpha}-(g_{l\b{\gamma}}g^{\b{\gamma}\sigma}A^{\tau}_{j\b{\delta};\sigma})g^{\b{\delta}\alpha}g_{\tau\b{\beta}}\\
&=\p_l A^{\sigma}_{j\b{\gamma}}g_{\sigma\b{\beta}}g^{\b{\gamma}\alpha}+ A^{\sigma}_{j\b{\gamma}}\p_l g_{\sigma\b{\beta}}g^{\b{\gamma}\alpha}-(g_{l\b{\gamma}}g^{\b{\gamma}\sigma}A^{\tau}_{j\b{\delta};\sigma})g^{\b{\delta}\alpha}g_{\tau\b{\beta}}\\
&=(\p_l(A^{\tau}_{j\b{\delta}})-g_{l\b{\gamma}}g^{\b{\gamma}\sigma}A^{\tau}_{j\b{\delta};\sigma}+A^{\sigma}_{j\b{\delta}}g_{l\sigma\b{\gamma}}g^{\b{\gamma}\tau})g^{\b{\delta}\alpha}g_{\tau\b{\beta}}\\
&=(L_{V_l}\k_j)^{\tau}_{\b{\delta}}g^{\b{\delta}\alpha}g_{\tau\b{\beta}},\end{aligned}$$ which completes the proof of (\[2.13\]). By (\[2.7\]) and (\[2.13\]), one has $$\begin{aligned}
\label{2.9}
\int_{X_t} L_{V_l}\k_j\cdot L_{\b{V}_m}\o{\k_{k}}\frac{\omega^n_t}{n!}=( L_{V_l}\k_j, L_{V_m}\k_{k}). \end{aligned}$$ Substituting (\[2.8\]) and (\[2.9\]) into (\[2.10\]), we have $$\begin{aligned}
\label{2.11}
\frac{\p^2 G_{j\b{k}}}{\p t^l\p \b{t}^m}=(\o{\k_{m}}\k_j, \o{\k_{l}}\k_k)+(\k_j\o{\k_{m}}, \k_k\o{\k_l})+( L_{V_l}\k_j, L_{V_m}\k_{k}).\end{aligned}$$ Denote by $H: A^{0,1}(X_t, T_{X_t})\to \text{Span}\{\k_{i}\}$ the orthogonal projection. By (\[2.5\]), one has $$\begin{aligned}
\label{2.12}
G^{p\b{q}}\frac{\p G_{j\b{q}}}{\p t^l}\frac{\p G_{p\b{k}}}{\p\b{t}^m}=G^{p\b{q}}(L_{V_l}\k_j,\k_q)(\k_p,L_{V_m}\k_k)=(H(L_{V_l}\k_j),H(L_{V_m}\k_k)). \end{aligned}$$ From (\[2.11\]) and (\[2.12\]), the curvature of the non-harmonic Weil–Petersson metric on the Poisson–Kähler fibration is
$$\begin{aligned}
\label{thm1}
R_{j\b{k}l\b{m}}&=-\frac{\p^2 G_{j\b{k}}}{\p t^l\p \b{t}^m}+ G^{p\b{q}}\frac{\p G_{j\b{q}}}{\p t^l}\frac{\p G_{p\b{k}}}{\p\b{t}^m}\\
&=-(\o{\k_{m}}\k_j, \o{\k_{l}}\k_k)-(\k_j\o{\k_{m}}, \k_k\o{\k_l})-(H^{\perp}(L_{V_l}\k_j), H^{\perp}(L_{V_m}\k_{k})).\end{aligned}$$
Here $H^{\perp}$ denotes the orthogonal projection from $A^{0,1}(X_t, T_{X_t})$ to $\text{Span}\{\k_{i}\}^{\perp}$.
For any two vectors $\xi=\xi^j\frac{\p}{\p t^j},\eta=\eta^j\frac{\p}{\p t^j}$ in $T\mc{B}$, we denote $$\begin{aligned}
\k_{\xi}=\k_j\xi^j,\quad \k_{\eta}=\k_j\eta^j.\end{aligned}$$ From (\[thm1\]), one has $$\begin{aligned}
\label{2.15}
R(\xi,\o{\xi},\eta,\o{\eta})&:=R_{j\b{k}l\b{m}}\xi^j\b{\xi}^k\eta^l\b{\eta}^m\leq -(\o{\k_{\eta}}\k_\xi, \o{\k_{\eta}}\k_\xi)-(\k_\xi\o{\k_{\eta}}, \k_\xi\o{\k_\eta})=-2(\o{\k_{\eta}}\k_\xi, \o{\k_{\eta}}\k_\xi). \end{aligned}$$ Note that $$\begin{aligned}
\label{2.17}
\langle\o{\k_{\eta}}\k_\xi, \o{\k_{\eta}}\k_\xi\rangle\geq \frac{1}{n}\left|\sum_{\beta=1}^n(\k_{\eta}\o{\k_{\xi}})^{\beta}_{\beta}\right| ^2=\frac{1}{n}\left|\text{Tr}(\k_{\eta}\o{\k_{\xi}})\right|^2.\end{aligned}$$ In fact, by choosing a normal coordinate system around a fixed point, one may assume that $g_{\alpha\b{\beta}}=\delta_{\alpha\beta}$ at that point; then one has $$\begin{aligned}
\langle\o{\k_{\eta}}\k_\xi, \o{\k_{\eta}}\k_\xi\rangle&=(\o{\k_{\eta}}\k_\xi)^{\b{\gamma}}_{\b{\beta}}(\k_{\eta}\o{\k_{\xi}})^{\tau}_{\alpha}g^{\alpha\b{\beta}}g_{\tau\b{\gamma}}
=\sum_{\beta,\gamma=1}^n(\o{\k_{\eta}}\k_\xi)^{\b{\gamma}}_{\b{\beta}}(\k_{\eta}\o{\k_{\xi}})^{\gamma}_{\beta}\\
&\geq \sum_{\beta=1}^n|(\k_{\eta}\o{\k_{\xi}})^{\beta}_{\beta}|^2
\geq \frac{1}{n}\left(\sum_{\beta=1}^n|(\k_{\eta}\o{\k_{\xi}})^{\beta}_{\beta}|\right)^2\\
&\geq \frac{1}{n}\left|\sum_{\beta=1}^n(\k_{\eta}\o{\k_{\xi}})^{\beta}_{\beta}\right| ^2=\frac{1}{n}\left|\text{Tr}(\k_{\eta}\o{\k_{\xi}})\right|^2.\end{aligned}$$ By (\[2.17\]), we have
$$\begin{aligned}
\label{2.14}
(\o{\k_{\eta}}\k_\xi, \o{\k_{\eta}}\k_\xi)&=\int_{X_t}\langle\o{\k_{\eta}}\k_\xi, \o{\k_{\eta}}\k_\xi\rangle\frac{\omega^n_t}{n!}\geq \frac{1}{n}\int_{X_t}\left|\text{Tr}(\k_{\eta}\o{\k_{\xi}})\right|^2\frac{\omega^n_t}{n!}\\
&\geq\frac{1}{n}\left(\int_{X_t}\left|\text{Tr}(\k_{\eta}\o{\k_{\xi}})\right|\frac{\omega^n_t}{n!}\right)^2\left(\int_{X_t}\frac{\omega^n_t}{n!}\right)^{-1}\\
&\geq\frac{1}{n}\left|\langle\eta,\xi\rangle_{DF}\right|^2|X_t|^{-1},\end{aligned}$$
where $|X_t|:=\int_{X_t}\frac{\omega^n_t}{n!}$ denotes the volume of each fiber. From (\[2.15\]) and (\[2.14\]), we obtain $$\begin{aligned}
\label{2.16}
R(\xi,\o{\xi},\eta,\o{\eta})\leq -\frac{2}{n}|X_t|^{-1}|\langle\eta,\xi\rangle_{DF}|^2.\end{aligned}$$ From (\[2.16\]), we obtain that the holomorphic bisectional curvature of the non-harmonic Weil-Petersson metric is non-positive, and is negative if $\xi$ and $\eta$ are not orthogonal to each other. The holomorphic sectional curvature satisfies $$\begin{aligned}
\frac{R(\xi,\o{\xi},\xi,\o{\xi})}{\|\xi\|^4}\leq -\frac{2}{n}|X_t|^{-1},\end{aligned}$$ and its Ricci curvature satisfies $$\begin{aligned}
\frac{{\rm Ric}(\xi,\o{\xi})}{\|\xi\|^2} =\frac{\sum_{j=1}^{\dim\mc{B}} R(\xi,\o{\xi},e_j,\o{e_j})}{\|\xi\|^2}\leq -\frac{2}{n}|X_t|^{-1}\frac{\sum_{j=1}^{\dim\mc{B}} |\langle e_j,\xi\rangle_{DF}|^2}{\|\xi\|^2}=-\frac{2}{n}|X_t|^{-1},\end{aligned}$$ where $\{e_j\}$ is an orthonormal basis with respect to $\omega_{\rm DF}$. The scalar curvature satisfies $$\begin{aligned}
\sum_{j=1}^{\dim\mc{B}}{\rm Ric}(e_j,\o{e_j})\leq -\frac{2}{n}|X_t|^{-1}\dim\mc{B}. \end{aligned}$$ Thus, we complete the proof of Theorem \[th:wang-1\].
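The pointwise bound (\[2.17\]) used above is, at a point where $g_{\alpha\b{\beta}}=\delta_{\alpha\beta}$, just the Cauchy–Schwarz inequality $|\mathrm{Tr}(A)|^2\leq n\,\|A\|_F^2$ for an $n\times n$ matrix $A$. A quick numerical sanity check of this linear-algebra fact (illustrative only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# random complex n x n matrix standing in for the endomorphism in (2.17)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

lhs = np.linalg.norm(A, 'fro') ** 2   # <A, A> = sum of |A_ij|^2
rhs = abs(np.trace(A)) ** 2 / n       # |Tr A|^2 / n

assert lhs >= rhs
```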
From (\[2.17\]) and (\[2.14\]), one obtains that $R(\xi,\o{\xi},\eta,\o{\eta})=0$ if and only if $$\begin{aligned}
\k_{\eta}\o{\k_\xi}\equiv 0 \quad\text{and}\quad H^{\perp}(L_{V_\eta}\k_\xi)=0,
\end{aligned}$$ which is also equivalent to $\k_{\eta}\o{\k_\xi}$ is a zero matrix on $X_t$ and $L_{V_\eta}\k_\xi$ lies in $\text{Span}\{\k_i\}$.
[99]{}
L. V. Ahlfors, [*Some remarks on Teichmüller’s space of Riemann surfaces*]{}, Ann. Math. [**74**]{} (1961), 171–191.
L. V. Ahlfors, [*Curvature properties of Teichmüller’s space*]{}, Journal d’Analyse Mathématique [**9**]{} (1961), 161–176.
T. Aikou, [*Projective flatness of complex Finsler metrics*]{}, Publ. Math. Debrecen [**63**]{} (2003), no. 3, 343–362.
C. Arezzo and G. Tian, [*Infinite geodesic rays in the space of Kähler potentials*]{}, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) [**2**]{} (2003), no. 4, 617–630.
B. Berndtsson, [*Convexity on the space of Kähler metrics*]{}, Ann. Fac. Sci. Toulouse Math. [**22**]{} (2013), 713–746.
B. Berndtsson, [*Curvature of vector bundles associated to holomorphic fibrations*]{}, Ann. Math. [**169**]{}, (2009), 531–560.
B. Berndtsson, [*Strict and non strict positivity of direct image bundles*]{}, Math. Z. [**269**]{} (3-4), (2011), 1201–1218.
B. Berndtsson, [*Negative curvature and complex structures*]{}, preprint.
B. Berndtsson, [*Private discussions*]{}.
B. Berndtsson, M. Păun and X. Wang, [*Algebraic fiber spaces and curvature of higher direct images*]{}, arXiv:1704.02279.
B. Berndtsson, D. Cordero–Erausquin, B. Klartag, Y.A. Rubinstein, [*Complex Legendre duality*]{}, arXiv:1608.05541, Amer. J. Math., to appear.
R. Bott, L. W. Tu, Differential Forms in Algebraic Topology (Springer, 1982).
D. Burns, [*Curvatures of Monge–Ampère Foliations and Parabolic Manifolds*]{}, Ann. Math. [**115**]{} (1982), 349–373.
X. X. Chen, [*The space of Kähler metrics*]{}. J. Differ. Geom. [**56**]{} (2000), 189–234.
X. X. Chen and G. Tian, [*Geometry of Kähler metrics and foliations by holomorphic discs*]{}, Publ. Math. I.H.E.S. [**107**]{} (2008), 1–107.
B. Claudon, S. Kebekus and B. Taji, [*Generic positivity and applications to hyperbolicity of moduli spaces*]{}, arXiv preprint.
Y. Deng, [*Kobayashi hyperbolicity of moduli spaces of minimal projective manifolds of general type (with the appendix by Dan Abramovich)*]{}, arXiv:1806.01666.
S. K. Donaldson, [*Remarks on gauge theory, complex geometry and four–manifold topology*]{}, Fields Medallists’ Lectures (Atiyah and Iagolnitzer, eds.), World Scientific, 1997, pp. 384–403.
S. K. Donaldson, [*Scalar curvature and stability of toric varieties*]{}, J. Differential Geom. [**62**]{} (2002), 289–349.
H. Feng, K. Liu and X. Wan, [*Chern forms of holomorphic Finsler vector bundles and some applications*]{}, International Journal of Mathematics, Vol. [**27**]{}, No. 4, 2016.
H. Feng, K. Liu and X. Wan, [*Geodesic–Einstein metrics and nonlinear stabilities*]{}, Trans. Amer. Math. Soc. [**371**]{} (2019), no. 11, 8029–8049.
A. Fujiki, [*Moduli space of polarized algebraic manifolds and Kähler metrics*]{}, Sugaku Expositions, [**5**]{} (1992), 173–191.
A. Fujiki and G. Schumacher, [*The moduli space of extremal compact Kähler manifolds and generalized Weil–Petersson metrics*]{}, Publications of the Research Institute for Mathematical Sciences [**26**]{} (1990), 101–183.
W. Fulton, Intersection Theory, Second Edition, Springer, 1998.
P. Griffiths (Ed.), [*Topics in Transcendental Algebraic Geometry*]{}, Princeton University Press, 1984.
P. Griffiths and W. Schmid, [*Locally homogeneous complex manifolds*]{}, Acta Math. [**123**]{} (1969), 253–302.
D. Huybrechts, Complex geometry. An introduction. Universitext. Springer-Verlag, Berlin, 2005. xii+309 pp.
G. R. Kempf, [*Complex abelian varieties and theta functions*]{}, Springer.
S. Kobayashi, Differential geometry of complex vector bundles, Iwanami-Princeton Univ. Press, 1987.
Z. Lu, [*On the geometry of classifying spaces and horizontal slices*]{}, Amer. J. Math. [**121**]{} (1999), 177–198.
Z. Lu, [*On the Hodge metric of the universal deformation space of Calabi-Yau threefolds*]{}, J. Geom. Anal. [**11**]{} (2001), 103–118.
Z. Lu and X. Sun, [*Weil-Petersson geometry on moduli space of polarized Calabi-Yau manifolds*]{}, Jour. Inst. Math. Jussieu [**3**]{} (2004), 185–229.
M. Păun, [*Relative adjoint transcendental classes and Albanese maps of compact Kähler manifolds with nef Ricci curvature*]{}, in Advanced Studies in Pure Mathematics 74, (2017), Higher dimensional algebraic geometry, 335–356.
M. Popa and C. Schnell, [*Viehweg’s hyperbolicity conjecture for families with maximal variation*]{}, Inv. Math. [**208**]{} (2017), 677–713.
H. L. Royden, [*Intrinsic metrics on Teichmüller space*]{}, Proceedings of the International Congress of Mathematicians (Vancouver, BC, 1974), Vol. 2, 1974.
B. Shiffman, A. Sommese, Vanishing theorems on complex manifolds, Birkhäuser, Boston, Basel, Stuttgart, 1985.
C. Schnell, [*Generic vanishing theorem*]{}, lecture notes available on his homepage.
G. Schumacher, [*The curvature of the Petersson-Weil metric on the moduli space of Kähler-Einstein manifolds*]{}, Complex analysis and geometry, 339-354, Univ. Ser. Math., Plenum, New York, 1993.
G. Schumacher, [*Positivity of relative canonical bundles and applications*]{}, Invent. Math. [**190**]{} (2012), 1–56.
C. Simpson, [*Higgs bundles and local systems*]{}, Publ. I.H.E.S. [**75**]{} (1992), 5–95.
S. Sun, [*Note on geodesic rays and simple test configurations*]{}. J. Symplectic Geom. [**8**]{} (2010), 57–65.
W. K. To and S. K. Yeung, [*Finsler Metrics and Kobayashi hyperbolicity of the moduli spaces of canonically polarized manifolds*]{}, Ann. Math. [**181**]{} (2015), 547–586.
E. Viehweg and K. Zuo, [*On the Brody hyperbolicity of moduli spaces for canonically polarized manifolds*]{}, Duke Math. J. [**118**]{} (2003), 103–150.
X. Wan, [*Remarks on the geodesic–Einstein metrics of a relative ample line bundle (with an appendix by Xu Wang)*]{}, arXiv:1808.06435.
X. Wan and X. Wang, [*Poisson–Kähler fibration II: local existence theory*]{}, TBA.
X. Wan and G. Zhang, [*The asymptotic of curvature of direct image bundle associated with higher powers of a relative ample line bundle*]{}, arXiv:1712.05922v1, 2017.
X. Wang, [*A curvature formula associated to a family of pseudoconvex domains*]{}, arXiv:1508.00242.
X. Wang, [*Curvature of higher direct image sheaves and its application on negative-curvature criterion for the Weil-Petersson metric*]{}, arXiv:1607.03265.
X. Wang, [*Notes on variation of Lefschetz star operator and $T$–Hodge theory*]{}, arXiv:1708.07332.
X. Wang, [*Curvature restrictions on a manifold with a flat Higgs bundle*]{}, arXiv:1608.00777.
X. Wang, [*A flat Higgs bundle structure on the complexified Kähler cone*]{}, arXiv:1612.02182.
R. Wells, Differential analysis on complex manifolds. Third edition. With a new appendix by Oscar Garcia-Prada. Graduate Texts in Mathematics, [**65**]{}. Springer, New York, 2008.
S. Wolpert, [*Chern forms and the Riemann tensor for the moduli space of curves*]{}, Invent. Math. [**85**]{} (1986), no. 1, 119–145.
[^1]: We would like to thank Bo Berndtsson for telling us Burns’ result. We did not know Burns’ result when we finished the proof of our main theorem based on the Higgs bundle structure on $\mathcal J(V, \omega)$.
|
---
abstract: 'Dicke states are typical examples of quantum states with genuine multipartite entanglement. They are valuable resources in many quantum information processing tasks, including multiparty quantum communication and quantum metrology. Phased Dicke states are a generalization of Dicke states and include antisymmetric basis states as a special example. These states are useful in atomic and molecular physics besides quantum information processing. Here we propose practical and efficient protocols for verifying all phased Dicke states, including $W$ states and qudit Dicke states. Our protocols are based on adaptive local projective measurements with classical communication, which are easy to implement in practice. The number of tests required increases only linearly with the number $n$ of parties, which is exponentially more efficient than traditional tomographic approaches. In the case of $W$ states, the number of tests can be further reduced to $O(\sqrt{n})$. Moreover, we construct an optimal protocol for any antisymmetric basis state; the number of tests required decreases (rather than increases) monotonically with $n$. This is the only optimal protocol known for multipartite nonstabilizer states.'
author:
- Zihao Li
- 'Yun-Guang Han'
- 'Hao-Feng Sun'
- Jiangwei Shang
- Huangjun Zhu
title: Efficient Verification of Phased Dicke States
---
Introduction
============
Quantum states with genuine multipartite entanglement (GME) play crucial roles in quantum information processing and foundational studies [@Horo09; @Guhne09]. Dicke states [@Dicke54; @Wei03] are among the most important multipartite quantum states other than stabilizer states. They are useful in many quantum information processing tasks, such as multiparty quantum communication and quantum metrology [@Boure06; @Kiesel07; @Luo17; @Murao99; @Preve09; @Wiec09; @Pezze08]. Phased Dicke states are a generalization of Dicke states obtained by introducing relative phases, and they are equally useful in the above research areas [@Krammer09; @Chiuri10]. Besides the usual Dicke states, antisymmetric basis states are a prominent example of phased Dicke states [@Denni01; @Zanar02; @Bravyi03]; they are commonly used to represent fermions and play a paramount role in atomic and molecular physics. By now numerous experiments have been performed to prepare and engineer Dicke states [@Kiesel07; @Preve09; @Wiec09; @Zou18; @Cruz18; @Haff05], phased Dicke states [@Chiuri10; @Chiuri12], and antisymmetric basis states [@Zhang16; @Berry18] in various platforms.
In practice, it is usually extremely difficult to prepare quantum states with GME perfectly, and the success probability decays rapidly with the number of particles. Therefore, it is crucial to verify these states with high precision efficiently using limited resources. For the convenience of applications, it is also desirable to achieve this task using only local operations and classical communication (LOCC). Unfortunately, traditional tomographic approaches are notoriously inefficient and too resource-consuming for systems with more than 10 qubits [@Haff05]. Although direct fidelity estimation [@FlamL11] can improve the efficiency significantly, it is still not satisfactory except for some special states, like stabilizer states. Recently, an alternative approach known as quantum state verification [@HayaMT06; @Haya09; @Aolita15; @Hang17; @PLM18; @ZhuEVQPSshort19; @ZhuEVQPSlong19] has attracted increasing attention because of its potential to achieve a much higher efficiency. So far efficient verification protocols based on LOCC have been constructed for bipartite pure states [@LHZ19; @Wang19; @Yu19], stabilizer states (including graph states) [@HayaM15; @PLM18; @ZhuH19E; @ZhuEVQPSlong19], hypergraph states [@ZhuH19E], weighted graph states [@HayaTake19], and qubit Dicke states [@Liu19]. On the other hand, efficient protocols are still not available for many other important quantum states, including qudit Dicke states and phased Dicke states in particular. In addition, it is extremely difficult to construct optimal verification protocols, especially for nonstabilizer states. So far optimal protocols under LOCC are known only for maximally entangled states [@HayaMT06; @Haya09; @ZhuH19O], two-qubit pure states [@Wang19], and Greenberger-Horne-Zeilinger (GHZ) states [@LiGHZ19]. Any progress on this topic is of interest to both theoretical studies and practical applications.
In this work, we construct highly efficient and practical verification protocols for all phased Dicke states, including $W$ states and qudit Dicke states. Our protocols only require adaptive local projective measurements and are thus easy to realize with current technologies. The number of tests required increases only linearly with the number $n$ of parties, which is exponentially more efficient than traditional approaches, including tomography and direct fidelity estimation. In the case of $W$ states, the number of tests can be further reduced to $O(\sqrt{n})$, which is quadratically fewer compared with the best verification protocol known in the literature [@Liu19]. For the three-qubit $W$ state, one of our protocols is almost optimal under LOCC; in addition, the protocol is useful for fidelity estimation because the verification operator is homogeneous [@ZhuEVQPSlong19]. Moreover, we construct an optimal verification protocol for any antisymmetric basis state; the number of tests required decreases monotonically with $n$. This is the only optimal protocol known for multipartite nonstabilizer states. For quantum states with GME, such optimal protocols were previously known only for GHZ states [@LiGHZ19], which are stabilizer states and admit a Schmidt decomposition. In the course of this study, we introduce several tools for improving the efficiency of a given verification protocol, which are useful for studying quantum verification in general.
Pure state verification
=======================
Basic framework
---------------
Before presenting our main results, let us briefly review the basic framework of pure state verification [@PLM18; @ZhuEVQPSshort19; @ZhuEVQPSlong19]. Suppose there is a quantum device that is expected to produce the pure target state $|\Psi\>\in {\mathcal{H}}$. However, some errors may occur when the device is working, and it actually produces the states $\sigma_1, \sigma_2, \ldots, \sigma_N$ in $N$ runs. Let $\epsilon_j:=1-\<\Psi|\sigma_j|\Psi\>$ denote the infidelity between $\sigma_j$ and the target state, and $\bar{\epsilon}:=\sum_j\epsilon_j/N$ denote the average infidelity. Our aim is to verify whether these states are sufficiently close to the target state on average, that is, whether the average infidelity $\bar{\epsilon}$ is smaller than some threshold $\epsilon$.
To achieve this task, for each state $\sigma_j$ the verifier performs a test and accepts the states produced if and only if (iff) all tests are passed. Each test is specified by a two-outcome measurement $\{E_l,\openone-E_l\}$, which is chosen randomly with probability $p_l$ from a set of accessible measurements. Here the test operator $E_l$ corresponds to passing the test and satisfies the condition $E_l|\Psi\>=|\Psi\>$, so that the target state can always pass the test. If the infidelity of $\sigma_j$ satisfies $\epsilon_j\geq\tilde\epsilon$ for some threshold $\tilde\epsilon\geq0$, then the maximum probability that $\sigma_j$ can pass each test on average is given by [@PLM18; @ZhuEVQPSlong19] $$\max_{\<\Psi|\sigma|\Psi\>\leq 1-\tilde\epsilon }{\operatorname{tr}}(\Omega \sigma)=1- [1-\lambda_2(\Omega)]\tilde\epsilon=1- \nu(\Omega)\tilde\epsilon,$$ where $\Omega:=\sum_{l} p_l E_l$ is called the verification operator or a strategy, $\lambda_2(\Omega)$ denotes the second largest eigenvalue of $\Omega$, and $\nu(\Omega):=1-\lambda_2(\Omega)$ is the spectral gap from the maximum eigenvalue. The probability of passing all $N$ tests is at most $\prod_j[1- \nu(\Omega)\epsilon_j]\leq[1- \nu(\Omega)\bar{\epsilon}]^N$. To ensure the condition $\bar{\epsilon}<\epsilon$ with significance level $\delta$, it suffices to take [@ZhuEVQPSshort19; @ZhuEVQPSlong19] $$\label{eq:NumberTest}
N=\biggl\lceil\frac{ \ln \delta}{\ln[1-\nu(\Omega)\epsilon]}\biggr\rceil\approx \frac{ \ln \delta^{-1}}{\nu(\Omega)\epsilon},$$ where the approximation is applicable when $\nu(\Omega)\epsilon\ll 1$. According to this equation, the efficiency of a verification strategy $\Omega$ is mainly determined by its spectral gap $\nu(\Omega)$.
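For concreteness, the formula for $N$ above is easy to evaluate; the snippet below (a minimal illustration, not taken from the paper) computes the exact number of tests and compares it with the approximation $\ln\delta^{-1}/(\nu\epsilon)$:

```python
import math

def num_tests(nu, epsilon, delta):
    """Exact number of tests N = ceil(ln(delta) / ln(1 - nu * epsilon))."""
    return math.ceil(math.log(delta) / math.log(1 - nu * epsilon))

# Example: spectral gap 1/2, infidelity threshold 1%, significance level 1%.
N_exact = num_tests(0.5, 0.01, 0.01)          # 919
N_approx = math.log(1 / 0.01) / (0.5 * 0.01)  # about 921
```

The two values agree closely because $\nu(\Omega)\epsilon=0.005\ll1$, as noted after the equation.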
\[sec:Opt\]Optimization of test probabilities
---------------------------------------------
To optimize the verification efficiency, we need to maximize the spectral gap of the verification operator for the target state $|\Psi\>$ or minimize the second largest eigenvalue. Suppose the set of test operators $\{E_l\}_l$ for $|\Psi\>$ is fixed, then we need to optimize the probabilities for performing individual tests. Given a general verification operator of the form $\Omega=\sum_l p_l E_l$, the second largest eigenvalue of $\Omega$ reads $$\lambda_2(\Omega)=\|\bar{\Omega}\|=\biggl\|\sum_l p_l \bar{E_l}\biggr\|,$$ where $\|\cdot\|$ denotes the operator norm and $$\bar{\Omega}:=\Omega-|\Psi\>\<\Psi|,\quad \bar{E}_l:=E_l-|\Psi\>\<\Psi|.$$ Note that $\lambda_2(\Omega)$ is convex in $\{p_l\}_l$, so that $\nu(\Omega)$ is concave in $\{p_l\}_l$. In addition, the minimum of $\lambda_2(\Omega)$ over $\{p_l\}_l$ can be computed via semidefinite programming, $$\label{eq:MinGapSDP}
\begin{aligned}
&\text{minimize} &f & \\
&\text{subject to} &\sum_l p_l \bar{E_l}\leq f\openone, \quad p_l \geq0,& & \\
&& \sum_l p_l=1.& \\
\end{aligned}$$
The minimum in [Eq. ]{} can be derived analytically when $\Omega$ consists of two projective tests thanks to the following lemma, which is proved in Appendix \[app:LemmaTwoTestProof\].
\[lem:2TestStrategy\] Suppose $\Omega=p P_1 +(1-p)P_2$, where $0\leq p\leq 1$ and $P_1, P_2$ are test projectors for $|\Psi\>$ with ranks at least 2. Then $\lambda_2(\Omega)\geq (1+\sqrt{q})/2$ and $\nu(\Omega)\leq(1-\sqrt{q})/2$, where $$q:=\|\bar{P}_1\bar{P}_2 \bar{P}_1\|=\max_{|\phi\>\in {\operatorname{supp}}(\bar{P}_1)} \<\phi|P_2|\phi\>.$$ If $q<1$, then the upper bound for $\nu(\Omega)$ is saturated iff $p=1/2$.
Note that any test projector based on LOCC has rank at least 2 if $|\Psi\>$ is entangled. Previously, [Lemma \[lem:2TestStrategy\]]{} was known only in the special case in which $\bar{P}_1$ and $\bar{P}_2$ are orthogonal [@ZhuH19O].
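The lemma can be checked numerically on a toy example (our own illustration, not from the paper): take $|\Psi\>=|e_0\>$ in $\mathbb{C}^4$, $P_1$ the projector onto ${\rm span}\{e_0,e_1\}$, and $P_2$ the projector onto ${\rm span}\{e_0,(e_1+e_2)/\sqrt{2}\}$, so that $q=1/2$; the optimal gap $(1-\sqrt{q})/2$ is then attained at $p=1/2$:

```python
import numpy as np

d = 4
psi = np.eye(d)[0]                            # target state |e0>
v = (np.eye(d)[1] + np.eye(d)[2]) / np.sqrt(2)

P1 = np.outer(psi, psi) + np.outer(np.eye(d)[1], np.eye(d)[1])
P2 = np.outer(psi, psi) + np.outer(v, v)

def second_eig(O):
    return np.sort(np.linalg.eigvalsh(O))[-2]

# q = || P1bar P2bar P1bar || with Pbar = P - |psi><psi|
P1b, P2b = P1 - np.outer(psi, psi), P2 - np.outer(psi, psi)
q = np.linalg.norm(P1b @ P2b @ P1b, 2)        # spectral norm

gaps = [1 - second_eig(p * P1 + (1 - p) * P2) for p in np.linspace(0, 1, 101)]
best = max(gaps)

assert np.isclose(q, 0.5)
assert np.isclose(best, (1 - np.sqrt(q)) / 2)
assert np.isclose(gaps[50], best)             # optimum at p = 1/2
```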
\[sec:sym\]Symmetrization of verification operators
---------------------------------------------------
Here we consider another recipe for improving the verification efficiency by employing the symmetry of the target state $|\Psi\>$; similar ideas have already found applications in verifying bipartite pure states [@PLM18; @Wang19; @Yu19]. Suppose $\Omega$ is a verification operator for $|\Psi\>$, so that $\Omega\geq |\Psi\>\<\Psi|$ and $\Omega|\Psi\>=|\Psi\>$. Let $U$ be a unitary operator that leaves $|\Psi\>$ invariant up to a phase factor, that is, $U|\Psi\>\<\Psi|U^\dag =|\Psi\>\<\Psi|$ or, equivalently, $U|\Psi\>={\mathrm{e}}^{{\mathrm{i}}\phi} |\Psi\>$, where $\phi$ is a phase (a real number). Then $U\Omega U^\dag$ is also a valid verification operator for $|\Psi\>$. Moreover, $U\Omega U^\dag$ and $\Omega$ have the same spectral gap, that is, $$\label{eq:SpectralGapU}
\nu(U\Omega U^\dag)=\nu(\Omega).$$
Let $G$ be the group generated by product unitaries and permutations that leave $|\Psi\>\<\Psi|$ invariant. Then $U\Omega U^\dag$ can be realized by LOCC (is separable) iff $\Omega$ can be realized by LOCC (is separable) for any $U\in G$. Let $S$ be a subset of $G$ and define $$\label{eq:OmegaSym}
\Omega^S:=\int_SU\Omega U^\dag {\mathrm{d}}U,$$ where the integral is taken with respect to the normalized probability measure induced from the Haar measure (see Chapter 11 in Ref. [@Halmos13] for example) on $G$. The measure reduces to the normalized Haar measure on $S$ when $S$ is a group and reduces to the counting measure (see page 27 in Ref. [@Schilling05] for example) when $S$ is finite. The verification operator $\Omega$ is called $S$-invariant if $\Omega^S=\Omega$.
If the verification strategy $\Omega$ consists of $m$ distinct tests and has the form $\Omega=\sum_l p_l E_l$, then $$\Omega^S=\sum_l p_l E_l^S,$$ where $$E_l^S:=\int_SUE_l U^\dag {\mathrm{d}}U.$$ If $S$ is a finite set with cardinality $|S|$, then the above equation reduces to $$E_l^S=\frac{1}{|S|} \sum_{U\in S}U E_l U^\dag.$$ If each test operator $E_l$ is a projector, then each $E_l^S$ can be realized by at most $|S|$ distinct projective tests. Therefore, $\Omega^S$ can be realized by at most $m|S|$ distinct projective tests.
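As a small numerical illustration of the symmetrization procedure (a toy example of our own, not from the paper): for the two-qubit target state $|\Psi\>=(|01\>+|10\>)/\sqrt{2}$, averaging a rank-2 test projector over $S=\{\openone,{\rm SWAP}\}$ can only increase the spectral gap, in line with Proposition \[pro:GapSym\]:

```python
import numpy as np

# target state (|01> + |10>) / sqrt(2)
psi = np.zeros(4); psi[1] = psi[2] = 1 / np.sqrt(2)
Psi = np.outer(psi, psi)

SWAP = np.eye(4)[[0, 2, 1, 3]]                # swap of the two qubits

# a valid but swap-asymmetric test projector onto span{psi, w}
w = np.zeros(4); w[0] = w[1] = 1 / np.sqrt(2)
w = w - (psi @ w) * psi; w /= np.linalg.norm(w)  # orthonormalize against psi
E = Psi + np.outer(w, w)

E_sym = (E + SWAP @ E @ SWAP) / 2             # average over S

def gap(O):
    ev = np.sort(np.linalg.eigvalsh(O))
    return ev[-1] - ev[-2]                    # = nu(O), since the top eigenvalue is 1

assert np.allclose(E_sym @ psi, psi)          # the target state still passes
assert gap(E_sym) >= gap(E)                   # here: 1/3 versus 0
```

The un-symmetrized projector has $\nu(E)=0$ (its second eigenvalue is 1), while the averaged operator has $\nu(E^S)=1/3$.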
\[pro:GapSym\] Suppose $S\subseteq H\leq G$. Then $$\nu(\Omega)\leq\nu(\Omega^S)\leq\nu(\Omega^H)\leq \nu(\Omega^G).$$
Here the notation $S\subseteq H$ means $S$ is a subset of $H$; by contrast, the notation $H\leq G$ means $H$ is a subgroup of $G$. Proposition \[pro:GapSym\] shows that symmetrization is an effective way for improving the verification efficiency.
The inequality $\nu(\Omega)\leq\nu(\Omega^S)$ follows from [Eq. ]{} and the fact that $\nu(\Omega)$ is concave in $\Omega$. The inequality $\nu(\Omega^S)\leq\nu(\Omega^H)$ follows from the inequality $\nu(\Omega)\leq\nu(\Omega^S)$ and the fact that $\Omega^H=(\Omega^S)^H$ given that $S$ is a subset of the group $H$. The inequality $\nu(\Omega^H)\leq\nu(\Omega^G)$ follows from the inequality $\nu(\Omega^S)\leq\nu(\Omega^H)$.
The following proposition is useful for reducing the number of distinct tests when constructing a verification strategy based on the symmetrization procedure. It is a corollary of [Eq. ]{} below.
\[pro:GapSym3\] Suppose $S\leq H\leq G$; in addition, $S$ and $H$ have the same number of irreducible components. Then $\Omega^{S}=\Omega^{H}$ and $\nu(\Omega^S)=\nu(\Omega^{H})$.
Suppose $S$ is a subgroup of $G$ and has $r$ inequivalent irreducible components with dimensions $d_j$ and multiplicities $m_j$, respectively (here we view $S$ as a representation of itself). Then the Hilbert space ${\mathcal{H}}$ decomposes into $${\mathcal{H}}=\bigoplus_{j=1}^r {\mathcal{H}}_j\otimes {\mathbb{C}}^{m_j},$$ where ${\mathcal{H}}_j$ has dimension $d_j$ and carries the $j$th irreducible representation, and ${\mathbb{C}}^{m_j}$ denotes the multiplicity space. Let $Q_j$ be the projector onto ${\mathcal{H}}_j\otimes {\mathbb{C}}^{m_j}$, then $$\label{eq:OmegaSymProjGen}
\Omega^S=\sum_{j=1}^r \frac{1}{d_j} \bigl[\openone_{{\mathcal{H}}_j}\otimes {\operatorname{tr}}_{{\mathcal{H}}_j}(Q_j \Omega)\bigr] Q_j,$$ where ${\operatorname{tr}}_{{\mathcal{H}}_j}$ means the partial trace over ${\mathcal{H}}_j$ (cf. the appendix of Ref. [@Gross07]). If all irreducible components of $S$ are inequivalent, that is, $m_j=1$ for $j=1,2,\ldots, r$, then [Eq. ]{} reduces to $$\label{eq:OmegaSymProj}
\Omega^S=\sum_{j=1}^r \frac{{\operatorname{tr}}(Q_j \Omega)}{d_j} Q_j,$$ where $Q_j$ is the projector onto the $j$th irreducible component. In this case, all $S$-invariant verification operators commute with each other.
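For a concrete instance of the multiplicity-space formula above (a toy example of our own, not from the paper): for $S=\{\openone,{\rm SWAP}\}$ on two qubits, the trivial and sign representations live on the symmetric and antisymmetric subspaces (with $d_j=1$ and multiplicities 3 and 1), and the twirl $\Omega^S$ reduces to the pinching $Q_+\Omega Q_+ + Q_-\Omega Q_-$:

```python
import numpy as np

SWAP = np.eye(4)[[0, 2, 1, 3]]
Q_plus = (np.eye(4) + SWAP) / 2      # projector onto the symmetric subspace
Q_minus = (np.eye(4) - SWAP) / 2     # projector onto the antisymmetric subspace

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
Omega = A + A.T                      # an arbitrary Hermitian operator

twirl = (Omega + SWAP @ Omega @ SWAP) / 2
pinch = Q_plus @ Omega @ Q_plus + Q_minus @ Omega @ Q_minus

assert np.allclose(twirl, pinch)
```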
Verification of qudit Dicke states
==================================
In this section we construct an efficient protocol for verifying general qudit Dicke states [@Wei03]. Previously, efficient protocols were known only for qubit Dicke states [@Liu19].
Dicke states
------------
Up to a local unitary transformation, each $n$-qudit Dicke state can be labeled by a sequence of ordered positive integers that sum up to $n$. Let $$\label{eq:bfk}
{\mathbf{k}}:=(k_0,k_1,\dots,k_r),$$ where $k_0,k_1,\dots,k_r$ are positive integers that satisfy the conditions $\sum_{j=0}^{r}k_j=n$ and $k_0\geq k_1\geq\cdots\geq k_r\geq 1$. Denote by $B({\mathbf{k}})$ the set of all sequences of $n$ symbols in which $k_i$ symbols are equal to $i$ for $i=0,1,\dots,r$. Then the $n$-partite Dicke state corresponding to the sequence ${\mathbf{k}}$ has the form $$\label{eq:quditDstate}
{\mbox{$ | D({\mathbf{k}}) \rangle $}}=\frac{1}{\sqrt{m}} \sum_{u\in B({\mathbf{k}})}|u\>,$$ where $m:=|B({\mathbf{k}})|=n!/\big(\prod _{j=0}^r k_j!\big)$. For example, the Dicke state corresponding to the sequence ${\mathbf{k}}=(1,1,1)$ reads $${\mbox{$ | D({\mathbf{k}}) \rangle $}}=\frac{1}{\sqrt{6}} \big( {\mbox{$ | 012 \rangle $}}+{\mbox{$ | 021 \rangle $}}+{\mbox{$ | 102 \rangle $}}+{\mbox{$ | 120 \rangle $}}+{\mbox{$ | 201 \rangle $}}+{\mbox{$ | 210 \rangle $}}\big).$$ To avoid trivial cases, we assume that $n\geq3$ and $k_0<n$ in the rest of this paper unless it is stated otherwise.
When $r=1$, ${\mbox{$ | D({\mathbf{k}}) \rangle $}}$ is a familiar qubit Dicke state. If in addition $k_1=1$, then the Dicke state reduces to a $W$ state [@Haff05], $$\label{eq:Wstate}
{\mbox{$ | W_n \rangle $}}=\frac{1}{\sqrt{n}}\sum_{u\in B_n^1}|u\>,$$ where $B_n^1$ is the set of strings in $\{0,1\}^n$ with Hamming weight 1. In particular, the three-qubit $W$ state ($n=3$) reads $$\label{eq:3qubitW}
{\mbox{$ | W_3 \rangle $}}=\frac{1}{\sqrt{3}}\big({\mbox{$ | 001 \rangle $}}+{\mbox{$ | 010 \rangle $}}+{\mbox{$ | 100 \rangle $}}\big).$$
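The definitions above are easy to realize numerically; the following sketch (an illustration of ours, with the helper `dicke_state` defined here rather than taken from any library) builds $|D({\mathbf{k}})\>$ as a vector in $(\mathbb{C}^d)^{\otimes n}$ and recovers $|W_3\>$ for ${\mathbf{k}}=(2,1)$:

```python
import itertools
import numpy as np

def dicke_state(k):
    """|D(k)>: equal superposition over all arrangements of a multiset with
    k[i] copies of symbol i; n = sum(k) parties, local dimension d = len(k)."""
    n, d = sum(k), len(k)
    symbols = [i for i, ki in enumerate(k) for _ in range(ki)]
    arrangements = sorted(set(itertools.permutations(symbols)))
    psi = np.zeros(d ** n)
    for u in arrangements:
        idx = 0
        for s in u:                  # basis index of |u> in base d
            idx = idx * d + s
        psi[idx] = 1.0
    return psi / np.sqrt(len(arrangements))

w3 = dicke_state((2, 1))             # |W_3> = (|001> + |010> + |100>) / sqrt(3)
```

For ${\mathbf{k}}=(1,1,1)$ the same routine returns the three-qutrit Dicke state with $m=6$ nonzero amplitudes, matching the example above.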
Denote by $S$ the group of all permutations of the $n$ parties (realized as unitary transformations); denote by $H$ the group of all unitary transformations of the form $U^{\otimes n}$, where $U$ is diagonal in the computational basis; let $G=SH=HS$. The Dicke state ${\mbox{$ | D({\mathbf{k}}) \rangle $}}$ is invariant under any permutation of the $n$ parties and is thus invariant under the action of $S$. In addition, it is invariant (up to an overall phase factor) under any unitary transformation in $H$ or $G$. These observations are instructive for constructing efficient protocols for verifying the state ${\mbox{$ | D({\mathbf{k}}) \rangle $}}. Given any verification operator $\Omega$ for ${\mbox{$ | D({\mathbf{k}}) \rangle $}}$, we can construct potentially more efficient verification operators $\Omega^H$, $\Omega^S$, $\Omega^G$ according to [Eq. ]{} and Proposition \[pro:GapSym\].
\[sec:DickeVerify\]Efficient verification of qudit Dicke states
---------------------------------------------------------------
To construct an efficient protocol for verifying the Dicke state $|D({\mathbf{k}})\>$ defined in [Eq. ]{}, it is convenient to introduce some additional notation. Let $$\begin{aligned}
{\mathbf{k}}_t^s &:=(k_0,\dots,k_s+1, \dots,k_t-1, \dots,k_r), \label{eq:kst} \\
{\mathbf{k}}_{st}&:=(k_0,\dots,k_s-1, \dots,k_t-1, \dots,k_r), \label{eq:kst2} \\
{\mathbf{k}}_{ss} &:=(k_0,\dots,k_s-2,\dots,k_r) \quad {\rm for}\ \, k_s\geq2. \label{eq:kss}\end{aligned}$$ Here we assume that $0\leq s, t\leq r$ and $s\ne t$ in [Eq. ]{}, $0\leq s< t\leq r$ in [Eq. ]{}, and $0\leq s\leq r$ in [Eq. ]{}. Now the sets $B({\mathbf{k}}_t^s)$, $B({\mathbf{k}}_{st})$, and $B({\mathbf{k}}_{ss})$ can be defined in the same way as $B({\mathbf{k}})$.
Our verification protocol consists of $\binom{n}{2}$ distinct tests performed with uniform probabilities. Each test is associated with a pair of parties among the $n$ parties and is based on adaptive local projective measurements. To be specific, the test $P_{i,j}$ associated with parties $i$ and $j$ is illustrated in Fig. \[fig:measurement-I\] and realized as follows. All $n-2$ parties other than parties $i$ and $j$ perform the generalized Pauli-$Z$ measurements (the projective measurement in the computational basis), and their outcomes are labeled by a sequence $u$ of $n-2$ symbols, which corresponds to the product state $|u\>$. The measurements of parties $i$ and $j$ depend on the outcome $u$, and we need to distinguish three cases. Suppose $k_0,k_1,\dots,k_g\geq2$ and $k_{g+1}=k_{g+2}=\cdots=k_r=1$, where $-1\leq g\leq r$.
1. $u\in B({\mathbf{k}}_{ss})$ with $0\leq s\leq g$.\
In this case, the normalized reduced state of parties $i$ and $j$ reads $|s\>_i|s\>_j$ (if the target Dicke state is measured). Then the two parties both perform $Z$ measurement, and the test is passed if they both obtain outcome $s$.
2. $u\in B({\mathbf{k}}_{st})$ with $0\leq s<t\leq r$.\
In this case, the normalized reduced state of parties $i$ and $j$ reads $\frac{1}{\sqrt{2}}(|s\>_i|t\>_j+|t\>_i|s\>_j)$. Then the two parties both perform the projective measurement $\big\{T_{s,t}^+,T_{s,t}^-,I-T_{s,t}^+ -T_{s,t}^-\big\}$, where $I$ is the identity operator for one qudit and $$\begin{aligned}
T_{s,t}^+=\frac{1}{2}({\mbox{$ | s \rangle $}}+{\mbox{$ | t \rangle $}})({\mbox{$ \langle s | $}}+{\mbox{$ \langle t | $}}),\label{eq:Pst+} \\
T_{s,t}^-=\frac{1}{2}({\mbox{$ | s \rangle $}}-{\mbox{$ | t \rangle $}})({\mbox{$ \langle s | $}}-{\mbox{$ \langle t | $}}).\label{eq:Pst-}\end{aligned}$$ The test is passed if they both obtain the first outcome (corresponding to $T_{s,t}^+$) or if they both obtain the second outcome (corresponding to $T_{s,t}^-$).
3. Other cases.\
The state cannot be the target state ${\mbox{$ | D({\mathbf{k}}) \rangle $}}$, so the test is not passed.
\[tab:Protocol\] Spectral gap $\nu(\Omega)$ and number of required tests $N(\epsilon,\delta,\Omega)$ for the verification strategies studied in this work. Here $a:=\sqrt{\pi/2}\,\tanh(\pi/2)\approx 1.15$ and $b:=\sqrt{\pi/2}\,\coth(\pi/2)\approx 1.37$.

[c|cc]{} $\Omega$ &$\nu(\Omega)$ &$N(\epsilon,\delta,\Omega)$\
$\Omega_{W_n}$ ($n\gg1$, $n$ odd) & $\approx b/(4\sqrt{n})$ &$\approx 4b^{-1}\sqrt{n}\,\epsilon^{-1}\ln\delta^{-1}$\
$\Omega_{W_n}$ ($n\gg1$, $n$ even) & $\approx a/(4\sqrt{n})$ &$\approx 4a^{-1}\sqrt{n}\,\epsilon^{-1}\ln\delta^{-1}$\
$\Omega_{W_n}^G$ ($n\gg1$, $n$ odd) & $\approx b/\sqrt{n}$ &$\approx b^{-1}\sqrt{n}\,\epsilon^{-1}\ln\delta^{-1}$\
$\Omega_{W_n}^G$ ($n\gg1$, $n$ even) & $\approx a/\sqrt{n}$ &$\approx a^{-1}\sqrt{n}\,\epsilon^{-1}\ln\delta^{-1}$\
$\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral1}}}}$ ($n=3$) & $0.305$ &$3.28\,\epsilon^{-1}\ln\delta^{-1}$\
$\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral2}}}}$ ($n=3$) & $5/8$ &$(8/5)\,\epsilon^{-1}\ln\delta^{-1}$\
$\Omega_{{\mathbf{k}}}$ and $\Omega_{{\mathbf{k}}}^\phi$ (${\mathbf{k}}=(2,1)$) & $1/3$ &$3\,\epsilon^{-1}\ln\delta^{-1}$\
$\Omega_{{\mathbf{k}}}$ and $\Omega_{{\mathbf{k}}}^\phi$ (${\mathbf{k}}\neq(2,1)$) & $1/(n-1)$ &$(n-1)\,\epsilon^{-1}\ln\delta^{-1}$\
$\Omega_{{\mathrm{AS}}_n}$ & $1/(n-1)$ &$(n-1)\,\epsilon^{-1}\ln\delta^{-1}$\
optimal strategy for ${\mbox{$ | {\mathrm{AS}}_n \rangle $}}$ & $n/(n+1)$ &$(n+1)n^{-1}\,\epsilon^{-1}\ln\delta^{-1}$\
The resulting test projector reads $$\begin{aligned}
\label{eq:Pij}
P_{i,j}=& \sum_{s=0}^g {\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{ss}) \otimes \big[({\mbox{$ | s \rangle $}}{\mbox{$ \langle s | $}})^{\otimes2}\big]_{i,j}\nonumber\\
&+\sum_{s<t} {\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{st}) \otimes \big[(T_{s,t}^+)^{\otimes 2}+(T_{s,t}^-)^{\otimes 2}\big]_{i,j},\end{aligned}$$ where $$\begin{aligned}
{\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{ss})&=\sum_{u\in B({\mathbf{k}}_{ss}) }{\mbox{$ | u \rangle $}}{\mbox{$ \langle u | $}},\label{eq:barZijss}\\
{\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{st})&=\sum_{u\in B({\mathbf{k}}_{st}) }{\mbox{$ | u \rangle $}}{\mbox{$ \langle u | $}}.\label{eq:barZijst}\end{aligned}$$ Here the subscripts $i,j$ and the overbar indicate that the operators ${\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{ss})$ and ${\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{st})$ act on the $n-2$ parties other than $i$ and $j$. By contrast, the subscripts $i,j$ in $\big[({\mbox{$ | s \rangle $}}{\mbox{$ \langle s | $}})^{\otimes2}\big]_{i,j}$ and $\big[(T_{s,t}^+)^{\otimes 2}+(T_{s,t}^-)^{\otimes 2}\big]_{i,j}$ indicate that these operators act on parties $i$ and $j$. We perform each test with probability $1/\binom{n}{2}$, and the resulting verification operator reads $$\label{eq:OmegaD}
\Omega_{{\mathbf{k}}}=\binom{n}{2}^{-1}\sum_{i<j}P_{i,j}.$$ The efficiency of this protocol is guaranteed by the following theorem, which is proved in Appendix \[app:TheoDickeProof\]. The result is summarized in Table \[tab:Protocol\] and illustrated in Fig. \[fig:nu\]. Here it is worth pointing out that the spectral gap of $\Omega_{{\mathbf{k}}}$ is closely related to the spectrum of the transposition graph [@Chase1973; @Caputo2010], which is of interest to some researchers beyond quantum information science.
![\[fig:nu\] Spectral gaps $\nu(\Omega)$ of verification strategies for the $n$-qubit $W$ state, $n$-partite Dicke states, phased Dicke states, and antisymmetric basis state. The values of $\nu\big(\Omega_{W_n}\big)$ and $\nu\big(\Omega^G_{W_n}\big)$ oscillate with the parity of $n$. Strategies $\Omega_{\mathbf{k}}$, $\Omega_{\mathbf{k}}^\phi$, and $\Omega_{{\mathrm{AS}}_n}$ have the same spectral gap when $n\geq4$ \[cf. [Eqs. (\[eq:nuOmegaD\]), (\[eq:nuOmegaD’\]), and (\[eq:nuOmegaAS\])]{}\]. ](fig2){width="8.6cm"}
\[thm:Dicke\] The spectral gap of $\Omega_{\mathbf{k}}$ reads $$\begin{aligned}
\nu(\Omega_{{\mathbf{k}}})&=\begin{cases}
1/2 & {\mathbf{k}}=(1,1,1), \\
1/3 & {\mathbf{k}}=(2,1), \\
1/(n-1) \ \ & n\ge4.
\end{cases}\label{eq:nuOmegaD}\end{aligned}$$ To verify the Dicke state ${\mbox{$ | D({\mathbf{k}}) \rangle $}}$ within infidelity $\epsilon$ and significance level $\delta$, the number of tests required reads $$\begin{aligned}
\label{eq:NumberTestDicke}
N(\epsilon,\delta,\Omega_{{\mathbf{k}}})&\approx\begin{cases}
2 \epsilon^{-1} \ln\delta^{-1} & {\mathbf{k}}=(1,1,1), \\
3 \epsilon^{-1} \ln\delta^{-1} & {\mathbf{k}}=(2,1), \\
(n-1) \epsilon^{-1}\ln\delta^{-1} \ & n\ge4.
\end{cases}\end{aligned}$$
By construction $\Omega_{{\mathbf{k}}}$ is invariant under any permutation of the $n$ parties; actually we have $\Omega_{{\mathbf{k}}}=P_{1,2}^S$, where $S$ is the group of all permutations of the $n$ parties. Therefore, $\Omega_{{\mathbf{k}}}^S=\Omega_{{\mathbf{k}}}$ and $\Omega_{\mathbf{k}}^G=\Omega_{\mathbf{k}}^H$, where $G=HS$, and $H$ is the group of all diagonal unitary operators of the form $U^{\otimes n}$. As shown in Appendix \[app:ComputeOmegaD\^G\], the spectral gap of $\Omega_{{\mathbf{k}}}^G$ reads $$\begin{aligned}
\nu(\Omega_{{\mathbf{k}}}^G)=
\frac{1}{n-1},\quad n\ge3.\label{eq:nuOmegaD2}\end{aligned}$$ So we have $\nu(\Omega_{{\mathbf{k}}}^G)=\nu(\Omega_{{\mathbf{k}}})$ whenever $n\geq 4$, although $\Omega_{{\mathbf{k}}}^G\neq \Omega_{{\mathbf{k}}}$ in general; the symmetrization procedure discussed in Sec. \[sec:sym\] does not help in this case.
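Before moving on, the claims above can be spot-checked numerically. The following sketch (our own transcription and encoding conventions, with party 1 as the most significant bit; the helper names are hypothetical) assembles $\Omega_{\mathbf{k}}$ for qubits and reproduces $\nu=1/(n-1)=1/3$ for $n=4$, ${\mathbf{k}}=(2,2)$:

```python
import itertools
import numpy as np

def dicke(n, k1):
    """Qubit Dicke state |D(k)>, k = (n - k1, k1): equal superposition of
    all n-bit strings of Hamming weight k1."""
    v = np.zeros(2 ** n)
    for bits in itertools.product((0, 1), repeat=n):
        if sum(bits) == k1:
            v[int("".join(map(str, bits)), 2)] = 1.0
    return v / np.linalg.norm(v)

def pair_test(n, k1, i, j):
    """Test projector P_{i,j}: the Z outcome u on the remaining parties
    selects one of the three cases described in the text."""
    e0, e1 = np.eye(2)
    Tp = 0.5 * np.outer(e0 + e1, e0 + e1)        # T_{0,1}^+
    Tm = 0.5 * np.outer(e0 - e1, e0 - e1)        # T_{0,1}^-
    cross = np.kron(Tp, Tp) + np.kron(Tm, Tm)    # case u in B(k_{01})
    others = [q for q in range(n) if q not in (i, j)]
    basis = list(itertools.product((0, 1), repeat=n))
    P = np.zeros((2 ** n, 2 ** n))
    for a_idx, a in enumerate(basis):
        for b_idx, b in enumerate(basis):
            if any(a[q] != b[q] for q in others):   # Z part is diagonal
                continue
            w = sum(a[q] for q in others)           # weight of the outcome u
            if w == k1 and n - k1 >= 2:             # u in B(k_{00}): keep |00>
                P[a_idx, b_idx] = float(a[i] == a[j] == b[i] == b[j] == 0)
            elif w == k1 - 2 and k1 >= 2:           # u in B(k_{11}): keep |11>
                P[a_idx, b_idx] = float(a[i] == a[j] == b[i] == b[j] == 1)
            elif w == k1 - 1:                       # u in B(k_{01})
                P[a_idx, b_idx] = cross[2 * a[i] + a[j], 2 * b[i] + b[j]]
    return P

def omega(n, k1):
    """Verification operator: uniform mixture of the pair tests."""
    tests = [pair_test(n, k1, i, j) for i in range(n) for j in range(i + 1, n)]
    return np.mean(tests, axis=0)

eigs = np.linalg.eigvalsh(omega(4, 2))   # n = 4, k = (2, 2)
```

The largest eigenvalue is $1$ (attained by the Dicke state), and the gap to the second largest eigenvalue matches the theorem.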
Efficient verification of $W$ states
====================================
In this section we present two more efficient protocols for verifying the $n$-qubit $W$ state defined in [Eq. ]{} [@Wei03; @Haff05]. These protocols reduce the number of tests required quadratically with respect to the number of qubits, from $O(n)$ to $O(\sqrt{n})$.
Efficient protocol based on two distinct tests
----------------------------------------------
The first protocol consists of only two distinct tests. In the first test, called the standard test, all parties perform the Pauli-$Z$ measurements, and the test is passed if only one of the $n$ outcomes is 1. The test projector reads $$\label{eq:P1}
P_1=\sum_{u\in B_n^1}|u\>\<u|,$$ where $B_n^1$ is the set of strings in $\{0,1\}^n$ with Hamming weight 1. In the other test, each of the first $n-1$ parties performs $X$ measurements; denote the outcome by $0(1)$ if the measurement result is $+1$ $(-1)$. The $n-1$ outcomes are labeled by a string $x\in \{0,1\}^{n-1}$ of $n-1$ bits, which corresponds to the product state $$\label{eq:alphax}
|\alpha_x\>=\frac{1}{\sqrt{2^{n-1}}}\sum_{y\in\{0,1\}^{n-1}} (-1)^{x\cdot y} |y\>\,.$$ The reduced state of party $n$ reads $$|\beta_{x}\>=\frac{|1\>+(n-1-2|x|)|0\>}{\sqrt{1+(n-1-2|x|)^2}},$$ where $|x|$ denotes the Hamming weight of $x$. Then party $n$ performs the two-outcome projective measurement $\big\{|\beta_{x}\>\<\beta_{x}|,I-|\beta_{x}\>\<\beta_{x}|\big\}$, and the test is passed if the first outcome (corresponding to $|\beta_{x}\>\<\beta_{x}|$) is obtained. The resulting test projector reads $$\label{eq:P2}
P_2=\sum_{x\in\{0,1\}^{n-1}} |\alpha_x\>\<\alpha_x|\otimes|\beta_{x}\>\<\beta_{x}|\,.$$
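The two test projectors can be checked numerically. The sketch below (our own transcription, with the last qubit playing the role of party $n$ and hypothetical helper names) confirms that $P_2$ is a projector and that $|W_n\>$ passes both tests with certainty:

```python
import itertools
import math
import numpy as np

def w_state(n):
    """|W_n>: equal superposition of the weight-1 computational basis states."""
    v = np.zeros(2 ** n)
    for i in range(n):
        v[1 << i] = 1.0
    return v / math.sqrt(n)

def P1(n):
    """Standard test: all parties measure Z, pass iff exactly one outcome is 1."""
    return np.diag([float(bin(x).count("1") == 1) for x in range(2 ** n)])

def P2(n):
    """Adaptive test: first n-1 parties measure X, party n measures |beta_x>."""
    P = np.zeros((2 ** n, 2 ** n))
    for x in itertools.product((0, 1), repeat=n - 1):
        alpha = np.ones(1)
        for xb in x:   # |alpha_x> = tensor product of X eigenstates
            alpha = np.kron(alpha, np.array([1.0, (-1.0) ** xb]) / math.sqrt(2))
        c = n - 1 - 2 * sum(x)                         # n - 1 - 2|x|
        beta = np.array([c, 1.0]) / math.sqrt(1 + c * c)   # |beta_x>
        v = np.kron(alpha, beta)
        P += np.outer(v, v)
    return P

w4, Q1, Q2 = w_state(4), P1(4), P2(4)
```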
If we perform the two tests $P_1$ and $P_2$ with probability $p$ and $1-p$, respectively, then the verification operator reads $$\label{eq:OmegaWn}
\Omega_{W_{n}}=pP_1+(1-p)P_2.$$ According to [Lemma \[lem:2TestStrategy\]]{}, the spectral gap $\nu(\Omega_{W_{n}})$ is maximized when $p=1/2$, in which case $\Omega_{W_n}=(P_1+P_2)/2$ and $\nu(\Omega_{W_n})=(1-\sqrt{q})/2$, where $$\begin{aligned}
\label{eq:q}
q=\|\bar{P}_1\bar{P}_2 \bar{P}_1\|=\begin{cases}
\frac{2}{5} & n=3,\\
1-h(n-3) & n\geq 4,
\end{cases}\end{aligned}$$ with $$\begin{aligned}
\label{eq:hn}
h(n):=\frac{1}{2^{n}}\sum_{j=0}^{n}\frac{\binom{n}{j}}{1+(n-2j)^2} \,.\end{aligned}$$ Here the second equality in [Eq. ]{} is derived in Appendix \[app:derive q\]. Therefore, we have $\nu(\Omega_{W_{\!n}}\!)=(1/2)-(1/\sqrt{10})$ for $n=3$, and $$\label{eq:nuW}
\nu(\Omega_{W_n})=\frac{1-\sqrt{1-h(n-3)}}{2}> \frac{h(n-3)}{4} \quad {\rm for}\ \, n\geq 4.$$ The dependence of $\nu(\Omega_{W_{\!n}})$ on $n$ is illustrated in Fig. \[fig:nu\].
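The quantities above are elementary to evaluate; here is a small arithmetic sketch (our assumed transcription of Eqs. (\[eq:q\])–(\[eq:nuW\]); the function names are ours):

```python
import math
from fractions import Fraction

def h(n):
    """h(n) of Eq. (eq:hn), evaluated exactly with rationals."""
    s = sum(Fraction(math.comb(n, j), 1 + (n - 2 * j) ** 2) for j in range(n + 1))
    return float(s / 2 ** n)

def nu_W(n):
    """Spectral gap nu(Omega_{W_n}); n = 3 uses the special value q = 2/5."""
    q = Fraction(2, 5) if n == 3 else 1 - h(n - 3)
    return (1 - math.sqrt(q)) / 2
```

In particular, $\nu(\Omega_{W_3})=1/2-1/\sqrt{10}$ and the lower bound $h(n-3)/4$ holds strictly for $n\geq4$.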
The function $\sqrt{n}\,h(n)$ has the following properties, as proved in Appendix \[app:h(n)property\].
\[pro:sqrt[n]{}h(n)Monot\] $\sqrt{n}\,h(n)$ is strictly monotonically increasing in $n$ for odd $n$ and even $n$, respectively, assuming $n\geq0$.
\[pro:LimSqrt(n)h(n)\] When $n\rightarrow+\infty$, $\sqrt{n}\,h(n)$ converges for odd $n$ and even $n$, respectively, $$\begin{aligned}
&\lim_{n\rightarrow+\infty} \sqrt{2n+1}\,h(2n+1)
= \sqrt{\frac{\pi}{2}} \tanh\Bigl(\frac{\pi}{2}\Bigr)\approx 1.15, \label{eq:LimSqrt(n)h(n)Odd}\\
&\lim_{n\rightarrow+\infty} \sqrt{2n}\,h(2n)
= \sqrt{\frac{\pi}{2}} \coth\Bigl(\frac{\pi}{2}\Bigr)\approx 1.37. \label{eq:LimSqrt(n)h(n)Even}\end{aligned}$$ Here we assume that $n$ is an integer when taking the limits.
The above two propositions imply the following inequalities: $$\begin{aligned}
\frac{1}{2}&\leq \sqrt{n}h(n)\leq \sqrt{\frac{\pi}{2}} \tanh\Bigl(\frac{\pi}{2}\Bigr),\quad n\geq 1\mbox{ is odd}, \label{eq:hnbound1}\\
\frac{3\sqrt{2}}{5}&\leq \sqrt{n}h(n)\leq \sqrt{\frac{\pi}{2}} \coth\Bigl(\frac{\pi}{2}\Bigr),\quad n\geq 2\mbox{ is even}.\label{eq:hnbound2}\end{aligned}$$ By virtue of these results, we can derive lower and upper bounds for the spectral gap, namely, $$\begin{aligned}
\frac{1}{4\sqrt{n}}<\nu(\Omega_{W_n})<
\begin{cases}
3/(8\sqrt{n}) & n\geq 3, \ n\ne 5,\\
1/(2\sqrt{n}) & n=5;
\end{cases}
\label{eq:bound-nuW}\end{aligned}$$ these bounds can be improved when the parity of $n$ is given; see Appendix \[app:ProofEq:bound-nuW\] for more details. As a consequence of [Eq. ]{}, the number of tests required to verify ${\mbox{$ | W_n \rangle $}}$ within infidelity $\epsilon$ and significance level $\delta$ satisfies $$N(\epsilon,\delta,\Omega_{W_n})\leq \biggl\lceil\frac{4\sqrt{n}}{\epsilon} \ln \delta^{-1}\biggr\rceil.$$ In addition, $\nu(\Omega_{W_n})$ admits the following limits, $$\begin{gathered}
\!\!\lim_{n \to +\infty} \sqrt{2n+1}\nu(\Omega_{W_{2n+1}})
\!=\! \frac{\sqrt{2\pi}}{8} \coth\Bigl(\frac{\pi}{2}\Bigr)
\!\approx\! 0.342,\label{eq:Lim sqrt(2n+1)nu}\\
\lim_{n \to +\infty} \sqrt{2n}\nu(\Omega_{W_{2n}})
= \frac{\sqrt{2\pi}}{8} \tanh\Bigl(\frac{\pi}{2}\Bigr)
\approx 0.287,\label{eq:Lim sqrt(2n)nu}\end{gathered}$$ as proved in Appendix \[app:Proof Lim sqrt(n)nu\]. When $n\gg1$, we have $$\begin{aligned}
&\nu(\Omega_{W_n})\approx\begin{cases}
\frac{\sqrt{2\pi}}{8\sqrt{n} } \coth\bigl(\frac{\pi}{2}\bigr)\approx \frac{0.342}{\sqrt{n}} & n \mbox{ is odd},\\[0.8ex]
\frac{\sqrt{2\pi}}{8\sqrt{n} } \tanh\bigl(\frac{\pi}{2}\bigr)\approx \frac{0.287}{\sqrt{n} }\ \ & n \mbox{ is even};
\end{cases}\label{eq:lim-nuW}\\
&N(\epsilon,\delta,\Omega_{W_n})\approx\begin{cases}
2.93\sqrt{n} \epsilon^{-1}\ln \delta^{-1} & n \mbox{ is odd},\\
3.48\sqrt{n} \epsilon^{-1}\ln \delta^{-1} \ & n \mbox{ is even}.
\end{cases}\end{aligned}$$ These results are summarized in Table \[tab:Protocol\] and illustrated in Fig. \[fig:nu\]. Compared with the protocol in Ref. [@Liu19], which achieves $\nu=1/(n-1)$ with $\binom{n}{2}$ distinct tests when $n\geq 4$ (cf. Sec. \[sec:DickeVerify\]), the current protocol achieves a much better scaling behavior in $n$ and a higher efficiency whenever $n\geq15$, while requiring only two distinct tests.
\[sec:symW\] Higher efficiency from symmetrization
--------------------------------------------------
The efficiency of the above protocol can be improved by applying the symmetrization procedure described in Sec. \[sec:sym\]. Let $G$ be the group generated by all permutations of the $n$ qubits and diagonal unitary operators of the form $U^{\otimes n}$. Consider the symmetrized verification operator $$\Omega_{W_{n}}^G=pP_1^G+(1-p)P_2^G=pP_1+(1-p)P_2^G.$$ Note that $P_1^G=P_1$ is a projector, but $P_2^G$ is not a projector. So [Lemma \[lem:2TestStrategy\]]{} is not applicable, and here the optimal choice of $p$ is not $1/2$ in contrast to [Eq. ]{}. Denote by ${\mathcal{H}}_1$ the support of $P_1$ and ${\mathcal{H}}_2$ the orthogonal complement of ${\mathcal{H}}_1$. Then ${\mathcal{H}}_1$ and ${\mathcal{H}}_2$ are invariant subspaces of $G$. In addition, $G$ has two inequivalent irreducible components in ${\mathcal{H}}_1$: one component is spanned by $|W_n\>$ and is one dimensional; the other component consists of all vectors in ${\mathcal{H}}_1$ that are orthogonal to $|W_n\>$. Each irreducible component in ${\mathcal{H}}_2$ is not equivalent to any irreducible component in ${\mathcal{H}}_1$. Consequently, $P_2^G$ is block diagonal with respect to ${\mathcal{H}}_1$ and ${\mathcal{H}}_2$; in addition, $P_1 \bar{P}_2^G P_1$ is proportional to a projector. Let $R$ be the subgroup of $G$ generated by ${\operatorname{diag}}(1,{\mathrm{e}}^{2\pi{\mathrm{i}}/(n+1)})^{\otimes n}$ and a cyclic permutation of order $n$; note that $R$ has order $n(n+1)$. By virtue of Proposition \[pro:GapSym3\], it is not difficult to verify that $\Omega_{W_{n}}^R=\Omega_{W_{n}}^G$, given that $P_1$ is invariant under all permutations, while $P_2$ is invariant under permutations of the first $n-1$ parties. Therefore, the strategy $\Omega_{W_{n}}^G$ can be realized using $n^2+n+1$ distinct projective tests.
As derived in Appendix \[app:Wsymmetrization\], we have $$\begin{aligned}
\label{eq:TraceP1P2}
{\operatorname{tr}}(P_1 P_2^G)={\operatorname{tr}}(P_1 P_2)=n-1-(n-2)h(n-1),\end{aligned}$$ where $h(n)$ is defined in [Eq. ]{}. It follows that $$\|P_1 \bar{P}_2^G P_1\|=\frac{(n-2)[1-h(n-1)]}{n-1}\leq 1-h(n-1).$$ Let $$\label{eq:OptProbW}
p=\frac{1-\|P_1 \bar{P}_2^G P_1\|}{2-\|P_1 \bar{P}_2^G P_1\|}=\frac{1+(n-2)h(n-1)}{n+(n-2)h(n-1)};$$ then we have $$\begin{aligned}
\lambda_2(\Omega_{W_{n}}^G)&=1-p=\frac{n-1}{n+(n-2)h(n-1)}, \label{eq:lambda(Omega^G)}\\
\nu(\Omega_{W_{n}}^G)&=p=\frac{1+(n-2)h(n-1)}{n+(n-2)h(n-1)}>\frac{1}{\sqrt{n}+1},\label{eq:nu(Omega^G)Bound}\end{aligned}$$ as shown in Appendix \[app:Wsymmetrization\]. In addition, by virtue of Proposition \[pro:LimSqrt(n)h(n)\] as well as Eqs. (\[eq:hnbound1\]) and , we can deduce the following limits, $$\begin{gathered}
\!\!\lim_{n \to +\infty} \sqrt{2n+1}\,\nu \big(\Omega_{W_{2n+1}\!}^G\big)
= \sqrt{\frac{\pi}{2}} \coth\Bigl(\frac{\pi}{2}\Bigr)
\approx 1.37,\\
\lim_{n \to +\infty} \sqrt{2n}\, \nu \bigl(\Omega_{W_{2n}}^G\bigr)
= \sqrt{\frac{\pi}{2}} \tanh\Bigl(\frac{\pi}{2}\Bigr)
\approx 1.15.\end{gathered}$$ Numerical calculation shows that a good approximation of $\nu(\Omega_{W_n}^G)$ can be expressed as follows, $$\begin{aligned}
&\nu(\Omega_{W_n}^G)\approx\begin{cases}
\frac{1.37}{\sqrt{n}+1.37} & n \mbox{ is odd},\\
\frac{1.15}{\sqrt{n}+1.11 }\ \ & n \mbox{ is even}.
\end{cases}\end{aligned}$$ When $n\gg 1$, we have $\nu(\Omega_{W_{n}}^G)\approx 4\nu(\Omega_{W_n})$, so the symmetrization procedure can improve the efficiency by about a factor of four.
A comparison of the strategies $\Omega_{{\mathbf{k}}}$, $\Omega_{W_{n}}$, and $\Omega_{W_{n}}^G$ indicates that $\Omega_{W_{n}}^G$ has the largest spectral gap and thus the highest efficiency for all $n\geq3$ \[cf. [Eqs. (\[eq:nuOmegaD\]), (\[eq:bound-nuW\]), and (\[eq:nu(Omega\^G)Bound\])]{}\], as illustrated in Fig. \[fig:nu\]. The strategy $\Omega_{W_{n}}$ requires only two distinct tests, which is much fewer than the number $O(n^2)$ of distinct tests required by the other two strategies. On the other hand, the strategies $\Omega_{W_{n}}$ and $\Omega_{W_{n}}^G$ only apply to $W$ states, while the strategy $\Omega_{{\mathbf{k}}}$ applies to all qudit (including qubit) Dicke states.
Nearly Optimal Verification of the three-qubit $W$ state
========================================================
In this section we construct a nearly optimal protocol for verifying the three-qubit $W$ state $|W_3\>$ [@Dur00] shared by Alice, Bob, and Charlie. Before presenting this protocol, it is instructive to set an upper bound for the spectral gap of any verification operator based on LOCC.
According to Ref. [@Wang19], for a normalized two-qubit entangled pure state $s_0|00\>+s_1|11\>$ with Schmidt coefficients $s_0, s_1$ ($0<s_0,s_1<1$ and $s_0^2+s_1^2=1$), the maximum spectral gap of any verification operator based on LOCC or separable measurements is $1/(1+s_0s_1)$. With respect to the partition between Alice and the other two parties, $|W_3\>$ can be regarded as a two-qubit state in a proper subspace and has two Schmidt coefficients equal to $\sqrt{1/3}$ and $\sqrt{2/3}$, respectively. Therefore, the spectral gap of any verification operator based on LOCC or separable measurements is upper bounded by $$\frac{1}{1+\sqrt{2/9}}=\frac{9-3\sqrt{2}}{7}\approx 0.6796.$$ If each test of the verification strategy can be realized by LOCC with one-way communication, then the upper bound can be reduced to $2/3$ according to Refs. [@Wang19; @Yu19].
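The Schmidt data and the resulting bound can be reproduced in a few lines (our own sketch, using the bipartition between Alice and the pair Bob–Charlie, with Alice as the most significant qubit):

```python
import numpy as np

# |W_3> = (|001> + |010> + |100>) / sqrt(3)
w3 = np.zeros(8)
w3[[0b001, 0b010, 0b100]] = 1 / np.sqrt(3)

# Reduced state of Alice across the A|BC cut
rho_A = w3.reshape(2, 4) @ w3.reshape(2, 4).T
schmidt_sq = np.sort(np.linalg.eigvalsh(rho_A))   # squared Schmidt coefficients

# LOCC/separable bound 1 / (1 + s_0 s_1)
bound = 1 / (1 + np.sqrt(np.prod(schmidt_sq)))
```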
Nearly Optimal Verification Protocol
------------------------------------
To start with, we construct an efficient protocol using three distinct tests. In the first test, all three parties perform $Z$ measurements, and the test is passed if only one of the three outcomes is $1$. The test projector reads $$\label{eq:W3P1}
P_1 = |001\>\<001|+ |010\>\<010|+ |100\>\<100|,$$ which is a special case of the projector defined in [Eq. ]{}. The other two tests are based on adaptive local projective measurements. The second test $P_2$ is defined in [Eq. ]{} with $n=3$ and has the form $$\begin{aligned}
P_2=& \; X^+X^+\otimes|\gamma_+\>\<\gamma_+| + X^-X^-\otimes|\gamma_-\>\<\gamma_-| \nonumber\\
& \, + (X^+X^- + X^-X^+) \otimes |1\>\<1| ,\end{aligned}$$ where $|\gamma_\pm\>=\frac{1}{\sqrt{5}}(2|0\>\pm|1\>)$, $X^\pm=|\pm\>\<\pm|$, and $|\pm\>=\frac{1}{\sqrt{2}}(|0\>\pm|1\>)$ are eigenstates of the operator $X$. For the third test, Alice performs $Z$ measurement and sends her outcome to Bob and Charlie. If the outcome of Alice is $1$, so that the normalized reduced state of Bob and Charlie is ${\mbox{$ | 00 \rangle $}}$ (if $|W_3\>$ is measured), then both Bob and Charlie perform $Z$ measurement, and the test is passed if their outcomes are both $0$. If the outcome of Alice is $0$, so that the normalized reduced state reads $\frac{1}{\sqrt{2}}({\mbox{$ | 01 \rangle $}}+{\mbox{$ | 10 \rangle $}})$ (if $|W_3\>$ is measured), then both Bob and Charlie perform $X$ measurement, and the test is passed if their outcomes coincide. The resulting test projector reads $$P_3 = |100\>\<100| + |0\>\<0| \otimes (X^+X^+ + X^-X^-) .$$ Note that the three test projectors $P_1$, $P_2$, and $P_3$ have ranks 3, 4, and 3, respectively.
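The three projectors are small enough to write out explicitly. The sketch below (our own transcription, with qubit order Alice, Bob, Charlie) confirms the ranks and that $|W_3\>$ passes each test with certainty:

```python
import numpy as np

e0, e1 = np.eye(2)
plus, minus = (e0 + e1) / np.sqrt(2), (e0 - e1) / np.sqrt(2)
Xp, Xm = np.outer(plus, plus), np.outer(minus, minus)   # X^+, X^-
gp = (2 * e0 + e1) / np.sqrt(5)                         # |gamma_+>
gm = (2 * e0 - e1) / np.sqrt(5)                         # |gamma_->

def proj(*kets):
    """Projector onto the product state of the given single-qubit kets."""
    v = np.ones(1)
    for k in kets:
        v = np.kron(v, k)
    return np.outer(v, v)

P1 = proj(e0, e0, e1) + proj(e0, e1, e0) + proj(e1, e0, e0)
P2 = (np.kron(np.kron(Xp, Xp), np.outer(gp, gp))
      + np.kron(np.kron(Xm, Xm), np.outer(gm, gm))
      + np.kron(np.kron(Xp, Xm) + np.kron(Xm, Xp), np.outer(e1, e1)))
P3 = proj(e1, e0, e0) + np.kron(np.outer(e0, e0),
                                np.kron(Xp, Xp) + np.kron(Xm, Xm))

w3 = np.zeros(8)
w3[[1, 2, 4]] = 1 / np.sqrt(3)
```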
If we perform the three tests $P_1$, $P_2$, and $P_3$ with probabilities $p_1$, $p_2$, and $1-p_1-p_2$, respectively, then the verification operator is given by $$\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral1}}}}=p_1P_1+p_2P_2+(1-p_1-p_2)P_3.$$ Note that this strategy can be realized using local projective measurements with one-way communication. Numerical calculation shows that $\lambda_2(\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral1}}}})\geq 0.695$, and the lower bound is saturated when $p_1\approx0.246$ and $p_2\approx0.444$, in which case we have $\nu(\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral1}}}})\approx 0.305$.
The efficiency of the above protocol can be improved by applying the symmetrization procedure described in Sec. \[sec:sym\]. Let $G$ be the group generated by the six permutations of the three qubits and diagonal unitary operators of the form $U^{\otimes3}$. Then $G$ has six irreducible components, all of which are inequivalent. Let $$\begin{aligned}
&|\tau_0\>:=|000\>, \qquad |\tau_1\>:=|111\>, \nonumber\\
&|\tau_2\> :=(|001\>-|010\>)/\sqrt{2}, \nonumber\\
&|\tau_3\> :=(|001\>+|010\>-2|100\>)/\sqrt{6}, \nonumber\\
&|\tau_4\> :=(|011\>+|101\>+|110\>)/\sqrt{3}, \\
&|\tau_5\>:=(|011\>-|101\>)/\sqrt{2}, \nonumber\\
&|\tau_6\>:=(|011\>+|101\>-2|110\>)/\sqrt{6}. \nonumber\end{aligned}$$ Then four one-dimensional irreducible components of $G$ are spanned by $|W_3\>$, $|\tau_0\>$, $|\tau_1\>$, $|\tau_4\>$, respectively. One two-dimensional component is spanned by $|\tau_2\>$ and $|\tau_3\>$, and the other two-dimensional component is spanned by $|\tau_5\>$ and $|\tau_6\>$. Given any verification operator $\Omega$ for $|W_3\>$, then $\Omega^G$ has the form $$\begin{aligned}
\label{eq:SymmOmegaPPT}
\Omega^G
= &\,|W_3\>\<W_3| +\mu_0|\tau_0\>\<\tau_0| +\mu_1|\tau_1\>\<\tau_1| +\mu_4|\tau_4\>\<\tau_4| \nonumber\\
& +\mu_2\left(|\tau_2\>\<\tau_2| +|\tau_3\>\<\tau_3|\right)+\mu_3\left(|\tau_5\>\<\tau_5| +|\tau_6\>\<\tau_6|\right)\end{aligned}$$ according to [Eq. ]{}, where $0\leq \mu_0,\mu_1,\mu_2,\mu_3,\mu_4\leq1$. On the other hand, any verification operator of this form is $G$-invariant.
Let $K$ be the subgroup of $G$ that is generated by six permutations and $U_{\pi/2}^{\otimes 3}$ with $U_{\pi/2}={\operatorname{diag}}(1,{\mathrm{i}})$; note that $K$ has order 24. Then $K$ has the same number of irreducible components as $G$, so $\Omega^K=\Omega^G$ for any verification operator of $|W_3\>$ according to Proposition \[pro:GapSym3\]. In addition, if $\Omega$ can be realized by $m$ distinct projective tests, then $\Omega^K$ can be realized by at most $24m$ distinct projective tests.
Consider the verification operator $$\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral2}}}}:=\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral1}}}}^K=p_1P_1+p_2P_2^K+(1-p_1-p_2)P_3^K;$$ note that $P_1^K=P_1$. Each test operator $P_j^K$ for $j=1,2,3$ has the form in [Eq. ]{} with at most five distinct eigenvalues. The parameter vectors $\mu=(\mu_0,\mu_1,\mu_2,\mu_3,\mu_4)$ associated with the three test operators are respectively given by $$\begin{aligned}
\mu&=(0,0,1,0,0) \quad {\rm for}\ \, P_1^K, \\
\mu&=\frac{1}{15}(6,9,3,8,8) \quad {\rm for}\ \, P_2^K,\\
\mu&=\frac{1}{6}(3,0,3,1,1) \quad {\rm for}\ \, P_3^K.
\end{aligned}$$ Therefore, the second largest eigenvalue of $\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral2}}}}$ reads $$\begin{aligned}
\lambda_2(\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral2}}}})
=&\max
\bigg\{ \frac{5-5p_1-p_2}{10}, \frac{5-5p_1+11p_2}{30},\nonumber\\
&\ \frac{5+5p_1-3p_2}{10}, \frac{3}{5}p_2 \bigg\}
\geq\frac{3}{8} \,,\end{aligned}$$ where $p_1,p_2\geq0$ and $p_1+p_2\leq1$. The bound is saturated iff $p_1=1/8$ and $p_2=5/8$, in which case we have $$\label{eq:OurStraforW3}
\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral2}}}}=|W_3\>\<W_3|+\frac{3}{8}\big(\openone-|W_3\>\<W_3|\big),$$ and $$\label{eq:nu5/8}
\nu(\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral2}}}}) =\frac{5}{8}\,, \qquad
N(\epsilon,\delta,\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral2}}}})\approx\frac{8}{5\epsilon} \ln \delta^{-1}.$$ Compared with the protocol in Ref. [@Liu19], which achieves $\nu=1/3$ (cf. Sec. \[sec:sym\]), this protocol has a much higher efficiency. In addition, the spectral gap is only 8.04% smaller than the upper bound $\nu(\Omega)\leq(9-3\sqrt{2})/7$ for strategies based on LOCC or separable measurements. Accordingly, the number of tests required by the strategy $\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral2}}}}$ is only 8.74% more than that required by the optimal strategy based on separable measurements.
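The homogeneous form of Eq. (\[eq:OurStraforW3\]) can be verified end to end by twirling $P_2$ and $P_3$ over the 24 elements of $K$ and mixing with $p_1=1/8$, $p_2=5/8$ (our own implementation sketch; helper names and bit conventions are ours, with Alice as the most significant qubit):

```python
import itertools
import numpy as np

e0, e1 = np.eye(2)
plus, minus = (e0 + e1) / np.sqrt(2), (e0 - e1) / np.sqrt(2)
Xp, Xm = np.outer(plus, plus), np.outer(minus, minus)
gp, gm = (2 * e0 + e1) / np.sqrt(5), (2 * e0 - e1) / np.sqrt(5)

def proj(*kets):
    v = np.ones(1)
    for k in kets:
        v = np.kron(v, k)
    return np.outer(v, v)

P1 = proj(e0, e0, e1) + proj(e0, e1, e0) + proj(e1, e0, e0)
P2 = (np.kron(np.kron(Xp, Xp), np.outer(gp, gp))
      + np.kron(np.kron(Xm, Xm), np.outer(gm, gm))
      + np.kron(np.kron(Xp, Xm) + np.kron(Xm, Xp), np.outer(e1, e1)))
P3 = proj(e1, e0, e0) + np.kron(np.outer(e0, e0),
                                np.kron(Xp, Xp) + np.kron(Xm, Xm))

def perm_matrix(sigma):
    """Permutation of the three qubits (bit 0 = most significant)."""
    P = np.zeros((8, 8))
    for x in range(8):
        bits = [(x >> (2 - q)) & 1 for q in range(3)]
        y = sum(bits[sigma[q]] << (2 - q) for q in range(3))
        P[y, x] = 1.0
    return P

U = np.diag([1j ** bin(x).count("1") for x in range(8)])   # diag(1, i)^{(x)3}
K = [perm_matrix(s) @ np.linalg.matrix_power(U, k)
     for s in itertools.permutations(range(3)) for k in range(4)]

def twirl(A):
    """Group average over K."""
    return sum(g @ A @ g.conj().T for g in K) / len(K)

w3 = np.zeros(8)
w3[[1, 2, 4]] = 1 / np.sqrt(3)
Omega_II = P1 / 8 + 5 * twirl(P2) / 8 + twirl(P3) / 4
target = np.outer(w3, w3) + 3 / 8 * (np.eye(8) - np.outer(w3, w3))
```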
Additional applications
-----------------------
The strategy $\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral2}}}}$ is homogeneous and so can be applied to fidelity estimation [@ZhuEVQPSlong19]. Note that the passing probability of any state $\rho$ is related to its fidelity with the target state $|W_3\>$ as follows, ${\operatorname{tr}}(\rho\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral2}}}})=\frac{5}{8}\<W_3|\rho|W_3\>+\frac{3}{8}$, which implies that $$F=\<W_3|\rho|W_3\>=\frac{8}{5}{\operatorname{tr}}(\rho\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral2}}}})-\frac{3}{5}.$$ According to Ref. [@ZhuEVQPSlong19], the standard deviation of this estimation is given by $\Delta F\!=\!\sqrt{(1-F)(F+3/5)/N}$, where $N$ is the number of tests performed.
Besides fidelity estimation, our protocol in [Eq. ]{} is also useful for state verification in the adversarial scenario, in which case the state to be verified is prepared by a potentially malicious adversary [@ZhuEVQPSshort19; @ZhuEVQPSlong19]. If there is no restriction on the accessible measurements, the optimal strategy for verifying $|\Psi\>$ in the adversarial scenario can be chosen to be homogeneous, $$\label{eq:HomoStra}
\Omega=|\Psi\>\<\Psi|+\lambda_2(\Omega)\big(\openone-|\Psi\>\<\Psi|\big).$$ According to Refs. [@ZhuEVQPSshort19; @ZhuEVQPSlong19], in the high-precision limit $\epsilon,\delta\rightarrow 0$, the minimal number of tests required to verify $|\Psi\>$ reads (assuming $\lambda_2(\Omega)>0$), $$\label{eq:NumTestAdv}
N\approx[\lambda_2(\Omega)\epsilon\ln \lambda_2(\Omega)^{-1}]^{-1} \ln \delta^{-1}.$$ This number is minimized when $\lambda_2(\Omega)=1/{\mathrm{e}}$, which yields $N\approx {\mathrm{e}}\epsilon^{-1} \ln \delta^{-1}$. Since our verification strategy $\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral2}}}}$ for $|W_3\>$ is homogeneous with $\lambda_2(\Omega_{{\operatorname{\uppercase\expandafter{\romannumeral2}}}})=3/8$, it can be applied to the adversarial scenario directly. For high-precision state verification, the number of tests required reads $N\approx 2.7188 \epsilon^{-1} \ln\delta^{-1}$, which is only about 0.02% more than the optimal strategy. When $\epsilon,\delta$ are small but not infinitesimal (say $\epsilon,\delta \leq 0.01$), our strategy is still nearly optimal.
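The optimization over $\lambda_2(\Omega)$ is elementary arithmetic; a sketch of Eq. (\[eq:NumTestAdv\]) in the high-precision limit (the function name is ours):

```python
import math

def coeff(lam):
    """N * epsilon / ln(1/delta) in the high-precision limit, Eq. (eq:NumTestAdv)."""
    return 1 / (lam * math.log(1 / lam))
```

The coefficient is minimized at $\lambda_2=1/\mathrm{e}$, and at $\lambda_2=3/8$ it exceeds the minimum $\mathrm{e}$ by only about 0.02%.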
\[sec:PhasedDicke\]Verification of phased Dicke states
======================================================
In this section we consider the verification of phased Dicke states [@Krammer09; @Chiuri10], which have the form $$\label{eq:quditDstateGen}
{\mbox{$ | D_{\phi}({\mathbf{k}}) \rangle $}}=\frac{1}{\sqrt{m}} \sum_{u\in B({\mathbf{k}})}{\mathrm{e}}^{{\mathrm{i}}\phi(u)}|u\>,$$ where $m=|B({\mathbf{k}})|=n!/\big(\prod _{j=0}^r k_j!\big)$ and the phase $\phi(u)$ is a real-valued function of the sequence $u$.
Similar to the verification protocol for Dicke states, our protocol for $|D_{\phi}({\mathbf{k}})\>$ consists of $\binom{n}{2}$ distinct tests based on adaptive local projective measurements. Each test is associated with a pair of parties among the $n$ parties. The test $P^{\phi}_{i,j}$ associated with parties $i$ and $j$ is realized as follows, as illustrated in Fig. \[fig:measurement-III\]. All $n-2$ parties other than parties $i$ and $j$ perform the generalized Pauli-$Z$ measurements, and their outcomes are labeled by a sequence $u$ of $n-2$ symbols, which corresponds to the product state $|u\>$. The measurements of parties $i$ and $j$ depend on the outcome $u$, and we need to distinguish three cases. Recall that ${\mathbf{k}}_{st}$ and ${\mathbf{k}}_{ss}$ are defined in Eqs. (\[eq:kst2\]) and (\[eq:kss\]), respectively. Suppose $k_0,k_1,\dots,k_g\geq2$ and $k_{g+1}=k_{g+2}=\cdots=k_r=1$, where $-1\leq g\leq r$.
1. $u\in B({\mathbf{k}}_{ss})$ with $0\leq s\leq g$.\
In this case, the normalized reduced state of parties $i$ and $j$ reads $|s\>_i|s\>_j$ up to an irrelevant phase factor (if the target phased Dicke state is measured). Then the two parties both perform $Z$ measurement, and the test is passed if they both obtain outcome $s$.
2. $u\in B({\mathbf{k}}_{st})$ with $0\leq s<t\leq r$.\
In this case, the normalized reduced state of parties $i$ and $j$ reads, $$\begin{aligned}
&\frac{1}{\sqrt{2}}\bigl[|s\>_i|t\>_j+{\mathrm{e}}^{{\mathrm{i}}\theta(i,j,u)}|t\>_i|s\>_j\bigr], \\
&\theta(i,j,u):=\phi(v(j,i,u)) -\phi(v(i,j,u)).\end{aligned}$$ Here $v(i,j,u), v(j,i,u)\in B({\mathbf{k}})$ are defined as follows, $$\begin{aligned}
v_i(i,j,u)&=s, \quad v_j(i,j,u)=t, \quad v_{\overline{{i,j}}}(i,j,u)=u,\label{eq:viju}\\
v_i(j,i,u)&=t, \quad v_j(j,i,u)=s, \quad v_{\overline{{i,j}}}(j,i,u)=u, \label{eq:vjiu}\end{aligned}$$ where $v_{\overline{{i,j}}}(i,j,u)$ means the subsequence of $v(i,j,u)$ without the $i$th and $j$th components, and $v_{\overline{{i,j}}}(j,i,u)$ is defined in the same way. Note that the parameters $s$ and $t$ are determined by $u$. Then parties $i$ and $j$ perform projective measurements $\big\{\Gamma_{i,j,u}^+,\Gamma_{i,j,u}^-,I-\Gamma_{i,j,u}^+ - \Gamma_{i,j,u}^-\big\}$ and $\big\{\Gamma_{j,i,u}^+,\Gamma_{j,i,u}^-,I-\Gamma_{j,i,u}^+ - \Gamma_{j,i,u}^-\big\}$, respectively, where $$\begin{aligned}
\Gamma_{i,j,u}^+=\frac{1}{2}\big[{\mbox{$ | s \rangle $}}+{\mathrm{e}}^{{\mathrm{i}}\theta(i,j,u)/2}{\mbox{$ | t \rangle $}}\big]\!
\big[{\mbox{$ \langle s | $}}+{\mathrm{e}}^{-{\mathrm{i}}\theta(i,j,u)/2}{\mbox{$ \langle t | $}}\big], \\
\Gamma_{i,j,u}^-=\frac{1}{2}\big[{\mbox{$ | s \rangle $}}-{\mathrm{e}}^{{\mathrm{i}}\theta(i,j,u)/2}{\mbox{$ | t \rangle $}}\big]\!
\big[{\mbox{$ \langle s | $}}-{\mathrm{e}}^{-{\mathrm{i}}\theta(i,j,u)/2}{\mbox{$ \langle t | $}}\big],\end{aligned}$$ and $\Gamma_{j,i,u}^\pm$ are defined in a similar way with $\theta(i,j,u)$ replaced by $\theta(j,i,u)=-\theta(i,j,u)$. The test is passed if they both obtain the first outcome (corresponding to $\Gamma^+$) or if they both obtain the second outcome (corresponding to $\Gamma^-$).
3. Other cases.\
The state cannot be the target state ${\mbox{$ | D_{\phi}({\mathbf{k}}) \rangle $}}$, so the test is not passed.
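That the target state passes this test with certainty can be checked directly. The sketch below (ours, with $|s\>,|t\>$ realized as an orthonormal qubit pair and an arbitrary phase $\theta$) confirms that the $\Gamma^\pm$ combination preserves the phased reduced state:

```python
import numpy as np

theta = 1.234                  # arbitrary relative phase theta(i, j, u)
s, t = np.eye(2)               # |s>, |t> as an orthonormal pair

# Measurement vectors: party i uses phase +theta/2, party j uses -theta/2
a_p = (s + np.exp(1j * theta / 2) * t) / np.sqrt(2)   # Gamma_{i,j,u}^+
a_m = (s - np.exp(1j * theta / 2) * t) / np.sqrt(2)   # Gamma_{i,j,u}^-
b_p = (s + np.exp(-1j * theta / 2) * t) / np.sqrt(2)  # Gamma_{j,i,u}^+
b_m = (s - np.exp(-1j * theta / 2) * t) / np.sqrt(2)  # Gamma_{j,i,u}^-

G = (np.kron(np.outer(a_p, a_p.conj()), np.outer(b_p, b_p.conj()))
     + np.kron(np.outer(a_m, a_m.conj()), np.outer(b_m, b_m.conj())))

# Normalized reduced state of parties i and j
psi = (np.kron(s, t) + np.exp(1j * theta) * np.kron(t, s)) / np.sqrt(2)
```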
The resulting test projector reads $$\begin{aligned}
P^{\phi}_{i,j}&= \sum_{s=0}^g {\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{ss}) \otimes \big[({\mbox{$ | s \rangle $}}{\mbox{$ \langle s | $}})^{\otimes2}\big]_{i,j}+\sum_{s<t}\sum_{u\in B({\mathbf{k}}_{st})}|u\>\<u| \nonumber\\
&\quad \otimes
\Bigl(\Gamma_{i,j,u}^+\otimes \Gamma_{j,i,u}^+ + \Gamma_{i,j,u}^- \otimes \Gamma_{j,i,u}^-\Bigr)_{i,j},\end{aligned}$$ where ${\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{ss})$ is the projector defined in [Eq. ]{}. Each test is performed with probability $1/\binom{n}{2}$, and the resulting verification operator reads $$\label{eq:OmegaD'}
\Omega^{\phi}_{{\mathbf{k}}}=\binom{n}{2}^{-1}\sum_{i<j}P^{\phi}_{i,j}.$$ The efficiency of this protocol is guaranteed by the following theorem, which is proved in Appendix \[app:TheoDstateGenProof\]. As in the case of Dicke states, the spectral gap of $\Omega^{\phi}_{{\mathbf{k}}}$ is closely related to the spectrum of the transposition graph [@Chase1973; @Caputo2010].
\[thm:DstateGen\] The spectral gap of $\Omega^{\phi}_{{\mathbf{k}}}$ is the same as that of $\Omega_{{\mathbf{k}}}$ in [Eq. ]{}, namely, $$\label{eq:nuOmegaD'}
\nu\big(\Omega^{\phi}_{{\mathbf{k}}}\big)=\nu(\Omega_{{\mathbf{k}}})=\begin{cases}
1/2 & {\mathbf{k}}=(1,1,1), \\
1/3 & {\mathbf{k}}=(2,1), \\
1/(n-1) \ \ & n\ge4.
\end{cases}$$ To verify the phased Dicke state ${\mbox{$ | D_{\phi}({\mathbf{k}}) \rangle $}}$ within infidelity $\epsilon$ and significance level $\delta$, the number of tests required reads $$\begin{aligned}
\label{eq:NOmegaD'}
N\big(\epsilon,\delta,\Omega^{\phi}_{{\mathbf{k}}}\big)&\approx\begin{cases}
2 \epsilon^{-1} \ln\delta^{-1} & {\mathbf{k}}=(1,1,1), \\
3 \epsilon^{-1} \ln\delta^{-1} & {\mathbf{k}}=(2,1), \\
(n-1) \epsilon^{-1}\ln\delta^{-1} \ & n\ge4.
\end{cases}\end{aligned}$$
Optimal verification of antisymmetric basis states
==================================================
Finally, we consider the verification of the $n$-partite antisymmetric basis state, also known as the Slater determinant state [@Denni01; @Zanar02; @Bravyi03]. It has the following form $$\label{eq:AntisymState}
|{\mathrm{AS}}_n\>=\frac{1}{\sqrt{n!}}\sum_{j_1,j_2,\dots,j_n} \tilde\epsilon_{j_1,\dots,j_n} {\mbox{$ | j_1-1 \rangle $}}\otimes\cdots\otimes{\mbox{$ | j_n-1 \rangle $}} ,$$ where $j_1,j_2,\dots,j_n\in\{1,2,\dots,n\}$ and $\tilde\epsilon_{j_1,j_2,\dots,j_n}$ is the Levi-Civita symbol. Note that $|{\mathrm{AS}}_n\>$ can be regarded as a bipartite maximally entangled state of Schmidt rank $n$ between one party and the other parties. So the spectral gap of any verification operator based on LOCC or separable measurements is upper bounded by $n/(n+1)$ according to known results on the verification of maximally entangled states [@HayaMT06; @Haya09; @ZhuH19O]. Here we shall show that this upper bound can be saturated for any antisymmetric basis state with $n\geq 2$. When $n=2$, the state $|{\mathrm{AS}}_2\>$ is a singlet and can be verified using protocols for bipartite pure states proposed in Refs. [@HayaMT06; @Haya09; @ZhuH19O; @LHZ19; @Wang19; @Yu19]. Here we focus on the multipartite case with $n\geq3$.
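For concreteness, $|{\mathrm{AS}}_n\>$ can be built from the Levi-Civita symbol and its maximal entanglement across the one-versus-rest bipartition checked numerically (our own sketch and index conventions):

```python
import itertools
import math
import numpy as np

def levi_civita_sign(perm):
    """Sign of a permutation via its inversion count."""
    inversions = sum(p > q for i, p in enumerate(perm) for q in perm[i + 1:])
    return -1.0 if inversions % 2 else 1.0

def as_state(n):
    """|AS_n> as a vector of length n**n (first factor = first party)."""
    v = np.zeros(n ** n)
    for perm in itertools.permutations(range(n)):
        idx = sum(p * n ** (n - 1 - pos) for pos, p in enumerate(perm))
        v[idx] = levi_civita_sign(perm)
    return v / math.sqrt(math.factorial(n))

psi = as_state(3)
rho_1 = psi.reshape(3, 9) @ psi.reshape(3, 9).T   # reduced state of party 1
```

The reduced state of a single party is maximally mixed, as expected for a maximally entangled state of Schmidt rank $n$.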
Efficient verification protocol
-------------------------------
Note that the antisymmetric basis state $|{\mathrm{AS}}_n\>$ in [Eq. ]{} is a special case of phased Dicke states in [Eq. ]{} with ${\mathbf{k}}=(1,1,\ldots, 1)$ and $\phi(u)=1$ ($-1$) if $u$ is an even (odd) permutation of $0,1,2,\ldots, n-1$. Therefore, $|{\mathrm{AS}}_n\>$ can be verified using the strategy presented in [Eq. ]{} tailored to this specific case. Here we shall construct a variant protocol that also consists of $\binom{n}{2}$ distinct tests, and each test is associated with a pair of parties. In the test $P_{i,j}^{{\mathrm{AS}}}$ associated with parties $i$ and $j$, all $n-2$ parties other than parties $i$ and $j$ perform the generalized Pauli-$Z$ measurements, and their outcomes are labeled by a sequence $u$ of $n-2$ symbols, which corresponds to the product state $|u\>$. The measurements of parties $i$ and $j$ depend on the outcome $u$, and we need to distinguish two cases.
1. $u\in B({\mathbf{k}}_{st})$ with $0\leq s<t\leq n-1$.\
In this case, the normalized reduced state of parties $i$ and $j$ reads $\frac{1}{\sqrt{2}}(|s\>_i|t\>_j-|t\>_i|s\>_j)$. Then the two parties both perform the projective measurement $\big\{T_{s,t}^+,T_{s,t}^-,I-T_{s,t}^+ -T_{s,t}^-\big\}$, where $T_{s,t}^+$ and $T_{s,t}^-$ are projectors defined in [Eqs. (\[eq:Pst+\]) and (\[eq:Pst-\])]{}. The test is passed if one of them obtains the first outcome (corresponding to $T_{s,t}^+$) and the other one obtains the second outcome (corresponding to $T_{s,t}^-$).
2. Other cases.\
The state cannot be the target state ${\mbox{$ | {\mathrm{AS}}_n \rangle $}}$, so the test is not passed.
The resulting test projector reads $$\label{eq:PijAS}
P_{i,j}^{{\mathrm{AS}}}=\sum_{s<t} {\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{st}) \otimes \left(T_{s,t}^+\otimes T_{s,t}^- + T_{s,t}^-\otimes T_{s,t}^+\right)_{i,j},$$ where ${\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{st})$ is defined in [Eq. ]{} and acts on the tensor product space of all parties other than $i$ and $j$.
We perform each test with probability $1/\binom{n}{2}$, and the resulting verification operator reads $$\label{eq:OmegaAS}
\Omega_{{\mathrm{AS}}_n}=\binom{n}{2}^{-1} \sum_{i<j}P_{i,j}^{{\mathrm{AS}}}.$$ The efficiency of this protocol is guaranteed by the following theorem, which is proved in Appendix \[app:TheoAntiStateProof\].
\[thm:Antisymmetric\] The spectral gap of $\Omega_{{\mathrm{AS}}_n}$ with $n\geq3$ reads $$\label{eq:nuOmegaAS}
\nu\big(\Omega_{{\mathrm{AS}}_n}\big) = \frac{1}{n-1}.$$ To verify the antisymmetric basis state ${\mbox{$ | {\mathrm{AS}}_n \rangle $}}$ within infidelity $\epsilon$ and significance level $\delta$, the number of tests required reads $$\label{eq:NumberTestAS}
N\big(\epsilon,\delta,\Omega_{{\mathrm{AS}}_n}\big) \approx \frac{n-1}{\epsilon}\ln\delta^{-1}.$$
Incidentally, the measurement $\big\{T_{s,t}^+,T_{s,t}^-,I-T_{s,t}^+ -T_{s,t}^-\big\}$ employed in the above verification protocol can be replaced by the alternative $\big\{\tilde{T}_{s,t}^+,\tilde{T}_{s,t}^-,I-\tilde{T}_{s,t}^+ -\tilde{T}_{s,t}^-\big\}$, where $$\begin{aligned}
\tilde{T}_{s,t}^+&=\frac{1}{2}({\mbox{$ | s \rangle $}}+{\mathrm{i}}{\mbox{$ | t \rangle $}})({\mbox{$ \langle s | $}}-{\mathrm{i}}{\mbox{$ \langle t | $}}),\\
\tilde{T}_{s,t}^-&=\frac{1}{2}({\mbox{$ | s \rangle $}}-{\mathrm{i}}{\mbox{$ | t \rangle $}})({\mbox{$ \langle s | $}}+{\mathrm{i}}{\mbox{$ \langle t | $}}).\end{aligned}$$ Accordingly, the test projector $P_{i,j}^{{\mathrm{AS}}}$ is replaced by $$\tilde{P}_{i,j}^{{\mathrm{AS}}}=\sum_{s<t} {\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{st}) \otimes \left(\tilde{T}_{s,t}^+\otimes \tilde{T}_{s,t}^- + \tilde{T}_{s,t}^-\otimes \tilde{T}_{s,t}^+\right)_{i,j},$$ and the resulting verification operator $\tilde{\Omega}_{{\mathrm{AS}}_n}$ is given by [Eq. ]{} with $P_{i,j}^{{\mathrm{AS}}}$ replaced by $\tilde{P}_{i,j}^{{\mathrm{AS}}}$. This verification strategy is a special case of the strategy presented in Sec. \[sec:PhasedDicke\] (tailored to the antisymmetric basis state). According to Theorems \[thm:DstateGen\] and \[thm:Antisymmetric\], we have $$\nu\big(\tilde{\Omega}_{{\mathrm{AS}}_n}\big) = \frac{1}{n-1}=\nu\big(\Omega_{{\mathrm{AS}}_n}\big).$$ Therefore, the two strategies $\Omega_{{\mathrm{AS}}_n}$ and $\tilde{\Omega}_{{\mathrm{AS}}_n}$ are equally efficient.
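As a sanity check (a numerical sketch with hypothetical helper names), one can verify that $\tilde{T}_{s,t}^\pm$ are orthogonal rank-one projectors summing to $|s\>\<s|+|t\>\<t|$, and that the singlet-type reduced state $\frac{1}{\sqrt{2}}(|s\>|t\>-|t\>|s\>)$ passes the corresponding two-party test with certainty:

```python
import numpy as np

def t_tilde(s, t, d, sign):
    """T~_{s,t}^+ (sign=+1) or T~_{s,t}^- (sign=-1) on a d-dimensional party."""
    ket = np.zeros(d, dtype=complex)
    ket[s], ket[t] = 1.0, sign * 1j
    ket /= np.sqrt(2)
    return np.outer(ket, ket.conj())

d, s, t = 3, 0, 1
Tp, Tm = t_tilde(s, t, d, +1), t_tilde(s, t, d, -1)
# orthogonal rank-one projectors with T+ + T- = |s><s| + |t><t|
assert np.allclose(Tp @ Tp, Tp) and np.allclose(Tp @ Tm, 0)
proj_st = np.zeros((d, d)); proj_st[s, s] = proj_st[t, t] = 1
assert np.allclose(Tp + Tm, proj_st)
# the singlet-type state on parties (i, j) passes with probability 1
ket_st = np.zeros(d * d); ket_st[s * d + t] = 1
ket_ts = np.zeros(d * d); ket_ts[t * d + s] = 1
singlet = (ket_st - ket_ts) / np.sqrt(2)
passed = np.kron(Tp, Tm) + np.kron(Tm, Tp)
assert abs(singlet @ passed @ singlet - 1) < 1e-12
```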
\[sec:OptimalASn\]Optimal verification protocol based on symmetrization
-----------------------------------------------------------------------
Let $\tilde{H}$ be the group of all unitary transformations of the form $U^{\otimes n}$ with $U\in {\mathrm{U}}({\mathbb{C}}^n)$ (here $U$ is not required to be diagonal), let $S$ be the group of all permutations of the $n$ parties, and let $\tilde{G}=\tilde{H}S$. Then the projector onto the antisymmetric basis state $|{\mathrm{AS}}_n\>$ is invariant under $\tilde{G}$. Therefore, we can construct a symmetrized strategy $\Omega_{{\mathrm{AS}}_n}^{\tilde{G}}$ according to Sec. \[sec:sym\]. As with $\Omega_{{\mathbf{k}}}$, the operator $\Omega_{{\mathrm{AS}}_n}$ is by construction invariant under $S$, so we have $\Omega_{{\mathrm{AS}}_n}^S=\Omega_{{\mathrm{AS}}_n}$ and $\Omega_{{\mathrm{AS}}_n}^{\tilde{G}}=\Omega_{{\mathrm{AS}}_n}^{\tilde{H}}$. For convenience in practical applications, the group ${\mathrm{U}}({\mathbb{C}}^n)$ used to construct $\tilde{H}$ can also be replaced by a unitary $t$-design with $t=n$ [@DankCEL09; @Gross07].
To determine $\Omega_{{\mathrm{AS}}_n}^{\tilde{G}}$, note that $\tilde{H}$ is a representation of ${\mathrm{U}}({\mathbb{C}}^n)$ and $S$ is a representation of the symmetric group ${\mathscr{S}}_n$ on $n$ letters. Accordingly, $\tilde{G}$ is a representation of ${\mathrm{U}}({\mathbb{C}}^n)\times {\mathscr{S}}_n$. By Schur-Weyl duality [@Weyl1931; @Proc2007], all the irreducible components of $\tilde{G}$ in $({\mathbb{C}}^n)^{\otimes n}$ are multiplicity free, and each irreducible component is labeled by a partition of $n$. Meanwhile, $({\mathbb{C}}^n)^{\otimes n}$ has the following decomposition $$({\mathbb{C}}^n)^{\otimes n}=\bigoplus_{\mu \vdash n} {\mathcal{H}}_\mu=\bigoplus_{\mu \vdash n} {\mathcal{W}}_\mu\otimes {\mathcal{S}}_\mu,$$ where the notation $\mu \vdash n$ means that $\mu=(\mu_1, \mu_2,\ldots, \mu_n)$ is a partition of $n$, that is, the $\mu_j$ are nonnegative integers arranged in nonincreasing order that sum to $n$. Here ${\mathcal{W}}_\mu$ carries the irreducible representation of the unitary group ${\mathrm{U}}({\mathbb{C}}^n)$, while ${\mathcal{S}}_\mu$ carries the irreducible representation of the symmetric group ${\mathscr{S}}_n$. Let $D_\mu=\dim({\mathcal{W}}_\mu)$ and $d_\mu=\dim ({\mathcal{S}}_\mu)$; let $P_\mu$ be the projector onto ${\mathcal{H}}_\mu$; then we have ${\operatorname{tr}}(P_\mu)=\dim({\mathcal{H}}_\mu)=d_\mu D_\mu$. The following theorem is proved in Appendix \[app:ComputeOmegaASG\].
\[thm:OmegaASG\] For $n\geq 3$ we have $$\begin{aligned}
\Omega_{{\mathrm{AS}}_n}^{\tilde{G}}&=\sum_{\mu} \frac{d_\mu}{D_\mu} P_\mu, \label{eq:OmegaASG} \\
\beta(\Omega_{{\mathrm{AS}}_n}^{\tilde{G}})&=\frac{1}{n+1}, \quad \nu(\Omega_{{\mathrm{AS}}_n}^{\tilde{G}})=\frac{n}{n+1}. \label{eq:nuOmegaASG}
\end{aligned}$$ To verify the antisymmetric basis state ${\mbox{$ | {\mathrm{AS}}_n \rangle $}}$ within infidelity $\epsilon$ and significance level $\delta$, the number of tests required reads $$\label{eq:NumberTestASG}
N\big(\epsilon,\delta,\Omega_{{\mathrm{AS}}_n}^{\tilde{G}}\big) \approx \frac{n+1}{n\epsilon}\ln\delta^{-1}.$$
Equation [Eq. ]{} in Theorem \[thm:OmegaASG\] follows from [Eq. ]{} and [Lemma \[lem:DimRatio\]]{} below, which imply that the second largest eigenvalue of $\Omega_{{\mathrm{AS}}_n}^{\tilde{G}}$ is $d_\mu/D_\mu$ with $\mu=(2,1,\ldots, 1)$. In this case, we have $d_\mu=n-1$ and $D_\mu=n^2-1$, which yields [Eq. ]{}. Theorem \[thm:OmegaASG\] implies that our protocol associated with the verification operator $\Omega_{{\mathrm{AS}}_n}^{\tilde{G}}$ is optimal for verifying the antisymmetric basis state $|{\mathrm{AS}}_n\>$ under LOCC. This is the only optimal protocol known for multipartite nonstabilizer states. For quantum states with GME, it is extremely difficult to construct optimal verification protocols under LOCC, and such optimal protocols were previously known only for GHZ states [@LiGHZ19].
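The dimensions entering Theorem \[thm:OmegaASG\] can be computed from the standard hook length and hook content formulas; the following sketch (the helper names are ours) reproduces $d_\mu=n-1$ and $D_\mu=n^2-1$ for $\mu=(2,1,\ldots,1)$, as well as the completeness relation $\sum_{\mu\vdash n} d_\mu D_\mu=n^n$.

```python
import math

def hooks(mu):
    """Hook lengths of the Young diagram of the partition mu."""
    mu = [m for m in mu if m > 0]
    cols = [sum(1 for m in mu if m > j) for j in range(mu[0])]  # conjugate
    return [(mu[i] - j) + (cols[j] - i) - 1
            for i in range(len(mu)) for j in range(mu[i])]

def dim_Sn(mu):
    """d_mu: dimension of the S_n irrep, n! / prod(hooks)."""
    return math.factorial(sum(mu)) // math.prod(hooks(mu))

def dim_Un(mu, N):
    """D_mu: dimension of the U(N) irrep, prod(N + content) / prod(hooks)."""
    mu = [m for m in mu if m > 0]
    num = math.prod(N + j - i for i in range(len(mu)) for j in range(mu[i]))
    return num // math.prod(hooks(mu))

n = 5
mu = (2,) + (1,) * (n - 2)                 # mu = (2, 1, ..., 1)
assert dim_Sn(mu) == n - 1 and dim_Un(mu, n) == n**2 - 1
# completeness for n = 3: sum_mu d_mu * D_mu = 3^3
assert sum(dim_Sn(m) * dim_Un(m, 3) for m in [(3,), (2, 1), (1, 1, 1)]) == 27
```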
A partition $\mu\vdash n$ is majorized by another partition $\mu'\vdash n$, denoted by $\mu\prec \mu'$, if $$\sum_{j=1}^k\mu_j \leq \sum_{j=1}^k\mu_j'\quad \forall k=1,2,\ldots, n.$$ Note that the inequality is saturated when $k=n$. The following lemma, proved in Appendix \[app:ComputeOmegaASG\], is instructive for understanding the spectrum and spectral gap of the verification operator $\Omega_{{\mathrm{AS}}_n}^{\tilde{G}}$.
\[lem:DimRatio\] Suppose $\mu, \mu' \vdash n$ and $\mu\prec\mu'$; then $$\label{eq:DimRatio}
\frac{D_\mu}{d_\mu}\leq \frac{D_{\mu'}}{d_{\mu'}}.$$
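Lemma \[lem:DimRatio\] can be spot-checked numerically for small $n$ by enumerating all majorization-comparable pairs of partitions (a self-contained sketch using the hook length and hook content formulas; the helper names are ours).

```python
import math

def partitions(n, maxpart=None):
    """All partitions of n, parts in nonincreasing order."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def majorized(mu, nu):
    """True if mu is majorized by nu, i.e. mu < nu in dominance order."""
    a = b = 0
    for k in range(max(len(mu), len(nu))):
        a += mu[k] if k < len(mu) else 0
        b += nu[k] if k < len(nu) else 0
        if a > b:
            return False
    return True

def dims(mu, N):
    """(d_mu, D_mu) via the hook length / hook content formulas."""
    cols = [sum(1 for m in mu if m > j) for j in range(mu[0])]
    cells = [(i, j) for i in range(len(mu)) for j in range(mu[i])]
    h = math.prod((mu[i] - j) + (cols[j] - i) - 1 for i, j in cells)
    d = math.factorial(sum(mu)) // h
    D = math.prod(N + j - i for i, j in cells) // h
    return d, D

def lemma_holds(n):
    """Check: mu < nu  implies  D_mu/d_mu <= D_nu/d_nu (cross-multiplied)."""
    ps = list(partitions(n))
    return all(dims(mu, n)[1] * dims(nu, n)[0] <= dims(nu, n)[1] * dims(mu, n)[0]
               for mu in ps for nu in ps if majorized(mu, nu))
```

Running `lemma_holds(n)` for $n=4,5,6$ returns `True` in each case, consistent with the lemma.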
Efficient certification of GME
------------------------------
According to Ref. [@Guhne09], a quantum state $\rho$ is genuinely multipartite entangled (GME) if its fidelity with some multipartite entangled state $|\Psi\>$ is larger than $C_\Psi$, where $C_\Psi$ is the square of the maximum of the Schmidt coefficients of $|\Psi\>$ with respect to all bipartitions. Note that $C_{\Psi}$ equals $1/n$ when $|\Psi\>$ is the antisymmetric basis state $|{\mathrm{AS}}_n\>$. Thus a state $\rho$ is GME if ${\operatorname{tr}}(\rho|{\mathrm{AS}}_n\>\<{\mathrm{AS}}_n|)>1/n$. Given a verification strategy $\Omega$ for $|{\mathrm{AS}}_n\>$, to certify the GME of the antisymmetric basis state with significance level $\delta$, the number of tests is determined by [Eq. ]{} with $\epsilon=(n-1)/n$. If $\Omega$ is the optimal local strategy with $\nu(\Omega)=n/(n+1)$ (the strategy $\Omega_{{\mathrm{AS}}_n}^{\tilde{G}}$ constructed in Sec. \[sec:OptimalASn\] for example), then this number reads $$N_\mathrm{E}=\biggl\lceil\frac{ \ln \delta}{\ln2-\ln(n+1)}\biggr\rceil,$$ which decreases monotonically with $n$. We have $N_\mathrm{E}=1$ when $n\geq2\delta^{-1}-1$, so the GME of the antisymmetric basis state can be certified with any given significance level using only one test when the number $n$ of particles is large enough. Previously, single-copy certification of GME was known only for GHZ states [@LiGHZ19] and qudit stabilizer states [@ZhuEVQPSlong19]. The current result is of special interest because it may shed light on the certification of GME of other nonstabilizer states.
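The single-copy threshold stated above is easy to check numerically (a sketch; the function name is ours): $N_\mathrm{E}$ decreases with $n$ and reaches $1$ once $n\geq 2\delta^{-1}-1$.

```python
import math

def n_tests_gme(n, delta):
    """N_E = ceil(ln(delta) / (ln 2 - ln(n+1))) for the optimal strategy."""
    return math.ceil(math.log(delta) / (math.log(2) - math.log(n + 1)))

# for delta = 0.05 the threshold 2/delta - 1 = 39 parties
assert n_tests_gme(40, 0.05) == 1
assert n_tests_gme(38, 0.05) == 2
# N_E is nonincreasing in n
assert all(n_tests_gme(n, 0.01) >= n_tests_gme(n + 1, 0.01)
           for n in range(3, 50))
```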
Summary
=======
We proposed several efficient protocols for verifying general phased Dicke states, including $W$ states and qudit Dicke states. Our protocols require only adaptive local projective measurements and can be realized with current technologies. To verify an $n$-qudit phased Dicke state within infidelity $\epsilon$, the number of tests required is only $O(n/\epsilon)$, which is exponentially more efficient than previous approaches based on quantum state tomography and direct fidelity estimation. In addition, this number can be further reduced to $O(\sqrt{n}/\epsilon)$ for $W$ states. One of our protocols for the three-qubit $W$ state is nearly optimal for both the nonadversarial and adversarial scenarios, and it can also be applied to fidelity estimation. Moreover, we constructed an optimal protocol for verifying the antisymmetric basis state; the number of tests required decreases monotonically with the number $n$ of particles. By virtue of this protocol, the GME of the antisymmetric basis state can be certified with any given significance level using only one test when $n$ is sufficiently large. In this way, our work provides powerful tools for characterizing and verifying various phased Dicke states. In the course of this study, we introduced several methods for improving the efficiency of a given verification strategy, which are useful for studying quantum verification in general. In addition, our work highlights the significance of graph theory and representation theory in studying quantum verification, which is of interest to many researchers beyond quantum information science.
Acknowledgment {#acknowledgment .unnumbered}
==============
ZL thanks Jiahao Li for helpful discussion on the proof of Proposition \[pro:LimSqrt(n)h(n)\]. HZ is grateful to Prof. Eiichi Bannai for stimulating discussion on the spectrum of the transposition graph. This work is supported by the National Natural Science Foundation of China (Grant No. 11875110) and Shanghai Municipal Science and Technology Major Project (Grant No. 2019SHZDZX04). JS acknowledges support by the Beijing Institute of Technology Research Fund Program for Young Scholars and the National Natural Science Foundation of China (Grant No. 11805010).
\[app:LemmaTwoTestProof\]Proof of [Lemma \[lem:2TestStrategy\]]{}
=================================================================
Note that $|\Psi\>$ is an eigenstate of $P_1$ and $P_2$ with eigenvalue 1 by assumption. Without loss of generality, we can assume that $P_1$ has rank $l+1$ and $P_2$ has rank $h+1$ with $h\leq l$. Then we can find two sets of orthonormal states $\{|\phi_j\>\}_{j=1}^l$ and $\{|\varphi_k\>\}_{k=1}^h$ such that $$\label{eq:qk}
P_1=|\Psi\>\<\Psi|+\sum_{j=1}^l|\phi_j\>\<\phi_j|, \qquad
P_2=|\Psi\>\<\Psi|+\sum_{k=1}^h|\varphi_k\>\<\varphi_k|, \qquad
|\<\phi_j|\varphi_k\>|^2=q_k\delta_{jk},$$ where the overlaps $q_k$ are arranged in decreasing order, that is, $1\geq q_1\geq q_2\geq\cdots\geq q_h\geq0$. As a consequence, we have $$\begin{aligned}
\bar{P}_1=&\sum_{j=1}^l|\phi_j\>\<\phi_j|, \qquad
\bar{P}_2=\sum_{k=1}^h|\varphi_k\>\<\varphi_k|,\\
q:=&\left\|\bar{P}_1\bar{P}_2 \bar{P}_1\right\|
=\Bigg\|\bigg(\sum_{j=1}^l|\phi_j\>\<\phi_j|\bigg) \bigg(\sum_{k=1}^h|\varphi_k\>\<\varphi_k|\bigg) \bigg(\sum_{j=1}^l|\phi_j\>\<\phi_j|\bigg)\Bigg\|
=\Bigg\| \sum_{k=1}^h q_k |\phi_k\>\<\phi_k| \Bigg\|
=q_1,\\
\max_{|\phi\>\in {\operatorname{supp}}(\bar{P}_1)} \<\phi|P_2|\phi\>=&\max_{\sum_{j=1}^l|c_j|^2=1}\sum_{j,k=1}^l c_j^* c_k\<\phi_j |P_2|\phi_k\>=\max_{\sum_{j=1}^l|c_j|^2=1} \sum_{j=1}^h q_j |c_j|^2=q_1 =\left\|\bar{P}_1\bar{P}_2 \bar{P}_1\right\|=q.\end{aligned}$$
In addition, the verification operator $\Omega$ can be expressed as follows, $$\Omega=pP_1+(1-p)P_2=|\Psi\>\<\Psi|+\sum_{j=1}^h \big[ p|\phi_j\>\<\phi_j|+(1-p)|\varphi_j\>\<\varphi_j| \big]+p\sum_{k=h+1}^l|\phi_k\>\<\phi_k|.$$ So the second largest eigenvalue of $\Omega$ reads $$\begin{aligned}
\lambda_2(\Omega)&=\bigl\|\Omega-|\Psi\>\<\Psi|\bigr\|
=\max_{1\leq j\leq h} \bigl\|p|\phi_j\>\<\phi_j|+(1-p)|\varphi_j\>\<\varphi_j|\bigr\| =\max_{1\leq j\leq h} \frac{1}{2}\Big[1+\sqrt{(2p-1)^2+4p(1-p)q_j} \,\Big]
\nonumber\\
&=\frac{1}{2}\Big[1+\sqrt{(2p-1)^2+4p(1-p)q_1}\, \Big]=\frac{1}{2}\Big[1+\sqrt{(2p-1)^2+4p(1-p)q}\, \Big] \nonumber\\
&= \frac{1}{2}\Big[1+\sqrt{4(1-q)p^2-4(1-q)p+1}\Big]
\geq \frac{1+\sqrt{q}}{2}.\end{aligned}$$ If $q<1$, then the lower bound is saturated iff $p=1/2$, in which case we have $\Omega=(P_1+P_2)/2$. Therefore, the spectral gap satisfies $\nu(\Omega)\leq(1-\sqrt{q})/2$, and the upper bound is saturated iff $p=1/2$, which confirms [Lemma \[lem:2TestStrategy\]]{}.
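The optimization over $p$ above can be visualized with a short numerical sketch: $\lambda_2(\Omega)$ as a function of $p$ is minimized at $p=1/2$, where it equals $(1+\sqrt{q})/2$.

```python
import numpy as np

def lambda2(p, q):
    """Second largest eigenvalue of Omega = p P_1 + (1-p) P_2 with overlap q."""
    return 0.5 * (1 + np.sqrt((2 * p - 1)**2 + 4 * p * (1 - p) * q))

q = 0.4                       # the overlap found later for the W state, n = 3
ps = np.linspace(0, 1, 1001)
vals = lambda2(ps, q)
# minimum over p is (1 + sqrt(q))/2, attained at p = 1/2
assert np.isclose(vals.min(), (1 + np.sqrt(q)) / 2)
assert np.isclose(ps[vals.argmin()], 0.5)
```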
\[app:TheoDickeProof\]Proof of [Theorem \[thm:Dicke\]]{}
========================================================
The verification operator $\Omega_{\mathbf{k}}$ can be expressed as $$\begin{aligned}
\label{eq:decomOmegaD}
\Omega_{\mathbf{k}}&= \binom{n}{2}^{-1}\sum_{i<j} \sum_{s=0}^g {\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{ss}) \otimes \left[({\mbox{$ | s \rangle $}}{\mbox{$ \langle s | $}})^{\otimes2}\right]_{i,j}
+\binom{n}{2}^{-1}\sum_{i<j}\sum_{s<t}{\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{st}) \otimes
\left[(T_{s,t}^+)^{\otimes 2}+(T_{s,t}^-)^{\otimes 2}\right]_{i,j}\nonumber\\
&=\frac{1}{n(n-1)} \Bigg(\sum_{s=0}^r k_s^2-n\Bigg) \mathcal{Z}({\mathbf{k}})
+\frac{2}{n(n-1)}\sum_{i<j}\sum_{s<t} {\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{st})
\otimes \left[\bigl({\mbox{$ | \psi_{s,t}^+ \rangle $}}{\mbox{$ \langle \psi_{s,t}^+ | $}}\bigr)_{i,j}+\bigl({\mbox{$ | \varphi_{s,t}^+ \rangle $}}{\mbox{$ \langle \varphi_{s,t}^+ | $}}\bigr)_{i,j}\right] \nonumber\\
&= \frac{1}{n(n-1)}\bigg[M_1+\sum_{s<t}M_{(s,t)}\bigg]\,.\end{aligned}$$ Here ${\mbox{$ | \psi_{s,t}^+ \rangle $}}=\frac{1}{\sqrt{2}}({\mbox{$ | s \rangle $}}{\mbox{$ | t \rangle $}}+{\mbox{$ | t \rangle $}}{\mbox{$ | s \rangle $}})$, ${\mbox{$ | \varphi_{s,t}^+ \rangle $}}=\frac{1}{\sqrt{2}}({\mbox{$ | s \rangle $}}{\mbox{$ | s \rangle $}}+{\mbox{$ | t \rangle $}}{\mbox{$ | t \rangle $}})$, ${\mathcal{Z}}({\mathbf{k}})=\sum_{u\in B({\mathbf{k}}) }{\mbox{$ | u \rangle $}}{\mbox{$ \langle u | $}}$, and $$\begin{aligned}
M_1&:=\Bigg(\sum_{s=0}^r k_s^2-n+\sum_{s<t} k_s k_t \Bigg){\mathcal{Z}}({\mathbf{k}})+\sum_{\substack{u,v\in B({\mathbf{k}})\\ u\sim v }}{\mbox{$ | u \rangle $}}{\mbox{$ \langle v | $}}
=\frac{1}{2}\Bigg(n^2-2n+\sum_{s=0}^r k_s^2 \Bigg){\mathcal{Z}}({\mathbf{k}})+\sum_{u,v\in B({\mathbf{k}}) }A_{u v}{\mbox{$ | u \rangle $}}{\mbox{$ \langle v | $}}\,,\label{eq:M1}\\
M_{(s,t)}
&:= \frac{k_s(k_s+1)}{2}\sum_{u\in B({\mathbf{k}}^s_{t}) }{\mbox{$ | u \rangle $}}{\mbox{$ \langle u | $}}
+\frac{k_t(k_t+1)}{2}\sum_{v\in B({\mathbf{k}}^t_{s}) }{\mbox{$ | v \rangle $}}{\mbox{$ \langle v | $}}
+\sum_{\substack{u\in B({\mathbf{k}}^s_{t})\\v\in B({\mathbf{k}}^t_{s})\\ u\sim v}}\bigl({\mbox{$ | u \rangle $}}{\mbox{$ \langle v | $}}+{\mbox{$ | v \rangle $}}{\mbox{$ \langle u | $}}\bigr),
\label{eq:M(s,t)}\end{aligned}$$ where the notation $u\sim v$ means $u_j\neq v_j$ for exactly two values of $j$. The coefficient matrix $(A_{uv})$ for $u,v\in B({\mathbf{k}})$ happens to be the adjacency matrix $A({\mathbf{k}})$ of the transposition graph $G({\mathbf{k}})$ [@Chase1973] explained in Appendix \[app:SpecGraph\]. Note that $M_1$ and all $M_{(s,t)}$ (with $s,t=0,1,\dots,r$ and $s<t$) are hermitian and have mutually orthogonal supports, so all of them are positive semidefinite given that $\Omega_{\mathbf{k}}$ is positive semidefinite by construction.
According to [Lemma \[lem:GraphSpect\]]{} in Appendix \[app:SpecGraph\], the maximum eigenvalue of $A({\mathbf{k}})$ is $d=\big(n^2-\sum_{s=0}^r k_s^2\big)/2$ with multiplicity 1, and the second largest eigenvalue of $A({\mathbf{k}})$ is equal to $d-n$. Therefore, the two largest eigenvalues of $M_1$ read $$\label{eq:M1lambda}
\lambda_1(M_1)=n(n-1), \qquad \lambda_2(M_1)=n(n-1)-n=n(n-2).$$ In addition, direct calculations show that $M_{(s,t)}$ has an eigenstate $$\begin{aligned}
{\mbox{$ | \Theta_{s,t} \rangle $}}&=\frac{1}{\sqrt{k_s(k_s+1)+k_t(k_t+1)}}\Big[ \sqrt{k_s(k_s+1)}\, \big|D({\mathbf{k}}^s_{t})\big\>
+\sqrt{k_t(k_t+1)}\, \big|D({\mathbf{k}}^t_{s})\big\> \Big]\nonumber\\
&=\sqrt{\frac{\prod_{j=0}^r k_j!}{ k_s k_t[k_s(k_s+1)+k_t(k_t+1)](n!)}}\,\Biggl[k_s(k_s+1)\sum_{u\in B({\mathbf{k}}^s_{t})} |u\>+ k_t(k_t+1)\sum_{u\in B({\mathbf{k}}^t_{s})} |u\>\Biggr],
\label{eq:defphiD}\end{aligned}$$ with eigenvalue $$\label{eq:M(s,t)lambda}
\lambda_1\big(M_{(s,t)}\big)=\frac{k_s(k_s+1)}{2}+\frac{k_t(k_t+1)}{2}\,.$$ According to the Perron-Frobenius theorem (see Chapter 8 in Ref. [@Meyer2000] for example), this is the largest eigenvalue of $M_{(s,t)}$, given that $M_{(s,t)}$ is irreducible in the subspace spanned by ${\mbox{$ | u \rangle $}}$ with $u\in B({\mathbf{k}}^s_{t})\cup B({\mathbf{k}}^t_{s})$, that is, the graph corresponding to the third term of $M_{(s,t)}$ in [Eq. ]{} is connected. In conjunction with Eqs. and , we can deduce the second largest eigenvalue and spectral gap of $\Omega_{\mathbf{k}}$, with the result $$\begin{aligned}
\lambda_2(\Omega_{\mathbf{k}})&=\max\left\{\frac{\lambda_2(M_1)}{n(n-1)},\,\max_{s<t}\frac{ \lambda_1\big(M_{(s,t)}\big) }{n(n-1)} \right\}
=\max\left\{\frac{n-2}{n-1}, \frac{ k_0(k_0+1)+k_1(k_1+1) }{2n(n-1)} \right\},\\
\nu(\Omega_{\mathbf{k}})&=1-\lambda_2(\Omega_{\mathbf{k}})=\min\left\{\frac{1}{n-1},1-\frac{ k_0(k_0+1)+k_1(k_1+1) }{2n(n-1)} \right\}.
\label{eq:gapD}\end{aligned}$$ When $n\ge 4$, the above equations reduce to $$\lambda_2(\Omega_{\mathbf{k}})=\frac{n-2}{n-1}\,,\qquad\qquad \nu(\Omega_{\mathbf{k}})=\frac{1}{n-1}\,,$$ which confirms [Eq. ]{}. When $n=3$, [Eq. ]{} can be verified directly by virtue of [Eq. ]{}.
Equation follows from [Eqs. (\[eq:NumberTest\]) and (\[eq:nuOmegaD\])]{}. This observation completes the proof of [Theorem \[thm:Dicke\]]{}.
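The graph-spectral input to this proof, namely that $A(\mathbf{k})$ has largest eigenvalue $d=\big(n^2-\sum_{s} k_s^2\big)/2$ with multiplicity 1 and second largest eigenvalue $d-n$, can be verified by brute force for small $\mathbf{k}$ (a numerical sketch, not part of the proof; the function name is ours).

```python
import itertools
import numpy as np

def transposition_graph_eigs(k):
    """Sorted (descending) eigenvalues of the transposition graph G(k):
    vertices are arrangements of the multiset with k_s copies of symbol s,
    and u ~ v iff the strings differ in exactly two positions."""
    symbols = [s for s, ks in enumerate(k) for _ in range(ks)]
    verts = sorted(set(itertools.permutations(symbols)))
    A = np.array([[float(sum(a != b for a, b in zip(u, v)) == 2)
                   for v in verts] for u in verts])
    return np.sort(np.linalg.eigvalsh(A))[::-1]

for k in [(1, 1, 1), (2, 1), (2, 2), (2, 1, 1)]:
    n = sum(k)
    d = (n**2 - sum(ks**2 for ks in k)) // 2
    eigs = transposition_graph_eigs(k)
    assert np.isclose(eigs[0], d) and np.isclose(eigs[1], d - n)
```

For instance, $G((1,1,1))$ is the complete bipartite graph $K_{3,3}$ with spectrum $\{3,0,0,0,0,-3\}$, so $d=3$ and the second largest eigenvalue is $0=d-n$.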
\[app:ComputeOmegaD\^G\]Determination of $\Omega_{\mathbf{k}}^G$ and proof of [Eq. ]{}
======================================================================================
Denote by $H$ the group of all unitary transformations of the form $U^{\otimes n}$, where $U$ is diagonal in the computational basis. According to [Eqs. (\[eq:OmegaSym\]) and (\[eq:decomOmegaD\])]{}, we have $$\begin{aligned}
\label{eq:decomOmegaD^G}
\Omega_{\mathbf{k}}^G=\Omega_{\mathbf{k}}^H=\frac{M_1^H+\sum_{s<t}M_{(s,t)}^H}{n(n-1)},\end{aligned}$$ where $$\label{eq:M^H}
M_1^H=M_1, \quad M_{(s,t)}^H =
\frac{k_s(k_s+1)}{2}\sum_{u\in B({\mathbf{k}}^s_{t}) }{\mbox{$ | u \rangle $}}{\mbox{$ \langle u | $}}
+\frac{k_t(k_t+1)}{2}\sum_{v\in B({\mathbf{k}}^t_{s}) }{\mbox{$ | v \rangle $}}{\mbox{$ \langle v | $}}.$$ Equation follows from [Eqs. (\[eq:M1\]) and (\[eq:M(s,t)\])]{} as well as the fact that $(|u\>\<v|)^H=|u\>\<v|$ if $u$ and $v$ can be turned into each other by a permutation, while $(|u\>\<v|)^H=0$ otherwise.
Note that $M_1$ and all $M^H_{(s,t)}$ (with $s,t=0,1,\dots,r$ and $s<t$) are positive semidefinite and have mutually orthogonal supports. The largest eigenvalue of $M^H_{(s,t)}$ reads $\lambda_1\big(M_{(s,t)}^H\big)=k_s(k_s+1)/2$. In conjunction with [Eqs. (\[eq:decomOmegaD\^G\]), (\[eq:M\^H\]), and (\[eq:M1lambda\])]{}, we can deduce the second largest eigenvalue and the spectral gap of $\Omega_{\mathbf{k}}^G$, with the result $$\lambda_2\big(\Omega^G_{\mathbf{k}}\big)
=\max\left\{\frac{\lambda_2(M_1^H)}{n(n-1)},\,\max_{s<t}\frac{ \lambda_1\big(M_{(s,t)}^H\big) }{n(n-1)} \right\}=
\max\left\{\frac{n-2}{n-1}, \frac{ k_0(k_0+1) }{2n(n-1)} \right\}= \frac{n-2}{n-1}, \qquad\quad
\nu\big(\Omega^G_{\mathbf{k}}\big)
=\frac{1}{n-1},$$ which confirms [Eq. ]{}.
\[app:derive q\]Proof of [Eq. ]{}
=================================
Note that each ket $|\zeta\>$ in the support of the test projector $P_1$ in [Eq. ]{} has the following form $$\label{eq:zetaState}
|\zeta\>=a_1|10\dots00\>+a_2|01\dots00\>+\cdots+a_n|00\dots01\>,$$ where $a_1,a_2,\dots,a_n$ are complex numbers that satisfy the normalization condition $\sum_{j=1}^n |a_j|^2=1$. In conjunction with [Lemma \[lem:2TestStrategy\]]{} and Eqs. -, we can deduce that $$\begin{aligned}
\label{eq:q expansion}
q=&\|\bar{P}_1\bar{P}_2 \bar{P}_1\|=\max_{\substack{\<W_n|\zeta\>=0\\ \<\zeta|P_1|\zeta\>=1}}\<\zeta|P_2|\zeta\>
=\frac{1}{2^{n-1}} \max_{\substack{\sum_{j} a_j=0\\\sum_{j} |a_j|^2=1}} \sum_{x\in\{0,1\}^{n-1}}
\frac{\left|a_n+ (n-1-2|x|)\sum_{j=1}^{n-1}(-1)^{x_j}a_j \right|^2}{1+\left(n-1-2|x|\right)^2}
\nonumber\\
=&\frac{1}{2^{n-1}} \max_{\substack{\sum_{j} a_j=0\\\sum_{j} |a_j|^2=1}} \sum_{k=0}^{n-1} \frac{1}{1+\left(n-1-2k\right)^2}
\sum_{\substack{x\in\{0,1\}^{n-1}\\|x|=k}}
\left|a_n+ (n-1-2k)\sum_{j=1}^{n-1}(-1)^{x_j}a_j \right|^2,\end{aligned}$$ where $|x|$ denotes the Hamming weight of $x$. When $x\in\{0,1\}^{n-1}$ and $a_1,a_2,\dots,a_n$ satisfy the conditions $\sum_{j} a_j=0$ and $\sum_{j}|a_j|^2=1$, we can derive the following equalities, $$\begin{aligned}
&\sum_{|x|=k}
\left|a_n+ (n-1-2k)\sum_{j=1}^{n-1}(-1)^{x_j}a_j \right|^2
\nonumber\\
&=\sum_{|x|=k} |a_n|^2
+2{\mathrm{Re}}\!\left[(n-1-2k) a_n^* \sum_{|x|=k}\sum_{j=1}^{n-1}(-1)^{x_j}a_j\right]
+(n-1-2k)^2 \sum_{|x|=k} \left|\sum_{j=1}^{n-1}(-1)^{x_j}a_j \right|^2
\nonumber\\
&=\frac{4k(n-1-k)(n-1-2k)^2}{(n-1)(n-2)}\binom{n-1}{k}
+\binom{n-1}{k}\left\{1+(n-1-2k)^2\left[1-\frac{2}{n-1}-\frac{8k(n-1-k)}{(n-1)(n-2)}\right]\right\}|a_n|^2\label{eq:|.| expansion}\end{aligned}$$ for $k=0,1,\dots,n-1$, where $a_j^*$ denotes the complex conjugate of $a_j$. The last equality in [Eq. ]{} follows from the following equations, $$\begin{aligned}
\sum_{|x|=k}\sum_{j=1}^{n-1}(-1)^{x_j}a_j
&=\frac{n-1-2k}{n-1}\binom{n-1}{k}\sum_{j=1}^{n-1}a_j=-\frac{n-1-2k}{n-1}\binom{n-1}{k}a_n, \\
\sum_{|x|=k} \Biggl|\sum_{j=1}^{n-1}(-1)^{x_j}a_j \Biggr|^2
&= \binom{n-1}{k} \left[ \sum_{j=1}^{n-1}|a_j|^2
+\frac{(n-1)(n-2)-4k(n-1-k)}{(n-1)(n-2)} \sum_{\substack{ i,j=1,\dots,n-1 \\i\ne j}} a_i a_j^*\right] \nonumber\\
&=\binom{n-1}{k}\left[ \frac{4k(n-1-k)}{(n-1)(n-2)}+\frac{(n-1)(n-2)-8k(n-1-k)}{(n-1)(n-2)}|a_n|^2\right]. \label{eq:AltSumSq}\end{aligned}$$ In deriving the second equality in [Eq. ]{}, we have employed the following facts $$\sum_{j=1}^{n-1}|a_j|^2=1-|a_n|^2,\quad\sum_{\substack{ i,j=1,\dots,n-1 \\i\ne j}} a_i a_j^*=\left|\sum_{ i=1}^{n-1} a_i\right|^2 -\sum_{j=1}^{n-1}|a_j|^2=|-a_n|^2-(1-|a_n|^2)=2|a_n|^2-1.$$
Now by plugging [Eq. ]{} into [Eq. ]{} we obtain $$\begin{aligned}
\label{eq:qan}
q= c_1(n)+\max_{\substack{\sum_{j} a_j=0\\\sum_{j} |a_j|^2=1}} c_2(n)|a_n|^2,\end{aligned}$$ where the coefficients $c_1(n)$ and $c_2(n)$ read $$\begin{aligned}
c_1(n)
:=&\frac{1}{2^{n-1}}\sum_{k=0}^{n-1}\frac{\binom{n-1}{k}4k(n-1-k)(n-1-2k)^2}{(n-1)(n-2)[1+(n-1-2k)^2]}=\frac{1}{2^{n-3}}\sum_{k=1}^{n-2}\frac{\binom{n-3}{k-1}(n-1-2k)^2}{[1+(n-1-2k)^2]}\nonumber\\=&\frac{1}{2^{n-3}}\sum_{k=0}^{n-3}\frac{\binom{n-3}{k}[1+(n-3-2k)^2-1]}{[1+(n-3-2k)^2]}
=1-h(n-3),\\
c_2(n)
:=&\frac{1}{2^{n-1}}\sum_{k=0}^{n-1}\frac{\binom{n-1}{k}}{1+(n-1-2k)^2}\left\{1+(n-1-2k)^2\left[1-\frac{2}{n-1}-\frac{8k(n-1-k)}{(n-1)(n-2)}\right]\right\} \nonumber\\
=&\frac{1}{2^{n-1}}\sum_{k=0}^{n-1}\binom{n-1}{k}-\frac{2}{n-1}\frac{1}{2^{n-1}}\sum_{k=0}^{n-1}\frac{\binom{n-1}{k}(n-1-2k)^2}{1+(n-1-2k)^2}-2c_1(n)\nonumber\\
=&1-\frac{2}{n-1}[1-h(n-1)]-2[1-h(n-3)]
=\frac{2}{n-1}h(n-1)+2h(n-3)-\frac{n+1}{n-1},\end{aligned}$$ and $h(n)$ is defined in [Eq. ]{}.
If $n=3$, then $c_1(n)=0$ and $c_2(n)=\frac{3}{5}>0$, so the maximum in [Eq. ]{} is attained when $a_1=a_2=-\frac{1}{\sqrt{6}}$ and $a_3=\sqrt{\frac{2}{3}}$, in which case we have $$q=c_1(3)+\frac{2}{3}c_2(3)=\frac{2}{5},$$ which confirms [Eq. ]{} in the case $n=3$.
If $n=4,5$, then $c_2(n)<0$ by direct calculation. If $n\geq 6$, then $$c_2(n)=\frac{2}{n-1}h(n-1)+2h(n-3)-\frac{n+1}{n-1}< \frac{1}{n-1}+1-\frac{n+1}{n-1}=-\frac{1}{n-1}<0,$$ where the first inequality follows from the fact that $h(n)<1/2$ for $n\geq3$, which is easy to prove. Therefore, $c_2(n)<0$ for $n\geq4$. In this case, the maximum in [Eq. ]{} is attained when $a_1=-a_2=\frac{1}{\sqrt{2}}$ and $a_j=0$ for $j=3,4,\dots,n$, which yields $$q=c_1(n)=1-h(n-3)$$ and confirms [Eq. ]{}.
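The coefficients $c_1(n)$ and $c_2(n)$ and the resulting overlap $q$ are easy to evaluate numerically; the sketch below reproduces $q=2/5$ for $n=3$ and confirms $c_2(n)<0$ for a range of $n\geq4$ (the function names are ours).

```python
from math import comb

def h(n):
    """h(n) = 2^-n * sum_j binom(n, j) / (1 + (n - 2j)^2)."""
    return sum((comb(n, j) / 2**n) / (1 + (n - 2 * j)**2) for j in range(n + 1))

def c1(n):
    return 1 - h(n - 3)

def c2(n):
    return 2 * h(n - 1) / (n - 1) + 2 * h(n - 3) - (n + 1) / (n - 1)

assert abs(c1(3)) < 1e-12 and abs(c2(3) - 3 / 5) < 1e-12
q3 = c1(3) + (2 / 3) * c2(3)                  # maximum attained at |a_3|^2 = 2/3
assert abs(q3 - 2 / 5) < 1e-12
assert all(c2(n) < 0 for n in range(4, 40))   # so q = c1(n) = 1 - h(n-3)
```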
\[app:h(n)property\]Proofs of Propositions \[pro:sqrt[n]{}h(n)Monot\] and \[pro:LimSqrt(n)h(n)\]
================================================================================================
To prove Proposition \[pro:sqrt[n]{}h(n)Monot\], it suffices to prove that $\sqrt{n+2}\,h(n+2)>\sqrt{n}\,h(n)$ for each integer $n\geq0$. When $n=0$, the inequality is obvious; when $n\geq1$, the inequality can be proved as follows, $$\begin{aligned}
&\frac{2^{n+2}}{\sqrt{n}}\left[\sqrt{n+2}\,h(n+2)-\sqrt{n}\,h(n)\right]
=\sqrt{\frac{n+2}{n}}\sum_{j=0}^{n+2}\frac{\binom{n+2}{j}}{1+(n+2-2j)^2}
-\sum_{j=0}^{n}\frac{4\binom{n}{j}}{1+(n-2j)^2}
\nonumber\\
&\quad> \frac{n+2}{n+1}\sum_{j=0}^{n+2}\frac{\binom{n+2}{j}}{1+(n+2-2j)^2}
-\sum_{j=0}^{n}\frac{4\binom{n}{j}}{1+(n-2j)^2}\nonumber= \frac{n+2}{n+1}\sum_{k=-1}^{n+1}\frac{\binom{n+2}{k+1}}{1+(n-2k)^2}
-\sum_{j=0}^{n}\frac{4\binom{n}{j}}{1+(n-2j)^2}
\nonumber\\
&\quad > \sum_{j=0}^{n}\frac{1}{1+(n-2j)^2}\left[\frac{n+2}{n+1}\binom{n+2}{j+1}-4\binom{n}{j}\right]\geq0.\end{aligned}$$ Here the first inequality holds because $\sqrt{\frac{n+2}{n}}>\frac{n+2}{n+1}$, and the last inequality holds because $$\frac{n+2}{n+1}\binom{n+2}{j+1}-4\binom{n}{j}
=\left[\frac{(n+2)^2}{(j+1)(n+1-j)}-4\right]\binom{n}{j}
\geq0,\qquad j=0,1,\dots,n.$$ Therefore, $\sqrt{n}\,h(n)$ is strictly monotonically increasing in $n$ for odd $n$ and even $n$, respectively.
First, [Eq. ]{} in Proposition \[pro:LimSqrt(n)h(n)\] can be derived as follows, $$\begin{aligned}
& \lim_{n \to +\infty} \sqrt{2n}\,h(2n)
=\lim_{n \to +\infty} \frac{\sqrt{2n}}{2^{2n}}\sum_{j=0}^{2n}\frac{\binom{2n}{j}}{1+(2n-2j)^2} = \left[\lim_{n \to +\infty} \frac{\sqrt{2n}}{2^{2n}}\binom{2n}{n}\right]
\left[\lim_{n\to+\infty} \sum_{j=0}^{2n}\frac{\binom{2n}{n}^{-1}\binom{2n}{j}}{1+(2n-2j)^2}
\right] \nonumber\\
&=\sqrt{\frac{2}{\pi}}
\left[\lim_{n\to+\infty} \sum_{j=0}^{2n}\frac{1}{1+(2n-2j)^2}
\right]= \sqrt{\frac{2}{\pi}}
\left[1+ \lim_{n \to +\infty} \sum_{k=1}^{n}\frac{2}{1+(2k)^2}\right] \nonumber\\
&= \sqrt{\frac{2}{\pi}} \left[1+ \frac{\pi}{2}\coth\left(\frac{\pi}{2}\right)-1 \right]
= \sqrt{\frac{\pi}{2}} \coth\Bigl(\frac{\pi}{2}\Bigr)
\approx 1.37, \label{eq:limEvenProof0}
\end{aligned}$$ where the third equality follows from [Eqs. (\[eq:limEvenProof1\]) and (\[eq:limEvenProof2\])]{} below, and the fifth equality is a corollary of [Eq. ]{} below, $$\begin{aligned}
&\lim_{n \to +\infty} \frac{\sqrt{2n}}{2^{2n}}\binom{2n}{n}
= \lim_{n \to +\infty} \frac{\sqrt{2n}(2n)!}{2^{2n}(n!)^2}
= \sqrt{\frac{2}{\pi}}\, , \label{eq:limEvenProof1} \\
&\lim_{n\to+\infty} \sum_{j=0}^{2n}\frac{\binom{2n}{n}^{-1}\binom{2n}{j}}{1+(2n-2j)^2}=\lim_{n\to+\infty} \sum_{j=0}^{2n}\frac{1}{1+(2n-2j)^2}, \label{eq:limEvenProof2}\\
&\lim_{n \to +\infty} \sum_{k=1}^{n}\frac{2}{1+(2k)^2}
=\frac{{\mathrm{i}}}{2} \lim_{n \to +\infty} \sum_{k=1}^{n} \Big(\frac{1}{{\mathrm{i}}/2-k}+\frac{1}{{\mathrm{i}}/2+k}\Big)
=\frac{{\mathrm{i}}}{2}\Big[\pi\cot\Bigl(\frac{{\mathrm{i}}\pi}{2}\Bigr)+2{\mathrm{i}}\Big]
=\frac{\pi}{2}\coth\Bigl(\frac{\pi}{2}\Bigr)-1. \label{eq:limEvenProof3}
\end{aligned}$$ The second equality in [Eq. ]{} follows from the Wallis formula \[see Eq. (1) in Ref. [@Piros03] for example\] or the Stirling formula; the second equality in [Eq. ]{} follows from Theorem 6.12 in Ref. [@Ullri08].
To prove [Eq. ]{}, note that the left-hand side in [Eq. ]{} cannot be larger than the right-hand side thanks to the inequality $\binom{2n}{j}\leq \binom{2n}{n}$. To complete the proof, it suffices to prove the opposite inequality, which can be derived as follows. $$\begin{aligned}
&\lim_{n\to+\infty} \sum_{j=0}^{2n}\frac{1}{1+(2n-2j)^2}-
\lim_{n\to+\infty} \sum_{j=0}^{2n}\frac{\binom{2n}{n}^{-1}\binom{2n}{j}}{1+(2n-2j)^2}
=2\lim_{n\to+\infty} \sum_{k=1}^{n}\frac{1-\binom{2n}{n}^{-1}\binom{2n}{n+k}}{1+4k^2}\nonumber\\
&=
2\lim_{n\to+\infty} \sum_{k=1}^{\lceil n^{2/3}\rceil}\frac{1-\binom{2n}{n}^{-1}\binom{2n}{n+k}}{1+4k^2}+
2\lim_{n\to+\infty} \sum_{k=\lceil n^{2/3}\rceil+1}^n\frac{1-\binom{2n}{n}^{-1}\binom{2n}{n+k}}{1+4k^2}
\nonumber\\
&\leq 2\lim_{n\to+\infty}
\sum_{k=1}^{\lceil n^{2/3}\rceil}\frac{k^2}{n(1+4k^2)}+2\lim_{n\to+\infty} \sum_{k=\lceil n^{2/3}\rceil}^n\frac{1}{1+4k^2}\leq
2\lim_{n\to+\infty}
\frac{\lceil n^{2/3}\rceil}{4n}+2\lim_{n\to+\infty} \frac{n}{4n^{4/3}}=0,\end{aligned}$$ where the first inequality is a consequence of the following equation, $$\binom{2n}{n}^{-1}\binom{2n}{n+k}=\frac{(n!)^2}{(n+k)!(n-k)!}=\frac{n(n-1)\cdots (n-k+1)}{(n+k)(n+k-1)\cdots (n+1)}\geq \left(\frac{n-k}{n}\right)^k\geq 1-\frac{k^2}{n},\quad k\in \{1,2,\dots,n\}.$$
Next, [Eq. ]{} in Proposition \[pro:LimSqrt(n)h(n)\] can be derived as follows, $$\begin{aligned}
& \lim_{n \to +\infty} \sqrt{2n+1}\,h(2n+1)
=\lim_{n \to +\infty} \frac{\sqrt{2n+1}}{2^{2n+1}}\sum_{j=0}^{2n+1}\frac{\binom{2n+1}{j}}{1+(2n+1-2j)^2} \nonumber\\
&= \left[\lim_{n \to +\infty} \frac{\sqrt{2n+1}}{2^{2n+1}}\binom{2n+1}{n}\right]
\left[\lim_{n\to+\infty} \sum_{j=0}^{2n+1}\frac{\binom{2n+1}{n}^{-1}\binom{2n+1}{j}}{1+(2n+1-2j)^2}
\right]= \sqrt{\frac{2}{\pi}}
\left[\lim_{n\to+\infty} \sum_{j=0}^{2n+1}\frac{1}{1+(2n+1-2j)^2}
\right]\nonumber\\
&= \sqrt{\frac{2}{\pi}}
\left[ \lim_{n \to +\infty} \sum_{k=0}^{n}\frac{2}{1+(2k+1)^2}\right]= \sqrt{\frac{2}{\pi}}\times \frac{\pi}{2}\tanh\left(\frac{\pi}{2}\right)
= \sqrt{\frac{\pi}{2}} \tanh\Bigl(\frac{\pi}{2}\Bigr)
\approx 1.15, \label{eq:limOddProof0}\end{aligned}$$ where the third equality follows from [Eqs. (\[eq:limOddProof1\]) and (\[eq:limOddProof2\])]{} below, and the fifth equality is a corollary of [Eq. ]{} below. $$\begin{aligned}
&\lim_{n \to +\infty} \frac{\sqrt{2n+1}}{2^{2n+1}}\binom{2n+1}{n}
= \bigg[\lim_{n \to +\infty} \frac{\sqrt{2n}(2n)!}{2^{2n}(n!)^2}\bigg] \bigg[\lim_{n \to +\infty} \sqrt{\frac{2n+1}{2n}} \frac{2n+1}{2(n+1)} \bigg]
= \sqrt{\frac{2}{\pi}}, \label{eq:limOddProof1}\\
&\lim_{n\to+\infty} \sum_{j=0}^{2n+1}\frac{\binom{2n+1}{n}^{-1}\binom{2n+1}{j}}{1+(2n+1-2j)^2}
= \lim_{n\to+\infty} \sum_{j=0}^{2n+1}\frac{1}{1+(2n+1-2j)^2},\label{eq:limOddProof2}\\
&\lim_{n \to +\infty} \sum_{k=0}^{n}\frac{2}{1+(2k+1)^2}
=\bigg(\lim_{n \to +\infty} \sum_{k=1}^{2n+1}\frac{2}{1+k^2}\bigg) - \bigg[\lim_{n \to +\infty} \sum_{k=1}^{n}\frac{2}{1+(2k)^2}\bigg] \nonumber \\
&\quad ={\mathrm{i}}\lim_{n \to +\infty} \sum_{k=1}^{n} \Big(\frac{1}{{\mathrm{i}}-k}+\frac{1}{{\mathrm{i}}+k}\Big) - \Big[\frac{\pi}{2}\coth\Bigl(\frac{\pi}{2}\Bigr)-1\Big]
={\mathrm{i}}\big[\pi\cot({\mathrm{i}}\pi)+{\mathrm{i}}\big]-\frac{\pi}{2}\coth\Bigl(\frac{\pi}{2}\Bigr)+1
=\frac{\pi}{2}\tanh\Bigl(\frac{\pi}{2}\Bigr). \label{eq:limOddProof3}
\end{aligned}$$ The second equality in [Eq. (\[eq:limOddProof1\])]{} follows from the Wallis formula \[cf. [Eq. ]{}\]; the second and third equalities in [Eq. (\[eq:limOddProof3\])]{} follow from [Eq. ]{} above and Theorem 6.12 in Ref. [@Ullri08], respectively; [Eq. (\[eq:limOddProof2\])]{} can be proved in a similar way as [Eq. ]{}, given the following equation, $$\binom{2n+1}{n}^{-1}\binom{2n+1}{n+1+k}=\frac{n!(n+1)!}{(n+1+k)!(n-k)!}\geq \left(\frac{n-k}{n+1}\right)^k\geq 1-\frac{k^2+k}{n+1},\quad k\in \{0,1,\dots,n\}.$$
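The two limits derived above can be checked numerically. The sketch below (our own names) evaluates $h(n)=2^{-n}\sum_{j}\binom{n}{j}/[1+(n-2j)^2]$, the expression appearing in the derivations, using log-binomials so that large $n$ does not overflow floating point.

```python
from math import lgamma, exp, log, sqrt, pi, tanh, cosh, sinh

def h(n):
    # h(n) = 2^{-n} sum_j C(n,j) / (1 + (n-2j)^2), via log-binomials
    logC = lambda j: lgamma(n + 1) - lgamma(j + 1) - lgamma(n - j + 1)
    return sum(exp(logC(j) - n * log(2)) / (1 + (n - 2 * j) ** 2)
               for j in range(n + 1))

coth = lambda x: cosh(x) / sinh(x)

# sqrt(n) h(n) approaches sqrt(pi/2) tanh(pi/2) along odd n
# and sqrt(pi/2) coth(pi/2) along even n
odd = sqrt(20001) * h(20001)
even = sqrt(20000) * h(20000)
assert abs(odd - sqrt(pi / 2) * tanh(pi / 2)) < 0.02
assert abs(even - sqrt(pi / 2) * coth(pi / 2)) < 0.02
```

For $n$ near $2\times 10^4$ the finite-$n$ values already agree with the limits $\approx 1.15$ (odd) and $\approx 1.37$ (even) to within roughly $0.01$.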
\[app:ProofEq:bound-nuW\]Bounds for $\nu(\Omega_{W_n})$ and proof of [Eq. ]{}
=============================================================================
To derive lower bounds for $\nu(\Omega_{W_n})$, we shall consider two cases depending on the parity of the qubit number $n$.
1. $n$ is an odd integer.
Direct calculation based on [Eqs. (\[eq:hn\]) and (\[eq:nuW\])]{} shows that $\sqrt{n}\nu(\Omega_{W_n})>3/10$ for $3\leq n\leq33$. When $n\geq 35$, we have $$\sqrt{n}\,\nu(\Omega_{W_n})
>\frac{1}{4}\sqrt{n-3}\,h(n-3)
\geq\frac{1}{4}\sqrt{32}\,h(32)>\frac{3}{10},$$ where the first inequality follows from [Eq. ]{}, and the second inequality follows from Proposition \[pro:sqrt[n]{}h(n)Monot\]. Therefore, $\sqrt{n}\nu(\Omega_{W_n})>3/10$ when $n$ is odd and $n\geq 3$, which implies the lower bound in [Eq. ]{}.
2. $n$ is an even integer.
Direct calculation based on [Eqs. (\[eq:hn\]) and (\[eq:nuW\])]{} shows that $\sqrt{n}\nu(\Omega_{W_n})>1/4$ for $4\leq n\leq42$. When $n\geq 44$, we have $$\sqrt{n}\,\nu(\Omega_{W_n})
>\frac{1}{4}\sqrt{n-3}\,h(n-3)
\geq\frac{1}{4}\sqrt{41}\,h(41)>\frac{1}{4},$$ where the first inequality follows from [Eq. ]{}, and the second inequality follows from Proposition \[pro:sqrt[n]{}h(n)Monot\]. Therefore, $\sqrt{n}\nu(\Omega_{W_n})>1/4$ when $n$ is even and $n\geq 4$, which implies the lower bound in [Eq. ]{} again.
In conclusion, the lower bound in [Eq. ]{} holds for any integer $n$ that satisfies $n\geq 3$.
To derive upper bounds for $\nu(\Omega_{W_n})$, we also consider two cases depending on the parity of the qubit number $n$.
1. $n$ is an odd integer.
Direct calculation based on [Eqs. (\[eq:hn\]) and (\[eq:nuW\])]{} shows that $\sqrt{n}\nu(\Omega_{W_n})<3/8$ for $3\leq n\leq45$ with $n\ne5$ and $\sqrt{n}\nu(\Omega_{W_n})<0.411<1/2$ when $n=5$. When $n\geq 47$, we have $$\sqrt{n}\,\nu(\Omega_{W_n})
=\sqrt{n-3}\,h(n-3) \sqrt{\frac{n}{n-3}} \frac{1-\sqrt{1-h(n-3)}}{2h(n-3)}
< \sqrt{\frac{\pi}{2}} \coth\Bigl(\frac{\pi}{2}\Bigr) \times 1.034 \times 0.265 < \frac{3}{8},$$ where the first inequality follows from [Eq. ]{} and the following equations, $$\begin{gathered}
\sqrt{\frac{n}{n-3}}\leq \sqrt{\frac{47}{47-3}}<1.034, \\
h(n-3)\leq \frac{1}{\sqrt{n-3}} \times \sqrt{\frac{\pi}{2}} \coth\Bigl(\frac{\pi}{2}\Bigr)
\leq \sqrt{\frac{\pi}{2(47-3)}} \coth\Bigl(\frac{\pi}{2}\Bigr) < 0.207, \label{eq:hUpperBoundOdd} \\
\frac{1-\sqrt{1-h(n-3)}}{2h(n-3)} < \frac{1-\sqrt{1-0.207}}{2\times0.207}<0.265. \label{eq:(1-sqrt)UpperBoundOdd}\end{gathered}$$ The first inequality in [Eq. ]{} follows from [Eq. ]{}; the first inequality in [Eq. ]{} follows from [Eq. ]{} and the fact that the real-valued function $(1-\sqrt{1-x})/(2x)$ is monotonically increasing in $x$ when $0<x\leq1$. Therefore, $\sqrt{n}\nu(\Omega_{W_n})<3/8$ when $n$ is odd and $n\geq3, n\ne5$, which implies the upper bound in [Eq. ]{}.
2. $n$ is an even integer.
Direct calculation based on [Eqs. (\[eq:hn\]) and (\[eq:nuW\])]{} shows that $\sqrt{n}\nu(\Omega_{W_n})<0.31$ for $4\leq n\leq52$. When $n\geq 54$, we have $$\sqrt{n}\,\nu(\Omega_{W_n})
=\sqrt{n-3}\,h(n-3) \sqrt{\frac{n}{n-3}} \frac{1-\sqrt{1-h(n-3)}}{2h(n-3)}
< \sqrt{\frac{\pi}{2}} \tanh\Bigl(\frac{\pi}{2}\Bigr) \times 1.03 \times 0.261 < 0.31,$$ where the first inequality follows from [Eq. ]{} and the following equations, $$\begin{gathered}
\sqrt{\frac{n}{n-3}}\leq \sqrt{\frac{54}{54-3}}<1.03, \\
h(n-3)\leq \sqrt{\frac{\pi}{2(n-3)}} \tanh\Bigl(\frac{\pi}{2}\Bigr)
\leq \sqrt{\frac{\pi}{2(54-3)}} \tanh\Bigl(\frac{\pi}{2}\Bigr) < 0.161, \label{eq:hUpperBoundEven} \\
\frac{1-\sqrt{1-h(n-3)}}{2h(n-3)} < \frac{1-\sqrt{1-0.161}}{2\times0.161}<0.261. \label{eq:(1-sqrt)UpperBoundEven}\end{gathered}$$ The first inequality in [Eq. ]{} follows from [Eq. ]{}; the first inequality in [Eq. ]{} follows from [Eq. ]{} and the fact that the real-valued function $(1-\sqrt{1-x})/(2x)$ is monotonically increasing in $x$ when $0<x\leq1$. Therefore, $\sqrt{n}\nu(\Omega_{W_n})<0.31$ when $n$ is even and $n\geq4$, which implies the upper bound in [Eq. ]{} again.
In conclusion, [Eq. ]{} holds for any integer $n$ that satisfies $n\geq 3$.
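The parity-dependent bounds above can be spot-checked numerically. The sketch below (our own names) uses the expression $\nu(\Omega_{W_n})=\bigl(1-\sqrt{1-h(n-3)}\bigr)/2$ read off the large-$n$ displays above, and so only probes the regime $n\geq 47$ treated by the asymptotic argument.

```python
from math import lgamma, exp, log, sqrt

def h(n):
    # h(n) = 2^{-n} sum_j C(n,j) / (1 + (n-2j)^2), via log-binomials
    logC = lambda j: lgamma(n + 1) - lgamma(j + 1) - lgamma(n - j + 1)
    return sum(exp(logC(j) - n * log(2)) / (1 + (n - 2 * j) ** 2)
               for j in range(n + 1))

def nu(n):
    # the closed form used in the large-n branch of the argument above
    return (1 - sqrt(1 - h(n - 3))) / 2

for n in range(47, 301):
    x = sqrt(n) * nu(n)
    if n % 2 == 1:        # odd n >= 47: lower bound 3/10, upper bound 3/8
        assert 3 / 10 < x < 3 / 8
    elif n >= 54:         # even n >= 54: lower bound 1/4, upper bound 0.31
        assert 1 / 4 < x < 0.31
```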
\[app:Proof Lim sqrt(n)nu\]Proofs of [Eqs. (\[eq:Lim sqrt(2n+1)nu\]) and (\[eq:Lim sqrt(2n)nu\])]{}
===================================================================================================
Equation (\[eq:Lim sqrt(2n+1)nu\]) can be proved as follows, $$\begin{aligned}
&\lim_{n \to +\infty} \sqrt{2n+1}\nu(\Omega_{W_{2n+1}})
=\lim_{n \to +\infty} \sqrt{2n+1}\,\frac{1-\sqrt{1-h(2n-2)}}{2} \nonumber\\
&=\left[\lim_{n \to +\infty} \sqrt{2n-2}\,h(2n-2) \right]
\left(\lim_{n \to +\infty} \sqrt{\frac{2n+1}{2n-2}} \right)
\left[\lim_{n \to +\infty} \frac{1-\sqrt{1-h(2n-2)}}{2h(2n-2)}\right]
= \frac{1}{4} \sqrt{\frac{\pi}{2}} \coth\Bigl(\frac{\pi}{2}\Bigr)
\approx 0.342.\end{aligned}$$ Here the first equality follows from [Eq. ]{}; the third one follows from [Eq. ]{} and the fact that $\lim_{n \to +\infty}h(n)=0$.
Equation (\[eq:Lim sqrt(2n)nu\]) can be proved as follows, $$\begin{aligned}
&\lim_{n \to +\infty} \sqrt{2n}\nu(\Omega_{W_{2n}})
= \lim_{n \to +\infty} \sqrt{2n}\,\frac{1-\sqrt{1-h(2n-3)}}{2} \nonumber\\
&=\left[\lim_{n \to +\infty} \sqrt{2n-3}\,h(2n-3) \right]
\left(\lim_{n \to +\infty} \sqrt{\frac{2n}{2n-3}} \right)
\left[\lim_{n \to +\infty} \frac{1-\sqrt{1-h(2n-3)}}{2h(2n-3)}\right]
= \frac{1}{4} \sqrt{\frac{\pi}{2}} \tanh\Bigl(\frac{\pi}{2}\Bigr)
\approx 0.287 .\end{aligned}$$ The first equality follows from [Eq. ]{}; the third one follows from [Eq. ]{} and the fact that $\lim_{n \to +\infty}h(n)=0$.
\[app:Wsymmetrization\]Proofs of [Eqs. (\[eq:TraceP1P2\]), (\[eq:lambda(Omega\^G)\]), and (\[eq:nu(Omega\^G)Bound\])]{}
=======================================================================================================================
The equality ${\operatorname{tr}}(P_1 P_2^G)={\operatorname{tr}}(P_1 P_2)$ in [Eq. ]{} follows from the fact that $P_1$ is invariant under the action of $G$, that is, $P_1^G=P_1$. The second equality in [Eq. ]{} can be derived from Eqs. (\[eq:P1\]) and (\[eq:P2\]) as follows, $$\begin{aligned}
{\operatorname{tr}}(P_1 P_2)&=\<00\dots01|P_2|00\dots01\>+ \sum_{u\in B_{n-1}^1}(\<u|\otimes\<0|)P_2(|u\>\otimes|0\>) \nonumber\\
&=h(n-1)+(n-1)[1-h(n-1)]
=n-1-(n-2)h(n-1),\end{aligned}$$ where $B_{n-1}^1$ is the set of strings in $\{0,1\}^{n-1}$ with Hamming weight 1. Here the second equality follows from the following equations, $$\begin{aligned}
&\<00\dots01|P_2|00\dots01\>
=\sum_{x\in\{0,1\}^{n-1}}|\<00\dots0|\alpha_{x}\>|^2 \cdot |\<1|\beta_{x}\>|^2 =\frac{1}{2^{n-1}}\sum_{x\in\{0,1\}^{n-1}}\frac{1}{1+(n-1-2|x|)^2}
\nonumber\\
&\quad=\frac{1}{2^{n-1}}\sum_{j=0}^{n-1}\frac{\binom{n-1}{j}}{1+(n-1-2j)^2}
=h(n-1), \\
&(\<u|\otimes\<0|)P_2(|u\>\otimes|0\>)
=\sum_{x\in\{0,1\}^{n-1}}|\<u|\alpha_{x}\>|^2 \cdot |\<0|\beta_{x}\>|^2 =\frac{1}{2^{n-1}}\sum_{x\in\{0,1\}^{n-1}}\left[1-\frac{1}{1+(n-1-2|x|)^2}\right]
\nonumber\\
&\quad=1 - \frac{1}{2^{n-1}}\sum_{j=0}^{n-1}\frac{\binom{n-1}{j}}{1+(n-1-2j)^2}
=1-h(n-1),\qquad u\in B_{n-1}^1.\end{aligned}$$
Note that $\Omega_{W_{n}}^G$ can be expressed as follows, $$\Omega_{W_{n}}^G=pP_1+(1-p)P_2^G=pP_1+(1-p)P_1P_2^GP_1+(1-p)(\openone-P_1)P_2^G(\openone-P_1),$$ given that $P_1$ and $P_2^G$ commute with each other. Therefore, $$\begin{aligned}
\lambda_2(\Omega_{W_{n}}^G)&=\max\left\{ p+(1-p)\|P_1 \bar{P}_2^G P_1\|, (1-p)\|(\openone-P_1)P_2^G(\openone-P_1)\| \right\}
=1-p=\frac{n-1}{n+(n-2)h(n-1)}.\end{aligned}$$ Here the second equality follows from the equality $p+(1-p)\|P_1 \bar{P}_2^G P_1\|=1-p$ \[cf. [Eq. ]{}\] and the inequality $(1-p)\|(\openone-P_1)P_2^G(\openone-P_1)\|\leq 1-p$.
The equalities in [Eq. ]{} follow from [Eq. ]{}.
When $3\leq n\leq40$, the lower bound in [Eq. ]{} can be verified directly by virtue of [Eq. ]{}. When $n\geq41$, the lower bound follows from the following equation $$\label{eq:sqrt{n}}
\sqrt{n}+(n-2)\sqrt{n} h(n-1)+1> \sqrt{n}+n-1> n.$$ Here the first inequality is a consequence of the inequalities $\sqrt{n} h(n-1)> \sqrt{n-1} h(n-1)>1$, the second of which follows from Proposition \[pro:sqrt[n]{}h(n)Monot\] and the assumption $n\geq41$, given that $\sqrt{40}\,h(40)>\sqrt{41}\,h(41)>1$. This observation completes the proof of [Eq. ]{}.
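The inequality chain in Eq. (\[eq:sqrt{n}\]) and the intermediate bound $\sqrt{n-1}\,h(n-1)>1$ for $n\geq41$ can be confirmed numerically; a short sketch (names ours):

```python
from math import lgamma, exp, log, sqrt

def h(n):
    # h(n) = 2^{-n} sum_j C(n,j) / (1 + (n-2j)^2), via log-binomials
    logC = lambda j: lgamma(n + 1) - lgamma(j + 1) - lgamma(n - j + 1)
    return sum(exp(logC(j) - n * log(2)) / (1 + (n - 2 * j) ** 2)
               for j in range(n + 1))

# sqrt(n-1) h(n-1) > 1 for n >= 41 drives the first inequality in the chain
for n in range(41, 200):
    assert sqrt(n - 1) * h(n - 1) > 1
    assert sqrt(n) + (n - 2) * sqrt(n) * h(n - 1) + 1 > sqrt(n) + n - 1 > n
```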
\[app:TheoDstateGenProof\]Proof of [Theorem \[thm:DstateGen\]]{}
================================================================
In analogy to $\Omega_{\mathbf{k}}$ (cf. Appendix \[app:TheoDickeProof\]), the verification operator $\Omega_{\mathbf{k}}^\phi$ can be expressed as $$\begin{aligned}
\label{eq:decomOmegaD'}
\Omega_{\mathbf{k}}^\phi
&= \binom{n}{2}^{-1}\sum_{i<j} \sum_{s=0}^g {\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{ss}) \otimes \left[({\mbox{$ | s \rangle $}}{\mbox{$ \langle s | $}})^{\otimes2}\right]_{i,j}
+\binom{n}{2}^{-1}\sum_{i<j}\sum_{s<t}\sum_{u\in B({\mathbf{k}}_{st})}|u\>\<u| \otimes
\big(\Gamma_{i,j,u}^+\otimes \Gamma_{j,i,u}^+ + \Gamma_{i,j,u}^- \otimes \Gamma_{j,i,u}^-\big)_{i,j}\nonumber\\
&=\frac{1}{n(n-1)} \bigg(\sum_{s=0}^r k_s^2-n\bigg) \mathcal{Z}({\mathbf{k}})
+\frac{2}{n(n-1)}\sum_{i<j}\sum_{s<t} {\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{st})
\otimes \Big[\bigl({\mbox{$ | \varphi_{s,t}^+ \rangle $}}{\mbox{$ \langle \varphi_{s,t}^+ | $}}\bigr)_{i,j} + \frac{1}{2}\bigl(|st\>\<st|+|ts\>\<ts|\bigl)_{i,j}\Big] \nonumber\\
&\ \quad +\frac{1}{n(n-1)}\sum_{i<j}\sum_{s<t}\sum_{u\in B({\mathbf{k}}_{st})}|u\>\<u| \otimes
\Big[{\mathrm{e}}^{{\mathrm{i}}\phi(v(i,j,u))}|st\>\<ts|{\mathrm{e}}^{-{\mathrm{i}}\phi(v(j,i,u))}
+{\mathrm{e}}^{{\mathrm{i}}\phi(v(j,i,u))}|ts\>\<st|{\mathrm{e}}^{-{\mathrm{i}}\phi(v(i,j,u))} \Big]_{i,j} \nonumber\\
&= \frac{1}{n(n-1)}\bigg[M'_1+\sum_{s<t}M_{(s,t)}\bigg]\,.\end{aligned}$$ Here ${\mbox{$ | \varphi_{s,t}^+ \rangle $}}=\frac{1}{\sqrt{2}}({\mbox{$ | s \rangle $}}{\mbox{$ | s \rangle $}}+{\mbox{$ | t \rangle $}}{\mbox{$ | t \rangle $}})$, ${\mathcal{Z}}({\mathbf{k}})=\sum_{u\in B({\mathbf{k}})}{\mbox{$ | u \rangle $}}{\mbox{$ \langle u | $}}$, ${\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{st})$ is defined in [Eq. ]{}, $v(i,j,u)$ and $v(j,i,u)$ are defined in [Eqs. (\[eq:viju\]) and (\[eq:vjiu\])]{}, $M_{(s,t)}$ is defined in [Eq. ]{}, and $$\begin{aligned}
M'_1:=&\bigg(\sum_{s=0}^r k_s^2-n+\sum_{s<t} k_s k_t \bigg){\mathcal{Z}}({\mathbf{k}})
+\sum_{\substack{u,v\in B({\mathbf{k}})\\ u\sim v }} \left({\mathrm{e}}^{{\mathrm{i}}\phi(u)}{\mbox{$ | u \rangle $}}\right)\left({\mbox{$ \langle v | $}}{\mathrm{e}}^{-{\mathrm{i}}\phi(v)}\right) \nonumber\\
=&\frac{1}{2}\bigg(n^2-2n+\sum_{s=0}^r k_s^2 \bigg){\mathcal{Z}}({\mathbf{k}})
+\sum_{u,v\in B({\mathbf{k}}) }A_{u v}\left({\mathrm{e}}^{{\mathrm{i}}\phi(u)}{\mbox{$ | u \rangle $}}\right)\left({\mbox{$ \langle v | $}}{\mathrm{e}}^{-{\mathrm{i}}\phi(v)}\right)\,,\end{aligned}$$ where the notation $u\sim v$ means $u_j\neq v_j$ for exactly two values of $j$. The coefficient matrix $(A_{uv})$ for $u,v\in B({\mathbf{k}})$ happens to be the adjacency matrix $A({\mathbf{k}})$ of the transposition graph $G({\mathbf{k}})$ [@Chase1973] (cf. Appendix \[app:SpecGraph\]).
Note that $M'_1$ can be turned into $M_1$ in [Eq. ]{} by a diagonal unitary transformation; similarly, $\Omega_{\mathbf{k}}^\phi$ can be turned into $\Omega_{\mathbf{k}}$ by a diagonal unitary transformation \[cf. Appendix \[app:TheoDickeProof\]\]. Therefore, $\Omega_{\mathbf{k}}^\phi$ and $\Omega_{\mathbf{k}}$ have the same spectrum and the same spectral gap. Thanks to [Eqs. (\[eq:gapD\]) and (\[eq:nuOmegaD\])]{}, we have $$\nu\big(\Omega_{\mathbf{k}}^\phi\big)=\nu\big(\Omega_{\mathbf{k}}\big)=
\min\left\{\frac{1}{n-1},1-\frac{ k_0(k_0+1)+k_1(k_1+1) }{2n(n-1)} \right\}=\begin{cases}
1/2 & {\mathbf{k}}=(1,1,1), \\
1/3 & {\mathbf{k}}=(2,1), \\
1/(n-1) \ \ & n\ge4.
\end{cases}$$ This result confirms [Eq. ]{} and implies [Eq. ]{} in view of [Eq. ]{} (cf. Theorem \[thm:Dicke\]).
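The case analysis in the last display is a direct evaluation of the min formula; a minimal sketch (the function name is ours):

```python
def nu_dicke(k):
    # min{ 1/(n-1), 1 - (k0(k0+1) + k1(k1+1)) / (2n(n-1)) }
    # with k0 >= k1 the two largest entries of k
    ks = sorted(k, reverse=True)
    n = sum(ks)
    k0, k1 = ks[0], (ks[1] if len(ks) > 1 else 0)
    return min(1 / (n - 1),
               1 - (k0 * (k0 + 1) + k1 * (k1 + 1)) / (2 * n * (n - 1)))

assert nu_dicke((1, 1, 1)) == 0.5           # k = (1,1,1)
assert abs(nu_dicke((2, 1)) - 1 / 3) < 1e-12  # k = (2,1)
assert nu_dicke((2, 2)) == 1 / 3            # n = 4: the 1/(n-1) branch
```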
\[app:TheoAntiStateProof\]Proof of [Theorem \[thm:Antisymmetric\]]{}
====================================================================
The verification operator $\Omega_{{\mathrm{AS}}_n}$ can be expressed as $$\begin{aligned}
\Omega_{{\mathrm{AS}}_n}
&= \binom{n}{2}^{-1}\sum_{i<j}\sum_{s<t}{\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{st}) \otimes \Big(T_{s,t}^+\otimes T_{s,t}^- + T_{s,t}^-\otimes T_{s,t}^+\Big)_{i,j}\nonumber\\
&= \frac{2}{n(n-1)}\sum_{i<j}\sum_{s<t} {\bar{\mathcal{Z}}}_{i,j}({\mathbf{k}}_{st})
\otimes \Big[\bigl({\mbox{$ | \psi_{s,t}^- \rangle $}}{\mbox{$ \langle \psi_{s,t}^- | $}}\bigr)_{i,j}+\bigl({\mbox{$ | \varphi_{s,t}^- \rangle $}}{\mbox{$ \langle \varphi_{s,t}^- | $}}\bigr)_{i,j}\Big] \nonumber\\
&= \frac{1}{n(n-1)}\bigg[M_1^{{\mathrm{AS}}}+\sum_{s<t}\sum_{\substack{u\in B({{\mathbf{k}}}^s_{t})\\v\in B({{\mathbf{k}}}^t_{s})\\ u\sim v}}X_{s,t}^{u,v}\bigg]\,.\label{eq:decomOmegaAS}
\end{aligned}$$ Here ${\mbox{$ | \psi_{s,t}^- \rangle $}}=\frac{1}{\sqrt{2}}({\mbox{$ | s \rangle $}}{\mbox{$ | t \rangle $}}-{\mbox{$ | t \rangle $}}{\mbox{$ | s \rangle $}})$, ${\mbox{$ | \varphi_{s,t}^- \rangle $}}=\frac{1}{\sqrt{2}}({\mbox{$ | s \rangle $}}{\mbox{$ | s \rangle $}}-{\mbox{$ | t \rangle $}}{\mbox{$ | t \rangle $}})$, $$\begin{aligned}
M_1^{{\mathrm{AS}}}&:=\frac{n(n-1)}{2} {\mathcal{Z}}({\mathbf{k}})-\sum_{\substack{u,v\in B({\mathbf{k}})\\ u\sim v }}{\mbox{$ | u \rangle $}}{\mbox{$ \langle v | $}}
=\frac{n(n-1)}{2} {\mathcal{Z}}({\mathbf{k}})-\sum_{u,v\in B({\mathbf{k}}) }A_{u v}{\mbox{$ | u \rangle $}}{\mbox{$ \langle v | $}}\,,\label{eq:M1AS}\\
X_{s,t}^{u,v}&:={\mbox{$ | u \rangle $}}{\mbox{$ \langle u | $}}+{\mbox{$ | v \rangle $}}{\mbox{$ \langle v | $}}-{\mbox{$ | u \rangle $}}{\mbox{$ \langle v | $}}-{\mbox{$ | v \rangle $}}{\mbox{$ \langle u | $}},
\end{aligned}$$ and the notation $u\sim v$ means $u_j\neq v_j$ for exactly two values of $j$. In addition, the coefficient matrix $(A_{uv})$ for $u,v\in B({\mathbf{k}})$ happens to be the adjacency matrix $A({\mathbf{k}})$ of the transposition graph $G({\mathbf{k}})$ [@Chase1973]. Note that $M_1^{{\mathrm{AS}}}$ and all $X_{s,t}^{u,v}$ \[with $s<t$, $u\in B\big({{\mathbf{k}}}^s_{t}\big)$, $v\in B\big({{\mathbf{k}}}^t_{s}\big)$, and $ u\sim v$\] are hermitian and have mutually orthogonal supports, so all of them are positive semidefinite given that $\Omega_{{\mathrm{AS}}_n}$ is positive semidefinite by construction.
According to [Lemma \[lem:CaylGraphSpect\]]{} in Appendix \[app:SpecGraph\], the smallest eigenvalue of $A({\mathbf{k}})$ is equal to $-n(n-1)/2$ with multiplicity 1, and the second smallest eigenvalue of $A({\mathbf{k}})$ is $n-n(n-1)/2$. Therefore, the two largest eigenvalues of $M_1^{{\mathrm{AS}}}$ read $$\label{eq:M1ASlambda}
\lambda_1(M_1^{{\mathrm{AS}}})=n(n-1), \qquad \lambda_2(M_1^{{\mathrm{AS}}})=n(n-1)-n=n(n-2).$$ In addition, direct calculations show that the maximum eigenvalue of $X_{s,t}^{u,v}$ is 2. In conjunction with [Eqs. (\[eq:decomOmegaAS\]) and (\[eq:M1ASlambda\])]{}, we can deduce the second largest eigenvalue and spectral gap of $\Omega_{{\mathrm{AS}}_n}$, with the result (assuming $n\geq3$) $$\begin{aligned}
\lambda_2\big(\Omega_{{\mathrm{AS}}_n}\big)&=\max\left\{\frac{\lambda_2(M_1^{{\mathrm{AS}}})}{n(n-1)},\,\max_{s<t}\frac{\lambda_1(X_{s,t}^{u,v})}{n(n-1)}\right\}=\frac{n-2}{n-1},\\
\nu\big(\Omega_{{\mathrm{AS}}_n}\big)&=1-\lambda_2\big(\Omega_{{\mathrm{AS}}_n}\big)=\frac{1}{n-1},
\label{eq:gapAS}
\end{aligned}$$ which confirms [Eq. ]{}.
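For $n=3$, i.e. ${\mathbf{k}}=(1,1,1)$, the eigenvalue claims about $M_1^{{\mathrm{AS}}}=3\,\openone-A({\mathbf{k}})$ can be verified with hand-picked eigenvectors. A pure-Python sketch (names ours; the vertex order is lexicographic, which differs from the ordering used in the example of Appendix \[app:SpecGraph\]):

```python
from itertools import permutations

# vertices of G(k) for k = (1,1,1): the six permutations of (0,1,2)
verts = sorted(set(permutations(range(3))))
A = [[int(sum(a != b for a, b in zip(u, v)) == 2) for v in verts] for u in verts]
# M_1^AS = (n(n-1)/2) Z(k) - A(k) = 3 I - A for n = 3
M1 = [[(3 if i == j else 0) - A[i][j] for j in range(6)] for i in range(6)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def sign(p):
    # parity of a permutation via its inversion count
    inv = sum(q > r for i, q in enumerate(p) for r in p[i + 1:])
    return (-1) ** inv

v_sign = [sign(p) for p in verts]   # alternating over the even/odd bipartition
assert matvec(M1, v_sign) == [6 * x for x in v_sign]  # lambda_1 = n(n-1) = 6
assert matvec(M1, [1] * 6) == [0] * 6                 # smallest eigenvalue 0
v2 = [1, 0, 0, -1, 0, 0]            # difference of two even permutations
assert matvec(M1, v2) == [3 * x for x in v2]          # lambda_2 = n(n-2) = 3
```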
Equation follows from [Eqs. (\[eq:NumberTest\]) and (\[eq:nuOmegaAS\])]{}.
\[app:ComputeOmegaASG\]Proofs of Theorem \[thm:OmegaASG\] and [Lemma \[lem:DimRatio\]]{}
========================================================================================
According to [Eq. ]{}, we have $$\Omega_{{\mathrm{AS}}_n}^{\tilde{G}}=\sum_{\mu\vdash n}\frac{1}{d_\mu D_\mu} {\operatorname{tr}}\bigl(\Omega_{{\mathrm{AS}}_n}^{\tilde{G}} P_\mu\bigr) P_\mu=\sum_{\mu\vdash n}\frac{1}{d_\mu D_\mu} {\operatorname{tr}}\bigl(\Omega_{{\mathrm{AS}}_n} P_\mu\bigr) P_\mu.$$ By representation theory, the projector $P_\mu$ can be expressed as follows, $$P_\mu =\frac{d_\mu}{n!}\sum_{\sigma\in {\mathscr{S}}_n}\chi_\mu(\sigma) U_\sigma,$$ where $U_\sigma$ is the unitary operator corresponding to the permutation $\sigma$, and $\chi_\mu(\sigma)$ is the character of $\sigma$ associated with the representation labeled by $\mu$. Therefore, $${\operatorname{tr}}\bigl(\Omega_{{\mathrm{AS}}_n} P_\mu\bigr)=\frac{1}{n(n-1)}\bigg[{\operatorname{tr}}(P_\mu M_1^{{\mathrm{AS}}})+\sum_{s<t}\sum_{\substack{u\in B({{\mathbf{k}}}^s_{t})\\v\in B({{\mathbf{k}}}^t_{s})\\ u\sim v}}{\operatorname{tr}}(P_\mu X_{s,t}^{u,v})\bigg]=d_\mu^2,$$ which implies [Eq. ]{}. Here the first equality follows from [Eq. ]{}, and the notation $u\sim v$ means $u_j\neq v_j$ for exactly two values of $j$. The second equality follows from [Eqs. (\[eq:PmuM\]) and (\[eq:PmuX\])]{} below, $$\begin{aligned}
&{\operatorname{tr}}(P_\mu M_1^{{\mathrm{AS}}})=\frac{n(n-1)}{2}{\operatorname{tr}}[P_\mu {\mathcal{Z}}({\mathbf{k}}) ]-\sum_{u,v\in B({\mathbf{k}})} A_{u,v} \<v|P_\mu |u\>
=\frac{n(n-1)}{2}\sum_{u\in B({\mathbf{k}})} \<u |P_\mu |u\> -\sum_{\substack{u,v\in B({\mathbf{k}})\\ u\sim v }} \<v|P_\mu |u\>\nonumber\\
&\quad =\frac{n(n-1)}{2( n!)}d_\mu^2 |B({\mathbf{k}})|
- \sum_{\substack{u,v\in B({\mathbf{k}})\\ u\sim v }} \frac{d_\mu \chi_\mu(\tau)}{n!}=\frac{n(n-1)}{2}d_\mu^2 -\frac{n(n-1)}{2}d_\mu \chi_\mu(\tau), \label{eq:PmuM}\\
&\sum_{s<t}\sum_{\substack{u\in B({{\mathbf{k}}}^s_{t})\\v\in B({{\mathbf{k}}}^t_{s})\\ u\sim v}}{\operatorname{tr}}(P_\mu X_{s,t}^{u,v})=\sum_{s<t}\sum_{\substack{u\in B({{\mathbf{k}}}^s_{t})\\v\in B({{\mathbf{k}}}^t_{s})\\ u\sim v}}(\< u |P_\mu |u\> +\<v | P_\mu |v\>)=2\sum_{s<t}\sum_{u\in B({{\mathbf{k}}}^s_{t})}\< u |P_\mu |u\> \nonumber\\
&\quad =n(n-1)|B({{\mathbf{k}}}^s_{t})|\frac{d_\mu}{n!}[d_\mu +\chi_\mu(\tau)]
=\frac{n(n-1)}{2}d_\mu^2 +\frac{n(n-1)}{2}d_\mu \chi_\mu(\tau), \label{eq:PmuX}
\end{aligned}$$ where $\tau\in {\mathscr{S}}_n$ is any transposition.
Alternatively, the trace ${\operatorname{tr}}\bigl(\Omega_{{\mathrm{AS}}_n} P_\mu\bigr)$ can be derived by virtue of [Eqs. (\[eq:PijAS\]) and (\[eq:OmegaAS\])]{} as follows, $$\begin{aligned}
&{\operatorname{tr}}\bigl(\Omega_{{\mathrm{AS}}_n} P_\mu\bigr)
=\binom{n}{2}^{-1} \sum_{i<j} {\operatorname{tr}}(P_\mu P_{i,j}^{{\mathrm{AS}}})
={\operatorname{tr}}(P_\mu P_{1,2}^{{\mathrm{AS}}})=\sum_{s<t}{\operatorname{tr}}\Bigl\{P_\mu \bigl[\left(T_{s,t}^+\otimes T_{s,t}^- + T_{s,t}^-\otimes T_{s,t}^+\right)\otimes {\bar{\mathcal{Z}}}({\mathbf{k}}_{st})\bigr]\Bigr\}\nonumber\\
&=\sum_{s<t}{\operatorname{tr}}\Bigl\{P_\mu U_{st}^{\otimes n} \bigl[\left(T_{s,t}^+\otimes T_{s,t}^- + T_{s,t}^-\otimes T_{s,t}^+\right)\otimes {\bar{\mathcal{Z}}}({\mathbf{k}}_{st})\bigr]U_{st}^{\otimes n\dag}\Bigr\}=\sum_{s<t}{\operatorname{tr}}\Bigl\{P_\mu \bigl[\left(|st\>\<st|+|ts\>\<ts|\right)\otimes {\bar{\mathcal{Z}}}({\mathbf{k}}_{st})\bigr]\Bigr\}\nonumber\\
&
={\operatorname{tr}}[P_\mu {\mathcal{Z}}({\mathbf{k}})]=\sum_{u\in B({\mathbf{k}})} \<u|P_\mu |u\>
=\sum_{u\in B({\mathbf{k}})}\frac{d_\mu}{n!}\sum_{\sigma\in {\mathscr{S}}_n} \chi_\mu(\sigma)\<u|U_\sigma|u\>
=\frac{d_\mu^2}{n!}|B({\mathbf{k}})|=d_\mu^2,
\end{aligned}$$ where $$U_{st}=\frac{1}{\sqrt{2}}(|s\>\<s|+|s\>\<t| +|t\>\<s|-|t\>\<t|)+\sum_{r\neq s,t}|r\>\<r|.$$
Equation in Theorem \[thm:OmegaASG\] follows from [Eq. ]{} and [Lemma \[lem:DimRatio\]]{}. Equation follows from [Eqs. (\[eq:NumberTest\]) and (\[eq:nuOmegaASG\])]{}.
According to the well-known dimension formulas for $D_\mu$ and $d_\mu$ (see Refs. [@Weyl1931; @Proc2007] for example), we have $$\frac{D_\mu }{d_\mu}=\frac{1}{n!}\prod_{j=1}^d \frac{(d+\mu_j-j)!}{(d-j)!}=\frac{1}{n!}\prod_{j=1}^d \frac{\Gamma(d+\mu_j-j+1)}{\Gamma(d-j+1)},$$ where $d=n$ is the local dimension. Note that this formula is still applicable when $d\neq n$. As an implication, we have $$\begin{aligned}
\ln \frac{D_\mu }{d_\mu}=\sum_{j=1}^d \ln \Gamma(d+\mu_j-j+1)-\sum_{j=1}^d\ln \Gamma(d-j+1)-\ln (n!).
\end{aligned}$$ Since the function $\ln\Gamma(x)$ is convex in $x$ for $x>0$, we conclude that $\ln \frac{D_\mu }{d_\mu}$ is convex and thus Schur convex in $\mu$. Therefore, $\ln \frac{D_\mu }{d_\mu}\leq \ln \frac{D_{\mu'}}{d_{\mu'}}$ whenever $\mu \prec\mu'$, which confirms [Lemma \[lem:DimRatio\]]{}.
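Lemma \[lem:DimRatio\] can be illustrated for small cases via the standard hook length formula for $d_\mu$ and hook content formula for $D_\mu$ (equivalent to the ratio formula above). The sketch below (function names ours) checks that $D_\mu/d_\mu$ increases along a dominance chain of partitions of $4$ with $d=4$:

```python
from math import factorial
from fractions import Fraction

def cells(mu):
    return [(i, j) for i, r in enumerate(mu) for j in range(r)]

def hook(mu, i, j):
    arm = mu[i] - j - 1
    leg = sum(1 for r in mu[i + 1:] if r > j)
    return arm + leg + 1

def d_mu(mu):                 # dim of the S_n irrep (hook length formula)
    n = sum(mu)
    prod = 1
    for (i, j) in cells(mu):
        prod *= hook(mu, i, j)
    return factorial(n) // prod

def D_mu(mu, d):              # dim of the GL(d) irrep (hook content formula)
    out = Fraction(1)
    for (i, j) in cells(mu):
        out *= Fraction(d + j - i, hook(mu, i, j))
    return int(out)

chain = [(1, 1, 1, 1), (2, 1, 1), (2, 2), (3, 1), (4,)]  # increasing dominance
ratios = [Fraction(D_mu(mu, 4), d_mu(mu)) for mu in chain]
assert ratios == sorted(ratios)   # D_mu/d_mu respects the majorization order
```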
\[app:SpecGraph\]The spectrum of the transposition graph
========================================================
Let ${\mathbf{k}}:=(k_0,k_1,\dots,k_r)$ with $k_0,\dots,k_r$ being positive integers and $n=\sum_{j=0}^{r}k_j$. Recall that $B({\mathbf{k}})$ is the set of all sequences of $n$ symbols in which $k_s$ symbols are equal to $s$ for $s=0,1,\dots,r$. The transposition graph $G({\mathbf{k}})$ is a regular graph whose vertices are labeled by sequences in $B({\mathbf{k}})$. Two distinct vertices $u,v\in B({\mathbf{k}})$ are adjacent iff $u$ and $v$ can be turned into each other by a transposition [@Chase1973], that is, $u_j\neq v_j$ for exactly two values of $j$. The number of vertices in $G({\mathbf{k}})$ is equal to the cardinality of $B({\mathbf{k}})$, which reads $|B({\mathbf{k}})|=n!/\big(\prod _{j=0}^r k_j!\big)$, and the degree of $G({\mathbf{k}})$ is given by $$d:=\frac{1}{2}\bigg(n^2-\sum_{s=0}^r k_s^2\bigg).$$ Let $A({\mathbf{k}})$ be the adjacency matrix of $G({\mathbf{k}})$. The eigenvalues of $G({\mathbf{k}})$ are defined as the eigenvalues of $A({\mathbf{k}})$. Here we are interested in the largest and second largest eigenvalues of $G({\mathbf{k}})$, which are crucial to the proof of Theorems \[thm:Dicke\] and \[thm:DstateGen\]. The lemma below follows from Eq. (4.2) in Ref. [@Caputo2010].
\[lem:GraphSpect\] The largest eigenvalue of $G({\mathbf{k}})$ is equal to its degree $d$ and has multiplicity 1; the second largest eigenvalue of $G({\mathbf{k}})$ is equal to $d-n$.
When $k_0=k_1=\cdots=k_{n-1}=1$, the graph $G({\mathbf{k}})$ reduces to the Cayley graph of the symmetric group ${\mathscr{S}}_n$ with respect to the set of transpositions. In this case we can also determine the smallest and second smallest eigenvalues of $G({\mathbf{k}})$. To this end, note that the sequences in $B({\mathbf{k}})$ can be divided into two groups of equal size: one group can be constructed from the sequence $(0,1,\ldots, n-1)$ by even permutations, and the other group can be constructed by odd permutations. In addition, $G({\mathbf{k}})$ is a bipartite graph with respect to this partition; accordingly, the adjacency matrix $A({\mathbf{k}})$ has a block form, $$A=\begin{pmatrix}
0 &B\\
B^T & 0
\end{pmatrix},$$ where $B$ is a matrix of size $(n!/2)\times (n!/2)$. Therefore, the eigenvalues of $A({\mathbf{k}})$ form pairs: if $\lambda$ is an eigenvalue of $A$, then $-\lambda$ is an eigenvalue with the same multiplicity. Together with [Lemma \[lem:GraphSpect\]]{}, this observation implies the following lemma (cf. Aldous’ spectral gap conjecture, which was proved in Ref. [@Caputo2010]).
\[lem:CaylGraphSpect\] Suppose ${\mathbf{k}}=(k_0,k_1,\dots,k_{n-1})$ with $k_j=1$ for $j=0,1,\dots,n-1$. Then the smallest eigenvalue of $G({\mathbf{k}})$ is equal to $-d$ and has multiplicity 1, where $d=n(n-1)/2$ is the degree of $G({\mathbf{k}})$; the second smallest eigenvalue of $G({\mathbf{k}})$ is equal to $n-d$.
Let us take ${\mathbf{k}}=(1,1,1)$ for example. In this case we have $n=d=3$, and $G({\mathbf{k}})$ is a bipartite graph with six vertices labeled by the sequences $(0,1,2)$, $(1,2,0)$, $(2,0,1)$, $(0,2,1)$, $(1,0,2)$, $(2,1,0)$. With respect to this order, the adjacency matrix of $G({\mathbf{k}})$ reads $$A({\mathbf{k}})=\begin{pmatrix}
0&0&0&1&1&1\\
0&0&0&1&1&1\\
0&0&0&1&1&1\\
1&1&1&0&0&0\\
1&1&1&0&0&0\\
1&1&1&0&0&0\\
\end{pmatrix}.$$ Direct calculation shows that $A({\mathbf{k}})$ has three distinct eigenvalues $3,0,-3$, with multiplicities $1,4,1$, respectively, which agrees with Lemmas \[lem:GraphSpect\] and \[lem:CaylGraphSpect\].
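The structure of $G({\mathbf{k}})$ is easy to reproduce programmatically. The sketch below (names ours) builds the adjacency matrix directly from the definition and checks the vertex count $|B({\mathbf{k}})|=n!/\prod_j k_j!$ and the degree formula:

```python
from itertools import permutations
from math import factorial

def transposition_graph(k):
    # vertices of G(k): distinct arrangements of the multiset with k[s]
    # copies of symbol s; u ~ v iff they differ in exactly two positions
    base = [s for s, ks in enumerate(k) for _ in range(ks)]
    verts = sorted(set(permutations(base)))
    A = [[int(sum(a != b for a, b in zip(u, v)) == 2) for v in verts]
         for u in verts]
    return verts, A

for k in [(1, 1, 1), (2, 1), (2, 2), (3, 1, 2)]:
    verts, A = transposition_graph(k)
    n = sum(k)
    size = factorial(n)
    for s in k:
        size //= factorial(s)
    assert len(verts) == size                  # |B(k)| = n!/prod k_j!
    d = (n * n - sum(s * s for s in k)) // 2
    assert all(sum(row) == d for row in A)     # G(k) is d-regular
```

For ${\mathbf{k}}=(2,1)$ this produces the complete graph $K_3$, whose eigenvalues $2,-1,-1$ match Lemma \[lem:GraphSpect\] with $d=2$ and $d-n=-1$.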
and . (), [arXiv:1909.08979]{}, to appear in **
*The Theory of Groups and Quantum Mechanics*. (Methuen & co. ltd., London, 1931. Translated from the second (revised) German edition by H. P. Robertson).
*Lie Groups: An Approach through Invariants and Representations*. (Springer, New York, 2007).
*Matrix Analysis and Applied Linear Algebra*. (Society for Industrial and Applied Mathematics, 2000).
** (Amer. Math. Soc., Providence, RI, ).
---
abstract: |
For $m = 3, 4, ...$, let $\lambda_m = 2 \cos \pi/m$ and let $G(\lambda_m)$ be a Hecke group. Let $J_m$ $(m = 3, 4, ...)$ be a triangle function for $G(\lambda_m)$ such that, when normalized appropriately, $J_3$ becomes Klein’s $j$-invariant $
j(z) = 1/e^{2 \pi i z} + 744 + ...,
$ and $J_m(z)$ has a Fourier expansion $
J_m(z) = \sum_{n=-1}^\infty a_n(m)q_m^n
$ with $q_m(z) = \exp 2 \pi i z/\lambda_m$. Raleigh [@raleigh1962fourier] conjectures that, for each non-negative integer $n$, there is a rational function $F_n(x)$ with coefficients in $\mathbb{Q}$ such that $a_n(m) = F_n(m)/a_{-1}(m)^n$. We offer a similar conjecture (involving polynomials instead of rational functions) for normalizations of the $J_m$ and extend it to normalizations of other modular forms on the $G(\lambda_m)$. We use this to study cusp forms $\Delta^{\ast}_m$ such that $\Delta^{\ast}_3(z)
=\Delta(z) = \sum_{n=1}^{\infty}
\tau(n) e^{2 \pi inz}$ where $\tau$ is Ramanujan’s function. The article is concerned with numerical experiments. The only theorems are quotations from the literature.
author:
- Barry Brent
bibliography:
- 'triangle.bib'
date: 27 July 2020
title: Polynomial interpolation of modular forms for Hecke groups
---
Introduction
============
Let us write $\mathbb{H}^*$ for the extension of the upper half plane $\mathbb{H}$ by adjoining cusps at the points on the real axis with rational abscissa and at $i \infty$, regarded as a metric space with the Poincaré metric. Figures $T$ bounded by three geodesics of $\mathbb{H}^*$ are called hyperbolic or circular-arc triangles. Schwarz [@schwarz1873ueber], Lehner [@lehner1954note] and others studied Schwarz triangle functions, which map hyperbolic triangles $T$ in the extended upper half $z$-plane onto the extended upper half $w$-plane. Let $G(\lambda_m)$ be the Hecke group $G(2 \cos \pi/m)$. For certain $T = T_m$, a triangle function $\phi_{\lambda_m}: T \to \mathbb{H}^*$ extends to a map $J_m: \mathbb{H}^* \rightarrow \mathbb{H}^*$ invariant under modular transformations from $G(\lambda_m)$. Suitably normalized, the $J_m$ become analogues $j_m$ of Klein’s normalized modular invariant $$j(z) = 1/q + 744 + 196884 q + ...$$ where $q = q(z) = \exp (2 \pi i z)$ and $j_3(z) = j(z)$. With $\lambda_m = 2 \cos \pi/m$ and $q_m(z) = \exp (2 \pi i z/\lambda_m)$, the original $J_m$ have Fourier series $J_m(z)= \sum_{n \geq -1} a_n(m) q_m(z)^n$. Raleigh [@raleigh1962fourier] conjectured that the $a_n(m)$ are interpolated by polynomials $R_n(x)$ in $\mathbb{Q}[x]$ in the sense that $a_n(m) = m^{-2n-2} a_{-1}(m)^{-n} R_n(m)$. Under a certain normalization, the Fourier coefficients of $j_m$ appear to be interpolated by polynomials in $\mathbb{Q}[x]$ (not rational functions as in Raleigh’s original conjecture); the $a_{-1}$ factors are not seen either. The plan of the article is as follows. We sketch the theory of Schwarz triangles; then, the construction of their triangle functions; from the triangle functions, the modular functions for the Hecke groups; and, from them, entire modular forms for these groups. To this point, the article is just a summary of background material.
By methods familiar from the classical case, we then construct modular forms with rational Fourier expansions and describe experiments on them, especially on cusp forms, using code adapted from the results of several authors, in particular Hecke’s theory. J. Leo [@leo2008fourier] anticipated the author in calculating Fourier expansions of triangle functions with computers. His code was based on Lehner’s construction. We based ours on a variant in [@raleigh1962fourier].
A glossary
==========
1. The digamma function $\psi(z):=\Gamma'(z)/\Gamma(z)$.
2. The Schwarzian derivative ([@caratheodory2], p. 130, equation 370.8) $$\{w,z\} =
\frac{2 w' w''' - 3w''^2}{2w'^2}$$ for $w = w(z)$.
3. The Pochhammer symbol $$(a)^0 := 1 \text{ and, for }
n \geq 1,
(a)^n:=a(a+1)...(a+n-1) = \Gamma(a+n)/\Gamma(a).$$
4. The function $$c_{\nu} = c_{\nu}(\alpha, \beta, \gamma) :=
\frac{(\alpha)^{\nu} (\beta)^{\nu}}
{{\nu}!(\gamma)^{\nu}}, {\nu} \geq 0.$$ To facilitate comparison with Raleigh’s equation ($9^1$) [@raleigh1962fourier], we remark that $$c_{\nu} =
\frac{\Gamma(\alpha + {\nu})}{\Gamma(\alpha)}
\cdot \frac{\Gamma(\beta + {\nu})}{\Gamma(\beta)}
\cdot \frac{\Gamma(1)}{\Gamma(1 + {\nu})}
\cdot \frac{\Gamma(\gamma)}{\Gamma(\gamma + {\nu})}.$$ In the terms of this article’s Theorem 1 below, Raleigh is treating the case $\lambda = 0$, for which (equation (6) below) $\gamma = 1$ and the expression on the right side of (2) becomes, as in Raleigh, $$\frac{\Gamma(\alpha + {\nu})\Gamma(\beta + {\nu})}
{\Gamma(\alpha)\Gamma(\beta)({\nu}!)^2}.$$
5. The function $e_{\nu}$ given by ([@raleigh1962fourier], equation $9^1$) $$e_{\nu} = e_{\nu}(\alpha, \beta) :=
\sum_{p = 0}^{{\nu} - 1}
\left ( \frac 1{\alpha + p} +
\frac 1{\beta + p}
- \frac 2{1 + p} \right ).$$ Here, we are dealing with the same ambiguity present in the definition of $c_{\nu}$: this is a specialization to the case $\gamma = 1$ of the $e_{\nu}$ for ${\nu} \geq 1$ given by ([@caratheodory2], p. 153, equation 387.5) $$e_{\nu} = e_{\nu}(\alpha, \beta, \gamma) :=
\sum_{p = 0}^{{\nu} - 1}
\left ( \frac 1{\alpha + p} +
\frac 1{\beta + p}
- \frac 2{\gamma + p} \right ).$$ Unless it is explicitly indicated to be otherwise, we intend the first (Raleigh’s) definition.
6. Gauss’s hypergeometric series $$F(\alpha,\beta,\gamma;\tau):=
\sum_{{\nu}=0}^{\infty}
c_{\nu}(\alpha, \beta, \gamma) \tau^{\nu}.$$ ($F$ is occasionally written in Carathéodory as $\phi_1$.)
7. With $F = F(\alpha, \beta, \gamma;\tau)$, a special function $$F^*(\alpha, \beta, \gamma;\tau): =
\frac{\partial F}{\partial \alpha} +
\frac{\partial F}{\partial \beta}+
2\frac{\partial F}{\partial \gamma};$$ by [@caratheodory2], equation (387.4) on p. 153, $F^*$ may be written $$F^*(\alpha, \beta, \gamma;\tau) =
\sum_{{\nu} = 1}^{\infty}
c_{\nu} (\alpha, \beta, \gamma)
e_{\nu} (\alpha, \beta, \gamma) \tau^{\nu}.$$
8. A special function $\phi_2^*(\tau)$ is defined as a certain limit ([@caratheodory2], p. 152, equation 386.2) but is immediately (equation 386.3) reduced to $$\phi_2^*(\tau) =
F(\alpha, \beta, 1;\tau) \log \tau
+ F^*(\alpha, \beta, 1;\tau).$$
Schwarz triangles, triangle functions and Hecke groups
======================================================
For our purposes, Schwarz triangles $T$ are hyperbolic triangles in $\mathbb{H}^*$ with certain restrictions on the angles at the vertices. From a Euclidean point of view, their sides are vertical rays, segments of vertical rays, semicircles orthogonal to the real axis and meeting it at points $(r,0)$ with $r$ rational, or arcs of such semicircles. We choose $\lambda, \mu$, and $\nu$, all non-negative, such that $\lambda + \mu + \nu <1$; then the angles of $T$ are $\lambda \pi, \mu \pi$, and $\nu \pi$. By reflecting $T$ across one of its edges, we get another Schwarz triangle. The reflection between two triangles in $\mathbb{H}^*$ is effected by a Möbius transformation, so the orbit of $T$ under repeated reflections is associated to a collection of Möbius transformations. The group generated by these transformations is a triangle group, $M$ say. By the Riemann Mapping Theorem there is a conformal map $\phi$ of $T$ onto $\mathbb{H}^*$, called a triangle function.

Hecke groups [@hecke1936bestimmung] are triangle groups $M$ that act properly discontinuously on $\mathbb{H}$: for every compact $K \subset \mathbb{H}$, the set $\{\mu \in M : K \cap \mu(K) \neq \emptyset\}$ is finite. Recall that $G(\lambda_m)$ is the Hecke group generated by the maps $\tau \mapsto -1/\tau$ and $\tau \mapsto \tau + \lambda_m$. Apparently it was Hecke who, also in [@hecke1936bestimmung], established that $G(\lambda_m)$ has the structure of a free product of cyclic groups $C_2 * C_m$, generalizing the relation [@serre1970course; @cangul1996group] $SL(2,\mathbb{Z}) = C_2 * C_3$. Let $\rho = -\exp(-\pi i/m) =
-\cos(\pi/m) + i\sin(\pi/m)$, and let $T_m \subset \mathbb{H}^*$ denote the hyperbolic triangle with vertices $\rho, i$, and $i\infty$; the corresponding angles are $\pi/m, \pi/2$, and $0$, respectively. Let $\phi_{\lambda_m}$ be a triangle function for $T_m$. The function $\phi_{\lambda_m}$ has a pole at $i\infty$ and period $\lambda_m$. For $P, Q \in \mathbb{H}^*$, let us write $P \equiv_M Q$ when $Q = \mu(P)$ for some $\mu \in M$. Then $\phi_{\lambda_m}$ extends to a function $J_m: \mathbb{H}^* \rightarrow \mathbb{H}^*$ by declaring that $J_m(P) = J_m(Q)$ if and only if $P \equiv_M Q$; $J_m$ is a modular function for $G(\lambda_m)$.
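The free-product structure can be checked at the level of matrices. In the sketch below (our own illustration, not taken from [@hecke1936bestimmung]), $S: \tau \mapsto -1/\tau$ and $T: \tau \mapsto \tau + \lambda_m$ are represented in $SL(2,\mathbb{R})$; then $S^2 = -I$ and $(ST)^m = -I$, so modulo $\pm I$ the elements $S$ and $ST$ have orders $2$ and $m$, exhibiting generators of the factors $C_2$ and $C_m$.

```python
import math

def mat_mul(A, B):
    """2x2 matrix product; matrices are pairs of row tuples."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def mat_pow(A, n):
    R = ((1.0, 0.0), (0.0, 1.0))
    for _ in range(n):
        R = mat_mul(R, A)
    return R

def hecke_generators(m):
    """Matrices for S: tau -> -1/tau and T: tau -> tau + lambda_m."""
    lam = 2 * math.cos(math.pi / m)
    S = ((0.0, -1.0), (1.0, 0.0))
    T = ((1.0, lam), (0.0, 1.0))
    return S, T
```

Since $ST$ has trace $\lambda_m = 2\cos(\pi/m)$, its eigenvalues are $e^{\pm \pi i/m}$, which makes $(ST)^m = -I$ for every $m \geq 3$.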
Calculation of Schwarz’s inverse triangle function
==================================================
Schwarz proved
([@caratheodory2], 374)
1. Let the half-plane $\Im z > 0$ be mapped conformally onto an arbitrary circular-arc triangle whose angles at its vertices $A, B$, and $C$ are $\pi \lambda, \pi \mu$, and $\pi \nu$, and let the vertices $A, B, C$ be the images of the points $z = 0, 1, \infty$, respectively. Then the mapping function $w(z)$ must be a solution of the third-order differential equation $$\{w,z\} =
\frac{1-\lambda^2}{2 z^2} +
\frac{1-\mu^2}{2(1-z)^2} +
\frac{1-\lambda^2-\mu^2+\nu^2}
{2z(1-z)}.$$
2. If $w_0(z)$ is any solution of equation (3) that satisfies $w'_0(z) \neq 0$ at all interior points of the half-plane, then the function $$w(z) = \frac {aw_0(z) + b}{cw_0(z) + d}
\hskip 1in (ad - bc \neq 0)$$ is likewise a solution of equation (3).
3. Also, every solution of equation (3) that is regular and non-constant in the half-plane $\Im z > 0$ represents a mapping of this half-plane onto a circular-arc triangle with angles $\pi \lambda,
\pi \mu,$ and $\pi \nu$.
In Carathéodory’s lexicon ([@caratheodory1] p. 124), a regular function is one that is differentiable on an open connected set. Carathéodory writes the left side of (3) as “$\{w,z\} = \frac{w^{'}w^{'''} -
3 w^{''2}}{w^{'2}} = ...$”, but this printed expression is not the Schwarzian derivative of our glossary item 2. We infer that $\{w,z\}$ is intended from the automorphy property of clause 2. Let us write $$\alpha = \frac 12
(1 - \lambda - \mu + \nu),$$ $$\beta = \frac 12(1-\lambda - \mu - \nu),$$ and $$\gamma = 1 - \lambda.$$ The solutions $w$ of (3) are inverse to triangle functions; they are quotients of arbitrary solutions of $$u'' + p(z)u' +q(z)u = 0$$ when ([@caratheodory2], p. 136, equation (376.4)) $$p = \frac {1-\lambda}z-\frac {1-\mu}{1-z}$$ and $$q = - \frac {\alpha \beta}{z(1-z)}.$$ Equation (7) reduces ([@caratheodory2], p. 137, equations 376.5-7) to the hypergeometric differential equation $$z(1-z)u'' +(\gamma - (\alpha + \beta +1)z)u'
- \alpha \beta u = 0.$$ As long as $\gamma$ is not a non-positive integer, $u=F(\alpha,\beta,\gamma;z)$ is a solution of (8); it is the only solution regular at $z = 0$, and it satisfies $F(\alpha,\beta,\gamma;0) = 1$ (final paragraph of [@caratheodory2], 377, p. 138.) In [@caratheodory2], 386-388 (pp. 151-155), we find that when $\gamma = 1$ and $\lambda = 0$, another, linearly independent, solution of equation (7) is $\phi_2^*(z)$. [@caratheodory2], Section 394, pp. 165 - 167 is devoted to the case $\lambda = 0$. The mapping function $w$ of Theorem 1 satisfies ([@caratheodory2], p. 166, equation 394.4) $$w = \frac 1{\pi i}
\left [ \frac{\phi_2^*}{\phi_1} -
\left (2 \psi(1) -
\psi(1 - \alpha)
- \psi(1-\beta) \right ) \right ] +
i \frac{ \sin \pi \mu}
{\cos \pi \mu + \cos \pi \nu}.$$
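One way to check the reduction to the hypergeometric equation (8) is coefficient-wise: substituting $u = \sum_{\nu} c_{\nu} z^{\nu}$ into (8) and collecting the coefficient of $z^{\nu}$ gives the two-term recurrence $(\nu+1)(\nu+\gamma)c_{\nu+1} = (\nu+\alpha)(\nu+\beta)c_{\nu}$, which the $c_{\nu}$ of our glossary satisfy identically. A Python sketch (ours) verifies this with exact rationals:

```python
from fractions import Fraction as Fr

def poch(a, n):
    """Pochhammer symbol (a)^n."""
    out = Fr(1)
    for p in range(n):
        out *= a + p
    return out

def c(nu, alpha, beta, gamma):
    """Coefficient c_nu of Gauss's hypergeometric series (glossary item 4)."""
    return poch(alpha, nu) * poch(beta, nu) / (poch(Fr(1), nu) * poch(gamma, nu))

def hypergeometric_recurrence_holds(alpha, beta, gamma, N=30):
    """Check (nu+1)(nu+gamma) c_{nu+1} == (nu+alpha)(nu+beta) c_nu for nu < N,
    which is equivalent to u = F(alpha,beta,gamma;z) solving equation (8)."""
    return all(
        (nu + 1) * (nu + gamma) * c(nu + 1, alpha, beta, gamma)
        == (nu + alpha) * (nu + beta) * c(nu, alpha, beta, gamma)
        for nu in range(N)
    )
```

In particular the recurrence holds for the triangle-function parameters of Theorem 1 with $\lambda = 0$, $\mu = 1/2$, $\nu = 1/m$, where equations (4)-(6) give $\alpha = \frac14 + \frac1{2m}$, $\beta = \frac14 - \frac1{2m}$, $\gamma = 1$.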
Inversion of Schwarz’s inverse triangle function
================================================
Following Lehner and Raleigh, we consider the Schwarz triangle $T_m$ with vertices at $\rho = -\exp(-\pi i/m), i$, and $i\infty$. In terms of Theorem 1, $T_m$ has $\lambda = 0$ (an angle $0$ at the vertex $i\infty$), $\mu = 1/2$ (an angle $\pi/2$ at $i$), and $\nu = 1/m$ (an angle $\pi/m$ at $\rho$.) In this situation, $\gamma = 1$. Let $J_m$ be automorphic for $G(\lambda_m)$ with $J_m(\rho) = 0, J_m(i) = 1$, and $J_m(i \infty) = \infty$. In terms of Theorem 1, $w$ and $J_m$ are inverse functions. We are going to write down the Fourier expansion of $J_m$. By clause 2 of Theorem 1, if $w$ satisfies equations (3) and (9), so does $\tau = \tau(z) = \lambda_m w(z)/2$, and therefore $$2\pi i\tau /\lambda_m =
\frac{\phi_2^*}{\phi_1} - \left (2 \psi(1) -
\psi(1 - \alpha)
- \psi(1-\beta) \right ) -
\pi \sec(\pi/m).$$ Let us write $\log A_m = -2 \psi(1) + \psi(1 - \alpha)
+ \psi(1 - \beta) - \pi \sec(\pi/m).$ Recalling the definitions of $\phi_1$ and $\phi_2^*$ from our glossary items 6 and 8, we find (abbreviating $J_m(\tau)$ as $J_m$) that $$2\pi i\tau /\lambda_m
= - \log J_m
+ \frac {F^*(\alpha, \beta, 1;1/J_m)}
{F(\alpha, \beta, 1;1/J_m)} +
\log A_m.$$ Equation (10) is equation (6) of [@raleigh1962fourier], but Raleigh suppresses the subscripts. He also writes $\exp 2 \pi i \tau/\lambda_m$ as $x_m$, so that (in our earlier notation) $x_m = q_{_m}(\tau)$. In Raleigh’s notation, after taking exponentials, $$x_m/A_m =
\frac 1{J_m} \exp
\frac {F^*(\alpha, \beta, 1;1/J_m)}
{F(\alpha, \beta, 1;1/J_m)},$$ the right side of which has a power series in $J_m$ with rational coefficients. Writing $X_m = x_m/A_m$ we can regard $X_m = X_m(J_m)$ as a power series in $J_m$. Following [@lehner1954note] and [@raleigh1962fourier], we inverted this power series to obtain one for the modular function $J_m$. Let $\mathscr{I}$ be a formal operation taking a power series $\sigma(v)$ to its inverse; that is, if $u=\sigma(v)$ then $v = \mathscr{I}(\sigma)(u)$. Let $Y_m(J)$ be a power series such that $$Y_m(J_m) = J_m \exp
\frac {F^*(\alpha, \beta, 1;J_m)}
{F(\alpha, \beta, 1;J_m)} =
X_m \left (1/J_m \right )$$ and hence $$Y_m(1/J_m) =
\frac 1{J_m} \exp
\frac {F^*(\alpha, \beta, 1;1/J_m)}
{F(\alpha, \beta, 1;1/J_m)} = X_m(J_m),$$ so that $\mathscr{I}(Y_m)(X_m(J)) = 1/J_m$ and, therefore, $J_m = 1/\mathscr{I}(Y_m)(X_m)$.
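The formal inversion $\mathscr{I}$ can be computed coefficient by coefficient: if $\sigma(v) = s_1 v + s_2 v^2 + \cdots$ with $s_1 \neq 0$, the coefficients of the inverse series are determined recursively by requiring $\sigma(t(u)) = u$. A Python sketch with exact rational arithmetic (our own illustration, not the authors' code):

```python
from fractions import Fraction as Fr

def mul_trunc(a, b, N):
    """Product of power series a, b (coefficient lists), truncated past degree N."""
    out = [Fr(0)] * (N + 1)
    for i, ai in enumerate(a[:N + 1]):
        if ai:
            for j, bj in enumerate(b[:N + 1 - i]):
                out[i + j] += ai * bj
    return out

def compose_trunc(s, t, N):
    """s(t(u)) truncated past degree N; requires t[0] == 0."""
    out = [Fr(0)] * (N + 1)
    power = [Fr(1)] + [Fr(0)] * N      # t(u)^0
    for k in range(min(len(s), N + 1)):
        for j in range(N + 1):
            out[j] += s[k] * power[j]
        power = mul_trunc(power, t, N)
    return out

def series_inverse(s, N):
    """The operation I: given s with s[0] = 0, s[1] != 0,
    return t with s(t(u)) = u + O(u^{N+1})."""
    t = [Fr(0), Fr(1) / s[1]]
    for n in range(2, N + 1):
        t.append(Fr(0))
        comp = compose_trunc(s, t, n)   # coefficient n is s[1]*t[n] + lower-order terms
        t[n] = -comp[n] / s[1]
    return t
```

For example, the inverse of $v/(1-v) = v + v^2 + \cdots$ is $u/(1+u) = u - u^2 + u^3 - \cdots$, and composing the two series returns the identity to the working order.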
Modular forms from modular functions
====================================
When the $w$-image of $\mathbb{H}^*$ is $T_m$, the inverse of $w$ is $\phi_{\lambda_m}$. $J_m$, the extension by modularity of $\phi_{\lambda_m}$ to $\mathbb{H}^*$, is periodic with period $\lambda_m$ and maps $\rho$ to $0$, $i$ to $1$, and $i\infty$ to $\infty$ ([@lehner1954note], equation (2).) These mapping properties allow us, following Berndt’s exposition of Hecke, to construct positive weight modular forms for $G(\lambda_m)$ from $J_m$. This section describes results of Hecke that are perhaps most easily accessible for the classical case in Schoeneberg [@schoeneberg1974], and, for the general case, in Berndt [@berndt2008hecke].
$m=3$.
------
By keeping track of the weights, zeros and poles of the constituent factors in the numerator and denominator of the fraction defining $$f_{a,b,c} =
\frac{J^{'a}}
{J^b (J - 1)^c},$$ Schoeneberg [@schoeneberg1974] (Theorem 16, p.45) demonstrates that $f_{a,b,c}$ is an entire modular form of weight $2a$ for $SL(2,\mathbb{Z})$ if $a \geq 2, 3c \leq a, 3b \leq 2a$, $b+c \geq a$ and $a, b, c$ are integers. (Schoeneberg speaks of “dimension $-2a$.”) Thus he is able to write down a weight $4$ entire modular form $E^*_4 = f_{2,1,1}$ for $SL(2,\mathbb{Z})$ with a zero of order $\frac 13$ at $\rho = e^{2 \pi i/3}$ and a weight $6$ entire modular form $E^*_6 = f_{3,2,1}$ for $SL(2,\mathbb{Z})$ with a zero of order $\frac 12$ at $i$. (Schoeneberg writes $G^*_4, G^*_6$.) It is well known that the (vector space) dimension of the spaces of weight $4$ and $6$ entire modular forms for $SL(2,\mathbb{Z})$ is equal to one, so $E^*_4$ and $E^*_6$ may be identified with the usual weight $4$ and weight $6$ Eisenstein series, up to a normalization. Finally, Schoeneberg defines the weight $12$ cusp form $\Delta^* = E_4^{*3} - E_6^{*2}$ with a zero of order $1$ at $i \infty$. It is a multiple of $\Delta$.
$m \geq 3$.
-----------
We quote statements from Berndt [@berndt2008hecke], which is an exposition of Hecke’s [@hecke1938lectures] and other writings. We depart occasionally from Berndt’s choices of variable to avoid clashes with our earlier notation.
([@berndt2008hecke], Definition 2.2) We say that $f$ belongs to the space $M(\lambda, k, \gamma)$ if
1. $$f(\tau) = \sum_{n = 0}^{\infty}
a_n e^{2 \pi i n \tau/\lambda},$$ where $\lambda > 0$ and $\tau \in \mathbb{H}$, and
2. $f(-1/\tau) = \gamma(\tau/i)^k f(\tau)$, where $k > 0$ and $\gamma = \pm 1$.
We say that $f$ belongs to the space $M_0(\lambda,k, \gamma)$ if $f$ satisfies conditions (i), (ii), and if $a_n = O(n^c)$ for some real number $c$, as $n$ tends to $\infty$.
After defining the notion of a fundamental region in the usual way and defining as $G(\lambda)$ the group of linear fractional transformations generated by $\tau \mapsto -1/\tau$ and $\tau \mapsto \tau + \lambda$, Berndt states
([@berndt2008hecke], Theorem 3.1) Let $B(\lambda)= \{\tau = x + iy \in \mathbb{H}:
|x| < \lambda/2, |\tau| > 1\}$. Then if $\lambda \geq 2$ or if $\lambda = 2 \cos(\pi/m)$, where $m \geq 3$ is an integer, $B(\lambda)$ is a fundamental region for $G(\lambda)$.
([@berndt2008hecke], Definition 3.4) Let $T_A = \{ \lambda:
\lambda = 2 \cos(\pi/m), m \geq 3, m \in \mathbb{Z}\}$.
Berndt states in his Theorem 5.4 that $G(\lambda)$ is discrete if and only if $\lambda$ belongs to $T_A$. This discreteness is the premise of the theory of automorphic functions generally. He embeds the following within the proof of his Lemma 3.1 (which we omit):
1. Let $\tau_{\lambda}$ denote the intersection in $\mathbb{H}$ of the line $x = -\lambda/2$ and the unit circle $|\tau| = 1$. (Berndt also remarks on page 35 that $\tau_{\lambda}$ is the lower left corner of $B(\lambda)$.)
2. “Letting $\pi \theta = \pi - \arg(\tau_{\lambda})$ (so that $\lambda = 2 \cos \theta$) ....”
To characterize Eisenstein series, we need to keep track of some analytical properties. The next definition summarizes the second paragraph of Berndt’s Chapter 5. (Throughout his Chapter 5, $\lambda < 2$.)
Let $f \in M(\lambda, k, \gamma), f$ not identically zero.
1. $N = N_f$ counts the zeros of $f$ on $\overline{B(\lambda)}$ with multiplicities.
2. $N_f$ does not count zeros at $\tau_{\lambda}, \tau_{\lambda} + \lambda, i$ or $i\infty$.
3. If $\tau_0 \in
\overline{B(\lambda)}, f(\tau_0) = 0$ and $\Re (\tau_0) = -\lambda/2$, then $f(\tau_0 + \lambda) = 0$ and $N_f$ counts only one of the two zeros.
4. If $\tau_0 \in
\overline{B(\lambda)}, f(\tau_0) = 0$, and $|\tau_0| = 1$, then, $f(-1/\tau_0) = 0$, and $N_f$ counts only one of these two zeros.
5. The numbers $n_{\lambda}, n_i,$ and $n_{\infty}$ are the orders of the zeros of $f$ at $\tau_{\lambda},
i$ and $i\infty$, respectively. The order $n_{\infty}$ is measured in terms of $\exp(2 \pi i \tau/\lambda)$.
The multiplier $\gamma$ is given by
([@berndt2008hecke], Corollary 5.2) Let $f \in M(\lambda,k,\gamma)$ and let $n_i$ be the order of the zero of $f$ at $\tau = i$. Then $$\gamma= (-1)^{n_i}.$$
The next two results tell us that the only nontrivial case in this theory is the one that we are interested in.
([@berndt2008hecke], Lemma 5.1) If $\dim M(\lambda,k,\gamma)
\neq 0$, $$N_f + n_{\infty} + \frac 12 n_i +
\frac {n_{\lambda}}m =
\frac 12 k \left ( \frac 12 - \theta \right).$$
By Berndt’s equation (5.16), if $m \geq 3$ then the right side can be written as $k(m-2)/4m$.
([@berndt2008hecke], Theorem 5.2) If $\dim M(\lambda,k,\gamma)
\neq 0$, then $\theta = 1/m$ where $m \geq 3$ and $m \in \mathbb{Z}$.
We are concerned with $\lambda \in T_A$. This makes $\lambda < 2$ as in all the results of Berndt’s Chapter 5. One estimate for $\dim M(\lambda, k, \gamma)$ is
([@berndt2008hecke], Theorem 5.6) If $\lambda \notin T_A$, then $\dim M(\lambda,k,\gamma) = 0$. If $\lambda = 2\cos(\pi/m) \in T_A$, then for nontrivial $f \in
M(\lambda,k,\gamma)$, the weight $k$ has the form $$k = \frac {4h}{m-2} + 1 - \gamma,$$ where $h \geq 1$ is an integer. Furthermore, $$\dim M(\lambda,k,\gamma) = 1 + \left \lfloor
\frac{h + (\gamma-1)/2}m \right \rfloor.$$
Eliminating $h$, we find that $$\dim M(\lambda,k,\gamma) = 1 +
\left \lfloor
k\left (\frac 14 - \frac 1{2m} \right ) +
\frac {\gamma}4 -
\frac 14
\right \rfloor.$$ Berndt ([@berndt2008hecke], Remark 5.3) proves that the dimension formula above holds also when $h = 0$. The existence of certain modular forms is provided by
([@berndt2008hecke], Theorem 5.5) Let $\lambda \in T_A$. Then there exist functions $f_{\lambda}, f_i$, and $f_{\infty} \in M(\lambda,k,\gamma)$ such that each has a simple zero at $\tau_{\lambda}, i$, and $i \infty$, respectively, and no other zeros. Here, $\gamma$ is given by \[this article’s Theorem 3\], and $k$ is determined in each case from \[Theorem 4 of this article\]. Thus, $f_{\lambda} \in
M(\lambda, 4/(m-2), 1), f_i \in
M(\lambda, 2m/(m-2), -1)$, and $f_{\infty} \in M(\lambda, 4m/(m-2),1)$.
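The eliminated-$h$ dimension formula above is convenient to evaluate exactly. A small Python sketch (ours; meaningful only for the admissible weights $k = 4h/(m-2) + 1 - \gamma$ with integer $h \geq 0$):

```python
from fractions import Fraction as Fr
from math import floor

def dim_M(m, k, gamma):
    """dim M(lambda_m, k, gamma) = 1 + floor(k(1/4 - 1/(2m)) + gamma/4 - 1/4),
    evaluated in exact rational arithmetic to avoid floating-point floor errors."""
    return 1 + floor(k * (Fr(1, 4) - Fr(1, 2 * m)) + Fr(gamma - 1, 4))
```

For $m = 3$ this reproduces the classical dimensions (one form each in weights $4$ and $6$, two in weight $12$), while $\dim M(\lambda_{12}, 12, \pm 1) = 3$; combined with the bound $\dim C \geq \dim M - 1$ quoted below, this underlies the weight-$12$ cusp-space estimate relevant to Lehmer's problem.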
([@berndt2008hecke], pages 47-48) By the Riemann mapping theorem there exists a function $g(\tau)$ that maps the simply connected region $B(\lambda)$ one-to-one and conformally onto $\mathbb{H}$. If we require that $g(\tau_{\lambda}) = 0,
g(i) = 1$, and $g(i\infty) =
\infty$, then $g$ is determined uniquely.
Now we can write down $f_{\lambda}, f_i$, and $f_{\infty}$ explicitly. The next theorem is extracted from the proof of Theorem 7. $f_{\lambda}$ and $f_i$ correspond to Eisenstein series and $f_{\infty}$ to a cusp form. In our code, we take $g$ to be a normalized form of $J_m$.
([@berndt2008hecke], page 50) $$f_{\lambda}(\tau)
=\left \{
\frac {g'(\tau)^2}
{g(\tau)(g(\tau) -1)}
\right \}^{1/(m-2)},$$ $$f_i(\tau) =
\left \{
\frac {g'(\tau)^m}
{g(\tau)^{m-1} (g(\tau) - 1)}
\right \}^{1/(m-2)},$$ and $$f_{\infty}(\tau) =
\left \{ \frac{g'(\tau)^{2m}}
{g(\tau)^{2m-2}(g(\tau)-1)^m}
\right \}^{1/(m-2)}.$$
In our applications to Lehmer’s problem, we will be interested in the dimensions of the weight $12$ cusp spaces for $\lambda = \lambda_m = 2 \cos \pi/m$.
([@berndt2008hecke], Definition 5.2) If $f\in M(\lambda, k, \gamma)$ and $f(i \infty) = 0$, then we call $f$ a cusp form of weight $k$ and multiplier $\gamma$ with respect to $G(\lambda)$. We denote by $C(\lambda,k, \gamma)$ the vector space of all cusp forms of this kind.
([@berndt2008hecke], equation (5.25)) $$\dim C(\lambda, k, \gamma) \geq
\dim M(\lambda, k, \gamma) -1.$$
In view of (i) Theorem 6, (ii) equation (12), (iii) Remark 2, and (iv) the fact that $\gamma = \pm 1$, we see that $\dim C(\lambda_m, 12, \gamma) >1$ when $m$ is greater than or equal to $12$.
Normalizations
==============
In this section we write down normalizations of the functions from Theorem 8 such that, at $m = 3$, the normalizations reduce to corresponding functions in the classical theory, namely (in Serre’s notation [@serre1970course]), $j, E_2, E_3$, and $\Delta$— Klein’s invariant, the weight $4$ and weight $6$ Eisenstein series, and the normalized discriminant $\Delta$. Also, the coefficients of their Fourier expansions should be rational numbers, but this we have verified only experimentally. For simplicity, we will abuse our notation and write $f(q_m)$ for functions $f(z)$ with Fourier expansions $f(z) = \sum c_n q_m(z)^n$. Let us also write $W_m(X_m)$ for the series $1/\mathscr{I}(Y_m)(X_m)$ in the last equation of section 5. The section 5 parameter $A_3 = 1/1728 = 1/(2^6 3^3)$ and $J_3(\tau) = W_3(X_3) = W_3(x_3/A_3) = W_3(1728x_3) =
W_3(2^6 3^3 q_3(\tau))$. The Fourier expansion of $J_3$ and that of the Klein $j$-invariant agree, but if $m \neq 3, 4,$ or $6$, the Fourier expansion of $J_m$ has irrational coefficients, because for such $m$ the residue $a_{-1}$ of $J_m(q_m)$ is transcendental ([@wolfart1981transzendente], according to [@leo2008fourier]) and $$a_{-1} = A_m.$$ This equation can be justified by reference to Raleigh, but that author does, it seems, commit a sign error in his equation (10), which must be compared with his equation (I) to conclude that our equation (13) is true. The same comparison indicates the sign error: in (I) the signs of $\pi \sec (\pi/m)$ and $2\psi(1)$ disagree, whereas they agree in Raleigh’s equation (10). Therefore, following [@leo2008fourier], we set $$j_m(x_m):= W_m(2^6 m^3 x_m)/B_m,$$ where $B_m$ is the coefficient of $1/x_m$ in the series $W_m(2^6 m^3 x_m)$; by construction, $B_m = 2^{-6} m^{-3}$. Corresponding to $f_{\lambda}$, we set $$H_{4,m}(\tau):=
\left \{
\frac {j_m'(\tau)^2}
{j_m(\tau)(j_m(\tau) -2^6 m^3)}
\right \}^{1/(m-2)}.$$ Let $$K_{6,m}(\tau):=
\left \{
\frac {j_m'(\tau)^m}
{j_m(\tau)^{m-1} (j_m(\tau) - 2^6 m^3)}
\right \}^{1/(m-2)}.$$ We set $H_{6,m} := K_{6,m}/\epsilon$ where $\epsilon = e^{i \pi/(m-2)}$ or $1$, depending on whether $m$ is odd or even, respectively. Corresponding to $f_{\infty}$, we set $$\Delta^{\star}_m(\tau) =
\left \{ \frac{j_m'(\tau)^{2m}}
{j_m(\tau)^{2m-2}(j_m(\tau)-2^6 m^3)^m}
\right \}^{1/(m-2)}.$$ Finally, we set $\Delta^{\dagger}_m =
H_{4,m}^3 - H_{6,m}^2$ and $\Delta^{\diamond}_m = H_{4,m}^3/j_m$. Because $J_3'$ is a weight-$2$ modular function ([@schoeneberg1974], p.44), and the weight-$4$ and weight-$6$ spaces and the weight-$12$ cusp space are one-dimensional when $m = 3$, the desired property follows after checking the first few terms of the Fourier expansions at issue.
Interpolation by polynomials of the Fourier coefficients of normalized modular forms and functions for Hecke groups
===================================================================================================================
Let $$J_m(z) =
\sum_{n = -1}^{\infty}a_n(m) q_m(z)^n.$$ For integers $m \geq 3$ Raleigh [@raleigh1962fourier] showed that $$a_{-1}(m) =
\exp(\pi \sec(\pi/m) - 2\psi(1) +
\psi(1/4 + 1/(2m)) +
\psi(1/4 - 1/(2m)))$$ and that, for $n = 0, 1, 2, 3,
a_n(m) = m^{-2n-2} a_{-1}(m)^{-n} R_n(m)$ where $R_n(x)$ is a polynomial with rational coefficients and degree $2n+2$. He conjectured that similar relations exist among the $a_n$ for all positive $n$. We made numerical studies to explore how this conjecture might extend to the $j_m$. We computed the Fourier expansions of $j_m = 1/q_m +
\sum_{n\geq 0} c_n(m) q_m^n$ to order $23$ and used *Mathematica* functions to generate polynomials $r_n$ (not merely rational functions) with rational coefficients which, we conjecture, interpolate the sequences $\{c_n(3), c_n(4), \ldots\}$. This procedure has obvious drawbacks. We cannot rebut the objection that counter-examples might lurk beyond the range of our observations, or that any data can be made to fit some polynomial or other. On the other hand, the polynomials we will mention exhibit regularities which may improve the credibility of the following conjectures. *Mathematica* notebooks and associated data files are here [@docs2020].
For each integer $n$ greater than $-2$, there exists a polynomial $r_n(x) \in \mathbb{Q}[x]$ that satisfies the relation $c_n(m) = r_n(m)$ for $m = 3, 4, ...$, with $r_{-1}(x) \equiv 1,
r_0(x) = 8x(3x^2+4)$, and $r_1(x) = 4x^2(69x^4-8x^2-48)$. For $n$ greater than one, $r_n(x) = (x-2)(x+2)x^{n+1}p_n(x)$ where $p_n(x)$ is an irreducible polynomial over $\mathbb{Q}$ of degree $2n$.
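A quick consistency check of Conjecture 1 at $m = 3$ (our own check): the stated interpolating polynomials must reproduce the classical Fourier coefficients $c_0(3) = 744$ and $c_1(3) = 196884$ of Klein's $j$.

```python
def r0(x):
    # Conjectured interpolating polynomial for the constant coefficients c_0(m)
    return 8 * x * (3 * x**2 + 4)

def r1(x):
    # Conjectured interpolating polynomial for the coefficients c_1(m)
    return 4 * x**2 * (69 * x**4 - 8 * x**2 - 48)
```

Indeed $r_0(3) = 24 \cdot 31 = 744$ and $r_1(3) = 36 \cdot 5469 = 196884$.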
Conjecture 1 implies that for all integers $m$ greater than or equal to three, $r_n(m)$ is nonzero, and therefore that
For each fixed $n$ greater than or equal to $-1$ and all integers $m$ greater than or equal to three, $c_n(m)$ is nonzero.
\(i) If $r_n(\rho) = 0$ and $\rho \neq \pm 2$, then $$|\rho| \leq n.$$ (ii) Consequently (even supposing that conjecture 2 is false), if $n\geq 0$ and $m > \max (n,2)$ then $c_n(m) \neq 0$. (iii) For each $n$, there is a closed curve $P_n$ symmetric about both axes in the complex plane, such that the roots of $r_n$ lie on $P_n$ or on one of the axes. Exactly two non-zero roots of $r_n$ are imaginary. $P_n$ has exactly one self-intersection, at zero.
Perhaps the bound in (14) can be improved; $\frac 13 \times$ the maximum modulus among the roots of $p_3$ is less than $0.86$ and, for $n$ greater than or equal to three, the maximum ratio $|\rho|/n$ among the roots of $p_n$ appears to decrease monotonically. It is already known that, for all integers $n \geq -1$, $c_n(3)$ is positive. (See, for example, page 199 in [@rankinModular].)
The function $H_{4,m}$ has a Fourier expansion $$H_{4,m}(z) =
\sum_{n=0}^{\infty} \beta_n(m) q_m(z)^n.$$ For each $n$ there is a polynomial $B_n(x)$ with rational coefficients such that (i) $\beta_n(m)=B_n(m)$ for $m = 3,4, ....$, (ii) $B_0(x) \equiv 1$ identically, (iii) $B_1(x) = 16x^2+32x$, (iv) and, for $n$ larger than one, $B_n(x) = (x^2-4)x^n b_n(x)$ where $b_n(x)$ is a polynomial of degree $2n-3$.
The function $H_{6,m}$ has a Fourier expansion $$H_{6,m}(z) =
\sum_{n=0}^{\infty} \gamma_n(m) q_m(z)^n.$$ For each $n$ there is a polynomial $C_n(x)$ such that (i) $\gamma_n(m)=C_n(m)$ for $m = 3,4, ....$, (ii) $C_0(x) \equiv 1$ identically, (iii) $C_1(x) = -8x^2(3x-2)$, (iv) and, for $n$ larger than one, $C_n(x) = (x-2)(3x-2)x^{n+1} d_n(x)$ where $d_n(x)$ is a polynomial, irreducible over $\mathbb{Q}$, of degree $2n-3$.
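At $m = 3$ these two conjectures again specialize to Eisenstein data: $B_1(3) = 240$ and $C_1(3) = -504$, the first Fourier coefficients of the classical weight-$4$ and weight-$6$ Eisenstein series (our check, assuming the polynomials as stated):

```python
def B1(x):
    # Stated interpolating polynomial for the first coefficient of H_{4,m}
    return 16 * x**2 + 32 * x

def C1(x):
    # Stated interpolating polynomial for the first coefficient of H_{6,m}
    return -8 * x**2 * (3 * x - 2)
```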
Let $\Delta$ be the usual normalized discriminant, a weight $12$ cusp form for $SL(2,\mathbb{Z}) = G(\lambda_3)$ with integer coefficients. Its Fourier expansion is written $$\Delta(z) = \sum_{n=1}^{\infty} \tau(n) q^n$$ where $q = e^{2 \pi i z}$ and $\tau(n)$ is Ramanujan’s function. Whether or not the equation $\tau(n) = 0$ has any solutions is an open question [@lehmer1947vanishing]. (Recently, Balakrishnan, Craig, and Ono [@balakrishnan2020variations] ruled out $\pm 1, \pm 3, \pm 5,
\pm 7$, and $\pm 691$ from membership in the image of Ramanujan’s function.)
1\. Let $\Delta_m = \Delta^{\star}_m,
\Delta^{\dagger}_m$ or $\Delta^{\diamond}_m$ and let the Fourier expansion of $\Delta_m(z)$ be $$\Delta_m(z) = \sum_{n=1}^{\infty} \tau_m(n) q_m^n.$$ where $\tau_m = \tau_m^{\ast}, \tau_m^{\dagger}$ or $\tau_m^{\diamond}$ respectively. Then there is a set of polynomials $T_n = T^{\star}_n, T^{\dagger}_n$, or $T^{\diamond}_n$, respectively, with coefficients in $\mathbb{Q}$ such that $\tau_m(n) = T_n(m)$ for each $m = 3, 4, ....$ 2. $T^{\star}_1(x) \equiv 1$ identically, and, if $n > 1$, then $T^{\star}_n(x) =
(x-2)^2 x^{n-1} t^{\star}_n(x)$, where $t^{\star}_n(x)$ is an irreducible polynomial over $\mathbb{Q}$ of degree $2n - 4$. 3. (i) $T^{\dagger}_1(x) = 16x(3x^2+x+6)$. (ii) $T^{\dagger}_2(x) = -16x^2(39x^4-95x^3+66x^2-260x-120)$. (iii) $T^{\dagger}_3(x) = $ $$64x^3(189x^6-3021x^5+9574x^4
-12520x^3+19136x^2-2960x-2208)/9.$$ (iv) If $n > 3$, then $T^{\dagger}_n(x) =
(x-2) x^n t^{\dagger}_n(x)$, where $t^{\dagger}_n(x)$ is an irreducible polynomial over $\mathbb{Q}$ of degree $2n - 1$. 4. (i) $T^{\diamond}_1(x), T^{\diamond}_2(x)$, and $T^{\diamond}_3(x)$ are irreducible polynomials over $\mathbb{Q}$ of degrees $3, 6$, and $9$, respectively. (ii) If $n$ is greater than $3$, $T^{\diamond}_n(x) = (x-2)t^{\diamond}_n(x)$, where $t^{\diamond}_n(x)$ is an irreducible polynomial over $\mathbb{Q}$ of degree $3n-1$.
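Since $H_{4,3}$ and $H_{6,3}$ reduce to $E_4$ and $E_6$, at $m = 3$ we have $\Delta^{\dagger}_3 = E_4^3 - E_6^2 = 1728\Delta$, so the stated $T^{\dagger}_n$ should satisfy $T^{\dagger}_n(3) = 1728\,\tau(n)$. A check for $n = 1, 2, 3$ (our sketch, assuming the polynomials as printed; the $\tau$ values are the classical ones):

```python
from fractions import Fraction as Fr

def T_dagger(n, x):
    """The stated interpolating polynomials for the coefficients of Delta^dagger_m."""
    if n == 1:
        return Fr(16 * x * (3 * x**2 + x + 6))
    if n == 2:
        return Fr(-16 * x**2 * (39 * x**4 - 95 * x**3 + 66 * x**2 - 260 * x - 120))
    if n == 3:
        return Fr(64 * x**3 * (189 * x**6 - 3021 * x**5 + 9574 * x**4
                               - 12520 * x**3 + 19136 * x**2 - 2960 * x - 2208), 9)
    raise ValueError("only n = 1, 2, 3 are tabulated here")
```

With $\tau(1), \tau(2), \tau(3) = 1, -24, 252$, the values $T^{\dagger}_n(3)$ come out to $1728$, $-41472$, and $435456$, respectively.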
None of the $T_n(x)$ takes an integer greater than two to zero; consequently, none of the $\tau_m$ vanish for $m = 3, 4, ....$
Obviously, one basis of conjecture 7 is conjecture 6, but it is also a consequence of the following
For $T_n = T^{\star}_n(x), T^{\dagger}_n(x)$, or $T^{\diamond}_n(x)$, for each positive integer $n$, and for each integer $m$ greater than two, let the minimum distance from $m$ of any root of $T_n$ be denoted as $d(m,n)$. For fixed $m$, $d(m,n)$ is never zero and decays exponentially as $n$ increases. If $m$ is greater than three, then $d(m,n) > d(3,n)$. (We note, however, that for $T = T^{\star}$ and $n = 1$, the interpolating polynomial is identically equal to one and the corresponding root set is therefore empty.)
---
abstract: |
We say that a rational function $F$ satisfies the summability condition with exponent $\alpha$ if for every critical point $c$ which belongs to the Julia set $J$ there exists a positive integer $n_{c}$ so that $\sum_{n=1}^{\infty}
|(F^{n})'(F^{n_{c}}(c))|^{-\alpha}<\infty$ and $F$ has no parabolic periodic cycles. Let $\mmax$ be the maximal multiplicity of the critical points.
The objective is to study the Poincaré series for a large class of rational maps and establish ergodic and regularity properties of conformal measures. If $F$ is summable with exponent $\alpha< \frac{\dpoin(J)}{\dpoin(J)+\mmax}$, where $\dpoin(J)$ is the Poincaré exponent of the Julia set, then there exists a unique, ergodic, and non-atomic conformal measure $\nu$ with exponent $\dpoin(J)=\HD(J)$. If $F$ is polynomially summable with exponent $\alpha$, that is, $\sum_{n=1}^{\infty}
n |(F^{n})'(F^{n_{c}}(c))|^{-\alpha}<\infty$, and $F$ has no parabolic periodic cycles, then $F$ has an invariant measure absolutely continuous with respect to $\nu$. This also leads to a new result about the existence of absolutely continuous invariant measures for multimodal maps of the interval.
We prove that if $F$ is summable with an exponent $\alpha< \frac{2}{2+\mmax}$, then the Minkowski dimension of $J$ is strictly less than $2$ if $J\not=\C$, and $F$ is unstable. If $F$ is a polynomial or a Blaschke product, then $J$ is conformally removable. If $F$ is summable with $\alpha< \frac{1}{1+\mmax}$, then the connected components of the boundary of every invariant Fatou component are locally connected. To study the continuity of the Hausdorff dimension of Julia sets, we introduce the concept of uniform summability.
Finally, we derive a conformal analogue of Jakobson’s (Benedicks-Carleson’s) theorem and prove the external continuity of the Hausdorff dimension of Julia sets for almost all points $c$ from the Mandelbrot set with respect to the harmonic measure.
---
> [ **Résumé**]{}
>
> We say that a rational map $F$ satisfies the summability condition with exponent $\alpha$ if for every critical point $c$ belonging to the Julia set $J$ there is a positive integer $n_{c}$ such that $\sum_{n=1}^{\infty}|(F^{n})'(F^{n_{c}}(c))|^{-\alpha}<\infty$, and $F$ has no parabolic periodic points. Let $\mmax$ be the maximal multiplicity of the critical points of $F$.
>
> The objective is to study Poincaré series for a large class of rational maps and to establish ergodic properties and regularity of conformal measures. If $F$ is summable with exponent $\alpha< \frac{\dpoin(J)}{\dpoin(J)+\mmax}$, where $\dpoin(J)$ is the Poincaré exponent of the Julia set, then there exists a unique conformal measure $\nu$ with exponent $\dpoin(J)=\HD(J)$, which is ergodic and non-atomic. Moreover, $F$ has an invariant measure absolutely continuous with respect to $\nu$ provided that $\sum_{n=1}^{\infty}
> n |(F^{n})'(F^{n_{c}}(c))|^{-\alpha}<\infty$ (polynomial-type summability) and $F$ has no parabolic periodic points. This leads to a new result on the existence of absolutely continuous invariant measures for multimodal maps of the interval.
>
> We show that if $F$ is summable with exponent $\alpha< \frac{2}{2+\mmax}$, then the Minkowski dimension of $J$, when $J\not=\C$, is strictly smaller than $2$, and $F$ is unstable. If $F$ is a polynomial or a Blaschke product, then $J$ is conformally removable. If $F$ is summable with $\alpha< \frac{1}{1+\mmax}$, then every connected component of the boundary of each invariant Fatou component is locally connected. To study the continuity of the Hausdorff dimension of Julia sets, we introduce the concept of uniform summability.
>
> Finally, we deduce a conformal analogue of the theorem of Jakobson and Benedicks-Carleson, and show the external continuity of the Hausdorff dimension of Julia sets for almost every point of the Mandelbrot set with respect to harmonic measure.
Introduction
============
Overview
--------
The Poincaré series is a basic tool in the theory of Kleinian groups. It is used to construct and study conformal densities and dimensions of the limit set. Patterson and Sullivan proved that the critical exponent (the Poincaré exponent) is equal to the Hausdorff dimension of the limit set for Fuchsian and non-elementary geometrically finite Kleinian groups.
We focus on estimates of the Poincaré series in rational dynamics. From this perspective, we address the problem of regularity of conformal measures. We propose to study rational maps satisfying the summability condition, which requires, roughly speaking, only a polynomial growth of the derivative along critical orbits. Rational maps with parabolic periodic points are non-generic and for simplicity we exclude them from our considerations.
In the class of rational maps satisfying the summability condition, we prove the counterpart of Sullivan’s result that conformal measures with minimal exponent are ergodic (hence unique) and non-atomic. To pursue properties of the Poincaré series for rational maps, we introduce the notion of a restricted Poincaré series, which is also well-defined for points in the Julia set. This notion leads to new estimates, particularly implying that the convergence property of the Poincaré series is “self-improving.” This turns out to be an underlying reason for the regularity properties of conformal measures on Julia sets. Also, the divergence of the Poincaré series at the Poincaré exponent (the infimum of exponents for which the Poincaré series converges) is an immediate consequence. A different definition of the Poincaré exponent and its relation with various dynamical dimensions can be found in [@przytycki-exponent].
One of the central problems in the theory of iteration of rational functions is to estimate the Hausdorff dimension of Julia sets which are not the whole sphere and to investigate their fractal properties. It is believed that rational functions with metrically small Julia sets should possess a certain weak expansion property. We prove that the Poincaré exponent coincides with the Hausdorff dimension of the Julia set $J$, and that $\HD(J)<2$ unless $J=\CC$, for rational functions satisfying the summability condition with an exponent $\alpha< \frac{2}{\mmax+2}$. These results bear some relationship with a recent result of C. Bishop and P. Jones [@bijo] which says that for finitely generated Kleinian groups, if the limit set has zero area then the Poincaré exponent is equal to the Hausdorff dimension of the limit set.

Perhaps the most famous problem in the iteration theory of rational functions is whether a given system can be perturbed to a hyperbolic one. It is widely believed that this should be possible (the Fatou conjecture), at least in the class of polynomials. It is well known [@MSS] that if the Julia set of a polynomial is of Lebesgue measure zero then the polynomial can be perturbed to a hyperbolic one. In general, despite much effort, very limited progress has been achieved towards proving the Fatou conjecture. The real Fatou conjecture was proved in [@KozShenStrien]. We use the recent result of [@josm] to prove strong instability of polynomials satisfying the summability condition. This both strengthens and generalizes the results of [@przytycki-rohde-rigidity] in the class of polynomials and Blaschke products.

The summability condition was proposed in one-dimensional real unimodal dynamics [@nost] as a weak condition which would guarantee the existence of absolutely continuous measures with respect to the one-dimensional Lebesgue measure. M. Aspenberg proved in [@Aspenberg] that the class of rational functions which satisfy the Collet-Eckmann condition is of positive Lebesgue measure in the space of all rational functions of a given degree (see also a closely related paper of M. Rees [@rees]). From the point of view of measurable dynamics and ergodic theory, the existence of regular invariant measures is of crucial importance. A dynamical analogue of the one-dimensional Lebesgue measure on the Julia set is given by the “geometric measures” (conformal measures with minimal exponents). We study regularity and ergodic properties of conformal measures and determine whether the dynamics admits absolutely continuous invariant measures with respect to a given conformal measure. The problem is twofold and involves both dynamical and measure-theoretical estimates.
Another problem we look at is local connectivity of Julia sets and the existence of wandering continua. In order to pursue the continuity of the Hausdorff dimension of Julia sets we introduce a uniform convergence for rational maps satisfying the summability condition. Finally, we discuss applications of our theory to the study of complex unicritical polynomials $z^{d}+c$. In this setting, we formulate a complex analogue of the theorems of Jakobson and Benedicks-Carleson.
#### Non-uniform hyperbolicity.
The concept of non-uniform hyperbolicity is slightly vague and depends on varying backgrounds and motivations. It is difficult to find a single formulation of this property. Our approach emphasizes measure-theoretical aspects of the system, which should be hyperbolic on the average. Loosely speaking, given a non-hyperbolic system, one tries to make it hyperbolic by taking only pieces of the phase space and a high iterate of the map on each piece. If it is possible to find such pieces almost everywhere, we say that the system induces hyperbolicity or is non-uniformly hyperbolic with respect to a given measure. Of course, we are interested only in natural measures such as the Lebesgue measure when $J=\CC$ and geometric measures (see Definition \[def:mes\] and the following discussion) when $J\not =\C$. This approach originates from the work of Jakobson [@jak] on the abundance of absolutely continuous invariant measures and was followed in a similar way by Benedicks and Carleson [@Beca; @Becaa]. The concept of induced hyperbolicity also plays a central role in the proof of the real Fatou conjecture, see [@book].
For rational functions $F$ satisfying the summability condition we prove induced hyperbolicity with respect to the unique geometric measure on $J$ (Theorem \[theo:largescale\]). The induced hyperbolicity yields that the Julia set is of Lebesgue measure zero whenever $J\not=\CC$, see also [@BruinStrien]. In many cases (e.g. under the polynomial summability condition) we prove a stronger version of non-uniform hyperbolicity, namely the existence of a unique absolutely continuous invariant measure $\sigma$ with respect to the geometric measure. The measure $\sigma$ is ergodic, mixing, and has positive Lyapunov exponent.
Main concepts and statements of results
---------------------------------------
#### Summability conditions and maximal multiplicity.
Before stating our main theorems, we make a few technical remarks. For simplicity we assume that no critical point belongs to another critical orbit. Otherwise all theorems remain valid with the following amendment: a “block” of critical points $$F:~~c_1~{\mapsto}~\dots~{\mapsto}~c_2
~{\mapsto}~\dots~\dots
~{\mapsto}~c_k~,$$ of multiplicities $\mu_1,\,\mu_2,\,\dots,\,\mu_k$ enters the statements as if it were a single critical point of multiplicity $\prod\mu_j$.
If the Julia set is not the whole sphere, we use the usual Euclidean metric on the plane, changing the coordinates by a Möbius transformation so that $\infty$ belongs to a periodic Fatou component, and doing all the reasoning on a large compact set containing the Julia set. Alternatively (and also when $J=\CC$) one can use the spherical metric.
Define $\sigma_n\,:=\,\min_{c\in\Crit}\brs{\abs{\br{F^n}'(Fc)}}$, where the minimum is taken over all critical points in the Julia set whose forward orbits do not contain other critical points. Many properties will take into account $\mmax$ – the maximal multiplicity of critical points in the Julia set (calculated as above if there are any critical orbit relations).
Suppose that $F$ is a rational function without parabolic periodic points. We say that $F$ satisfies the [*summability condition*]{} with exponent $\alpha$ if $$\sum_{j=1}^{\infty}\,(\sigma_j)^{-\alpha}~<~\infty~.$$ If a stronger inequality $$\sum_{j=1}^{\infty}\,j\cdot(\sigma_j)^{-\alpha}~<~\infty~,$$ holds, then we say that $F$ satisfies the [*polynomial summability condition*]{} with exponent $\alpha$.
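For a concrete instance (our illustration, not part of the paper), the Chebyshev map $F(z)=z^2-2$ sends its finite critical point $c=0$ to $-2$ and then to the fixed point $2$, so $\sigma_n=\abs{(F^n)'(Fc)}=4^n$ and the summability condition holds for every $\alpha>0$. A short numerical sketch:

```python
# Illustrative sketch (not from the paper): the Chebyshev map F(z) = z^2 - 2
# has the critical orbit 0 -> -2 -> 2 -> 2 -> ..., and |(F^n)'(F(0))| = 4^n,
# so the partial sums of sigma_j^(-alpha) stay bounded for every alpha > 0.

def sigma(n, c=0.0):
    """|(F^n)'(F(c))| for F(z) = z^2 - 2, accumulated as a product of |F'|."""
    z, d = c * c - 2.0, 1.0          # z starts at the critical value F(c)
    for _ in range(n):
        d *= abs(2.0 * z)            # F'(z) = 2z
        z = z * z - 2.0
    return d

alpha = 0.1
partial_sums = [sum(sigma(j) ** (-alpha) for j in range(1, n + 1))
                for n in (10, 20, 30)]
```

Here the partial sums form a geometric-type series and stay below a fixed bound, in accordance with the summability condition.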
We recall that $F$ satisfies the [*Collet-Eckmann condition*]{} if there exist $C>0$ and $\Lambda>1$ such that $\sigma_{n}\geq C\Lambda^{n}$ for every positive $n$. Contrary to the Collet-Eckmann case, the summability condition allows strong recurrence of the critical points. In general, it is not true that the critical value of a summable rational map has infinitely many univalent passages to the large scale (a counterexample is given, e.g., by the quadratic Fibonacci polynomial); compare Theorem \[theo:largescale\].
#### Poincaré series.
We call a point $z$ admissible if it does not belong to $\bigcup_{n=0}^{\infty}F^{n}(\Crit)$. Take an admissible point $z$ and assume that $F$ has no elliptic Fatou components and $J\not = \hat{\C}$. We define the Poincaré series by $$\Sig_\delta(z)~:=
~\sum_{n=1}^\infty~\sum_{y\in F^{-n}z}\abs{\br{F^n}'(y)}^{-\delta}~.$$ The series converges for every sufficiently large $\delta$, and the infimum of the exponents $\delta$ for which it converges, denoted $\dpoin(z)$, is called the Poincaré exponent (of $F$ at the point $z$). By standard distortion considerations, if ${\cal F}$ is a component of the Fatou set, then the Poincaré exponents coincide for all admissible $z\in {\cal F}$, so we set $$\dpoin({\cal F}):=\dpoin(z)~.$$ We define the [*Poincaré exponent*]{} by $$\dpoin(J)~:=~\max~\brs{\dpoin({\cal F})\, }~,$$ the maximum being taken over the Fatou components (as Theorem \[theo:poincare\] shows, we can alternatively set $\dpoin(J):=\inf\{\dpoin(x):\,x\in\CC\}$). A well-known area estimate shows that $\dpoin(J)\leq 2$. A natural question arises whether the Poincaré exponents $\dpoin({\cal F})$ in different Fatou components coincide and whether $\dpoin(J)<2$. By analogy with the theory of Kleinian groups, we say that $F$ is of [*divergent*]{} ([*convergent*]{}) [*type*]{} if the Poincaré series $\Sig_\dpoin(z)$ diverges (converges) for every component ${\cal F}$ of the Fatou set and every admissible $z\in {\cal F}$. Clearly, hyperbolic rational maps satisfy the summability condition with any positive exponent $\alpha$. By Theorem \[theo:poincare\], they are all of divergent type. In general, rational maps of divergent type can be viewed as weakly hyperbolic systems. It would be interesting to study this property from an abstract point of view.
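As a toy illustration (ours, not the paper's), take the hyperbolic model $F(z)=z^2$, whose Julia set is the unit circle: for the admissible point $z=4$ every inverse branch satisfies $\abs{(F^n)'(y)}\approx 4\cdot 2^n$, and since there are $2^n$ branches at level $n$, the series $\Sig_\delta(4)$ converges exactly for $\delta>\dpoin=1$:

```python
import cmath

# Toy computation (illustrative, not from the paper) of the Poincaré series
# for F(z) = z^2 at z = 4: level n contributes 2^n terms with
# |(F^n)'(y)| close to 4 * 2^n, so the series converges iff delta > 1.

def level_sum(z, n, delta):
    """Sum over y in F^{-n}(z) of |(F^n)'(y)|^(-delta), for F(z) = z^2."""
    pts = [(complex(z), 1.0)]            # (preimage, |(F^k)'| along the branch)
    for _ in range(n):
        nxt = []
        for w, d in pts:
            r = cmath.sqrt(w)
            for y in (r, -r):            # the two inverse branches of z^2
                nxt.append((y, d * abs(2 * y)))   # chain rule: |F'(y)| = |2y|
        pts = nxt
    return sum(d ** (-delta) for _, d in pts)

series = sum(level_sum(4.0, n, delta=1.5) for n in range(1, 10))
```

The level sums decay geometrically for $\delta=1.5>1$, so the truncated series is already close to its limit.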
To address the above problems, we consider the Poincaré series as a function of $z\in \hat{\C}\setminus J$ and study its limiting behavior as $z$ approaches the Julia set. We use the concept of a restricted Poincaré series to study the dynamics of inverse branches of $F$ independently of whether their domains intersect $J$.
Let $\xx{}{z,\Delta}$ be the set of all preimages of $z$ such that the ball $B(z,\Delta)$ can be pulled back univalently along the corresponding branch.
The [*restricted Poincaré series*]{} for a number $\Delta>0$ and $z\in\CC$ is defined by $$\Sigma^{\Delta}_\delta(z)~:=
~\sum_{n=1}^{\infty}~\sum_{y\in F^{-n}(z)\cap\xx{}{z,\Delta}}\,\abs{\br{F^n}'(y)}^{-\delta}~.$$
The definitions of the Poincaré exponents assume that the complement of $J$ is non-empty. Should $J$ coincide with the whole sphere, we set $\dpoin(J)\,:=\,2$. We will prove that under the summability condition the convergence of the Poincaré series is an open property of the exponent; in particular, the series diverges at the critical exponent and $F$ is of divergent type. The proof will use the restricted Poincaré series and a generalized “area estimate.” Intriguingly, our technique allows us to compare different $\dpoin({\cal F})$ through perturbations of the Poincaré series near the critical points $c\in J$ of the maximal multiplicity. These points appear to be in the stability locus of $\dpoin(z)$.
\[theo:poincare\] Suppose that $F$ satisfies the summability condition with an exponent $$\alpha~<~\fr{\dpoin(J)}{\mmax+\dpoin(J)}~.$$ We have that
- the Poincaré series with the critical exponent $\dpoin(J)$ diverges for every point $z\in \CC$,
- if $z$ is at a positive distance from the critical orbits and $c$ is a critical point of the maximal multiplicity then $$\dpoin(z)~=\dpoin(c)~=~
\dpoin(J)~=~
\inf\left\{\dpoin(x):\,x\in\CC\right\}~.$$
If $J=\CC$ then by our convention, $\dpoin(J)=2$. Hence, the equality $$\dpoin(J)=
\inf\left\{\dpoin(x):\,x\in\CC\right\}~$$ of Theorem \[theo:poincare\] can be regarded as an alternative definition of the Poincaré exponent when $J=\CC$.
#### Conformal and geometric measures.
Conformal or [*[Patterson-Sullivan]{}*]{} measures are dynamical analogues of Hausdorff measures and capture important (hyperbolic) features of the underlying dynamics.
\[def:mes\] Let $F$ be a rational map with the Julia set $J$. A Borel measure $\nu$ supported on $J$ is called [*conformal with an exponent*]{} $p$ (or [*$p$-conformal*]{}) if for every Borel set $B$ on which $F$ is injective one has $$\nu(F(B))=\int_{B}|F'(z)|^{p}~d\nu~.$$
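To make the definition concrete, consider again the model $F(z)=z^2$ with Julia set the unit circle: arc-length measure is $1$-conformal, since $|F'|=2$ on the circle and $F$ doubles the length of any arc on which it is injective. A hypothetical numerical sanity check (our illustration):

```python
import cmath

# Sanity check (illustrative, not from the paper) of Definition [def:mes]
# for F(z) = z^2, whose Julia set is the unit circle: arc-length measure l
# satisfies l(F(B)) = int_B |F'| dl on arcs B where F is injective,
# i.e. it is 1-conformal, because |F'| = 2 on the circle.

def image_arc_length(a, b, steps=20000):
    """Polygonal approximation of the length of F(B), B = {e^{it}: a<=t<=b}."""
    pts = [cmath.exp(1j * (a + (b - a) * i / steps)) ** 2
           for i in range(steps + 1)]
    return sum(abs(pts[i + 1] - pts[i]) for i in range(steps))

def conformal_integral(a, b, p, steps=20000):
    """Midpoint Riemann sum of int_B |F'(e^{it})|^p dt (dl = dt on the circle)."""
    h = (b - a) / steps
    return sum(abs(2 * cmath.exp(1j * (a + (i + 0.5) * h))) ** p * h
               for i in range(steps))

lhs = image_arc_length(0.0, 1.0)            # length of the image arc
rhs = conformal_integral(0.0, 1.0, p=1.0)   # the conformal-change integral
```

With $p=1$ the two quantities agree; an exponent $p\neq 1$ visibly breaks the identity, as the definition requires.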
As observed in [@sullivan-rio], the set of pairs $(p,\nu)$, where $\nu$ is a $p$-conformal measure, is compact (in the weak-$*$ topology). Hence, there exists a conformal measure with the [*minimal exponent*]{} $$\dconf~:=~\inf\{p: \exists\; \mbox{a $p$-conformal measure on $J$}\}.$$ The minimal exponent $\dconf$ is also called the [*conformal dimension*]{} of $J$.
However, conformal measures may have extremely bad analytical properties; in particular, they can be atomic. In this context it is rather surprising that the most important conformal measures, namely those with minimal exponent, have many good analytical properties in the class of rational maps which satisfy the summability condition.
A hyperbolic Julia set has Hausdorff dimension strictly less than $2$ and a finite positive Hausdorff measure in its dimension, [@sullivan-rio]. In the hyperbolic case, there is a unique conformal measure which coincides with the normalized Hausdorff measure. For non-hyperbolic maps the situation is much more complicated. A construction of conformal measures for Kleinian groups was proposed by Patterson. The same construction was implemented by Sullivan for rational functions [@sullivan-rio]. In the Patterson-Sullivan approach conformal measures are constructed through the dynamics in the Fatou set. This external construction favors conformal measures with “inflated” exponents and can be briefly summarized as follows: Assume that $J\not =\hat{\C}$ and $F$ has no neutral cycles. Let $z\in \hat{\C}\setminus J$. If $\Sig_\dpoin(z)$ diverges then, for any $p>\dpoin(z)$, we consider $\nu_{p}$ defined by $$\label{equ:po1}
\nu_{p}:=
\frac{1}{\Sig_p}\;
\sum_{n=1}^{\infty}\sum_{y\in F^{-n}(z)}
\frac{1_{y}}{|(F^{n})'(y)|^{p}}~,$$ where $1_{y}$ is the Dirac measure at $y$. A weak limit of such atomic measures as $p\to\dpoin(z)$ defines a $\dpoin(z)$-conformal measure on the Julia set. If $\Sig_\dpoin(z)$ converges then the construction can be repeated in the same way after multiplying each term of $\Sig_{p}$ by a factor $h(|(F^{n})'(y)|^{-p})$, where $h$ is the Patterson function. The function $h$ tends to $1$ subexponentially but still makes the series $\Sig_p$ divergent for $p=\dpoin(z)$. Clearly, $$\dconf\leq \dpoin\;.$$
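For the model map $F(z)=z^2$ with base point $z=1$, the atoms of $\nu_p$ sit at the $2^n$-th roots of unity with weights proportional to $2^{-pn}$, and as $p$ decreases toward $\dpoin=1$ the mass spreads out toward the uniform measure on the circle. A depth-truncated, schematic version of this construction (our illustration, not the paper's argument):

```python
# Depth-truncated sketch (illustrative only) of the measures nu_p from
# (equ:po1) for F(z) = z^2 with base point z = 1: the n-th preimages of 1
# are the 2^n-th roots of unity y = exp(2*pi*i*k / 2^n), and on the unit
# circle |(F^n)'(y)| = 2^n, so each level-n atom carries weight 2^(-p*n).

def nu_p_atoms(p, depth):
    """Normalized atom weights {(n, k): weight} of nu_p, truncated at depth."""
    atoms = {}
    for n in range(1, depth + 1):
        w = 2.0 ** (-p * n)                  # |(F^n)'(y)|^{-p} on the circle
        for k in range(2 ** n):              # one atom per 2^n-th root of unity
            atoms[(n, k)] = w
    total = sum(atoms.values())              # truncated normalizer Sigma_p
    return {key: w / total for key, w in atoms.items()}

mu = nu_p_atoms(p=1.5, depth=6)
```

Within each level the atoms are equidistributed, which is why the weak limit here is the uniform measure on the circle, the $1$-conformal measure of this model.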
An alternative “internal” construction was carried out by Denker and Urbanski, [@denker-urbanski-sullconf; @denker-urbanski-existconf]. It uses increasing forward invariant subcompacts inside the Julia set, free of critical points, to distribute probabilistic Borel measures $\nu_{n}$ with Jacobians bounded respectively by $|F'(z)|^{p_{n}}$. In the limit, these approximating measures become conformal with exponent equal to $\sup p_{n}=\dconf$, [@denker-urbanski-sullconf; @denker-urbanski-existconf]. Recall that the [*hyperbolic dimension*]{} $\HH(J)$ of the Julia set $J$ is equal to the supremum of the Hausdorff dimensions of all hyperbolic subsets of $J$. An important geometric consequence of the Denker-Urbanski construction is $$\dconf=\HH(J)~\;.$$ If additionally $\dconf=\HD(J)$ then every $\dconf$-conformal measure $\nu$ is called a [*geometric measure*]{}. It is an important open question whether $\dconf=\HD(J)$ always holds, and in general very little is known about the existence and regularity properties of geometric measures (cf. [@shishikura; @przytycki-ce; @urbanski-bams]). It is not even known if a geometric measure has to be unique and non-atomic. A possible pathology is due to the presence of critical points in $J$ of convergent type. Indeed, if $\Sig_{p}(c)$ converges for $p\leq 2$ and $c\in \Crit$ then $\nu_{p}$ defined by (\[equ:po1\]) with $z=c$ is a $p$-conformal atomic measure.
We prove that if a rational function satisfies the summability condition with an exponent $\alpha< \frac{2}{2+\mmax}$, then, for every $p >\dconf$, there exists an atomic $p$-conformal measure with an atom at a critical point. In contrast, the geometric measures are non-atomic in this class. The majority of work in the area is based on the Denker-Urbanski construction. We come back to the origins and focus on the Patterson-Sullivan approach.
Every conformal measure of $F$ has an exponent $p\in [\dconf, \infty)$. The set of all such $p$ forms the [*conformal spectrum*]{} of $F$. We distinguish an [*atomic*]{} part of the spectrum, consisting of all $p\in [\dconf,\infty)$ for which there exist only atomic conformal measures, a [*continuous*]{} part, corresponding to exponents for which there exist only non-atomic conformal measures, and a [*mixed*]{} part (possibly empty), gathering all $p$ for which there exist both atomic and non-atomic $p$-conformal measures.
We say that the conformal spectrum of $F$ is hyperbolic if its mixed part is empty and its continuous part is equal to $\{\dconf\}$.
\[theo:4\] Suppose that a rational function $F$ satisfies the summability condition with an exponent $$\alpha<\frac{\dpoin(J)}{\mmax+\dpoin(J)}\;.$$ Then
- there is a unique non-atomic conformal measure $\mu$,
- the measure $\mu$ is ergodic and its exponent is equal to $\dconf=\dpoin(J)$,
- if $F$ has critical points in the Julia set then for every $\delta >\dconf$ there exists an atomic $\delta$-conformal measure supported on the backward orbits of the critical points,
- every conformal measure has no atoms outside of the set of the critical points of $F$.
In particular, every rational function which satisfies the summability condition with an exponent $\alpha<\frac{\dpoin(J)}{\mmax+\dpoin(J)}$ has a hyperbolic conformal spectrum. Since every hyperbolic rational map has no critical points in the Julia set, its conformal spectrum is trivial and consists of only one point $\{\dconf\}$.
Observe that if we know that every geometric measure is ergodic, then in fact it must be unique. Indeed, if there were two such measures $\nu_{1}$ and $\nu_{2}$, then $\nu_{3}:=\frac{\nu_{1}+\nu_{2}}{2}$ is obviously a non-ergodic geometric measure, a contradiction.
The problem of ergodicity of conformal measures was raised before. In [@przytycki-ce] it is proved that the number of ergodic components of a conformal measure $\nu$ for the so-called Collet-Eckmann rational maps is finite and does not exceed the number of critical points. This is not enough to conclude uniqueness of a geometric measure. The assertion of Theorem \[theo:4\] is valid only for conformal measures with minimal exponent. In general, if $p>\dconf$ and $F$ has more than one critical point of the maximal multiplicity, then there exist non-ergodic $p$-conformal measures. Indeed, by Theorem \[theo:poincare\], if $\mu(c)=\mmax$ then $\dpoin(c)=\dconf$. The measures $\nu(c)$ defined by (\[equ:po1\]) are $p$-conformal and their convex sum has exactly $\#\{c: \mu(c)=\mmax\}$ ergodic components.
#### Induced hyperbolicity.
Consider the set $\Jlso$ of all points $x\in J$ which $\epsilon$-frequently go to the large scale of radius $r$, namely: $$\exists\, n_j\to\infty:~F^{n_j}
\mbox{~is ~univalent~on ~} F^{-n_j}\br{B\br{F^{n_j}x,r}},
~\abs{\br{F^{n_{j+1}}}'(x)}\,<\,\abs{\br{F^{n_{j}}}'(x)}^{1+\epsilon}~,$$ where $F^{-n_j}\br{B\br{F^{n_j}x,r}}$ stands for the connected component containing $x$.
\[theo:largescale\] Suppose that a rational function $F$ satisfies the summability condition with an exponent $$\alpha~<~\fr{\dpoin(J)}{\dpoin(J)+\mmax}~.$$ Then there exists $r>0$ so that for every $\epsilon>0$, almost every point with respect to the unique $\dconf$-conformal measure $\nu$ goes $\epsilon$-frequently to the large scale of radius $r$.
#### Invariant measures.
The summability condition was introduced in real dynamics in [@nost] to study absolutely continuous invariant measures (acim for short) with respect to the Lebesgue measure. In the conformal setting, where conservative dynamics is often concentrated on fractal sets of zero area, the concept of an acim encounters some hurdles. In this situation, it is natural to study absolutely continuous invariant measures with respect to conformal measures. This approach was already adopted by Przytycki, who proved that a rational Collet-Eckmann map has an acim with respect to a $p$-conformal measure $\nu$, provided that $\nu$ is regular enough along critical orbits [@przytyk2]. This regularity was verified only in particular cases (cf. Tsujii’s condition in [@przytyk2]) and can be expressed by an integral condition (in [@przytycki-ce] a slightly weaker condition was considered): there exists $C>0$ so that for all $i>0$ $$\label{def:cond}
\sup_{c\in \Crit} \int
|z-F^{i}(c)|^{-p\br{1-\frac{1}{\mmax}}}\;d\nu< C\; .$$ The scope of validity of this condition is not known even in the Collet-Eckmann setting. We will call this condition the [*integrability condition*]{} with an exponent $\eta:= p(1-\frac{1}{\mmax})$.
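When $\nu$ is the area measure (the case $J=\CC$, $p=2$), the exponent is $\eta=2(1-\frac{1}{\mmax})<2$, and the integral above is finite because in polar coordinates around the critical value it reduces to $2\pi\int_0^1 r^{1-\eta}\,dr=\frac{2\pi}{2-\eta}$. A quick numerical confirmation of this polar-coordinate reduction (illustrative only):

```python
import math

# Illustrative check: against area measure, |z - v|^{-eta} is integrable on
# the unit disk around v iff eta < 2, since in polar coordinates the
# integral equals 2*pi * int_0^1 r^{1 - eta} dr = 2*pi / (2 - eta).

def disk_integral(eta, steps=200000):
    """Midpoint-rule value of 2*pi * int_0^1 r^(1 - eta) dr."""
    h = 1.0 / steps
    return 2.0 * math.pi * sum(((i + 0.5) * h) ** (1.0 - eta) * h
                               for i in range(steps))

vals = {eta: disk_integral(eta) for eta in (0.5, 1.0, 1.5)}
```

The numerical values match $\frac{2\pi}{2-\eta}$ for each $\eta<2$, while the integral would blow up as $\eta\to 2$.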
Surprisingly, we do not need to assume this condition to obtain an absolutely continuous invariant measure with respect to a non-atomic conformal measure, if $F$ satisfies the polynomial summability condition.
\[theo:inv\] Suppose that a rational function $F$ satisfies the polynomial summability condition with an exponent $$\alpha< \frac{\dpoin(J)}{\dpoin(J)+\mmax}~.$$ Then $F$ has a unique absolutely continuous invariant measure $\sigma$ with respect to the unique $\dpoin(J)$-conformal measure $\nu$. Moreover, $\sigma$ is ergodic, mixing, exact, and has positive entropy and Lyapunov exponent.
If the integrability condition is satisfied, then we have the following theorem.
\[theo:integ\] If $F$ satisfies the summability condition with an exponent $$\alpha < \frac{\dpoin(J)}{\dpoin(J)+\mmax}~,$$ and the unique $\dpoin(J)$-conformal measure $\nu$ is $\dpoin(J)(1-\frac{1}{\mmax})$-integrable, then $F$ has a unique and ergodic absolutely continuous invariant measure.
Observe that the $2$-dimensional Lebesgue measure becomes a geometric (conformal) measure when $J=\CC$. We conclude that in this case $F$ has an absolutely continuous invariant measure with respect to the $2$-dimensional Lebesgue measure if it satisfies the summability condition with an exponent $\alpha< \frac{2}{2+\mmax}$. We also study ergodic and regularity properties of the absolutely continuous invariant measures.
#### Multimodal maps.
Another important application of our techniques lies in the dynamics of analytic [*multimodal maps*]{} of a compact interval. Contrary to the unimodal case (maps with exactly one local extremum), there are few results about the existence of absolutely continuous invariant measures, [@BruinLuzzattoStrien]. The available results are either of perturbative nature (analogues of Jakobson’s result in the quadratic family, i.e. they require some transversality assumptions in one-parameter families) or impose some semi-unimodal conditions. One of the most general results [@tsuji] states that for a generic $C^{2}$ family of multimodal maps there exists a positive-measure set of parameters which correspond to Collet-Eckmann maps with acims. In [@nost] it was proved that if an S-unimodal map $f$ (i.e. unimodal with negative Schwarzian derivative) whose critical point has multiplicity $d$ satisfies the summability condition with the exponent $1/d$, then it has an absolutely continuous invariant measure with respect to the Lebesgue measure (recently a stronger result was proved in [@BruinShenStrien]). The Schwarzian derivative is defined for a $C^3$ function $f$ by $$\label{eq:schdef}
S(f)(x) := f'''(x)/f'(x) - \frac{3}{2}\left(f''(x)/f'(x)\right)^2\;,$$ provided $f'(x) \neq 0 $. A prototype $S$-unimodal map is given by $z^{2d}+c$, with $c\in \R$ and $d$ a positive integer.
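For instance, for the quadratic prototype $f(x)=x^2+c$ one has $f'=2x$, $f''=2$, $f'''=0$, so (\[eq:schdef\]) gives $S(f)(x)=-\frac{3}{2x^{2}}<0$ wherever $f'(x)\neq 0$. A direct check of the formula (our illustration, not part of the argument):

```python
# Illustrative check of the Schwarzian derivative formula (eq:schdef) for
# f(x) = x^2 + c: f' = 2x, f'' = 2, f''' = 0, so S(f)(x) = -3/(2 x^2) < 0
# away from the critical point x = 0.

def schwarzian(f1, f2, f3, x):
    """S(f)(x) computed from the derivatives f', f'', f''' evaluated at x."""
    return f3(x) / f1(x) - 1.5 * (f2(x) / f1(x)) ** 2

Sf = schwarzian(lambda x: 2.0 * x,     # f'
                lambda x: 2.0,         # f''
                lambda x: 0.0,         # f'''
                0.7)
```

The value agrees with the closed form $-\frac{3}{2x^{2}}$ and is negative, as required for $S$-unimodality.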
An absolutely continuous invariant measure provides useful information about the statistical behavior of orbits. We prove the following result for multimodal maps.
\[coro:uni\] Let $f$ be an analytic function of the unit interval with all periodic points repelling and negative Schwarzian derivative. If $f$ satisfies the summability condition with an exponent $$\alpha< \frac{1}{1+\mmax}~,$$ then $f$ has an absolutely continuous invariant measure $\sigma$ with respect to $1$-dimensional Lebesgue measure.
#### Fractal dimensions.
We also show that the Julia sets under consideration are “regular” fractals, in the sense that all possible dimensions coincide.
The Hausdorff dimension of a measure $\nu$ is defined as the infimum of Hausdorff dimensions of its Borel supports: $$\HD(\nu) := \inf\{\HD(A): \nu(A)=1\}~.$$ For the convenience of the reader we define and discuss basic properties of Hausdorff, Minkowski, and Whitney dimensions $\HD$, $\MD$, $\dwhit$ in Section 8.
\[theo:dims\] Suppose that a rational function $F$ satisfies the [*summability condition*]{} with an exponent $$\alpha~<~\fr{p}{\mmax+p}~,$$ where $p$ is any (e.g., the maximal) of the quantities in the formula below. Then $$\dpoin(J)=\dwhit(J)=\MD(J)=\HD(J)=\HH(J)=\HD(\nu)~,$$ where $\nu$ is the unique $\dconf$-conformal measure.
Under the hypothesis of Theorem \[theo:4\], the unique $\dconf$-conformal measure $\nu$ is a geometric measure.
A natural question arises: does the claim of Theorem \[theo:dims\] remain valid under any summability condition? This is an interesting question with possible applications towards establishing non-atomicity of conformal measures with minimal exponent.
\[theo:dim\] Suppose that a rational function $F$ satisfies the [*summability condition*]{} with an exponent $$\alpha~<~\fr{2}{\mmax+2}~.$$ Then the Julia set is either the whole sphere, or its Minkowski dimension is strictly less than $2$.
#### Unstable rational maps.
It is known (by an application of the $\lambda$-lemma, see [@MSS]) that an affirmative answer to the dynamical Ahlfors conjecture (a Julia set is either the whole sphere or of zero area) in the class of rational functions with $J\neq\CC$ implies the Fatou conjecture. If $F$ satisfies the summability condition with an exponent $\alpha\leq \frac{2}{2+\mmax}$ then, by Theorem \[theo:poincare\], the Julia set has zero area, and cannot carry a non-trivial Beltrami differential.
We say that a set $J$ is [*conformally removable*]{} if every homeomorphism $\phi$ of $\CC$ which is holomorphic off $J$ is in fact a Möbius transformation.
For Julia sets, this is a very strong property, which generally does not hold even for hyperbolic rational maps. A counterexample, which is topologically a Cantor set of circles, is constructed in §11.8 of the book [@beardon-book-iteration]. Using the recent work [@josm], we can establish conformal removability (also called holomorphic removability) of Julia sets for polynomials and Blaschke products satisfying the summability condition.
\[theo:rem\] If a polynomial $F$ satisfies the [*summability condition*]{} with an exponent $$\alpha~<~\fr{2}{\mmax+2}~,$$ then the Julia set is conformally removable.
More generally, the theorem above holds not only for polynomials, but also for rational functions whose Julia set is the boundary of one of the Fatou components.
The assumption above, that the Julia set coincides with the boundary of one of the Fatou components, is essential for conformal removability. A more flexible concept of dynamical removability might hold for all rational Julia sets.
We say that a Julia set $J_F$ is [*dynamically removable*]{} if every homeomorphism $\phi$ of $\CC$ which conjugates $(\CC,F)$ with another rational dynamical system $(\CC,H)$ and is quasiconformal off $J_F$ is globally quasiconformal.
\[theo:rem1\] If a rational function $F$ satisfies the summability condition with an exponent $$\alpha~<~\fr{1}{\mmax+1}~,$$ then the Julia set is dynamically removable.
\[theo:rem2\] Let a rational function $F$ satisfy the summability condition with an exponent $$\alpha~<~\fr{2}{\mmax+2}~.$$ Suppose that there is a quasiconformal homeomorphism $\phi$ of $\CC$ which conjugates rational dynamical systems $(\CC,F)$ and $(\CC,H)$.
- If $J\neq\CC$, then the Beltrami coefficient $\mu_{\phi}$ has to be supported on the Fatou set.
- If $J=\CC$, then either $\phi$ is a Möbius transformation, or $F$ is double covered by an integral torus endomorphism (i.e. it is a Lattès example). In the latter case the Beltrami coefficient $\mu_\phi$ lifts to a constant Beltrami coefficient on the covering torus.
If a rational function $F$ satisfies the summability condition with an exponent $$\alpha~<~\fr{2}{\mmax+2}~,$$ then $J$ carries no invariant line field, except when $F$ is double covered by an integral torus endomorphism.
Compare Corollary 3.18 in [@mcm].
#### Geometry, local connectivity, and non-wandering continua.
If $z^{\ell}+c$ is a unicritical polynomial with locally connected Julia set $J$ then the dynamics on $J$ has a particularly simple representation: it is semi-conjugate to multiplication by $\ell$ modulo $1$ on $[0,1)$. The quest for local connectivity of polynomial Julia sets dates back to the mid-eighties. Recently, local connectivity was obtained for all real unicritical polynomials $z^{\ell}+c$ with connected Julia sets, [@levin-vanstrien], and for Collet-Eckmann polynomials and Blaschke products with connected Julia sets, [@ceh].
The quasihyperbolic distance between points $y,z\in\Cal F$ is defined by $$\qhdist{y,z}~:=~\inf_{\gamma}~\int_{\gamma}~
\fr{\abs{d\zeta}}{\dist{\zeta,\partial\Cal F}}~,$$ where the infimum is taken over all rectifiable curves $\gamma$ joining $y$ and $z$ in $\Cal F$.
\[def:integral\] We will call a (possibly non-simply-connected) domain $\Cal F$ [*integrable*]{} if there exist $z_0\in\Cal F$ and an integrable function $H : \R_{+}\to \R_{+}$, $$\int_{0}^{\infty}H(r)\,dr<\infty~,$$ such that $\Cal F$ satisfies the following [*quasihyperbolic boundary condition*]{}: $$\dist{z,\partial\Cal F}~\leq ~H(\qhdist{z,z_0})~,$$ for every $z\in\Cal F$. The distance $\dist{z,\partial \Cal F}$ is computed in the spherical metric.
Hölder domains correspond to “exponentially fast integrable” domains with $H(r)=\exp(C-\epsilon r)$. We will show that the Fatou components of rational maps satisfying the summability condition are integrable domains.
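As a simple model (our illustration), in the unit disk one has $\dist{\zeta,\partial\Cal F}=1-|\zeta|$, so along a radius the quasihyperbolic distance from $0$ to $r$ is $\int_0^r\frac{dt}{1-t}=\log\frac{1}{1-r}$; inverting this gives exactly the Hölder-type bound $\dist{z,\partial\Cal F}=e^{-\qhdist{z,0}}$, i.e. $H(s)=e^{-s}$:

```python
import math

# Illustrative computation: in the unit disk, dist(zeta, boundary) = 1 - |zeta|,
# so the quasihyperbolic distance from 0 to r along the radius equals
# int_0^r dt / (1 - t) = log(1 / (1 - r)); hence the disk satisfies the
# quasihyperbolic boundary condition with H(s) = exp(-s) (a Hoelder domain).

def qh_radial_distance(r, steps=100000):
    """Midpoint-rule approximation of int_0^r dt / (1 - t)."""
    h = r / steps
    return sum(h / (1.0 - (i + 0.5) * h) for i in range(steps))

d = qh_radial_distance(0.9)            # exact value: log(10)
boundary_dist = 1.0 - 0.9              # should equal H(d) = exp(-d)
```

Since $H(s)=e^{-s}$ is integrable on $\R_+$, the disk is an integrable domain in the sense of Definition \[def:integral\].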
The concept of wandering subcompacta of connected Julia sets is directly related to the local connectivity of components of $J$. We say that a compact set $K$ is wandering if $F^{n}(K)\cap F^{m}(K)=\emptyset$ for all $m,n>0$ with $m\neq n$.
\[prop:integrable\] Let $F$ be a rational function which satisfies the summability condition with an exponent $$\alpha\leq \frac{1}{\mmax+1}~.$$ Then every periodic Fatou component $\Cal F$ is an integrable domain. If $F$ has a fully invariant Fatou component then every component of $J$ is locally connected and $F$ does not have wandering continua.
#### Uniform summability and continuity of dimensions.
We also study continuity properties of the Hausdorff dimension of the Julia set as a function of $F$: $F\mapsto \HD(J_{F})$. To this end, we consider a certain class of perturbations of a rational map $F$ which satisfy the summability condition in a uniform way. Since perturbations usually let critical points escape from the Julia set, we need to take into account critical points of $F$ which do not belong to $J_{F}$.
Given a rational function $F$ and an $\epsilon$-neighborhood $B_{\epsilon}(J)$ of its Julia set $J$, we define for every $c\in \Crit\cap B_{\epsilon}(J)$ an [*escape time*]{} $$E(\epsilon)=\inf\{j\in \N:\,F^{j}(c) \not \in B_{\epsilon}(J)\}\;.$$ If $F^{j}(c)\in B_{\epsilon}(J)$ for all $j$, we set $E(\epsilon):=\infty$.
\[def:unif\] Let $F$ be a rational function satisfying the summability condition with an exponent $\alpha$. We say that a sequence $(F_i)$ of rational functions converges $S(\alpha)$-uniformly to $F$ if:
1. $F_i$ do not have parabolic periodic orbits and tend to $F$ uniformly on the Riemann sphere,
2. there exist $M,\epsilon >0$ so that for every $i$ large enough and every $c\in \Crit_{F_i}\cap B_{\epsilon}(J_{F_i})$, $$\label{unisum}
\sum_{j=1}^{E(\epsilon)} |(F_i^{j})'(c)|^{-\alpha} < M\;.$$
3. $\#\Crit_{F}=\#\Crit_{F_{i}}$ for $i$ large enough (the critical points are counted without their multiplicities).
The notion of $S(\alpha)$-uniform convergence involves only those critical points $c$ of $F_i$ which “asymptotically” belong to the Julia set $J_{F}$. The $S(\alpha)$-uniform convergence demands a one-to-one correspondence between the critical points of $F$ and $F_{i}$ for large $i$.
Theorem \[theo:contin\] establishes continuity of the Hausdorff (Minkowski) dimension for the $S(\alpha)$-uniform convergence.
\[theo:contin\] Suppose that a rational function $F$ satisfies the summability condition with an exponent $$\alpha<\frac{\dpoin(J)}{\mmax+\dpoin(J)}\;,$$ and $(F_i)$ is a sequence of rational functions tending $S(\alpha)$-uniformly to $F$. Then, $$\lim_{i \rightarrow \infty}\HD(J_{F_i})=\HD(J_{F})\;.$$
#### Unicritical polynomials.
Let ${\cal M}_{d}$ be the connectedness locus of unicritical polynomials $f_{c}=z^{d}+c$, $${\cal M}_{d}= \{c:\, \sup_{n}|f_{c}^{n}(c)|< \infty\}\; .$$ When $d=2$, ${\cal M}_{2}$ is better known as the Mandelbrot set. By Shishikura’s theorem [@shishikura] it is known that the Hausdorff dimension as a function of $c\in \C\setminus {\cal M}_2$ does not extend continuously to $\partial {\cal M}_2$. Yet, typically with respect to the harmonic measure of $\partial {\cal M}_2$, a continuous extension of $\HD(\cdot)$ along hyperbolic geodesics is possible.
\[defi:radius\] The closure $\Gamma(c)$ of a hyperbolic geodesic in $\C\setminus {\cal M}_{d}$ which contains $\infty$ and a point $c\in \partial {\cal M}_{d}$ is called an external ray. If $\Gamma(c)\cap
\partial {\cal M}_{d}=\{c\}$ then we say that $\Gamma(c)$ terminates at $c$.
We use properties of the $S(\alpha)$-convergence with $\alpha<1/(1+d)$, and the results of [@graczyk-swiatek-ce; @smirnov-symbce] to deduce the following theorem.
\[theo:cont\] For almost all $c$ from ${\partial {\cal M}}_{d}$ with respect to the harmonic measure, we have $$\lim_{\Gamma(c)\ni c' \rightarrow c}\HD(J_{c'}) = \HD(J_{c})~.$$
Radial continuity of Hausdorff dimension for postcritically finite quadratic polynomials was established in [@mcmullen-radial]. The set of postcritically finite polynomials is of zero harmonic measure, [@graczyk-swiatek-ce; @smirnov-symbce].
Another consequence of our estimates and [@graczyk-swiatek-ce; @smirnov-symbce] is a conformal analogue of Jakobson and Benedicks-Carleson’s theorem [@jak; @Beca; @Becaa]. Let $f_{c}(z)=z^{d}+c$ and suppose that $f_{c}$ has a geometric measure. We call a probability measure $\mu$, supported on the Julia set of $f_{c}$, a [*Sinai-Ruelle-Bowen*]{}, or SRB for short, measure if it is a weak accumulation point of the measures $\mu_{n}$ equidistributed along the orbits $x, f_{c}(x),\dots, f_{c}^{n}(x)$ for $x$ in a set of positive geometric measure.
\[theo:end\] For almost all $c\in \partial {\cal M}_{d}$ with respect to the harmonic measure,
1. there exists a unique geometric measure $\nu_{c}$ of $z^{d}+c$ which is a weak limit of the normalized Hausdorff measures of $J_{c'}$, $c'\in \Gamma(c)$.
2. $\nu_c$ is ergodic and non-atomic,
3. $\HD(\nu_c)=\HD(J_{c})$,
4. $z^{d}+c$ has an invariant SRB measure with a positive Lyapunov exponent which is equivalent to the geometric measure $\nu_c$.
#### Acknowledgments.
The authors are grateful to the referee for carefully reading our manuscript and contributing numerous suggestions, which greatly improved the exposition.
Stanislav Smirnov would like to thank Zoltan Balogh, Chris Bishop, and Pekka Koskela for many useful discussions.
#### Notation and Conventions.
We will write $A\lesssim B$ whenever $A\le C B$ with a constant $C$ which is independent of the variables involved but may change from one formula to another. If $A\le CB$ and $B\le CA$ then we write $A\asymp B$. We will denote a ball of radius $R$ around $z$ by $B_R(z)$. We adopt the convention that $\sum_n(\omega_n)^{-\infty}<\infty$ means that the sequence $\omega_n$ tends to $\infty$ as $n\to\infty$.
For simplicity and the reader’s convenience we will write all the distortion estimates for the planar metric, where the Koebe distortion theorem has a more familiar formulation (Lemma \[lem:koeb\]). The estimates remain valid in the case of the spherical metric, with an appropriate version of the Koebe distortion theorem (which differs only by a multiplicative constant, since we work at scales smaller than some very small $R$).
Another general convention is the following: we call $F^{-n}(z), \dots, z$ a sequence of preimages of $z$ by $F$ if for every $1\leq j\leq n$, $$F(F^{-j}(z))=F^{-j+1}(z)\;.$$
Expansion along orbits {#sec:backexpan}
======================
Our goal is to estimate the derivative of $F^{n}$ at $z$ in terms of the summability condition and the injectivity radius of the corresponding inverse branch $F^{-n}$ at $F^{n}(z)$. This is attained by decomposing backward orbits into pieces which either closely follow the critical orbits or stay away from the critical points at a definite distance. We also provide a technical introduction to the theory of the Poincaré series for rational maps.
Proposition \[prop:fatouexpan\] can be regarded as a conditional version of induced hyperbolicity. In applications, we will use a stronger statement (the Main Lemma), which contains more technically detailed assertions. The proof of the Main Lemma will supply a procedure for decomposing any orbit into blocks of three different types, defined rigorously in Subsection \[sec:local\]. We will show that the derivative along each block of a given type is expansive up to an error term which is a function of a few dynamically defined parameters. The main difficulty in proving Proposition \[prop:fatouexpan\] is the possibility that the error terms accumulate. We will prove that, due to cancellations, the expansion prevails.
After initial preparations in Section \[sec:backexpan\], the proof of Proposition \[prop:fatouexpan\] will be concluded in Section \[sec:specif\].
\[prop:fatouexpan\] Let a rational function $F$ satisfy the [*summability condition*]{} with an exponent $\alpha~\le~1~.$ There exist $\epsilon>0$ and a positive sequence $\brs{\omega_n}$ summable with an exponent $-\beta\,:=\,-\fr{\mmax\alpha}{1-\alpha}\,$, meaning $\sum_{n}\,\br{\omega_n}^{-\beta}~<~\infty$ , so that for every point $z$ from $\epsilon$-neighborhood of the Julia set and every univalent branch of $F^{-n}$ defined on the ball $B_{\Delta}(z)$ with $\Delta<\epsilon$, $$\begin{array}{ll}
\abs{\br{F^{n}}'(F^{-n}z)}~>~\Delta^{1-\m{c}/\mmax}~\omega_n~&
{\it ~if~a~critical~point~}c\in B_{\Delta}(z)~,\\
\abs{\br{F^{n}}'(F^{-n}z)}~>~\Delta^{1-1/\mmax}~\omega_n~ &
{\it~otherwise~.}
\end{array}$$
The statement above is well-formulated even when $\alpha=1$, if we recall the convention that $\sum_n(\omega_n)^{-\infty}<\infty$ means that the sequence $\omega_n$ tends to $\infty$ as $n\to\infty$.
In the proof we actually obtain a specific form of $\omega_n$ in terms of $\{\sigma_n\}$ and Mañé’s lemma.
Suppose that the assumptions of Proposition \[prop:fatouexpan\] are satisfied. Then there is $\epsilon>0$ such that for any point $z$ at a distance $\Delta <\epsilon$ from the Julia set we have $$\abs{\br{F^{n}}'(F^{-n}z)}~\geq ~\Delta^{1-1/\mmax}~\omega_n~,$$ where $\omega_{n}$ is given by Proposition \[prop:fatouexpan\].
\[cor:nosiegel\] If $F$ satisfies the summability condition with an exponent $\alpha\leq 1$ then $F$ has neither Siegel disks nor Herman rings.
If $z$ belongs to an elliptic Fatou component ${\cal F}$ (Siegel disk or Herman ring) then for every preimage $F^{-n}z\in {\cal F}$, $\abs{\br{F^n}'(F^{-n}z)}\asymp1$. This contradicts Proposition \[prop:fatouexpan\] since $\lim_{n\to\infty}\omega_n=\infty$.
Preliminaries
-------------
#### Shrinking neighborhoods.
To control the distortion, we will use the method of shrinking neighborhoods, introduced in [@przytycki-ce] (see also [@ceh]). Suppose that $\sum_{n=1}^{\infty}\delta_{n}<1/2$ and $\delta_{n}>0$ for every positive integer $n$. Set $\del_n\,:=\,\prod_{k\le n}\br{1-\delta_k}$. Let $B_r$ be a ball of radius $r$ around a point $z$ and $\{F^{-n}z\}$ be a sequence of preimages of $z$. We define $U_{n}$ and $U'_{n}$ as the connected components of $~F^{-n}\,B_{r\del_n}~$ and $F^{-n}\,B_{r\del_{n+1}}$, respectively, which contain $ F^{-n}z$. Clearly, $$FU_{n+1}~=~U_n'~\subset~U_n~.$$ If $U_k$, for $1\le k\le n$, do not contain critical points then the distortion of $F^n\,:~U_n'\to B_{r\del_{n+1}}$ is bounded (Koebe distortion lemma below) by a power of $\fr1{\delta_{n+1}}$, multiplied by an absolute constant.
Since $\sum_n\delta_n\,<\,\fr12$, one also has $\prod_n\br{1-\delta_n}\,>\,\fr12$, and hence always $B_{r/2} \subset B_{r\del_{n}}$.
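The passage from $\sum_n\delta_n<\fr12$ to $\prod_n\br{1-\delta_n}>\fr12$ uses the elementary bound $\prod_{k\le n}(1-\delta_k)\ge 1-\sum_{k\le n}\delta_k$, which follows by induction:

```latex
% Induction step: if P_n := \prod_{k\le n}(1-\delta_k) \ge 1 - S_n,
% where S_n := \sum_{k\le n}\delta_k, then
\begin{aligned}
P_{n+1} \;=\; P_n\,(1-\delta_{n+1})
        \;\ge\; (1-S_n)(1-\delta_{n+1})
        \;=\; 1 - S_{n+1} + S_n\delta_{n+1}
        \;\ge\; 1 - S_{n+1}\,.
\end{aligned}
```

In particular $\del_n>1-\sum_k\delta_k>1/2$ for every $n$.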
#### The Koebe distortion lemma.
We will use the following version of the Koebe distortion lemma.
\[lem:koeb\] Let $f:B\rightarrow \C$ be a univalent map from the unit disk into the complex plane. Then the image $f(B)$ contains a ball of radius $\frac14|f'(0)|$ around $f(0)$. Moreover, for every $z\in B$ we have that $$\frac{(1-|z|)}{(1+|z|)^{3}} \leq
\frac{|f'(z)|}{|f'(0)|}\leq \frac{(1+|z|)}{(1-|z|)^{3}}~,$$ and $$|f(z)-f(0)|\leq |f'(z)|\frac{|z|(1+|z|)}{1-|z|}\;.$$
The first statement is Corollary 1.4, and the next inequality is Theorem 1.3 in [@pomme].
Let $M$ be a Möbius automorphism of the unit disk which maps $0$ onto $z$ and $-z$ onto $0$. By Theorem 1.3 in [@pomme] applied to $f\circ M$, we have that $$|f\circ M(-z)-f\circ M(0)|\leq |(f\circ
M)'(0)|\frac{|z|}{(1-|z|)^{2}}\;.$$ Since $M'(0)=1-|z|^{2}$, the last claim of the lemma follows.
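For completeness, one concrete choice of such an automorphism $M$ is

```latex
M(w) \;=\; \frac{w+z}{1+\bar z\, w}\,,\qquad
M(0)=z\,,\quad M(-z)=0\,,\qquad
M'(w)\;=\;\frac{1-|z|^{2}}{(1+\bar z\, w)^{2}}\,,
```

which indeed gives $M'(0)=1-|z|^{2}$.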
Technical sequences {#sec:techseq}
-------------------
Suppose that $F$ is a rational function which satisfies the summability condition with an exponent $\alpha$. To simplify calculations we will introduce three positive sequences $\{\alpha_n\},\,\{\gamma_n>1\},\,\{\delta_n\}$ so that the growth of the derivative of $F^{n}$ will be expressed in terms of $\gamma_{n}$, the corresponding distortion will be majorized by $\delta_{n}$, and the constants will be controlled through $\alpha_{n}$.
\[lem:techseq\] If a sequence $\{1/\sigma_n\}$ is summable with an exponent $\alpha\le1$: $\sum(\sigma_n)^{-\alpha}<\infty$, then there exist three positive sequences $\{\alpha_n\},\,\{\gamma_n\},\,\{\delta_n\}$, such that $$\begin{array}{rlr}
\lim_{n\to\infty}\alpha_n&=~\infty~,&\\
\sum_{n}(\gamma_n)^{-\beta} &<~1/(16\deg F\cdot\mmax)~,
&\beta\,:=\,\fr{\mmax\alpha}{1-\alpha}~~,\\
\sum_{n}\delta_n~~&<~1/2~,&
\end{array}$$ and $$\sigma_n~\ge~(\alpha_n)^2\,(\gamma_n)^{\mmax}\,/\,\delta_n~.$$
Suppose that $\alpha <1$. We set $$\delta_n'\,:=\,\br{\sigma_n}^{-\alpha}~,
~~\gamma_n''\,:=\,\br{\sigma_n}^{(1-\alpha)/\mmax}~,$$ and observe that $\sigma_n=(\gamma_n'')^{\mmax}/\delta_n'$, $\sum_n{\delta_n'}<\infty$, $\sum_n\br{\gamma_n''}^{-\beta}<\infty$. It is an easy exercise that there is a sequence $\brs{\alpha_n'}$, $\lim_{n\to\infty}\alpha_n'=\infty$, such that $\brs{\gamma_n'}$ defined by $$\gamma_n'\,:=\,\br{\sigma_n}^{(1-\alpha)/\mmax}
\,(\alpha_n')^{-2/\mmax}~$$ is still summable with the exponent $-\beta$. Evidently, $\sigma_n=(\alpha_n')^2(\gamma_n')^{\mmax}/\delta_n'$. Now, choose suitable constants $C_\gamma, C_\delta$ so that $\{\gamma_n\}:=\{C_{\gamma}\gamma_n'\}$ and $\{\delta_n\}:=\{C_{\delta}\delta_n'\}$ satisfy $$\sum_n{\delta_n}~<~1/2~,~~ \sum_n{\gamma_n}^{-\beta}~<~1/(16\deg F\cdot\mmax)~.$$ Set $C_\alpha:=\sqrt{C_\delta/C_\gamma^{\mmax}}$ and let $\alpha_n :=C_\alpha\alpha_n'$. Then $$\lim_{n\to\infty}\alpha_n~=~\infty~,
~~\sigma_n~\ge~(\alpha_n)^2\,(\gamma_n)^{\mmax}/\delta_{n} ~.$$ The case of $\alpha=1$ can be treated similarly.
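To illustrate the construction (this specific choice of $\sigma_n$ is only an example, not required by the lemma), take $\sigma_n=n^{2/\alpha}$, so that $\sum_n(\sigma_n)^{-\alpha}=\sum_n n^{-2}<\infty$. Then

```latex
\delta_n' \;=\; n^{-2}\,,\qquad
\gamma_n'' \;=\; n^{\,2(1-\alpha)/(\alpha\mmax)}\,,\qquad
(\gamma_n'')^{-\beta} \;=\; n^{-2}\,,
```

so $\{\gamma_n''\}$ is summable with the exponent $-\beta$ exactly because $\beta=\mmax\alpha/(1-\alpha)$. Choosing, say, $\alpha_n'=\log(n+1)$ only multiplies $(\gamma_n')^{-\beta}$ by the slowly growing factor $(\log(n+1))^{2\alpha/(1-\alpha)}$, which preserves summability.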
Constants and scales {#sec:const}
--------------------
A scale around critical points is given in terms of fixed numbers $R'\ll R\ll1$. We will refer to objects which stay away from the critical points and are comparable with $R'$ as the objects of the [*large scale*]{}. The proper choice of $R$’s is one of the most important elements in the analysis of expansion along pieces of orbits.
We impose the following conditions on $R$ and $R'$ (note that $\supF<\infty$, since we use the spherical metric):
[[(i)]{}]{} Any two critical points are at least $100R$ apart and there exists a constant $M>1$, depending only on $R$, so that if $\dist{Fy,F(\Crit)}<(1+\supF)R<1$ or $\dist{y,\Crit}<5R^{1/\mmax}$ then $$1/M~<~\abs{F'(y)}/\dist{y,c}^{\m{c}-1}~<~M~,$$ $$1/M~<~\dist{Fy,Fc}/\dist{y,c}^{\m{c}}~<~M~.$$
[[(ii)]{}]{} $R$ is so small that the first return time of the critical points in the Julia set to $\bigcup F^{-1}B_{R}(Fc_{i})$ is greater than a constant $\tau$ chosen so that $\alpha_k~>~16^{\mmax}~M^2~>~1~,$ for $k\ge\tau$.
[[(iii)]{}]{} $R'$ is so small that ${16~R'}~{\supF}~/~\inf_{n}\br{\alpha_n}^2~\le~R~M^{-1}~.$
[[(iv)]{}]{} $R'$ is so small that there are no critical points in the $2R'$-neighborhood of the Julia set inside the Fatou set.
##### Dictionary of constants.
For the sake of clarity we list here other constants and indicate their mutual dependence and places of introduction.
${\l2t}=\const(R',q)$, $K,R_{2t}=\const({\l2t},R')$, and $C(p)=\const({\l2t},R',p)$ in Lemma \[lem:2t\],
$C_{3t}=\const(R_{2t})$ in Lemma \[lem:3t\],
$L''=\const(C_{3t},R_{2t}),\,L'=\const(L'')$ in Subsection \[subsec:estimates\].
Types of orbits {#sec:local}
---------------
The general scheme of decomposing backward orbits into “expansive” blocks was introduced in [@ceh] for Collet-Eckmann dynamics. Despite many similarities, our setting is substantially less hyperbolic than that given by the Collet-Eckmann condition. Though it is not possible to use the estimates of [@ceh] directly, and new results are needed, the strategy to capture expansion is similar: we classify pieces of orbits, depending on whether they are close to critical points or not, and derive expansion estimates. We will obtain three different types of estimates, which when combined will yield “expansion” of the derivative along any orbit.
#### First type.
Our objective is to estimate expansion along pieces of the backward orbit which “join” two critical points, i.e. there is a disk in a vicinity of the first critical point which can be pulled back conformally along the orbit until its boundary hits the second critical point.
The formulation of Lemma \[lem:1t\] has to encompass the possibility of critical points with different multiplicities and hence it does not guarantee immediate expansion.
\[defi:1t\] A sequence $F^{-n}(z),\cdots, F^{-1}(z),z$ of preimages of $z$ is of the [*first type*]{} with respect to critical points $c_1$ and $c_2$ if there exists a radius $r<2R'$ such that
1. Shrinking neighborhoods $U_k$ for $B_r(z)$, $1\le k< n$, avoid critical points,

2. the critical point $c_{2}\,\in\,\dd U_n$,

3. the critical point $c_{1}\,\in\,F^{-1}B_{R}(Fz)$.
Note that given the sequence of preimages, the radius $r$ is uniquely determined by the condition 2). To simplify notation set $\mu_i\,:=\,\m{c_i}$, $d_2\,:=\,\dist{F^{-n}z,c_2}$, and $d_1\,:=\,\dist{z,c_1}$. Let $r_2'$ be the maximal radius so that $B_{r_2'}(F^{-n}z)\,\subset\,F^{-n}(B_{r/2}(z))$. For consistency, put $r_{1}:=r$.
\[lem:1t\] For any sequence $y=F^{-n}(z),\cdots,F^{-1}(z),z$ of preimages of the first type and any $\mum\le\mmax$ we have

1. $\abs{\br{F^n}'(y)}^{\mum}~>
~{\alpha_n}~(\gamma_n)^{\mmax}~2^{\mmax}
~~\fr{(d_2)^{\mu_2-1}}{(r_2')^{\mum-1}}
~\fr{(r_1)^{\mum-1}}{(r_1+d_1)^{\mu_1-1}}~$,

2. $(d_{2})^{\mu_{2}}<(\max(r_1,d_1))^{\mu_{1}}(\gamma_{n})^{-\mmax}$,

3. $\dist{Fy,Fc_2}~\leq ~R~.$
By the construction the map $F^{-(n-1)}:\,B_{\Delta_{n-1}\,r_1}(z)\,\to\,
U_{n-1}$ is conformal. The Koebe distortion Lemma \[lem:koeb\] implies that $$\begin{aligned}
\dist{Fy,Fc_{2}}&\le&\frac{(1-\delta_n)(2-\delta_n)}{\delta_{n}}
~\Delta_{n-1}\,r_1~\abs{\br{F^{n-1}}'(Fc_2)}^{-1}\nonumber\\
&\le&\frac{2\,R'}{\delta_{n}}
~{\supF}~\abs{\br{F^{n}}'(Fc_2)}^{-1}\nonumber\\
&\le&\frac{2\,R'}{\delta_{n}}
~{\supF}~{\delta_{n}}~/~(\alpha_n)^2~\le~R~,\label{eq:1t,1}\end{aligned}$$ and the third inequality follows by the choice of $R'$ (see [(iii)]{}).
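For the record, the last step of the chain uses condition [(iii)]{}, which gives ${16\,R'\,\supF}/{\inf_{n}\br{\alpha_n}^2}\le RM^{-1}$; hence

```latex
\frac{2R'}{\delta_n}\,\supF\,\frac{\delta_n}{(\alpha_n)^2}
\;=\;\frac{2R'\,\supF}{(\alpha_n)^2}
\;\le\;\frac{16R'\,\supF}{\inf_n(\alpha_n)^2}
\;\le\;\frac{R}{M}\;\le\;R\,,
```

since $M>1$.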
We prove the first inequality. The condition 2) of Definition \[defi:1t\] implies that $F^nc_2\in\dd B_{r\del_n}$. Hence $$\dist{F^nc_2,c_1}~\le~\dist{F^nc_2,z}\,+\,\dist{z,c_1}
~\le~r_1\,+\,d_1~.$$ Since $\dist{F^{n+1}c_2,Fc_1}$ is small (less than $(1+\supF)R$), by the choice of $R$ we have $\abs{F'(F^{n}c_2)}\le M\dist{F^nc_2,c_1}^{\mu_1-1}$. Thus $$\label{eq:1t,2}
\abs{\br{F^{n-1}}'\br{Fc_2}}~\ge
~\abs{\br{F^{n}}'\br{Fc_2}}~\fr1{M\dist{F^nc_2,c_1}^{\mu_1-1}}
~\ge~\fr{\sigma_n}{M(r_1+d_1)^{\mu_1-1}}~.$$
Considering the conformal map $F^{-(n-1)}:~B_{r\del_{n-1}}(z)\,\to\,\,U_{n-1}$ , by the Koebe distortion lemma \[lem:koeb\] we obtain that $$\abs{\br{F^{-(n-1)}}'\br{F^nc_2}}~\ge
~\fr{\delta_n}{\br{2-\delta_n}^3}~\abs{\br{F^{-(n-1)}}'\br{z}}~,$$ and therefore $$\label{eq:8oc,1}
\abs{\br{F^n}'(y)}~\ge~\fr{\delta_n}{8M}~\abs{\br{F^{n-1}}'\br{Fc_2}}
~\dist{y,c_2}^{\mu_2-1}~.$$ Together with the estimate (\[eq:1t,2\]) this yields $$\abs{\br{F^n}'(y)}~\ge~\fr{\delta_n}{8M^2}~\sigma_n
~\fr{(d_2)^{\mu_2-1}}{(r_1+d_1)^{\mu_1-1}}~.$$
By the Koebe lemma, the image of the map $F^{-n}:\,B_{r_1/2}(z)\,\to\, U_{n}$ contains the ball of radius $\fr18\,r_1\,\abs{\br{F^n}'(y)}^{-1}$ centered at $y$. Hence $$\label{eq:8oc,2}
\abs{\br{F^n}'(y)}~\ge~\fr{r_1}{8r_2'}~.$$
Combining the above estimate raised to the power $(\mum-1)$ with the previous one we obtain $$\begin{aligned}
\abs{\br{F^n}'(y)}^{\mum}&\ge&
\fr{\delta_n}{8^{\mum}M^2}~\sigma_n
~~\fr{(d_2)^{\mu_2-1}}{(r_2')^{\mum-1}}
~\fr{(r_1)^{\mum-1}}{(r_1+d_1)^{\mu_1-1}}\\
&=&
\fr{\delta_n}{8^{\mum}M^2}~(\delta_n)^{-1}
~(\alpha_n)^2~(\gamma_n)^{\mmax}
~~\fr{(d_2)^{\mu_2-1}}{(r_2')^{\mum-1}}
~\fr{(r_1)^{\mum-1}}{(r_1+d_1)^{\mu_1-1}}\\
&>&
{\alpha_n}~(\gamma_n)^{\mmax}~2^{\mmax}
~~\fr{(d_2)^{\mu_2-1}}{(r_2')^{\mum-1}}
~\fr{(r_1)^{\mum-1}}{(r_1+d_1)^{\mu_1-1}}~.\end{aligned}$$ The last inequality holds since, by the choice of $R$, we have $n\ge\tau$ and hence $\alpha_n>16^{\mmax}M^2$.
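Explicitly, the constants are absorbed as follows: since $n\ge\tau$, the choice of $\tau$ guarantees $\alpha_n>16^{\mmax}M^2$, so

```latex
\frac{(\alpha_n)^2}{8^{\mum}M^2}
\;\ge\;\frac{\alpha_n\cdot 16^{\mmax}M^2}{8^{\mmax}M^2}
\;=\;\alpha_n\,2^{\mmax}\,,
```

where we also used $\mum\le\mmax$ to bound $8^{\mum}\le 8^{\mmax}$.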
Note that using the inequality (\[eq:1t,2\]) we can modify the estimate (\[eq:1t,1\]) by writing $$\begin{aligned}
(d_2)^{\mu_2}&\le&M\,\dist{Fy,Fc_2}\\
&~\le~&
\frac{(1-\delta_n)(2-\delta_n)}{\delta_{n}}
~\Delta_{n-1}\,r_1~\abs{\br{F^{n-1}}'(Fc_2)}^{-1}\nonumber\\
&\le&M\,\frac{2\,r_1}{\delta_{n}}\cdot\fr{M(r_1+d_1)^{\mu_1-1}}{\sigma_n}\\
&\le&2^{\mmax}\,M^2\,\br{\max(r_1,d_1)}^{\mu_1}\,
(\alpha_n)^{-2}\,(\gamma_n)^{-\mmax}\\
&=&2^{\mmax}\,M^2\,(\alpha_n)^{-2}~\br{\max(r_1,d_1)}^{\mu_1}\,(\gamma_n)^{-\mmax}
~<~\br{\max(r_1,d_1)}^{\mu_1}\,(\gamma_n)^{-\mmax}~\end{aligned}$$ which completes the proof of the second inequality.
#### Second type.
A piece of a backward orbit is of the second type if there exists a neighborhood of size $R'$ which can be pulled back univalently along the backward orbit. Type two preimages yield expansion along pieces of orbits of a uniformly bounded length. In this “uniformly bounded” setting, type $2$ corresponds to pieces of backward orbits which stay at a definite distance from the critical points.
\[defi:2t\] Let $\dist{z,\J}\,\leq \,~R'/2$. A sequence $F^{-n}(z),\cdots, F^{-1}(z),z$ of preimages of $z$ is of the [*second type*]{} if the ball $B_{R'}(z)$ can be pulled back univalently along it.
\[lem:2t\] Let $\typeii{(z)}$ be the set of all preimages of the second type of some point $z$. Denote $n(y):=n$ if $F^ny=z$.
1. There exists a constant ${\l2t}>0$ so that the following holds: $$\inf_{y\in \typeii{(z)},\,n(y)\ge {\l2t}}~\abs{\br{F^n}'(y)}~>~6~.$$
2. If the Poincaré series $\Sigma_q(v)$ converges for some point $v$, then ${\l2t}$ can be chosen so that $$\sum_{y\in \typeii{(z)},\,n(y)\ge {\l2t}}\abs{\br{F^n}'(y)}^{-q}
~<~\fr1{36}~.$$
3. Once ${\l2t}$ is chosen there exist positive constants $K$ and $C(q)$ such that $$\begin{array}{ll}
\sum_{y\in \typeii{(z)},n(y)~\le~
{\l2t}}\abs{\br{F^n}'(y)}^{-q}~<~C(q)~,
&{\rm~and}\\
\abs{\br{F^n}'(y)}~>~K~
&{\rm~for~every~}y\in \typeii{(z)},~n(y)\leq {\l2t}~.
\end{array}$$
4. Once ${\l2t}$ is chosen there is a positive $R_{2t}$ such that for any point $z$ and its second type preimage $y\,=\,F^{-{\l2t}}z$ of order ${\l2t}$ we have $$B_{2R_{2t}}(y)~\subset~F^{-{\l2t}}\br{B_{R'}(z)}~.$$
The proof of the first part is standard and follows from a compactness argument. Suppose that the claim does not hold. Then there is an infinite collection of sequences of the second type $$F^{-n_{i}}(z_{i}), \dots \,F^{-1}(z_{i}), z_i$$ such that $n_{i} \rightarrow \infty$ and $\abs{\br{F^{n_i}}'(F^{-n_i}(z_{i}))}\,\leq\,6$. Consider the preimages $F^{-n_i}\br{B_{R'/2}(z'_{i})}\ni F^{-n_i}(z_{i})$, where $z'_i$ is the closest point to $z_i$ in $\J$. Without loss of generality we can assume that $R'\ll\diam\J$. By the Koebe distortion lemma \[lem:koeb\], any of these preimages contains a round ball around $F^{-n_i}(z'_{i})$ of radius larger than $\eta\,:=\,R'/(8\cdot6)$. Let $y$ be an accumulation point of the sequence $F^{-n_i}(z'_{i})\in\J$. By the construction, there is an increasing subsequence $\brs{k_j}$ of the sequence $\brs{n_i}$ such that the images of $B_{\eta/2}(y)$ under $F^{k_j}$ are contained in $B_{R'}(z)\,\not\supset\,\J$, and we arrive at a contradiction, since $y\in\J$ and the Julia set has [*the eventually onto*]{} property (see Theorem 1 in [@CG]).
To prove the second part, we recall again that if the Poincaré series for $v$ converges then $v$ must be a non-exceptional point, i.e. with preimages dense in the Julia set. We can fix finitely many of them, say $v_1,\dots,v_n$, so that they are $R'/4$-dense in $J$ and their Poincaré series will also converge. Then for any point $z$ with $\dist{z,\J}\,<\,~R'/2$ there is a point $v_j\in B_{3R'/4}(z)$. By the Koebe distortion lemma \[lem:koeb\], we can write $$\sum_{y\in \typeii{(z)}}\abs{\br{F^n}'(y)}^{-q}~\lesssim~
\Sigma_q(v_j)\,\le\,\max_{j}\Sigma_q(v_j)~\lesssim~\Sigma_q(v)
\,<\,\infty~,$$ and after a proper choice of ${\l2t}$ the second inequality of the lemma follows.
For the third part, we use the Koebe distortion lemma \[lem:koeb\], $$1~\grtsim~\diam(F^{-n}B_{R'}(z))~\ge~\fr14~R'~\abs{\br{F^{-n}}'(z)}
~=~\fr{1}{4}~R'~\abs{\br{F^{n}}'\br{F^{-n}(z)}}^{-1}~.$$ Both statements easily follow.
The fourth part is an immediate consequence of the Koebe distortion lemma.
#### Third type.
The third type gives more leeway in choosing blocks than types $1$ and $2$. In [@ceh] a more restrictive approach was used. Blocks of type $3$ connect the large scale with critical points.
\[defi:3t\] A sequence $F^{-n}(z),\cdots,F^{-1}(z),z$ of preimages of $z$ is of the [*third type*]{} with respect to the critical point $c_2$ if there exists a radius $r<2R'$ such that
1. Shrinking neighborhoods $U_k$ for $B_r(z)$, $1\le k< n$, avoid critical points,

2. the critical point $c_{2}\,\in\,\dd U_n$.
Note that given the sequence of preimages, the radius $r$ is uniquely determined by the condition 2).
The next lemma estimates the expansion along the third type preimages. To simplify notation set $\mu_2\,:=\,\m{c_2}$, $d_2\,:=\,\dist{F^{-n}z,c_2}$. Let $r_2'$ be the maximal radius so that $B_{r_2'}(F^{-n}z)\,\subset\,F^{-n}(B_{r/2}(z))$. For consistency, put $r_{1}:=r$.
\[lem:3t\] There exists $C_{3t}>0$ such that for any sequence of preimages of the third type $y=F^{-n}(z),\cdots,F^{-1}(z),z$ and any $\mum\le\mmax$ we have

1. $ \abs{\br{F^n}'(y)}^{\mum}~>~C_{3t}
~{\alpha_n}~(\gamma_n)^{\mmax}
~~\fr{(d_2)^{\mu_2-1}}{(r_2')^{\mum-1}}
~{(r_1)^{\mum-1}}~$,

2. $\dist{Fy,Fc_2}~\leq ~R~$.

If the sequence of the third type is preceded by a sequence of the second type of length ${\l2t}$ then we can substitute $R_{2t}$ for $r_1$ in the estimate [*1)*]{}.
The proof of the second inequality follows from ($\ref{eq:1t,1}$). Indeed, we did not use there the existence of a critical point close to $z$, so the proof works for the third type of preimages. Equation (\[eq:1t,1\]) implies the following estimate $$(d_2)^{\mu_2}~\le~M\,\dist{Fy,Fc_2}~\le
~M\,2R'\sup\abs{F'}/(\alpha_n)^2~,$$ and therefore $$\label{eq:3tdecayofd}
\dist{y,c_2}~\rightarrow~0~,~~~\mbox{as}~~ n\to\infty~.$$
The inequalities (\[eq:8oc,1\]) and (\[eq:8oc,2\]) from the proof of Lemma \[lem:1t\] are valid also for the third type preimages. So using the same notation, we can write $$\begin{aligned}
\abs{\br{F^n}'(y)}&\ge&
\fr{\delta_n}{8M}~\abs{\br{F^{n-1}}'(Fc_2)}~{(d_2)^{\mu_2-1}}\\
&\ge&\fr{\delta_n}{8M}~\fr{\sigma_{n}}{\supF}~{(d_2)^{\mu_2-1}}
~,\end{aligned}$$ and $$\abs{\br{F^n}'(y)}~\ge~\fr{r_1}{8r_2'}~.$$ Combining these estimates we conclude that $$\begin{aligned}
\abs{\br{F^n}'(y)}^{\mum}&\ge&
\fr{\delta_n}{8^{\mum}M}~\fr{\sigma_{n}}{\supF}
~~\fr{(d_2)^{\mu_2-1}}{(r_2')^{\mum-1}}
~{(r_1)^{\mum-1}}\\
&>&C_{3t}~{\alpha_n}~(\gamma_n)^{\mmax}
~~\fr{(d_2)^{\mu_2-1}}{(r_2')^{\mum-1}}
~{(r_1)^{\mum-1}}~,\end{aligned}$$ where $$C_{3t}~:=~\inf_n\brs{\alpha_n~\br{8^{\mmax} M\supF}^{-1}\,}~>~0~.$$ It remains to observe that the last assertion of the lemma is true since if a sequence of the third type is preceded by a sequence of the second type of length ${\l2t}$ then $r_1>R_{2t}$ by Lemma \[lem:2t\].
Specification of orbits {#sec:specif}
=======================
We will estimate expansion along the backward orbits by decomposing them into blocks of different types described in Section \[sec:backexpan\].
Lemma \[lem:2t\], which governs the expansion in the large scale, was stated in the proximity of the Julia set, and to apply it we will need the following lemma, which holds in the absence of parabolic points (see Lemma 5 in [@ceh]).
\[lem:comp\] There exists $\epsilon>0$ such that the backward orbit of any $z$ in the $\epsilon$-neighborhood of the Julia set stays in the $R'/2$-neighborhood of the Julia set.
This means that the assertions of Lemma \[lem:2t\] are valid for type $2$ preimages $F^{-n}(z)$, $n>0$, provided $z$ belongs to an $\epsilon$-neighborhood of the Julia set.
\[defi:adhoc\] We say that a backward orbit $F^{-n}(z),\dots, z$ is decomposed into a sequence of blocks if there exists an increasing sequence of integers $0=n_{0}< \dots < n_{k}=n$ so that for every $i=0,\dots,k-1$ the orbit $F^{-n_{i+1}}(z),\dots, F^{-n_{i}}(z)$ is of type $1$, $2$, or $3$. Given a pair of integers $0\leq r<l\leq k$, we say that the subsequence $F^{-n_{l}}(z),\dots, F^{-n_{r}}(z)$ yields expansion $M$ if $$|(F^{n_{l}-n_{r}})'(F^{-n_{l}}(z))|\geq M\;.$$ The point $F^{-n_{l}}(z)$ is called a [*terminal*]{} point of the subsequence.
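The point of such a decomposition is the chain rule: derivatives along consecutive blocks multiply, so per-block expansion estimates combine multiplicatively along the whole orbit,

```latex
\br{F^{n}}'\br{F^{-n}(z)}
\;=\;\prod_{i=0}^{k-1}\br{F^{\,n_{i+1}-n_{i}}}'\br{F^{-n_{i+1}}(z)}\,.
```

In particular, if the $i$-th block yields expansion $M_i$, then the full orbit yields expansion $\prod_i M_i$.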
Recall that $\sigma_n\,:=\,\min_{c\in\Crit}\brs{\abs{\br{F^n}'(Fc)}}$ was represented as the product $(\alpha_n)^2\,(\gamma_n)^{\mmax}\,/\,\delta_n~$. The sequence $\brs{\delta_n}$ will majorize the distortion in the shrinking neighborhoods construction, $\brs{\alpha_n}$ will swallow all the remaining constants, and $\brs{\gamma_n}$ will provide the desired expansion.
\[lem:main\] Let $\epsilon$ be supplied by Lemma \[lem:comp\]. Assume that a rational function $F$ satisfies the summability condition with an exponent $\alpha\leq 1$ and set $\beta=\mmax\alpha/(1-\alpha)$. Suppose that a point $z$ belongs to $\epsilon$-neighborhood of the Julia set $J$ and a ball $B_{\Delta}(z)$ can be pulled back univalently by a branch of $F^{-N}$. We claim that there exist positive constants $L'>L, K$ independent of $z, \Delta$, and $\epsilon$ such that the sequence $F^{-N}(z),\dots, z$ can be decomposed into blocks of types $1$, $2$, and $3$, and
- every type $2$ block, except possibly the leftmost one, has the length contained in $[L,L')$ and yields expansion $6$,
- the leftmost type $2$ block has the length contained in $[0,L]$ and yields expansion $K>0$,
- all subsequences of the form $1\dots13$, except possibly the rightmost one, yield expansion $$\gamma_{k_j}\dots\gamma_{k_1}\gamma_{k_0}~,$$ $k_i$ being the lengths of the corresponding blocks,
- the rightmost sequence of the form $1\dots 13$ yields expansion $$\begin{array}{ll}
\gamma_{k_j}\dots\gamma_{k_1}\gamma_{k_0}~\Delta^{(1-\mu(c)/\mmax)}&
{\it~if~a~critical~point~}c\in B_{\Delta}(z)~,\\
\gamma_{k_j}\dots\gamma_{k_1}\gamma_{k_0}~\Delta^{(1-1/\mmax)}&
{\it~otherwise~.}
\end{array}$$
Inductive decomposition of backward orbits {#sec:induc}
------------------------------------------
Let $z$ be a point which satisfies the assumptions of Lemma \[lem:main\]. We fix $N$ and a sequence of the preimages $F^{-N}(z),\dots,F^{-1}(z),z$. We will split this sequence into subsequences of the first, second, and third types.
Namely we will define by induction sequences $\{n_j\}$ and $\{z_{j}:= F^{-n_{j}}(z)\}$ such that $n_0=0,\,n_{m-1}>N-{\l2t},\,n_m=N$, and
[[I)]{}]{} For every $j>0$ the sequence $F^{-n_{j}}z,\dots,F^{-n_{j-1}}z$ is of the first, second, or third type.

[[II)]{}]{} For $j>0$, either the sequence $F^{-n_{j}}z,\dots,F^{-n_{j-1}}z$ is of the second type (case [IIa)]{}), or some critical value $F(c_j)\in B_{R}(Fz_j)$ (case [IIb)]{}).
Some additional properties will be discussed in the process of the construction.
#### Base of induction.
If the shrinking neighborhoods for $B_{2R'}(z_{0})$ do not contain critical points, we set $n_1:={\l2t}$, $z_1:=F^{-{\l2t}}z$, and the condition IIa) is satisfied for $z_1$. We start from $j=1$ and continue the inductive procedure as below.
By Lemma \[lem:comp\], $\dist{z_{j},J}<R'/2$, and hence sequences of the second type will yield desired expansion.
Otherwise we take $r:=\Delta$. By the assumption of Lemma \[lem:main\], the shrinking neighborhoods for $B_\Delta(z)$ omit the critical points. We increase $r$ continuously until a certain shrinking neighborhood $U_k$ hits some critical point $c$, i.e. $c\in\dd U_k$. It must happen for some $r=r_0$ with $\Delta<r_{0}<2R'$. Set $n_{1}:=k$. Then $z_1$ is a third type preimage of $z_0$, and the condition IIb) for $z_1$ is satisfied by Lemma \[lem:3t\].
#### Inductive procedure.
Suppose we have already constructed $z_j$.
##### Case IIa.
We enlarge the ball $B_r(z_{j})$ continuously increasing the radius $r$ from $0$ until one of the following conditions occurs:
1. for some $k$ the shrinking neighborhood $U_k$ for $B_{r}(z_{j})$ hits some critical point $c'$, $c'\in\dd U_k$,

2. the radius $r$ reaches the value of $2R'$.
In the case 1) we put $n_{j+1}:=n_j+k$. The condition I) is satisfied: $z_{j+1}$ is a third type preimage of $z_j$. The condition IIb) is satisfied by Lemma \[lem:3t\].
In the case 2) set $n_{j+1}:=n_j+{\l2t}$. Then $z_{j+1}$, with $\dist{z_{j+1},\J}<R'/2$, is a second type preimage of $z_j$ of length ${\l2t}$. Clearly, $z_{j+1}$ satisfies conditions I) and IIa).
##### Case IIb.
Suppose that we have IIb), but not IIa). Set $r=0$. The shrinking neighborhoods $U_{l}$ for $B_r(z_j)$, $l\le N-n_j$, do not contain critical points. We increase $r$ continuously until some domain $U_k$ hits a critical point $c'$, $c'\in\dd U_k$. This must occur for some $r<2R'$, since IIa) is not satisfied for $z_{j}$.
Let $n_{j+1}:=n_j+k$. Then the condition I) is satisfied: $z_{j+1}$ is the first type preimage of $z_j$. Lemma \[lem:1t\] implies the condition IIb).
#### Coding.
As a result of the inductive procedure, we have decomposed the backward orbit of the point $z$ into pieces of type $1$, $2$ and $3$. This gives a coding of backward orbits by sequences of $1$’s, $2$’s and $3$’s which are always read [*from right to left*]{}. Not all combinations of the entries are admissible here. By the construction, type $3$ is always preceded by type $2$ unless the coding sequence starts with $3$. For example we could have a sequence of the form $$\dots11111323222211113221113~,$$ $F$ acts from the left to the right and our inductive procedure has started from the right end. All pieces of the second type except maybe for the very last one have the length ${\l2t}$.
This decomposition of backward orbits into pieces of different types is by no means the only one satisfying the desired properties. On the contrary, in the next Subsection we will have to reshuffle the coding slightly to obtain the claim of Lemma \[lem:main\].
Estimates of expansion {#subsec:estimates}
----------------------
#### Growth of the derivative along sequences containing $1\dots 13$.
Consider a sequence $1\dots 132$ obtained in the inductive construction. We will estimate expansion along the part of the orbit corresponding to the shorter sequence $1\dots 13$. Suppose that in the sequence $1\dots132$ the consecutive pieces of type $1$ have the lengths $k_{i},~ i=1,\dots, j$, and the piece of type $3$ has the length $k_0$. Let $k=k_{0}+\dots+ k_{j}$. By Lemma \[lem:1t\] and Lemma \[lem:3t\] with $\mum:=\mmax$ we have that $$\begin{aligned}
\abs{\br{F^k}'(y)}^{\mmax}&>&
\prod_{i=1}^{j}
~{\alpha_{k_i}}~(\gamma_{k_i})^{\mmax}~2^{\mmax}
~\fr{d_{i+1}^{\mu_{i+1}-1}}{(r_{i+1}')^{\mmax-1}}
~\fr{r_i^{\mmax-1}}{(r_i+d_i)^{\mu_i-1}} \nonumber\\
&&~~~\cdot~~{}
C_{{3t}}~{\alpha_{k_0}}~(\gamma_{k_0})^{\mmax}
~~\fr{(d_1)^{\mu_1-1}} {(r_1')^{\mmax-1}}
~{(R_{2t})^{\mmax-1}}\nonumber\\
&>&\left(~C_{{3t}}~{\prod_{i=0}^{j}\alpha_{k_i}}\,(\gamma_{k_i})^{\mmax}\right)
~~\cdot~(R_{2t})^{\mmax-1}~\cdot
~\fr{d_{j+1}^{\mu_{j+1}-1}}{(r_{j+1}')^{\mmax-1}}\nonumber \\
&&~~~\cdot~~{}\left(
~\prod_{i=1}^{j}~2^{\mmax}
~\fr{d_{i}^{\mu_{i}-1}}{(r_{i}')^{\mmax-1}}
~\fr{r_i^{\mmax-1}}{(r_i+d_i)^{\mu_i-1}}\right). \label{equ:xia}\end{aligned}$$ Since $r_i'\,<\,\min(r_i,d_i)$ and $\mu_i\,\le\,\mmax$, we obtain that $$\label{eq:oc11,2}2^{\mmax}
~~\fr{d_{i}^{\mu_{i}-1}}{(r_{i}')^{\mmax-1}}
~\fr{r_i^{\mmax-1}}{(r_i+d_i)^{\mu_i-1}}~~>~~1~.$$ Also $r'_{j+1} < d_{j+1}$ and $$\fr{d_{j+1}^{\mu_{j+1}-1}}{(r_{j+1}')^{\mmax-1}}~>~1~.$$ Combining (\[equ:xia\]) and the estimates above, we obtain that $$~\label{equ:xia'}
\abs{\br{F^k}'(y)}^{\mmax} >
(R_{2t})^{\mmax-1}~\cdot~
C_{{3t}}~{\prod_{i=0}^{j}\alpha_{k_i}}\,(\gamma_{k_i})^{\mmax}~~.$$
If the rightmost sequence $1\dots 13$ is not preceded by $2$ then similarly as above, using Lemma \[lem:1t\], we obtain that $$\label{eq:oc10,2}
\abs{\br{F^k}'(y)}^{\mmax} >
~C_{{3t}}~{\prod_{i=0}^{j}\alpha_{k_i}}\,(\gamma_{k_i})^{\mmax}
~\cdot~\Delta^{\mmax-\mu(c)}~$$ if there is a critical point $c$ inside $B_{\Delta}(z)$. Otherwise we use Lemma \[lem:3t\] instead to infer that $$\label{eq:oc10,3}
\abs{\br{F^k}'(y)}^{\mmax} >
~C_{{3t}}~{\prod_{i=0}^{j}\alpha_{k_i}}\,(\gamma_{k_i})^{\mmax}
~\cdot~\Delta^{\mmax-1}~.$$
#### Derivation of the Main Lemma.
Consider a sequence of the preimages $F^{-m}(z),\dots,z$ with a coding sequence of the form $$\dots11111323222211113221113~.$$ If the length $k_{0}+\dots +k_j$ of a piece of the backward orbit of the form $11\dots13$ is large enough, then the expansion prevails over the accumulation of distortion and small-scale constants in the estimate (\[equ:xia'\]). Otherwise $k_{0}+\dots +k_j$ is uniformly bounded and the whole piece of the backward orbit will be treated as type $2$. Consequently, we will convert the code $1\dots132$ into $2$.
We proceed with estimates along the above lines to complete the proof of Lemma \[lem:main\]. Since $\lim_{i\rightarrow
\infty}\alpha_{i}=\infty$, there exists $\tau$ such that $\alpha_{i}\geq 8$ for $i\geq \tau$. Set $$\alpha'_n~:=~\inf\brs{\prod_{j}\alpha_{i_j}:
~{i_0+i_1+i_2+\dots\ge n};~i_1,i_2,\dots\ge\tau}~,$$ and observe that $\lim_{n\to\infty}\alpha'_n\,=\,\infty$. Now we choose large $L''$ so that for $n\ge L''$ one has $${\alpha'_n}\,C_{3t}\,(R_{2t})^{\mmax-1}\,\ge\,1\;.$$
A new coding of the preimages $F^{-m}(z),\dots,z$ is designed as follows: take a piece of the backward orbit corresponding to a subsequence $1\dots132$ of the length $k$. The consecutive pieces (counted from the right to the left) have the lengths $k_{i},~ i=0,\dots, j$. Consider two possible cases:
If $k<L''$ then $1\dots 132$ is replaced by $2$. The corresponding block of the preimages is indeed of type $2$ and has length $n\,:=\,{\l2t}+k$ with $n<L':=({\l2t}+L'')$. If $k\ge L''$ then $1\dots132$ remains unchanged and, by the estimate (\[equ:xia'\]) and the definition of $L''$, the derivative of $F^{k}$ at $y$ is greater than $\gamma_{k_{j}}\cdots\gamma_{k_{1}}$. The last pair of estimates of the Main Lemma \[lem:main\] follows immediately from $(\ref{eq:oc10,2})$ and $(\ref{eq:oc10,3})$ and the definition of $L''$.
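For completeness, the step from (\[equ:xia'\]) to this bound can be written out. Assuming, as anticipated by the definition of $\alpha'_n$, that the type $1$ lengths $k_1,\dots,k_j$ are at least $\tau$, we have $\prod_{i=0}^{j}\alpha_{k_i}\ge\alpha'_k$, and hence for $k\ge L''$

```latex
$$\abs{\br{F^k}'(y)}^{\mmax}
~>~(R_{2t})^{\mmax-1}~C_{3t}~\prod_{i=0}^{j}\alpha_{k_i}\,(\gamma_{k_i})^{\mmax}
~\ge~\alpha'_k\,C_{3t}\,(R_{2t})^{\mmax-1}\prod_{i=0}^{j}(\gamma_{k_i})^{\mmax}
~\ge~\prod_{i=0}^{j}(\gamma_{k_i})^{\mmax}~,$$
```

and taking $\mmax$-th roots gives $\abs{\br{F^k}'(y)}>\gamma_{k_0}\cdots\gamma_{k_j}$.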
The proof of the Main Lemma \[lem:main\] is completed.
#### Strong expansion along some sequences $11\dots1$.
Consider a subsequence $11\dots1$ obtained in the inductive construction. Suppose that $x$ is a terminal point of this subsequence and the consecutive pieces of type $1$ have lengths $k_{i},~ i=0,\dots,j$. Following the notation of Lemma \[lem:main\], we prove the lemma below, which will be used later in our investigation of conformal and invariant densities.
\[lem:proxi\] Let $G$ be the set of indices $j$ such that $d_j<r_j$ and let $\mmax'$ be the maximal multiplicity which occurs in the sequence $\brs{\mu_j:\,j\in G}$. Choose an index $j$ from the set $G':=\brs{j\in G:\,\mu_j=\mmax'}$ and denote $k=\sum_{i=0}^{j-1}k_i$. Then $$\abs{\br{F^k}'(x)}~>~\prod_{i=0}^{j-1}
\gamma_{k_i}~.$$
If $\mum:=\mmax'$ then Lemma \[lem:1t\] and Lemma \[lem:3t\] imply a counterpart of the estimate (\[equ:xia\]), $$\begin{aligned}
\label{equ:xia1}
\abs{\br{F^k}'(x)}^{\mmax'}&>&
\prod_{i=0}^{j-1}
~{\alpha_{k_i}}~(\gamma_{k_i})^{\mmax'}~2^{\mmax'}
~\fr{d_{i+1}^{\mu_{i+1}-1}}{(r_{i+1}')^{\mmax'-1}}
~\fr{r_i^{\mmax'-1}}{(r_i+d_i)^{\mu_i-1}} \nonumber\\
&>&\left(~{\prod_{i=0}^{j-1}\alpha_{k_i}}\,(\gamma_{k_i})^{\mmax'}\right)
~~\cdot~~\fr{d_{0}^{\mu_{0}-1}}{(r_{0}')^{\mmax'-1}}\\
&&~\cdot~{}\left(
~\prod_{i=0}^{j-1}~2^{\mmax'}
~\fr{d_{i}^{\mu_{i}-1}}{(r_{i}')^{\mmax'-1}}
~\fr{r_i^{\mmax'-1}}{(r_i+d_i)^{\mu_i-1}}\right)~
\cdot~\,\fr{r_j^{\mmax'-1}}{(r_j+d_j)^{\mu_j-1}}\;.\nonumber\end{aligned}$$ We cannot proceed exactly as in (\[equ:xia\]), since it was essential that $\mmax$ was the maximal multiplicity. Instead, we use the properties of $\mmax'$ and the set $G$ as follows:
[(i)]{} If $i\in G$ then $r_i'\,<\,\min(r_i,d_i)$ and $\mu_i\,\le\,\mmax'$. We see that the estimate (\[eq:oc11,2\]) holds with $\mmax$ replaced by $\mmax'$, $$2^{\mmax'}~~\fr{d_{i}^{\mu_{i}-1}}{(r_{i}')^{\mmax'-1}}
~\fr{r_i^{\mmax'-1}}{(r_i+d_i)^{\mu_i-1}}~~>~~1~.$$
[(ii)]{} If $i\notin G$ then $d_i\ge r_i>r_i'$ and the same estimate is still valid, $$2^{\mmax'}~\fr{d_{i}^{\mu_{i}-1}}{(r_{i}')^{\mmax'-1}}
~\fr{r_i^{\mmax'-1}}{(r_i+d_i)^{\mu_i-1}}~\ge~
2^{\mmax'}~\fr{d_{i}^{\mu_{i}-1}}{(2d_i)^{\mu_i-1}}
~\fr{r_i^{\mmax'-1}}{(r_{i}')^{\mmax'-1}}~>~1~.$$
[(iii)]{} By our choice, $j\in G'$. This means that $d_j<r_j$ and $\mu_j=\mmax'$. Hence, $$2^{\mmax'}\,\fr{r_j^{\mmax'-1}}{(r_j+d_j)^{\mu_j-1}}~\ge~
2^{\mmax'}~\fr{r_j^{\mmax'-1}}{(2r_j)^{\mmax'-1}}~>~1~.$$
[(iv)]{} By the definition, $r'_{0} < d_{0}$ and $$\fr{d_{0}^{\mu_{0}-1}}{(r_{0}')^{\mmax'-1}}~>~1~.$$
Inserting the estimates ${\rm(i)-(iv)}$ into $(\ref{equ:xia1})$ we obtain the claim of the lemma.
#### Proof of Proposition \[prop:fatouexpan\].
Fix a point $z$ sufficiently close to the Julia set and a branch of $F^{-n}$ at $z$. By Lemma \[lem:techseq\], $$\sum_{n}(\gamma_n)^{-\beta}~<~1/4~,~~~\beta\,
:=\,\fr{\mmax\alpha}{1-\alpha}~.$$ Let $\gamma_n'\,:=\,\inf\brs{\prod_{j}\gamma_{i_j}:\,i_0+i_1+i_2+\dots= n}$. By simple algebra, $$\begin{aligned}
\sum_{n}(\gamma_n')^{-\beta}
&<&\sum_{n}(\gamma_n)^{-\beta}+\br{\sum_{n}(\gamma_n)^{-\beta}}^2
+\br{\sum_{n}(\gamma_n)^{-\beta}}^3+\dots\\
&<&\fr14+\br{\fr14}^2+\br{\fr14}^3+\dots~=~\fr13~.\end{aligned}$$
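The "simple algebra" can be spelled out: since $\gamma_n'$ is an infimum of products over decompositions $i_0+i_1+i_2+\dots=n$, each $(\gamma_n')^{-\beta}$ is dominated by the sum of $\br{\gamma_{i_1}\cdots\gamma_{i_m}}^{-\beta}$ over all decompositions of $n$, and summing over $n$ collects all finite tuples, grouped by the number $m$ of factors:

```latex
$$\sum_{n}(\gamma_n')^{-\beta}
~\le~\sum_{m\ge1}~\sum_{i_1,\dots,i_m\ge1}
\br{\gamma_{i_1}\cdots\gamma_{i_m}}^{-\beta}
~=~\sum_{m\ge1}\br{\sum_{k}(\gamma_k)^{-\beta}}^{m}~,$$
```

which is the geometric series bounded above.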
Lemma \[lem:main\] gives a decomposition of the orbit $F^{-n}(z),\dots,F^{-1}(z),z$ into pieces of type $1$, $2$, and $3$, with the following properties (we restate them using new notation):
[(i)]{} each piece of the form $1\dots13$ of length $k$, except the rightmost one, yields expansion $\gamma_k'$,

[(ii)]{} the rightmost piece of the form $1\dots13$ of length $k$ yields expansion $\gamma_k'\Delta^{1/\mmax-1}$,

[(iii)]{} each piece of the form $2$, except possibly the leftmost one, has length $l\in[L,L')$ and yields expansion $6\ge\lambda^l$, where $\lambda\,:=\,6^{1/L'}\,>\,1$,

[(iv)]{} the leftmost piece of the form $2$ has length $l\in[0,L)$ and yields expansion $K$.
If we set $$\omega_n~:=~\inf
\brs{\,K\,\lambda^{k_0}\,\prod_{j\ge1}\gamma_{k_j}'\,:
~k_0+k_1+k_2+\dots\in{[n-{\l2t},n)}}~,$$ then properties (i)–(iv) above clearly imply $$\abs{\br{F^{n}}'(F^{-n}z)}~>~\Delta^{1-1/\mmax}~\omega_n~.$$ On the other hand, $$\begin{aligned}
\sum_{n}(\omega_n)^{-\beta}
&<& K^{-\beta}\,
\br{1+\lambda^{-\beta}+\lambda^{-2\beta}+\dots}\\
&&~~\cdot\br{\sum_{n}(\gamma_n')^{-\beta}+\br{\sum_{n}(\gamma_n')^{-\beta}}^2
+\br{\sum_{n}(\gamma_n')^{-\beta}}^3+\dots}\\
&<&K^{-\beta}\,\br{1-\lambda^{-\beta}
}^{-1}\,\br{\fr13+\br{\fr13}^2+\br{\fr13}^3+\dots}\\
&=&K^{-\beta}\,\br{1-\lambda^{-\beta}}^{-1}\,{\fr12}~<~\infty~,\end{aligned}$$ which completes the proof of the first inequality of Proposition \[prop:fatouexpan\]. The proof of the second inequality, when there is a critical point $c\in B_{\Delta}(z)$, is very much the same.
Summability along backward orbits
---------------------------------
Fix a point $z$ and a positive number $\Delta$. Let $\xx{}{z,\Delta}$ stand for the set of all preimages of $z$ such that a ball $B_{\Delta}(z)$ can be pulled back univalently along the corresponding branch. By Lemma \[lem:main\], every backward orbit of $z$ which terminates at $y \in \xx{}{z,\Delta}$ can be decomposed into blocks of type $1$, $2$, or $3$.
\[defi:symbol\] In the decomposition of the Main Lemma, let $x\in\xx{}{z,\Delta}$ be a point which starts a type $3$ block. Denote by $\typei(x|z)=\typei^{\Delta}(x|z)$ the set of all $y\in\xx{}{z,\Delta}$ which are the endpoints of type $1$ blocks preceded by exactly one type $3$ block beginning at $x$. For example, preimages of $x$ which are the endpoints of blocks $13$, $113, \dots$ belong to $\typei(x|z)$. Let $L'>L$ be the constants supplied by the Main Lemma. In the decomposition of the Main Lemma, let $x\in\xx{}{z,\Delta}$ be a point which starts a type $2$ block. Denote by $\typeii_{l}(x|z)$ and $\typeii_{s}(x|z)$, respectively, the sets of all “long” (of order $L'>n(y)\ge L$) and “short” (of order $n(y)<L$) type $2$ preimages $y$ of $x$.
Note that the definition of $\typei(x|z)=\typei^{\Delta}(x|z)$ depends on the choice of $\Delta$. The definitions of $\typeii_{l}(x|z)$ and $\typeii_{s}(x|z)$ also depend on the choice of $\Delta$; however, all estimates from Lemma \[lem:2t\] are independent of $\Delta$, so we simplify the notation by omitting $\Delta$.
Note also that $z$ is its own preimage of order zero (since formally $F^{0}(z)=z$), so we write, e.g. $y\in\typei(z|z)$ if $y\in\xx{}{z,\Delta}$ is the endpoint of a type $1$ block preceded by exactly one type $3$ block beginning at $z$.
We will drop $z$ from the notation of $\typei(x|z), \typeii_{s}(x|z),\typeii_{l}(x|z)$ whenever no confusion can arise.
\[lem:main2\] Let $\beta=\mmax\alpha/(1-\alpha)$. If a rational function $F$ satisfies the summability condition with an exponent $\alpha\leq 1$ then there exists $\epsilon>0$ so that for every point $z$ from the $\epsilon$-neighborhood of the Julia set $J$ and every set $\typei(x|z)=\typei^{\Delta}(x|z)$, $$\begin{array}{ll}
\sum_{y\in\typei(x|z)}\abs{\br{F^{n(y)}}'(y)}^{-\beta}~<~\fr1{3}
&{\it ~if~}x\neq z~,\\
\sum_{y\in\typei(x|z)}\abs{\br{F^{n(y)}}'(y)}^{-\beta}
~<~\fr13~\Delta^{\beta(1/\mmax-1)}~
&{\it~if~}x=z~, \\
\sum_{y\in\typei(x|z)}\abs{\br{F^{n(y)}}'(y)}^{-\beta}
~<~\fr13~\Delta^{\beta(\mu(c)/\mmax-1)}~
&{\it~if~}x=z{\it~and~a~critical~point~}c\in B_{\Delta}(z).
\end{array}$$
We will work with sequences $\alpha_{n}$, $\gamma_{n}$, and $\delta_{n}$ supplied by Lemma \[lem:techseq\]. Recall that $\sum_{n}(\gamma_{n})^{-\beta}\,<\,1/(16\deg F)$.
Observe that any point $y\in F^{-k}(z)$ has at most $4\deg F$ preimages of a given length which are of the first or the third type. In fact, since pull-backs to the critical values are univalent, there is only one way to hit a specific critical value after a particular number of steps, and thus only $\mu(c)$ ways to hit a critical point $c$, but $$\label{equ:deg}
\sum_{c}\mu(c)\,=\,\#\{\Crit\}+\sum_{c}(\mu(c)-1)
\,\le\,2(\deg F-1)+2(\deg F-1)\,<\,4\deg F~.$$ Therefore, for every sequence $k_0,k_1,\dots,k_m$ of positive integers there are at most $(4 \deg F)^{m+1}$ sequences $1\dots13$ with the corresponding lengths of the pieces of type $1$ and $3$. By the Main Lemma \[lem:main\], if $x\not= z$ then for every $y\in \typei(x)$ $$\abs{\br{F^{n(y)}}'(y)} \geq \gamma_{k_0}\gamma_{k_1}\dots\gamma_{k_m}$$ and $$\begin{aligned}
\label{eq:1tpoincare}
\sum_{y\in\typei(x)}\abs{\br{F^{n(y)}}'(y)}^{-\beta}
&<&\sum_{m,k_0,k_1,\dots,k_m}~(4 \deg F)^{m+1}
~\br{\gamma_{k_0}\gamma_{k_1}\dots\gamma_{k_m}}^{-\beta}\nonumber\\
&<&{\textstyle 4 \deg F\,\sum_{k}\gamma_{k}^{-\beta}
\,+\,(4 \deg F\,\sum_{k}\gamma_{k}^{-\beta})^2
\,+\,(4 \deg F\,\sum_{k}\gamma_{k}^{-\beta})^3
\,+\,\dots}\nonumber\\
&<&\fr14~+~\br{\fr14}^2~+~\br{\fr14}^3~+~\dots
~=~\fr1{3}~.\end{aligned}$$
If $x=z$ then the rightmost sequence begins with $3$. Similarly as before, using the estimates of the Main Lemma, we obtain that $$\begin{aligned}
\sum_{y\in\typei(z)}\abs{\br{F^{n(y)}}'(y)}^{-\beta}
&<~\fr13~\Delta^{\beta(1/\mmax-1)}~,
&\quad{\rm or~}\label{typeisum}\\
\sum_{y\in\typei(z)}\abs{\br{F^{n(y)}}'(y)}^{-\beta}
&<~\fr13~\Delta^{\beta(\mu(c)/\mmax-1)}~,
&\quad{\rm if~ a ~critical~point~}c\in B_{\Delta}(z)~.\nonumber\end{aligned}$$ This completes the proof of Lemma \[lem:main2\].
\[lem:bbb\] Assume that the Poincaré series with exponent $q$ is summable for some point $v\in \CC$. Then there exists $\epsilon>0$ so that for every point $z$ from the $\epsilon$-neighborhood of the Julia set $J$ and every set $\typeii_{l}(x|z)$ and $\typeii_{s}(x|z)$, $$\begin{array}{ll}
\sum_{y\in\typeii_{{l}}(x|z)}\abs{\br{F^{n(y)}}'(y)}^{-q}
~<~\fr1{36}~&,\\
\sum_{y\in\typeii_{{s}}(x|z)}\abs{\br{F^{n(y)}}'(y)}^{-p}
~<~C(p)&{\rm~for~any~}p~.
\end{array}$$
This is a reformulation of Lemma \[lem:2t\] in the new notation.
[Poincaré series]{} {#sec:Fatou}
===================
In this Section we analyze Poincaré series, in particular proving a self-improving property of the Poincaré exponent. Theorem \[theo:poincare\] is a direct consequence of this property. We recall that $\xx{}{z,\Delta}$ stands for the set of all preimages $F^{-n}z$, $n\in\N$, such that the ball $B_\Delta(z)$ can be pulled back univalently along the corresponding branch of $F^{-n}$.
\[prop:poincare\] Suppose that a rational function $F$ satisfies the summability condition with an exponent $$\alpha~<~\fr{q}{\mmax+q}~,$$ $q>0$, and the Poincaré series with exponent $q$ converges for some point $v$, $\Sigma_q(v)<\infty$. Then there exist $p<q$, $\epsilon>0$, and $C(\epsilon,p)$ so that for every point $z$ in the $\epsilon$-neighborhood of the Julia set $$\begin{array}{ll}
\sum_{y\in\xx{}{z,\Delta}}\abs{\br{F^n}'(y)}^{-p}~<~C~
\Delta^{p(\fr{\m{c}}{\mmax}-1)}~&
{~if~a~critical~point~}c\in B_{\Delta}(z)~,\\
\sum_{y\in\xx{}{z,\Delta}}\abs{\br{F^n}'(y)}^{-p}~<
~C~\Delta^{p(\fr1{\mmax}-1)}~&
{~otherwise~.}
\end{array}$$
\[cor:critexpon\] Assume that $F$ satisfies the summability condition with an exponent $$\alpha~<~\fr{q}{\mmax+q}~,$$ $q>0$, and there exists a point $z\in \CC$ so that the Poincaré series $\Sigma_q(z)$ converges. Then
- $\dpoin(w)\,<\,q$ if $w$ is at a positive distance from the orbits of the critical points,
- $\dpoin(c)\,<\,q$ if $c$ is a critical point of the maximal multiplicity.
If the distance of $w$ to the critical orbits in $J$ is positive then all preimages of $w$ belong to $\xx{}{w,\Delta}$ with $\Delta$ sufficiently small. This yields $\Sigma_p(w)<\infty$.
If $c$ is a critical point of the maximal multiplicity $\mu(c)=\mmax$ then $$\sum_{y\in\xx{}{c,\Delta}}\abs{\br{F^n}'(y)}^{-p}~<~C~
\Delta^{p(\fr{\m{c}}{\mmax}-1)}~=~C~,$$ and letting $\Delta$ go to zero we obtain that $$\sum_n\,\sum_{y\in F^{-n}c}\,\abs{\br{F^n}'(y)}^{-p}~<~C~.$$
If a rational function $F$ satisfies the summability condition with an exponent $\alpha\,<\,{2}/({\mmax+2})$ and its Julia set is not the whole sphere, then there exists $p<2$ so that the conclusion of Proposition \[prop:poincare\] holds.
If the Julia set is not the whole sphere and the Fatou set does not contain elliptic components then there exists a point $v\in \CC\setminus J$ such that the Poincaré series $\Sigma_2(v)$ converges. This is a classical area argument, see [@sullivan-rio]. It is enough to notice that there exists a small ball $B_\delta(v)$ free from the critical orbits and with preimages pairwise disjoint.
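The area argument can be sketched as follows (with $\mathrm{area}$ denoting spherical area and $B_y$ the univalent pull-back of $B_\delta(v)$ along the branch of $F^{-n}$ ending at $y$; the comparison constant comes from the Koebe distortion theorem): since the $B_y$ are pairwise disjoint,

```latex
$$\delta^{2}\,\sum_{n}\,\sum_{y\in F^{-n}v}\abs{\br{F^{n}}'(y)}^{-2}
~\lesssim~\sum_{n}\,\sum_{y\in F^{-n}v}\mathrm{area}(B_{y})
~\le~\mathrm{area}(\CC)~<~\infty~,$$
```

so the Poincaré series $\Sigma_2(v)$ converges.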
#### Proof of Theorem \[theo:poincare\].
Suppose that the Poincaré series $\Sigma_q(v)$ converges for a point $v\in \CC$. If $q\le\dpoin(J)$ then, by Corollary \[cor:critexpon\], there exists $\epsilon>0$ so that $\dpoin(J)\le q-\epsilon<\dpoin(J)$, a contradiction. This means that for $q=\dpoin(J)$, the Poincaré series $\Sigma_q(z)$ diverges for every point $z\in \CC$. Hence, $\dpoin(z)\geq \dpoin(J)$ for every $z\in \CC$.
By the definition of the Poincaré exponent $\dpoin(J)$, for any $\epsilon>0$ there exist $q<\dpoin(J)+\epsilon$ and a point $v\in \CC$ so that the Poincaré series $\Sigma_q(v)$ converges. By Corollary \[cor:critexpon\], for all points which are at a positive distance from the critical orbits and for all critical points of maximal multiplicity one has $\dpoin(z)<\dpoin(J)+\epsilon$, and Theorem \[theo:poincare\] follows.
$\Box$
#### Proof of Proposition \[prop:poincare\].
We use the inductive decomposition of backward orbits described in Section \[sec:induc\]. Let $z$ be a point from an $\epsilon$-neighborhood of the Julia set. By Lemma \[lem:bbb\], $$\sum_{y\in\typeii_{l}(x)}
\abs{\br{F^{n(y)}}'(y)}^{-q}~<~\fr1{36}~.$$ But there are at most $(\deg F)^{L'}$ points in $\typeii_{l}(x)$, since these preimages are all of order at most $L'-1$. Therefore, by the power means inequality (see e.g. Section 2.9 in [@halipo]), we have for $p<q$ sufficiently close to $q$ (namely $p>q-q\log2\br{L'\log\br{\deg F}}^{-1}\,$): $$\begin{aligned}
\sum_{y\in\typeii_{l}(x)}
\abs{\br{F^{n(y)}}'(y)}^{-p}
&<&\br{\sum_{y\in\typeii_{l}(x)}\abs{\br{F^{n(y)}}'(y)}^{-q}}^{\fr pq}
~\cdot~\br{\deg F}^{L'\fr {q-p}q}\\
&<&\br{\fr1{36}}^{\fr pq}
~\cdot~\br{\deg F}^{L'\fr {q-p}q} \\
&<&\fr1{6}~\cdot~\br{\deg F}^{L'\fr {q-p}q}
~<~\fr13~.\end{aligned}$$ Also by Lemma \[lem:bbb\] $$\sum_{y\in\typeii_{s}(x)}\abs{\br{F^{n(y)}}'(y)}^{-p}~<
~C~=~C(p)~.$$
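The power means inequality invoked above is the instance of Hölder's inequality with exponents $q/p$ and $q/(q-p)$: for positive numbers $a_y$ indexed by a set of at most $N$ elements and $p<q$,

```latex
$$\sum_{y}a_{y}^{-p}
~=~\sum_{y}a_{y}^{-p}\cdot1
~\le~\br{\sum_{y}a_{y}^{-q}}^{\fr pq}\,N^{1-\fr pq}~.$$
```

With $a_y=\abs{\br{F^{n(y)}}'(y)}$ and $N\le(\deg F)^{L'}$ this is exactly the factor $\br{\deg F}^{L'\fr{q-p}q}$ appearing above.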
We expand $\sum_{y\in\xx{}{z,\Delta}}\abs{\br{F^n}'(y)}^{-p}$ by grouping preimages of the same kind into clusters. We begin with $z$, obtaining preimages of three kinds: $\typei(z)=\typei^\Delta(z)$, $\typeii_{l}(z)$ and $\typeii_{s}(z)$. Points in $\typeii_{s}(z)$ are terminal, while preimages $y$ of the points in $\typei(z)$ and $\typeii_{l}(z)$ are divided further. We proceed in this fashion down the tree of preimages of $z$. If there is no critical point in $B_{\Delta}$ we obtain that $$\begin{aligned}
\sum_{y\in\xx{}{z,\Delta}}\abs{\br{F^n}'(y)}^{-p}
&=&
\sum_{z'\in\typeii_{s}(z)}\abs{\br{F^{n(z')}}'(z')}^{-p}
~+~\sum_{z'\in\typei,\typeii_{l}(z)}\abs{\br{F^{n(z')}}'(z')}^{-p}\\
&\cdot&
\Biggl(\,\sum_{z''\in\typeii_{s}(z')}\abs{\br{F^{n(z'')}}'(z'')}^{-p}
~+~\sum_{z''\in\typei,\typeii_{l}(z')}\abs{\br{F^{n(z'')}}'(z'')}^{-p}\\
&\cdot&
\Biggl(\,\sum_{z'''\in\typeii_{s}(z'')}\abs{\br{F^{n(z''')}}'(z''')}^{-p}
~+~\dots~~~\Biggr)\Biggr)\\
&\le&C\,+\,\br{\fr13+\fr13(\Delta)^{p(1/\mmax-1)}}
\br{C\,+\,\fr23\br{C\,+\,\dots}}\\
&=&C\,+\,\fr13\br{1+(\Delta)^{p(1/\mmax-1)}}C\br{1+\fr23+\br{\fr23}^2+\dots}\\
&=&\br{2+(\Delta)^{p(1/\mmax-1)}}C~<~3C\,(\Delta)^{p(1/\mmax-1)}~. \end{aligned}$$ Otherwise, we have a stronger estimate $$\sum_{y\in\xx{}{z,\Delta}}\abs{\br{F^n}'(y)}^{-p}
~<~3C\,(\Delta)^{p(\m{c}/\mmax-1)}~ .$$ This proves Proposition \[prop:poincare\].
$\Box$
Induced hyperbolicity and conformal measures {#sec:induced}
============================================
Inductive procedure with a stopping rule {#sec:stop}
----------------------------------------
We decompose a sequence of preimages $F^{-N}(z),\dots,F^{-1}(z), z$ into blocks of types $2$ and $1\dots13$ using the usual inductive procedure with the following new stopping rule: at the first occurrence of a type $2$ sequence we stop the induction. For the reader’s convenience we will describe the construction.
#### Construction.
We take shrinking neighborhoods $\{U_{k}\}$ for $B_{2R'}(z)$. If they do not contain the critical points we form one block of type $2$ of length $N$. Otherwise, we set $r=0$ and increase it continuously until some shrinking neighborhood $U_{k}$ hits a critical point $c$, $c\in \partial U_{k}$. This must happen for some $0<r< 2R'$. Set $r_{0}:=\dist{F^{k}(c),z}$, $n_{1}:=k$, and $z_{1}:=F^{-n_{1}}(z)$. Then $z_{1}$ is a third type preimage of $z$ and the ball $B_{r_{0}}$ can be pulled back univalently by $F^{N}$ along the backward orbit.
##### Inductive procedure.
Suppose we have already constructed $z_j=F^{-n_{j}}(z)$ which is of type $1$ or $3$. We enlarge the ball $B_r(z_{j})$ by continuously increasing the radius $r$ from $0$ until one of the following conditions is met:

1) for some $k\leq N-n_{j}$ the shrinking neighborhood $U_k$ for $B_{r}(z_{j})$ hits a critical point $c\in \Crit$, $c\in\dd U_k$;

2) the radius $r$ reaches the value $2R'$.
In the case 1) we put $n_{j+1}:=n_j+k$. Clearly, $z_{j+1}:=F^{-n_{j+1}}(z)$ is a type $1$ preimage of $z_{j}$. If 2) holds, we set $z_{j+1}:= F^{-N}(z)$ which is a type $2$ preimage of $z_{j}$. This terminates the construction in this case.
##### Coding.
As a result of the inductive procedure, we can decompose the backward orbit of a point $z$ into pieces of type $1$, $2$ and $3$. This gives a coding of backward orbits by sequences of $1$’s, $2$’s and $3$’s. By the construction, only the following three types of codings are allowable: $2, 1\dots 3, 21\dots1 3$. We recall that according to our convention, during the inductive procedure we put symbols in the coding from the right to the left.
We attach to every sequence of preimages of $z$ the sequence $k_l, \dots, k_0$ of the lengths of the blocks of preimages of a given type in its coding. Again our convention requires that $k_{0}$ stands always for the length of the rightmost block of preimages in the coding. Clearly, $k_{0}+\cdots + k_{l}=N$.
Most points go to the large scale infinitely often
----------------------------------------------
We recall that the Jacobian of a $\delta$-conformal measure $\nu$ is equal to $\abs{F'}^\delta$ (see Definition \[def:mes\]), $$d\nu(F(z))~=~\abs{F'(z)}^\delta\,d\nu(z)~.$$ Consider the subset of points in $J$ which infinitely often go to the large scale of size $R'$ with bounded distortion: $$\Jls \,:=\,\brs{z\in J:\, \exists~n_j\to\infty, \mbox{~with~}F^{n_j}
\mbox{~univalent~on ~} F^{-n_j}\br{B\br{F^{n_j}z,R'}}}~.$$ Note that the value of $R'$ is already fixed and does not depend on the point.
\[prop:largescale\] Suppose that a rational function $F$ satisfies the [*summability condition*]{} with an exponent $$\alpha~<~\fr{p}{\mmax+p}~.$$ Then for any $p$-conformal measure $\nu$ with no atoms at critical points, $\nu(J\setminus \Jls )=0$.
For every $x\in J$ and every $k\in \N$, we use the inductive procedure of Section \[sec:stop\] to decompose the sequence $x,\dots, F^k(x)$ of preimages of $F^{k}(x)$ into blocks of type $211\dots113$ or $2$. The procedure is stopped at the first occurrence of a type $2$ block, which might be of arbitrary length. In particular, it might be of length zero, which means that a block of type $2$ does not occur and the sequence ends with type $1$.
Denote by ${\cal E}_{x}$ the set of all codes obtained for $x$. Points in $\Jls $ are precisely those for which we get infinitely many different type $2$ sequences. Hence, if $x\in J\setminus \Jls $, then $x$ is a terminal point of an infinite number of sequences $2111\dots113$ with only a finite choice of type $2$ blocks. Let $k(x)$ be the minimal number for which infinitely many sequences from ${\cal E}_{x}$ have the same type $2$ block of length $k(x)$. Denote $X_k:=\brs{x:k(x)=k}$ and observe that the sets $\{X_{k}:k=0,1,\dots\}$ form a countable partition of $J\setminus \Jls$. If $\nu$ has no atoms at critical points and $F^{k}(X)$ is measurable then $$\nu(F^kX)=0\iff \nu(X)=0\iff\nu(F^{-k}X)=0.$$ Since $F^k(X_k)\subset X_0$ and consequently $J\setminus \Jls\subset\cup_k F^{-k}(X_0)$, it is sufficient to prove that $\nu(X_0)=0$. Without loss of generality we can exclude from $X_0$ all preimages of the critical points since they are of zero $\nu$-measure. Every point $x\in X_0$ must be terminal for infinitely many different subsequences $1\dots1$. Otherwise the orbit of $x\in X_0$ would pass near the critical points only finitely many times and hence its distance to the set $\Crit$ would be positive. Another consequence of the finiteness of $1\dots 1$ subsequences would be an unbounded length of type $3$ blocks in ${\cal E}_x$. The estimate (\[eq:3tdecayofd\]) yields $\dist{x,\Crit}=0$, a contradiction.
By very much the same argument, using that the distance from $x\in X_0$ to $\Crit$ is positive, we obtain that the length of the leftmost blocks of type $1$ in ${\cal E}_{x}$ must be bounded and therefore we can choose infinitely many sequences from ${\cal E}_{x}$ with the same leftmost block. Next, we consider the second block of type $1$ from the left and repeat the above argument to produce infinitely many sequences in ${\cal E}_{x}$ with the same two leftmost blocks. We continue in this fashion until we build by induction an infinite sequence $1111\dots $ terminating at $x$. Denote the corresponding parameters by $d_j,\,r_j,\,r_j',\,c_j,\,\mu_j,\,n_j$ with $j=0,-1,-2,\dots$ (we use negative integers to preserve the convention of enumerating from the right to the left).
Let $G$ be the set of indices $j$ such that $d_j<r_j$. The second inequality of Lemma \[lem:1t\] implies that if $j\notin G$ then $(d_{j+1})^{\mu_{j+1}}<(d_j)^{\mu_j}(\gamma_{n_j})^{-\mmax}$. This means that $G$ is infinite since otherwise $\lim_{j\to-\infty}d_j=\infty$. Now set $\mmax'$ to be the maximal multiplicity which occurs infinitely often in the sequence $\brs{\mu_j:\,j\in G}$. Let $X_0'(k)$ stand for the set of all points $x\in X_{0}$ such that no multiplicity larger than $k=\mmax'(x)$ occurs in the sequence $\brs{\mu_j:\,j\in G}$. We see that $$X_0\subset\bigcup_{k=2}^{\mmax}\bigcup_{i=0}^{\infty}
F^{-i}(X_0'(k))\;.$$ Therefore, it is sufficient to show that $\nu(X_0'(k))=0$ for $k=2,\dots,\mmax$. We fix $k=\mmax'$ and drop it from the notation of $X_0'(k)$.
Fix a point $x\in X_0'$ and take an index $j$ in the infinite set $G':=\brs{j\in G:\,\mu_j=\mmax'}$. Set $k=\sum_{i=-1}^{j}k_i$. Then, by Lemma \[lem:proxi\], $$\abs{\br{F^k}'(x)}^{p}~>~\prod_{i=j+1}^{-1}(\gamma_{k_i})^{p}~,$$ and hence $$d\nu(x)~=~\abs{\br{F^k}'(x)}^{-p}d\nu(F^kx)
~<~\prod_{i=j+1}^{-1}(\gamma_{k_i})^{-p}d\nu(F^kx)~.$$ Assuming that $\nu(X_0')$ is positive, we proceed similarly as in the proof of Proposition \[prop:poincare\] (note that parameters $k_i$, $j$ depend on $x$), $$\begin{aligned}
+\infty&=&\int_{X_0'}\#G'(x)d\nu(x)
~=~\int_{X_0'}\sum_{j\in G'(x)}1\, d\nu(x)\\
&<&\int_{X_0'}~~\sum_{j\in G'(x)}~~
\prod_{i=j+1}^{-1}(\gamma_{k_i})^{-p}~d\nu(F^kx)\\
&\le&\int_{J}~~\sum_{F^kx=z,j\in G'(x)}~~
\prod_{i=j+1}^{-1}(\gamma_{k_i})^{-p}~d\nu(z)\\
&\le&\int_{J}~\sum_{x\in\typei(z)}~
\prod_{i=j(x)+1}^{-1}(\gamma_{k_i(x)})^{-p}~d\nu(z)\\
&\le&\int_{J}~~\sum_{j,k_{-1},k_{-2},\dots,k_{j+1}}~(4 \deg F)^{\abs{j-1}}
~\br{\gamma_{k_{-1}}\gamma_{k_{-2}}\dots\gamma_{k_{j+1}}}^{-p}~d\nu(z)\\
&<&\int_{J}~~\left({\textstyle 4 \deg F\,\sum_{k}\gamma_{k}^{-\beta}
\,+\,(4 \deg F\,\sum_{k}\gamma_{k}^{-\beta})^2
\,+\,(4 \deg F\,\sum_{k}\gamma_{k}^{-\beta})^3
\,+\,\dots}\nonumber\right)d\nu(z)\\
&~<~&\int_{J}\br{\fr1{4}+\br{\fr14}^2+\dots}d\nu(z)~=~\int_{J}\fr1{3}d\nu(z)~<~+\infty~.\end{aligned}$$ This yields a contradiction and proves the proposition.
Conformal measures
------------------
The notion of conformal measures was introduced to rational dynamics by D. Sullivan following an analogy with Kleinian groups, see Definition \[def:mes\]. Loosely speaking, a probabilistic measure $\nu$, supported on the Julia set, is [*conformal with exponent*]{} $\delta$ if its Jacobian is equal to $\abs{F'}^\delta$, i.e. $$d\nu(F(z))~=~\abs{F'(z)}^\delta\,d\nu(z)~.$$ D. Sullivan proved in [@sullivan-rio] that for every Julia set there exists a conformal measure with an exponent $\delta\in(0,2]$. For hyperbolic Julia sets, there exists only one conformal measure, which coincides with the normalized $\HD(J)$-dimensional Hausdorff measure. In general, it is more difficult to describe analytical properties of conformal measures. For example, it is an open problem whether there exists a non-atomic conformal measure for a given rational function.
We recall that the conformal dimension $\dconf(J)$ of $J$ is defined as $$\dconf(J)~=~\inf\brs{\delta:~\exists~\delta{\rm -conformal~measure}}~.$$
A simple compactness argument (see [@sullivan-rio]) shows that the infimum is attained in the definition above. The following lemma is a version of the standard Patterson-Sullivan construction of conformal measures (cf. [@sullivan-rio]):
\[lem:existconf\] Let $z$ be either a critical point of the maximal multiplicity in the Julia set, or a point at a positive distance from the orbits of the critical points. Then there exists a $\dpoin(z)$-conformal measure.
If $z$ is a critical point, then for any $q>\dpoin(z)$ there is an atomic conformal measure supported on the preimages of $z$. To see this, normalize $$\sum_n~\sum_{y\in F^{-n}z}~\abs{\br{F^n}'(y)}^{-q}~1_y~,$$ where $1_{y}$ is a Dirac measure at $y$, to be a probabilistic measure.
If $z$ is a point at a positive distance from the critical orbits then standard arguments of [@sullivan-rio] apply.
\[lem:erg\] Suppose that there exist a $p$-conformal measure $\eta$ and a $q$-conformal measure $\nu$ which have no atoms at critical points. If $F$ satisfies the summability condition with an exponent $$\alpha~<~\fr{\max\brs{p,q}}{\mmax+\max\brs{p,q}}
~=~\max\brs{\fr{p}{\mmax+p},\fr{q}{\mmax+q}}~,$$ then $p=q$ and $\eta=\nu$.
If a ball $B$ of radius $r(B)$ is mapped with a bounded distortion to the large scale, i.e. $F^{n}(B)=A$, then $$\nu(B) ~\asymp~\br{\fr{r(B)}{\diam (A)}}^{q}\,\nu(A)
~\asymp~r(B)^{q}~.$$
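This comparison follows from conformality and bounded distortion: since $F^{n}$ is injective on $B$, conformality gives $\nu(A)=\int_B\abs{\br{F^{n}}'}^{q}\,d\nu$, while bounded distortion means $\abs{\br{F^{n}}'}\asymp\diam (A)/r(B)$ on $B$, so

```latex
$$\nu(A)~=~\int_{B}\abs{\br{F^{n}}'}^{q}\,d\nu
~\asymp~\br{\fr{\diam (A)}{r(B)}}^{q}\,\nu(B)~,$$
```

and since $A$ is of the fixed large scale (so that $\nu(A)$ and $\diam (A)$ are bounded away from $0$ and $\infty$), we obtain $\nu(B)\asymp r(B)^{q}$.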
Assume first that $p$ and $q$ are different, without loss of generality $p<q$. Then, by Proposition \[prop:largescale\], $\nu$-almost every point goes infinitely often to the large scale with bounded distortion. This implies that for $\nu$-almost every point $z$ there is a sequence of balls $B_{j}$ of radius $R_{j}\to 0$ centered at $z$ so that $$\eta(B_j)~\asymp~(R_j)^p~=~(R_j)^{p-q}~(R_j)^q~\asymp
~(R_j)^{p-q}~\nu(B_j)~.$$ Let ${\Cal B}$ be the collection of all balls of radius less than $r$ which are mapped with a uniformly bounded distortion to the large scale. By the Besicovitch covering theorem (see Section 2.7 in [@mattila]) there exists a subcollection $\Cal B'$ of $\Cal B$ so that $\nu$-almost all points of $J$ are contained in $\bigcup_{B\in {\Cal B}'}B$ and every point in $\C$ is covered by at most $P$ balls from $\cal B'$. Then $$\begin{aligned}
\eta(J)&\ge&P^{-1}\sum_{B\in{\Cal B'}}\eta(B)
~\gtrsim~\sum_{B\in{\Cal B'}}r(B)^{p-q}\,\nu(B)\\
&\ge&r^{p-q}\sum_{B\in{\Cal B'}}\nu(B)~\ge~r^{p-q}\,\nu(J)~\end{aligned}$$ which (for sufficiently small $r$) contradicts the fact that $\eta(J)=\nu(J)=1$.
Hence $p=q$. If $\nu$ and $\eta$ are different probabilistic measures then their difference $\nu-\eta$ has a non-trivial positive and negative part. After normalization, $(\nu-\eta)_-$ and $(\nu-\eta)_+$ become $q$-conformal measures which are mutually singular. Therefore, without loss of generality, we can assume that $\nu$ and $\eta$ are mutually singular.
If $E\subset J$ is an open set then, by the Besicovitch covering theorem, we can choose a cover $\Cal B'$ of $\nu$-almost all points of $E$ such that every point in $\C$ is covered by at most $P$ balls and no points outside $E$ are covered. Then $$\eta(E)~\ge~P^{-1}\sum_{B\in{\Cal B'}}\eta(B)~\asymp~
\sum_{B\in{\Cal B'}}(r(B))^{q}~\asymp~
~\sum_{B\in{\Cal B'}}\nu(B)~\ge~\nu(E)~$$ and consequently $\eta(E)~\gtrsim~\nu(E)$ for every Borel set $E$. This contradicts the mutual singularity of $\eta$ and $\nu$, and completes the proof.
\[cor:confnoatom\] If $F$ satisfies the summability condition with an exponent $$\alpha~<~\fr{\dpoin(J)}{\mmax+\dpoin(J)}~$$ then
1. There is a unique $\dpoin(J)$-conformal measure. It is ergodic and non-atomic. This is the only conformal measure with no atoms at the critical points. In particular, there are no non-atomic measures with exponents different from $\dpoin(J)$.
2. There are no conformal measures with exponents less than $\dpoin(J)$, i.e. $\dpoin(J)=\dconf(J)$.
3. For every $q>\dpoin(J)$ there exists an atomic $q$-conformal measure supported on the backward orbits of the critical points. No conformal measure has atoms at any other points.
<!-- -->
1. By Lemma \[lem:existconf\], there is a $\dpoin(J)$-conformal measure. It cannot have atoms since otherwise the corresponding Poincaré series converges and, by Corollary \[cor:critexpon\], $\dpoin(J)<\dpoin(J)$, a contradiction. Now, uniqueness and ergodicity follow from Lemma \[lem:erg\].
2. There are no atomic measures by Corollary \[cor:critexpon\] and no non-atomic measures by Lemma \[lem:erg\].
3. To obtain an atomic $q$-conformal measure, $q>\dpoin$, distribute atoms at all preimages of a critical point of the maximal multiplicity. If there is a conformal measure with an atom at a point whose orbit omits the critical points then we can easily produce a conformal measure which has no atoms at the critical points. By Lemma \[lem:erg\], the latter coincides with the unique $\dpoin$-conformal measure, which is non-atomic, a contradiction.
Corollary \[cor:confnoatom\] implies Theorem \[theo:4\].
Frequency of passages to the large scale
----------------------------------------
In this Subsection we give a proof of Theorem \[theo:largescale\]. Consider the set $\Jlso$ of all points $x\in J$ which $\epsilon$-frequently go to the large scale of size $R'$, namely: $$\exists\, n_j\to\infty:~F^{n_j}
\mbox{~univalent~on ~} F^{-n_j}\br{B\br{F^{n_j}x,R'}},
~\abs{\br{F^{n_{j+1}}}'(x)}\,<\,\abs{\br{F^{n_{j}}}'(x)}^{1+\epsilon}~.$$ Note that the value of $R'$ is already fixed and does not depend on a point.
\[prop:freq\] Suppose that a rational function $F$ satisfies the [*summability condition*]{} with an exponent $$\alpha~<~\fr{p}{\mmax+p}~$$ and for every $q>p$ there exists a point $v$ such that the Poincaré series $\Sigma_q(v)$ converges. If a $p$-conformal measure $\nu$ has no atoms at the critical points then $\nu(J\setminus\Jlso)=0$.
We say that a point $x$ goes to the large scale of size $R'$ univalently at time $k$ if $$\label{lscale}
F^{k}\mbox{~is~univalent~on~} F^{-k}\br{B\br{F^{k}x,R'}}~.$$ Assume that a point $x$ goes to the large scale at a time $m$ and apply the inductive procedure of Section \[sec:stop\] for the sequence $x,\dots,F^{m-1}(x)$. As a result we obtain a sequence of blocks of the form $21\dots13$, $21\dots1$, or $2$ (blocks of type $2$ might be of zero length, meaning that we end with a type $1$ block). Suppose that $y:=F^n(x)$ is the first point which belongs to a block of type $2$ or equivalently is a terminal point of the longest sequence of the form $1\dots13$ in the decomposition into blocks of the orbit $x,\dots, F^{m-1}(x)$. By the definition of $m$, a ball of radius $R'$ around $F^{m}(x)$ can be pulled univalently back to $x$. Hence, the same is true for a ball of radius $\Delta:=R'/(4M)$, $M=\sup_{y\in J}|F'(y)|$, around $z:=F^{m-1}(x)$. Therefore, $y\in\typei^\Delta(z)=\typei^\Delta(z|z)$ – recall that $\typei^\Delta(z)$ stands for the set of the first type preimages $y\in\xx{}{z,\Delta}$ of $z$, obtained in the course of the inductive procedure for $z$.
By Proposition \[prop:largescale\], we already know that $\nu$-almost all points in $J$ go to the large scale infinitely often so it is sufficient to show that $\nu(X)=0$ for $X:=\Jls\setminus\Jlso$.
By the definition of $X$, for every $x\in X$ there are two increasing sequences $\{n_j\}$ and $\{m_j:=m(n_j)\}$ such that $$\abs{\br{F^{m_j}}'(x)}\,\ge\,\abs{\br{F^{n_{j}}}'(x)}^{1+\epsilon}~.$$ Therefore, $\abs{\br{F^{m_j-n_j}}'(F^{n_j}(x))}\,\ge
\,\abs{\br{F^{n_{j}}}'(x)}^{\epsilon}$. Denote $y=y_j(x):=F^{n_j}x$ and $z=z_j(x):=F^{m_j}x$. Then $$\begin{aligned}
\abs{\br{F^{m_j}}'(x)}^{-p}
&=&\abs{\br{F^{n_j}}'(x)}^{-p}\,\abs{\br{F^{m_j-n_j}}'(y)}^{-p}\\
&\le&\abs{\br{F^{n_j}}'(x)}^{-(p+\delta)}
\,\abs{\br{F^{m_j-n_j}}'(y)}^{-(p-\delta/\epsilon)}~.\end{aligned}$$ Choose $\delta$ so small that $\alpha<\beta/(\mmax+\beta)$ for $\beta:=p-\delta/\epsilon$. Then, by Lemma \[lem:main2\], $$\label{equ:obs1}
\sum_{y\in\typei^\Delta(z)}
\abs{\br{F^{n(y)}}'(y)}^{-(p-\delta/\epsilon)}~<~\const(\Delta)~.$$
By the assumptions, the Poincaré series for $q:=p+\delta$ converges for a point $v$ whose preimages are dense in the Julia set. We can choose finitely many of them, say $v_1,\dots,v_n$, so that they are $R'/4$-dense in $J$ and their Poincaré series are also convergent. Now, for every point $z$ with $\dist{z,\J}\,<\,~R'/2$, there is a point $v_j\in B_{3R'/4}(z)$. By the Koebe distortion lemma \[lem:koeb\], we have that $$\sum_{y\in \typeii{(z)}}\abs{\br{F^n}'(y)}^{-q}~\lesssim~
\Sigma_q(v_j)\, \le\,\max_{j}\Sigma_q(v_j)~\lesssim~\Sigma_q(v)
\,<\,\infty~,$$ and $$\label{equ:obs2}
\sup_{y}\,\sum_{x\in\typeii(y)}
\abs{\br{F^{n(x)}}'(x)}^{-(p+\delta)}~<~\const~<~\infty~.$$ Combining the estimates (\[equ:obs1\]) and (\[equ:obs2\]), we obtain that $$\begin{aligned}
\infty\cdot\nu(X)&=&
\int_{X}\sum_{j}1\;d\nu(x)~=~
\int_{X}~\sum_{j}~~\abs{\br{F^{m_j}}'(x)}^{-p}\,d\nu\br{F^{m_j}x}\\
&=&\int_{J}~\sum_{j,x:\,z=z_j(x)}~~\abs{\br{F^{m_j(x)}}'(x)}^{-p}\,d\nu\br{z}\\
&\le&\int_{J}~\sum_{j,x:\,z=z_j(x)}~
~\abs{\br{F^{n_j}}'(x)}^{-(p+\delta)}\,
\abs{\br{F^{m_j-n_j}}'(y_j(x))}^{-(p-\delta/\epsilon)}~d\nu\br{z}\\
&\le&\int_{J}~\sum_{x,j:z=z_j(x)}\abs{\br{F^{n_j}}'(x)}^{-(p+\delta)}~
\sum_{y:\exists x,\,y=y_j(x),\,z=z_j(x)}
\,\abs{\br{F^{m_j-n_j}}'(y)}^{-(p-\delta/\epsilon)}d\nu\br{z}\\
&\le&\br{\sup_{y}\,\sum_{x\in\typeii(y)}
\abs{\br{F^{n(x)}}'(x)}^{-(p+\delta)}}
~\cdot~\int_{J}~
\sum_{y\in\typei(z)}\abs{\br{F^{n(y)}}'(y)}^{-(p-\delta/\epsilon)}
d\nu\br{z}\\
&\le&\const\,\int_{J}~\const(\Delta)~d\nu(z)~~<~~\infty~.\end{aligned}$$ Therefore $\nu(\Jls\setminus\Jlso)=0$ and the proposition follows.
#### Proof of Theorem \[theo:largescale\].
Theorem \[theo:largescale\] is a consequence of Theorem \[theo:4\] and Proposition \[prop:freq\].
Invariant measures {#sec:inv}
==================
Polynomial summability condition
--------------------------------
In this Subsection we begin the proof of Theorem \[theo:inv\], establishing the existence of an absolutely continuous invariant measure, provided the [*polynomial summability condition*]{} holds. We start with the geometric measure $\nu$ with exponent $\delta:=\dpoin(J)$, which exists by Theorem \[theo:4\]. It is sufficient to find $Z\in L^1(\nu)$ such that for all $n$ $$\label{eq:invmeas}
\frac{d\nu\circ F^{-n}}{d\nu}(z)~\lesssim~Z(z)~.$$ In fact, any weak subsequential limit of $$\frac1{n}\sum_{k=1}^n \nu\circ F^{-k}$$ is an invariant measure, and (\[eq:invmeas\]) implies that its density is dominated by $Z(z)$, and hence it is absolutely continuous with respect to $\nu$.
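The invariance of such a weak limit is a standard Krylov–Bogolyubov type observation, which we spell out for completeness. Writing $\mu_n:=\frac1{n}\sum_{k=1}^n \nu\circ F^{-k}$, we have $$\mu_n\circ F^{-1}~-~\mu_n~=~\frac1{n}\br{\nu\circ F^{-(n+1)}~-~\nu\circ F^{-1}}~,$$ and the right-hand side has total variation at most $2/n$. Hence any weak subsequential limit $\mu$ of $\brs{\mu_n}$ satisfies $\mu\circ F^{-1}=\mu$.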
To find $Z$ and establish (\[eq:invmeas\]), we proceed as follows. Given two points $y,\,z$ with $z=F^n(y)$ we define points $v=v(y,z),\,w=w(y,z)$ by the following construction, which is feasible for $\nu$-almost every $z$.
Since we are interested only in $\nu$-generic points, we can assume, by Proposition \[prop:largescale\], that $y$ goes to the large scale infinitely often. Let $n'$ be the first time $n'>n$ when $y$ goes to the large scale, and denote $w:=F^{n'-1}(y)$. By the choice of $n'$, the ball of radius $R'$ around $F(w)$ can be pulled univalently back to $y$. The same is of course true for the ball of radius $\Delta:=R'/(4M)$ around $w$, $M=\sup_{y\in J}|F'(y)|$. Now we carry out the inductive procedure from the Main Lemma \[lem:main\] for the preimages of $w$ of order $<n'$ until we get a block of type $2$. By the definition of $n'$, a code of the sequence $y,\,F(y),\dots,\,z,\dots,w$ is of the form $21\dots13$. Let $v=v(y,z):=F^l(y)$ be the point which starts the block of type $2$ (in other words, $v$ ends the block $1\dots13$). Note that $y\in\typeii(v)$ and $v\in\typei(w)=\typei^\Delta(w|w)$. We recall that $\typei^\Delta(w|w)$ stands for the set of first type preimages of $w$ belonging to $\xx{}{w,\Delta}$ obtained in the course of the inductive procedure for $w$.
Below, we assume that $l=l(v)$ is chosen so that $v=F^l(y)$, and $j=j(v)=n-l$. If $F^n(y)=z$, we denote $n(y,z):=n$. Recalling that $\delta=\dpoin(J)$ is the exponent of the conformal measure $\nu$, we can write $$\begin{aligned}
\frac{d\nu\circ F^{-n}}{d\nu}(z)
&=&\sum_{y\in F^{-n}(z)}~\abs{(F^n)'(y)}^{-\delta}\\
&=&\sum_{v:\,\exists y\in F^{-n}(z),\,v=v(y,z)}
~\sum_{y\in F^{-l}(v)}~\abs{(F^l)'(y)}^{-\delta}~\abs{(F^{n-l})'(v)}^{-\delta}\\
&\le&\sum_{v:\,\exists y\in F^{-n}(z),\,v=v(y,z)}
\br{\sup_{x}\sum_{y\in\typeii(x),\,F^{l}(y)=x}
~\abs{(F^l)'(y)}^{-\delta}}~\abs{(F^{n-l})'(v)}^{-\delta}\\
&\lesssim&\sum_{v:\,\exists y,\,v=v(y,z)}
~\abs{(F^{n(v,z)})'(v)}^{-\delta}~=:~Z(z)~.\end{aligned}$$ The estimate above is possible since for a fixed $n$ and $z$ every point $v\in F^{-j}z$ is counted only if it is $v(y,z)$ for some $y\in F^{-n}z$, and in this case $l=n-j$ is fixed (and independent of $y$). However, once the summation is done, $n$ disappears from the estimate and does not figure in the definition of $Z$. Note also that the summation set satisfies $$\brs{v:\,\exists n,y\in F^{-n}(z),\,v=v(y,z)}
\subset \brs{v:\,\exists w,\,y\in\typei(w),\,
y\in F^{-n}(z)\cap F^{-m}(w),\,m\ge n}~.$$ Thus it suffices to prove that $Z\in L^1(\nu)$, which we can do by writing $$\begin{aligned}
\int Z(z)d\nu(z)
&=&\int\sum_{v:\,\exists y,\,v=v(y,z)}
~\abs{(F^{n(v,z)})'(v)}^{-\delta}d\nu(z)\\
&\le&\int \sum_{v,z:\,\exists y,v=v(y,z),w=w(y,z)}~\abs{(F^{n(v,w)})'(v)}^{-\delta}d\nu(w)\\
&\le&\int \sum_{v\in\typei(w)}~n(v,w)~
\abs{(F^{n(v,w)})'(v)}^{-\delta}d\nu(w)\\
&\lesssim&\int \sum_{n}~n~
\sum_{i,k_1,\dots,k_i:\,k_1+\dots+k_i=n}~\br{\gamma_{k_1}\dots\gamma_{k_i}}^{-\delta}\\
&\lesssim&\int \sum_{n}~n~{\gamma_{n}}^{-\delta}~<~\infty~,\end{aligned}$$ the last inequality being true since $F$ satisfies the polynomial summability condition with an exponent $\alpha<\delta/(\delta+\mmax)$. We also use above that for given $v$ and $w$ there are at most $n(v,w)$ possible choices of $z$, namely $v,F(v),\dots,F^{n(v,w)-1}(v)=F^{-1}(w)$. This concludes the proof of the existence of an absolutely continuous invariant measure.
Ergodic properties {#sec:lyap}
------------------
In this Subsection we complete the proof of Theorem \[theo:inv\], establishing that the absolutely continuous invariant measure is unique, ergodic, mixing, exact, and has positive entropy and Lyapunov exponent. We do not require the polynomial summability condition of Theorem \[theo:inv\]: it is sufficient to assume the corresponding summability condition and the existence of an absolutely continuous invariant measure.
If an absolutely continuous invariant measure exists, its ergodicity and uniqueness follow immediately from the ergodicity of the geometric measure asserted by Theorem \[theo:4\].
#### Lyapunov exponents.
A [*Lyapunov exponent*]{} of $F$ at $z$ is defined as $$\chi(z):=\lim_{n\rightarrow \infty} \frac{1}{n}\log|(F^{n})'(z)| ~ ,$$ provided that the limit exists. A [*Lyapunov exponent*]{} of an invariant measure $\sigma$ is defined as $\chi_{\sigma} =\int \log |F'|\,d\sigma$. Birkhoff’s ergodic theorem implies that if $\sigma$ is ergodic then for almost every point $z$ with respect to $\sigma$ the Lyapunov exponent $\chi(z)$ exists and is equal to $\chi_{\sigma}$. The next lemma is based on standard reasoning (see e.g. [@demelo-vanstrien]).
Let $\nu$ be a geometric measure of a rational function $F$ which satisfies the summability condition with an exponent $$\alpha<\frac{\dpoin(J)}{\dpoin(J)+\mmax}~.$$ Suppose that $\sigma$ is an absolutely continuous invariant measure with respect to $\nu$. Then $\sigma$ has positive entropy and Lyapunov exponent and for almost every point $z$ with respect to $\nu$, $$\chi(z)=\int \log|F'|\;d\sigma >0.$$
The entropy is given by the formula $h_{\sigma}=\int \log \Jac_{\sigma}\, d\sigma$, where the Jacobian is defined as the Radon-Nikodym derivative $\Jac_{\sigma}\,:=\,d\sigma\circ F/d\sigma$. The latter is always $\ge1$ since $\sigma$ is invariant. In our case, for sufficiently small sets $A$ which do not contain the critical points of $F$ we can write $$\Jac_{\sigma}|_A\asymp\frac{\sigma(F(A))}{\sigma(A)}
\asymp\frac{\nu(F(A))}{\nu(A)}>0~,$$ and hence $$1~=~\sum_{y\in F^{-1}F(y)}\frac1{\Jac_\sigma(y)}~>~
\frac1{\Jac_\sigma(y)}~$$ for $\sigma$-almost every $y$. We conclude that $\sigma$-almost everywhere $\Jac_\sigma>1$ and hence the entropy of $\sigma$ is positive. Since $\sigma$ is invariant and ergodic, the remaining statements follow from [@mane].
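As a simple consistency check (not needed for the proof), consider the model case $F(z)=z^d$, $d\ge2$, with $\sigma$ the normalized length measure on the unit circle $J$. Every point has $d$ preimages, $\Jac_\sigma=d$ almost everywhere, and $$h_\sigma~=~\int\log\Jac_\sigma\,d\sigma~=~\log d~,\qquad
\chi_\sigma~=~\int\log\abs{F'}\,d\sigma~=~\log d~,$$ since $\abs{F'(z)}=d$ on $\abs{z}=1$. In particular, $\sum_{y\in F^{-1}F(y)}\Jac_\sigma(y)^{-1}=d\cdot d^{-1}=1$, in agreement with the identity used above.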
#### Exactness.
Recall that a measure preserving endomorphism $F$ is called [*mixing*]{} if for every two measurable sets $A$ and $B$ $$\lim_{n\rightarrow \infty}\sigma(A\cap F^{-n}(B))=\sigma(A)\sigma(B)~.$$ A measure preserving endomorphism $F$ is [*exact*]{} if for every measurable $A$, $0<\sigma(A)<1$, there is no sequence of sets $A_{n}$ so that $A=F^{-n}(A_{n})$.
Suppose that $F$ satisfies the summability condition with an exponent $$\alpha<\frac{\dpoin(J)}{\dpoin(J)+\mmax}~,$$ and has an absolutely continuous invariant measure $\sigma$. Then $$\limsup_{n\rightarrow \infty}\sigma(F^{n}(A))=1$$ for every measurable set $A$ of positive $\sigma$-measure, and hence $F$ is exact and mixing.
The proof that exactness implies mixing can be found in [@walters]. Also, since $\sigma$ is absolutely continuous with respect to the ergodic measure $\nu$, it is clearly sufficient to prove the same statement for $\nu$: $\limsup_{n\rightarrow \infty}\nu(F^{n}(A))=1$.
By Proposition \[prop:largescale\], there exists $R'>0$ so that for almost every point $z\in J$ with respect to $\sigma$ there is a sequence of integers $n_{j}$ and sequences of balls $B_{r_{j}}(z)$ and topological disks $D_{j}(F^{n_{j}}(z))\supset B_{R'}(F^{n_{j}}(z))$ so that $F^{n_{j}}:B_{r_{j}}(z)\to D_{j}(F^{n_{j}}(z))$ is a univalent function with bounded distortion. Let $z$ be a density point of $A$ with respect to $\nu$. The bounded distortion of $F^{n_{j}}$ implies that for every $\epsilon>0$ there exists $j$ so that $$\frac{\nu(A\cap D_{j}(F^{n_{j}}(z)) )}{\nu( D_{j}(F^{n_{j}}(z)) )}~
\geq~ (1-\epsilon)\frac{\nu(A\cap B_{r_{j}}(z))}{\nu(B_{r_{j}}(z))}~\geq1-2\epsilon~.$$ By compactness, there exists $N=N(R')$ such that every disk $B_{R'}(y)$, $y\in J$, is mapped onto $J$ by $F^{N}$. Hence, $$\lim_{j\rightarrow \infty}\nu(F^{n_{j}+N}(A))~\ge~
\lim_{j\rightarrow \infty}
\frac{\nu(A\cap D_{j}(F^{n_{j}}(z)) )}{\nu( D_{j}(F^{n_{j}}(z)) )}
\,\nu(J)~=~1~,$$ and the lemma follows.
This concludes the proof of Theorem \[theo:inv\].
Fractal structure {#sec:geometry}
=================
In this Section we will prove that the geometry of Julia sets satisfying appropriate summability conditions is effectively fractal and self-similar. Namely, every sufficiently small ball shrinks under pull-backs, and hence its geometry is reproduced infinitely many times at different scales. Moreover, it is “usually” (i.e. around most points and at most scales) reproduced with bounded distortion.
Average contraction of preimages
--------------------------------
\[prop:goodcover\] Suppose that a rational function $F$ satisfies the summability condition with an exponent $$\alpha~<~\fr{2}{\mmax+2}~,$$ and the Julia set is not the whole sphere. Then there is $p<2$ such that for every sufficiently small ball $B$ with center on the Julia set $$\sum_{n}\,\sum_{F^{-n}}~\br{\diam\br{F^{-n}B}}^{p}
~\degg{F^n}{F^{-n}B}~<~\infty~,$$ where $\degg{F^n}{F^{-n}B}$ denotes the degree of $F^n$ restricted to the connected component ${F^{-n}B}$ of the preimage of $B$ under $F^{n}$.
We continue to work with sequences $\{\alpha_n\}$, $\{\gamma_n\}$, $\{\delta_n\}$ of Lemma \[lem:techseq\]. To control the diameters we will need a new decomposition procedure.
#### Local Analysis.
First we prove the analogues of Lemma \[lem:1t\] and Lemma \[lem:3t\].
\[lem:1again\] Suppose that

1. the shrinking neighborhoods $U_k$ for $B_{4r}(z)$, $1\le k< n$, avoid critical points and $(r)^{\mu_1}\,<\,R$;
2. there is a critical point $c_{2}\,\in\,U_n$;
3. there is a critical point $c_{1}\,\in\,B_{r}(z)$.
To simplify notation set $\mu_i\,:=\,\m{c_i}$, $r_2:=\br{\diam\br{U_n}}$ and, for consistency, $r_{1}:=r$.
Then $$(r_2)^{\mu_2}~<~(r_1)^{\mu_1}~(\gamma_n)^{-\mmax}~,$$ in particular, $~(r_2)^{\mu_2}~<~R~.$
First note that $F^nc_2\in B_{4r\del_n}$, hence $$\dist{F^nc_2,c_1}~\le~\dist{F^nc_2,z}\,+\,\dist{z,c_1}
~\le~5\,r_1~<~5 R^{1/\mmax}~,$$ and by the choice of $R$ we have $\abs{F'(F^{n}c_2)} \stackrel{M}{\asymp} \dist{F^nc_2,c_1}^{\mu_1-1}$. Therefore, $$\abs{\br{F^{n-1}}'\br{Fc_2}}^{-1}~\le
~{M\dist{F^nc_2,c_1}^{\mu_1-1}}~\abs{\br{F^{n}}'\br{Fc_2}}^{-1}
~\le~\fr{M(5r_1)^{\mu_1-1}}{\sigma_n}~.$$
We recall that $U_{n}\supset U_{n-1}'=F(U_{n})$. By the Koebe distortion theorem (see Lemma \[lem:koeb\]) applied to the conformal map $F^{-(n-1)}:\,B_{4r_1\Delta_{n-1}}(z)\,\to\, U_{n-1}$ we obtain that $$\begin{aligned}
\diam\br{U_{n-1}'}&\le&2\,\frac{(1-\delta_n)(2-\delta_n)}{\delta_{n}}
~\Delta_{n-1}\,4r_1~\abs{\br{F^{n-1}}'(Fc_2)}^{-1}\\
&\le&\frac{16\,r_1}{\delta_{n}}
~\fr{M(5r_1)^{\mu_1-1}}{\sigma_n}\\
&\le& 16^{\mmax}~M~(\alpha_n)^{-2}~(r_1)^{\mu_1}
~(\gamma_n)^{-\mmax}\\
&\le& (r_1)^{\mu_1}~(\gamma_n)^{-\mmax}
~(\alpha_n)^{-1}~.\end{aligned}$$ The last inequality is true by our choice of $\alpha_{n}$ and $R$, see condition [(ii)]{} in Section \[sec:const\]. In particular, $\diam\br{U_{n-1}'}\,<\,(r_1)^{\mu_1}\,<\,R$ and again by condition [(i)]{} of Section \[sec:const\] we have that $$\begin{aligned}
(r_2)^{\mu_2}&\le&M\,\diam\br{U_{n-1}'}\\
&\le& M~(r_1)^{\mu_1}~(\gamma_n)^{-\mmax}
~(\alpha_n)^{-1}\\
&\le&(r_1)^{\mu_1}~(\gamma_n)^{-\mmax}~ <~R,\end{aligned}$$ which completes the proof.
\[lem:3again\] Suppose that

1. the shrinking neighborhoods $U_k$ for $B_{4r}(z)$, $1\le k< n$, avoid critical points and $r\,<\,R'$;
2. there is a critical point $c_{2}\,\in\,U_n$.
Set $\mu_2:=\m{c_2}$ and $r_2:=\br{\diam\br{U_n}}$. For consistency, put $r_{1}:=r$. Then $$(r_2)^{\mu_2}~<~(\gamma_n)^{-\mmax}~,$$ and $(r_2)^{\mu_2}~<~R~.$
Applying the Koebe distortion Lemma \[lem:koeb\] we obtain that $$\begin{aligned}
\diam\br{U_{n-1}'}
&\le&2\,\frac{(1-\delta_n)(2-\delta_n)}{\delta_{n}}
~\Delta_{n-1}\,4r_1~\abs{\br{F^{n-1}}'(Fc_2)}^{-1}\\
&\le&\frac{16\,r_1}{\delta_{n}}
~\fr{\supF}{\sigma_n}\\
&\le&16~R'~\supF~\br{\alpha_n}^{-2}~\br{\gamma_n}^{-\mmax}\\
&\le&R~{M}^{-1}~\br{\gamma_n}^{-\mmax}~\le~R~.\end{aligned}$$ The last inequality is true by the choice of $R'$. In particular, $U_{n-1}'$ is close to $Fc_2$ and $$\begin{aligned}
(r_2)^{\mu_2}&\le&M\,\diam\br{U_{n-1}'}\\
&\le& M~{M}^{-1}~R~\br{\gamma_n}^{-\mmax}\\
&=&R~\br{\gamma_n}^{-\mmax}~.\end{aligned}$$
#### Proof of Proposition \[prop:goodcover\].
Let $z$ be a point from the Julia set and fix an inverse branch of $F^{-n}$ so that $F^{-n}(z)\mapsto\dots\mapsto F^{-1}(z)\mapsto z$. Next, take a ball $B=B_{r_1}(z)$ of sufficiently small radius $r_1<R'$ and consider the shrinking neighborhoods for the $4$ times larger ball $B_{4r_1}(z)$. Let $k_1$ be the first time when $ U_{k_1}$ catches a critical point $c_{2}$. Then, by Lemma \[lem:3again\], setting $r_2:=\diam\br{F^{-k_1}B_{r_1}}$, we have that $$(r_2)^{\mu_2}~<~(\gamma_{k_1})^{-\mmax}~.$$
Consider now the shrinking neighborhoods for the ball $B_{4r_2}(z_2)$ with $z_2:=F^{-k_1}z$. Let $k_2$ be the first time when $U_{k_2}$ hits a critical point $c_{3}$. Again, by Lemma \[lem:1again\], setting $r_3:=\diam\br{F^{-k_2}B_{r_2}}$, we obtain that $$(r_3)^{\mu_3}~<~(r_2)^{\mu_2}~(\gamma_{k_2})^{-\mmax}~.$$
We continue in the same fashion, taking shrinking neighborhoods for $B_{4r_3}(z_3)$ with $z_3:=F^{-k_2}z_2$, and so on. Observe that during the construction we always have $$F^{-(k_1+k_2+\dots+k_j)}B~\subset
~F^{-(k_2+\dots+k_j)}B_{r_2}(z_2)~\subset
~\dots~\subset~B_{r_{j+1}}(z_{j+1})~,$$ and there is a bound for the degree: $$\degg{F^{(k_1+k_2+\dots+k_j)}}{F^{-(k_1+k_2+\dots+k_j)}B}
~\le~\br{\mmax}^j~.$$
We can repeat the above construction until we meet a ball $B_{4r_l}(F^{-k_{l-1}}z)$ whose shrinking neighborhoods do not contain critical points. This means that the ball $B_{2r_l}(z_l)$ can be pulled back univalently along the considered branch. We will call $z_{l}$ a terminal point. In more general notation, $y:=z_{l}$ with parameters $r(y)=r_l$, $l(y)=l$, $c_y=c_l$.
Now, we look at the backward orbit of $z$ for all possible inverse branches of $F$ and denote by ${\Cal Y}(z)$ the set of all terminal points.
By the Koebe distortion theorem, $$\begin{aligned}
\diam\br{F^{-n}B}&<&
\diam\br{F^{-m}B_{r_l}(z_l)}\\
&<&16~\abs{\br{F^{-m}}'(z_l)}~r_l\\
&=&16~r_l~\abs{\br{F^{m}}'(x)}^{-1}~,\end{aligned}$$ where $m=(n-k_1-\dots-k_{l-1})$ and $x=F^{-m}z_l=F^{-n}z$. Note that $x\in\xx{}{z_l,r_l}$ in the terminology of Proposition \[prop:poincare\], and $\degg{F^{n}}{F^{-n}B}~\le~\br{\mmax}^{l-1}$.
Now, using the result of Proposition \[prop:poincare\] we can estimate (for $p<2$ sufficiently close to $2$) $$\begin{aligned}
\sum_{n}&{\displaystyle\sum_{F^{-n}}}&
\br{\diam\br{F^{-n}B}}^{p}\,\degg{F^n}{F^{-n}B}\\
&<&
\sum_{y\in{\Cal Y}(z)}\,\sum_{x\in\xx{}{y,r(y)}}
\,16^p\,\br{r(y)}^p\,\abs{\br{F^{n(x)}}'(x)}^{-p}\,\degg{F^n}{F^{-n}B}\\
&<&
\sum_{y\in{\Cal Y}(z)}\,16^p\,\br{r(y)}^p\,
C\,\br{r(y)}^{p(\m{c_{y}}/\mmax-1)}\,\degg{F^n}{F^{-n}B}\\
&<&
16^p\,C\,\sum_{y\in{\Cal Y}(z)}
\,\br{r(y)}^{p\m{c_{y}}/\mmax}\,\br{\mmax}^{l(y)-1}\\
&<&
16^p\,C\,\sum_{y\in{\Cal Y}(z)}\,
\br{\gamma_{k_{l-1}}^{-\mmax}\dots\gamma_{k_1}^{-\mmax}}^{p/\mmax}
\,\br{\mmax}^{l-1}\\
&<&
16^p\,C~\sum_{l,k_1,\dots,k_{l-1}}~(2 \deg F)^{l}
~\br{\gamma_{k_1}\dots\gamma_{k_{l-1}}}^{-p}~\br{\mmax}^{l-1}\\
&\le&
16^p\,C~\sum_l\,\br{2 \deg F\,\mmax\sum_k\gamma_k^{-p}}^l\\
&<&C\,16^p\,\sum_l\br{\fr12}^l~=~C\,16^p~<~\infty
~ ,\end{aligned}$$ which proves Proposition \[prop:goodcover\].
$\Box$
Note that, substituting Lemma \[lem:main2\] (instead of Proposition \[prop:poincare\]) into the last formula, we arrive at a better estimate (where the sum is taken only over some preimages):
\[cor:goodcover\] Taking $\beta=\mmax\alpha/(1-\alpha)$ and using the notation above, we get $$\sum_{y\in{\Cal Y}(z)}\,\sum_{x\in\typei^{r(y)}(y)}
\br{\diam\br{F^{-n(x,z)}B}}^{\beta}
~<~\infty~.$$
Contraction of preimages
------------------------
\[prop:shrink\] Suppose that a rational function $F$ satisfies the summability condition with an exponent $$\alpha~<~1~.$$ Then there exists a positive sequence $\brs{\tilde\omega_n}$, summable with the exponent $-\beta\,:=\,-\fr{\mmax\alpha}{1-\alpha}$: $$\sum_{n}\,\br{\tilde\omega_n}^{-\beta}
~<~\infty~,$$ such that for every sufficiently small (of radius less than $R'$) ball $B$ centered on the Julia set, every $n$, and every branch of ${F^{-n}}$ we have $$\diam\br{F^{-n}B}~<~(\tilde\omega_n)^{-1}~.$$
The proof of Proposition \[prop:shrink\] given below will actually imply that $$\diam\br{F^{-n}B}~<~\const
~(\tilde\omega_n)^{-1}~\br{\diam\br{B}}^{1/\mmax}~.$$
Also, for any periodic point $z:~F^{k}(z)=z$, by the proposition above we can find $n$ such that for the branch of $F^{-kn}$ fixing $z$ and a small ball $B(z,\rho)$ one has $$F^{-nk}B(z,\rho)~\subset~B(z,\rho/2)~.$$ By a standard application of the Schwarz lemma, this inverse branch satisfies $\abs{\br{F^{-nk}}'(z)}\le1/2$, hence $\abs{\br{F^k}'(z)}>1$, and we arrive at the following
\[cor:nocremer\] Under the assumptions as above, $F$ has no Cremer points.
To prove Proposition \[prop:shrink\], take a ball $B=B_{r_1}(z)$ of small radius $r_1<R'$ and proceed as in the proof of Proposition \[prop:goodcover\] – we preserve the notation. Then, with the help of Proposition \[prop:fatouexpan\] (the sequence $\brs{\omega_n}$ was constructed there), we obtain that $$\begin{aligned}
\diam\br{F^{-n}B}&<&
\diam\br{F^{-m}B_{r_l}(z_l)}\\
&<&16~\abs{\br{F^{-m}}'(z_l)}~r_l\\
&<&16~r_l~(r_l)^{\m{c_l}/\mmax-1}~(\omega_m)^{-1}\\
&<&16~(r_l)^{\m{c_l}/\mmax}~(\omega_m)^{-1}\\
&<&16~\br{(\gamma_{k_1})^{-\mmax}\dots
(\gamma_{k_l})^{-\mmax}}^{1/\mmax}~(\omega_m)^{-1}\\
&<&16~(\gamma_{k_1})^{-1}\,\dots\,
(\gamma_{k_l})^{-1}~(\omega_m)^{-1}~,\end{aligned}$$ where $k_1+\dots+k_l+m=n$.
This means that, setting $$\tilde\omega_n~:=~\inf
\brs{\gamma_{k_1}\,\dots\,\gamma_{k_l}\,\omega_m\,/\,16~:
~~k_1+\dots +k_l+m=n}~,$$ we have that $$\diam\br{F^{-n}B}~<~(\tilde\omega_n)^{-1}~.$$
On the other hand (for $-\beta\,:=\,-\fr{\mmax\alpha}{1-\alpha}$) $$\begin{aligned}
\sum_{n}(\tilde\omega_n)^{-\beta}
&<&16^{\beta}\,\br{\sum_m(\omega_m)^{-\beta}}
\cdot\sum_{l=0}^{\infty}
\br{\sum_k(\gamma_k)^{-\beta}}^l\\
&<&16^{\beta}\,{\sum_m(\omega_m)^{-\beta}}
\cdot\sum_{l=0}^{\infty}\br{\fr12}^l
~=~16^{\beta}\,2\,{\sum_m(\omega_m)^{-\beta}}~<~\infty~,\end{aligned}$$ which completes the proof of Proposition \[prop:shrink\].
Most points go to large scale infinitely often
----------------------------------------------
We will prove that the Hausdorff dimension of the set of points which do not “go to a large scale infinitely often” is small provided $F$ satisfies the summability condition. This should be compared with Proposition \[prop:largescale\], where it is shown that most points go to a large scale infinitely often with respect to the conformal measure.
The definition of the subset of points in $J$ which infinitely often go to the large scale of size $R'/2$ is as follows: $$\Jls \,:=\,\brs{z\in J:\, \exists~n_j\to\infty, {\em~with~}F^{n_j}
{\em~univalent~on ~} F^{-n_j}\br{B\br{F^{n_j}x,R'/2}}}~.$$ Note that the value of $R'$ is already fixed and does not depend on a point.
\[prop:largescale2\] If a rational function $F$ satisfies the summability condition with an exponent $\alpha<1$, then $$\HD(J\setminus \Jls )~\le~\fr{\mmax\alpha}{1-\alpha}~.$$
The proof is a modification of the proof of Proposition \[prop:largescale\]. Denote $\beta:=\fr{\mmax\alpha}{1-\alpha}$.
Take a finite cover $\{B_j:j=1,\dots, K\}$ of the Julia set by balls of radii $R'/2$ centered at points $w_j\in J$. For every $x\in J$ and every $k\in \N$, we decompose the sequence $x,\dots,F^k(x)$ into blocks of new types $1^{*}, 3^{*}$, and blocks of old types $1,2,3$. An inductive procedure ascribing a code to the sequence $x,\dots ,F^{k}(x)$ will be defined only for preimages of the center of the ball $B_j\ni F^{k}(x)$. By definition, the sequence $x,\dots,F^k(x)$ inherits the code of the corresponding sequence of preimages $F^{-k}(w_{j}),\dots ,w_{j}$.
We start by defining blocks of type $1^{*}$ and $3^{*}$ for the preimages of $w_{j}$. To this end we invoke the inductive procedure from Proposition \[prop:goodcover\]. Namely, we start by picking a ball $B_j\subset B_{R'}(F^k(x))$, denoting $z_1:=w_j$, $r_1:=R'/2$, and considering the shrinking neighborhoods for the $4$ times larger ball $B_{4r_1}(z_1)$. Let $k_1$ be the first time when $U_{k_1}$ hits a critical point $c_{2}$. We set $r_2:=\diam\br{F^{-k_1}B_{r_1}}$, $z_2:=F^{-k_1}(z_1)$, and proceed by induction. The construction is repeated until we meet a ball $B_{4r_l}(F^{-k_{l-1}}z)$ whose shrinking neighborhoods do not contain critical points. This means that the ball $B_{2r_l}(z_l)$ can be pulled back univalently along the corresponding branch of $F^{-k+(k_{1}+\dots+k_{l})}$. Summarizing, our construction leads to a decomposition of the sequence $z_l,F(z_l),\dots,z_1$ into blocks $1^{*}\dots 1^{*}3^{*}$. We see that the symbol $3^{*}$ stands for the initial sequence of preimages of type $3$ with $r=2R'$ (see Definition \[defi:3t\]). The terminal point of the type $3^{*}$ sequence is $z_{2}$. Afterwards, only type $1^{*}$ sequences are allowed with terminal points $z_{3}, \dots ,z_{l}$, respectively.
Having defined $z_{l}$, we apply to $F^{-k+k_1+\dots+k_{l-1}}(z_l),\dots,z_l$ the inductive procedure with a stopping rule (see Section \[sec:stop\]). This yields a decomposition of the sequence into blocks of the form $21\dots3$ or $2$. Finally, we can represent the orbit $F^{-k}(z_1),\dots,z_1$, as a sequence of blocks $21\dots111^{*}1^{*}\dots1^{*}3^{*}$. By our convention, the orbit $x,\dots,F^k(x)$ has the same decomposition. Note that if we have no blocks of type $1^{*}$, i.e. $z_l$ coincides with $F^{-k}(w_{j})$, then the ball of radius $R'$ can be pulled back univalently along the corresponding branch of $F^{-k}$, and we have no blocks $1$ either. This means that the sequence $x, \dots, F^{k}(x)$ is encoded as $2$. Note also that the corresponding endpoints of blocks from the orbits $x,\dots,F^k(x)$ and $F^{-k}(z_1),\dots,z_1$ are $R$-close to each other.
Following the proof of Proposition \[prop:goodcover\], we denote by ${\Cal Y}(w_j)$ the set of all possible terminal points $z_l$ for all inverse branches of $F$ defined on the ball $B_j$. Introducing more general notation, we set $y:=z_{l}$, $r(y)=r_l$, $l(y)=l$, and $c_y=c_l$.
Denote by ${\cal C}_{x}$ the set of all codes obtained for $x$. If for some point $x$ we get infinitely many different type $2$ sequences then $x$ must belong to $\Jls$. Indeed, a type $2$ sequence means that an $R'$-ball around a point $R'/2$-close to some image of $x$ can be pulled back univalently. Hence, the same is true for the $R'/2$-ball around the image of $x$.
Therefore, if $x\in J\setminus \Jls $ then $x$ is a terminal point of an infinite number of sequences $211\dots111^{*}1^{*}\dots1^{*}1^{*}3^{*}$ with only a finite number of choices for type $2$ blocks. Let $k(x)$ be the minimal number for which infinitely many sequences from ${\cal C}_{x}$ have the same type $2$ block of length $k(x)$. Denote $X_k:=\brs{x:k(x)=k}$ and observe that the sets $\{X_{k}:k=0,1,\dots\}$ form a countable partition of $J\setminus \Jls$.
Obviously, for any Borel set $X\subset J$ we have $\HD(F^kX)=\HD(X)=\HD(F^{-k}X)$. Since $F^k(X_k)\subset X_0$ and consequently $J\setminus \Jls\subset\cup_k F^{-k}(X_0)$, it is sufficient to prove that $\HD(X_0)\le \mmax\alpha/(1-\alpha)$.
Every point $x\in X_0$ must be a terminal point for infinitely many different subsequences of the form $1\dots11^{*}\dots1^{*}3^{*}$, containing at least one block $1^{*}$. Thus every point $x\in X_0$ is covered by infinitely many preimages $$F^{-n(v,w_j)}(B_j)~,~~~j= 1,\dots, K,~~~y\in{\Cal Y}(w_j),~~~v\in\typei^{r(y)}(y)~.$$ Corollary \[cor:goodcover\] implies that for every $j=1,\dots, K$ and $\beta=\mmax\alpha/(1-\alpha)$, $$\sum_{y\in{\Cal Y}(w_j)}\,\sum_{v\in\typei^{r(y)}(y)}
\br{\diam\br{F^{-n(v,w_j)}B_{j}}}^{\beta}~<~\infty~.$$ Since every point of $X_0$ lies in infinitely many of these sets, the tails of the convergent series above provide covers of $X_0$ of arbitrarily small total $\beta$-content. We conclude that $\HD{(X_0)}\le\fr{\mmax\alpha}{1-\alpha}$, which proves the proposition.
Dimensions and conformal measures {#sec:dim}
=================================
Fractal dimensions
------------------
First we recall the definitions of the various dimensions used in this paper. For properties of the Hausdorff and Minkowski measures, contents, and dimensions one can consult the monographs [@mattila] and [@federer].
Assume that we are given a compact subset $K$ of the complex plane (or of the Riemann sphere with the spherical metric).
For positive $\delta$ the [*Hausdorff measure*]{} $\ha_\delta$ is defined by $$\ha_\delta(K)
~:=~\lim_{\rho\to0}~\inf_{\Cal B_\rho}
~\sum_{B\in\Cal B_\rho}~r(B)^{\delta}~,$$ the infimum taken over all covers $\Cal B_\rho\,=\,\{B\}$ of the set $K$ by balls $B$ of radii $r(B)\le\rho$.
Usually the measure above is normalized by some factor, depending on $\delta$, but this is not necessary for our purposes.
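For instance, for the segment $K=[0,1]$, covering by $n$ balls of radius $1/(2n)$ gives sums $n\,(2n)^{-\delta}$, which tend to $0$ for $\delta>1$; hence $\ha_\delta([0,1])=0$ for $\delta>1$ and $\HD([0,1])\le1$. The matching lower bound $\HD([0,1])\ge1$ requires an additional argument, e.g. the mass distribution principle applied to length.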
It is easy to show that there exists a number $\delta'\in[0,2]$ such that $\ha_\delta(K)$ is infinite for $\delta<\delta'$ and zero for $\delta>\delta'$. This number is called the Hausdorff dimension:
The [*Hausdorff dimension*]{} of a set $K$ is defined by $$\HD(K)~:=~\inf~\brs{\delta\,:~\ha_\delta(K)=0}~.$$ The [*Hausdorff dimension*]{} of a Borel measure $\nu$ is defined as the infimum of the dimensions of its Borel supports: $$\HD(\nu)~:=~\inf~\brs{\HD(E)\,:~E~\mbox{~is~Borel~and~}\nu(E^c)=0}~.$$
The upper and lower Minkowski dimensions can be defined similarly using the corresponding Minkowski contents. Equivalently, one can take a shortcut and define them as follows:
Let $N(K,\rho)$ be the minimal number of the balls of radius $\rho$ needed to cover $K$. The [*upper*]{} and [*lower*]{} [*Minkowski dimensions*]{} are defined as $$\begin{array}{ccc}
\MDsup(K)&:=&\limsup_{\rho\to0}~\frac{\log N(K,\rho)}{\log 1/\rho}~,\\
\MDinf(K)&:=&\liminf_{\rho\to0}~\frac{\log N(K,\rho)}{\log 1/\rho}~.
\end{array}$$
If those dimensions coincide, their common value is called the [*Minkowski dimension*]{} $\MD(K)$.
Since we have restricted ourselves to a smaller collection of coverings than in the definition of the Hausdorff measure, one clearly has $$\HD(K)~\le~\MDinf(K)~\le~\MDsup(K)~,$$ for an arbitrary compact set $K$.
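Both inequalities can be strict in general: the countable compact set $\brs{0}\cup\brs{1/n:\,n\ge1}$ has Hausdorff dimension $0$ but Minkowski dimension $1/2$. On the other hand, for self-similar sets all these quantities coincide; e.g. for the middle-thirds Cantor set $K$ one needs $N(K,3^{-n})\asymp2^n$ balls of radius $3^{-n}$, so $$\HD(K)~=~\MD(K)~=~\lim_{n\to\infty}\frac{\log 2^n}{\log 3^n}~=~\fr{\log2}{\log3}~.$$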
In the absence of dynamics, the Whitney exponent can be regarded as a substitute for the Poincaré exponent. One can decompose the domain $\Omega\,:=\,\C\setminus K$ in the complex plane into a collection $\brs{Q_j}$ of non-overlapping dyadic squares so that $\dist{Q_j,K}\,\asymp\,\diam(Q_j)$ up to a constant factor of $4$ (consult [@Stein] for this classical fact, the Whitney decomposition).
The [*Whitney exponent*]{} is defined as the exponent of convergence $$\dwhit(K)~:=~\inf~\brs{\delta\,:
~\sum_{Q_j:\,\diam(Q_j)\le1}\,\diam(Q_j)^{\delta}\,<\,\infty}~.$$
Note that in the Whitney decomposition the smaller the squares, the closer they are to the set, so to describe its geometry it is enough to work with the small squares only. Thus the large squares are dropped from the series above so that it becomes convergent.
Clearly, this definition admits the following integral reformulation (and hence does not depend on the choice of Whitney decomposition): $$\dwhit(K)~:=~\inf~\brs{\delta\,:
~\int_{\Omega}~\dist{z,K}^{\delta-2}\,dm(z)\,<\,\infty}~,$$ where $m$ denotes area, and we use the spherical metric. One can restrict integration to some neighborhood of $K$ (and should do so if working with the planar metric).
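To illustrate the integral form (our example): for the segment $K=[0,1]$ the area of its $t$-neighborhood is comparable to $t$ for small $t$, so

```latex
$$\int_{\{z:\,\dist{z,K}\le1\}}\dist{z,K}^{\delta-2}\,dm(z)
\;\asymp\;\int_{0}^{1} t^{\delta-2}\,dt\;<\;\infty
\quad\Longleftrightarrow\quad\delta>1\,,$$
hence $\dwhit([0,1])=1$, in agreement with
$\HD([0,1])=\MD([0,1])=1$.
```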
The definitions of Whitney and Poincaré exponents assume that the complement of $K=J$ is non-empty. Should $K=J$ coincide with the whole sphere, we set $\dwhit(K)\,=\,\dpoin(K)\,:=\,2$.
Multifractal analysis
---------------------
The following is Lemma 2.1 in [@bishop-poincare], where it was used in a similar situation, involving the Poincaré exponent of a Kleinian group and the Minkowski dimension of its limit set. We thank Chris Bishop for bringing it to our attention.
\[lem:bishop\] For any compact set $K$, $\dwhit(K)\,\le\,\MDsup(K)$. If, in addition, $K$ has zero area, then $\dwhit(K)\,=\,\MDsup(K)$.
By taking a cover of $J$ by a finite number of small balls and applying Proposition \[prop:goodcover\] to each of them, we easily obtain the following
\[lem:hdim\] Suppose that a rational function $F$ satisfies the summability condition with an exponent $$\alpha~<~\fr{2}{\mmax+2}~.$$ If the Julia set is not the whole sphere, then its Hausdorff dimension is strictly less than $2$.
It seems to be folklore that for rational maps without neutral orbits $\dpoin(J)=\dwhit(J)$. We were unable to find a reference for this fact in the literature and thus supply a proof below. The following is an analogue of Lemma 3.1 in [@bishop-poincare]:
\[lem:poinwhit\] For any rational function $F$ without Siegel disks, Herman rings, or parabolic points one has $$\dpoin(J)~=~\dwhit(J)~.$$
The proof below can be modified to work for parabolic points as well. However, in the presence of Siegel disks or Herman rings the version of the Poincaré series introduced above does not work well.
Under the additional assumption that the Julia set has zero area, the lemma together with Fact \[lem:bishop\] implies that the Poincaré exponent coincides with the upper Minkowski dimension.
Fix points $\{z_j\}$ used in the definition of $\dpoin(J)$ – one inside each cycle of periodic Fatou components. It follows from Lemma 7 of [@ceh] (Lemma \[lem:ourkoebe\]) that for any $y\in F^{-n} z_j$ one has $$\label{eq:dynmetric}
\dist{y,J}~\asymp~\abs{\br{F^{n}}'(y)}^{-1}~,$$ up to a constant depending on $z_j$ only.
Knowing that only (super) attractive Fatou components are possible, we can choose “fundamental” domains $z_j\in U_j$, so that their preimages under all possible branches of $F^{-n}$ are disjoint and cover almost all of some neighborhood $U$ of $J$ inside the Fatou set. Also $z_j$ and then $U_j$ can be chosen so that under iteration critical points never enter some neighborhoods of $U_j$, and hence by distortion theorems, up to a constant $\const(z_j,U_j)$ $$\label{eq:dynmetrica}
\dist{x,J}~\asymp~\abs{\br{F^{n}}'(x)}^{-1}~\asymp
~\abs{\br{F^{n}}'(y)}^{-1}~,$$ for any $x\in F^{-n}U_j$ and $y$ being the corresponding preimage of $z_j$: $y\in F^{-n} z_j$.
Hence, for any $\delta\ge0$ we can write (here $V\in F^{-n}U_j$ means that $V$ is one of the connected components of the latter) $$\begin{aligned}
\int_{U}~\dist{x,J}^{\delta-2}\,dm(x)&=&
\sum_j\,\sum_{n=1}^{\infty}\,\sum_{V\in F^{-n}U_j}
\int_{V}~\dist{x,J}^{\delta-2}\,dm(x)\\
&\asymp&\sum_j\,\sum_{n=1}^{\infty}\,\sum_{V\in F^{-n}U_j}\,
\int_{V}~\abs{\br{F^n}'(x)}^{2-\delta}\,\abs{\br{F^n}'(x)}^{-2}\,dm(F^n(x))\\
&\asymp&\sum_j\,\sum_{n=1}^{\infty}\,\sum_{V\in F^{-n}U_j}\,
\int_{V}~\abs{\br{F^n}'(F^{-n}z_j)}^{-\delta}\,dm(F^n(x))\\
&\asymp&\sum_j\,\sum_{n=1}^{\infty}\,\sum_{y\in F^{-n}z_j}\,\abs{\br{F^n}'(y)}^{-\delta}
\int_{U_j}\,dm(z)
~\asymp~\Sigma_\delta(J,\{z_j\})~,\end{aligned}$$ which clearly implies the desired equality.
The definitions and our discussion so far imply two chains of inequalities: $$\HD(J)\leq \MDinf(J)\leq \MDsup(J)\;$$ and $$\dpoin(J)=\dwhit(J)\, \leq \,\MDsup(J)~.$$
\[prop:confgauge\] Suppose that a rational function $F$ satisfies the summability condition with an exponent $$\alpha~<~\fr{q}{\mmax+q}~,$$ and $\nu$ is a $q$-conformal measure with no atoms at critical points. Then for $\nu$-almost every $x\in J$ and any $\epsilon>0$ there are constants $C_x>0$ and $C_{x,\epsilon}>0$, such that for any ball $B(x,r)\,,~r<1,$ centered at $x$ one has $$C_{x}\,r^q~<~\nu(B(x,r))~<~C_{x,\epsilon}\,r^{q-\epsilon}~.$$
It is a straightforward application of Proposition \[prop:freq\].
It is easy to see that the proposition above implies $\HD(J)\,\ge\,\HD(\nu)\,=\,q$. Combining this with Corollary \[cor:confnoatom\], we obtain the following
\[cor:hdconf\] Assume that $F$ satisfies the summability condition with an exponent $$\alpha~<~\fr{\dpoin(J)}{\mmax+\dpoin(J)}~.$$ Then $$\HD(J)~\ge~\dpoin(J)~=~\HD(\nu)~.$$
#### Proof of Theorem \[theo:dims\].
By now we have $$\dpoin(J)~=~\dconf(J)~=~\dwhit(J)~=~\MD(J)~\ge~\HD(J)~\ge~\dpoin(J)~,$$ and hence all these dimensions coincide, provided that a rational function $F$ satisfies the summability condition with an exponent $$\alpha~<~\fr{\dpoin(J)}{\mmax+\dpoin(J)}~.$$ By the work of M. Denker, F. Przytycki, and M. Urbanski, the hyperbolic and dynamical dimensions will also be equal to the dimensions above.
Removability and rigidity
=========================
In this Section, we prove Theorems \[theo:rem\], \[theo:rem1\], and \[theo:rem2\].
Conformal removability and strong rigidity
------------------------------------------
The notion of [*conformal removability*]{} (also called holomorphic removability) appears naturally in holomorphic dynamics: often one can show that two dynamical systems are conjugated by a homeomorphism which is (quasi)conformal outside the Julia set, and conformal removability of the latter ensures global (quasi)conformality of the conjugation.
We say that a compact set $J$ is [*conformally removable*]{} if any homeomorphism of the Riemann sphere $\hat{\C}$ which is conformal outside $J$ is globally conformal and hence is a Möbius transformation.
Quasiconformal removability is defined similarly. An easy application of the measurable Riemann mapping theorem shows that the two notions are equivalent. The problem of geometrically characterizing removable sets is open, see [@josm] for discussion and relevant references. Sets of positive area are non-removable, as are Cartesian products of intervals with Cantor sets of positive length. On the other hand, quasicircles and sets of $\sigma$-finite length are removable. Note that there are removable sets of Hausdorff dimension $2$ and non-removable sets of dimension $1$.
In [@josm] a few geometric criteria for removability are given, some close to being optimal and well-adapted for dynamical applications. We will use the following fact (which is Theorem 5 in [@josm]):
\[fact:js\] Suppose that $F$ is a polynomial, and $\{B_j\}$ is a finite collection of domains whose closures cover $J_F$. Denote by $\{P_i^n\}$ the collection of all connected components of the pullbacks $F^{-n}B_j$, and by $N(P_i^n)$ the degree of $F^n$ restricted to $P_i^n$. Then the geometric condition, $$\label{jscondition}
\sum_{i,n}\,N(P_i^n)\,\diam\br{P_i^n}^2\ <\ \infty\ ,$$ is sufficient for conformal removability of the Julia set.
If a polynomial satisfies the summability condition with an exponent $\alpha\,<\,\fr{2}{\mmax+2}$ then Proposition \[prop:goodcover\] implies condition (\[jscondition\]) for a cover by sufficiently small balls $B_j$. By Fact \[fact:js\], the Julia set is conformally removable and Theorem \[theo:rem\] follows. Similar reasoning works for every Julia set which is the boundary of one of the Fatou components.
Dynamical removability and rigidity
-----------------------------------
The assumption that the Julia set coincides with the boundary of one of the Fatou components is essential for conformal removability. Indeed, there are hyperbolic rational functions with non-removable Julia sets. An example of a non-removable Julia set, which is topologically a Cantor set of circles, is constructed in §11.8 of the book [@beardon-book-iteration]. It is a classical observation that sets of this type are not conformally removable. An exotic homeomorphism is obtained by rotating the annuli between the circles by a devil’s staircase of angles: the resulting homeomorphism is conformal on each annulus, globally continuous since the devil’s staircase is, but clearly not Möbius (i.e. not globally conformal).
Even though such Julia sets are not conformally removable, they will be removable for all “dynamical” conjugacies. To make this precise, consider a homeomorphism $\phi$ which conjugates a rational dynamical system $(\CC,F)$ to another dynamical system $(\CC,G)$ and assume that $\phi$ is quasiconformal outside the Julia set $J$.
Recall the metric definition of quasiconformality (which states that images of small circles look roughly like circles): a homeomorphism $\phi$ is quasiconformal if there is a constant $H$ such that for every point $x\in \CC$ $$\label{eq:qcdef}
\limsup_{r\to0}\frac{L_\phi(x,r)}{l_\phi(x,r)}~\le~H~<~\infty~,$$ where $$\begin{aligned}
L_\phi(x,r)&:=&\mbox{sup}\,\brs{\abs{\phi(x)-\phi(y)}\,:~|x-y|\leq r}~,\\
l_\phi(x,r)&:=&\mbox{inf}\,\brs{\abs{\phi(x)-\phi(y)}\,:~|x-y|\geq r}~. \end{aligned}$$
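For instance (our illustration), the linear stretch $\phi(x+iy)=Kx+iy$ with $K\ge1$ maps the circle of radius $r$ about any point onto an ellipse with half-axes $Kr$ and $r$, so that

```latex
$$L_\phi(x,r)\,=\,Kr\,,\qquad l_\phi(x,r)\,=\,r\,,\qquad
\frac{L_\phi(x,r)}{l_\phi(x,r)}\,\equiv\,K$$
```

at every point and every scale, and (\[eq:qcdef\]) holds with $H=K$.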
If a rational function $F$ is hyperbolic, then every sufficiently small ball with center at the Julia set is mapped univalently by some iterate of $F$ to a large scale with bounded distortion, and the inequality (\[eq:qcdef\]) holds by a compactness argument, implying (global) quasiconformality of $\phi$.
For non-hyperbolic maps the property of “going to large scale with bounded distortion” fails for many small balls. In these circumstances one has to resort to more subtle tools in the theory of quasiconformal maps. A theorem of great use for complex dynamical systems was proved recently by J. Heinonen and P. Koskela in [@heinonen-koskela-def]. They have shown that for Euclidean spaces the upper limit “$\limsup$” in the metric definition of quasiconformality can be replaced by “$\liminf$.” Their result was immediately applied by F. Przytycki and S. Rohde [@przytycki-rohde-rigidity] to deduce rigidity of Julia sets satisfying the topological Collet-Eckmann condition (TCE for short). The argument of [@przytycki-rohde-rigidity] goes as follows: for every point $x\in J$ there is a sequence of radii $r_j\to0$ such that the balls $B_{r_j}(x)$ are mapped by some iterates of $F$ to a large scale with bounded distortion (though no longer univalently, but with uniformly bounded criticality), and the inequality (\[eq:qcdef\]) for “$\liminf$” holds again by a compactness argument, implying (global) quasiconformality of $\phi$.
Rational maps which satisfy the summability condition have weaker properties than TCE maps (in the unicritical case the latter class is strictly smaller), so we need an even stronger theorem than that of J. Heinonen and P. Koskela. It is a well-known fact that the metric definition (with “$\limsup$”) of quasiconformality allows for an exceptional set. Partially motivated by the prospective applications in our paper, S. Kallunki and P. Koskela [@kallunki-koskela] established very recently that one can also have an exceptional set in the “$\liminf$” definition of quasiconformality. The following is Theorem 1 of [@kallunki-koskela]:
\[fact:kalkos\] Let $\Omega\subset{\R}^n$ be a domain and suppose that $\phi:\Omega \rightarrow \phi(\Omega)\subset{\R}^n$ is a homeomorphism. If there is a set $E$ of $\sigma$-finite $(n-1)$-dimensional Hausdorff measure so that $$\liminf_{r\to0}\frac{L_\phi(x,r)}{l_\phi(x,r)}~\le~H~<~\infty~,$$ for each $x\in\Omega\setminus E$, then $\phi$ is quasiconformal in $\Omega$.
This theorem fits very well into our framework. By Proposition \[prop:largescale\], if $F$ satisfies the summability condition with an exponent $\alpha<\fr1{\mmax+1}$, then except for a set $E$ of Hausdorff dimension $<1$ every point $x\in J$ “goes to a large scale” infinitely often. More precisely, for every $x\in J$ there exists a (point-dependent) sequence of radii $r_j\to 0$ such that the balls $B_{r_j}(x)$ are mapped by iterates $F^{n_j}$ to the large scale of size $\asymp R'$ univalently and with bounded distortion. Thus for every $x\in J\setminus E$ (cf. [@przytycki-rohde-rigidity]) one has $$\begin{aligned}
\liminf_{r\to0}\frac{L_\phi(x,r)}{l_\phi(x,r)}&\le&
\liminf_{j\to\infty}\frac{L_\phi(x,r_j)}{l_\phi(x,r_j)}~\lesssim~
\liminf_{j\to\infty}\frac{L_\phi(F^{n_j}(x),R')}{l_\phi(F^{n_j}(x),R')}\cr
&\lesssim&\sup_{x\in J}\frac{L_\phi(x,R')}{l_\phi(x,R')}
~=:~H~<~\infty~,\end{aligned}$$ the latter quantity being finite by a compactness argument. We infer that the homeomorphism $\phi$ is globally quasiconformal, thus deducing Theorem \[theo:rem1\].
Consider now a quasiconformal homeomorphism $\phi$ which conjugates a rational dynamical system $(\CC,F)$ to another dynamical system $(\CC,G)$ and assume that $F$ satisfies the (weaker) summability condition with an exponent $\alpha<\fr2{\mmax+2}$.
If $J\neq\CC$ then the area of the Julia set is zero by Corollary \[theo:dim\], and an invariant Beltrami coefficient $\mu_{\phi}$ has to be supported on the Fatou set. There are two interesting special settings when $\phi$ is automatically a Möbius transformation. If $\phi$ is conformal outside the Julia set, then the Beltrami coefficient is identically zero, and $\phi$ is Möbius. Also, if there is only one simply-connected Fatou component (e.g. this is the case for polynomials with all critical points in the Julia set), it has to be super-attracting, and by a standard argument it does not support non-zero $F$-invariant Beltrami coefficients, so $\phi$ is Möbius again.
If $J=\CC$ then by Proposition \[prop:largescale\], except for a set $E$ of Hausdorff dimension $<2$, all points “go to a large scale” infinitely often. Then a standard technique (see the proof of either Theorem 3.9 or Theorem 3.17 in [@mcm]) implies that the Beltrami coefficient $\mu_\phi$ has to be holomorphic, and by Lemma 3.16 from [@mcm], either $\mu_\phi\equiv 0$ or $F$ is a double cover of an integral torus endomorphism, i.e. it is a Lattès example. This concludes the proof of Theorem \[theo:rem2\].
Integrability condition and invariant measures.
===============================================
A natural question arises whether a rational map $F$ has invariant measures absolutely continuous with respect to conformal measures. We will make two different types of assumptions. Firstly, we make a general assumption about regularity of conformal measures. This will guarantee that a conformal measure is not too singular with respect to the corresponding Hausdorff measure (integrability condition). Secondly, we demand that $F$ has some expansion property, usually given in the form of a suitable summability condition.
Let $\nu$ be a conformal measure with an exponent $\delta$ defined on the Julia set $J$. Assume also that $\nu$ satisfies the uniform integrability condition with an exponent $\eta$. We recall that this means that there exist positive $C$ and $\eta$ such that for all positive integers $n$ and every $c\in \Crit $, $$\label{equ:inte2}
\int\frac{d\nu}{\dist{x,F^{n}(c)}^{\eta}}~<~C~<~\infty~.$$
Ruelle-Perron-Frobenius transfer operator.
------------------------------------------
We study the existence of an absolutely continuous invariant measure $\sigma$ through analysis of the Ruelle-Perron-Frobenius operator ${\cal L}$, which assigns to every measure $\nu$ the density of $F_{*}(\nu)$ with respect to $\nu$. The $N$-th iterate of ${\cal L}$, evaluated at $z$, is equal to $${\cal L}^{N}(\nu)(z):=\frac{dF^{N}_{*}(\nu)}{d\nu}=
\sum_{y\in F^{-N}(z)}\frac{1}{|(F^{N})'(y)|^{\delta}}\;.$$
For simplicity, we will drop $\nu$ from the notation of the Ruelle-Perron-Frobenius operator.
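As a simple closed-form example (our illustration, purely to fix the notation): for $F(z)=z^{d}$, every $z\neq0$ has $d$ preimages $y$ with $|y|=|z|^{1/d}$ and $|F'(y)|=d\,|z|^{(d-1)/d}$, so that

```latex
$${\cal L}(z)\;=\;\sum_{y^{d}=z}\frac{1}{|d\,y^{\,d-1}|^{\delta}}
\;=\;d^{\,1-\delta}\,|z|^{-\delta\frac{d-1}{d}}~,\qquad
{\cal L}^{N}(z)\;=\;d^{\,N(1-\delta)}\,|z|^{-\delta(1-d^{-N})}~,$$
which remains bounded in $N$ precisely when $\delta\ge1$.
```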
\[prop:in1\] Assume that a rational function $F$ satisfies the summability condition with an exponent $$\alpha < \frac{\delta}{\delta+\mmax}$$ and $\nu$ is a $\delta$-conformal measure on $J$. Let $\Delta_{k}:=\dist{f^{k}(\Crit), z}$ and $${\mathrm g}(z):= \sum_{k=1}^{\infty}
\gamma_{k}^{-\delta}\Delta_{k}^{-(1-\frac{1}{\mmax})\delta}\;.$$ There exists a positive constant $K$ such that for every $z\not \in \bigcup_{n=1}^{\infty} F^{n}(\Crit)$ and every positive integer $N$, $${\cal L}^{N}(z) < K\; {\mathrm g}(z)~.$$ The sequence $\gamma_{k}^{-1}$ (defined in ) is summable with an exponent $\beta=\frac{\mmax\alpha}{1-\alpha}<\delta$.
We will use the estimates of the Main Lemma for a modified decomposition procedure, as in Subsection \[sec:stop\].
#### Construction.
Let $z\in X$ and let a sequence $$F^{-N}(z),\dots, F^{-1}(z), z$$ form a chain of preimages, that is, $F(F^{-i}(z))=F^{-i+1}(z)$ for $i=1,\dots, N$ and $F^{-N}(z)\in X$. We decompose the chain into blocks of preimages of the types $2$ and $1\dots13$. This is done using the procedure of the Main Lemma with the following stopping rule: at the first occurrence of a type $2$ block we stop decomposing the chain, so this type $2$ block can be arbitrarily long. See Subsection \[sec:stop\] for the details.
The inductive procedure gives a coding of backward orbits by sequences of $1$’s, $2$’s and $3$’s, with only the following three types of codings allowable: $2$, $1\dots13$, $21\dots13$. We recall that, according to our convention, during the inductive procedure we put symbols in the coding from right to left.
We attach to every chain of preimages of $z$ the sequence $k_l, \dots, k_0$ of the lengths of the blocks of preimages of a given type in its coding. Again our convention requires that $k_{0}$ always stands for the length of the rightmost block of preimages in a coding. Clearly, $k_{0}+\cdots + k_{l}=N$.
#### Estimates.
We recall that the Main Lemma implies that every sequence of the form $11\dots 3$ with the lengths of the corresponding pieces $k_{l},\dots, k_{0}$ yields the expansion $$\gamma_{k_{l}}\cdot\dots\cdot \gamma_{k_{0}}\;
\Delta_{k_{0}}^{1-\frac{1}{\mmax}}\;,$$ where $\Delta_{k_{0}}=\dist{f^{k_{0}}(\Crit), z}$.
For singleton sequences $\{2\}$ of length $n$ we have the following estimate, $$\label{equ:only}
\sum_{y\in F^{-n}(x)}\frac{1}{|(F^{n})'(y)|^{\delta}}\;\lesssim\;
\sum_{ F^{-n}}
\frac{\nu(F^{-n}(B_{R'}(x)))}{\nu(B_{R'}(x))} \leq
\frac{1}{\nu(B_{R'}(x))}\leq K_{R'}\;,$$ which is independent of $n$.
Consider now the set of all preimages $y\in F^{-N}(z)$. Let $11\dots 13$ denote the set of all points $x\in
\bigcup_{i=1}^{N}F^{-i}(z)$ which are coded by maximal sequences of $1$’s and $3$’s. We define $n_{x}$ by the condition $F^{n_{x}}(x)=z$.
Hence, $$\begin{aligned}
{\cal{ L}}^{N}(z) & = & \sum_{{\mbox {all codings}}}\frac{1}{|(F^{N})'(y)|^{\delta}} \\
&\leq & \sum_{n_{x}=1}^{N}\left(\sum_{11\dots13}
\frac{1}{|(F^{n_{x}})'(x)|^{\delta}}\right)
\left(\sum_{y\in F^{-(N-n_{x})}(x)}
\frac{1}{|(F^{N-n_{x}})'(y)|^{\delta}}\right)\\
&\leq&\;K_{R'}\; \sum_{11\dots13}
(\gamma_{k_{l}}\cdot\dots\cdot \gamma_{k_{0}})^{-\delta}\;
\Delta_{k_{0}}^{-(1-\frac{1}{\mmax})\delta}\;\\\end{aligned}$$ Since for every sequence $k_{l},\dots, k_{0}$ of positive integers there are at most $(2\deg F)^{l+1}$ sequences $11\dots 13$ with the corresponding lengths of the pieces of type $1$ and $3$ (see the estimate (\[equ:deg\])), we obtain that $$\begin{aligned}
{\cal L}^{N}(z)
&\lesssim&
\sum_{l,k_{l},\dots, k_{0}} (4\deg F)^{l+1}
(\gamma_{k_{l}}\cdot\dots\cdot \gamma_{k_{0}})^{-\delta}\;
\Delta_{k_{0}}^{-(1-\frac{1}{\mmax})\delta}\;\\
&<& \sum_{l=1}^{\infty}
\left(4\deg F \sum_{k_{l}} \gamma_{k_{l}}^{-\delta}\right)
\cdot \dots\cdot \left(4\deg F \sum_{k_{1}} \gamma_{k_{1}}^{-\delta}\right)\cdot
\left(4\deg F \sum_{k_{0}}
\gamma_{k_{0}}^{-\delta}\Delta_{k_{0}}^{-(1-\frac{1}{\mmax})\delta}\right)\\
&<&\; \sum_{l=1}^{\infty}\left(\frac{1}{4}\right)^{l}
\left(4\deg F \sum_{k_{0}}
\gamma_{k_{0}}^{-\delta}\Delta_{k_{0}}^{-(1-\frac{1}{\mmax})\delta}\right)\\
&<& K\; \sum_{k_{0}}
\gamma_{k_{0}}^{-\delta}\Delta_{k_{0}}^{-(1-\frac{1}{\mmax})\delta}\;.\end{aligned}$$ By Lemma \[lem:techseq\], the sequence $\gamma_{k}^{-1}$ is summable with any exponent bigger than $\frac{\mmax\alpha}{1-\alpha}$.
#### Proof of Theorem \[theo:integ\].
The rest is standard reasoning. By Proposition \[prop:in1\], ${\cal L}^{n}(z)\leq \gr(z)$ and $\gr(z)\in L^{1}(\nu)$. Hence, the measures $$\nu_{N}=\frac{1}{N}\sum_{i=0}^{N-1}F^{i}_{*}(\nu)\;$$ form a weakly compact set of probability measures absolutely continuous with respect to $\nu$, with densities bounded by $\gr$. A weak limit of $\nu_{N}$ gives an absolutely continuous invariant measure.
Uniqueness and ergodicity follow by an argument presented in Section \[sec:lyap\].
Regularity of invariant densities
---------------------------------
\[theo:inreg\] Suppose that $F$ satisfies the summability condition with an exponent $$\alpha < \frac{\delta}{\delta+\mmax}\; .$$ Let $\eta_{1} > \eta_{0}:=\delta(1-\frac{1}{\mmax})$ and assume $\eta$-integrability of the unique $\delta_{Point}$-conformal measure $\nu$ for every $\eta\in (0,\eta_{1})$. Then for every $\zeta<\eta_{1}/\eta_{0}$ $$\int \left(\frac{d\sigma}{d\nu}\right)
^{\zeta}\, d\nu< \infty \;,$$ and for every Borel set $A$, $$\sigma (A)\leq K_{\zeta}
\nu(A)^{1-1/\zeta}\;.$$
To prove $\zeta$-integrability of the density $d\sigma/d\nu$ we use Proposition \[prop:in1\]. Recall that $\Delta_{k}= \dist{f^{k}(\Crit), z}$. We use the Hölder inequality with the exponents $1/\zeta+1/\zeta'=1$ in the estimates below. $$\begin{aligned}
\label{equ:above}
\sum_{k}\gamma_{k}^{-\delta}\Delta_{k}^{-(1-1/\mmax)\delta}& = &
\sum_{k}\gamma_{k}^{-\delta/\zeta'}\;\frac{\gamma_{k}^{-\delta/\zeta}}
{\Delta_{k}^{(1-1/\mmax)\delta}}\\
& \leq &
\left(\sum_{k}\gamma_{k}^{-\delta}\right)^{1/\zeta'}
\;
\left(\sum_{k}
\frac{\gamma_{k}^{-\delta}}{\Delta_{k}^{\delta(1-1/\mmax)\zeta}}\right)
^{1/\zeta}\nonumber\\
&\leq & K\;
\left(\sum_{k}
\frac{\gamma_{k}^{-\delta}}{\Delta_{k}^{\eta}}\right)
^{1/\zeta}\nonumber\end{aligned}$$ with $\eta\in [0,\eta_{1})$. Next, we integrate the density $d\sigma/d\nu$ to the power $\zeta$. Proposition \[prop:in1\] and the inequality (\[equ:above\]) imply that $$\begin{aligned}
\int \left(\frac{d\sigma}{d\nu}\right)
^{\zeta}\, d\nu &\lesssim &
\int
\left(\sum_{k}
\gamma_{k}^{-\delta}\Delta_{k}^{-(1-1/\mmax)\delta}\right)^{\zeta}\, d\nu\\
& \lesssim & \sum_{k}
\gamma_{k}^{-\delta}\int \Delta_{k}^{-\eta} d\nu < \infty \;.\end{aligned}$$ To prove the last estimate of the Corollary, we apply once more the Hölder inequality with the exponents $\zeta$ and $\zeta'$. $$\begin{aligned}
\sigma(A) &=& \int_{A} \frac{d\sigma}{d\nu}\, d\nu
\leq \left(\int_{A}d\nu\right)^{1/\zeta'}
\left(\int_{A}
\left(\frac{d\sigma}{d\nu}\right)^{\zeta} d\nu\right)^{1/\zeta}\\
&\leq & K_{\zeta} \nu(A) ^{1-1/\zeta}\;.\end{aligned}$$
Under the assumptions of Theorem \[theo:integ\], there exists a positive constant $C$ so that for $\nu$ almost every point $z$, $$\frac{d\sigma}{d\nu}(z) \geq C,$$ where $d\sigma/d\nu$ stands for the density of the absolutely continuous invariant measure $\sigma$.
Let ${\rm g}=\sum_{k=1}^{\infty}
\gamma_{k}^{-\delta}\Delta_{k}^{-(1-\frac{1}{\mmax})\delta}$ and ${\rm g}_{n}(z):=\sum_{k=n}^{\infty}
\gamma_{k}^{-\delta}\Delta_{k}^{-(1-\frac{1}{\mmax})\delta}$. By the hypothesis of Theorem \[theo:integ\] (the integrability condition), $$\int {\mathrm g}(z)\, d\nu < \infty$$ and thus for almost all $z\in \CC$ with respect to $\nu$, $\lim_{n\rightarrow\infty}{\rm g}_{n}(z)=0$. Consequently, there exists a point $w \not \in \bigcup_{n=1}^{\infty}F^{-n}(\Crit)$ and a positive integer $N$ so that $d\sigma/d\nu (w)\geq 1$ and ${\rm g}_{n}(w)<1/2$ for every $n\geq N$. Observe that the choice of $N$ and $w$ does not depend on $R'$.
Choose $R'$ so small that $B_{R'}(w)\cap F^{k}(\Crit)=\emptyset$ for all $k\leq N$ and decompose the backward orbit of $w$ into pieces of type $1$, $2$ or $3$ as described in the proof of Proposition \[prop:in1\]. By the construction, only the following three types of codings are allowable: $2$, $1\dots13$, $21\dots13$.
Let ${\cal R}_{ R'}^{n}(z)=\sum_{\{2\}}|(F^{n})'(y)|^{-\delta}$ be the sum over all type $2$ preimages of $z$ of length $n$ (the regular part). The sum over all other preimages is denoted by ${\cal S}_{R'}^{n}(z)$ (the singular part, compare [@przytycki-ce]). By the choice of $R'$, ${\cal S}_{R'}^{n}(w)<1/2$ for every positive $n$. Hence, by the choice of $w$ and for $n$ large enough $$\frac{3}{4}\leq \frac{1}{n}\sum_{i=0}^{n-1}{\cal L}^{i}(w)=
\frac{1}{n}\sum_{i=0}^{n-1}{\cal R}_{R'}^{i}(w)+
\frac{1}{n}\sum_{i=0}^{n-1}{\cal S}_{R'}^{i}(w)
\leq \frac{1}{n}\sum_{i=0}^{n-1}{\cal R}_{R'}^{i}(w)
+\frac{1}{2}~.$$ We choose new $R_{\rm{new}}':=R'/2$. This will change the decomposition of backward orbits of points, generally allowing more type $2$ sequences of preimages. If $z\in B_{R'/2}(w)$ then every type $2$ preimage of $w$ with the parameter $R'$ corresponds to exactly one type $2$ preimage of $z$ with $R_{\rm new}'$. By the bounded distortion, ${\cal R}_{R'/2}^{n}(z)~\grtsim ~{\cal R}_{R'}^{n}(w)$ and thus for $n$ large enough, $$\frac{1}{n}\sum_{i=0}^{n-1}{\cal L}^{i}(z)\geq \frac{1}{n}\sum_{i=0}^{n-1}
{\cal R}_{R'}^{i}(w) ~\grtsim ~1/4~.$$
By the eventually onto property, there exists a positive integer $m$ so that for every $z\in J$ there is a preimage $u=F^{-m}(z)\in B_{R'}(w)$. Let $M=\sup_{z\in J} |F'(z)|$. Then for almost every $z\in \CC$ with respect to $\nu$, $$d\sigma/d\nu(z)=\lim_{n\rightarrow \infty}\frac{1}{n}
\sum_{i=0}^{n-1}{\cal L}^{i}(z) \geq M^{-m\delta}
\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=m}^{n-1}{\cal L}^{i-m}(u)~
\grtsim ~M^{-m\delta}/2~.$$
Analytic maps of interval
-------------------------
We are interested in analytic maps $f$ with negative Schwarzian derivative (i.e. ${S}(f)<0$, recall the formula (\[eq:schdef\])), which map a compact interval $I$ with non-empty interior into itself. We will denote by $F$ a complex extension of $f$ to a neighborhood $U_F\supset I$ in the complex plane which does not contain any critical points other than the real ones. We will study iterates of [*real*]{}$\;$ inverse branches $\Fe$ of $F$ defined by the condition: if $U\subset U_F$ is a topological disk such that $U\cap \R$ is an interval, then for every $x\in U\cap \R$, $\Fe(x)\in I$. By definition, $\Fe(U)$ is a topological disk and $\Fe(U\cap \R)= \Fe(U)\cap \R$. We will show that iterates of points under the real inverse branches stay at a bounded distance from the interval $I$. To this end we will need the disk property based on Proposition 3 and Proposition 4 of [@gss]. For every $a,b\in \R$ define $D_{a,b}$ to be the open disk centered at $(a+b)/2$ with radius $|a-b|/2$.
\[SAD1\] Let $h:I\to \R$ be an analytic diffeomorphism from a compact interval $I$ with non-empty interior into the real line. Suppose that $f:I\to \R$ either coincides with $h$ or is of the form $h^{\ell}$, where $\ell>1$ is an integer. In the latter case assume that there exists $\zeta\in I$ such that $h(\zeta)=0$. Let $H$ be an extension of $h$ to a complex neighborhood of $I$ and let $F(z)$ be equal either to $H(z)$ or $H(z)^{\ell}$. If $S(f) < 0$ on $I$ then there exists $\delta > 0$ such that for every two distinct points $a,b\in I$ with $\zeta \not \in (a,b)$ and $\diam{D_{f(a),f(b)}}<\delta$, the connected component of $F^{-1}(D_{f(a),f(b)})$ which contains $(a,b)$ is contained in $D_{a, b}$.
\[coro:real\] There exists a constant $L>0$ so that for every $n\in \N$, every $x\in I$, and every inverse branch $f^{-n}$ defined on $B_L(x)\cap \R$, the real inverse branch $F_{\rm{rl}}^{-n}$ which coincides with $f^{-n}$ on $B_L(x)\cap \R$ is well defined on $B_L(x)$ and $F_{\rm{rl}}^{-n}(B_{L}(x))\subset U_{F}$.
Observe that $f$ has only finitely many critical points in $I$ and for every critical point $c\in \Crit$ there exists a neighborhood $U_{c}\ni c$ such that $F$ is of the form $F(c)(1-H_{c}(z)^{\ell(c)})$ with $H_{c}$ a biholomorphic function near $c$. The proof follows immediately from Fact \[SAD1\].
#### Proof of Theorem \[coro:uni\].
The normalized Lebesgue measure $\nu$ on $I$ can be regarded as a $\delta$-conformal measure with an exponent $\delta=1$. Clearly, $\nu$ satisfies the uniform integrability condition for any $\eta<1$. We will show that we can use the estimates of Proposition \[prop:in1\] for the real inverse branches of $F$. From Lemma \[coro:real\] we infer that all preimages of disks $B_{L}(x)$ by the real inverse branches of $F$ are well-defined and of uniformly bounded diameters. This means that the estimates of Lemma \[lem:main\] (the Main Lemma) for disks $B_{\Delta}(x)$, $x\in\R$, and the real inverse branches $F_{\rm{rl}}^{-N}$ are still valid.
We define a real part ${\cal
L}_{{\mathrm rl}}$ of the Ruelle-Perron-Frobenius operator $\cal
L$ by, $${\cal L}_{{\mathrm rl}}(\nu)(z):=
\sum_{y\in F^{-1}
(z)\cap \R}\frac{1}{|F'(y)|^{\delta}}\;.$$
In the proof of Proposition \[prop:in1\], the equality $\Jac_{\nu}(x)=|F'(x)|^{\delta}$ was used only for the estimates involving type $2$ preimages. Explicitly, it is the estimate (\[equ:only\]). Let $F_{\rm{rl}}^{-k}$ be a real inverse branch of type $2$ (we use the inductive procedure from the proof of Proposition \[prop:in1\]) and $y= F_{\rm{rl}}^{-k}(x)$. We have the following uniform estimate, $$|(F^{k})'(y)|\;\sim \frac{\nu(B_{R'}(x))}{\nu(F^{-k}(B_{R'}(x)))}~.$$
The inductive construction of Proposition \[prop:in1\] yields a coding of backward orbits by sequences of $1$’s, $2$’s, and $3$’s. We recall that only three types of the codings are allowable: $2$, $1\dots 13$, $21\dots13$. Lemma \[coro:real\] guarantees that only the critical points of $f$ are used in the inductive construction of the backward codings.
After these preparations we are ready to invoke Proposition \[prop:in1\] with $\cal L$ replaced by ${\cal L}_{{\mathrm rl}}$. Hence, there exists a positive constant $K$ such that for every $z\in \R\setminus \bigcup_{n=1}^{\infty} f^{n}(\Crit)$ and every positive integer $N$, $${\cal L}_{{\mathrm rl}}^{N}(\nu)(z) =\sum_{y\in F^{-N}
(z)\cap \R}\frac{1}{|(F^{N})'(y)|^{\delta}} < K\; \sum_{k=1}^{\infty}
\gamma_{k}^{-\delta}\Delta_{k}^{-(1-\frac{1}{\mmax})\delta}\;,$$ where $\Delta_k=\dist{f^k(\Crit),z}$ and the sequence $\gamma_{k}^{-1}$ (defined in ) is summable with an exponent smaller than $\delta$. To complete the proof, observe that ${\cal L}_{{\mathrm rl}}$ with $\delta=1$ is the Ruelle-Perron-Frobenius operator for $f$. Therefore, the densities of $\nu_{N}=f^{N}_{*}(\nu)$ are bounded by a constant multiple of an $L^1$ function $\sum_{k=1}^{\infty}\gamma_{k}^{-1}\Delta_{k}^{-(1-\frac{1}{\mmax})}$. Any weak limit of $\nu_{N}$ gives an absolutely continuous invariant measure for $f$. The additional claims of Theorem \[coro:uni\] follow immediately from Corollary \[theo:inreg\].
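As a toy illustration of the operator ${\cal L}_{{\mathrm rl}}$ with $\delta=1$ (our own example, not part of the proof): for the full unimodal map $f(x)=4x(1-x)$ the invariant density is known in closed form, $\rho(x)=1/(\pi\sqrt{x(1-x)})$, and one can verify numerically that $\rho$ is fixed by the transfer operator:

```python
import math

def transfer(h, x):
    """Perron-Frobenius operator with delta = 1 for f(y) = 4y(1-y):
    (Lh)(x) = sum of h(y)/|f'(y)| over the two preimages
    y = (1 -/+ sqrt(1-x))/2, where |f'(y)| = 4*sqrt(1-x)."""
    s = math.sqrt(1.0 - x)
    y1, y2 = (1.0 - s) / 2.0, (1.0 + s) / 2.0
    return (h(y1) + h(y2)) / (4.0 * s)

def rho(x):
    """Density of the absolutely continuous invariant measure of f."""
    return 1.0 / (math.pi * math.sqrt(x * (1.0 - x)))

# L rho = rho: the density is a fixed point of the transfer operator.
for x in (0.1, 0.3, 0.5, 0.7):
    assert abs(transfer(rho, x) - rho(x)) < 1e-12
print("rho is fixed by L")
```

The identity is exact here because the two preimages satisfy $y_{1}y_{2}=x/4$ and $y_{2}=1-y_{1}$, so the singularities of $\rho$ recombine under the operator.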
Geometry of Fatou components
============================
Integrable domains
------------------
\[prop:sum\] For every periodic Fatou component ${\cal F}$ the following two conditions are equivalent:
1. For any (equivalently, by the Koebe distortion lemma \[lem:koeb\], some) point $z\in{\cal F}$ away from the critical orbits there exists a sequence $\omega_{n}$ so that $\abs{\br{F^n}'(y)}~>~\omega_{n}~$ for points $y\in{\cal F}\cap F^{-n}z$, and $\sum_{n=1}^{\infty}\omega_{n}^{-1}<\infty$.
2. $\cal F$ is an integrable domain.
Without loss of generality we may assume that $F$ fixes a Fatou component ${\cal F}$. Throughout the rest of this Section we will always mean by $F^{-n}$ a branch mapping ${\cal F}$ to itself.
Take a subdomain $\Omega\,\subset\,{\cal F}$ with smooth boundary which contains all critical points from ${\cal F}$ and satisfies $F\Omega~\subset~\Omega$. Every point $z\in{\cal F}$ eventually enters $\Omega$ under iteration of $F$, so we can define $n(z):=\min\brs{n:~F^n(z)\in\Omega}$. Also fix a reference point $z_0\in\Omega$.
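The first-entry time $n(z)$ is easy to experiment with numerically. The following sketch uses a hypothetical toy case, not a map from the text: $F(z)=z^{2}$ on the superattracting basin of $0$, with $\Omega$ taken to be the disk of radius $1/4$; it illustrates how $n(z)$ grows as $z$ approaches the boundary of the component, in the spirit of Lemma \[lem:ourkoebe\].

```python
# Toy illustration (assumption: F(z) = z**2, Omega = {|z| < 1/4}).
# n(z) = min{ n : F^n(z) in Omega }, the first-entry time used in the text.
def entry_time(z, radius=0.25, max_iter=1000):
    n = 0
    while abs(z) >= radius and n < max_iter:
        z = z * z  # one application of F
        n += 1
    return n
```

For instance, `entry_time(0.9)` returns $4$, while points closer to the unit circle (the boundary of the basin) take longer to enter $\Omega$.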
Lemma \[prop:sum\] follows immediately from Lemma 7 from [@ceh] which we state as Lemma \[lem:ourkoebe\].
\[lem:ourkoebe\] Suppose that $z\notin\Omega$ and $n=n(z)$. Then $$\begin{aligned}
\dist{z,\partial{\cal F}}&\asymp&\abs{\br{F^n}'(z)}^{-1}~,\\
\qhdist{z,z_0}&\asymp&n~,\end{aligned}$$ up to constants depending only on ${\cal F}$ and our choice of $\Omega$.
The proof of Theorem \[prop:integrable\] is preceded by a few analytical observations; compare [@gehring-martio] and [@smith-stegenga]. We will use the fact that the quasihyperbolic metric is a geodesic metric; see the exposition [@koskela-quasihyp] for this and other properties of the quasihyperbolic metric.
\[lem:integre\] Every integrable domain (see Definition \[def:integral\]) is geodesically bounded, i.e. every minimal quasihyperbolic geodesic has uniformly bounded Euclidean arclength.
Let $\gamma$ be a minimal quasihyperbolic geodesic joining $z_{0}$ and $z$, i.e. $\qhdist{z_{0},z}=\qhlen\gamma$. We parameterize $\gamma$ by its Euclidean arclength starting from $z_{0}$ and define a function $g$ on the interval $[0,\len\gamma]$ by $g(t):=\qhlen{\gamma[0,t]}=\qhdist{z_0,\gamma(t)}$; the two quantities coincide since the geodesic is minimal. Then $$\frac{d}{dt} g(t)=\dist{\gamma(t),\partial{\cal F}}^{-1}~.$$ From the definition of integrable domains, we obtain the following differential inequality $$\label{equ:equ}
\frac{d}{dt}g(t) \cdot H(g(t))\geq 1~.$$ Integrating leads to $$s=\int_{0}^{s}dt \leq \int_{0}^{s}
H(g(t))dg(t) \leq
\int_{0}^{\infty}H(r)dr < \infty~.$$
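As an illustration (added here; it assumes the definition of an integrable domain requires $\dist{z,\partial{\cal F}}\leq H(\qhdist{z_{0},z})$ with $\int_{0}^{\infty}H(r)\,dr<\infty$, which is the form used in the differential inequality above), take the unit disk $\mathbb{D}$ with base point $z_{0}=0$. Then

```latex
\dist{z,\partial\mathbb{D}} = 1-|z|\;, \qquad
\qhdist{0,z} = \int_{0}^{|z|}\frac{dt}{1-t} = \log\frac{1}{1-|z|}\;,
```

so one may take $H(r)=e^{-r}$, and the bound of the lemma gives $\len\gamma\leq\int_{0}^{\infty}e^{-r}\,dr=1$, in agreement with the fact that the minimal geodesics from $0$ are radii.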
Denote by $\gamma(z,y)$ a minimal quasihyperbolic geodesic joining $z, y \in \Omega$, that is $\qhdist{z,y}=\qhlen{\gamma(z,y)}$.
\[lem:geo\] Let $z_{0}$ be a base point in $\Omega$ (see Definition \[def:integral\]). There exists a continuous decreasing function $A: \R_{+}\to\R_{+}$ with $\lim_{r\rightarrow 0}A(r)=\infty$ so that for every point $z_{1}$ on a minimal quasihyperbolic geodesic $\gamma=\gamma(z_{0},z)$, $$\qhdist{z_{0},z}\leq A(\len{\gamma(z_{1},z)})~.$$
We follow the notation from the proof of Lemma \[lem:integre\]. Let $\qhlen{\gamma(z_{0},z_{1})}=L$ and $z_1=\gamma(t)$. Then $\qhdist{z_{1},z}=L-g(t)$. By the inequality (\[equ:equ\]), $$\frac{d}{dt}\len{\gamma(z_{1},z)}= -1
\geq -\frac{d}{dt}g(t) H(g(t)) = \frac{d}{dt}
\int_{\qhdist{z,z_{0}}}^{\infty}H(r)dr~.$$ Integrating from $t$ to $L$, we obtain that $$-\len{\gamma(z_{1},z)} \geq
-\int_{\qhdist{z,z_{0}}}^{\infty}H(r)dr+\int_{L}^{\infty}H(r)dr~$$ and $$\len{\gamma(z_{1},z)} \leq
\int_{\qhdist{z,z_{0}}}^{\infty}H(r)dr~.$$ Set $A$ to be the inverse of the decreasing function $r\mapsto\int_{r}^{\infty}H(s)ds$. Then $A(r)$ is a decreasing continuous function with $\lim_{r\rightarrow 0}A(r)=\infty$. The lemma follows.
\[coro:loc\] For every integrable domain $\Omega$, with the continuous function $A$ defined in Lemma \[lem:geo\], the following inequality holds for every subarc $\gamma_{1}$ of a minimal quasihyperbolic geodesic $\gamma$ starting at the base point $z_{0}$: $$\frac{\len{\gamma_{1}}}{A(\len{\gamma_{1}})}~\lesssim ~\max_{z\in \gamma_{1}}
\dist{z,\partial\Omega}~.$$
Local connectivity and continua of convergence
----------------------------------------------
A connected set $K$ is [*locally connected*]{} if for every $z\in K$ and each neighborhood $U$ of $z$ there exists a neighborhood $V$ of $z$ such that $K\cap V$ lies in a single component of $K\cap U$.
A continuum $K_{\infty}\subset M$ is called a continuum of convergence of a set $M$ if there exists a sequence of continua $K_{i}\subset M$ so that
1. the continua $K_{i}$, $i\geq 1$, are pairwise disjoint;
2. $\lim_{i\rightarrow \infty}K_{i}=K_{\infty}$ in the Hausdorff metric.
The concept of a continuum of convergence has its principal application in the study of local connectedness of continua. We say that a continuum of convergence is non-trivial if it contains more than one point. We have the following fact (see Theorems 12.1 and 12.3 in [@whyburn]) which follows almost immediately from the definition of local connectivity.
\[fact:why\] If a continuum $M$ is not locally connected then there exists a non-trivial continuum of convergence of $M$.
From Fact \[fact:why\] it is clear that local connectivity of $M$ cannot fail at just one point: it fails at every point of a non-trivial continuum of convergence.
\[lem:conver\] The boundary of an integrable domain $\Omega$ (which is not necessarily connected) does not have a non-trivial continuum of convergence.
Suppose that there exists a non-trivial continuum of convergence $K_{\infty}\subset \partial \Omega$. Let $K_{i}\rightarrow K_{\infty}$ be mutually disjoint continua of $\partial \Omega$. We choose a point $w$ in $K_{\infty}$ and $\epsilon>0$ so that the circle $\{|z-w|= \epsilon\}$ intersects $K_{\infty}$ in at least two points and $\Omega\setminus \overline{B_{\epsilon}(w)}$ is non-empty. Without loss of generality, every $K_{i}$ intersects $\{|z-w|=\epsilon\}$ in at least two points, and hence $B_{\epsilon}(w)\setminus K_{i}$ has at least two components. For every $n>1/\epsilon$ choose a component $\Omega_{n}$ of $\Omega\cap B_{\epsilon}(w)$ and $z_{n}\in \Omega_{n}$ with the following properties: ${\rm(i)}$ $\dist{z_{n},w}=1/n$, ${\rm (ii)}$ there exists $j$ such that $w$ and $z_{n}$ are in different components of $B_{\epsilon}(w)\setminus K_{j}$. Observe that there are infinitely many different $\Omega_{n}$. Indeed, let $\Gamma_{j}$ be the component of $B_{\epsilon}(w)\setminus K_{j}$ which contains $w$. By [(ii)]{}, there exists $j$ so that $\Omega_{n}$ is contained in a component of $B_{\epsilon}(w)\setminus K_{j}$ different from $\Gamma_{j}$. By [(i)]{}, there exists $N>n$ so that $z_{N}$, and thus $\Omega_{N}$, is contained in $\Gamma_{j}$. Consequently, every $\Omega_{m}$ with $m>N$ is different from $\Omega_{n}$.
By the connectivity of $\Omega$, $\partial\Omega_{n}\cap \{|z-w|=\epsilon \}\not = \emptyset$. We join every $z_{n}$ with a base point $z_{0}\in \Omega
\setminus \overline{B_{\epsilon}(w)}$ by a minimal quasihyperbolic geodesic $\gamma(z_{n},z_{0})$. Let $\gamma_{n}\ni z_{n}$ be a component of $\gamma(z_{n},z_{0})\cap
B_{\epsilon/2}(w)$. The Euclidean length of $\gamma_{n}$ is at least $\epsilon/4$ for large $n$. Then, by Corollary \[coro:loc\], $$~\max_{y\in \gamma_{n }}\dist{y,\partial\Omega_{n}}
\geq~\max_{y\in \gamma_{n }}\dist{y,\partial\Omega}
~\grtsim~\frac{\len{\gamma_{n}}}{A(\len{\gamma_{n}})}~\grtsim~
\frac{\epsilon}{4A(\epsilon/4)}=:\delta.$$ Consequently, every $\Omega_{n}$ contains a ball of radius $\delta$ and hence there are infinitely many disjoint balls of the same radius in $B_{\epsilon}(w)$, a contradiction.
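The final contradiction can be quantified by an elementary area count (a remark added for completeness): $N$ pairwise disjoint balls of radius $\delta$ contained in $B_{\epsilon}(w)$ satisfy

```latex
N\,\pi\delta^{2} \;\leq\; \pi\epsilon^{2}\;,
\qquad\text{hence}\qquad
N \;\leq\; \Big(\frac{\epsilon}{\delta}\Big)^{2}\;,
```

while the construction above produces infinitely many such balls.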
The non-existence of a non-trivial continuum of convergence immediately implies that every component of the boundary is locally connected; indeed, every subcontinuum of a continuum without non-trivial continua of convergence is locally connected. In fact, this yields an even more precise topological description of every component of $\partial \cal F$ (see [@whyburn], pp. 82–84).
We say that a continuum $K$ is a [*rational*]{} curve if each point $z\in K$ has a family of arbitrarily small neighborhoods $U_{n}$ so that $\partial U_{n}\cap K$ is [*countable*]{}.
Theorem 3.3 and Remark 2.1 of Chapter V of [@whyburn] assert that every continuum which has no non-trivial continuum of convergence is a rational curve. Another direct consequence of Lemma \[lem:conver\] is that for every $\epsilon > 0$ there exist only finitely many components of $\partial {\cal F}$ with diameters larger than $\epsilon$. This in turn will imply the absence of wandering continua for polynomials which satisfy the summability condition.
Wandering continua
------------------
\[def:wander\] A continuum $K$ is called wandering if for every two non-negative integers $m\not = n$ $$F^{m}(K)\cap F^{n}(K)=\emptyset\;.$$
\[lem:wand\] Suppose that $F$ satisfies the summability condition with exponent $\alpha< \frac{1}{1+\mmax}$ and $\cal F$ is a periodic Fatou component. Then $F$ has no wandering continua contained in $\partial {\cal F}$ other than points. In particular, if $F$ is a polynomial then every connected component of $J$ which is not a single point is preperiodic.
We can always assume that $K$ is connected. Let $R'$ be supplied by Proposition \[prop:shrink\]. Consider the orbit $K_{i}:=F^{i}(K)$. Since $\partial {\cal F}$ does not have a non-trivial continuum of convergence, almost all continua $K_{i}$ have diameter smaller than $R'/2$. A ball $B_{i}$ of radius $R'$ centered at any point of $K_{i}$ contains $K_{i}$. By Proposition \[prop:shrink\], we obtain that $$\diam K \leq \diam F^{-i}(B_{i}) < (\tilde{\omega}_{i})^{-1} \rightarrow 0$$ and $K$ must be a point.
Uniform summability condition
=============================
#### Continuity of scales and parameters.
Suppose that $F$ is a rational function which satisfies the summability condition with an exponent $\alpha$ and that a sequence of rational functions $F_i$ tends $S(\alpha)$-uniformly to $F$. By the definition of $S(\alpha)$-uniform convergence and Lemma \[lem:techseq\], we may choose the same sequences $\{\alpha_{n}\}$, $\{\gamma_{n}\}$, and $\{\delta_{n}\}$ for all $F_i$ and for $F$.
All constants defined in $\rm{(i)-(iii)}$ in Section \[sec:const\] can be chosen uniformly, that is, independently of $F_i$ for $i$ large enough. The condition (iv) from Section \[sec:const\] needs to be replaced by the condition $\rm {(iv')}$:
[$\rm{(iv')}$]{} Let $\Crit'_F$ be the set of critical points of $F$ outside the Julia set $J_F$. We choose $R'$ so small that for all $i$ large enough there are no critical points from $\Crit_F'$ in the $2R'$-neighborhood of the Julia set $J_i$.
This choice of $R'$ is indeed possible. Since $F$ and $F_i$, $i\geq 0$, satisfy the summability condition, all periodic orbits of $F$ and $F_i$, $i\geq 0$, are either repelling or attracting, see Corollary \[cor:nosiegel\]. Suppose now that we can find a sequence $c_{i_k}$ of critical points of $F_{i_k}$ such that for some $c\in \Crit_F'$ $$\dist{c_{i_k}, c}\rightarrow 0\;\;\; \mbox{and}\;\;\; \dist{c_{i_k}, J_{i_k}} \rightarrow 0$$ when $k\rightarrow \infty$. The critical point $c$ is in the basin of an attracting periodic point $w$. Let $\eta>0$. There is $m>0$ such that $\dist{F^m(c), w}< \eta/3$. For every $i$ large enough there is an attracting periodic point $w_i$ of $F_i$ such that $\dist{w_i, w}<\eta/3$. Since $\dist{F^m(c_{i_k}), J_{i_k}}\rightarrow 0$ when $k\rightarrow \infty$, we obtain that for all $k$ large enough $\dist{J_{i_k},w_{i_k}}<\eta$, a contradiction.
#### Uniform version of Lemma \[lem:2t\].
We will argue that, after the above choice of the sequences, constants, and scales $\rm{(i)-(iv')}$, all the constants $R', R_{2t}, L, C(q), K$ from Lemma \[lem:2t\] are uniform, that is, independent of $F_i$ for $i$ large enough.
1. We say that a sequence $F^{-n}(z), \dots, F^{-1}(z), z$ of type $2$ preimages of $z$ is in the scale $R'$ if the ball $B_{R'/2}(z)$ can be pulled back univalently along the orbit $F^{-n}(z),\dots, F^{-1}(z), z$ and $\dist{z,J_F}\leq R'/2$.
We claim that for every $L_u'>0$ there exists a neighborhood of $F$ in the space of rational functions with the topology of uniform convergence so that for every $G$ from that neighborhood and every sequence $G^{-L_u'}(z), \dots, z$ of type $2$ preimages of $z$ in the scale $R'$ with $\dist{z,J_G}<R'/16$, the corresponding sequence $F^{-L_u'}(z), \dots, z$ is of type $2$ in the scale $R'/2$. Indeed, if $G$ is close enough to $F$ then $J_F$ is contained in the $R'/8$-neighborhood of $J_G$. To see this, consider a finite collection of repelling periodic orbits of $F$, with periods bounded by a constant, which is $R'/16$-dense in $J_F$. By the implicit function theorem, $J_G$ is $R'/8$-dense in $J_F$ if $G$ is close to $F$.
We possibly increase the constant $L$ from Lemma \[lem:2t\] so that the first two claims of Lemma \[lem:2t\] are satisfied with the estimates $7$ and $1/37$ instead of $6$ and $1/36$, respectively, for all type $2$ preimages of $z$ by $F$ in the scale $R'/2$. We put $L_u:=L$.
Therefore, by $C^1$ continuity, for every $L_u'>L_u$ there exists a neighborhood of $F$ so that for every $G$ from this neighborhood and every sequence of type $2$ preimages of $z$ in the scale $R'$,
$$\inf_{y\in \typeii{(z)},\,L_u'\ge n(y)\ge L_u}~\abs{\br{G^n}'(y)}~
>~6~,$$ and if the Poincaré series $\Sigma_{q}(v)$ for $F$ converges for some point $v$ then $$\sum_{y\in \typeii{(z)},\,L_u'\ge n(y)\ge L_u}\abs{\br{G^n}'(y)}^{-q}
~<~\fr1{36}~$$ provided $\dist{z,J_G}<R'/16$.
2. The constants $K, R_{2t}$, and $C(q)$ of Lemma \[lem:2t\] come from Koebe-type estimates and depend only on other uniform constants, hence are uniform.
We formulate a uniform version of Lemma \[lem:comp\].
\[lem:comp1\] Let $F$ be a rational function which satisfies the summability condition with an exponent $\alpha\leq 1$ and $(F_i)$ be a sequence of rational maps converging $S(\alpha)$-uniformly to $F$. There exists $\epsilon>0$ such that the backward orbit $\;\cdots , F_i^{-k}(z), \cdots , z\;$ of any $z$ from the $\epsilon$-neighborhood of the Julia set $J_i$ stays in the $R'/16$-neighborhood of $J_i$. The same claim is true for $F$.
The existence of $\epsilon_i>0$ for every $F_i$ is proved in Lemma \[lem:comp\]. If the $\epsilon_i$ tend to $0$ then $J_F\not = \hat{\C}$. Suppose that there is an infinite set $I$ of positive integers such that for every $i\in I$ there are $z_i\in \hat{\C}$ and $n_i\in \N$ so that $\dist{z_i, J_i}\geq R'/16$ and $\dist{F_i^{n_i}(z_i), J_i}\leq \epsilon_i$. By compactness, there exists a convergent sequence $z_{i_j}\rightarrow z\in \hat{\C}$ with the property that $B_{R'/32}(z)$ is disjoint from every Julia set $J_{i_j}$, $i_j\in I$, and thus also disjoint from $J_F$. This means that there is $m>0$ such that $F^m(z)$ is in the immediate basin of attraction of an attracting periodic point $w$ of $F$. Let $w_{i_j}$ be the corresponding attracting periodic point of $F_{i_j}$ for $j$ large enough. By continuity, $F_{i_j}^m(z)$ is uniformly close to $w_{i_j}$, and therefore the orbit of $z$ under $F_{i_j}$ must accumulate on the periodic orbit of $w_{i_j}$ for all $j$ large enough, a contradiction.
\[lem:mainunif\] Assume that a rational function $F$ satisfies the summability condition with an exponent $\alpha\leq 1$ and set $\beta=\mmax\alpha/(1-\alpha)$. Suppose that $F_i$ is a sequence of rational functions tending $S(\alpha)$-uniformly to $F$. Let $\epsilon$ be supplied by Lemma \[lem:comp1\] and suppose that a point $z$ belongs to the $\epsilon$-neighborhood of the Julia set $J_i$ and a ball $B_{\Delta}(z)$ can be pulled back univalently by a branch of $F_i^{-N}$.
We claim that there exists $i_0$ and positive constants $L'>L, K$ independent of $z, \Delta$, and $\epsilon$ such that for every $i\geq i_0$ the sequence $F_i^{-N}(z),\dots, z$ can be decomposed into blocks of types $1$, $2$, and $3$, and
- every type $2$ block, except possibly the leftmost one, has the length contained in $[L,L')$ and yields expansion $6$,
- the leftmost type $2$ block has the length contained in $[0,L]$ and yields expansion $K>0$,
- all subsequences of the form $1\dots13$, except possibly the rightmost one, yield expansion $$\gamma_{k_j}\dots\gamma_{k_1}\gamma_{k_0}~,$$ $k_i$ being the lengths of the corresponding blocks,
- the rightmost sequence of the form $1\dots 13$ yields expansion $$\begin{array}{ll}
\gamma_{k_j}\dots\gamma_{k_1}\gamma_{k_0}~\Delta^{(1-\mu(c)/\mmax)}&
{\it~if~a~critical~point~}c\in B_{\Delta}(z)~,\\
\gamma_{k_j}\dots\gamma_{k_1}\gamma_{k_0}~\Delta^{(1-1/\mmax)}&
{\it~otherwise~.}
\end{array}$$
We claim that the inductive construction from the Main Lemma can be carried out for $F_i$ and any $z$ from the $\epsilon$-neighborhood of $J_i$ provided $i$ is large enough. The constants and scales are given by the conditions $\rm{(i)-(iv')}$. Recall that the same sequences $\{\alpha_{n}\}$, $\{\gamma_{n}\}$, and $\{\delta_{n}\}$ can be used for both $F_i$ and $F$. By the definition of $S(\alpha)$-uniform convergence, there is a $1-1$ correspondence between the critical points of $F$ and those of $F_i$ provided $i$ is large enough. We follow the inductive decomposition of backward orbits of $F_i$ of the Main Lemma with one exception: all the critical points of $F_i$ contained in the $\epsilon$-neighborhood of $J_i$ are included in the construction. We treat $F_i$ as a small perturbation of $F$ and work in the same scale $0<R'<R$ both for $F_i$ and $F$. Using the uniform summability condition for all critical points inside the $\epsilon$-neighborhood of $J_i$, we obtain the expansion from the last three claims of Lemma \[lem:mainunif\] in exactly the same way as in the proof of the Main Lemma. In the construction of all backward pieces of type $1\dots13$ we use all the critical points of $F_i$ from the $\epsilon$-neighborhood of $J_i$.
What remains is to prove that we can find $L$ and $L'>L$ independently of $F_i$ so that the first two claims of Lemma \[lem:mainunif\] hold. We put $L:=L_{u}$, and hence $L$ is independent of $F_i$ by the uniform version of Lemma \[lem:2t\]. Next, for a given $F_i$, the constant $L'$ in the proof of the Main Lemma is defined as $L+L''$, where $L''$ is a function of $\{\alpha_{n}\}$, $C_{3t}$, and $R_{2t}$ only; these are uniform constants. We conclude the proof by invoking the uniform version of Lemma \[lem:2t\] for $L_{u}$ and $L_{u}'=L_{u}+L''$.
#### Uniform self-improving property and main estimates.
The proof of Proposition \[prop:poincare\] is self-contained except for references to Lemmas \[lem:main2\] and \[lem:bbb\]. Suppose that rational functions $F_i$ converge $S(\alpha)$-uniformly to a rational function $F$ which satisfies the summability condition with an exponent $\alpha \leq 1$. By the uniform version of the Main Lemma and Lemma \[lem:comp1\], we obtain the estimates of Lemma \[lem:main2\] for all $F_i$ sufficiently close to $F$. The decomposition of backward orbits of $F_i$ into pieces of types $1$, $2$, and $3$ uses, as in the uniform version of the Main Lemma, all the critical points of $F_i$ contained in the $\epsilon$-neighborhood of $J_i$ ($\epsilon$ comes from Lemma \[lem:comp1\]). Lemma \[lem:bbb\] is a reformulation of two claims of Lemma \[lem:2t\], which already has a uniform version. The constants $L:=L_u$ and $L':=L_u'$ are supplied by the uniform version of the Main Lemma.
Now suppose as in the hypothesis of Proposition \[prop:poincare\] that $F$ satisfies the summability condition with an exponent $\alpha< q/(\mmax+q)$ for some $q>0$ and that the sequence $(F_i)$ converges $S(\alpha)$-uniformly to $F$. Also, assume that the Poincaré series for $F$ with an exponent $q>0$ converges for some point $v\in \hat{\C}$, $\Sigma_q(v)<\infty$. In the proof of Proposition \[prop:poincare\] the assumption $\Sigma_{q}(v)<\infty$ is used to derive the following property: there exists $\eta>0$ so that for every $z$ such that $\dist{z, J_F} < \eta$, $$\sum_{y\in\typeii_{l}(z)}
\abs{\br{F^{n(y)}}'(y)}^{-q}~<~\fr1{36}~,$$ where $\typeii_{l}(z)$ is the set of all “long” (of order $L'>n(y)\ge L$) type $2$ preimages $y$ of $z$. The property in fact holds for all $F_i$ close enough to $F$ by the uniform version of Lemma \[lem:2t\] with $\eta:=\epsilon$ of Lemma \[lem:comp1\]. This shows that the self-improving property of Proposition \[prop:poincare\] is true for all $F_i$ close enough to $F$.
Taking into account the uniform self-improving property and Corollary \[cor:critexpon\], we obtain the main estimate behind the continuity of the Hausdorff dimension of Julia sets for $S(\alpha)$-uniformly converging rational maps.
\[coro:dim\] Assume that $F$ satisfies the summability condition with an exponent $$\alpha~<~\fr{p}{\mmax+p}~$$ for some $p>0$, and that a sequence of rational maps $(F_i)$ converges $S(\alpha)$-uniformly to $F$. If there exist $\epsilon>0$ and $q>p$ so that for every point $z$ from the $\epsilon$-neighborhood of the Julia set $J_F$, $$\sum_{y\in\typeii_{l}(z)}
\abs{\br{F^{n(y)}}'(y)}^{-q}~<~\fr1{36}~,$$ then for every $i$ large enough $\dpoin(J_i)\,<\,q$.
\[coro:cont\] If a sequence $(F_{i})$ of rational functions tends $S(\alpha)$-uniformly to $F$ which satisfies the summability condition with an exponent $\alpha< \frac{\dpoin(J_{F})}{\mmax+\dpoin(J_{F})}$ then $$\limsup_{i\rightarrow \infty}\dpoin(J_{i})\leq \dpoin(J_{F})\;.$$
By Theorem \[theo:poincare\], for every positive $\eta$ and every critical point $c$ of maximal multiplicity, the Poincaré series for $F$ converges: $\Sigma_{\dpoin(J_{F})+\eta}(c)<\infty$. By the uniform version of Lemma \[lem:2t\], for every $\eta>0$ we can find $L_u(\eta)\leq L_u'(\eta)$ so that for every $z$ such that $\dist{z, J_i}< \epsilon$ ($\epsilon$ is supplied by Lemma \[lem:comp1\]), $$\sum_{y\in \typeii{(z)},\,L_u'(\eta)\ge n(y)\ge L_u(\eta)}\abs{\br{F_{i}^n}'(y)}^
{-\dpoin(J_{F})-\eta}
~<~\fr1{36}~$$ provided $i$ is large enough. Here $\typeii{(z)}$ stands for the set of type $2$ preimages of $z$ for $F_{i}$. In the definition of these type $2$ preimages all critical points of $F_i$ inside the $\epsilon$-neighborhood of $J_i$ were considered. The constants $L_u'(\eta) >L_u(\eta)$ depend only on $F$ and $\eta$. We use the uniform version of Corollary \[coro:dim\] (set $p:=\dpoin(J_{F})$ and $q:=\dpoin(J_{F})+\eta$) to conclude that $\dpoin(J_{i})\leq \dpoin(J_{F})+\eta$ for all $i$ large enough.
#### Proof of Theorem \[theo:contin\].
By Corollary \[coro:cont\] and Theorem \[theo:poincare\], $$\limsup_{n\rightarrow \infty}\HD(J_{n})=\limsup_{n\rightarrow \infty}
\dpoin(J_{n})\leq \dpoin(J_{F})= \HD(J_{F})\;.$$ Since $\HD(J_{F})=\HH(J_{F})$, for every $\eta >0$ there exists a hyperbolic subset $K$ of $J_{F}$ so that $\HD(K)\geq \HD(J_{F})-\eta$ (see [@denker-urbanski-sullconf; @denker-urbanski-existconf]). By the general theory of hyperbolic sets, the set $K$ persists under small perturbations, and therefore every $J_{i}$ for $i$ large enough contains a hyperbolic set $K_{i}$ of Hausdorff dimension at least $\HD(K)-\eta\geq \HD(J_{F})-2\eta$. Since $\eta$ was arbitrary, $$\liminf_{i\rightarrow \infty}\HD(J_{i})\geq \HD(J_{F})~,$$ which proves the theorem.
Unicritical polynomials
=======================
It is known that the connectedness locus ${\cal M}_{d}$ for unicritical polynomials $z^{d}+c$ is a full compact set. Let $\phi$ be the Riemann map from the unit disk to $\C\setminus {\cal M}_{d}$. By Fatou’s theorem, for almost all $\xi$ with respect to the Lebesgue measure $m$ on the unit circle, the radial limit $\lim_{r\rightarrow 1} \phi(r\xi)$ exists. Therefore, the harmonic measure $\chi$ is given by $$\chi = \phi_{*}(m)\;.$$ By Fatou’s theorem, for almost every $c\in \partial{\cal M}_{d}$ with respect to $\chi$ the external radius $\Gamma(c)$ (see Definition \[defi:radius\]) terminates at $c$.
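The push-forward description $\chi=\phi_{*}(m)$ can be illustrated on a toy domain. The sketch below is an assumption for illustration only: instead of the Riemann map of $\C\setminus{\cal M}_{d}$ (which has no elementary formula), it uses the exterior map $\phi(z)=z+1/z$ of the segment $[-2,2]$, whose radial limits $\phi(e^{it})=2\cos t$ are explicit, and samples the harmonic measure of a boundary interval.

```python
import math

# Toy model (assumption): phi(z) = z + 1/z maps the exterior of the unit
# disk onto the exterior of [-2, 2]; its radial limits are phi(e^{it}) = 2 cos t.
# The harmonic measure chi = phi_*(m) of [a, b] is approximated by the
# fraction of uniformly spaced boundary angles whose image lands in [a, b].
def harmonic_measure_fraction(a, b, n=100000):
    count = sum(1 for k in range(n)
                if a <= 2.0 * math.cos(2.0 * math.pi * k / n) <= b)
    return count / n
```

The sampled mass of $[-1,1]$ approaches the exact value $\int_{-1}^{1}dx/(\pi\sqrt{4-x^{2}})=1/3$.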
Denote the Julia set of the polynomial $z^{d}+c$ by $J_{c}$. By Shishikura’s theorem [@shishikura], there exists a residual set $Z \subset \partial{\cal M}_{2}$ with the property that for every $c \in Z$, $$\HH(J_{c})=2\;.$$ Let $c\in \partial {\cal M}_{2}$ correspond to a Collet-Eckmann polynomial. By [@graczyk-swiatek-ce] and [@smirnov-symbce], Collet-Eckmann parameters are typical with respect to the harmonic measure $\chi$, and the corresponding Julia sets are of Hausdorff dimension $<2$, [@ceh]. Choose now a sequence $c_{n}\in Z$, $c_{n}\rightarrow c$. Since $\HH(\cdot)$ is lower semicontinuous, there exist open disks $D_{n}$ centered at $c_{n}$ so that for all $c'\in D_{n}$, $$\HD(J_{c'})> 2-\frac{1}{n}\;.$$ By Yoccoz’s theorem, ${\cal M}_{2}$ is locally connected at $c$ and thus there exists a curve $\gamma$ terminating at $c$ so that $$\limsup_{\gamma \ni c'\rightarrow c}\HD(J_{c'})=2 > \HD(J_{c})\;$$ and hence $\HD$, as a function of $c'\in \C\setminus {\cal M}_{2}$, does not extend continuously to $\partial {\cal M}_{2}$.
Another type of discontinuity of $\HD(\cdot)$ is caused by parabolic implosion. Let $c\in \partial {\cal M}_{2}$ be a parameter at which the external radius $\Gamma(c)$ terminates and whose polynomial has a parabolic cycle. Parabolic implosion means that $J_{c}$ is strictly contained in the Hausdorff limit of $J_{c'}$, $c'\in \Gamma(c)$. It was recently shown in [@douady-sentenac-zinsmeister] that if $d=2$ and $c> 1/4$ then $$\HD(J_{1/4})< \liminf_{c\rightarrow 1/4}\HD(J_{c})
\leq \limsup_{c\rightarrow 1/4}\HD(J_{c}) < 2~.$$
Renormalizations
----------------
We start with the observation that unicritical polynomials satisfying the summability condition are not infinitely renormalizable.
\[lem:rig\] Suppose that $f_{c}$ satisfies the summability condition with an exponent $\alpha\leq \frac{1}{1+d}$. Then $f_{c}$ is only finitely many times renormalizable.
Indeed, suppose that $f:= f_{c}$ is infinitely renormalizable. Then there is a sequence $n_{j}$ and two topological disks $\overline{U_{j}}\subset V_{j}$ so that $f^{n_{j}}:U_{j}\rightarrow V_{j}$ is proper of degree $\ell$ and $$J_{j}:= \cap_{i=0}^{\infty}f^{-n_{j}i}(U_{j})\ni 0$$ is a non-trivial continuum ($f^{-n_{j}}$ is a branch which sends $V_{j}$ onto $U_{j}$). We may assume that $U_{j+1}\subset U_{j}$ for all $j\geq 0$. Let $R'$ be the constant chosen in Section \[sec:const\]. If $\diam f^{k}(J_{j})\geq R'$ for every positive $j$ and every $0\leq k\leq n_{j}$, then $\cap_{j=1}^{\infty} J_{j}$ is a non-trivial wandering continuum and the Julia set of $f_{c}$ would not be locally connected. Hence, there exist $j$ and $k$ so that $\diam f^{k}(J_{j})\leq R'$. Let $B_{R'}\supset f^{k}(J_{j})$. We apply Proposition \[prop:shrink\] to the inverse branches $f^{-k-n_{j}s}$, where $s$ is a positive integer and $f^{-n_{j}s}(B_{R'})\supset J_{j}$, and to the ball $B_{R'}$. Then $$\diam J_{j}\leq \diam f^{-k-n_{j}s}(B_{R'})\leq \omega_{k+n_{j}s}^{-1} \rightarrow 0,$$ a contradiction.
Radial limits
-------------
#### Proof of Theorem \[theo:cont\].
Let $f_{c}=z^{d}+c$. We use the following fact stated as Theorem 1.2 in [@graczyk-swiatek-ce].
Let $\Gamma(c_{0})$ be an external ray landing at $c_{0}$. For every $d\geq 2$ and for almost every $c_{0}\in \partial {\cal M}_{d}$ with respect to the harmonic measure there exist constants $K>0$ and $\Lambda>1$ so that for each $c\in \Gamma(c_{0})$ and every $n>0$ $$(f_{c}^{n})'(c)\geq K\Lambda^{n}\;.$$
This means that for almost all $c_{0}$ with respect to the harmonic measure, the maps $f_{c}$, $c\in \Gamma(c_{0})$, converge $S(\alpha)$-uniformly to $f_{c_{0}}$ for every $\alpha>0$. To complete the proof of Theorem \[theo:cont\], we invoke Theorem \[theo:contin\]: $$\limsup_{\Gamma(c_{0})\ni c\rightarrow c_{0}}\HD(J_{c})
=\HD(J_{c_{0}})\;.$$
#### Proof of Theorem \[theo:end\].
Fix a generic $c_{0}\in \partial {\cal M}_{d}$ as in the proof of Theorem \[theo:cont\]. Consider $f_{c}$ with $c\in \Gamma(c_{0})$. Since $f_{c}$ is hyperbolic, by [@sullivan-rio] it admits a unique conformal measure $\nu_{c}$, which is a Hausdorff measure (restricted to the Julia set $J_{c}$) normalized by a multiplicative constant so that its total mass is one. Hence, the conformal exponent of $\nu_{c}$ is equal to $\HD(J_{c})$. By Theorem \[theo:cont\], $\HD(J_{c})\rightarrow \HD(J_{c_{0}})$ as $\Gamma(c_{0}) \ni c\rightarrow c_{0}$. This means that any weak accumulation point of $\nu_{c}$ is a conformal measure with exponent $\HD(J_{c_{0}})$. But there is only one such measure for $f_{c_{0}}$ by Theorem \[theo:4\], and we obtain the convergence of $\nu_{c}$ to the geometric measure of $f_{c_{0}}$.
[10]{} M. Aspenberg. The Collet-Eckmann condition for rational maps on the Riemann sphere. Ph.D. thesis, KTH, Sweden, 2004.
A. F. Beardon. Iteration of Rational Functions. Springer-Verlag, New York, 1991. M. Benedicks, L. Carleson. On iterations of $1-ax\sp 2$ on $(-1,1)$. (2), 122(1):1–25, 1985.
M. Benedicks, L. Carleson. The dynamics of the [H]{}énon map. (2), 133(1):73–169, 1991.
C. J. Bishop. Minkowski dimension and the [P]{}oincaré exponent. , 43(2):231–246, 1996.
C. J. Bishop and P. W. Jones. Hausdorff dimension and [K]{}leinian groups. , 179(1):1–39, 1997.
H. Bruin, S. van Strien 137 (2003), 223–263.
H. Bruin, S. Luzzatto, S. van Strien (4) 36 (2003), no. 4, 621–646.
H. Bruin, W. Shen, S. van Strien 241 (2003), no. 2-3, 287–306.
L. Carleson, T. W. Gamelin. Complex Dynamics. Springer-Verlag, New York, 1993.
W. de Melo, S. van Strien. One-Dimensional Dynamics. Springer-Verlag, Berlin, 1993.
M. Denker, M. Urba[ń]{}ski. On [S]{}ullivan’s conformal measures for rational maps of the [R]{}iemann sphere. , 4(2):365–384, 1991.
M. Denker, M. Urba[ń]{}ski. On the existence of conformal measures. , 328(2):563–587, 1991.
A. Douady, P. Sentenac, M. Zinsmeister. Implosion parabolique et dimension de [H]{}ausdorff. , 325(7):765–772, 1997.
H. Federer. . Springer-Verlag New York Inc., New York, 1969. Die Grundlehren der mathematischen Wissenschaften, Band 153.
F. W. Gehring, O. Martio. Lipschitz classes and quasiconformal mappings. , 10:203–219, 1985.
J. Graczyk, D. Sands, G. Świątek. Metric Attractors for Smooth Unimodal Maps. , 159(2):725–740, 2004.
J. Graczyk, S. Smirnov. [C]{}ollet, [E]{}ckmann and [H]{}ölder. , 133(1):69–96, 1998.
J. Graczyk, G. Świątek. The Real Fatou Conjecture. Princeton University Press, Princeton, NJ, 1998.
J. Graczyk, G. Świątek. Harmonic measure and expansion on the boundary of the connectedness locus. , 142(3):605–629, 2000.
G. H. Hardy, J. E. Littlewood, G. P[ó]{}lya. . Cambridge University Press, Cambridge, 1988. Reprint of the 1952 edition.
J. Heinonen, P. Koskela. Definitions of quasiconformality. , 120(1):61–79, 1995.
M. V. Jakobson. Absolutely continuous invariant measures for one-parameter families of one-dimensional maps. , 81(1):39–88, 1981.
P. W. Jones, S. K. Smirnov. Removability theorems for [S]{}obolev functions and quasiconformal maps. , 38(2): 263–279, 2000.
S. Kallunki, P. Koskela. Exceptional sets for the definition of quasiconformality. , 122(4):735–743, 2000.
O. Kozlovski, W. Shen, S. van Strien. Rigidity for real polynomials. , (2) 165 (2007), no. 3, 749–841.
P. Koskela. Old and new on the quasihyperbolic metric. In [*Quasiconformal mappings and analysis (Ann Arbor, MI, 1995)*]{}, pages 205–219. Springer, New York, 1998.
G. Levin, S. van Strien. Local connectivity of the [J]{}ulia set of real polynomials. , 147(3):471–541, 1998.
R. Mañ[é]{}, P. Sad, D. Sullivan. On the dynamics of rational maps. , 16(2):193–217, 1983.
R. Mañé. The [H]{}ausdorff dimension of invariant probabilities of rational maps. In [*Dynamical systems (Valparaiso, 1986)*]{}, pages 86–117. Springer, Berlin–New York, 1988. Lecture Notes in Mathematics, Vol. 1331.
P. Mattila. . Cambridge University Press, Cambridge, 1995.
C. T. McMullen. Complex Dynamics and Renormalization. Princeton University Press, Princeton, NJ, 1994.
C. T. McMullen. Hausdorff dimension and conformal dynamics. II. Geometrically finite rational maps. 75(4): 535–593, 2000.
T. Nowicki, S. van Strien. Invariant measures exist under a summability condition for unimodal maps. , 105(1):123–136, 1991.
Ch. Pommerenke. . Springer-Verlag, New York, 1992.
F. Przytycki. On measure and [H]{}ausdorff dimension of [J]{}ulia sets of holomorphic [C]{}ollet-[E]{}ckmann maps. In [*International Conference on Dynamical Systems (Montevideo, 1995)*]{}, pages 167–181. Longman, Harlow, 1996.
F. Przytycki. Iterations of holomorphic [C]{}ollet-[E]{}ckmann maps: conformal and invariant measures. [A]{}ppendix: on non-renormalizable quadratic polynomials. , 350(2):717–742, 1998.
F. Przytycki. Conical limit set and [P]{}oincaré exponent for iterations of rational functions. , 351(5): 2081–2099, 1999.
F. Przytycki, S. Rohde. Rigidity of holomorphic [C]{}ollet-[E]{}ckmann repellers. , 37(2):357–371, 1999.
M. Rees. Positive measure sets of ergodic rational maps. , 19(3):383–407, 1986.
M. Shishikura. The [H]{}ausdorff dimension of the boundary of the [M]{}andelbrot set and [J]{}ulia sets. , 147(2):225–267, 1998.
S. Smirnov. Symbolic dynamics and Collet-Eckmann conditions. , 2000(7):333–351.
W. Smith and D. A. Stegenga. Hölder domains and [P]{}oincaré domains. , 319(1):67–100, 1990.
E. M. Stein. . Princeton University Press, Princeton, N.J., 1970. Princeton Mathematical Series, No. 30.
D. Sullivan. Conformal dynamical systems. In [*Geometric dynamics (Rio de Janeiro, 1981)*]{}, pages 725–752. Springer, Berlin, 1983.
M. Tsujii. Positive [L]{}yapunov exponents in families of one-dimensional dynamical systems. , 111(1):113–137, 1993.
M. Urba[ń]{}ski. Measures and dimensions in conformal dynamics. , 40: 281–321, 2003.
P. Walters. . Springer-Verlag, Berlin-New York, 1975. Lecture Notes in Mathematics, Vol. 458.
G. T. Whyburn. . American Mathematical Society, Providence, R.I., 1963.
|
---
abstract: |
We advance an exact, explicit form for the solutions to the fractional diffusion-advection equation. Numerical analysis of this equation shows that its solutions resemble power-laws.
Keywords: distributions; diffusion-advection; power-laws
author:
- |
  M. C. Rocca$^{1,2}$, A. R. Plastino$^{3}$,\
  A. Plastino$^{1,2}$, G. L. Ferri$^{4}$, A. de Paoli$^{1,2}$
title: '[**Features of the fractional diffusion-advection equation**]{}'
---
Introduction
============
Advection constitutes a transport mechanism of either a substance or a conserved property by a fluid. This process originates in the fluid’s bulk motion. A familiar example is the transport of silt in a river by bulk water flowing downstream. Typical advected thermodynamic quantities are energy and enthalpy. Any substance or conserved, extensive quantity can experience advection by a fluid that contains it. Advection does not include transport by molecular diffusion. In this work we analyze some features of an evolution equation characterizing the combined effects of i) advection and ii) a diffusion process described by fractional partial derivatives. The ensuing fractional advection-diffusion equation has been recently applied to the study of the transport of energetic particles in the interplanetary environment [@SS2011; @TZ2011; @ZP2013; @C95; @LE2014].
There is a considerable body of evidence, from data collected by spacecrafts like [*Ulysses*]{} and [*Voyager 2*]{}, indicating that the transport of energetic particles in the turbulent heliospheric medium is superdiffusive [@PZ2007; @PZ2009]. Considerable effort has been devoted in recent years to the development of superdiffusive models for the transport of electrons and protons in the heliosphere [@SS2011; @TZ2011; @ZP2013]. This kind of transport regime exhibits a power-law growth of the mean square displacement of the diffusing particles, $\langle \Delta x^2 \rangle \propto
t^{\alpha}$, with $\alpha > 1$ (see, for instance, [@SZ97]). The special case $\alpha = 2$ is called ballistic transport. The limit case $\alpha \to 1$ corresponds to normal diffusion, described by the well-known Gaussian propagator. The energetic particles detected by the aforementioned probes are usually associated with violent solar events like solar flares. These particles diffuse in the solar wind, which is a turbulent environment that can be assumed statistically homogeneous at large enough distances from the sun [@PZ2007]. This implies that the propagator $P(x,x',t,t')$, describing the probability of finding at the space-time location $(x,t)$ a particle that has been injected at $(x',t')$, depends solely on the differences $x-x'$ and $t-t'$. In the superdiffusive regime the propagator $P(x,x',t,t')$ is not Gaussian, and exhibits power-law tails. It arises as the solution of a nonlocal diffusive process governed by an integral equation that can be recast under the guise of a diffusion equation where the well-known Laplacian term is replaced by a term involving fractional derivatives [@C95]. Diffusion equations with fractional derivatives have attracted considerable attention recently (see [@LTRLGS2014; @LSSEL2009; @RLEL2007; @stern; @LMASL2005] and references therein) and have many potential applications [@MK2000; @P2013]. In particular, the observed distributions of solar cosmic ray particles are often consistent with power-law tails, suggesting that a superdiffusive process is at work.
A proper understanding of the transport of energetic particles in space is a vital ingredient for the analysis of various important phenomena, such as the propagation of particles from the Sun to our planet or, more generally, the acceleration and transport of cosmic rays. The superdiffusion of particles in interplanetary turbulent environments is often modelled using asymptotic expressions for the pertinent non-Gaussian propagator, which have a limited range of validity. A first step towards a more accurate analytical treatment of this problem was recently provided by Litvinenko and Effenberger (LE) in [@LE2014]. LE considered solutions of a fractional diffusion-advection equation describing the diffusion of particles emitted at a shock front that propagates at a constant upstream speed $V_{sh}$ in the solar wind rest frame. The shock front is assumed to be planar, leading to an effectively one-dimensional problem. Each physical quantity depends only on the time $t$ and on the spatial coordinate $x$ measured along an axis perpendicular to the shock front.
In the present contribution we revisit the fractional diffusion-advection equation and provide [explicit]{}, exact closed analytical solutions, without approximations. We also undertake a numerical analysis that shows that these solutions resemble power-laws.
Formulation of the Problem
==========================
We deal with the equation $$\label{ep2.1}
\frac {\partial f} {\partial t}=\kappa\frac
{{\partial}^{\alpha} f} {\partial |x|^{\alpha}}+a\frac {\partial
f} {\partial x}+\delta(x),$$ where $t>0$ and $f(x,t)$ is the distribution function for solar cosmic-rays transport. Here the fractional spatial derivative is defined as [@LE2014] $$\label{ep2.2}
\frac {{\partial}^{\alpha} f} {\partial
|x|^{\alpha}}= \frac {1} {\pi}\sin\left(\frac {\pi\alpha}
{2}\right)\Gamma(\alpha+1) \int\limits_0^{\infty}\frac
{f(x+\xi)-2f(x)+f(x-\xi)} {{\xi}^{\alpha+1}}\;d\xi.$$ To solve this equation one appeals to the Green function governed by the equation $$\label{ep2.3}
\frac {\partial {\cal G}} {\partial t}=\kappa\frac
{{\partial}^{\alpha} {\cal G}} {\partial
|x|^{\alpha}}+\delta(x)\delta(t).$$ With this Green function, the solution of (\[ep2.1\]) can be expressed as $$\label{ep2.4}
f(x,t)=\int\limits_0^t {\cal G}(x+at^{'},
t^{'})\;dt^{'}.$$ In this work we obtain the solutions of Eqs. (\[ep2.1\]) and (\[ep2.3\]) [**using distributions as main tools [@guelfand1].**]{}
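The definition (\[ep2.2\]) is the Riesz fractional derivative, whose Fourier symbol is $-|k|^{\alpha}$. As a numerical sanity check (our own sketch, not part of the original analysis), one can apply the definition to the plane wave $f(x)=\cos x$ at $x=0$, where the result must equal $-1$; the cutoff at $\xi=200$ and its analytic tail correction are numerical conveniences:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha = 1.5
# Riesz derivative of f(x) = cos(x) evaluated at x = 0:
# f(x + xi) - 2 f(x) + f(x - xi) = 2 cos(xi) - 2 there.
integrand = lambda xi: (2.0 * np.cos(xi) - 2.0) / xi**(alpha + 1.0)

I, _ = quad(integrand, 0.0, 200.0, limit=400)
I += -2.0 / (alpha * 200.0**alpha)   # analytic tail of the -2/xi^(alpha+1) part

deriv = np.sin(np.pi * alpha / 2.0) * gamma(alpha + 1.0) / np.pi * I
print(deriv)   # the Fourier symbol gives -|k|^alpha f(0) = -1 for k = 1
```

The agreement with $-1$ confirms that the prefactor $\pi^{-1}\sin(\pi\alpha/2)\Gamma(\alpha+1)$ is exactly the one that reproduces the symbol $-|k|^{\alpha}$ used below.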
For our task we use, as a first step, the approach of [@LE2014] for the Green function, based on the Fourier transform $$\label{ep2.5}
\hat{{\cal G}}(k,t)=\frac {1} {2\pi}
\int\limits_{-\infty}^{\infty}{\cal G}(x,t) e^{-ikx}\;dx.$$ Transforming (\[ep2.3\]) we obtain for $\hat{{\cal G}}$ $$\label{ep2.6}
\frac {\partial \hat{{\cal G}}} {\partial t}(k,t)=-\kappa|k|^{\alpha}\hat{{\cal
G}}(k,t)+\frac {1} {2\pi}\delta(t),$$ whose solution is $$\label{ep2.7}
\hat{{\cal G}}(k,t)=\frac {H(t)} {2\pi} e^{-\kappa
|k|^{\alpha}t},$$ where $H(t)$ is the Heaviside step function.
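The propagator (\[ep2.7\]) can be inverted numerically by quadrature of the (even) Fourier integral. A minimal sketch, assuming $\kappa=t=1$: for $\alpha=2$ the inversion must reproduce the Gaussian propagator, while for $\alpha=3/2$ it exhibits much heavier tails:

```python
import numpy as np

k = np.linspace(0.0, 40.0, 40001)

def trapezoid(y, x):
    # simple trapezoidal rule (kept explicit for self-containment)
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0)

def G(x, alpha, kappa=1.0, t=1.0):
    # even integrand: G(x,t) = (1/pi) int_0^inf exp(-kappa k^alpha t) cos(kx) dk
    return trapezoid(np.exp(-kappa * k**alpha * t) * np.cos(k * x), k) / np.pi

gauss = lambda x: np.exp(-x**2 / 4.0) / np.sqrt(4.0 * np.pi)  # alpha = 2 propagator

print(G(1.0, 2.0), gauss(1.0))    # the two agree for alpha = 2
print(G(6.0, 1.5), G(6.0, 2.0))   # the alpha = 3/2 tail dominates the Gaussian
```

The heavy tail of the $\alpha=3/2$ propagator is the seed of the power-law behavior analyzed in the Results section.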
Explicit general solution of the equation
=========================================
From (\[ep2.7\]) we have for $\hat{{\cal G}}$ $$\label{ep3.1}
\hat{{\cal G}}(k,t)=\frac {H(t)} {2\pi} e^{-\kappa
|k|^{\alpha}t}= \frac {H(t)} {2\pi}
\sum\limits_{n=0}^{\infty}\frac {(-1)^n{\kappa}^n|k|^{\alpha n} t^n}
{n!},$$ and, invoking the inverse Fourier transform, $${\cal G}(x,t)=\frac {H(t)} {2\pi}
\int\limits_{-\infty}^{\infty}
e^{-\kappa |k|^{\alpha}t}e^{ikx}\;dk=$$ $$\label{ep3.2}
\frac {H(t)} {2\pi} \sum\limits_{n=0}^{\infty}\frac
{(-1)^n{\kappa}^n t^n} {n!} \left[\int\limits_0^{\infty}k^{\alpha
n}e^{ikx}\;dk + \int\limits_0^{\infty}k^{\alpha
n}e^{-ikx}\;dk\right].$$ Fortunately, we can find in the classical book of Guelfand and Chilov [@guelfand1] the results for the two integrals of (\[ep3.2\]). We obtain $$\label{ep3.3}
{\cal G}(x,t)=\frac {H(t)} {2\pi}
\sum\limits_{n=0}^{\infty}\frac {(-1)^n{\kappa}^n t^n} {n!}
\Gamma(\alpha n+1) \left[\frac {e^{i\frac {\pi} {2}(\alpha n+1)}}
{(x+i0)^{\alpha n + 1}} + \frac {e^{-i\frac {\pi} {2}(\alpha
n+1)}} {(x-i0)^{\alpha n + 1}} \right].$$ Using now (\[ep2.4\]) we have for $f$ $$f(x,t)=\int\limits_0^t
{\cal G}(x+at^{'}, t^{'})\;dt^{'},$$ so that one can write $$f(x,t)=\frac {1} {2\pi}
\sum\limits_{n=0}^{\infty}\frac {(-1)^n{\kappa}^n} {n!}
\Gamma(\alpha n+1)\times$$ $$\label{ep3.4}
\int\limits_0^t \left[\frac {e^{i\frac {\pi}
{2}(\alpha n+1)}} {(x+at^{'}+i0)^{\alpha n + 1}} + \frac
{e^{-i\frac {\pi} {2}(\alpha n+1)}} {(x+at^{'}-i0)^{\alpha n + 1}}
\right]t^{'n}\;dt^{'}.$$ According to Eq. (\[a1\]) of the Appendix, where $t>0$, we now obtain for $f$, invoking hypergeometric functions $F(\alpha n+1,n+1;n+2;z)$ and Beta functions ${\cal B}(1,n+1)$, $$f(x,t)=\frac {1} {2\pi}\sum\limits_{n=0}^{\infty}
\frac {(-1)^n{\kappa}^nt^{n+1}} {n!}\Gamma(\alpha n+1)
{\cal B}(1,n+1)\times$$ $$\left[\frac {e^{i\frac {\pi} {2}(\alpha n + 1)}}
{(x+i0)^{\alpha n + 1}}
F\left(\alpha n+1,n+1;n+2;-\frac {at} {x+i0}\right)+\right.$$ $$\label{gral} \left.\frac {e^{-i\frac {\pi} {2}(\alpha n + 1)}}
{(x-i0)^{\alpha n + 1}}
F\left(\alpha n+1,n+1;n+2;-\frac {at} {x-i0}\right)\right].$$ This is the general solution of Eq.(\[ep2.1\]) for the initial condition $f(x,0)=0$.
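The power-law character of these solutions can also be checked directly from (\[ep2.4\]), without the series expansion, by evaluating ${\cal G}$ through numerical quadrature of its Fourier representation. A sketch with the paper's choices $a=1$, $\alpha=3/2$ and an illustrative $\kappa=0.2$ (grids and cutoffs are numerical conveniences):

```python
import numpy as np

alpha, kappa, a = 1.5, 0.2, 1.0
k = np.linspace(0.0, 400.0, 80001)
k_alpha = k**alpha

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0)

def G(x, tp):
    # G(x, t') = (1/pi) int_0^inf exp(-kappa k^alpha t') cos(k x) dk
    return trapezoid(np.exp(-kappa * k_alpha * tp) * np.cos(k * x), k) / np.pi

# f(x,t) = int_0^t G(x + a t', t') dt'   (eq. ep2.4), here for t = 1
tgrid = np.linspace(0.01, 1.0, 100)   # skip t' = 0, where G degenerates to a delta
xs = np.array([5.0, 10.0, 20.0, 40.0])
f = np.array([trapezoid(np.array([G(x + a * tp, tp) for tp in tgrid]), tgrid)
              for x in xs])

slope = np.polyfit(np.log(xs), np.log(f), 1)[0]
print(slope)   # roughly -(1 + alpha): a power-law tail in x
```

The fitted log-log slope close to $-(1+\alpha)$ is the quadrature counterpart of the power-law fits shown in the figures.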
Figure 1: solutions $f(x,t)$ of Eq. (\[gral\]) for different times $t$ ($x$ in arbitrary units). \[Fig1\]
Results
=======
Fig. 1 depicts a typical graph for solutions of Eq. (\[gral\]). We do this for different times $t$. The main result of this contribution is that these curves are very well approximated by power laws. Figs. 2 ($\kappa=0.01$), 3 ($\kappa=0.5$), and 4 ($\kappa=0.2$) show how the above solutions can be adjusted by power laws of the form $1/x^q$: the corresponding lines rotate, for fixed $\alpha=3/2$, in the plane $f(x,t)$ vs. $1/x^q$ when $at$ changes, as the plots clearly illustrate. In all our figures we have taken $a=1$ and $\alpha=3/2$.
Figs. 5, 6, and 7 show how the dependence of $q$ on $\alpha$ can, in turn, be adjusted in simple fashion. In Figs. 5 and 6, straight lines are employed. In Fig. 7 we attempt a spline adjustment. It is evident that a weak diffusion-regime exists. This regime was conjectured in Ref. [@LE2014] by appeal to an approximate theoretical treatment of the fractional diffusion-advection equation and is here amply confirmed by our exact approach.
What about other $\alpha$ values? The situation does not change. Power-law adjustment continues to be possible. Figs. 8 and 9 illustrate this issue.
Conclusions
===========
We have provided an explicit analytical solution for an advection-diffusion equation involving fractional derivatives in the diffusion term. This equation governs the diffusion of particles in the solar wind injected at the front of a shock that travels at a constant upstream speed $V_{sh}$ in the solar wind rest frame. The shock is assumed to have a planar front, leading to a problem with an effectively one-dimensional geometry, where all the relevant quantities depend solely on time and on the coordinate $x$ measured along an axis perpendicular to the front.
We obtained the exact solution of the above mentioned equation in the $x$-configuration space (besides the associated formal solution in the $k$-space related to the previous one via a Fourier transform).
We conclude from our analysis that the solutions of the fractional diffusion-advection equation are essentially power laws, and we have numerically found a “law” for the behavior of the associated power exponent $q$ with varying $\alpha$ via spline interpolation.
[99]{}
T. Sugiyama and D. Shiota, ApJ [**731**]{} (2011) L34.
E.M. Trotta and G. Zimbardo, A&A [**530**]{} (2011) A130.
G. Zimbardo and S. Perri, ApJ [**778**]{} (2013) 35
A.I. Saichev and G.M. Zaslavsky, Chaos [**7**]{} (1997) 753.
K.V. Chukbar, Soviet Journal of Experimental and Theoretical Physics [**81**]{} (1995) 1025.
Y.E. Litvinenko and F. Effenberger, ApJ. [**796**]{}, 125 (2014).
S. Perri and G. Zimbardo, ApJ [**671**]{} (2007) L000.
S. Perri and G. Zimbardo, ApJ [**693**]{} (2009) L118.
E.K. Lenzi, A.A. Tateishi, H.V. Ribeiro, M.K. Lenzi, G. Gonçalves, and L.R. da Silva, Journ. Stat. Mech.: Theor. Exp. [**8**]{} (2014) 08019.
E.K. Lenzi, L.R. da Silva, A.T. Silva, L.R. Evangelista, and M.K. Lenzi, Physica A [**388**]{} (2009) 806.
R. Rossato, M.K. Lenzi, L.R. Evangelista, and E.K. Lenzi, Phys. Rev. E [**76**]{} (2007) 032102.
R. Stern, F. Effenberger, H. Fichtner, and T Schäfer, Fract. Calc. Appl. Analys. [**17**]{} (2014) 171.
E.K. Lenzi, R.S. Mendes, J.S. Andrade Jr., L.R. da Silva, and L.S. Lucena, Phys. Rev. E [**71**]{} (2005) 052101.
R. Metzler and J. Klafter, Physics Reports [**339**]{} (2000) 1.
D. Perrone et al., Space Sci. Rev. [**178**]{} (2013) 233.
I. M. Guelfand and G. E. Chilov: “Les Distributions” [**V1**]{}, Dunod (1962).
I. S. Gradshteyn and I. M. Ryzhik: “Table of Integrals, Series and Products”, Academic Press (1965), [**3.197**]{}, 8, page 287.
Appendix: Some properties of the Hypergeometric Function {#appendixsome-properties-of-hypergeometric-function .unnumbered}
===================================================
Using data from [@gra2] we have $$\int\limits_0^t\frac {t^{'n}}
{(x+at^{'}\pm i\epsilon)^{\alpha n +1}}\;dt^{'}= \frac {t^{n+1}}
{(x\pm i\epsilon)^{\alpha n+1}}B(1,n+1)\times$$ $$\label{a3} F\left(\alpha n +1, n+1,n+2;-\frac {at} {x\pm
i\epsilon}\right).$$ Then, we have $$\lim_{\epsilon\rightarrow 0^+}\int\limits_0^t\frac {t^{'n}}
{(x+at^{'}\pm i\epsilon)^{\alpha n +1}}\;dt^{'}=
\lim_{\epsilon\rightarrow 0^+} \frac {t^{n+1}} {(x\pm
i\epsilon)^{\alpha n+1}}B(1,n+1)\times$$ $$\label{a2} \lim_{\epsilon\rightarrow 0^+} F\left(\alpha n +1,
n+1,n+2;-\frac {at} {x\pm i\epsilon}\right).$$ Thus we obtain finally $$\int\limits_0^t\frac {t^{'n}} {(x+at^{'}\pm i0)^{\alpha n +1}}\;dt^{'}=
\frac {t^{n+1}} {(x\pm i0)^{\alpha n+1}}B(1,n+1)\times$$ $$\label{a1} F\left(\alpha n +1, n+1,n+2;-\frac {at} {x\pm
i0}\right).$$
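For real $x>0$ and $|at/x|<1$ the distributional boundary values in (\[a1\]) reduce to the ordinary integral, which can be checked against scipy's hypergeometric function (a verification sketch with illustrative parameter values):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, hyp2f1

def lhs(n, alpha, x, a, t):
    # direct quadrature of int_0^t t'^n / (x + a t')^(alpha n + 1) dt'
    val, _ = quad(lambda tp: tp**n / (x + a * tp)**(alpha * n + 1.0), 0.0, t)
    return val

def rhs(n, alpha, x, a, t):
    # closed form: t^(n+1)/x^(alpha n + 1) B(1, n+1) F(alpha n+1, n+1; n+2; -a t/x)
    return (t**(n + 1) / x**(alpha * n + 1.0) * beta(1.0, n + 1.0)
            * hyp2f1(alpha * n + 1.0, n + 1.0, n + 2.0, -a * t / x))

for n in (1, 2, 3):
    print(lhs(n, 1.5, 3.0, 1.0, 1.0), rhs(n, 1.5, 3.0, 1.0, 1.0))
```

For $n=2$, $\alpha=3/2$, $x=3$, $a=t=1$ both sides are exactly $1/576$, as a hand computation confirms.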
---
abstract: 'KIC 8262223 is an eclipsing binary with a short orbital period ($P=1.61$ d). The [*Kepler*]{} light curves are of Algol-type and display deep and partial eclipses, ellipsoidal variations, and pulsations of $\delta$ Scuti type. We analyzed the [*Kepler*]{} photometric data, complemented by phase-resolved spectra from the R-C Spectrograph on the 4-meter Mayall telescope at Kitt Peak National Observatory and determined the fundamental parameters of this system. The low mass and oversized secondary ($M_2=0.20M_{\odot}$, $R_2=1.31R_{\odot}$) is the remnant of the donor star that transferred most of its mass to the gainer, and now the primary star. The current primary star is thus not a normal $\delta$ Scuti star but the result of mass accretion from a lower mass progenitor. We discuss the possible evolutionary history and demonstrate with the MESA evolution code that this system and several other systems discussed in prior literature can be understood as the result of non-conservative binary evolution for the formation of EL CVn type binaries. The pulsations of the primary star can be explained as radial and non-radial pressure modes. The equilibrium models from single star evolutionary tracks can match the observed mass and radius ($M_1=1.94M_{\odot}$, $R_1=1.67R_{\odot}$) but the predicted unstable modes associated with these models differ somewhat from those observed. We discuss the need for better theoretical understanding of such post-mass transfer $\delta$ Scuti pulsators.'
author:
- 'Zhao Guo, Douglas R. Gies, Rachel A. Matson'
- Antonio García Hernández
- 'Zhanwen Han, Xuefei Chen'
title: 'KIC 8262223: A Post-Mass Transfer Eclipsing Binary Consisting of a Delta Scuti Pulsator and a Helium White Dwarf Precursor'
---
Introduction
============
The $\delta$ Scuti variables are A- or F-type main-sequence (MS) and post-MS stars. They are interesting asteroseismic targets as they are numerous and often show multi-periodic pressure mode pulsations. However, it is hard to determine their fundamental parameters, and their ubiquitous fast rotation makes this task even more difficult. Those eclipsing binaries (EBs) with a component that is a $\delta$ Scuti pulsator offer us the means to determine accurate masses and radii for these variables. However, systems with short orbital periods are more likely to be observed and these are also more likely to experience binary mass transfer. Many $\delta$ Scuti pulsating stars in Algol-type (oEA) systems were discovered from ground-based photometry (Mkrtichian 2002, 2003; Soydugan et al. 2006). The daily gaps in the time series limit the number of detected pulsation frequencies to only a few. Space missions like [*Kepler*]{} and [*CoRoT*]{} have provided continuous and accurate light curves of hundreds of $\delta$ Scuti pulsating EBs, and on average, tens of frequencies have been detected for each system. Most of the systems that have been studied in detail were considered as single stars and understood from the theory of single star evolution (Hambleton et al. 2013; Maceroni et al. 2014; da Silva et al. 2014; Guo et al. 2016). In this paper, we focus on $\delta$ Scuti EBs which have undergone mass transfer through the course of binary evolution. KIC 10661783 (Southworth et al. 2011; Lehmann et al. 2013) belongs to this class, and this binary consists of an oversized low mass secondary ($M=0.2 M_{\odot}, R=1.13R_{\odot}$) and a $\delta$ Scuti pulsating primary.
KIC 8262223 ($K_p=12.146$, $\alpha_{2000}$=$20$:$01$:$19.788$, $\delta_{2000}$=$+44$:$08$:$38.90$) is included in the Kepler Eclipsing Binary Catalog (Prša et al. 2011; Slawson et al. 2011). It is described as a semi-detached eclipsing binary with an orbital period of $1.612$ days, a near circular orbit, and a high inclination ($\sin i =0.97$). The eclipse timing analysis of this system was performed by Gies et al. (2012, 2015) and Conroy et al. (2014). The flat $O-C$ diagram of the timings indicates that this circular binary is not likely to have a nearby third companion. Gies et al. (2012) also noticed a pulsation signal in near resonance with the orbit in the light curve. Armstrong et al. (2014) estimated the effective temperatures of the primary and secondary as $T_{\rm eff1}=9325\pm 428$ K and $T_{\rm eff2}=6791\pm 642$ K, respectively, based on their fit of the binary spectral energy distribution (SED).
Here we analyze the photometric and spectroscopic data of KIC 8262223 (Section 2, 3, 4), study its pulsations (Section 5), and show that this binary and other similar systems like the aforementioned KIC 10661783 are the products of binary evolution with non-conservative mass-transfer (Section 6). They belong to the class of EL CVn stars which consist of an A- or F-type dwarf primary and a low mass He white dwarf precursor.
[*Kepler*]{} Photometry and Ground-based Spectroscopy
======================================================
Simple Aperture Photometry (SAP) data (Data Release 23) from the [*Kepler*]{} satellite were retrieved from the MAST Archive. There are 18 quarters (Q0-17) of long cadence data and 1 quarter (Q4) of short cadence data. The long and short cadence data have time sampling rates of $29.4$ minutes and $58.8$ seconds, respectively. Please refer to Caldwell et al. (2010) for more information on the [*Kepler*]{} data. The aperture contamination factors ($k$) reported in the Kepler Input Catalog are lower than $0.7\%$ in all quarters except for Q12 ($1.3\%$). The light curve shows deep eclipses, ellipsoidal variations, and coherent pulsations at a frequency of about $65$ d$^{-1}$, as is apparent in the short cadence data. After removing the long-term trends and outliers (see Section 4), we show a sample of the short cadence light curve in Figure 1. As the main pulsational frequency is well above the Nyquist frequency ($\approx 24$ d$^{-1}$) of the long cadence data, these pulsations are essentially all cancelled out in the phase-folded multiple-quarter long cadence data used for binary light curve modeling. The light curve residuals of the short cadence data are used for pulsational analysis.
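The quoted Nyquist frequencies follow directly from the two sampling intervals via $f_{\rm Ny}=1/(2\Delta t)$; a quick check:

```python
SEC_PER_DAY = 86400.0

def nyquist_per_day(cadence_seconds):
    # f_Ny = 1 / (2 * sampling interval), converted to cycles per day
    return SEC_PER_DAY / (2.0 * cadence_seconds)

print(nyquist_per_day(29.4 * 60.0))  # long cadence  -> about 24.5 d^-1
print(nyquist_per_day(58.8))         # short cadence -> about 734.7 d^-1
```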
We also obtained 13 ground-based spectra of moderate resolving power ($R \approx 6000$) from the R-C Spectrograph on the Kitt Peak National Observatory (KPNO) 4-meter Mayall telescope from $2010$ to $2013$. The spectra cover the wavelength range from $3930$ Å to $4600$ Å, with typical signal-to-noise ratios of $70-120$ (Figure 2). More information about the instrument and spectra can be found in Matson et al. (2016).
Spectroscopic Orbit and Atmospheric Parameters
==============================================
The radial velocities (RVs) were determined following the same cross-correlation technique described by Matson et al. (2016). Two templates from atmospheric model grids UVBLUE (Rodríguez-Merino et al. 2005) were cross-correlated with the observed spectra to obtain the radial velocities presented in Table 1. The derived RVs were fitted to get the orbital parameters ($K_1,K_2,\gamma_1,\gamma_2,T_0$) with the Levenberg-Marquardt algorithm, where $K_1,K_2$ and $\gamma_1,\gamma_2$ are semi-amplitude velocities and system velocities of the primary and secondary star, respectively; $T_0$ is time epoch of the primary minimum (Figure 3). We assumed a circular orbital solution ($e=0$) and the orbital period was fixed to the value from eclipse timing measurements in Gies et al. (2015) as $P= 1.61301476$ days. We then used the tomography algorithm (Bagnuolo et al. 1994) to reconstruct the individual component spectra of each star. These spectra were compared with a grid of UVBLUE synthetic spectra and the best atmospheric parameters ($T_{\rm eff}$, $\log g$, \[Fe/H\] and $v\sin i$) were determined from a grid search followed by a local optimization with the Levenberg-Marquardt algorithm. To break the degeneracy in fitting five atmospheric parameters, the $v\sin i$ values were initially estimated from the metal lines in five different spectral sections and the $\log g$ values were fixed to the result from the binary modeling (see next section). Note that the uncertainties were estimated from the covariance matrices, and can be somewhat underestimated. The procedures mentioned above are iterative, and in each step the templates and RVs were updated from previous determinations. We adopted the final values when the parameters converged.
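For a circular orbit the RV fit can be linearized by writing $\gamma+K\sin[2\pi(t-T_0)/P]=\gamma+A\sin(2\pi t/P)+B\cos(2\pi t/P)$ with $K=\sqrt{A^2+B^2}$, which avoids nonlinear optimization entirely. A toy sketch with synthetic velocities (epochs, noise level, and orbital amplitudes are illustrative, not the measured ones):

```python
import numpy as np

P = 1.61301476                        # orbital period (d), fixed as in the text
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 40.0, 40))

K_true, gamma_true, T0_true = 26.0, -15.0, 0.4   # illustrative values
rv = (gamma_true + K_true * np.sin(2 * np.pi * (t - T0_true) / P)
      + rng.normal(0.0, 0.5, t.size))

# gamma + A sin + B cos is linear in (gamma, A, B): solve by least squares
M = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t / P),
                     np.cos(2 * np.pi * t / P)])
gamma_fit, A, B = np.linalg.lstsq(M, rv, rcond=None)[0]
K_fit = np.hypot(A, B)
print(K_fit, gamma_fit)
```

The epoch $T_0$ follows from the phase $\arctan$ of $(A,B)$; a Levenberg-Marquardt refinement, as used in the text, can then start from this linear solution.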
Spectral disentangling is another way of deriving orbital parameters. For spectroscopic binaries, we observe the linear combination of two component spectra with different Doppler shifts. Given the radial velocities of the two stars and their mean flux ratio, we can form a coefficient matrix $A$. Then we can separate the component spectra by solving the linear inverse problem $y=Ax$, where $y$ and $x$ are vectors formed by concatenating the observed composite spectra and the individual component spectra (see Hensberge et al. 2008). If the RVs used in the coefficient matrix are calculated from orbital parameters, we can find the optimized orbital parameters by minimizing the $\chi^2$ differences between the observed and synthetic composite spectra, $|y-Ax|^2$. We implemented this method with the FDBinary code (Ilijic et al. 2004). Note that the code uses a downhill simplex optimizer and regrettably does not provide uncertainty estimates.
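The linear inverse problem $y=Ax$ can be illustrated with a toy disentangling of two synthetic spectra; here we assume integer-pixel Doppler shifts with periodic boundaries (so each shift is a rolled identity matrix), a simplification of what FDBinary actually does. The separation is only determined up to a constant trade-off between the two components:

```python
import numpy as np

npix = 64
pix = np.arange(npix)
s1 = 1.0 - 0.4 * np.exp(-(pix - 30.0)**2 / 8.0)    # toy primary spectrum
s2 = 1.0 - 0.2 * np.exp(-(pix - 20.0)**2 / 18.0)   # toy secondary spectrum
w1, w2 = 0.83, 0.17                                # flux weights (cf. 82.6%/17.4%)
shifts1 = [-3, -2, 0, 1, 2, 3]                     # integer-pixel Doppler shifts
shifts2 = [7, 5, 0, -2, -5, -8]

I = np.eye(npix)
blocks, obs = [], []
for k1, k2 in zip(shifts1, shifts2):
    A1 = w1 * np.roll(I, k1, axis=0)   # (A1 @ s) == w1 * np.roll(s, k1)
    A2 = w2 * np.roll(I, k2, axis=0)
    blocks.append(np.hstack([A1, A2]))
    obs.append(A1 @ s1 + A2 @ s2)      # composite spectrum at this epoch

A = np.vstack(blocks)
y = np.concatenate(obs)
x = np.linalg.lstsq(A, y, rcond=None)[0]
r1, r2 = x[:npix], x[npix:]
# compare mean-subtracted spectra: the constant mode is the null space of A
print(np.max(np.abs((r1 - r1.mean()) - (s1 - s1.mean()))))
```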
The final orbital parameters are summarized in Table 2. The orbital parameters from the two techniques agree very well. The results show that the system has a very small mass ratio ($q=0.104$), and the systemic velocities from fitting RVs of primary and secondary ($\gamma_1,\gamma_2$) agree within uncertainties. Table 3 contains the optimal atmospheric parameters. This binary consists of a hot A-type primary ($T_{\rm eff1}=9128$ K) and a much cooler secondary ($T_{\rm eff2}=7119$ K). Both stars have metallicities slightly lower than solar. The projected rotational velocity of the primary star ($v\sin i=37\pm 13$ km s$^{-1}$) is a little lower than the synchronized value at $50$ km s$^{-1}$ (see Table 4 in next section). The $v\sin i$ of the secondary matches the synchronized value very well. Note that each pixel in our spectra is equal to $26.25$ km s$^{-1}$ in velocity space, and we cannot reliably measure small rotational velocities ($v\sin i < 30$ km s$^{-1}$). In Figure 4, we show the reconstructed component spectra of the two stars and the best matching model spectra. The mean flux ratio ($F_2/F_1$) in the observed spectral range ($\approx 4225$ Å) is $0.21\pm 0.02$ which amounts to percentage contributions of $82.6\%$ and $17.4\%$ for the primary and secondary, respectively.
ELC Binary Models
=================
We used the [*Kepler*]{} long cadence data to perform our light curve modeling. The preparation of the raw data, which was detailed in Guo et al. (2016), includes de-trending and outlier removal. We divided the $18$ quarters into eight sections ($Q0-Q2$, $Q3-4$, $Q5-6$, $Q7-8$, $Q9-10$, $Q11-12$, $Q13-14$, $Q15-17$) and fitted the light curve of each individually. The standard deviations of the best fitting parameters from these eight datasets are adopted as the final uncertainties.
We used the Eclipsing Light Curve (ELC) code by Jerome Orosz (Orosz & Hauschildt 2000) to model the binary light curve. The code implements the Roche model and synthesizes the binary light curve and radial velocity curve by integrating the specific intensity and flux-weighted RVs of each segment on the stellar surface. In ELC, the effect of aperture contamination factor ($k$) is accounted for by adding to the median value of the model light curve $y_{med}$ an offset $ky_{med}/(1- k)$.
We optimized the following fitting parameters: orbital inclination ($i$), temperature ratio ($temprat=T_{\rm eff2}/T_{\rm eff1}$), filling factors ($f_1, f_2$) and time of secondary minimum ($T_0$) by implementing the genetic algorithm PIKAIA (Charbonneau 1995). Note that the Roche lobe filling factor ($f$) is defined as $x_{\rm point}/x_{L1}$, where $x_{\rm point}$ is the radius of the star toward the inner Lagrangian point (L1), and $x_{L1}$ is the distance to L1 from the center of the star. It is the counterpart of the Roche potential $\Omega$ used in the Wilson-Devinney code (Wilson & Devinney 1971). We ran the genetic optimizer for 400 generations with 100 members in each generation. We set broad search ranges for these parameters: $i\in[50,90]$(degrees), $temprat\in[0.6,0.9]$, $f_1,f_2\in[0.1,0.8]$, $T_0\in[55430.9,55432.5]$(BJD-2,400,000). The orbital period was fixed to $1.61301476$ days as found by Gies et al. (2015). The effective temperature of the primary was fixed to the value from spectroscopy ($9128$ K) as it is well known that the light curve from a single passband is only sensitive to the temperature ratio (if both primary and secondary eclipses occur). We assumed the binary has a circular orbit and the two components have synchronized rotation as indicated from spectroscopy. The parameters mass ratio ($q=M_2/M_1$), velocity semi-amplitude ($K_1$), and systemic velocity ($\gamma$) were fixed to values from spectroscopic orbital solutions as they have little effect on the light curve (except for $q$, which can have some influence on the ellipsoidal variations). The gravity brightening coefficients ($\beta$) were fixed to the canonical values of $0.25$ for radiative atmospheres and $0.08$ for convective atmospheres. Similarly, the surface bolometric albedos ($l_1,l_2$) were set to 1.0 and 0.5 for radiative and convective atmospheres, respectively. However, the light curve residuals from the parameter settings above still show obvious variations.
We found that by setting $l_2$ as a free parameter the light curve fit is much better. The optimal value of $l_2$ is $0.22$, which is much lower than the canonical value of $0.5$. Note Matson et al. (2016) also found a lower albedo ($0.33$) for the F stars in KIC 5738698. If we let the albedo of the primary star ($l_1$) vary, the best value is very close to $1.0$, and the light curve fit is not improved.
Doppler boosting or beaming is a relativistic effect (Loeb & Gaudi 2003), in which the observer will receive a higher photon rate from a star moving towards him or her, and vice versa. The fractional change of photon rate is $\Delta n_{\lambda}/n_{\lambda}=f_{\rm DB}v_{\lambda}/c$, where $v_{\lambda}$ is radial velocity of the star and $c$ is speed of light. Thus, the key parameters are the mass and flux ratios. If the two stars have similar temperatures, then the beaming effect will be canceled out if they have a mass ratio of $1$. For systems with a very small mass ratio, the Doppler beaming effect is expected to play an important role. A measurement of the beaming amplitude from the light curve can provide an independent estimation of the orbital parameters. This was performed in many beaming binaries such as KOI-74, KOI-81 (van Kerkwijk et al. 2010) and KIC 11558725 (Telting et al. 2012). In the ELC code, the Doppler beaming effect is accounted for following the treatment in van Kerkwijk et al. (2010). The beaming parameter $f_{\rm DB}$ is estimated as the wavelength average of $xe^x/(e^x-1)$ in the [*Kepler*]{} passband, where $x=hc/(\lambda kT)$. The estimated values are $2.76$ and $3.48$ for the primary and secondary, respectively, and are fixed in the fitting process. In Figure 5, we show the best light curve solution and corresponding residuals with (red) and without (green) Doppler beaming. It can be seen in the two middle panels that the residuals are more symmetric around the zero horizontal line if the beaming effect is included. We also show the ELC model of the Doppler beaming signal in the bottom panel, and the amplitude of the beaming effect is about $0.00033$ magnitude.
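The beaming parameter can be reproduced from the stated formula; a monochromatic evaluation at an assumed representative wavelength of 600 nm gives values close to (though slightly above) the passband-averaged $2.76$ and $3.48$ quoted above:

```python
import numpy as np

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def beaming_factor(T, lam=600e-9):
    # monochromatic beaming factor f_DB = x e^x / (e^x - 1), x = hc/(lambda k T)
    x = h * c / (lam * kB * T)
    return x * np.exp(x) / (np.exp(x) - 1.0)

print(beaming_factor(9128.0))  # primary
print(beaming_factor(6849.0))  # secondary
```

The residual offset from the quoted values reflects the average over the [*Kepler*]{} passband, which this single-wavelength sketch does not perform.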
We also fit the light curves and RVs simultaneously (LC+RV) with fitting parameters $(T_0,i,f_1,f_2,temprat, q,K_1,\gamma)$. Due to the sharp difference between the data quality, we have to give more weight to the RVs. We scaled the errors of light curves so that the $\chi^2_{min} \sim \nu$, where $\nu$ is the degree of freedom. The $1\sigma$ uncertainties were then derived from changing the parameters so that the $\chi^2$ increase by $1.0$ from $\chi^2_{min}$.
In Table 4, we list the final model parameters of KIC 8262223. The parameters of the primary are typical for a mid-A type ZAMS star ($M_1=1.94M_{\odot}$, $R_1=1.67R_{\odot}$, $T_{\rm eff1}=9128$ K) but somewhat over-luminous. The secondary has a very low mass ($M_2=0.20M_{\odot}$) and very discrepant radius ($R_2=1.31R_{\odot}$) and effective temperature ($T_{\rm eff2}=6849$ K) compared to main sequence stars of the same mass. This suggests that this system has gone through binary evolution with mass transfer. The implications and possible evolutionary scenarios are discussed in section 6. The model parameters from fitting light curves and RVs simultaneously (LC+RV) are almost the same as the LC-only values.
The optimal effective temperature ratio ($T_{\rm eff2}/T_{\rm eff1}$) from fitting the light curve is $0.75$, and this gives $T_{\rm eff2}=6849$ K which is $270$ K ($1.8\sigma$) cooler than that from spectroscopy ($T_{\rm eff2}=7119$ K). This discrepancy can be explained by our adopted lower albedo $l_2=0.22$. There is a correlation between $T_{\rm eff2}$ and bolometric albedo $l_2$. It is known that the bolometric albedo is difficult to pinpoint and is usually treated as a free parameter. Sometimes even values as high as $2.46$ are used (e.g., star A in KIC 10661783; Lehmann et al. 2013). Note that the effective temperature of KIC 3858884 star B from spectroscopy by Maceroni et al. (2014) is also $\approx 300$ K different from that from the light curve solution. Thus, we think this minor discrepancy is not a problem with our analysis.
Pulsational Characteristics
===========================
We only use the short cadence data to study the pulsations of this system. We calculated the Fourier spectrum of the light curve residuals with the [*Period 04*]{} package (Lenz & Breger 2005) with all eclipses removed. The calculation was performed to the short cadence Nyquist frequency ($\approx 734$ d$^{-1}$) with the fitting formula $Z+\sum_i A_i \sin (2\pi \Omega_i t+2\pi \Phi_i)$, where $Z, A_i,\Omega_i, \Phi_i$ are the zero-point shift, pulsation amplitudes, linear frequencies and phases, respectively, and time $t= $ BJD $ - \ 2,400,000$. No significant peaks were found beyond $70$ d$^{-1}$. All frequencies with $S/N>4$ are reported in Table 5 and the Fourier spectrum is shown in Figure 6. The uncertainties were calculated following Kallinger et al. (2008).
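The frequency extraction can be sketched as iterative prewhitening: locate the highest peak of the amplitude spectrum, fit and subtract a sinusoid at that frequency, and repeat. A toy version with two synthetic modes (all numbers illustrative, not the Table 5 values):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.0, 6.0, 0.005)            # 6 d of evenly sampled photometry
y = (0.8 * np.sin(2 * np.pi * (61.31 * t + 0.20))
     + 0.4 * np.sin(2 * np.pi * (55.07 * t + 0.70))
     + rng.normal(0.0, 0.1, t.size))

freqs = np.arange(40.0, 70.0, 0.01)       # search grid, d^-1

def amplitude_spectrum(resid):
    ph = np.exp(-2j * np.pi * np.outer(freqs, t))
    return 2.0 * np.abs(ph @ resid) / t.size

found = []
resid = y.copy()
for _ in range(2):
    f0 = freqs[np.argmax(amplitude_spectrum(resid))]
    # least-squares sine fit at f0: linear in the two quadrature amplitudes
    M = np.column_stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
    coef = np.linalg.lstsq(M, resid, rcond=None)[0]
    resid = resid - M @ coef
    found.append((f0, float(np.hypot(*coef))))

print(found)   # recovers (frequency, amplitude) of both injected modes
```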
Almost all the pulsations lie in the range $50-65$ d$^{-1}$. There are some low amplitude peaks at $100-130$ d$^{-1}$ (not shown in Figure 6), exactly twice the main pulsation range. We interpret these peaks as harmonics of the main pulsations rather than intrinsic high-frequency modes, which indicates that the pulsations are to some extent non-sinusoidal. The primary star contributes much more light (83% in the wavelength range of our spectra), and its fundamental parameters ($M_1=1.94M_{\odot}$, $R_1=1.67R_{\odot}$, $T_{\rm eff1}=9128$ K) agree with those of a typical $\delta$ Scuti pulsator. It is thus very likely that the pulsations stem from the primary. The pulsations at $50-65$ d$^{-1}$ are well explained as high order ($n_p \approx 6,7$) radial and non-radial p-modes.
In the low frequency region there are two peaks, $f_{20}=1.2397$ d$^{-1}$ and $f_{58}=0.79928$ d$^{-1}$. Within the uncertainties, $f_{20}$ equals twice the orbital frequency ($2f_{\rm orb}=2\times 0.61996$ d$^{-1}$) and is likely the result of imperfect light curve fitting (e.g., of the ellipsoidal variations). $f_{58}$ is probably an artifact of imperfect data reduction.
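A minimal check of the harmonic identification above (frequencies and uncertainties copied from the text and Table 5; the $3\sigma$ tolerance is our own assumption, not the authors' stated criterion):

```python
f_orb = 0.61996                      # d^-1, orbital frequency (from the text)
peaks = {"f20": (1.23967, 0.00079),  # frequency, 1-sigma error (d^-1)
         "f58": (0.79928, 0.00277)}

def orbital_harmonic_match(f, sigma, f_orb=f_orb, n_sigma=3.0):
    """Return the nearest integer harmonic of f_orb and whether f is
    consistent with it within n_sigma times its uncertainty."""
    n = round(f / f_orb)
    return n, abs(f - n * f_orb) <= n_sigma * sigma

for name, (f, sig) in peaks.items():
    n, ok = orbital_harmonic_match(f, sig)
    print(f"{name}: nearest {n}*f_orb, "
          f"{'consistent' if ok else 'inconsistent'}")
```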
As the star is pulsating at relatively high radial orders, which are closer to the asymptotic regime, we can expect to find frequency regularities similar to those observed in solar-like oscillators. Garc[í]{}a Hern[á]{}ndez et al. (2015) found the signature of such regularities in six $\delta$ Scuti stars in eclipsing binaries by analyzing the Fourier Transform (FT) of the p-mode frequencies. These frequency patterns are close to the large frequency separation, which is related to the mean stellar density. We applied the same FT technique to the frequencies in Table 5; the resulting spectrum is presented in Figure 7. The periodicity at $39.9\,\mu$Hz (3.45 d$^{-1}$) corresponds to half of the large frequency separation, $0.5 \Delta \nu$. From the mean density of the primary, we can estimate the expected large separation ($\Delta \nu$) using the linear relation between $\log \Delta \nu$ and $\log \rho$ (Su[á]{}rez et al. 2014; Garc[í]{}a Hern[á]{}ndez et al. 2015). The expected $\Delta \nu$ is $70.7\,\mu$Hz (6.1 d$^{-1}$), similar to but smaller than the observed value $\Delta \nu_{\rm obs}=2\times 39.9=79.8\,\mu$Hz (6.89 d$^{-1}$). Papar[ó]{} et al. (2016a, b) found signatures of the large frequency separation in $90$ $\delta$ Scuti stars observed by the [*CoRoT*]{} satellite; in addition to showing regularities at $\Delta\nu$, some of them show patterns that approximately agree with $\Delta\nu\pm f_{\rm rot}$ or $\Delta\nu\pm2 f_{\rm rot}$. If we adopt the synchronous rotational frequency $f_{\rm rot}=7.175\,\mu$Hz (0.62 d$^{-1}$), the corresponding rotational splittings for high order p-modes are $m(1-C_{nl}) f_{\rm rot}\approx m f_{\rm rot}$ (Aerts et al. 2010), with $m=\pm1$ for $l=1$ modes and $m=\pm1, \pm2$ for $l=2$ modes. We thus conclude that the observed pattern $\Delta\nu_{\rm obs}$ agrees with the theoretical large frequency separation once rotational effects are taken into account.
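The FT-of-frequencies technique and the density scaling can be sketched as follows. Both helpers are our own illustrations: the paper's quoted $70.7\,\mu$Hz comes from the $\delta$ Scuti relation of Su[á]{}rez et al. (2014), which sits below the naive solar-scaled $\sqrt{\bar\rho}$ estimate implemented here.

```python
import numpy as np

def ft_of_frequencies(mode_freqs, trial_spacings):
    """|FT| of a Dirac comb placed at the extracted mode frequencies,
    evaluated at 'times' 1/spacing; peaks flag periodic spacings such as
    Delta_nu (or Delta_nu/2). Units of freqs and spacings must match."""
    mode_freqs = np.asarray(mode_freqs, dtype=float)
    tau = 1.0 / np.asarray(trial_spacings, dtype=float)
    return np.abs(np.exp(2j * np.pi * np.outer(tau, mode_freqs)).sum(axis=1))

def solar_scaled_dnu_uHz(m_msun, r_rsun, dnu_sun=134.8):
    """Naive Delta_nu ~ sqrt(mean density) solar scaling, for comparison
    only; the text's 70.7 uHz uses the Suarez et al. (2014) relation."""
    return dnu_sun * np.sqrt(m_msun / r_rsun**3)
```

Applied to a synthetic comb of p modes, `ft_of_frequencies` recovers the input spacing; for the real Table 5 list, the highest peaks appear at $0.5\Delta\nu$ and at the rotational splitting.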
Indeed, the highest peak in the Fourier spectrum in Figure 7 is at about $7.07\,\mu$Hz (0.61 d$^{-1}$), only slightly smaller than the orbital frequency $f_{\rm orb}=7.175\,\mu$Hz (0.62 d$^{-1}$), so this regularity is likely the result of rotational splitting.
Tidally excited g-mode oscillations have been found in many eccentric binaries (Welsh et al. 2011; Hambleton et al. 2016; Guo et al. 2017). For KIC 8262223, with its synchronized circular orbit, this dynamical-tide effect is not expected. Note that tidal oscillations can also induce frequency splitting of p-modes at integer multiples of the orbital frequency (e.g., the eccentric binary KIC 4544587 in Hambleton et al. 2013) through mode coupling of self-excited p-modes and tidally induced g-modes.
To check whether the observed pulsation range can be explained by non-adiabatic theory, we modeled the evolution of single non-rotating stars with MESA (Paxton et al. 2011, 2013) and calculated their pulsation frequencies in the range $20-70$ d$^{-1}$ with GYRE (Townsend & Teitler 2013). We set the mixing length parameter $\alpha_{\rm MLT}$ to $1.8$ and used the OPAL opacity tables (Iglesias & Rogers 1996); the solar mixture of Grevesse & Sauval (1998) was adopted for the assumed solar composition. The results are presented in Figure 8. The pulsation modes ($l=0, 1, 2$) of the equilibrium model with $M=1.94M_{\odot}$, $R=1.67R_{\odot}$, $Y=0.28$, and $Z=0.02$ are unstable ($\eta > 0$, where $\eta$ is the normalized growth rate defined in Stellingwerf 1978) in the range $60-67$ d$^{-1}$, which agrees approximately with the observed range $50-65$ d$^{-1}$, although the theoretical unstable range is much narrower than observed. We must be cautious in interpreting this analysis, which is based on single-star evolution: the interior of the $\delta$ Scuti type primary may have been altered by the past mass transfer, changing its pulsational properties. Our preliminary analysis of such post-mass transfer $\delta$ Scuti stars suggests that they tend to be hotter and to pulsate over a broader range and at higher frequencies than their single-evolution counterparts of the same mass and radius. The rejuvenation of $\delta$ Scuti stars by binary evolution is thus a possible explanation for the overabundance of high-frequency $\delta$ Scuti pulsators (Balona et al. 2015). This will be presented in a separate paper (Z. Guo in preparation).
Evolution
=========
The primary star of KIC 8262223 appears to be a normal A-type dwarf near the zero age main sequence (ZAMS), though slightly over-luminous, while the low-mass secondary is noticeably oversized and over-luminous. The classical scenario for the formation of this type of cool Algol system involves mass transfer (probably case B, i.e., the donor fills its Roche lobe while evolving through the Hertzsprung gap; Paczynski 1971) from the originally more massive primary (donor) to the originally less massive secondary, leading to a mass ratio reversal.
KIC 8262223 is likely to evolve into a typical EL CVn system, which consists of a normal A- or F-type dwarf and a low mass ($\approx 0.2 M_{\odot}$) helium white dwarf precursor (pre-He WD). Maxted et al. (2014) presented 17 EL CVn systems discovered by the WASP survey. KIC 8262223 closely resembles the cool Algol system KIC 10661783 described by Southworth et al. (2011) and Lehmann et al. (2013). The latter authors also discussed several similar systems, such as AS Eri (Mkrtichian et al. 2004) and V228 (Kaluzny et al. 2007). Sarna et al. (2008) found that a system with similar initial masses and a slightly longer period ($M_{10}=0.88M_{\odot}, M_{20}=0.85M_{\odot}, P=1.35$ d) can evolve to the current state of V228 ($M_{2}=0.20M_{\odot}, M_{1}=1.51M_{\odot}, P=1.15$ d) through non-conservative case B mass transfer (see also Stepien et al. 2016). Eggleton & Kiseleva-Eggleton (2002) studied the binary evolution of cool Algols including AS Eri. For ease of comparison, we list the parameters of the four systems KIC 8262223, KIC 10661783, AS Eri, and V228 in Table 6. All of these binaries contain a low mass secondary ($\approx 0.2 M_{\odot}$) and may have similar evolutionary histories, as detailed below.
In Figure 9, we show the positions of the above four systems together with the evolutionary tracks of He WDs calculated by Driebe et al. (1999). The $\log g$ and $T_{\rm eff}$ of KIC 8262223 are fitted nicely by the evolutionary track for a mass of $0.195M_{\odot}$, matching the observed mass from the RVs ($0.20 \pm 0.01 M_{\odot}$). The observed quantities of the other three systems also agree with the theory once the mass uncertainties are considered.
Chen et al. (2016) found that EL CVn type binaries can result from non-conservative binary evolution with long-term stable mass transfer between low-mass stars that avoids rapid common-envelope evolution. They performed thorough simulations with the MESA code (Paxton et al. 2011, 2013) to analyze the evolutionary channel of EL CVn stars from low mass progenitors ($M_{10}\in[0.9,2.0]M_{\odot}$ and $q_0=M_{10}/M_{20} \in [1.1,4.0]$). The parameters of the secondary of KIC 8262223 ($R_2/a=0.176$, $T_{\rm eff2}=6849$ K) fit their $R_2/a-T_{\rm eff2}$ relation for pre-He WDs very well (see their Fig. $9$). They also found a tight correlation between orbital periods and WD masses, shown in their Figure $10$; our observed values ($P=1.6$ d, $M_2=0.20M_{\odot}$) also match this relation nicely. According to their Figure $7$, a pre-He WD with a mass of $0.2M_{\odot}$, as in KIC 8262223, has an envelope mass of $0.02M_{\odot}$.
Because of the uncertainties in the treatment of mass loss and angular momentum loss in binary evolution, we do not attempt to find a best-matching evolutionary history for KIC 8262223. Instead, we show that this binary, as well as the other binaries in Table 6, can be qualitatively explained by the aforementioned formation channel. We used the binary module of the MESA evolution code (v7624) and evolved two representative systems: (1) $M_{10}=1.35M_{\odot}$, $M_{20}=1.15M_{\odot}$, and $P_0=2.89$ d; and (2) $M_{10}=1.0M_{\odot}$, $M_{20}=0.9M_{\odot}$, and $P_0=3.0$ d. The metallicities were set to the solar value ($Z=0.02$) and the initial helium abundances were fixed at $Y=0.28$. The evolution was assumed to be non-conservative. Following the assumptions in Chen et al. (2016), half of the mass lost from the vicinity of the donor is accreted by the gainer, while the other 50% leaves the system as a fast wind carrying the specific angular momentum of the donor. The mass transfer rate is calculated implicitly using the Ritter scheme (Ritter 1988).
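The angular-momentum bookkeeping implied by these assumptions can be sketched as follows. This is a schematic in arbitrary consistent units; the function names are ours, and this is not MESA's internal implementation.

```python
import numpy as np

def donor_specific_j(m_donor, m_gainer, a, period):
    """Specific orbital angular momentum of the donor on a circular orbit,
    j1 = (m_gainer/M_tot)^2 * a^2 * Omega, in units of [a]^2/[period]."""
    m_tot = m_donor + m_gainer
    omega = 2.0 * np.pi / period
    return (m_gainer / m_tot) ** 2 * a ** 2 * omega

def systemic_jdot(m_donor, m_gainer, a, period, mdot_transfer, beta=0.5):
    """Systemic angular-momentum loss rate when a fraction (1 - beta) of
    the transfer stream leaves as a fast wind with the donor's specific
    orbital angular momentum (mdot_transfer > 0 is mass leaving the donor)."""
    return ((1.0 - beta) * mdot_transfer
            * donor_specific_j(m_donor, m_gainer, a, period))
```

With $\beta=0.5$, as assumed in the text, half of each unit of transferred mass is accreted and the other half drains orbital angular momentum at the donor's specific rate, which is what drives the non-conservative period evolution of the MESA models.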
The evolutionary paths are shown in the H-R diagram in Figure 10. The black and red tracks are for the donor and gainer in model (1), respectively. The system starts with an orbital period of $2.89$ d, and the two stars initially follow their single-star evolutionary tracks. Mass transfer begins when the primary evolves to the sub-giant stage and its radius reaches its Roche lobe (at $t=3.32$ Gyr, marked by the filled circles). After stable mass transfer lasting $\sim 1$ Gyr (marked by the star symbol), the donor enters a long-lived stage of almost constant luminosity and begins to contract, cool, and evolve into a pre-He WD. The system ends up with $M_{1}=0.218M_{\odot}$, $M_2=1.716M_{\odot}$, and $P=3.59$ d. For model (2), only the evolution of the initial primary is shown in Figure 10 (gray line; the secondary's track is omitted for clarity). The final state of this system has $M_{1}=0.20M_{\odot}$, $M_2=1.30M_{\odot}$, and $P=1.06$ d.
The observed positions of the four cool Algol systems mentioned above are shown in Figure 10, with mass gainers as red symbols and donors as black symbols. According to the above evolutionary models, the mass gainer evolves along the red track to the upper left and arrives at the observed locations of the A- or F-type dwarfs, which can enter the $\delta$ Scuti instability strip (IS). The blue and red edges for the fundamental radial modes calculated by Dupret et al. (2005) are denoted by the dotted lines, and those for the fourth overtone radial modes by the dashed lines[^1]. The mass donor becomes hotter and smaller, evolving to the left into a pre-He WD. The evolutionary tracks of these two representative models are not meant to explain the properties of the four systems quantitatively, but rather to show the regions of the H-R diagram that the products of this binary evolution can occupy and the final states that form EL CVn stars. The secondary of KIC 8262223 appears to have just finished its mass transfer and to be contracting (i.e., going from filling to under-filling its Roche lobe). Because the secondary still has a large radius, the binary light curves show a partial eclipse instead of the flat-bottomed transit signal typical of EL CVn stars.
It is interesting to note that the dwarf stars in these systems are often pulsating (all but V228, which has a mass too low to be a $\delta$ Scuti pulsator). As can be seen in Table 6, these systems can pulsate at low ($20-30$ d$^{-1}$) as well as high frequencies ($50-60$ d$^{-1}$). It is known that the unstable range of pulsations varies as the star evolves off the ZAMS. For example, for a $\delta$ Scuti star with $M=1.8M_{\odot}$, the p-modes with $n_p=4-7$ ($45-60$ d$^{-1}$) are unstable for young models close to the ZAMS, while the unstable range moves to $5-25$ d$^{-1}$ (low order p-modes or g-modes) for models near the TAMS (Dupret 2002). Asteroseismology thus has the potential to determine the ages of the $\delta$ Scuti pulsators in these EL CVn binaries. Not only the dwarfs but also the pre-He WDs can pulsate: one example is the $g$-mode pulsating WD in KIC 9164561 (Zhang et al. 2016), and such pulsations enabled the discovery of a thick hydrogen envelope on the pre-He WD J0247-25B (Maxted et al. 2013). The theoretical instability strip of these pre-He WDs has been examined closely by C[ó]{}rsico & Althaus (2016). It would be interesting to investigate whether the mass-gainer $\delta$ Scuti star and the pre-He WD can both reside in their corresponding instability strips, as current observations only reveal systems containing one pulsator. More information can be extracted from these pulsations, which may lead to great advances in our understanding of the evolution of low mass close binaries.
Conclusions and Prospects
=========================
Utilizing the accurate [*Kepler*]{} photometric data and our ground-based spectroscopic data, we determined the fundamental parameters of KIC 8262223, an eclipsing binary with an orbital period of $1.6$ days that contains an A-type dwarf and a low mass pre-He WD. The light curves show high frequency pulsations at about $60$ d$^{-1}$. These $\delta$ Scuti type pulsations likely arise from the primary star and can be explained as radial and non-radial p-modes. We discussed possible evolutionary scenarios and showed that this system, together with several very similar binaries, can be explained by the non-conservative evolution of close binaries with low mass progenitors, the channel that forms EL CVn type stars. KIC 8262223 also poses some challenges to the non-adiabatic theory of stellar pulsations in modeling these post-mass transfer $\delta$ Scuti stars. These rejuvenated $\delta$ Scuti stars pulsate at high frequencies and may explain the observed over-abundance of high-frequency $\delta$ Scuti stars in the [*Kepler*]{} field (Balona et al. 2015).
Asteroseismic modeling has not yet been applied to post-mass transfer $\delta$ Scuti stars in pulsating Algols (oEAs) because of the complex nature of these systems. As a prerequisite, the pulsation modes can be identified through high cadence and high resolution spectroscopy. The eclipse mapping method (Reed et al. 2005; B[í]{}r[ó]{} & Nuspl 2011) is also promising but still awaits application to a real object. Several hundred $\delta$ Scuti variables in binaries have already been detected by the [*Kepler*]{} satellite as well as by ground-based observations (Pigulski & Michalska 2007), and future missions like [*TESS*]{} will provide more systems. A complete analysis of their pulsational properties will require a better understanding of close binary tidal interactions and binary evolution.

We are indebted to the anonymous referee for helpful comments and suggestions which greatly improved the quality of this paper. We thank Jerome A. Orosz for his constant support in using the ELC code. We thank Bill Paxton, Rich Townsend and others for maintaining and updating MESA and GYRE. We thank Meng Sun for helpful discussions. This work is partly based on data from the [*Kepler*]{} mission. Funding for this mission is provided by NASA's Science Mission Directorate. The photometric data were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. This study was supported by NASA grants NNX12AC81G, NNX13AC21G, and NNX13AC20G. This material is based upon work supported by the National Science Foundation under Grant No. AST-1411654. A. G. H. acknowledges support from Fundação para a Ciência e a Tecnologia (FCT, Portugal) through the fellowship SFRH/BPD/80619/2011, and from the EC Project SPACEINN (FP7-SPACE-2012-312844). Z. H. is partly supported by the Natural Science Foundation of China (Grant Nos 11521303, 11390374).
Institutional support has been provided from the GSU College of Arts and Sciences and the Research Program Enhancement fund of the Board of Regents of the University System of Georgia, administered through the GSU Office of the Vice President for Research and Economic Development.
Aerts, C., Christensen-Dalsgaard, J., & Kurtz, D. W. 2010, Asteroseismology, Astronomy and Astrophysics Library (Heidelberg: Springer)
Armstrong, D. J., Gomez Maqueo Chew, Y., Faedi, F., & Pollacco, D. 2014, , 437, 3473
Bagnuolo, W. G., Jr., Gies, D. R., Hahula, M. E., Wiemker, R., & Wiggs, M. S. 1994, , 423, 446
Balona, L. A., Daszy[ń]{}ska-Daszkiewicz, J., & Pamyatnykh, A. A. 2015, , 452, 3073
B[í]{}r[ó]{}, I. B., & Nuspl, J. 2011, , 416, 1601
Caldwell, D. A., Kolodziejczak, J. J., Van Cleve, J. E., et al. 2010, , 713, L92
Charbonneau, P. 1995, , 101, 309
Chen, X., Maxted, P. F. L., Li, J., & Han, Z. 2016, , submitted, arXiv:1604.01956
Conroy, K. E., Prša, A., Stassun, K. G., et al. 2014, , 147, 45
C[ó]{}rsico, A. H., Althaus, L. G., Serenelli, A. M., et al. 2016, , 588, A74
da Silva, R., Maceroni, C., Gandolfi, D., Lehmann, H., & Hatzes, A. P. 2014, , 565, A55
Driebe, T., Bl[ö]{}cker, T., Sch[ö]{}nberner, D., & Herwig, F. 1999, , 350, 89
Dupret, M. A. 2002, Bulletin de la Société Royale des Sciences de Liège, 71, 249
Eggleton, P. P., & Kiseleva-Eggleton, L. 2002, , 575, 461
Garc[í]{}a Hern[á]{}ndez, A., Mart[í]{}n-Ruiz, S., Monteiro, M. J. P. F. G., et al. 2015, , 811, L29
Gies, D. R., Williams, S. J., Matson, R. A., et al. 2012, , 143, 137
Gies, D. R., Matson, R. A., Guo, Z., et al. 2015, , 150, 178
Grevesse, N., & Sauval, A. J. 1998, , 85, 161
Guo, Z, Gies, D. R., Matson, R. A., et al. 2016, , 826, 69
Guo, Z., Gies, D. R., & Fuller, J. 2017, , 834, 59
Hambleton, K. M., Kurtz, D. W., Prsa, A., et al. 2013, , 434, 925
Hambleton, K., Kurtz, D. W., Pr[š]{}a, A., et al. 2016, , 463, 1199
Hensberge, H., Iliji[ć]{}, S., & Torres, K. B. V. 2008, , 482, 1031
Iglesias, C. A., & Rogers, F. J. 1996, , 464, 943
Ilijic, S., Hensberge, H., Pavlovski, K., & Freyhammer, L. M. 2004, (ASP Conf. Vol. 318), ed. R. W. Hilditch, H. Hensberge, & K. Pavlovski, (San Francisco: ASP), 111
Kallinger, T., Reegen, P., & Weiss, W. W. 2008, , 481, 571
Kaluzny, J., Thompson, I. B., Rucinski, S. M., et al. 2007, , 134, 541
Lehmann, H., Southworth, J., Tkachenko, A., & Pavlovski, K. 2013, , 557, A79
Lenz, P., & Breger, M. 2005, Communications in Asteroseismology, 146, 53
Loeb, A., & Gaudi, B. S. 2003, , 588, L117
Maceroni, C., Lehmann, H., da Silva, R., et al. 2014, , 563, A59
Matson, R. A., Gies, D. R., Guo, Z., & Orosz, J. A. 2016, , 151, 139
Maxted, P. F. L., Serenelli, A. M., Miglio, A., et al. 2013, , 498, 463
Maxted, P. F. L., Bloemen, S., Heber, U., et al. 2014, , 437, 1681
Mkrtichian, D. E., Kusakin, A. V., Gamarova, A. Y., et al. 2002, in Observational Aspects of Pulsating B- and A Stars, (ASP Conf. Vol. 256), ed. C. Sterken & D. W. Kurtz (San Francisco: ASP), 259
Mkrtichian, D. E., Nazarenko, V., Gamarova, A. Y., et al. 2003, in Interplay of Periodic, Cyclic and Stochastic Variability in Selected Areas of the H-R Diagram, (ASP Conf. Vol. 292), ed. C. Sterken (San Francisco: ASP), 113
Mkrtichian, D. E., Kusakin, A. V., Rodriguez, E., et al. 2004, , 419, 1015
Orosz, J. A., & Hauschildt, P. H. 2000, , 364, 265
Paczy[ń]{}ski, B. 1971, , 9, 183
Papar[ó]{}, M., Benk[ő]{}, J. M., Hareter, M., & Guzik, J. A. 2016a, , 822, 100
Papar[ó]{}, M., Benk[ő]{}, J. M., Hareter, M., & Guzik, J. A. 2016b, , 224, 41
Paxton, B., Bildsten, L., Dotter, A., et al. 2011, , 192, 3
Paxton, B., Cantiello, M., Arras, P., et al. 2013, , 208, 4
Pigulski, A., & Michalska, G. 2007, , 57, 61
Prša, A., Batalha, N., Slawson, R. W., et al. 2011, , 141, 83
Reed, M. D., Brondel, B. J., & Kawaler, S. D. 2005, , 634, 602
Ritter, H. 1988, , 202, 93
Rodr[í]{}guez-Merino, L. H., Chavez, M., Bertone, E., & Buzzoni, A. 2005, , 626, 411
Sarna, M. J. 2008, arXiv:0812.5051
Slawson, R. W., Prša, A., Welsh, W. F., et al. 2011, , 142, 160
Southworth, J., Zima, W., Aerts, C., et al. 2011, , 414, 2413
Soydugan, E., Soydugan, F., Demircan, O., & [İ]{}bano[ǧ]{}lu, C. 2006, , 370, 2013
Stellingwerf, R. F. 1978, , 83, 1184
Stepien, K., Pamyatnykh, A. A., & Rozyczka, M. 2016, arXiv:1610.02199
Su[á]{}rez, J. C., Garc[í]{}a Hern[á]{}ndez, A., Moya, A., et al. 2014, , 563, A7
Telting, J. H., [Ø]{}stensen, R. H., Baran, A. S., et al. 2012, , 544, A1
Townsend, R. H. D., & Teitler, S. A. 2013, , 435, 3406
van Kerkwijk, M. H., Rappaport, S. A., Breton, R. P., et al. 2010, , 715, 51
Welsh, W. F., Orosz, J. A., Aerts, C., et al. 2011, , 197, 4
Wilson, R. E., & Devinney, E. J. 1971, , 166, 605
Xiong, D. R., Deng, L., Zhang, C., & Wang, K. 2016, , 457, 3163
Zhang, X. B., Fu, J. N., Li, Y., et al. 2016, , 821, L32
[cccccccc]{}
55369.9232 & 0.19 & $ 1.5$ $\pm$ 1.7 & $-0.9$& 215.4 $\pm$ 4.3& $0.4$ & KPNO\
55732.8574 & 0.19 & $ 2.6$ $\pm$ 1.7 & 0.4& 213.5 $\pm$ 4.5& $-3.3$ & KPNO\
55815.8979 & 0.68 & $ 42.7$ $\pm$ 1.9 & 0.2& $-159.2$ $\pm$ 5.9& $-2.1$ & KPNO\
56077.9534 & 0.14 & $ 6.2$ $\pm$ 1.7 & 0.8& 185.0 $\pm$ 4.7& 4.2 & KPNO\
56078.7629 & 0.64 & $ 42.3$ $\pm$ 2.4 & 2.0& $-144.2$ $\pm$ 7.0& $-11.9$ & KPNO\
56078.8440 & 0.79 & $ 42.0$ $\pm$ 1.8 & $-1.2$& $-163.3$ $\pm$ 5.7& 1.9 & KPNO\
56078.9357 & 0.75 & $ 45.8$ $\pm$ 1.9 & 1.6& $-183.5$ $\pm$ 5.6& $-3.9$ & KPNO\
56079.7925 & 0.28 & $ 9.3$ $\pm$ 3.4 & 7.2& 220.4 $\pm$ 10.7& $-6.4$ & KPNO\
56081.9642 & 0.63 & $ 34.8$ $\pm$ 2.9 & $-4.2$& $-103.3$ $\pm$ 6.4& 15.5 & KPNO\
56082.8204 & 0.16 & $ 3.1$ $\pm$ 1.8 & $-1.0$& 195.2 $\pm$ 4.7& 0.9 & KPNO\
56082.8833 & 0.20 & $ 1.7$ $\pm$ 1.7 & $-0.4$& 217.2 $\pm$ 4.7& 0.0 & KPNO\
56082.9468 & 0.23 & $ 0.6$ $\pm$ 2.0 & $-0.8$& 227.5 $\pm$ 5.6& $-1.2$ & KPNO\
[lccc]{} $T_0$ (primary minimum) (HJD-2,400,000) & $55690.5 \pm 0.1$ & $55690.605$\
$K_1$ (km s$^{-1}$) & $21.4 \pm 1.0$ & $21.5$\
$K_2$ (km s$^{-1}$) & $204.8\pm 3.2$ & $201.4$\
$\gamma_1$ (km s$^{-1}$) & $22.8 \pm 0.6 $ & $...$\
$\gamma_2$ (km s$^{-1}$) & $25.1\pm 1.7$ & $...$\
$e$ & $0.0\tablenotemark{a}$ & $ 0.0\tablenotemark{a}$\
$rms_1$ (km s$^{-1}$) & $2.6$ & $...$\
$rms_2$ (km s$^{-1}$) & $6.3$ & $ ...$\
[lccc]{} $T_{\rm eff}$ (K) & $9128 \pm 130$ & $7119 \pm 150$\
$\log g$ (cgs) & $4.3\tablenotemark{a}$ & $3.5\tablenotemark{a}$\
$v \sin i$ (km s$^{-1}$) & $37 \pm 13$ & $35 \pm 10$\
$\rm[Fe/H]$ & $-0.05 \pm 0.10$ & $ -0.05 \pm 0.10$\
Flux Contribution & $82.6\%$ &$17.4\%$
[lcccc]{} Period (days) & $1.61301476\tablenotemark{a}$& $1.61301476\tablenotemark{a}$\
Time of primary minimum (BJD-2400000) & $55432.522844(7)$ &$55432.522844(7)$\
Mass ratio $q=M_2/M_1$ &$0.104\tablenotemark{a}$ &0.107(2)\
Orbital eccentricity $e$ &$0.0\tablenotemark{a}$ &$0.0\tablenotemark{a}$\
Orbital inclination $i$ (degree) &$75.203(7)$ & 75.178(2)\
Semi-major axis a ($R_\odot$) &$7.45(11)$ &7.48(10)\
$M_1$ ($M_\odot$) & $1.94(6)$ &1.96(6)\
$M_2$ ($M_\odot$) & $0.20(1)$ &0.21(1)\
$R_1$ ($R_\odot$) & $1.67(3)$ &1.67(4)\
$R_2$ ($R_\odot$) & $1.31(2)$ &1.32(3)\
Filling factor $f_1$ &$0.314(3)$ &$0.314(2)$\
Filling factor $f_2$ &$0.672(1)$ &0.671(1)\
Gravity brightening, $\beta_1$ &$0.25\tablenotemark{a}$ &$0.25\tablenotemark{a}$\
Gravity brightening, $\beta_2$ &$0.08\tablenotemark{a}$&$0.08\tablenotemark{a}$\
Bolometric albedo 1& $1.0\tablenotemark{a}$&$1.0\tablenotemark{a}$\
Bolometric albedo 2& $0.22(1)$ &0.22(1)\
Beaming parameter 1& $2.76\tablenotemark{a}$&$2.76\tablenotemark{a}$\
Beaming parameter 2& $3.48\tablenotemark{a}$&$3.48\tablenotemark{a}$\
$T_{\rm eff1}$ (K) & $9128\tablenotemark{a} $&$9128\tablenotemark{a} $\
$T_{\rm eff2}$ (K) & $6849(15) $ &$6885(24) $\
$\log g_1$ (cgs) & $4.28(4)$ & $4.28(2)$\
$\log g_2$ (cgs) & $3.51(6)$ & $3.52(2)$\
Synchronous $v \sin i_1$ (km s$^{-1}$) & $50.6(9)$ &50.7(9)\
Synchronous $v \sin i_2$ (km s$^{-1}$) & $39.6(6)$ &39.9(7)\
$K_1$ (km s$^{-1}$) & $21.4\tablenotemark{a}$ &21.9(3)\
$\gamma$ (km s$^{-1}$) & $22.8\tablenotemark{a}$ &23.0(5)\
[lccccccc]{} $f_{ 1}$ & $64.43390 \pm 0.00010$ & $1.319 \pm 0.020$ & $0.896\pm 0.007$ & $114.5$ & $$\
$f_{ 2}$ & $57.17794 \pm 0.00016$ & $0.918 \pm 0.022$ & $0.313\pm 0.011$ & $72.3$ & $$\
$f_{ 3}$ & $61.43616 \pm 0.00018$ & $0.782 \pm 0.021$ & $0.190\pm 0.012$ & $64.4$ & $$\
$f_{ 4}$ & $53.64792 \pm 0.00024$ & $0.620 \pm 0.022$ & $0.345\pm 0.016$ & $49.0$ & $$\
$f_{ 5}$ & $51.04548 \pm 0.00026$ & $0.565 \pm 0.021$ & $0.281\pm 0.017$ & $46.0$ & $$\
$f_{ 6}$ & $54.78183 \pm 0.00028$ & $0.540 \pm 0.022$ & $0.368\pm 0.019$ & $42.5$ & $$\
$f_{ 7}$ & $63.28439 \pm 0.00028$ & $0.497 \pm 0.020$ & $0.516\pm 0.019$ & $42.2$ & $$\
$f_{ 8}$ & $60.31265 \pm 0.00040$ & $0.366 \pm 0.021$ & $0.949\pm 0.027$ & $29.7$ & $$\
$f_{ 9}$ & $61.19863 \pm 0.00040$ & $0.363 \pm 0.021$ & $0.896\pm 0.027$ & $29.8$ & $$\
$f_{ 10}$ & $49.08047 \pm 0.00039$ & $0.357 \pm 0.020$ & $0.218\pm 0.027$ & $30.1$ & $$\
$f_{ 11}$ & $60.19302 \pm 0.00052$ & $0.284 \pm 0.021$ & $0.669\pm 0.035$ & $22.9$ & $$\
$f_{ 12}$ & $63.82187 \pm 0.00049$ & $0.281 \pm 0.020$ & $0.947\pm 0.033$ & $24.1$ & $$\
$f_{ 13}$ & $54.88585 \pm 0.00059$ & $0.255 \pm 0.022$ & $0.607\pm 0.040$ & $20.0$ & $$\
$f_{ 14}$ & $62.43309 \pm 0.00063$ & $0.226 \pm 0.020$ & $0.717\pm 0.042$ & $18.9$ & $$\
$f_{ 15}$ & $53.54042 \pm 0.00071$ & $0.212 \pm 0.022$ & $0.134\pm 0.048$ & $16.7$ & $$\
$f_{ 16}$ & $57.77610 \pm 0.00071$ & $0.210 \pm 0.022$ & $0.885\pm 0.048$ & $16.6$ & $$\
$f_{ 17}$ & $50.32422 \pm 0.00071$ & $0.201 \pm 0.021$ & $0.513\pm 0.048$ & $16.6$ & $$\
$f_{ 18}$ & $55.94001 \pm 0.00078$ & $0.193 \pm 0.022$ & $0.050\pm 0.053$ & $15.2$ & $$\
$f_{ 19}$ & $50.97786 \pm 0.00079$ & $0.185 \pm 0.021$ & $0.755\pm 0.053$ & $15.1$ & $$\
$f_{ 21}$ & $64.47378 \pm 0.00079$ & $0.172 \pm 0.020$ & $0.584\pm 0.053$ & $14.9$ & $$\
$f_{ 22}$ & $61.55406 \pm 0.00086$ & $0.167 \pm 0.021$ & $0.746\pm 0.058$ & $13.7$ & $$\
$f_{ 23}$ & $61.46043 \pm 0.00087$ & $0.165 \pm 0.021$ & $0.255\pm 0.059$ & $13.6$ & $$\
$f_{ 24}$ & $58.42108 \pm 0.00097$ & $0.153 \pm 0.022$ & $0.494\pm 0.066$ & $12.2$ & $$\
$f_{ 25}$ & $63.66929 \pm 0.00096$ & $0.144 \pm 0.020$ & $0.252\pm 0.065$ & $12.3$ & $$\
$f_{ 26}$ & $49.85027 \pm 0.00100$ & $0.143 \pm 0.021$ & $0.713\pm 0.067$ & $11.9$ & $$\
$f_{ 27}$ & $58.42454 \pm 0.00108$ & $0.138 \pm 0.022$ & $0.593\pm 0.073$ & $10.9$ & $$\
$f_{ 28}$ & $60.23810 \pm 0.00108$ & $0.135 \pm 0.021$ & $0.568\pm 0.073$ & $10.9$ & $$\
$f_{ 29}$ & $59.09553 \pm 0.00117$ & $0.127 \pm 0.021$ & $0.757\pm 0.079$ & $10.1$ & $$\
$f_{ 30}$ & $56.62486 \pm 0.00120$ & $0.126 \pm 0.022$ & $0.992\pm 0.081$ & $9.9$ & $$\
$f_{ 31}$ & $60.28838 \pm 0.00126$ & $0.116 \pm 0.021$ & $0.751\pm 0.085$ & $9.4$ & $$\
$f_{ 32}$ & $64.52926 \pm 0.00119$ & $0.114 \pm 0.020$ & $0.752\pm 0.080$ & $9.9$ & $$\
$f_{ 33}$ & $52.27648 \pm 0.00130$ & $0.114 \pm 0.021$ & $0.204\pm 0.088$ & $9.1$ & $$\
$f_{ 34}$ & $50.42825 \pm 0.00130$ & $0.111 \pm 0.021$ & $0.808\pm 0.088$ & $9.1$ & $$\
$f_{ 35}$ & $54.50442 \pm 0.00142$ & $0.105 \pm 0.022$ & $0.515\pm 0.096$ & $8.3$ & $$\
$f_{ 36}$ & $63.20290 \pm 0.00146$ & $0.096 \pm 0.020$ & $0.717\pm 0.099$ & $8.1$ & $$\
$f_{ 37}$ & $56.85719 \pm 0.00165$ & $0.091 \pm 0.022$ & $0.197\pm 0.111$ & $7.2$ & $$\
$f_{ 38}$ & $59.76651 \pm 0.00174$ & $0.084 \pm 0.021$ & $0.690\pm 0.118$ & $6.8$ & $$\
$f_{ 40}$ & $57.75009 \pm 0.00199$ & $0.075 \pm 0.022$ & $0.430\pm 0.134$ & $5.9$ & $$\
$f_{ 41}$ & $61.62167 \pm 0.00207$ & $0.069 \pm 0.021$ & $0.177\pm 0.139$ & $5.7$ & $$\
$f_{ 42}$ & $60.36814 \pm 0.00215$ & $0.068 \pm 0.021$ & $0.742\pm 0.145$ & $5.5$ & $$\
$f_{ 43}$ & $51.07405 \pm 0.00215$ & $0.068 \pm 0.021$ & $0.854\pm 0.145$ & $5.5$ & $$\
$f_{ 44}$ & $64.40949 \pm 0.00205$ & $0.067 \pm 0.020$ & $0.325\pm 0.138$ & $5.8$ & $$\
$f_{ 45}$ & $61.36160 \pm 0.00222$ & $0.065 \pm 0.021$ & $0.574\pm 0.150$ & $5.3$ & $$\
$f_{ 46}$ & $65.66649 \pm 0.00212$ & $0.063 \pm 0.019$ & $0.513\pm 0.143$ & $5.6$ & $$\
$f_{ 47}$ & $66.96511 \pm 0.00209$ & $0.062 \pm 0.019$ & $0.258\pm 0.141$ & $5.7$ & $$\
$f_{ 48}$ & $54.37597 \pm 0.00243$ & $0.062 \pm 0.022$ & $0.028\pm 0.164$ & $4.9$ & $$\
$f_{ 49}$ & $51.11913 \pm 0.00237$ & $0.061 \pm 0.021$ & $0.195\pm 0.160$ & $5.0$ & $$\
$f_{ 50}$ & $60.16181 \pm 0.00242$ & $0.060 \pm 0.021$ & $0.567\pm 0.163$ & $4.9$ & $$\
$f_{ 51}$ & $60.34386 \pm 0.00242$ & $0.060 \pm 0.021$ & $0.157\pm 0.163$ & $4.9$ & $$\
$f_{ 52}$ & $58.97922 \pm 0.00247$ & $0.060 \pm 0.021$ & $0.686\pm 0.167$ & $4.8$ & $$\
$f_{ 54}$ & $55.81344 \pm 0.00262$ & $0.057 \pm 0.022$ & $0.227\pm 0.177$ & $4.5$ & $$\
$f_{ 55}$ & $58.14367 \pm 0.00266$ & $0.056 \pm 0.022$ & $0.960\pm 0.179$ & $4.5$ & $$\
$f_{ 56}$ & $59.80451 \pm 0.00275$ & $0.054 \pm 0.021$ & $0.532\pm 0.185$ & $4.3$ & $$\
$f_{ 58}$ & $0.79928 \pm 0.00277$ & $0.052 \pm 0.021$ & $0.658\pm 0.187$ & $4.3$ & $$\
$f_{ 59}$ & $54.28582 \pm 0.00294$ & $0.051 \pm 0.022$ & $0.767\pm 0.199$ & $4.0$ & $$\
$f_{ 60}$ & $48.03643 \pm 0.00270$ & $0.051 \pm 0.020$ & $0.140\pm 0.182$ & $4.4$ & $$\
$f_{ 61}$ & $59.63633 \pm 0.00291$ & $0.051 \pm 0.021$ & $0.109\pm 0.197$ & $4.1$ & $$\
$f_{ 62}$ & $62.25437 \pm 0.00288$ & $0.049 \pm 0.021$ & $0.425\pm 0.194$ & $4.1$ & $$\
$f_{ 63}$ & $64.90535 \pm 0.00281$ & $0.048 \pm 0.020$ & $0.340\pm 0.190$ & $4.2$ & $$\
$f_{ 64}$ & $65.72024 \pm 0.00282$ & $0.047 \pm 0.019$ & $0.585\pm 0.190$ & $4.2$ & $$\
$f_{ 20}$ & $1.23967 \pm 0.00079$ & $0.183 \pm 0.021$ & $0.027\pm 0.053$ & $14.9$ & $2f_{orb}$\
$f_{ 39}$ & $58.09686 \pm 0.00194$ & $0.077 \pm 0.022$ & $0.993\pm 0.131$ & $6.1$ & $f_{37}+2f_{orb}$\
$f_{ 53}$ & $49.18796 \pm 0.00242$ & $0.058 \pm 0.020$ & $0.677\pm 0.163$ & $4.9$ & $f_{34}-2f_{orb}$\
$f_{ 57}$ & $62.58379 \pm 0.00267$ & $0.053 \pm 0.020$ & $0.528\pm 0.180$ & $4.4$ & $f_{36}-f_{orb}$\
[ccccccccccc]{}
KIC8262223 & 1.94 & $ 0.20$ & 1.67& 1.31& 9128 & 6849& $1.61$&$50-65$&detached\
KIC10661783 & 2.05 & $ 0.20$ & 2.56& 1.12 & 7760 & 5980& 1.23&$20-30$&detached\
AS Eri & 1.92 & $ 0.21$ & 1.50& 1.15 & 7290 & 4250 & 2.66&$\approx 60$&semi-detached\
V228 & 1.51 & $ 0.20$ & 1.36& 1.24 & 8070 & 5810 & 1.15&...&semi-detached\
[Figure: The reconstructed individual spectra (red) of the primary (upper) and secondary (lower) of KIC 8262223. The best matching atmospheric models from UVBLUE are shown as black spectra, and the corresponding parameters $T_{\rm eff}$ (K), $v \sin i$ (km s$^{-1}$), $\log g$ (cgs), and ${\rm [Fe/H]}$ are labeled.]
[^1]: The blue/red edges depend on the radial order of the modes, as well as on other model parameters (e.g., the mixing length parameter $\alpha_{\rm MLT}$). KIC 8262223 seems to reside outside the instability strip for $n_p=4$, and it may pulsate at higher orders ($n_p \approx 7-8$).
---
abstract: 'In papers published in the 25 years following his famous 1964 proof John Bell refined and reformulated his views on locality and causality. Although his formulations of local causality were in terms of probability, he had little to say about that notion. But assumptions about probability are implicit in his arguments and conclusions. Probability does not conform to these assumptions when quantum mechanics is applied to account for the particular correlations Bell argues are locally inexplicable. This account involves no superluminal action and there is even a sense in which it is local, but it is in tension with the requirement that the direct causes and effects of events are nearby.'
author:
- |
Richard A. Healey\
The University of Arizona
title: 'Local Causality, Probability and Explanation'
---
Introduction
============
I never met John Bell, but his writings have supplied me with a continual source of new insights as I read and reread them over 40 years. In working toward a rather different understanding of quantum mechanics he has been foremost in my mind as a severe but honest critic of such attempts. We all would love to know what Einstein would have made of Bell’s theorem. I confess the deep regret I feel that Bell cannot respond to this paper is sometimes assuaged by a sense of relief.
Locality and Local Causality
============================
In his seminal 1964 paper, John Bell expressed locality as the requirement
> that the result of a measurement on one system be unaffected by operations on a distant system with which it has interacted in the past. \[2004, p. 14\]
This seems to require that the result of a measurement would have been the same, no matter what operations had been performed on such a distant system. But suppose the result of a measurement were the outcome of an indeterministic process. Then the result of the measurement might have been different even if exactly the same operations (if any) had been performed on that distant system. So can no indeterministic theory satisfy the locality requirement? Bell felt no need to address that awkward question in his 1964 paper, since he took the EPR argument to establish that any additional variables needed to restore locality and causality would have to determine a unique result of a measurement. Indeterminism was not an option:
> Since we can predict in advance the result of measuring any chosen component of $\mathbf{\sigma }_{2}$ \[in the Bohm-EPR scenario\], by previously measuring the same component of $\mathbf{\sigma }_{1}$, it follows that the result of any such measurement must actually be predetermined. (*op. cit.*, p. 15)
Afterwards he repeatedly stressed that any theory proposed as an attempt to complete quantum theory while restoring locality and causality need not be *assumed* to be deterministic: to recover such perfect (anti)correlations it would *have* to be deterministic. This argument warrants closer examination, and I’ll come back to it. But in later work Bell offered formulations of locality conditions tailored to theories that were not deterministic.
An initial motivation may have been to facilitate experimental tests of attempted local, causal completions of quantum mechanics by suitable measurements of spin or polarization components on pairs of separated systems represented by entangled quantum states. Inevitable apparatus imperfections would make it impossible to confirm a quantum prediction of perfect (anti)correlation for matched components, so experiment alone could not require a local, causal theory to reproduce them. Einstein himself thought some theory might come to underlie quantum mechanics much as statistical mechanics underlies thermodynamics. In each case there would be circumstances in which the more basic theory (correctly) predicts deviations from behavior the less basic theory leads one to expect.
But by 1975 a second motivation had become apparent—the hope that by revising or reformulating quantum mechanics as a theory of local beables one might remove ambiguity and arrive at increased precision. It is in this context that Bell now introduces a requirement of local causality. This differs in two ways from his earlier locality requirement. It is not a requirement on the world, but on theories of local beables: and it applies to theories that are probabilistic, with deterministic theories treated as a special case in which all probabilities are 0 or 1 (and densities are delta functions).[^1] Bell (\[1975\], \[1985\]) designs his requirement of local causality as a generalization of a requirement of local determinism met by Maxwell’s electromagnetic theory. In source-free Maxwellian electromagnetism, the local beables are the values of the electric and magnetic fields at each point $(\mathbf{x},t)$. This theory is locally deterministic because the field values in a space-time region are uniquely determined by their values at an earlier moment in a finite volume of space that fully closes the backward light cone of that region.
Local causality arises by generalizing to theories in which the assignment of values to some beables $\Lambda $ implies, not necessarily a particular value, but a probability distribution $Pr(A|\Lambda )$, for another beable $A$. Here is how Bell (\[2004, p. 54\]) defines it (in my notation):
> Let $N$ denote a specification of *all* the beables, of some theory, belonging to the overlap of the backward light cones of space-like separated regions 1 and 2. Let $\Lambda $ be a specification of some beables from the remainder of the backward light cone of 1, and $B$ of some beables in the region 2. Then in a *locally causal theory* $Pr(A|\Lambda ,N,B)=Pr(A|\Lambda ,N)$ whenever both probabilities are given by the theory.
If $M$ is a specification of some beables from the backward light cone of 2 but not of 1, then (assuming the joint probability distribution $Pr(A,B|\Lambda ,M,N)$ exists)
$$\begin{aligned}
Pr(A,B|\Lambda ,M,N) &= Pr(A|\Lambda ,M,N,B).Pr(B|\Lambda ,M,N) \label{1} \\
&= Pr(A|\Lambda ,N).Pr(B|\Lambda ,M) \label{2}\end{aligned}$$
where (\[1\]) follows from the definition of conditional probability, and (\[2\]) follows for any locally causal theory. This means that any theory of local beables that is locally causal satisfies the condition
$$Pr(A,B|\Lambda ,M,N)=Pr(A|\Lambda ,N).Pr(B|\Lambda ,M) \label{3}$$
In his 1990 presentation Bell modified his formulation of local causality, in part in response to constructive criticisms. He also defended his revised formulation by appeal to an *Intuitive Principle* of local causality (IP), namely
> The direct causes (and effects) of events are near by, and even the indirect causes (and effects) are no further away than permitted by the velocity of light. \[2004, p. 239\]
Here is Bell’s revised formulation of *Local Causality* (LC):
> A theory will be said to be locally causal if the probabilities attached to values of local beables in a space-time region 1 are unaltered by specification of values of local beables in a space-like separated region 2, when what happens in the backward light cone of 1 is already sufficiently specified, for example by a full specification of local beables in a space-time region 3 \[a thick “slice” that fully closes the backward light cone of region 1 wholly outside the backward light cone of 2\]. (*op. cit.*, p. 240).
Bell \[1990\] then applies this condition to a schematic experimental scenario involving a linear polarization measurement on each photon in an entangled pair in which the polarizer setting $a$ and outcome recording $A$ for one photon occur in region 1, while those ($b,B$) for the other photon occur in region 2. He derives a condition analogous to (\[3\]) and uses it to prove a CHSH inequality whose violation is predicted by quantum mechanics for the chosen entangled state for certain sets of choices of $a,b$.[^2]
$$Pr(A,B|a,b,c,\lambda )=Pr(A|a,c,\lambda ).Pr(B|b,c,\lambda )
\tag{Factorizability} \label{Factorizability}$$
Not only did this proof not assume that the theory of local beables was deterministic: even this (Factorizability) condition was not assumed but derived from the reformulated local causality requirement.
Bell’s formulation of local causality (LC) has been carefully analyzed by Norsen \[2011\], whose analysis has been further improved by Seevinck and Uffink \[2011\]. Their analyses have focused on what exactly is involved in a sufficient specification of what happens in the backward light cone of 1. This specification could fail to be sufficient through failing to mention local beables in 3 correlated with local beables in 2 through a joint correlation with local beables in the overlap of the backward light cones of 1 and 2. A violation of a local causality condition that did not require such a sufficient specification would pose no threat to the intuitive principle of local causality: specification of beables in 2 could alter the probabilities of beables in 1 if *unspecified* beables in 3 were correlated with both through a (factorizable) common cause in the overlap of the backward light cones of 1 and 2. On the other hand, requiring a specification of *all* local beables in 3 may render the condition (LC) inapplicable in attempting to show how theories meeting it predict correlations different from those successfully predicted by quantum theory.
To see the problem, consider the set-up for the intended application depicted in Figure 1. $A$,$B$ describe macroscopic events[^3], each usually referred to as the detection of a photon linearly polarized either vertically or horizontally relative to an $a$- or $b$-axis respectively: $a$,$b$ label events at which an axis is selected by rotating through an angle $a^{\circ},b^{\circ}$ respectively from some fixed direction in a plane. The region previously labeled 3 has been relabeled as 3a, a matching region 3b has been added in the backward light cone of 2, and ‘3’ now labels the entire continuous stack of space-like hypersurfaces right across the backward light cones of 1 and 2, shielding off these light cones’ overlap from 1,2 themselves. Note that each of 1,$a$ is space-like separated from each of 2,$b$.
In some theories, a complete specification of local beables in 3 would constrain (or even determine) the selection events $a$,$b$. But in the intended application $a$,$b$ must be treated as free variables in the following sense: in applying a theory to a scenario of the relevant kind each of $a$,$b$ is to be specifiable independently in a theoretical model, and both are taken to be specifiable independently of a specification of local beables in region 3. Since this may exclude some *complete* theoretical specifications of beables in region 3 it is best not to require such completeness. Instead, one should say exactly what it is for a specification to be sufficient.
Seevinck and Uffink \[2011\] clarify this notion of sufficiency as a combination of functional and statistical sufficiency, rendering the label $b$ and random variable $B$ (respectively) redundant for predicting $Pr_{a,b}(A|B,\lambda )$, the probability a theory specifies for beable $A$ representing the outcome recorded in region 1 given beables $a$,$b$ representing the free choices of what the apparatus settings are in sub-regions of 1,2 respectively, conditional on outcome $B$ in region 2 and beable specification $\lambda $ in region 3. This implies
$$Pr_{a,b}(A|B,\lambda )=Pr_{a}(A|\lambda ) \tag{4a} \label{4a}$$
Notice that $a,b$ are no longer treated as random variables, as befits their status as the locus of free choice. It would be unreasonable to require a theory of local beables to predict the probability that the experimenters make one free choice rather than another: but treating $a,b$ as random variables (as in Bell’s formulation of \[Factorizability\]) would imply the existence of probabilities of the form $Pr(a|\lambda )$, $Pr(b|\lambda )$.
By symmetry, interchanging ‘1’ with ‘2’, ‘$A$’ with ‘$B$’ and ‘$a$’ with ‘$b$’ implies
$$Pr_{a,b}(B|A,\lambda )=Pr_{b}(B|\lambda ) \tag{4b} \label{4b}$$
Seevinck and Uffink \[2011\] offer equations (\[4a\]) and (\[4b\]) as their mathematically sharp and clean (re)formulation of the condition of local causality. Together, these equations imply the condition
$$Pr_{a,b}(A,B|\lambda )=Pr_{a}(A|\lambda )\times Pr_{b}(B|\lambda )
\tag{Factorizability$_{SU}$} \label{FactorizabilitySU}$$
used to derive CHSH inequalities. Experimental evidence that these inequalities are violated by the observed correlations in just the way quantum theory leads one to expect may then be taken to disconfirm Bell’s intuitive causality principle.
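As a quick numerical illustration (not part of the paper's argument), one can check that the Born-rule correlations for the state $\left\vert \Phi ^{+}\right\rangle$ exceed the CHSH bound of 2 implied by (Factorizability$_{SU}$); the function name and the particular polarizer settings below are illustrative choices, not taken from the text.

```python
import math

# Illustrative check: Born-rule correlations for |Phi+> violate the CHSH
# bound of 2 that follows from any factorizable model.
def E(a, b):
    """Correlation of +/-1-valued polarization outcomes (angles in degrees):
    E(a,b) = cos 2(a-b) for the entangled state |Phi+>."""
    return math.cos(2 * math.radians(a - b))

# Standard polarization settings that maximize the quantum value.
a1, a2, b1, b2 = 0.0, 45.0, 22.5, 67.5
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
# S = 2*sqrt(2) ~ 2.83 > 2: no factorizable model reproduces these correlations.
```

The bound $|S|\leq 2$ holds for every probability assignment of the factorizable form, so the value $2\sqrt{2}$ computed here is exactly the kind of excess the CHSH experiments test for.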
In more detail, Seevinck and Uffink \[2011\] claim that orthodox quantum mechanics violates the statistical sufficiency conditions (commonly known as Outcome Independence, following Shimony)
$$\begin{aligned}
Pr_{a,b}(A|B,\lambda ) &= Pr_{a,b}(A|\lambda ) \tag{5a} \label{5a} \\
Pr_{a,b}(B|A,\lambda ) &= Pr_{a,b}(B|\lambda ) \tag{5b} \label{5b}\end{aligned}$$
while conforming to the functional sufficiency conditions (commonly known as Parameter Independence, following Shimony)
$$\begin{aligned}
Pr_{a,b}(A|\lambda ) &= Pr_{a}(A|\lambda ) \tag{6a} \label{6a} \\
Pr_{a,b}(B|\lambda ) &= Pr_{b}(B|\lambda ) \tag{6b} \label{6b}\end{aligned}$$
Statistical sufficiency is a condition employed by statisticians in situations where considerations of locality and causality simply don’t arise. But in this application the failure of quantum theory to provide a specification of beables in region 3 such that the outcome $B$ is always redundant for determining the probability of outcome $A$ (and similarly with ‘$A$’, ‘$B$’ interchanged) has clear connections to local causality, as Seevinck and Uffink’s \[2011\] analysis has shown.
In the light of Seevinck and Uffink’s \[2011\] analysis, perhaps Bell’s local causality condition (LC) should be reformulated as follows:
> (LC$_{SU}$) A theory is said to be locally causal$_{SU}$ if it acknowledges a class $R_{\lambda }$ of beables $\lambda $ in space-time region 3 whose values may be attached independently of the choice of $a$,$b$ and are then sufficient to render $b$ functionally redundant and $B$ statistically redundant for the task of specifying the probability of $A$ in region 1.
The notions of statistical and functional redundancy appealed to here are as follows:
> For $\lambda \in R_{\lambda }$, $\lambda $ renders $B$ statistically redundant for the task of specifying the probability of $A$ iff $Pr_{a,b}(A|B,\lambda )=Pr_{a,b}(A|\lambda )$.
>
> For $\lambda \in R_{\lambda }$, $\lambda $ renders $b$ functionally redundant for the task of specifying the probability of $A$ iff $Pr_{a,b}(A|\lambda )=Pr_{a}(A|\lambda )$.
Though admittedly less general than (LC), (LC)$_{SU}$ seems less problematic but just as well motivated by (IP), as applied to the scenario depicted in Figure 1. If correlations in violation of the CHSH inequality are locally inexplicable in so far as no theory of local beables can explain them consistent with (LC), then they surely also count as locally inexplicable in so far as no theory of local beables can explain them consistent with (LC)$_{SU}$. But Bell himself said we should regard his step from (IP) to (LC) with the utmost suspicion, and that is what I shall do. My grounds for suspicion are my belief that quantum mechanics *itself* helps us to explain the particular correlations violating CHSH inequalities that Bell \[2004, pp. 151-2\] claimed to be locally inexplicable without action at a distance. Moreover, that explanation involves no superluminal action, and there is even a sense in which it is local.
To assess the status of (LC) (or (LC)$_{SU}$) in quantum mechanics one needs to say first how it is applied to yield probabilities attached to values of local beables in a space-time region 1 and then what it would be for these to be altered by specification of values of local beables in a space-like separated region 2. This is not a straightforward matter. The Born rule may be correctly applied to yield more than one chance for the same event in region 1, and there is more than one way to understand the requirement that these chances be unaltered by what happens in region 2. As we’ll see, the upshot is that while Born rule probabilities do violate (\[Factorizability\]) (or (\[FactorizabilitySU\])) here, this counts as a violation of (LC) (or (LC)$_{SU}$) only if that condition is applied in a way that is not motivated by (IP)’s prohibition of space-like causal influences.
Probability and chance
======================
Bell credited his formulation of local causality (LC) with avoiding the “rather vague notions” of cause and effect by replacing them with a condition of probabilistic independence. The connection to (IP)’s motivating talk of ‘cause’ and ‘effect’ is provided by the thought that a cause alters (and typically raises) the chance of its effect. But this connection can be made only by using the general probabilities supplied by a theory to supply chances of particular events.
By chance I mean the definite, single-case probability of an individual event such as rain tomorrow in Tucson. As in this example, its chance depends on *when* the event occurs—afterwards, it is always 0 or 1: and it may vary up until that time as history unfolds. Chance is important because of its conceptual connections to belief and action. The chance of $e$ provides an agent’s best guide to how strongly to believe that $e$ occurs, when not in a position to be certain that it does.[^4] And the comparison between $e$’s chances according as (s)he does or does not do $D$ is critical in the agent’s decision about whether to do $D$. These connections explain why the chance of an event defaults to 0 or 1 when the agent is in a position to be certain about it—typically, after it does or doesn’t occur.
Probabilistic theories may be useful guides to the chances of events, but what they directly yield are not chances but general probabilities of the form $\Pr_{C}(E)$ for an event of type $E$ relative to reference class $C$. To apply such a general probability to yield the chance of $e$, you need to specify the type $E$ of $e$ and also the reference class $C$. A probabilistic theory may offer alternative specifications when applied to determine the chance of $e$, in which case it becomes necessary to choose the appropriate specifications. Actuarial tables may be helpful when estimating the chance that you will live to be 100, but you differ in all kinds of ways from every individual whose death figures in those tables. What you want is the most complete available specification of *your* situation: this may include much irrelevant information, but it’s not necessary to exclude this since it won’t affect the chance anyway. In Minkowski space-time, the conceptual connection between chance ($Ch$) and the degree of belief ($Cr$) it prescribes is captured in this version of David Lewis’s Principal Principle that implicitly defines chance:[^5]
> The chance of $e$ at $p$, conditional on any information $I_{p}$ about the contents of $p$’s past light cone satisfies: $Cr_{p}(e/I_{p})=_{df}Ch_{p}(e)$.
Now consider an agent who accepts quantum theory and wishes to determine the chance of the event $e_{A}$ that the next photon detected by Alice registers as vertically polarized ($V_{A}$). Assuming that the state $\left\vert \Phi ^{+}\right\rangle =\frac{1}{\sqrt{2}}\left( \left\vert H\right\rangle \left\vert H\right\rangle +\left\vert V\right\rangle \left\vert V\right\rangle \right)$ was prepared and the settings $a,b$ chosen long before, the agent is also in a position to be certain what these were. The agent is then in a position to use the Born rule to determine the chance of $e_{A}$. But that chance must be relativized not just to a time, but (relativistically) to a space-time point. So consider the following diagram:
As this shows, if the outcome in region $\mathbf{2}$ is of type $V_{B}$ then $Ch_{p}(e_{A})=Ch_{p^{\prime }}(e_{A})=Ch_{r}(e_{A})={\frac12}$, but $Ch_{q}(e_{A})=\cos ^{2}\angle ab$. These are the chances that follow from application of the Born rule to state $\left\vert \Phi ^{+}\right\rangle$, given settings $a,b$. In each case the event $e_{A}$ of the next photon detected in $\mathbf{1}$’s registering as vertically polarized has been specified as of type $V_{A}$, and the specification of the reference class at least includes the state and settings. Specifically,
$$Ch_{p}(e_{A})=Pr_{a,b}^{\Phi ^{+}}(V_{A})=||\hat{P}^{A}(V)\Phi ^{+}||^{2}={\frac12}.$$
$$Ch_{q}(e_{A})=Pr_{a,b}^{\Phi ^{+}}(V_{A}|V_{B})\equiv \frac{Pr_{a,b}^{\Phi ^{+}}(V_{A},V_{B})}{Pr_{b}^{\Phi ^{+}}(V_{B})}=\frac{{\frac12}\cos ^{2}\angle ab}{{\frac12}}=\cos ^{2}\angle ab$$
Note that the reference class used in calculating $Ch_{q}(e_{A})$ is narrower: it is further restricted by specification of the outcome as of type $V_{B}$ in region $\mathbf{2}$.
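These Born-rule values can be checked by direct computation on the state vector. The sketch below is illustrative (it assumes numpy is available, and the helper names `axes` and `born` are mine, not the paper's): it builds $\left\vert \Phi ^{+}\right\rangle$, forms the four joint outcome probabilities for analyzer settings $a,b$, and recovers $Ch_{p}(e_{A})={\frac12}$ and $Ch_{q}(e_{A})=\cos ^{2}\angle ab$.

```python
import numpy as np

# Basis vectors and the entangled state |Phi+> = (|HH> + |VV>)/sqrt(2).
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
phi = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)

def axes(theta):
    """'Vertical' and 'horizontal' analyzer directions rotated by theta (rad)."""
    v = np.cos(theta) * V - np.sin(theta) * H
    h = np.sin(theta) * V + np.cos(theta) * H
    return v, h

def born(a, b):
    """Born-rule joint distribution over the four (A, B) outcome pairs."""
    va, ha = axes(a)
    vb, hb = axes(b)
    return {(A, B): (np.kron(ea, eb) @ phi) ** 2
            for A, ea in (("V", va), ("H", ha))
            for B, eb in (("V", vb), ("H", hb))}

a, b = 0.0, np.pi / 6                     # example settings: angle ab = 30 deg
p = born(a, b)
ch_p = p[("V", "V")] + p[("V", "H")]      # Ch_p(e_A): unconditional, = 1/2
ch_q = p[("V", "V")] / (p[("V", "V")] + p[("H", "V")])  # Ch_q(e_A) given V_B
# ch_q equals cos^2(angle ab) = 3/4 here, while ch_p stays at 1/2.
```

Varying the settings changes $Ch_{q}(e_{A})$ but never the marginal $ch_p$, which is the numerical face of the distinction drawn in the surrounding text.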
Any agent who accepts quantum theory and is (momentarily) located at space-time point $x$ should match credence in $e_{A}$ to $Ch_{x}(e_{A})$ because it is precisely the role of chance to reflect the epistemic bearing of all information accessible at $x$ on facts not so accessible, and to accept quantum theory is to treat it as an expert when assessing the chances. This is so whether or not an agent is *actually* located at $x$—fortunately, since it is obviously a gross idealization to locate the epistemic deliberations of a physically situated agent at a space-time point! A hypothetical agent located at $q$ in the forward light cone of region $\mathbf{2}$ (but not $\mathbf{1}$) has access to the additional information that the outcome in $\mathbf{2}$ is of type $V_{B}$: so the reference class used to infer the chance of $e_{A}$ at $q$ from the Born rule should include that information. That is why $Ch_{q}(e_{A})$ is determined by the conditional Born probability $Pr_{a,b}^{\Phi ^{+}}(V_{A}|V_{B})$ but $Ch_{p}(e_{A})$ is determined by the unconditional Born probability $Pr_{a,b}^{\Phi ^{+}}(V_{A})$.
In the special case that the settings $a,b$ coincide (the polarizers are perfectly aligned) application of the Born rule yields the chances depicted in Figure 3. Bell (\[2004, pp. 240-41\]) took this example as a simple demonstration that ordinary quantum mechanics is not locally causal, crediting the argument of EPR \[1935\]. He begins
> Each of the counters considered separately has on each repetition of the experiment a 50% chance of saying ‘yes’.
Each of the chances $Ch_{p}(e_{A}),Ch_{p^{\prime }}(e_{B})$ is $\frac12$, as Bell says: but $Ch_{q}(e_{A})=1$. After noting that quantum theory here requires a perfect correlation between the outcomes in $\mathbf{1,2}$, he continues
> So specification of the result on one side permits a 100% confident prediction of the previously totally uncertain result on the other side. Now in ordinary quantum mechanics there just *is* nothing but the wavefunction for calculating probabilities. There is then no question of making the result on one side redundant on the other by more fully specifying events in some space-time region $\mathbf{3}$. We have a violation of local causality.
It is true that (\[Factorizability\]) and (\[FactorizabilitySU\]) fail here, since $Pr_{a,a}^{\Phi ^{+}}(V_{A}|V_{B})=1$, $Pr_{a,a}^{\Phi ^{+}}(H_{A}|V_{B})=0$, while $Pr_{a,a}^{\Phi ^{+}}(V_{A})=Pr_{a,a}^{\Phi ^{+}}(V_{B})={\frac12}$. But does that constitute a violation of local causality? (LC) and (LC$_{SU}$) are both conditions straightforwardly applicable to a theory whose general probabilities yield a *unique* chance for each possible outcome in $\mathbf{1}$ prior to its occurrence. In the case of quantum theory, however, the general Born rule probabilities yield *multiple* chances for each possible outcome in $\mathbf{1}$, each at the same time (in the laboratory frame): $Ch_{p}(e_{A})={\frac12}$, but $Ch_{q}(e_{A})=1$ (assuming the outcome in $\mathbf{2}$ is of type $V_{B}$). When (LC) speaks of *the* probabilities attached to \[$e_{A},\bar{e}_{A}$\] in a space-time region $\mathbf{1}$ being unaltered by specification of \[$V_{B}$\] in a space-like separated region $\mathbf{2}$ (my italics), which probabilities are these?
Since the connection to (IP)’s motivating talk of ‘cause’ and ‘effect’ is provided by the thought that a cause alters the chance of its effect, (LC) is motivated only if applied to the *chances* of $e_{A},\bar{e}_{A}$ in region $\mathbf{1}$. But $Ch_{p}(e_{A})$ is *not* altered by specification of $V_{B}$ in space-like separated region $\mathbf{2}$: its value depends only on what happens in the backward light cone of $\mathbf{1}$, in conformity to its role in prescribing $Cr_{p}(e_{A})$. Of course $Ch_{q}(e_{A})$ does depend on the outcome in $\mathbf{2}$. If it did not, it could not fulfill its constitutive role of prescribing $Cr_{q}(e_{A}/I_{q})$ no matter what information $I_{q}$ provides about the contents of $q$’s past light cone. It follows that $Ch_{q}(e_{A})$ is not altered but *specified* by specification of the result in $\mathbf{2}$.
Only for a hypothetical agent whose world-line has entered the future light cone of $\mathbf{2}$ at $q$ is it true that specification of the result in $\mathbf{2}$ permits a 100% confident prediction of the previously totally uncertain result on the other side. A hypothetical agent at $p$ is not in a position to make a 100% confident prediction: for such an agent the result in $\mathbf{1}$ remains totally uncertain: what happens in $\mathbf{2}$ makes no difference to what (s)he should believe, since region $\mathbf{2}$ is outside the backward light cone of $p$. That is why it is $Ch_{p}(e_{A})$, not $Ch_{q}(e_{A})$, that says what is certain at $p$. Newtonian absolute time fostered the illusion of the occurrence of future events becoming certain for everyone at the same time—when they occur if not sooner. Relativity requires certainty, like chance, to be relativized to space-time points—idealized locations of hypothetical knowers.
So does ordinary quantum mechanics violate local causality? If the probabilities (LC) speaks of are $Pr_{a,b}^{\Phi ^{+}}(V_{A})$, $Pr_{a,b}^{\Phi ^{+}}(H_{A})$, and the condition that these be unaltered is understood to be that $Pr_{a,b}^{\Phi ^{+}}(V_{A})=Pr_{a,a}^{\Phi ^{+}}(V_{A}|B)$, $Pr_{a,b}^{\Phi ^{+}}(H_{A})=Pr_{a,a}^{\Phi ^{+}}(H_{A}|B)$, then ordinary quantum mechanics violates (LC). But if this is all (LC) means, then it is not motivated by (IP) and its violation does not imply that the quantum world is non-local in that there are superluminal causal relations between distant events. For (LC) to be motivated by causal considerations such as (IP), the probabilities (LC) speaks of must be understood to be chances, including $Ch_{p}(e_{A})$ and $Ch_{q}(e_{A})$. But neither of these would be altered by the specification of the outcome \[$V_{B}$\] in a space-like separated region, so local causality would then not be violated. Although it remains unclear exactly how (LC) (or (LC$_{SU}$)) is supposed to be applied to quantum mechanics, one way of applying it is unmotivated by (IP), while if it is applied in another way quantum mechanics does *not* violate this local causality condition.
Chance and Causation
====================
Suppose in the EPR-Bohm scenario that the outcome in region $\mathbf{2}$ had been of type $H_{B}$ instead of $V_{B}$: then $Ch_{q}(e_{A})$ would have been 0 instead of 1. Suppose that the polarization axis for the measurement in region $\mathbf{2}$ had been rotated through $60^{\circ}$: then $Ch_{q}(e_{A})$ would have been $\frac14$ or $\frac34$, depending on the outcome in region $\mathbf{2}$. Or suppose that no polarization measurement had been performed in region $\mathbf{2}$: then $Ch_{q}(e_{A})$ would have been $\frac12$. This shows that $Ch_{q}(e_{A})$ depends counterfactually on the polarization measurement in $\mathbf{2}$ and also on its outcome. Another way to understand talk of alteration of the probability of an event of type $V_{A}$ is as the difference between the actual value of $Ch_{q}(e_{A})$ and what its value would have been had the polarization measurement in $\mathbf{2}$ or its outcome been different. Don’t such counterfactual alterations in $Ch_{q}(e_{A})$ amount to *causal* dependence between space-like separated events, in violation of (IP)? There are several reasons why they do not.
The first reason is that while $Ch_{q}(e_{A})$ would be different in each of these counterfactual scenarios, in none of them would $Ch_{p}(e_{A})$ differ from $\frac12$, so the “local” chance of $e_{A}$ is insensitive to all such counterfactual variations in what happens in $\mathbf{2}$. If one wishes to infer causal from counterfactual dependence of the chance of a result in $\mathbf{1}$ on what happens in $\mathbf{2}$, then only one of two relevant candidates for the chance displays such counterfactual dependence. For those who think of chance as itself a kind of indeterministic cause—a localized physical propensity whose actualization may produce an effect—$Ch_{p}(e_{A})$ seems better qualified for the title of the chance of $e_{A}$ than $Ch_{q}(e_{A})$.
The role of chance in decision provides the second reason. Just as the chance of $e$ tells you everything you need to know to figure out how strongly to believe $e$, the causal dependence of $e$ on $d$ tells you everything you need to know about $e$ and $d$ when deciding whether to do $d$ (assuming you are not indifferent about $e$). As Huw Price \[2012\] put it, causal dependence should be regarded as an analyst-expert about the conditional credences required by an evidential decision maker.
Consider the situation of a hypothetical agent Bob at $p^{\prime }$ deciding whether to act by affecting what happens in $\mathbf{2}$ to try to get outcome $e_{A}$ in $\mathbf{1}$. Bob can choose not to measure anything, or he can choose to measure polarization with respect to any axis $b$. If he were to measure nothing, $Ch_{q}(e_{A})$ would be $\frac12$. If he were to measure polarization with respect to the same axis as Alice, then $Ch_{q}(e_{A})$ would be either 0 or 1, with an equal chance (at his momentary location $p^{\prime }$) of either outcome. Since he can neither know nor affect which of these chances it will be, he must base his decision on his best estimate of $Ch_{q}(e_{A})$ in accordance with Ismael’s \[2008\] Ignorance Principle:
> Where you’re not sure about the chances, form a mixture of the chances assigned by different theories of chance with weights determined by your relative confidence in those theories.
Following this principle, Bob should assign $Ch_{q}(e_{A})$ the estimated value ${\frac12}\cdot 0+{\frac12}\cdot 1={\frac12}$, and base his decision on that. Since measuring polarization with respect to the same axis as Alice would not raise his estimated chance of securing outcome $e_{A}$ in $\mathbf{1}$, he should eliminate this option *whether or not he could execute it*. His estimated value of $Ch_{q}(e_{A})$ were he to measure polarization with respect to an axis rotated $60^{\circ}$ from Alice’s is also $\frac12$ (${\frac12}\cdot{\frac14}+{\frac12}\cdot{\frac34}={\frac12}$). Similarly for any other angle. This essentially recapitulates part of the content of the no-signalling theorems, going back to Eberhard \[1978\]. Bell \[2004, pp. 237-8\] shows why manipulation of external fields at $p^{\prime }$ or in $\mathbf{2}$ would also fail to alter Bob’s estimated value of $Ch_{q}(e_{A})$.
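The arithmetic behind Bob's estimates generalizes to any axis he might choose. The sketch below (helper names `pr_joint` and `estimated_chance` are mine, used only for illustration) mixes the conditional chances of $e_{A}$ over the chances of Bob's possible outcomes, as the Ignorance Principle prescribes, and recovers $\frac12$ for every setting, the no-signalling result cited in the text.

```python
import math

def pr_joint(A, B, a, b):
    """Born-rule joint probabilities for |Phi+> (angles in radians):
    matched outcomes have probability cos^2(a-b)/2, mismatched sin^2(a-b)/2."""
    d = a - b
    return (math.cos(d) ** 2 if A == B else math.sin(d) ** 2) / 2

def estimated_chance(a, b):
    """Ignorance-Principle estimate of Ch_q(e_A): sum over Bob's possible
    outcomes B of Pr(B) * Pr(V_A | B)."""
    total = 0.0
    for B in ("V", "H"):
        pr_B = pr_joint("V", B, a, b) + pr_joint("H", B, a, b)   # = 1/2
        total += pr_B * (pr_joint("V", B, a, b) / pr_B)
    return total

# The estimate is 1/2 for every choice of Bob's axis (Alice's axis at a = 0),
# so no choice of measurement on Bob's side can signal to Alice's region.
estimates = [estimated_chance(0.0, math.radians(d)) for d in (0, 30, 60, 90)]
```

The cancellation is exact: the weighted mixture collapses to the marginal $Pr(V_{A})$, which is independent of $b$, just as the no-signalling theorems require.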
But what if Bob had simply arranged for the measurement in $\mathbf{2}$ to have had the different *outcome* $\bar{e}_{B}$? Then $Ch_{q}(e_{A})$ would have been 0 instead of 1. No-one who accepts quantum mechanics can countenance this counterfactual scenario. The Born rule implies that $Pr_{b}^{\Phi ^{+}}(H_{B})={\frac12}$, and anyone who accepts quantum mechanics accepts the implication that $Ch_{p^{\prime }}(\bar{e}_{B})={\frac12}$. So anyone who accepts quantum mechanics will have credence $Cr_{p^{\prime }}(\bar{e}_{B}/I_{p^{\prime }})={\frac12}$ no matter what he takes to happen in the backward light cone of $p^{\prime }$ (as specified by $I_{p^{\prime }}$).[^6] If he accepts quantum mechanics, Bob will conclude that there is nothing it makes sense to contemplate doing to alter his estimate of $Ch_{p^{\prime }}(\bar{e}_{B})$, and so there is no conceivable counterfactual scenario in which one in Bob’s position arranges for the measurement in $\mathbf{2}$ to have had the different outcome $\bar{e}_{B}$. In general, there is causal dependence between events in $\mathbf{1}$ and $\mathbf{2}$ only if it makes sense to speak of an intervention in one of these regions that would affect a hypothetical agent’s estimated chance of what happens in the other. Anyone who accepts quantum mechanics should deny that this makes sense.
Perhaps the most basic reason why counterfactual dependencies between happenings in region $\mathbf{2}$ and the chance(s) of $e_{A}$ are no sign of causal dependence is that chances are not beables, and are incapable of entering into causal relations. That Bell thought they behaved like beables is suggested by the \[1975\] paper in which he introduced local causality as a natural generalization of local determinism:
> In Maxwell’s theory, the fields in any space-time region $\mathbf{1}$ are determined by those in any space region $V$, at some time $t$, which fully closes the backward light cone of $\mathbf{1}$. Because the region $V$ is limited, localized, we will say the theory exhibits *local determinism*. We would like to form some no\[ta\]tion of *local causality* in theories which are not deterministic, in which the correlations prescribed by the theory, for the beables, are weaker. \[2004, p. 53\]
It seems Bell thought the chances prescribed by a theory that is not deterministic were analogous to the beables of Maxwell’s electromagnetism, so that while local determinism (locally) specified the local\[ized\] beables (e.g. fields), local causality should (locally) specify the local\[ized\] *chances* of beables, where those chances (like local beables) are themselves localized physical magnitudes.
Others have joined Bell in this view of chances as localized physical magnitudes. But quantum mechanics teaches us that chances are *not* localized physical propensities whose actualization may produce an effect. Maudlin says what he means by calling probabilities objective:
> ...there could be probabilities that arise from fundamental physics, probabilities that attach to actual or possible events in virtue solely of their physical description and independent of the existence of cognizers. These are what I mean by *objective probabilities*. (Beisbart and Hartmann eds., \[2011, p. 294\])
Although quantum chances do attach to actual or possible events, they are not objective in this sense. As we saw, the chance of outcome $e_{A}$ does not attach to it in virtue solely of its physical description: the *chances* of $e_{A}$ attach also in virtue of its space-time relations to different space-time locations. Each such location offers the epistemic perspective of a situated agent, even in a world with no such agents. The existence of these chances is independent of the existence of cognizers. But it is only because we are not merely cognizers but physically situated agents that we have needed to develop a concept of chance tailored to our needs as informationally deprived agents. Quantum chance admirably meets those needs: an omniscient God could describe and understand the physical world without it.
While they are neither physical entities nor physical magnitudes, quantum chances are objective in a different sense. They supply an objective prescription for the credences of an agent in any physical situation. Anyone who accepts quantum mechanics is committed to following that prescription.
A view of quantum mechanics
===========================
As I see it \[2012\], it is not the function of quantum states, observables, probabilities or the Schrödinger equation to represent or describe the condition or behavior of a physical system to which they are associated. These elements function in other ways when a quantum model is applied in predicting or explaining physical phenomena such as non-localized correlations. Assignment of a quantum state may be viewed as merely the first step in a procedure that licenses a user of quantum mechanics to express claims about physical systems in descriptive language and then warrants that user in adopting appropriate epistemic attitudes toward some of these claims. The language in which such claims are expressed is not the language of quantum states or operators, and the claims are not about probabilities or measurement results: they are about the values of physical magnitudes, and I’ll refer to them as *magnitude claims*. Magnitude claims were made by physicists and others before the development of quantum mechanics and continue to be made, some in the same terms, others in terms newly introduced as part of some scientific advance. But even though quantum mechanics represents an enormous scientific advance, claims about quantum states, operators and probability distributions are not magnitude claims.
The quantum state has two roles. One is in the algorithm provided by the Born Rule for assigning probabilities to significant claims of the form $M_{\Delta }(s)$: the value of $M$ on $s$ lies in $\Delta $, where $M$ is a physical magnitude, $s$ is a physical system and $\Delta $ is a Borel set of real numbers. In what follows, I will call a descriptive claim of the form $M_{\Delta }(s)$ a *canonical magnitude claim*. For two such claims the formal algorithm may be stated as follows:$$\Pr (M_{\Delta }(s),N_{\Gamma }(s))=Tr(\rho \hat{P}^{M}[\Delta ]\,\hat{P}^{N}[\Gamma ]) \tag{Born Rule} \label{Born}$$
Here $\rho $ represents a quantum state as a density operator on a Hilbert space $\mathcal{H}_{s}$ and $\hat{P}^{M}[\Delta ]$ is the value for $\Delta $ of the projection-valued measure defined by the unique self-adjoint operator on $\mathcal{H}_{s}$ corresponding to $M$.
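As a concreteness check, the trace formula can be evaluated directly for a pair of commuting projectors. A minimal numpy sketch; the particular state and magnitudes are my illustrative choices (I take $\rho$ to be the entangled polarization state $\Phi^{+}$ discussed later in the paper, with $M$, $N$ the polarizations of the two photons):

```python
import numpy as np

# Basis kets |H> = (1,0), |V> = (0,1) for each photon.
H = np.array([1.0, 0.0]); V = np.array([0.0, 1.0])

# rho: the entangled state Phi+ = (|HH> + |VV>)/sqrt(2) as a density operator.
phi_plus = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus)

# Projection-valued-measure values for two commuting magnitudes:
# "photon L is V-polarized" and "photon R is V-polarized".
P_V = np.outer(V, V)
P_M = np.kron(P_V, np.eye(2))   # P^M[Delta], acting on photon L
P_N = np.kron(np.eye(2), P_V)   # P^N[Gamma], acting on photon R

# Born Rule: Pr(M_Delta(s), N_Gamma(s)) = Tr(rho P^M[Delta] P^N[Gamma])
joint = float(np.trace(rho @ P_M @ P_N).real)
print(joint)   # -> 0.5 : in Phi+ both photons are V-polarized with chance 1/2
```

The same trace with $P^{M}[\Delta]$ alone gives the marginal probability ${\frac12}$, as expected for either single-photon polarization claim.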
But the significance of a claim like $M_{\Delta }(s)$ varies with the circumstances to which it relates. Accordingly, a quantum state plays a second role by modulating the content of $M_{\Delta }(s)$ or any other magnitude claim by modifying its inferential relations to other claims. Because I believe the nature of this modulation of content renders inappropriate the metaphor of magnitudes corresponding to elements of reality, I recommend against thinking of magnitudes that figure in canonical or other magnitude claims as beables, even though many such magnitude claims are true. But if one insists on calling magnitudes that figure in magnitude claims beables, these magnitudes are not beables introduced by quantum mechanics—they are at most beables recognized in its applications.[^7]
The quantum state is not a beable in this view. Indeed, since none of the distinctively quantum elements of a quantum model qualifies as a beable introduced by the theory, quantum mechanics has no beables of its own. Viewed this way, a quantum state does not describe or represent some new element of physical reality.[^8] But nor is it the quantum state’s role to describe or represent the epistemic state of any actual agent. A quantum state assignment is objectively true (or false): in that deflationary sense a quantum state is objectively real. But its function is not to say what the world is like but to help an agent applying quantum mechanics to predict and explain what happens in it. It is physical conditions in the world that make a quantum state assignment true (or false). True quantum state assignments are backed by true magnitude claims, though some of these are typically about physical systems other than that to which the state is assigned.
Any application of quantum mechanics involves claims describing a physical situation. While it is considered appropriate to make claims about where the individual particles contributing to the interference pattern in a contemporary interference experiment are detected, claims about which slit each particle went through are frequently alleged to be meaningless. In its second role the quantum state offers guidance on the inferential powers, and hence the content, of canonical magnitude claims.
The key idea here is that even assuming unitary evolution of a joint quantum state of system and environment, delocalization of system state coherence into the environment will typically render descriptive claims about experimental outcomes and the condition of apparatus and other macroscopic objects appropriate by endowing these claims with enough content to license an agent to adopt epistemic attitudes toward them, and in particular to apply the Born Rule. But an application of quantum mechanics to determine whether this is so will not require referring to any system as macroscopic, as an apparatus or as an environment. All that counts is how a quantum state of a super-system evolves in a model, given a Hamiltonian associated with an interaction between the system of interest and the rest of that super-system.
It is important to note that since the formulation of the Born Rule now involves no explicit or implicit reference to measurement, Bell’s (\[2004, pp. 213-31\]) strictures against the presence of the term ‘measurement’ in a precise formulation of quantum mechanics are met. None of the other proscribed terms ‘classical’, ‘macroscopic’, ‘irreversible’, or ‘information’ appears in its stead.
Since an agent’s assignment of a quantum state does not serve to represent a system’s properties, her reassignment of a collapsed state on gaining new information represents no change in that system’s properties. That is why collapse is not a physical process, in this view of quantum mechanics. Nor does the Schrödinger equation express a fundamental physical law: to assign a quantum state to a system is not to represent its dynamical properties. A formulation of quantum mechanics has no need to include a statement distinguishing the circumstances in which physical processes of Schrödinger evolution and collapse occur. An agent can use quantum mechanics to track changes of the dynamical properties of a system by noting what magnitude claims are significant and true of it at various times. But quantum mechanics itself does not imply any such claim, even when an agent would be correct to assign a system a quantum state, appropriately apply the Born Rule, and conclude that the claim has probability 1.
Quantum states are relational on this interpretation. When agents (actually or merely hypothetically) occupy relevantly different physical situations they should assign different quantum states to one and the same system, even though these different quantum state assignments are equally correct. The primary function of Born probabilities is to offer a physically situated agent authoritative advice on how to apportion degrees of belief concerning contentful canonical magnitude claims that the agent is not currently in a position to check. That is why the Born rule should be applied by differently situated agents to assign different chances to a single canonical magnitude claim $M_{\Delta }(s)$ about a system $s$ in a given situation. These different chance assignments will then be equally objective and equally correct.
The physical situation of a (hypothetical or actual) agent will change with (local) time. The agent may come to be in a position to check the truth-values of previously inaccessible magnitude claims, some of which may be taken truly to describe outcomes of measurements. If a quantum state is to continue to provide the agent with good guidance concerning still inaccessible magnitude claims, it must be updated to reflect these newly accessible truths. The required alteration in the quantum state is not a physical process involving the system, in conflict with Schrödinger evolution. What has changed is just the physical relation of the agent to events whose occurrence is described by true magnitude claims. This is not represented by a discontinuous change in the quantum state of some model: it corresponds to adoption of a *new* quantum model that incorporates additional information, newly accessible to the user of quantum mechanics.
The preceding paragraphs contained a lot of talk of agents. To forestall misunderstandings, I emphasize that quantum mechanics is not about agents or their states of knowledge or belief: A precise formulation of quantum mechanics will not speak of such things in its models any more than it will speak of agents’ measuring, observing or preparing activities. If quantum mechanics is about anything it is about the quantum systems, states, observables and probability measures that figure in its models. Quantum mechanics, like all scientific theories, was developed by (human) agents for the use of agents (not necessarily human: while insisting that any agent be physically situated, I direct further inquiry on the constitution of agents to cognitive scientists). Trivially, only an agent can apply a theory for whatever purpose. So any account of a predictive, explanatory or other application of quantum mechanics naturally involves talk of agents.
How to use quantum mechanics to explain non-localized correlations
==================================================================
In his \[1981\] Bell argued that
> certain particular correlations, realizable according to quantum mechanics, are locally inexplicable. They cannot be explained, that is to say, without action at a distance. \[2004, pp. 151-2\]
The particular correlations to which Bell refers arise, for example, in the EPR-Bohm scenario in which pairs of spin $\frac12$ “particles” are prepared in a singlet spin state, then at widely separated locations each element of a pair is passed through a Stern-Gerlach magnet and detected either in the upper or in the lower part of a screen. By calling them realizable rather than realized he acknowledged the experimental difficulties associated with actually producing statistics supporting them in the laboratory (or elsewhere). Enormous improvements in experimental technique since 1981 have overcome most of the difficulties associated with performing a loophole-free test of CHSH or other so-called Bell inequalities and at the same time provided very strong statistical evidence for quantum mechanical predictions in analogous experiments. Since the improvements have been most dramatic for experiments involving polarization measurements on entangled photons, it is appropriate to refer back to the experimental scenario discussed by Bell in 1990 (\[2004, pp. 232-248\]).
Suppose photon pairs are prepared at a central source in the entangled polarization state $\Phi ^{+}=1/\sqrt{2}(|HH\rangle +|VV\rangle )$, and the photons in a pair are both subsequently detected in coincidence at two widely separated locations after each has passed through a polarizing beam splitter (PBS) with axis set at $a,b$ respectively. If a photon is detected with polarization parallel to this axis, a macroscopic record signifies yes: if it is detected with polarization perpendicular to this axis, the record signifies no. Let the record yes at one location be the event of a magnitude $A$ taking on value $+1$, no the event of $A $ taking on value $-1$, and similarly for $B$ at the other location. Let $a $ be a locally generated signal that quickly sets the axis of the PBS on the $A $ side to an angle $a{{}^\circ}$ from some standard direction, and similarly for $b$ on the $B$ side. Assume this is done so that each of $a$ and $A$’s taking on a value is space-like separated from each of $b$ and $B$’s taking on a value. In this scenario quantum mechanics predicts that, for $a{{}^\circ}=0{{}^\circ},a^{\prime }{{}^\circ}=45{{}^\circ},b{{}^\circ}=22{\frac12}{{}^\circ},b^{\prime }{{}^\circ}=-22{\frac12}{{}^\circ}$$$E(a,b)+E(a,b^{\prime })+E(a^{\prime },b)-E(a^{\prime },b^{\prime })=2\sqrt{2}
\tag{7} \label{CHSH violation}$$where, for example, $E(a,b)\equiv
\Pr_{a,b}(+1,+1)+\Pr_{a,b}(-1,-1)-\Pr_{a,b}(+1,-1)-\Pr_{a,b}(-1,+1)$. This is in violation of the CHSH inequality$$|E(a,b)+E(a,b^{\prime })+E(a^{\prime },b)-E(a^{\prime },b^{\prime })|\leq 2
\tag{CHSH}$$that follows from (\[FactorizabilitySU\]). Bell claims these correlations are realizable according to quantum mechanics but that they cannot be explained without action at a distance. While it is generally acknowledged that quantum mechanics successfully predicts Bell’s particular correlations, demonstrating this will illustrate the present view of quantum mechanics. To decide whether it also explains them we need to ask what more is required of an explanation.
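The quantum prediction (7) can be verified numerically. A minimal sketch, assuming the standard $\Phi^{+}$ joint probabilities for photon polarization, $\Pr_{a,b}(+1,+1)=\Pr_{a,b}(-1,-1)={\frac12}\cos^{2}(a-b)$ and $\Pr_{a,b}(+1,-1)=\Pr_{a,b}(-1,+1)={\frac12}\sin^{2}(a-b)$, so that $E(a,b)=\cos 2(a-b)$:

```python
import math

def E(a_deg, b_deg):
    """Correlation E(a,b) for Phi+ photon pairs, built term by term from
    the Born-rule joint probabilities:
      Pr(+1,+1) = Pr(-1,-1) = (1/2) cos^2(a-b)
      Pr(+1,-1) = Pr(-1,+1) = (1/2) sin^2(a-b)"""
    d = math.radians(a_deg - b_deg)
    same = 0.5 * math.cos(d) ** 2      # Pr(+1,+1) = Pr(-1,-1)
    diff = 0.5 * math.sin(d) ** 2      # Pr(+1,-1) = Pr(-1,+1)
    return 2 * same - 2 * diff         # = cos 2(a-b)

# The settings from the text: a = 0, a' = 45, b = 22.5, b' = -22.5 degrees.
a, a1, b, b1 = 0.0, 45.0, 22.5, -22.5
S = E(a, b) + E(a, b1) + E(a1, b) - E(a1, b1)
print(S)   # -> 2*sqrt(2), exceeding the CHSH bound of 2
```

Each of the first three correlations equals $\cos 45{{}^\circ}=\frac{\sqrt2}{2}$ and the last equals $\cos 135{{}^\circ}=-\frac{\sqrt2}{2}$, so the combination sums to $2\sqrt2$, the Tsirelson value.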
What we take to be a satisfactory explanation has changed during the development of physics, and we may confidently expect such change to continue. One who accepts quantum mechanics is able to offer a novel kind of explanation. Nevertheless, explanations of phenomena using quantum mechanics may be seen to meet two very general conditions met by many, if not all, good explanations in physics.
\(i) They show that the phenomenon to be explained was to be expected, and
\(ii) they say what it depends on.

Quantum mechanics enables us to give explanations meeting both conditions.
Meeting the first condition is straightforward. Anyone accepting quantum mechanics can apply the Born Rule (\[Born\]) to state $\Phi ^{+}$ to calculate joint probabilities such as $\Pr_{a,b}(A,B)$ and go on to derive (\[CHSH violation\]). So for anyone who accepts quantum mechanics, violation of the CHSH inequalities is to be expected. But it is worth showing in more detail how quantum mechanics can be applied to derive (\[CHSH violation\]) because this will help to exhibit the relational nature of quantum states and probabilities while making it clear that a precise formulation of quantum mechanics need not use the word ‘measurement’ or any other term on Bell’s list of proscribed words \[2004, p. 215\].
Figure 4 is a space-time diagram depicting space-like separated polarization measurements by Alice and Bob in regions 1,2 respectively on a photon pair. Time is represented in the laboratory frame. At $t_{1}$ each takes the polarization state of the $L-R$ photon pair to be $\Phi ^{+}=1/\sqrt{2}(|HH\rangle +|VV\rangle )$. What justifies this quantum state assignment is their knowledge of the conditions under which the photon pair was produced—perhaps by parametric down-conversion of laser light by passage through a non-linear crystal. Such knowledge depends on information about the physical systems involved in producing the pair. This state assignment is backed by significant magnitude claims about such systems—if anything counts as a claim about beables recognized by quantum mechanics, these do. Then Alice measures polarization of photon $L$ along axis $a{{}^\circ}$, Bob measures polarization of photon $R$ along axis $b{{}^\circ}$. Decoherence at the photon detectors licenses both of them to treat the Born rule measure corresponding to state assignment $\Phi ^{+}$ as a probability distribution over significant canonical magnitude claims about the values of $A,B$.
At $t_{1}$ Alice and Bob should both assign state $\Phi ^{+}$ and apply the Born Rule (\[Born\]) to calculate the joint probability $\Pr_{a,b}^{\Phi
^{+}}(A,B)$ assigned to claims about the values of magnitudes $A,B$—claims that we may, but need not, choose to describe as records of polarization measurements along both $a{{}^\circ},b{{}^\circ}$ axes—and hence the (well-defined) conditional probability $\Pr_{a,b}^{\Phi ^{+}}(A,B)/\Pr_{a,b}^{\Phi ^{+}}(B)=|\langle A|B\rangle
|^{2} $. Each will then expect the observed non-localized correlations between the outcomes of polarization measurements in regions 1,2 when the detectors are set along the $a{{}^\circ},b{{}^\circ}$ axes. They will expect analogous correlations as these axes are varied, and so they will expect (\[CHSH violation\]) (violating the CHSH inequality) in such a scenario.
At $t_{2}$, after recording polarization $V_{B}$ for $R$, Bob should assign pure state $|V_{B}\rangle $ to $L$ and use the Born Rule to calculate the probabilities $\Pr_{a}^{|V_{B}\rangle }(A)=|\langle A|V_{B}\rangle |^{2}$ for Alice to record polarization of $L$ with respect to the $a{{}^\circ}$-axis. At $t_{2}$, Alice should assign state $\hat{\rho}={\frac12}\hat{1}$ to $L$, and use the Born Rule to calculate probability $\Pr_{a}^{\rho }(A)={\frac12}$ that she will record either polarization of $L$ with respect to the $a{{}^\circ}$-axis. In this way each forms expectations as to the outcome of Alice’s measurement on the best information available to him or her at $t_{2}$. Alice’s statistics of her outcomes in many repetitions of the experiment are just what her quantum state ${\frac12}\hat{1}$ for $L$ led her to expect, thereby helping to explain her results. Bob’s statistics for Alice’s outcomes (in many repetitions in which his outcome is $V_{B}$) are just what his quantum state $|V_{B}\rangle $ for $L$ led him to expect, thereby helping him to explain Alice’s results.
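The two state assignments, and the probabilities each yields for Alice's outcome, can be reproduced in a few lines. A sketch in numpy; the axis-$a$ ket convention $\cos a\,|H\rangle+\sin a\,|V\rangle$ is an assumption of the illustration:

```python
import numpy as np

H = np.array([1.0, 0.0]); V = np.array([0.0, 1.0])
phi_plus = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)
rho_pair = np.outer(phi_plus, phi_plus)

# Alice's state for L at t2 (no access to Bob's record): trace out R.
rho_alice = np.einsum('ikjk->ij', rho_pair.reshape(2, 2, 2, 2))
assert np.allclose(rho_alice, 0.5 * np.eye(2))  # the maximally mixed state (1/2)1

# Bob's state for L at t2, having recorded V_B for R: the pure state |V_B>.
rho_bob = np.outer(V, V)

def pr_parallel(a_deg, rho_L):
    """Born probability that Alice records polarization parallel to the
    a-axis, given a density operator rho_L for photon L
    (illustrative convention: axis-a ket = cos(a)|H> + sin(a)|V>)."""
    a = np.radians(a_deg)
    ket_a = np.cos(a) * H + np.sin(a) * V
    return float(np.trace(rho_L @ np.outer(ket_a, ket_a)).real)

# Alice's own probability is 1/2 whatever axis she chooses;
# Bob's probability differs (= |<A|V_B>|^2), yet both assignments are correct.
print(pr_parallel(30.0, rho_alice))   # -> 0.5
print(pr_parallel(30.0, rho_bob))     # -> sin^2(30 deg) = 0.25
```

The point of the computation is that the two differently situated agents correctly assign different states to the very same photon, and each state yields the statistics that agent will observe.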
There is no question as to which, if either, of the quantum states $|V_{B}\rangle $, ${\frac12}\hat{1}$ was the *real* state of Alice’s photon at $t_{2}$. Neither of the different probabilities $|\langle A|V_{B}\rangle |^{2}$ or ${\frac12}$ represents a unique physical propensity at $t_{2}$ of Alice’s outcome—even though neither its chance at $p$ nor its chance at $q$ is subjective. This discussion applies independently of the time-order in Alice’s frame of the regions 1, 2 in Figure 1: had she been moving away from Bob fast enough, she would have represented 1 as earlier than 2.
It is widely acknowledged that one cannot explain a phenomenon merely by showing that it was to be expected in the circumstances. To repeat a hackneyed counterexample, the falling barometer does not explain the coming storm even though it gives one reason to expect a storm in the circumstances. As a joint effect of a common cause, a symptom does not explain its other independent effects. But a system’s being in a quantum state at a time is not a symptom of causes specified by the true magnitude claims that back it since it is not an event distinct from the conditions those claims describe. Each event figuring in Bell’s particular correlations is truly described by a canonical magnitude claim. We may choose to describe some, but not all, such events as an outcome of a quantum measurement on a system: the probabilities of many of those events depend counterfactually on the particular entangled state assigned at $t_{1}$—if that state had been different, so would these probabilities. But this dependence is not causal. In quantum mechanics, neither states nor probabilities are the sorts of things that can bear causal relations: in Bell’s terminology, they are not beables.
When relativized to the physical situation of an actual or hypothetical agent, a quantum state assignment is objectively true or false—which depends on the state of the world. More specifically, a quantum state assignment is made true by the true magnitude claims that back it. One true magnitude claim backing the assignment of $|V_{B}\rangle $ to $L $ at $q$ reports the outcome of Bob’s polarization measurement in region 2 of Figure 1: but there are others, since this would not have been the correct assignment had the correct state assignment at $p^{\prime }$ been $|H_{A}\rangle |V_{B}\rangle $. We also need to ask for the backing of the entangled state $\Phi ^{+}$.
There are many ways of preparing state $\Phi ^{+}$, and this might also be the right state to assign to some naturally occurring photon pairs that needed no preparation. In each case there is a characterization in terms of some set of true magnitude claims describing the systems and events involved: these back the state assignment $\Phi ^{+}$. It may be difficult or even impossible to give this characterization in a particular case, but that is just an epistemic problem which need not be solved even by experimenters skilled in preparing or otherwise assigning this state. $\Phi
^{+}$ will be correctly assigned at $p^{\prime }$ only if some set of true magnitude claims backing that assignment is accessible from $p^{\prime }$: events making them true must lie in the backward light-cone of $p^{\prime }$.
A quantum state counterfactually depends on the true magnitude claims that back it in somewhat the same way that a dispositional property depends on its categorical basis. The state $\Phi ^{+}$ may be backed by alternative sets of true magnitude claims just as a person may owe his immunity to smallpox to any of a variety of categorical properties. If Walt owes his smallpox immunity to antibodies, his possession of antibodies does not cause his immunity: it is what his immunity consists in. No more is the state $\Phi ^{+}$ caused by its backing magnitude claims: a statement assigning state $\Phi ^{+}$ is true only if backed by some true magnitude claims of the right kind. A quantum state is counterfactually dependent on whatever magnitude claims back it because backing is a kind of determination or constitution relation, not because it is a causal relation.
In this view, a quantum state causally depends neither on the physical situation of the (hypothetical or actual) agent assigning it nor on any of its backing magnitude claims. The correct state $|V_{B}\rangle $ to be assigned to $L$ at $q$ is not causally dependent on anything about Bob’s physical situation even if he happens to be located at $q$: it is not causally dependent on the outcome of Bob’s polarization measurement in region 2: and it is not causally dependent on how Bob sets his polarizer in region 2. But a quantum state assignment is not just a function of the subjective epistemic state of any agent: If Bob or anyone else were to assign a state other than $|V_{B}\rangle $ to $L$ at $q$ he or she would be making a mistake.
The quantum derivation of (\[CHSH violation\]) shows not only that Bell’s particular correlations were to be expected, but also what they depend on. They depend counterfactually but not causally on the quantum state $\Phi ^{+}$, and they also depend counterfactually on that state’s backing conditions, as described by true magnitude claims. The status of the quantum state disqualifies it from participation in causal relations, but true magnitude claims may be taken to describe beables recognized by quantum mechanics. To decide which conditions backing any of the states involved in their explanation describe causes of Bell’s particular correlations or the events they correlate we need to return to the connection between causation and chance.
The intuition that, other things being equal, a cause raises (or at least alters) the chance of its effect is best cashed out in terms of an interventionist counterfactual: $c$ is a cause of $e$ just in case $c,e$ are distinct actual events and there is some conceivable intervention on $c$ whose occurrence would have altered the chance of $e$. Such an intervention need not be the act of an agent: it could involve any modification in $c$ of the right kind. Woodward \[2003, p. 98\] is one influential attempt to say what kind of external influence this would involve. Note that Einstein’s formulation of a principle of local action also appeals to intervention:
> The following idea characterizes the relative independence of objects far apart in space ($A$ and $B$): external influence on $A$ has no *immediate* (“unmittelbar”) influence on $B$; this is known as the ‘principle of local action’ (Einstein \[1948, pp. 321-2\])
I used the idea of intervention to argue against any causal dependence between events in $\mathbf{1}$ and $\mathbf{2}$: anyone who accepts quantum mechanics accepts that it makes no sense to speak of an intervention in one of these regions that would affect a hypothetical agent’s estimated chance of what happens in the other. So even though the outcome $e_{B}$ in $\mathbf{2}$ backs the assignment $|V_{B}\rangle $ to $L$ at $q$, the outcome in $\mathbf{1}$ does not depend causally on $e_{B}$: for similar reasons, neither does the outcome in $\mathbf{2}$ depend on that in $\mathbf{1}$. The same idea can now be used to show that both these outcomes *do* depend causally on whatever event $o$ in the overlap of the backward light cones of $\mathbf{1}$ and $\mathbf{2}$ warranted assignment of state $\Phi
^{+}$—an event truly described by magnitude claims that backed this assignment.
Assume first that the events $a,b$ at which the polarizers are set on a particular occasion occur in the overlap of the backward light cones of $\mathbf{1}$ and $\mathbf{2}$: this assumption will later be dropped. Let $r$ be a point outside the future light cones of $e_{A},e_{B}$ but within the future light cone of the event $o$. Let $e_{A}\uplus e_{B}$ be the event of the joint occurrence of $e_{A},e_{B}$. This is an event of a type to which the Born rule is applicable: the application yields its chance $Ch_{r}(e_{A}\uplus e_{B})=\Pr_{a,b}^{\Phi ^{+}}(V_{A},V_{B})={\frac12}\cos ^{2}\angle ab$. We already saw that $Ch_{r}(e_{A})=\Pr_{a,b}^{\Phi ^{+}}(V_{A})={\frac12}=\Pr_{a,b}^{\Phi ^{+}}(V_{B})=Ch_{r}(e_{B})$. The event $o$ affects all these chances: had a different event $o^{\prime }$ occurred backing the assignment of a different state (e.g. $|H_{A}\rangle |V_{B}\rangle $), or no event backing any state assignment, then any or all of these chances could have been different. Since it makes sense to speak of an agent altering the chance of event $o$ at $s$ in its past light cone, we have$$\begin{gathered}
Ch_{r}(e_{A}\uplus e_{B}|do-o)\neq Ch_{r}(e_{A}\uplus e_{B}) \\
Ch_{r}(e_{A}|do-o)\neq Ch_{r}(e_{A}) \\
Ch_{r}(e_{B}|do-o)\neq Ch_{r}(e_{B})\end{gathered}$$where $do-o$ means $o$ is the result of an intervention without which $o$ would not have occurred. It follows that $e_{A},e_{B},e_{A}\uplus e_{B}$ are each causally dependent on $o$: $o$ is a common cause of $e_{A},e_{B}$ even though the probabilities of events of these types do not factorize. The same reasoning applies to each registered photon pair on any occasion at any settings $a,b$. So the second requirement on explanation is met: the separate recording events, as well as the event of their joint occurrence, depend *causally* on the event $o$ that serves to back assignment of state $\Phi ^{+}$ to the photon pairs involved in this scenario.
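A short check makes the last clause vivid: the common cause $o$ fixes all three chances, yet the joint chance is not the product of the marginals. A sketch using the $\Phi^{+}$ chances quoted above ($Ch_{r}(e_{A}\uplus e_{B})={\frac12}\cos^{2}\angle ab$, $Ch_{r}(e_{A})=Ch_{r}(e_{B})={\frac12}$; the function name is mine):

```python
import math

def joint_chance(angle_ab_deg):
    """Ch_r(e_A joint e_B) = (1/2) cos^2(angle ab), from the Born rule
    applied to the state Phi+ backed by the common-cause event o."""
    return 0.5 * math.cos(math.radians(angle_ab_deg)) ** 2

marginal = 0.5   # Ch_r(e_A) = Ch_r(e_B) = 1/2 at every setting

# o is a common cause, but the chances do not factorize:
# joint != marginal * marginal except at the single angle 45 degrees.
for ab in (0.0, 22.5, 45.0, 67.5, 90.0):
    print(ab, joint_chance(ab), marginal * marginal)

assert abs(joint_chance(0.0) - 0.5) < 1e-12    # perfect correlation
assert abs(joint_chance(45.0) - 0.25) < 1e-12  # the one factorizing angle
```

At parallel settings the joint chance is ${\frac12}$, twice the product ${\frac14}$ of the marginals, which is exactly the failure of factorizability that the argument says is compatible with $o$ being a common cause.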
By rejecting any possibility of an intervention expressed by $do-e_{B} $ or $do-\bar{e}_{B}$, anyone accepting quantum mechanics should deny that $Ch_{p(q)}(e_{A}|do-e_{B})\neq Ch_{p(q)}(e_{A}|do-\bar{e}_{B})$ is true or even meaningful. Nevertheless $Ch_{q}(e_{A}|e_{B})\neq Ch_{q}(e_{A}|\bar{e}_{B})$: in this sense $e_{A}$ depends counterfactually but not causally on $e_{B}$. Does such counterfactual dependence provide reason enough to conclude that $e_{B}$ is part of the explanation of $e_{A}$? An obvious objection is that because of the symmetry of the situation with $\mathbf{1}$ and $\mathbf{2}$ space-like separated there is an equally strong reason to conclude that $e_{A}$ is part of the explanation of $e_{B}$, contrary to the fundamentally asymmetric nature of the explanation relation. But one can see that this objection is not decisive by paying attention to the contrasting epistemic perspectives associated with the different physical situations of hypothetical agents Alice\* and Bob\* with world-lines confined to interiors of the light cones of $\mathbf{1}$, $\mathbf{2}$ respectively.
As his world-line enters the future light cone of $\mathbf{2}$ Bob\* comes into position to know the outcome at $\mathbf{2}$ while still physically unable to observe the outcome at $\mathbf{1}$. His epistemic situation is then analogous to that of a hypothetical agent Chris in a world with a Newtonian absolute time, in a position to know the outcome of past events but physically unable to observe any future event. Many have been tempted to elevate the epistemic asymmetry of Chris’s situation into a global metaphysical asymmetry in which the future is open while the past is fixed and settled. It is then a short step to a metaphysical view of explanation as a productive relation in which the fixed past gives rise to the (otherwise) open future, either deterministically or stochastically.[^9]
Such a move from epistemology to metaphysics should always be treated with deep suspicion. But in this case it is clearly inappropriate in a relativistic space-time since the “open futures” of agents like Alice\* and Bob\* cannot be unified into *the* open future. This prompts a retreat to a metaphysically drained view of explanation as rooted in cognitive concerns of a physically situated agent, motivated by the need to unify, extend and efficiently deploy the limited information to which it has access.
For many purposes it is appropriate to regard the entire scientific community as a (spatially) distributed agent, and to think of the provision of scientific explanations as aiding *our* collective epistemic and practical goals. This is appropriate insofar as localized agents share an epistemic perspective, with access to the same information about what has happened. But Alice\* and Bob\* do not have access to the same information at time $t_{2}$ or $t_{3}$ since they are then space-like separated. So it is entirely appropriate for Bob\* to use $e_{B}$ to explain $e_{A}$ and for Alice\* to use $e_{A}$ to explain $e_{B}$. This does not make explanation a subjective matter, for two reasons. First, there is an objective physical difference between the situations of Alice\* and Bob\* underlying the asymmetry of their epistemic perspectives; second, by adopting either perspective in thought (as I have encouraged the reader to do), anyone can come to appreciate how each explanation can help make Bell’s correlations seem less puzzling. Admittedly, neither explanation is very deep, and I will end by noting one puzzle that remains.
By meeting both minimal requirements on explanation, the application of quantum theory enables us to explain Bell’s correlations. But is this explanation local? Several senses of locality are relevant here. The explanation involves no superluminal causal dependence. As stated, the condition of Local Causality is not applicable to the quantum mechanical explanation since it presupposes the uniqueness of the probability to which it refers. (\[FactorizabilitySU\]) (and presumably also (\[Factorizability\])) are violated, but Bell (\[2004, p. 243\]) preferred to see (\[Factorizability\]) not as a formulation but as a consequence of ‘local causality’: I have argued that it is not. To retain its connection to (IP), a version of Local Causality should speak of chances rather than general probabilities. A version that equates the unconditional chance of $e_{A}$ to its chance conditional on $e_{B}$ holds, no matter how these chances are relativized to the same space-time point. But a version that is clearly motivated by the intuitive principle (IP) would rather equate the unconditional chance of $e_{A}$ to its chance conditional on *an intervention that produces* $e_{B}$. However, this version is inapplicable since acceptance of quantum mechanics renders senseless talk of interventions producing $e_{B}$.
The explanation one can give by applying quantum mechanics appeals to *chances* that are localized, insofar as they are assigned at space-time points that may be thought to offer the momentary perspective of a hypothetical idealized agent whose credences they would guide. But these chances are not quantum beables, and they are not physical propensities capable of manifestation at those locations (or anywhere else). The only causes figuring in the explanation are localized where the physical systems whose magnitudes back the assignment of state $\Phi ^{+}$ are located. That chances are not propensities becomes clear when one drops the assumption that $a,b$ occur in the overlap of the backward light cones of $\mathbf{1}$ and $\mathbf{2}$, as depicted in Figure 5.
If $a,b$ are set at the last moment, the chance of $e_{A}\uplus e_{B}$ that figures in its explanation may be located *later* in the laboratory frame than $e_{A},e_{B}$. If chance were a physical propensity it should act *before* its manifestation. But chances aren’t *propensities*, proximate causes of localized events. They are a localized agent’s objective guide to credence about epistemically inaccessible events.
I will conclude by noting one sense in which the explanation one can give using quantum mechanics is not local as it stands. Though it is a (non-factorizable) cause of events of types $A,B$ in regions $\mathbf{1,}$ $\mathbf{2}$ respectively, the event $o$ is not connected to its effects by any spatiotemporally continuous causal process described by quantum mechanics. This puts the explanation in tension with the *first* conjunct of Bell’s (\[2004, p. 239\]) intuitive locality principle (IP): "*The direct causes (and effects) of events are near by, and even the indirect causes (and effects) are no further away than permitted by the velocity of light.*"
$o$ is separated from both recording events in regions $\mathbf{1,}$ $\mathbf{2}$ in time, and from at least one in space. If $o$ is not merely a cause but a *direct* cause of these events then it violates the first conjunct of (IP) because it is *not* nearby. But if one adopts the present view of quantum mechanics, the theory has no resources to describe any causes mediating between $o$ and these recording events. So while their quantum explanation is not explicitly inconsistent with the first conjunct of (IP), mediating causes could be found only by constructing a new theory. Bell’s work has clearly delineated the obstacles that would have to be overcome on that path.
[99]{} Bell, J.S. \[1964\]: On the Einstein-Podolsky-Rosen paradox. *Physics* **1**, 195-200.
Bell, J.S. \[1975\]: The theory of local beables. TH-2053-CERN, July 28.
Bell, J.S. \[1981\]: Bertlmann’s socks and the nature of reality. *Journal de Physique*, Colloque 2, suppl. au numero 3, Tome 42, C2 41-61.
Bell, J.S. \[1985\]: The theory of local beables. *Dialectica* **39**, 86-96.
Bell, J.S. \[1990\]: La nouvelle cuisine. In Sarlemijn and Krose, eds., *Between Science and Technology*, 97-115.
Bell, J.S. \[2004\]: *Speakable and Unspeakable in Quantum Mechanics.* Second Revised Edition. Cambridge: Cambridge University Press.
Eberhard, P. \[1978\]: Bell’s theorem and the different concepts of locality. *Nuovo Cimento*, **B46**, 392-419.
Einstein, A. \[1948\]: Quanten-mechanik und Wirklichkeit. *Dialectica* **2**, 320-23.
Einstein, A., Podolsky, B., and Rosen, N. \[1935\]: Can quantum-mechanical description of physical reality be considered complete. *Physical Review* **47**, 777-80.
Healey, R. \[2012\]: Quantum theory: a pragmatist approach. *British Journal for the Philosophy of Science* **63**, 729-71.
Ismael, J. \[2008\]: Raid! Dissolving the big, bad bug. *Nous* **42**, 292-307.
Maudlin, T. \[2007\]: *The Metaphysics within Physics*. Oxford: Oxford University Press.
Maudlin, T. \[2011\]: Three roads to objective probability. In Beisbart, C. and Hartmann, S. eds. \[2011\]. *Probabilities in Physics*. Oxford: Oxford University Press, 293-319.
Norsen, T. \[2011\]: John S. Bell’s Concept of Local Causality. *American Journal of Physics* **79**, 1261-75.
Price, H. \[2012\]: Causation, Chance and the rational significance of supernatural evidence. *Philosophical Review* **121**, 483-538.
Seevinck, M.P., Uffink, J. \[2011\]: Not throwing out the baby with the bathwater: Bell’s condition of local causality mathematically ‘sharp and clean’, in D. Dieks *et al*. *Explanation, Prediction and Confirmation*. Berlin: Springer, 425-50.
Woodward, J. \[2003\]: *Making Things Happen*. Oxford: Oxford University Press.
[^1]: While Bell did not make this explicit in 1975, his 1990 paper also notes the analogy with source-free Maxwellian electromagnetism, and there he does say:
> The deterministic case is a limit of the probabilistic case, the probabilities becoming delta functions. \[2004, p. 240\]
[^2]: Both $\lambda $ and $c$ are assumed confined to region 3 (now symmetrically extended so as also to close the backward light cone of 2): $c$ stands for the values of magnitudes characterizing the experimental set-up in terms admitted by ordinary quantum mechanics, while $\lambda $ specifies the values of magnitudes introduced by the theory supposed to complete quantum mechanics. It will not be necessary to mention $c$ in what follows.
[^3]: I use each of ‘$A$’,‘$B$’ to denote a random variable with values {$V_{A},H_{A}$}, {$V_{B},H_{B}$} respectively. $e_{A}$ ($\bar{e}_{A}$) denotes the event in region 1 of Alice’s photon being registered as vertically (horizontally) polarized. $e_{B}$ ($\bar{e}_{B}$) denotes the event in region 2 of Bob’s photon being registered as vertically (horizontally) polarized.
[^4]: I use the tenseless present rather than the more idiomatic future tense here for reasons that will soon become clear.
[^5]: See Ismael \[2008\]. I have slightly altered her notation to avoid conflict with my own. Here ‘$e$’ ambiguously denotes both an event and the proposition that it occurs. $Cr$ stands for credence: an agent’s degree of belief in a proposition, represented on a scale from 0 to 1 and required to conform to the standard axioms of probability.
[^6]: A unitary evolution $\Phi ^{+}\Rightarrow \Xi ^{+}$ corresponding to a *local* interaction there would still yield $Pr_{b}^{\Xi ^{+}}(H_{B})={\frac12}$.
[^7]: Compare Bell \[2004, p. 55\].
[^8]: Compare Bell \[2004, p. 53\]: “...this does not bother us if we do not grant beable status to the wave-function.”
[^9]: See, for example, Maudlin \[2007, pp. 173-8\].
---
abstract: 'The origin of the Lorentz and CPT violations radiatively induced in perturbative evaluations of an extended version of QED is investigated. Using a very general calculational method for the manipulations and calculations involving divergent amplitudes, we clearly identify the possible sources of contributions to the violating terms. We show that consistency in the perturbative calculations, in a broader sense, leaves no room for radiatively induced contributions, in accordance with what was previously conjectured and recently advocated by some authors on the basis of general arguments.'
author:
- 'O.A. Battistel\* and G. Dallabona\*\*'
title: '[**Consistency in Perturbative Calculations and Radiatively Induced Lorentz and CPT Violations**]{}'
---
\*Dept. of Physics-CCNE, Universidade Federal de Santa Maria
P.O. Box 5093, 97119-900, Santa Maria, RS, Brazil
orimar@ccne.ufsm.br
\*\*Dept. of Physics-ICEx, Universidade Federal de Minas Gerais
P.O. Box 702, 30161-970, Belo Horizonte, MG, Brazil
dalla@fisica.ufmg.br
0.5cm
PACS numbers: 11.15.Bt; 11.30.Qc; 11.30.Cp; 11.30.Er
The implications of Lorentz and CPT symmetry breaking have received a great deal of attention in the last few years. Most of these investigations were dedicated to the extended QED sector of the Extended Standard Model constructed by Colladay and Kostelecky [@Colladay-Kostelecky1]. They developed a conceptual framework and a procedure for the treatment of spontaneous CPT and Lorentz violation within a context where the gauge structure and renormalizability are maintained [@Colladay-Kostelecky2]. The Lagrangian of the extended QED sector consists of the usual QED Lagrangian supplemented by the breaking terms $$L^{SB}=-a_{\mu }\bar{\Psi}\gamma ^{\mu }\Psi -b_{\mu }\bar{\Psi}\gamma
_{5}\gamma ^{\mu }\Psi +\frac{1}{2}k^{\alpha }\epsilon _{\alpha \lambda \mu
\nu }A^{\lambda }F^{\mu \nu }.$$ In the above expression, $a_{\mu }$ and $b_{\mu }$ are real, constant, prescribed four-vectors. The coupling $k_{\alpha }$ is also real and has dimension of mass. The matrix $\gamma _{5}$ is the usual Hermitian Dirac matrix, related to the totally antisymmetric tensor $\epsilon _{\mu \nu \alpha \beta }$ through $tr\{\gamma _{5}\gamma _{\alpha }\gamma _{\beta }\gamma _{\mu }\gamma _{\nu }\}=4i\epsilon _{\alpha \beta \mu \nu }$. The mathematical structure of the purely photonic breaking term in the above Lagrangian allows us to identify an important property of the modified QED theory. Because it changes by a total derivative under a gauge transformation of the potential $(A_{\mu }\rightarrow A_{\mu }+\partial _{\mu }\Lambda )$, the action is not modified and the resulting equations of motion remain the same as those of the original theory. Such behavior is precisely what characterizes a Chern-Simons (CS) form. There are nevertheless many phenomenological consequences associated with the modified theory [@Colladay-Kostelecky1] -[@Coleman-Glashow]. However, all present experimental and theoretical investigations indicate that the value of the $k_{\mu }$ coupling compatible with the phenomenology is identically zero. This statement does not completely eliminate the possibility of Lorentz and CPT breaking effects in the modified theory: even if the $k_{\mu }$ coupling vanishes at tree level, such effects can, in principle, be induced by radiative contributions. In the extended QED theory, radiative corrections coming from the fermionic sector can induce contributions of the CS form [@Colladay-Kostelecky2]. They could appear when the photon propagator is corrected by the $b$-breaking term. From the calculational point of view, we have to evaluate the usual QED one-loop vacuum polarization tensor with the free spin-$1/2$ fermion propagator, obeying the Dirac equation, modified by the inclusion of the $b_{\mu }$ coupling: $$G(k)=\frac{i}{\not{k}-m-\not{b}\gamma _{5}}.$$ The corresponding one-loop amplitude can be written as $$\Pi ^{\mu \nu }(p)=\int \frac{d^{4}k}{(2\pi )^{4}}tr\left\{ \gamma ^{\mu
}G(k)\gamma ^{\nu }G(k+p)\right\} .$$ The evaluation of the above amplitude has been performed by many authors, and different aspects of the calculations have been emphasized [@Colladay-Kostelecky2][@Jackiw1] -[@Kostelecky3] [@Coleman-Glashow][@Bonneau] -[@Orimar1-2]. For a first set of authors and works, the obtained result is nonzero but essentially ambiguous. As a consequence, a definite value can be stated only after an arbitrary set of choices is made, so the results presented by these authors represent only a particular choice for the involved ambiguities. More recently, a second set of authors have argued, using very general arguments, that only a definite zero value for the radiatively induced CS term is reasonable. Among them, G. Bonneau [@Bonneau] shows that, if the theory is correctly defined by taking into account its Ward identities and appropriate normalization conditions, the CS term is absent in a non-ambiguous way. On the other hand, C. Adam and F.R. Klinkhamer [@Adam], by requiring causality in addition to the validity of perturbation theory, concluded that no CS term can exist. The same result was previously conjectured by Colladay and Kostelecky [@Colladay-Kostelecky2] and advocated by Coleman and Glashow [@Coleman-Glashow]. A question therefore remains: Why do general physical arguments lead to a definite zero value while the perturbative calculations do not? Moreover, given that a nonzero Chern-Simons term represents Lorentz and CPT violation, we can ask which steps in the perturbative evaluation give rise to the non-zero contribution. In other words, which assumptions, made in the intermediate steps, are the origin of the Lorentz and CPT violation in the perturbative calculation, and can a consistent handling of divergences avoid them so as to obtain the expected value for the CS term, the definite zero value?
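The trace convention $tr\{\gamma _{5}\gamma _{\alpha }\gamma _{\beta }\gamma _{\mu }\gamma _{\nu }\}=4i\epsilon _{\alpha \beta \mu \nu }$ quoted above can be checked with an explicit matrix representation. The following sketch (an illustration, not part of the original text, assuming the Dirac representation, $\gamma _{5}=i\gamma ^{0}\gamma ^{1}\gamma ^{2}\gamma ^{3}$ and the sign convention $\epsilon _{0123}=+1$) verifies the trace for every permutation of the covariant indices:

```python
import numpy as np
from itertools import permutations

# Dirac representation of the gamma matrices, metric signature (+,-,-,-)
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)
g_up = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)] + \
       [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
g_dn = [g_up[0], -g_up[1], -g_up[2], -g_up[3]]   # lower the index with the metric
g5 = 1j * g_up[0] @ g_up[1] @ g_up[2] @ g_up[3]  # assumed gamma_5 convention

def eps(idx):
    """Levi-Civita symbol with eps_{0123} = +1 (0 on repeated indices)."""
    idx, sign = list(idx), 1
    for a in range(4):
        for b in range(a + 1, 4):
            if idx[a] == idx[b]:
                return 0
            if idx[a] > idx[b]:
                sign = -sign
    return sign

# check tr{g5 g_a g_b g_m g_n} = 4i eps_{abmn} for all 24 index permutations
for p in permutations(range(4)):
    tr = np.trace(g5 @ g_dn[p[0]] @ g_dn[p[1]] @ g_dn[p[2]] @ g_dn[p[3]])
    assert abs(tr - 4j * eps(p)) < 1e-12
print("gamma_5 trace convention verified")
```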
The main point is the following: if the CS term is non-vanishing, it represents a Lorentz and CPT symmetry breaking induced by radiative corrections in the extended QED theory. However, in order to consider such a phenomenon as fundamental, it must not be a simple consequence of a choice made for the arbitrariness involved; it should emerge as an unavoidable aspect of the calculations. In other words, defining a calculational scheme in which some symmetries are violated is a very simple job, but this does not mean that the corresponding phenomenon must exist. We have to bring this particular calculation into accordance with the other perturbative calculations, especially those which are necessary for the construction of the (symmetric) renormalizable Standard Model. This means treating all the involved mathematical structures in the same way they are treated in the symmetric theory.
The purpose of the present work is precisely to clarify these points. We will use a very general calculational strategy to handle divergences [@Orimar2] in order to isolate, in a very clear way, the possible contributions to the CS term in the perturbative evaluation, and to show that when the interpretation required by consistency in perturbative calculations, in a broader sense, is adopted, an exactly zero value for the Lorentz- and CPT-violating contribution is obtained.
In order to evaluate the CS term we have to consider $\gamma _{5}$-odd divergent structures, and therefore the Dimensional Regularization (DR) technique [@DR] is excluded from the possible tools. To make this point clear we write the exact propagator $G(k)$ of expression (3), given in expression (2), in the form [@Jackiw1] $$G(k)=S(k)+G_{b}(k),$$ with $$G_{b}(k)=\frac{1}{\not{k}-m-\not{b}\gamma _{5}}\not{b}\gamma _{5}S(k),$$ and $S(k)$ being the usual spin-$1/2$ fermion propagator. After the substitution of the above expression, the $\Pi _{\mu \nu }(p)$ amplitude can be split into three terms: $\Pi ^{\mu \nu }=\Pi _{0}^{\mu \nu }+\Pi _{b}^{\mu \nu }+\Pi _{bb}^{\mu \nu }$. The first contribution, $\Pi _{0}^{\mu \nu },$ is precisely the pure QED vacuum polarization tensor. The term linear in $b$ is given by $$\Pi _{b}^{\mu \nu }(p)=\int \frac{d^{4}k}{(2\pi )^{4}}tr\left\{ \gamma ^{\mu }S(k)\gamma ^{\nu }G_{b}(k+p)+\gamma ^{\mu }G_{b}(k+p)\gamma ^{\nu }S(k)\right\} .$$ To evaluate $\Pi _{b}^{\mu \nu }$ to lowest order in $b$, we simply replace expression (5) by $$G_{b}(k)=-iS(k)\not{b}\gamma _{5}S(k).$$ The corresponding expression for $\Pi _{b}^{\mu \nu }$ may then be written as $$\Pi _{b}^{\mu \nu }(p)\simeq b_{\lambda }\Pi ^{\mu \nu \lambda }(p),$$ where $$\begin{aligned}
\Pi ^{\mu \nu \lambda }(p) &=&(-i)\int \frac{d^{4}k}{(2\pi )^{4}}tr\left\{
\gamma ^{\mu }S(k)\gamma ^{\nu }S(k+p)\gamma ^{\lambda }\gamma
_{5}S(k+p)\right. + \nonumber \\
&&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+\left. \gamma ^{\mu
}S(k)\gamma ^{\lambda }\gamma _{5}S(k)\gamma ^{\nu }S(k+p)\right\} .\end{aligned}$$ So, the crucial mathematical structure we need to evaluate in order to get the value of the CS term is an $AVV$ triangle amplitude. This three-point function is a Green’s function of the symmetric theory (the renormalizable Standard Model). There are many kinds of arbitrariness involved in the evaluation of such an amplitude. The requirement of consistency in the perturbative calculation implies that the choices for the arbitrariness present in the above expression must be made consistently with those adopted in the construction of the symmetric Standard Model. Given this argument, we will consider the most general mathematical expression and return to the specific situation of eq.(9) only after all considerations relative to consistency in the evaluation of perturbative amplitudes have been made.
We start by the definition $$T_{\lambda \mu \nu }^{AVV}=\int \frac{d^{4}k}{\left( 2\pi \right) ^{4}}%
tr\left\{ \gamma _{\mu }\left[ (\not{k}+\not{k}_{1})-m\right] ^{-1}\gamma
_{\nu }\left[ (\not{k}+\not{k}_{2})-m\right] ^{-1}i\gamma _{\lambda }\gamma
_{5}\left[ (\not{k}+\not{k}_{3})-m\right] ^{-1}\right\} ,$$ which corresponds to the most general expression for the direct diagram. In the above expression $k_{1},\;k_{2}$ and $k_{3}$ stand for arbitrary choices of the internal line momenta. They are related to the external momenta by their differences, which we take as $k_{3}-k_{1}=p$, $k_{1}-k_{2}=p^{\prime }$ and $k_{3}-k_{2}=p^{\prime }+p=q$. Their sums, on the other hand, are undefined quantities. If we are concerned with consistency in the evaluation of this superficially linearly divergent structure, the first step is to identify any constraints that the amplitude should obey in spite of its divergent character. Such constraints are invariably materialized through relations among other amplitudes and/or by fixing a kinematical limit through a low-energy theorem. Since, in principle, all the Green’s functions of the perturbative expansion are related in some way, constraints imposed on general physical grounds upon a particular amplitude can be used to restrict others. For the $AVV$ we note, for example, the identity $$\begin{aligned}
&&(k_{3}-k_{2})_{\lambda }\left\{ \gamma _{\nu }\frac{1}{(\not{k}+{\not{k}}%
_{2})-m}i\gamma _{\lambda }\gamma _{5}\frac{1}{(\not{k}+{\not{k}}_{3})-m}%
\gamma _{\mu }\frac{1}{(\not{k}+{\not{k}}_{1})-m}\right\} =-\left\{ i\gamma
_{\nu }\gamma _{5}\frac{1}{(\not{k}+{\not{k}}_{3})-m}\gamma _{\mu }\frac{1}{(%
\not{k}+{\not{k}}_{1})-m}\right\} \nonumber \\
&&-2mi\left\{ \gamma _{\nu }\frac{1}{(\not{k}+{\not{k}}_{2})-m}\gamma _{5}%
\frac{1}{(\not{k}+{\not{k}}_{3})-m}\gamma _{\mu }\frac{1}{(\not{k}+{\not{k}}%
_{1})-m}\right\} +\left\{ \gamma _{\nu }\frac{1}{(\not{k}+{\not{k}}_{2})-m}%
i\gamma _{\mu }\gamma _{5}\frac{1}{(\not{k}+{\not{k}}_{1})-m}\right\} .\end{aligned}$$ The above identity, which has nothing to do with divergences, can be converted to a relation among perturbative Green’s functions if we take the traces operation in both sides and next integrate in the momentum $k$. This gives us $$\left( k_{3}-k_{2}\right) _{\lambda }T_{\lambda \mu \nu }^{AVV}\left(
k_{1},k_{2},k_{3};m\right) =-2miT_{\mu \nu }^{PVV}\left(
k_{1},k_{2},k_{3};m\right) +T_{\mu \nu }^{AV}\left( k_{1},k_{2};m\right)
-T_{\nu \mu }^{AV}\left( k_{3},k_{1};m\right) ,$$ where we have introduced the $PVV$ three-point function defined by $$T_{\mu \nu }^{PVV}\left( k_{1},k_{2},k_{3};m\right) =\int \frac{d^{4}k}{%
(2\pi )^{4}}Tr\left\{ \gamma _{5}\frac{1}{(\not{k}+{\not{k}}_{3})-m}\gamma
_{\mu }\frac{1}{(\not{k}+{\not{k}}_{1})-m}\gamma _{\nu }\frac{1}{(\not{k}+{%
\not{k}}_{2})-m}\right\} ,$$ and the $AV$ two-point function $$T_{\mu \nu }^{AV}\left( k_{1},k_{2};m\right) =\int \frac{d^{4}k}{(2\pi )^{4}}%
Tr\left\{ i\gamma _{\mu }\gamma _{5}\frac{1}{(\not{k}+{\not{k}}_{1})-m}%
\gamma _{\nu }\frac{1}{(\not{k}+{\not{k}}_{2})-m}\right\} .$$ Following a similar procedure, two other relations can be produced by contracting the term between curly brackets on the left hand side of the eq.(11) with the external momenta $\left( k_{3}-k_{1}\right) _{\mu }$ and $%
\left( k_{1}-k_{2}\right) _{\nu }$. They are $$\begin{aligned}
\bullet (k_{3}-k_{1})_{\mu }T_{{\lambda }{\mu }{\nu }}^{AVV} &=&T_{{\lambda
\nu }}^{AV}(k_{1},k_{2};m)-T_{{\lambda \nu }}^{AV}(k_{3},k_{2};m) \\
\bullet (k_{1}-k_{2})_{\nu }T_{{\lambda }{\mu }{\nu }}^{AVV} &=&T_{{\lambda
\mu }}^{AV}(k_{3},k_{2};m)-T_{{\lambda \mu }}^{AV}(k_{3},k_{1};m).\end{aligned}$$ The $AV$ structure appearing on the right-hand side of eqs.(12), (15) and (16) is a physical two-point amplitude and possesses its own relations with other physical amplitudes. The most important one for our present purposes is the following $$T_{\mu \nu }^{AV}(k_{1},k_{2};m)=-\frac{1}{2m}\varepsilon _{\mu \nu \alpha
\beta }(k_{1}-k_{2})_{\alpha }T_{\beta }^{SV}(k_{1},k_{2};m),$$ where we have introduced the $SV$ two-point function defined as $$T_{\beta }^{SV}\left( k_{1},k_{2};m\right) =\int \frac{d^{4}k}{(2\pi )^{4}}%
Tr\left\{ \widehat{1}\frac{1}{(\not{k}+{\not{k}}_{1})-m}\gamma _{\beta }%
\frac{1}{(\not{k}+{\not{k}}_{2})-m}\right\} .$$ Note that eq.(17) already holds at the integrand level, before the introduction of the integration sign.
At this point we can ask what eqs.(12), (15), (16) and (17) mean and why they are important for our present investigation. First, we note that all the mathematical structures involved are, in principle, divergent quantities. This means that, in order to specify the corresponding physical amplitudes in a definite way, it will be necessary to handle undefined mathematical quantities, which implies making choices for the arbitrariness involved. Since some assumptions cannot be avoided, the only guides at our disposal are the physical constraints we can identify. Eqs.(12), (15) and (16) work as constraints on the explicit calculations: when we evaluate the $AVV$ amplitude and then contract the obtained expression, it must be possible to identify mathematical structures identical to those obtained in the evaluation of the $AV$ and $PVV$ functions previously calculated by the same methods. The importance of the identities resides in the fact that, through them, the decisions about the arbitrariness involved in the evaluation of the $AVV$ amplitude, from which the CS term is to be extracted, can be submitted to the physical constraints imposed on the $AV$ and $SV$ two-point functions. Due to eqs.(12), (15) and (16), such structures are expected to be identified in the evaluation of the $AVV$ amplitude. This aspect is crucial for the controversy about the value of the Chern-Simons term in extended QED. In order to show what we have announced, we first note that if we evaluate the traces involved in the $AVV$ structure, the answer can be written in the form [@Orimar2][@Orimar1] $$t_{\lambda \mu \nu }^{AVV}=-4\left\{ -f_{\lambda \mu
\nu }+m_{\lambda \mu \nu }+p_{\lambda \mu \nu }\right\} ,$$ where, after the integration, only $n_{\lambda \mu \nu }$ acquires a linear degree of divergence. It is explicitly given by $$n_{\lambda \mu \nu }=\varepsilon _{\mu \nu \lambda \alpha }\frac{%
(k+k_{2})\cdot (k+k_{3})(k+k_{1})_{\alpha }}{\left[ (k+k_{1})^{2}-m^{2}%
\right] \left[ (k+k_{2})^{2}-m^{2}\right] \left[ (k+k_{3})^{2}-m^{2}\right] }%
,$$ which can be conveniently reorganized as $$\begin{aligned}
n_{\lambda \mu \nu } &=&\frac{\varepsilon _{\mu \nu \lambda \alpha }}{4}%
\left\{ \frac{2k_{\alpha }+(k_{1}+k_{2})_{\alpha }}{\left[
(k+k_{1})^{2}-m^{2}\right] \left[ (k+k_{2})^{2}-m^{2}\right] }+\frac{%
2k_{\alpha }+(k_{1}+k_{3})_{\alpha }}{\left[ (k+k_{1})^{2}-m^{2}\right] %
\left[ (k+k_{3})^{2}-m^{2}\right] }\right\} \nonumber \\
&&+\frac{\varepsilon _{\mu \nu \lambda \alpha }}{4}\left\{ \frac{%
(k_{1}-k_{2})_{\alpha }}{\left[ (k+k_{1})^{2}-m^{2}\right] \left[
(k+k_{2})^{2}-m^{2}\right] }+\frac{(k_{1}-k_{3})_{\alpha }}{\left[
(k+k_{1})^{2}-m^{2}\right] \left[ (k+k_{3})^{2}-m^{2}\right] }\right.
\nonumber \\
&&\;\;\;\;\;\;\ \;\;\;\left. +[2m^{2}-(k_{2}-k_{3})^{2}]\frac{%
2(k+k_{1})_{\alpha }}{\left[ (k+k_{1})^{2}-m^{2}\right] \left[
(k+k_{2})^{2}-m^{2}\right] \left[ (k+k_{3})^{2}-m^{2}\right] }\right\} .\end{aligned}$$ The first two terms now contain all the linear divergence and the ambiguous combination of the arbitrary internal line momenta. Given the identity (17), it is expected that such terms are related to $SV$ two-point functions. In fact, it is easy to verify that $$4m\frac{2k_{\alpha }+(k_{i}+k_{j})_{\alpha }}{\left[ (k+k_{i})^{2}-m^{2}%
\right] \left[ (k+k_{j})^{2}-m^{2}\right] }=tr\left[ \widehat{1}\frac{1}{(%
\not{k}+{\not{k}}_{i})-m}\gamma _{\alpha }\frac{1}{(\not{k}+{\not{k}}_{j})-m}%
\right] .$$ After the integration in the momentum $k,$ the right hand side can be identified with the $SV$ two-point function defined in eq.(18). The important point is that all the undefined pieces present in the $AVV$ amplitude are linked to the value of the $SV$ physical amplitude. Consequently, we can use whatever physical constraints are imposed on the $SV$ amplitude to guide us in making consistent choices for the corresponding arbitrariness present in the $AVV$ amplitude.
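Since eq.(22) is an algebraic identity, valid for each value of $k$ before integration, it can be verified numerically by building the propagators as explicit $4\times 4$ matrices. The sketch below (an illustration, not part of the original text; it checks the contravariant-index version of the identity at randomly chosen momenta) confirms the relation componentwise:

```python
import numpy as np

# Dirac representation of the gamma matrices, metric (+,-,-,-)
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)
g = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)] + \
    [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

def slash(p):          # gamma^mu p_mu for contravariant components p
    return p[0]*g[0] - p[1]*g[1] - p[2]*g[2] - p[3]*g[3]

def mink2(p):          # Minkowski square p.p
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

m = 1.0
rng = np.random.default_rng(1)
k, ki, kj = rng.normal(size=(3, 4))   # arbitrary loop and internal momenta
Ki, Kj = k + ki, k + kj

Si = np.linalg.inv(slash(Ki) - m*np.eye(4))   # 1/(k-slash + ki-slash - m)
Sj = np.linalg.inv(slash(Kj) - m*np.eye(4))
denom = (mink2(Ki) - m**2) * (mink2(Kj) - m**2)

for a in range(4):
    lhs = np.trace(Si @ g[a] @ Sj)
    rhs = 4*m*(Ki + Kj)[a] / denom
    assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(rhs))
print("integrand identity (22) holds componentwise")
```

The check works because the trace of an odd number of gamma matrices vanishes, leaving only the $4m(2k+k_{i}+k_{j})_{\alpha }$ piece.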
After these remarks, further progress in our investigation requires some manipulations and calculations involving divergent amplitudes, which means specifying a strategy to handle the problem. We adopt the calculational strategy introduced by one of us [@Orimar2][@Orimar3][@Orimar4][@Orimar1] to specify the Feynman integrals which are necessary for the evaluation of all the Green’s functions involved in the present discussion.
First, we consider the two-point structures defined by $$\left( I_{2};I_{2}^{\mu }\right) =\int \frac{d^{4}k}{(2\pi )^{4}}\frac{%
\left( 1;k^{\mu }\right) }{[(k+k_{1})^{2}-m^{2}][(k+k_{2})^{2}-m^{2}]},$$ which are given by $$\begin{aligned}
\bullet I_{2} &=&I_{log}(m^{2})-\left( \frac{i}{(4\pi )^{2}}\right)
Z_{0}((k_{1}-k_{2})^{2};m^{2}) \\
\bullet \left( I_{2}\right) _{\mu } &=&-\frac{1}{2}(k_{1}+k_{2})_{\alpha
}\Delta _{\alpha \mu }-\frac{1}{2}(k_{1}+k_{2})_{\mu }\left( I_{2}\right) ,\end{aligned}$$ where we have introduced, in a more compact notation, the two-point function structures [@Orimar2] $$Z_{k}(\lambda _{1}^{2},\lambda _{2}^{2},q^{2};\lambda
^{2})=\int_{0}^{1}dzz^{k}ln\left( \frac{q^{2}z(1-z)+\left( \lambda
_{1}^{2}-\lambda _{2}^{2}\right) z-\lambda _{1}^{2}}{-\lambda ^{2}}\right) ,$$ and the basic divergent objects $$\begin{aligned}
\bullet \Delta _{\mu \nu } &=&\int_{\Lambda }\frac{d^{4}k}{\left( 2\pi
\right) ^{4}}\frac{4k_{\mu }k_{\nu }}{\left( k^{2}-m^{2}\right) ^{3}}%
-\int_{\Lambda }\frac{d^{4}k}{\left( 2\pi \right) ^{4}}\frac{g_{\mu \nu }}{%
\left( k^{2}-m^{2}\right) ^{2}} \\
\bullet I_{log}(m^{2}) &=&\int_{\Lambda }\frac{d^{4}k}{\left( 2\pi \right)
^{4}}\frac{1}{\left( k^{2}-m^{2}\right) ^{2}}.\end{aligned}$$ According to the same prescription we can also calculate the integrals $$\left( I_{3};I_{3}^{\mu };I_{3}^{\mu \nu }\right) =\int \frac{d^{4}k}{(2\pi
)^{4}}\frac{\left( 1;k^{\mu };k^{\mu }k^{\nu }\right) }{\left[
(k+k_{1})^{2}-m^{2}\right] \left[ (k+k_{2})^{2}-m^{2}\right] \left[
(k+k_{3})^{2}-m^{2}\right] }.$$ We write the results as $$\begin{aligned}
&&\bullet I_{3}=\left( \frac{i}{(4\pi )^{2}}\right) \xi _{00} \\
&&\bullet \left( I_{3}\right) _{\mu }=\left( \frac{i}{(4\pi )^{2}}\right)
\{\left( k_{1}-k_{2}\right) _{\mu }\xi _{01}-\left( k_{3}-k_{1}\right) _{\mu
}\xi _{10}\}-k_{1\mu }I_{3} \\
&&\bullet \left( I_{3}\right) _{\mu \nu }=\left( \frac{i}{(4\pi )^{2}}%
\right) \left\{ -\frac{g_{\mu \nu }}{2}\left[ \eta _{00}\right] \;+\left(
k_{1}-k_{2}\right) _{\mu }\left( k_{1}-k_{2}\right) _{\nu }\xi _{02}+\left(
k_{3}-k_{1}\right) _{\mu }\left( k_{3}-k_{1}\right) _{\nu }\xi _{20}\right.
\nonumber \\
&&\;\;\;\;\;\;\left. -\left( k_{1}-k_{2}\right) _{\mu }\left(
k_{3}-k_{1}\right) _{\nu }\xi _{11}-\left( k_{1}-k_{2}\right) _{\nu }\left(
k_{3}-k_{1}\right) _{\mu }\xi _{11}\right\} \nonumber \\
&&\;\;\;\;\;\;+\frac{g_{\mu \nu }}{4}\left[ I_{\log }\left( m^{2}\right) %
\right] +\frac{\Delta _{\mu \nu }}{4}-k_{1\mu }\left( I_{3}\right) _{\nu
}-k_{1\nu }\left( I_{3}\right) _{\mu }+k_{1\nu }k_{1\mu }I_{3}.\end{aligned}$$ Here we have introduced the three-point function structures ${\xi }_{nm}$ defined as $$\xi _{nm}(k_{1}-k_{2},k_{3}-k_{1};m)=\int_{0}^{1}\,dz\int_{0}^{1-z}\,dy{%
\frac{z^{n}y^{m}}{Q(y,z)}},$$ where $Q(y,z)=\left( k_{1}-k_{2}\right) ^{2}y(1-y)+\left( k_{3}-k_{1}\right)
^{2}z(1-z)+2\left( k_{1}-k_{2}\right) \cdot \left( k_{3}-k_{1}\right)
yz-m^{2},$ and $$\eta _{00}={\frac{1}{2}}Z_{0}((k_{3}-k_{2})^{2};m^{2})-\left( {\frac{1}{2}}%
+m^{2}\xi _{00})\right) +{\frac{1}{2}}\left( k_{3}-k_{1}\right) ^{2}\xi
_{10}+{\frac{1}{2}}\left( k_{1}-k_{2}\right) ^{2}\xi _{01}.$$ This systematization is sufficient for the present discussions. The main point is to avoid the explicit evaluation of such divergent structures, in which case a regulating distribution needs to be specified.
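The finite structures $Z_{k}$ and $\xi _{nm}$ defined above can be evaluated by direct numerical quadrature for spacelike kinematics. The following sketch (an illustration, not part of the original text; the kinematical values are hypothetical and the scale is fixed as $\lambda ^{2}=m^{2}$) checks two limits that follow immediately from the definitions: $Z_{0}(0;m^{2})=0$ and $\xi _{00}=-1/(2m^{2})$ at vanishing external momenta:

```python
import numpy as np
from scipy.integrate import quad, dblquad

m2 = 1.0  # m^2 (hypothetical value)

def Z0(q2, lam2=m2):
    """Z_0(q^2; m^2) = int_0^1 dz ln[(q^2 z(1-z) - m^2)/(-lam^2)]."""
    f = lambda z: np.log((q2*z*(1.0 - z) - m2) / (-lam2))
    return quad(f, 0.0, 1.0)[0]

def xi(nidx, midx, pp2, p2, pdotpp):
    """xi_{nm} with p' = k1-k2 (square pp2), p = k3-k1 (square p2)."""
    def integrand(y, z):
        Q = pp2*y*(1.0 - y) + p2*z*(1.0 - z) + 2.0*pdotpp*y*z - m2
        return z**nidx * y**midx / Q
    # outer integral over z in [0,1], inner over y in [0, 1-z]
    return dblquad(integrand, 0.0, 1.0, 0.0, lambda z: 1.0 - z)[0]

# limit checks from the definitions
assert abs(Z0(0.0)) < 1e-9                              # log(1) integrand
assert abs(xi(0, 0, 0.0, 0.0, 0.0) + 0.5/m2) < 1e-7     # triangle area 1/2 times -1/m^2

# a generic spacelike (hypothetical) kinematical point
print(Z0(-2.0), xi(0, 0, -2.0, -1.0, 0.3))
```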
It is important, at this point, to emphasize the general aspects of the method. No shifts have been performed and, in fact, no divergent integrals have been calculated. All the final results produced by this approach can be mapped into those of any specific technique. The finite parts are the same, as they should be for physical reasons. The divergent parts can be easily obtained: all that is needed is to evaluate the remaining divergent structures. By virtue of this generality, the present strategy can be used simply to systematize the procedures, even if one wants to use traditional techniques. Those parts that depend on the specific regularization method are naturally separated, allowing us to analyze such dependence in any particular problem. Let us now use the above results to calculate the physical amplitudes.
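The remark that no shifts have been performed is essential: shifting the integration variable in a linearly divergent integral generates a finite surface term, which is precisely the kind of ambiguous, routing-dependent piece at stake here. A one-dimensional analogue (an illustration, not part of the original text) makes this explicit: for $f(x)=x/\sqrt{x^{2}+1}$, which tends to $\pm 1$ as $x\to \pm \infty $, the integral of $f(x+a)-f(x)$ over the whole line equals $2a$ rather than zero:

```python
import numpy as np
from scipy.integrate import quad

# f is the integrand of a "linearly divergent" 1D integral: f(x) -> +-1
f = lambda x: x / np.sqrt(x**2 + 1.0)

a = 0.5                              # shift of the integration variable
diff = lambda x: f(x + a) - f(x)     # decays like 1/x^3, so integrable

surface_term, err = quad(diff, -np.inf, np.inf)
print(surface_term)   # -> 1.0 (= 2a), not 0: the shift leaves a finite residue
```

This is the one-dimensional analogue of the momentum-routing ambiguity carried by the combinations $k_{i}+k_{j}$ in the two-point structures above.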
Substituting the values for the Feynman integrals in the corresponding expressions for the $AV$ and $SV$ two-point functions, eq.(14) and eq.(18), we get $$\begin{aligned}
\bullet T_{\mu }^{VS}(k_{1},k_{2};m) &=&(-)4m(k_{1}+k_{2})_{\beta
}[\triangle _{\beta \mu }] \\
\bullet T_{\mu \nu }^{AV}(k_{1},k_{2};m) &=&2\varepsilon _{\mu \nu \alpha
\beta }(k_{2}-k_{1})_{\beta }(k_{1}+k_{2})_{\xi }\triangle _{\xi \alpha }.\end{aligned}$$ Note that the relation (17) is preserved by the performed calculations. We emphasize that, in spite of the potentially ambiguous character of the $AV$ and $SV$ functions, the identity (17) relating them is a non-ambiguous one. The above expressions are the most general ones for both mathematical structures. All the intrinsic arbitrariness is still present in the result: the undefined mathematical structure $\triangle _{\mu \nu }$ and the ambiguous combination of the internal momenta $k_{i}+k_{j}$. In order to give a definite result for the physical amplitudes, this arbitrariness should be removed through choices, which must be made preserving the physical requirements. We can thus ask whether there are general aspects of QFT or symmetry determinations constraining the values of the $SV$ and $AV$ structures. In fact, it is easy to see that both two-point functions can be constrained to the identically zero value by general arguments. First, we note that, due to unitarity, if a two-point function like those considered is non-zero, it must have an imaginary part at the threshold $(k_{1}-k_{2})^{2}=4m^{2}$ (Cutkosky’s rules). The remaining arbitrariness involved in the expressions (35) and (36) cannot introduce such a content into the $SV$ and $AV$ two-point functions. So, if a non-zero value for the $SV$ and $AV$ amplitudes is assumed, as a consequence of some choices, unitarity is violated in both cases. In addition, a non-zero value breaks Lorentz invariance in the $SV$ amplitude and CPT symmetry in the $AV$ amplitude, since the $SV$ two-point function connects a scalar and a vector particle, and the $AV$ two-point function an axial-vector and a vector particle. Another argument comes from the Ward identity analysis. The contraction with the external momentum $k_{1}-k_{2}$ must lead to a definite zero value for the vector Lorentz indices, and to a proportionality between the axial and the pseudo-scalar amplitudes in the case of the axial-vector Lorentz index. There is no consistent interpretation apart from the zero value for the $AV$ amplitude, since both contractions lead to a vanishing value. In order to get the physically consistent result, we have two options at our disposal: to choose the internal line momenta such that $k_{1}+k_{2}=0$, or to select a regularization satisfying the constraint $\triangle _{\mu \nu }^{reg}=0$. We note that, due to the identity (17), the values of the two-point functions $SV$ and $AV$ are intimately related. From the point of view of Dimensional Regularization (DR), only the $SV$ one can be treated, and the result is identically vanishing. Due to the presence of the $\gamma _{5}$ Dirac matrix, or the totally antisymmetric tensor $\varepsilon _{\mu \nu \alpha \beta }$, such a treatment cannot be performed for the $AV$ amplitude. The strategy we have adopted can be applied equally to both cases, and shows that it is not reasonable to make choices that lead to a zero value for one of them and a non-zero value for the other.
Given this argumentation, we can immediately identify in these structures, which are contained in the $AVV$ amplitude, the source of the Lorentz and CPT symmetry breaking in the evaluation of the CS term. Any choice for the ambiguities present in the $AVV$ structure [@Jackiw2][@Elias] that attributes a nonzero value to these contributions will, in fact, generate Lorentz and CPT violations, because such a choice produces non-vanishing $AV$ and $SV$ structures. The corresponding contributions to the CS term cannot, therefore, be considered an implication of the extended QED theory, but rather a consequence of adopting an interpretation of the arbitrariness involved which is clearly not consistent. The importance of this conclusion can be appreciated if we evaluate $\Pi _{\lambda \mu \nu }\left(
p\right) $ using the most general expression for the $AVV$ mathematical structure[@Orimar2][@Orimar5] $$\Pi _{\lambda \mu \nu }\left( p\right) =\left( \frac{1}{2\pi ^{2}}\right)
\varepsilon _{\mu \nu \lambda \beta }p_{\beta }+2i\left\{ \varepsilon _{\mu
\nu \beta \sigma }p_{\beta }\Delta _{\lambda \sigma }-\varepsilon _{\mu \nu
\lambda \beta }p_{\sigma }\Delta _{\beta \sigma }\right\} .$$ In order to be consistent with the above discussion, we must choose $%
\triangle _{\mu \nu }^{reg}=0$, thus eliminating the contribution coming from the ambiguous terms. Given this fact, can we conclude that the contribution to the CS term coming from $\Pi _{\lambda \mu \nu }\left(
p\right) $ is the (non-ambiguous) value $\left( \frac{1}{2\pi ^{2}}\right)
\varepsilon _{\mu \nu \lambda \beta }p^{\beta }$? Not yet! We must show that the expression used to extract the above equation for $\Pi _{\lambda \mu \nu
}\left( p\right) $ is in agreement with the symmetry content of the extended QED theory. In particular, the U(1) gauge symmetry was not assumed broken in the construction of the extended theory. Another important aspect concerns the low-energy limits. It is well known that the $AVV$ amplitude should obey the soft limit: $\lim_{q_{\lambda }\rightarrow
0}q^{\lambda }T_{\lambda \mu \nu }^{AVV}=0.$ The minimal consistency requirement forces us to put any calculation of the $AVV$ structure in accordance with these very general symmetry aspects. Otherwise, an eventual value for the CS term again could not be considered a consequence of the extended theory, but rather of the violation of other fundamental symmetries in the intermediate steps of the calculation. It is a simple matter to check that, by taking the explicit expression for the $AVV$ amplitude and contracting with the external momenta, we obtain [@Orimar2] $$\begin{aligned}
\bullet \left( k_{3}-k_{1}\right) _{\mu }T_{\lambda \mu \nu }^{AVV}
&=&\left( \frac{i}{8\pi ^{2}}\right) \varepsilon _{\nu \beta \lambda \xi
}\left( k_{1}-k_{2}\right) _{\beta }\left( k_{3}-k_{1}\right) _{\xi } \\
\bullet \left( k_{1}-k_{2}\right) _{\nu }T_{\lambda \mu \nu }^{AVV}
&=&-\left( \frac{i}{8\pi ^{2}}\right) \varepsilon _{\mu \beta \lambda \xi
}\left( k_{1}-k_{2}\right) _{\beta }\left( k_{3}-k_{1}\right) _{\xi } \\
\bullet \left( k_{3}-k_{2}\right) _{\lambda }T_{\lambda \mu \nu }^{AVV}
&=&-\left( \frac{i}{4\pi ^{2}}\right) \varepsilon _{\mu \nu \alpha \beta
}\left( k_{3}-k_{1}\right) _{\alpha }\left( k_{1}-k_{2}\right) _{\beta }%
\left[ 2m^{2}\xi _{00}\right] ,\end{aligned}$$ where $\xi _{00}(\left( k_{3}-k_{2}\right) ^{2}=0)=\frac{-1}{2m^{2}}$. We can identify the expression on the right-hand side of eq.(40) as the calculated $PVV$ amplitude. Both of the above-mentioned ingredients are absent from the $AVV$ amplitude. This is not surprising, because it is the same situation found in the Sutherland-Veltman paradox [@Sutherland-Veltman], connected with the pion decay phenomenology.
The above equation represents a manifestation of a fundamental phenomenon: the $AVV$ triangle anomaly. Note that it involves a different type of arbitrariness in the perturbative calculations, one not associated with divergence aspects. The terms which violate the U(1) gauge symmetry come from finite contributions and are therefore not affected by an eventual regularization scheme. The (symmetrized) physical $AVV$ amplitude must be constructed (in an arbitrary way) by subtracting the violating term $$\left( T_{\lambda \mu \nu }^{AVV}(\left(
k_{1}-k_{2}\right) )\right) _{phys}=T_{\lambda \mu \nu }^{AVV}(\left(
k_{3}-k_{1}\right) ,\left( k_{1}-k_{2}\right) )-T_{\lambda \mu \nu
}^{AVV}\left( 0\right) ,$$ where $$T_{\lambda \mu \nu }^{AVV}\left( 0\right) =-\left( \frac{i}{8\pi ^{2}}%
\right) \varepsilon _{\mu \nu \lambda \beta }\left[ \left(
k_{3}-k_{1}\right) _{\beta }-\left( k_{1}-k_{2}\right) _{\beta }\right] .$$ The resulting amplitude preserves the U(1) gauge symmetry and agrees with the low-energy theorem, in spite of violating the axial Ward identity involved. This procedure is precisely the one followed in establishing the renormalizability of the Standard Model. So, again, consistency in the perturbative calculations requires the same interpretation for the same Green’s function, which means adopting for the $%
AVV$ amplitude the expression $$\begin{aligned}
\left( T_{\lambda \mu \nu }^{AVV}\right) _{Phys} &=&\left( \frac{i}{\left(
4\pi \right) ^{2}}\right) \left( -4\right) \left( k_{3}-k_{1}\right) _{\xi
}\left( k_{2}-k_{1}\right) _{\beta }\left\{ \varepsilon _{\nu \lambda \beta
\xi }[\left( k_{3}-k_{1}\right) _{\mu }\left( \xi _{20}+\xi _{11}-\xi
_{10}\right) \right. \nonumber \\
&&\hspace{1in}\hspace{1in}\hspace{0.4in}\;+\left( k_{2}-k_{1}\right) _{\mu
}\left( \xi _{11}+\xi _{02}-\xi _{01}\right) ] \nonumber \\
&&\hspace{1in}\hspace{1in}\;\;+\varepsilon _{\mu \lambda \beta \xi }[\left(
k_{3}-k_{1}\right) _{\nu }\left( \xi _{11}+\xi _{20}-\xi _{10}\right)
\nonumber \\
&&\hspace{1in}\hspace{1in}\hspace{0.4in}\;+\left( k_{2}-k_{1}\right) _{\nu
}\left( \xi _{02}+\xi _{11}-\xi _{01}\right) ] \nonumber \\
&&\hspace{1in}\hspace{1in}\;\;+\varepsilon _{\mu \nu \beta \xi }[\left(
k_{3}-k_{1}\right) _{\lambda }\left( \xi _{11}-\xi _{20}+\xi _{10}\right)
\nonumber \\
&&\hspace{1in}\hspace{1in}\hspace{0.4in}\;-\left. \left( k_{2}-k_{1}\right)
_{\lambda }\left( \xi _{02}-\xi _{01}-\xi _{11}\right) ]\right\} \nonumber
\\
&&-\left( \frac{i}{\left( 4\pi \right) ^{2}}\right) \varepsilon _{\mu \nu
\lambda \beta }\left( k_{3}-k_{1}\right) _{\beta }\left\{ Z_{0}\left( \left(
k_{1}-k_{3}\right) ^{2};m^{2}\right) -Z_{0}\left( \left( k_{2}-k_{3}\right)
^{2};m^{2}\right) \right. \nonumber \\
&&\hspace{1in}\hspace{1in}+\left[ 2\left( k_{3}-k_{2}\right) ^{2}-\left(
k_{1}-k_{3}\right) ^{2}\right] \xi _{10}+ \nonumber \\
&&\hspace{1in}\hspace{1in}\left. -\left( k_{1}-k_{2}\right) ^{2}\xi _{01}+%
\left[ 1-2m^{2}\xi _{00}\right] \right\} \nonumber \\
&&-\left( \frac{i}{\left( 4\pi \right) ^{2}}\right) \varepsilon _{\mu \nu
\lambda \beta }\left( k_{2}-k_{1}\right) _{\beta }\left\{ Z_{0}\left( \left(
k_{1}-k_{2}\right) ^{2};m^{2}\right) -Z_{0}\left( \left( k_{2}-k_{3}\right)
^{2};m^{2}\right) \right. \nonumber \\
&&\hspace{1in}\hspace{1in}+\left[ 2\left( k_{3}-k_{2}\right) ^{2}-\left(
k_{1}-k_{2}\right) ^{2}\right] \xi _{01} \nonumber \\
&&\hspace{1in}\hspace{1in}\left. -\left( k_{3}-k_{1}\right) ^{2}\xi _{10}+%
\left[ 1-2m^{2}\xi _{00}\right] \right\} -T_{\lambda \mu \nu }^{AVV}\left(
0\right) .\end{aligned}$$ Now, taking the kinematical situation where the CS term is defined, eq.(9), we get $$\Pi _{\mu \nu \lambda }(p)=\left( \frac{1}{2\pi ^{2}}\right) \varepsilon
_{\mu \nu \lambda \beta }p_{\beta }-iT_{\lambda \mu \nu }^{AVV}\left( 0\right) .$$ Identifying then $T_{\lambda \mu \nu }^{AVV}\left( 0\right) $ with the violating term in eqs.(38) and (39), we see that the identically vanishing value is obtained.
So, a clean and sound conclusion is extracted: consistency in perturbative calculations leaves no room for the existence of the radiatively induced CS term in the extended QED. Therefore, if one wants to get a nonzero value for such a contribution, it is necessary: [**1)**]{} to break, in the intermediary steps of the calculation, Lorentz, CPT, unitarity and an axial Ward identity, by attributing a nonzero value to mathematical structures identical to the (related) two-point functions $AV$ and $SV$; or [**2)**]{} to violate the low-energy theorem $\lim_{q_{\lambda
}\rightarrow 0}q^{\lambda }T_{\lambda \mu \nu }^{AVV}=0,$ which may simultaneously imply the violation of U(1) gauge symmetry in the extended QED. The implication of the latter is the spoiling of Standard Model renormalizability by destroying the anomaly cancellation mechanism. Either option clearly implies ignoring the wider sense of consistency in perturbative calculations, namely treating the same Green’s function in the same way in all places where it occurs. If one does not consider these aspects, one can in fact obtain Lorentz and CPT violation not only for the problem discussed here but, following the same recipe, in a copious number of similar situations in other theories and models.
[**Acknowledgements**]{}: G.D. acknowledges a grant from CNPq/Brazil and O.A.B. from FAPERGS/Brazil.
D. Colladay and V.A. Kostelecký, Phys. Rev. D[**55**]{} 6760 (1997).
D. Colladay and V.A. Kostelecký, Phys. Rev. D[**58**]{} 116002 (1998).
S. Carroll, G. Field and R. Jackiw, Phys. Rev. D[**41**]{} 1231 (1990); M. Goldhaber and V. Trimble, J. Astrophys. Astr. [**17**]{} 17 (1996); S. Carroll and G. Field, Phys. Rev. Lett. [**79**]{} 2394 (1997).
S. Coleman and S. Glashow, Phys. Rev. D[**59**]{} 116008 (1999).
R. Jackiw and V. A. Kostelecký, Phys. Rev. Lett. [**82**]{} 3572-3575 (1999).
M. Pérez-Victoria, Phys. Rev. Lett. [**83**]{} 2518 (1999); M. Pérez-Victoria, J. High Energy Phys. 04 032 (2001) and references therein.
A.A. Andrianov, P. Giacconi, R. Soldati, JHEP 0202:030 (2002).
V.A. Kostelecký and R. Lehnert, Phys. Rev. D[**63**]{} 065008 (2001).
G. Bonneau, Nucl. Phys. B[**593**]{} 398 (2001); see also hep-th/0109105.
C. Adam and F.R. Klinkhamer, Nucl. Phys. B[**607**]{} 247 (2001); Phys. Lett. B[**513**]{} 245 (2001).
O.A. Battistel and G. Dallabona, Nucl. Phys. B[**610**]{} 316 (2001).
O.A. Battistel and G. Dallabona, J. Phys. G[**27**]{} L53 (2001).
O.A. Battistel, [*PhD Thesis 1999*]{}, Universidade Federal de Minas Gerais, Brazil.
G.’t Hooft and M. Veltman, Nucl. Phys. B[**44**]{}, 189 (1972); C.G. Bollini and J.J. Giambiagi, Phys. Lett. B[**40**]{} 566 (1972); J.F. Ashmore, Nuovo Cimento Lett. [**4**]{} 289 (1972); G.M. Cicuta and E. Montaldi, Nuovo Cimento Lett. [**4**]{} 329 (1972).
O.A. Battistel and O.L. Battistel, Int. J. Mod. Phys. A[**17**]{}, 1-39 (2002).
O.A. Battistel and M.C. Nemes, Phys. Rev. D[**59**]{} 055010 (1999).
O.A. Battistel and G. Dallabona, Phys. Rev. D[**65**]{} 125017 (2002).
J.S. Bell and R. Jackiw, Nuovo Cimento A[**60**]{} 47 (1969).
V. Elias, Gerry McKeon, R.B. Mann, Nucl. Phys. B[**229**]{} 487 (1983).
D.G. Sutherland, Phys. Lett. [**23**]{} 384 (1966); Nucl. Phys. B[**2**]{} 433 (1967); M. Veltman, Proc. R. Soc. A[**301**]{} 107 (1967).
---
author:
-
-
-
bibliography:
- 'IEEEabrv.bib'
- 'edcc.bib'
title: 'Improving Dependability of Neuromorphic Computing With Non-Volatile Memory'
---
Introduction {#sec:introduction}
============
Background {#sec:background}
==========
Reliability Formulation {#sec:formulation}
=======================
Proposed Solution {#sec:approach}
=================
Results and Discussions {#sec:evaluation}
=======================
Conclusion {#sec:conclusion}
==========
Acknowledgment {#acknowledgment .unnumbered}
==============
This work is supported by the National Science Foundation Faculty Early Career Development Award CCF-1942697 (CAREER: Facilitating Dependable Neuromorphic Computing: Vision, Architecture, and Impact on Programmability).
---
abstract: 'In this paper we consider a model for the diffusion of a population in a strip-shaped field, where the growth of the species is governed by a Fisher-KPP equation and which is bounded on one side by a road where the species can have a different diffusion coefficient. Homogeneous Dirichlet boundary conditions are imposed on the other side of the strip. We prove the existence of an asymptotic speed of propagation which is greater than the one of the case without the road and study its behavior for small and large diffusions on the road. Finally, we prove that, when the width of the strip goes to infinity, the asymptotic speed of propagation approaches that of a half-plane bounded by a road, a case that has recently been studied in [@BRR1; @BRR2].'
author:
- |
Andrea Tellini\
\
\
\
`andrea.tellini@ehess.fr`
title: '**Propagation speed in a strip bounded by a line with different diffusion**[^1]'
---
*In memory of Giuliano Bardi (1948–2012),\
the one who taught me what a derivative is.*
**Keywords:** KPP equations, reaction-diffusion systems, 1D-2D systems, asymptotic speed of propagation.
**2010 MSC:** 35K57, 35B40, 35K40, 35B53.
Introduction {#sec1}
============
Recently, in [@BRR1], the system $$\label{11}
\left\{ \begin{array}{lll}
\!\!\!u_t(x,t)\!-\!Du_{xx}(x,t)\!=\!\nu v(x,L,t)\!-\!\m u(x,t) & \!\!\text{for } x\in{{\mathbb{R}}}, & \!\! t>0 \\
\!\!\!v_t(x,y,t)-d\D v(x,y,t)=f(v) & \!\!\text{for } \!(x,y)\!\in\!{{\mathbb{R}}}\!\times\!(-\infty,L), & \!\! t>0 \\
\!\!\!dv_y(x,L,t)=\m u(x,t)-\nu v(x,L,t) & \!\!\text{for } x\in{{\mathbb{R}}}, & \!\! t>0.
\end{array} \right.$$ was introduced to model the evolution of the species $v(x,y,t)$ in a *field* ${{\mathbb{R}}}\times(-\infty,L)$ which is bounded at the level $y=L$ by a *road* where part of the same species, $u(x,t)$, diffuses with coefficient $D>0$, which in principle may be different from the diffusion coefficient in the field $d>0$. A reaction of Fisher-KPP type takes place in the field, i.e. $f\in{\mathcal}{C}^1([0,+\infty))$ satisfies $$\label{12}
f(0)=0=f(1), \qquad 0<f(s)<f'(0)s \;\; \text{ in } (0,1), \qquad f<0 \text{ in } (1,+\infty).$$ On the contrary, no reaction occurs on the road, where the density of the species varies only because a fraction $\mu>0$ of the population jumps from the road to the field while a fraction $\nu>0$ of the population jumps from the field to the road.
This model was motivated by empirical observations of wolves moving along seismic lines in Canada (see [@McK]) or insects like the *Aedes albopictus* (tiger mosquito) spreading in the United States along highways (see [@MM]). Another example of this phenomenon is the diffusion of diseases along commercial and transport networks (see [@TRH] and the references therein).
In [@BRR1], the authors established the existence of an asymptotic speed of propagation (see Definition \[deasp\]) of the solution of , starting from a continuous, nonnegative, compactly supported initial datum $(u_0, v_0)\neq(0, 0)$, towards the unique steady state of the problem, as well as some qualitative properties of it. Denoting such speed by $c^*_{\infty}$, they showed that, if $D\leq 2d$, then $c^*_{\infty}={c_{\operatorname{KPP}}}$, where $$\label{13}
{c_{\operatorname{KPP}}}=2\sqrt{df'(0)}$$ is the asymptotic speed of propagation of the classical Fisher-KPP equation $$v_t(x,y,t)-d\D v(x,y,t)=f(v(x,y,t))$$ in the half-plane (see [@KPP; @AW]), while, if $D> 2d$, then $c^*_{\infty}>{c_{\operatorname{KPP}}}$. This means that a large diffusion on the road speeds up the propagation of the population in the field. Moreover, the authors showed that the spreading velocity increases to infinity as the diffusivity on the line grows to infinity. In [@BRR2] they also studied how a drift term and a Fisher-KPP reaction on the road affect the asymptotic speed of propagation.
In this work we investigate the effect of the road on the propagation in a field which is no longer a half-plane but a strip $\O={{\mathbb{R}}}\times(0,L)$. On the other part of the boundary of the field we impose homogeneous Dirichlet boundary conditions, modeling in this way an unfavorable region at level $y=0$. The system we consider is therefore $$\label{14}
\left\{ \begin{array}{lll}
u_t(x,t)-Du_{xx}(x,t)=\nu v(x,L,t)-\m u(x,t) & \text{for } x\in{{\mathbb{R}}}, & t>0 \\
v_t(x,y,t)-d\D v(x,y,t)=f(v) & \text{for } (x,y)\in\O, & t>0 \\
dv_y(x,L,t)=\m u(x,t)-\nu v(x,L,t) & \text{for } x\in{{\mathbb{R}}}, & t>0 \\
v(x,0,t)=0 & \text{for } x\in{{\mathbb{R}}}, & t>0, \end{array}\right.$$ where $D,d,\mu,\nu,L$ are positive constants. Ascertaining the long time behavior of the solutions of the Cauchy problem associated to will be the first step in the study of the problem. The result is the following
\[thltb\] Let $(u,v)$ denote the solution of starting from a nonnegative, not equal to $(0,0)$, bounded and continuous initial datum $(u_0, v_0)$. If $$\label{15}
\frac{f'(0)}{d}\leq\left(\frac{\pi}{2L}\right)^2,$$ then $$\label{16}
\lim_{t\to+\infty}(u(x,t),v(x,y,t))=(0,0)$$ locally uniformly in $\overline{\O}$, while if $$\label{17}
\frac{f'(0)}{d}>\left(\frac{\pi}{2L}\right)^2$$ and, moreover, $$\label{18}
\frac{f(s)}{s} \text{ is nonincreasing},$$ then $$\label{19}
\lim_{t\to+\infty}(u(x,t),v(x,y,t))=\left(\frac{\nu}{\mu}V(L),V(y)\right)$$ locally uniformly in $\overline{\O}$, where $V(y)$ is the unique solution of $$\label{110}
\left\{ \begin{array}{l}
-dV''(y)=f(V(y)) \quad \text{for } y\in(0,L)\\
V'(L)=0, \quad V(0)=0. \end{array}\right.$$
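The steady state $V$ can be computed concretely by a standard shooting method on $V'(0)$. The sketch below is our own illustration, not part of the paper's argument: we take the prototypical KPP nonlinearity $f(s)=s(1-s)$, $d=1$ and $L=3$ (so that the persistence condition holds, since $f'(0)/d=1>(\pi/6)^2$), and the bisection bracket for the initial slope is ad hoc for this choice.

```python
def shoot(slope, f, L, d=1.0, n=3000):
    """RK4 integration of the steady-state ODE -d V'' = f(V) on (0, L)
    with V(0) = 0 and V'(0) = slope; returns the list of (V, V') values."""
    h = L / n
    u, w = 0.0, slope                  # u = V, w = V'
    traj = [(u, w)]
    for _ in range(n):
        k1u, k1w = w, -f(u) / d
        k2u, k2w = w + 0.5 * h * k1w, -f(u + 0.5 * h * k1u) / d
        k3u, k3w = w + 0.5 * h * k2w, -f(u + 0.5 * h * k2u) / d
        k4u, k4w = w + h * k3w, -f(u + h * k3u) / d
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        w += h * (k1w + 2 * k2w + 2 * k3w + k4w) / 6.0
        traj.append((u, w))
    return traj

def steady_state(f, L, d=1.0, lo=1e-3, hi=0.577, iters=60):
    """Bisect on V'(0) until V'(L) = 0: V'(L) < 0 means the maximum of V was
    reached before y = L (slope too small), V'(L) > 0 means the slope is too
    large.  The bracket [lo, hi] is ad hoc for f(s) = s(1-s), d = 1, L = 3."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if shoot(mid, f, L, d)[-1][1] < 0.0:
            lo = mid
        else:
            hi = mid
    return shoot(0.5 * (lo + hi), f, L, d)
```

For these values the bisection converges to the unique slope with $V'(L)=0$, and the computed profile is increasing with $0<V(L)<1$, as the analysis of the elliptic problem below predicts.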
A remarkable difference with respect to is that here the width of the strip $L$ plays a role in the existence of positive steady states. Moreover, condition was not necessary to guarantee uniqueness in [@BRR1], while here it is (see Remark \[re33\] below). From a biological point of view, Theorem \[thltb\] says that, if the strip is not sufficiently large, the influence of the unfavorable region $y=0$ drives the species to extinction. On the contrary, the species will persist if the strip is sufficiently large. In this latter case, a natural question is to study more deeply how the convergence to the steady state occurs. To this end we consider the following concept
\[deasp\] We say that $c^*\in{{\mathbb{R}}}_+$ is an *asymptotic speed of propagation* (in the $x-$direction) for if, denoting by $(u, v)$ the solution of with a continuous, nonnegative, compactly supported initial datum $(u_0, v_0)\neq(0, 0)$, we have
- for all $c > c^*$, $$\label{111}
\lim_{t\to +\infty} \sup_{\substack{|x|\geq ct \\ y\in[0,L]}} |(u(x, t), v(x, y, t))| = 0,$$
- for all $0<c < c^*$, $$\label{112}
\lim_{t\to +\infty}\sup_{\substack{|x|\leq ct \\ y\in[0,L]}} \left|(u(x, t), v(x, y, t)) - \left(\frac{\nu}{\mu}V(L), V(y)\right)\right|=0,$$ where $V(y)$ is the unique solution of .
In this sense, the main result of the paper is the following
\[thmain\] Problem admits an asymptotic speed of propagation, denoted by $c^*=c^*_L(D,d,\mu,\nu)$, such that:
- $D\mapsto c^*(D)$ is increasing,
- the following limits exist and are positive real numbers $$\lim_{D\to 0}c^*(D)=\ell_0,
\qquad
\lim_{D\to+\infty}\frac{c^*(D)}{\sqrt{D}}=\ell_\infty,$$
- for fixed $D,d,\mu,\nu$ we have $$\lim_{L\to+\infty}c^*_{L}(D,d,\mu,\nu)=c^*_{\infty}(D,d,\mu,\nu),$$ where $c^*_{\infty}(D,d,\mu,\nu)$ is the asymptotic speed of propagation of Problem .
The paper is organized as follows: in Section \[sec2\] we recall some tools from [@BRR1; @BRR2] which will be indispensable throughout the rest of the paper; Section \[sec3\] provides the proof of Theorem \[thltb\]; in Sections \[sec4\] and \[sec5\] we construct $c^*$ and derive some properties that will allow us, in Section \[sec6\], to show that it satisfies Definition \[deasp\] and relation $(i)$ of Theorem \[thmain\]. In Section \[sec7\] we prove relation $(ii)$ of Theorem \[thmain\] and, finally, in Section \[sec8\] we study the influence of the road on the asymptotic speed of propagation, comparing with the case in which no road is present, and give the proof of Theorem \[thmain\]$(iii)$.
Preliminary results {#sec2}
===================
In this section we present some fundamental results that are contained in, or follow easily from, [@BRR1; @BRR2]. The existence of a solution for the Cauchy problem associated to with a continuous initial datum $(u_0,v_0)$ follows from an easy modification of [@BRR1 Appendix A], and uniqueness follows from a comparison principle which will be used repeatedly throughout this paper and whose proof can be easily adapted from [@BRR1 Proposition 3.2]. Before stating it, we point out that, as usual, by a *supersolution* (resp. *subsolution*) of we mean a pair $(u,v)$ satisfying System with $\geq$ (resp. $\leq$) instead of $=$.
\[prcp1\] Let $(\un{u}, \un{v})$ and $(\ov{u},\ov{v})$ be, respectively, a subsolution bounded from above and a supersolution bounded from below of satisfying $\un{u}\leq\ov{u}$ and $\un{v}\leq\ov{v}$ at $t = 0$. Then, either $\un{u}<\ov{u}$ and $\un{v}<\ov{v}$ for all $t$, or there exists $T > 0$ such that $(\un{u}, \un{v})=(\ov{u},\ov{v})$ for $t\leq T$.
We also need the following comparison principle regarding an extended class of generalized subsolutions and which is a particular instance of [@BRR1 Proposition 3.3].
\[prcp2\] Let $E\subset {{\mathbb{R}}}\times{{\mathbb{R}}}_+$ and $F\subset \O\times{{\mathbb{R}}}_+$ be two open sets, let $(u_1, v_1)$ be a subsolution of bounded from above and satisfying $$u_1 = 0 \text{ on } (\p E) \cap ({{\mathbb{R}}}\times{{\mathbb{R}}}_+), \quad v_1= 0 \text{ on } (\p F) \cap (\O\times{{\mathbb{R}}}_+),$$ and consider $$\un{u} :=\begin{cases}
\max\{u_1, 0\} & \text{in $\ov{E}$} \\
0 & \text{otherwise,}
\end{cases}
\qquad
\un{v} :=\begin{cases}
\max\{v_1, 0\} & \text{in $\ov{F}$} \\
0 & \text{otherwise.}
\end{cases}$$ If they satisfy $$\label{21}
\begin{split}
\un{v}(x, L, t) &\geq v_1(x, L, t) \quad \text{ for all } (x,t) \text{ such that } \un{u}(x, t) > 0, \\
\un{u}(x, t) &\geq u_1(x, t) \qquad \text{ for all } (x,t) \text{ such that } \un{v}(x, L, t) > 0,
\end{split}$$ then, for any supersolution $(\ov{u},\ov{v})$ of bounded from below and such that $\un{u}\leq\ov{u}$ and $\un{v}\leq\ov{v}$ at $t = 0$, we have $\un{u}\leq\ov{u}$ and $\un{v}\leq\ov{v}$ for all $t > 0$.
\[recp2\] The same result of Proposition \[prcp2\] holds for problems like with an additional drift term in the differential operator.
As a consequence of the previous analysis, we will consider continuous nonnegative initial data throughout the rest of this work, since we are interested in nonnegative solutions of .
Liouville-type result and long time behavior {#sec3}
============================================
In order to determine the long time behavior of the solutions of we need to study the solutions of the elliptic system associated to it, precisely $$\label{31}
\left\{ \begin{array}{ll}
-D U_{xx}(x)=\nu V(x,L)-\m U(x) & \text{for } x\in{{\mathbb{R}}}, \\
-d\D V(x,y)=f(V) & \text{for } (x,y)\in\O, \\
d V_y(x,L)=\m U(x)-\nu V(x,L) & \text{for } x\in{{\mathbb{R}}}, \\
V(x,0)=0 & \text{for } x\in{{\mathbb{R}}}. \end{array}\right.$$ Actually, Propositions \[pr31\] and \[pr34\] below suggest that we have to focus on solutions of which are $x-$independent. They are of the form $(U,V(y))$, where $V$ satisfies and, thanks to the first equation of , $U=\frac{\nu}{\m} V(L)$. The first result regarding the long time behavior is the following
\[pr31\] Let $(u,v)$ be the solution of starting with a nonnegative, bounded initial datum $(u_0,v_0)$. Then, there exists a nonnegative, bounded solution $V_1$ of such that $$\limsup_{t\to+\infty}u(x,t)\leq U_1, \qquad \limsup_{t\to+\infty}v(x,y,t)\leq V_1(y)$$ locally uniformly in $\overline{\O}$, where $$U_1=\frac{\nu}{\m} V_1(L).$$
Observe preliminarily that, if we define, for $(x,y)\in\overline{\O}$, $$\overline{v}(x,y)=\max\left\{1,\left\|v_0\right\|_{\infty},\frac{\mu}{\nu}\left\|u_0\right\|_{\infty}\right\}, \qquad \overline{u}(x)=\frac{\nu}{\mu}\overline{v},$$ then $(\overline{u},\overline{v})$ is a strict supersolution of which is larger than $(u_0,v_0)$. Therefore, by Proposition \[prcp1\], we have that the solution of with $(\overline{u},\overline{v})$ as initial datum is decreasing and, thanks to parabolic estimates, converges locally uniformly in $\overline{\O}$ to a nonnegative stationary solution $(U_1,V_1)$ of , i.e. a solution of . Proposition \[prcp1\] also gives $$\limsup_{t\to+\infty} u(x,t)\leq U_1(x), \qquad \limsup_{t\to+\infty} v(x,y,t)\leq V_1(x,y).$$ From the invariance of Problem in the $x$ direction and the uniqueness of the associated Cauchy problem, translations in $x$ of a solution of with a certain initial datum coincide with the solution of starting from the translated initial datum. Since $(\ov{u},\ov{v})$ is $x-$independent, the $x-$invariance of $(U_1,V_1)$ follows.
Obviously, admits the trivial solution $V=0$. In the following proposition we will show that is a necessary and sufficient condition for to possess positive bounded solutions.
\[pr32\] Problem admits positive bounded solutions if and only if holds. Moreover, if we assume and , then Problem admits a *unique* positive bounded solution.
We begin with the necessity of . Suppose it does not hold and Problem admits a positive solution $v$. Then, multiplying the differential equation of by $\sin(\frac{\pi}{2L}y)$ and integrating by parts in $(0,L)$, we get $$\begin{gathered}
\int_0^Lf(v(y))\sin\left(\frac{\pi}{2L}y\right)\,dy=d\left(\frac{\pi}{2L}\right)^2\int_0^Lv(y)\sin\left(\frac{\pi}{2L}y\right)\,dy\geq \\
\geq f'(0)\int_0^Lv(y)\sin\left(\frac{\pi}{2L}y\right)\,dy>\int_0^Lf(v(y))\sin\left(\frac{\pi}{2L}y\right)\,dy,\end{gathered}$$ where, for the last relation, we have used the second assumption in . We have reached a contradiction and therefore no positive solution can exist.
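The threshold $d(\pi/2L)^2$ used in this argument is the principal eigenvalue of $-d(\cdot)''$ on $(0,L)$ with the mixed conditions $u(0)=0$, $u'(L)=0$, and the test function $\sin(\frac{\pi}{2L}y)$ is the corresponding eigenfunction. As a numerical cross-check (our own sketch; the finite-difference discretization, ghost-node treatment of the Neumann condition, and inverse power iteration with a Thomas tridiagonal solver are our choices, not the paper's):

```python
def principal_eigenvalue(L, d=1.0, N=2000, iters=100):
    """Smallest eigenvalue of -d u'' on (0, L) with u(0) = 0, u'(L) = 0,
    via second-order finite differences and inverse power iteration."""
    h = L / N
    sub = [-1.0] * N          # sub-diagonal (node i coupled to node i-1)
    dia = [2.0] * N
    sup = [-1.0] * N          # super-diagonal
    sub[N - 1] = -2.0         # ghost node: u_{N+1} = u_{N-1} since u'(L) = 0
    x = [1.0] * N
    nrm = 1.0
    for _ in range(iters):
        # Thomas algorithm: solve T v = x for the tridiagonal matrix T
        c = [0.0] * N
        g = [0.0] * N
        c[0] = sup[0] / dia[0]
        g[0] = x[0] / dia[0]
        for i in range(1, N):
            piv = dia[i] - sub[i] * c[i - 1]
            c[i] = sup[i] / piv
            g[i] = (x[i] - sub[i] * g[i - 1]) / piv
        v = [0.0] * N
        v[N - 1] = g[N - 1]
        for i in range(N - 2, -1, -1):
            v[i] = g[i] - c[i] * v[i + 1]
        nrm = max(abs(t) for t in v)
        x = [t / nrm for t in v]
    # after convergence T x = x / nrm, so the smallest eigenvalue of T is 1/nrm
    return d / (nrm * h * h)
```

With $d=1$, $L=2$ this returns a value close to $(\pi/4)^2\approx 0.6169$, in agreement with the formula $d(\pi/2L)^2$.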
Now we pass to the sufficiency. First of all, we show that any positive solution of must satisfy $v(y)<1$ in $[0,L]$. Indeed, if there were a point $y_0\in(0,L]$ where $v(y_0)=1$, then either $y_0$ would be a maximum of $v$ in a neighborhood of $y_0$ (relative to $(0,L]$), in which case $v\equiv 1$ by the uniqueness of the Cauchy problem $-dv''=f(v)$ with conditions $v(y_0)=1, v'(y_0)=0$, contradicting $v(0)=0$; or there would be $y_1\in(0,L]$ with $v(y_1)=\max v>1$. But in this case $-dv''(y_1)=f(v(y_1))<0$, which is impossible at a maximum.
As a consequence of this result and , we have that $v''<0$ in $(0,L)$ and therefore $v'$ is decreasing. This means that $v'$ is positive in $(0,L)$, since $v'(L)=0$, i.e. $v$ is increasing in $(0,L)$. By multiplying the differential equation of by $v'$ and integrating in $(y,L)$, with $0<y<L$, we get $$d\frac{v'(y)^2}{2}=\int_{v(y)}^{v(L)}f(s)\,ds$$ and, recalling that $v'>0$, we have that any solution of must satisfy $$L=\int_0^L\frac{v'(y)\,dy}{\sqrt{\frac{2}{d}\int_{v(y)}^{v(L)}f(s)\,ds}}=\int_0^1\frac{d\xi}{\sqrt{\frac{2}{d}\int_{\xi}^{1}\frac{f(v(L)\eta)}{v(L)\eta}\eta\,d\eta}}$$ with $0<v(L)<1$. Therefore, if we define the function $$\label{33}
M(\r):=\int_0^1\frac{d\xi}{\sqrt{\frac{2}{d}\int_{\xi}^{1}\frac{f(\r\eta)}{\r\eta}\eta\,d\eta}},$$ which is continuous in $(0,1)$ and measures the length of the interval necessary for a solution of $$\left\{ \begin{array}{l}
-dV''(y)=f(V(y)) \quad \text{for } y\in(0,y_0)\\
V(0)=0, \quad V'(y_0)=0 \end{array}\right.$$ to attain its maximum value $\r$ (at $y_0$), then, by the uniqueness of the Cauchy problem associated to the ordinary differential equation of , we have that any solution of $M(\r)=L$ provides a solution of . The function $M$ satisfies $$\lim_{\r\ua 1}M(\r)=+\infty$$ since a maximum equal to $1$ cannot be attained in a finite interval, as seen before. Moreover, thanks to , $$\lim_{\r\da 0}M(\r)=\sqrt{\frac{d}{f'(0)}}\int_0^1\frac{d\xi}{\sqrt{1-\xi^2}}=\sqrt{\frac{d}{f'(0)}}\frac{\pi}{2}<L$$ and, therefore, there exists $\bar{\r}\in(0,1)$ such that $M(\bar{\r})=L$, which provides us with a solution of .
As for uniqueness, it is easily seen that, under hypothesis , the function $M$ is increasing, and therefore there exists a unique value of $\rho$ for which $M(\rho)=L$.
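The behavior of $M$ just described can be checked numerically for the prototypical KPP nonlinearity $f(s)=s(1-s)$, for which $f(\r\eta)/(\r\eta)=1-\r\eta$ and the inner integral is explicit. The following quadrature is our own sketch (not part of the proof); the substitution $\xi=1-t^2$ removes the inverse-square-root singularity of the integrand at $\xi=1$:

```python
import math

def M(rho, d=1.0, n=4000):
    """Midpoint quadrature for M(rho) in (33) with f(s) = s(1-s), so that
    int_xi^1 (f(rho*eta)/(rho*eta)) eta d(eta) = (1-xi^2)/2 - rho (1-xi^3)/3.
    The substitution xi = 1 - t**2 tames the singularity at xi = 1."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        xi = 1.0 - t * t
        inner = (1.0 - xi**2) / 2.0 - rho * (1.0 - xi**3) / 3.0
        total += 2.0 * t / math.sqrt(2.0 * inner / d) * h
    return total
```

One checks that $M(\r)\to\frac{\pi}{2}\sqrt{d/f'(0)}$ as $\r\da 0$ and that $M$ is increasing, consistently with the uniqueness argument, since here $f(s)/s=1-s$ is nonincreasing.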
\[re33\] If condition does not hold, Problem may exhibit more than one solution. Consider indeed $f(s)=s(-6s^3+9s^2-4s+1)$, which satisfies but not . With this choice, the function $M$ defined in , which is ${\mathcal}{C}^1[0,1)$, satisfies $$M'(\r)=\sqrt{\frac{15\, d}{2}}\int_0^1h(\r,\xi)\,d\xi$$ where $$\label{34}
h(\r,\xi):=\frac{216\r^2(1-\xi^5)-270\r(1-\xi^4)+80(1-\xi^3)}{\left(-72\r^3(1-\xi^5)+135\r^2(1-\xi^4)-80\r(1-\xi^3)+30(1-\xi^2)\right)^{3/2}}$$ and, as a consequence, $M'(0)>0$, since $h(0,\xi)>0$ for every $\xi\in(0,1)$. On the other hand, for $\r=1/2$ the numerator in reduces to $$h_1(\xi)=-54\xi^5+135\xi^4-80\xi^3-1,$$ which satisfies $h_1(0)=-1$, is decreasing for $\xi\in(0,2/3)$ and increasing for $\xi>2/3$. Since $h_1(1)=0$, this implies that $h(1/2,\xi)<0$ for all $\xi\in(0,1)$ and, therefore, $M'(1/2)<0$. Recalling that $M(\r)\to+\infty$ as $\r\ua 1$, the previous analysis entails that $L$ can be chosen in such a way that Problem possesses at least $3$ solutions.
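The sign analysis in this remark is elementary to verify numerically. A sketch (our own; $h_1$ and the numerator of are transcribed from the text, and $h_1'$ is factored by hand as $-30\xi^2(3\xi-2)(3\xi-4)$):

```python
def h1(x):
    """The polynomial h_1(xi) = -54 xi^5 + 135 xi^4 - 80 xi^3 - 1."""
    return -54.0 * x**5 + 135.0 * x**4 - 80.0 * x**3 - 1.0

def h1_prime(x):
    """h_1'(xi) = -270 xi^4 + 540 xi^3 - 240 xi^2
                = -30 xi^2 (3 xi - 2)(3 xi - 4):
    negative on (0, 2/3), positive on (2/3, 4/3)."""
    return -270.0 * x**4 + 540.0 * x**3 - 240.0 * x**2

def h_numerator(rho, x):
    """Numerator of h(rho, xi) in (34)."""
    return (216.0 * rho**2 * (1 - x**5) - 270.0 * rho * (1 - x**4)
            + 80.0 * (1 - x**3))
```

Direct evaluation confirms that $h_1$ is the numerator of at $\r=1/2$, that $h_1(0)=-1$, $h_1(1)=0$, and $h_1<0$ on $(0,1)$ (decreasing then increasing), and that the numerator at $\r=0$ is $80(1-\xi^3)>0$, whence $M'(0)>0$ and $M'(1/2)<0$.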
The last result we need in order to prove Theorem \[thltb\] is the following
\[pr34\] Assume and let $(u,v)$ be the solution of starting from a bounded, nonnegative initial datum not identically equal to $(0,0)$. Then, there exists a positive bounded solution $V_2$ of such that $$U_2\leq \liminf_{t\to+\infty}u(x,t), \qquad V_2\leq \liminf_{t\to+\infty}v(x,y,t)$$ locally uniformly in $(x,y)\in\overline{\O}$, where $$U_2=\frac{\nu}{\mu}V_2(L).$$
Consider $$\label{35}
(\un{u},\un{v}):=\left\{\begin{array}{ll}
\cos(\o x)\left(1,\frac{\mu\sin(\b y)}{d\b\cos(\b L)+\nu\sin(\b L)}\right) &\text{if $(x,y)\in\left(-\frac{\pi}{2\o},\frac{\pi}{2\o}\right)\times\left[0,L\right]$} \\
(0,0) &\text{otherwise}.
\end{array}\right.$$ We now show that $\b$ and $\o$ can be chosen so that $(\un{u},\un{v})$ satisfy $$\label{36}
\left\{ \begin{array}{ll}
-D\un{u}_{xx}(x)\leq\nu \un{v}(x,L)-\m \un{u}(x) & \text{for } x\in \left(-\frac{\pi}{2\o},\frac{\pi}{2\o}\right) \\
-d\D \un{v}(x,y)\leq(f'(0)-\d)\un{v} & \text{for } (x,y)\in\left(-\frac{\pi}{2\o},\frac{\pi}{2\o}\right)\times\left(0,L\right) \\
d\un{v}_y(x,L)=\m \un{u}(x)-\nu \un{v}(x,L) & \text{for } x\in \left(-\frac{\pi}{2\o},\frac{\pi}{2\o}\right) \\
\un{v}(x,0)=0 & \text{for } x\in \left(-\frac{\pi}{2\o},\frac{\pi}{2\o}\right), \end{array}\right.$$ for $0<\d<f'(0)$ and therefore, by the second relation of , there exists $\e_0$ such that, for all $0<\e<\e_0$, $\e(\un{u},\un{v})$ is a strict generalized subsolution of to which Proposition \[prcp2\] can be applied. Observe that, with choice , the last two equations of are satisfied and the first two inequalities reduce to $$D\o^2\leq \frac{-\mu d\b\cos(\b L)}{d\b\cos(\b L)+\nu\sin(\b L)}, \qquad
d\o^2+d\b^2\leq f'(0)-\d.$$ Now, thanks to , it is possible to fix $\d$ in such a way that $$d\left(\frac{\pi}{2L}\right)^2<f'(0)-\d$$ and take $\frac{\pi}{2L}<\b<\frac{\pi}{L}$ in a neighborhood of $\frac{\pi}{2L}$ (we denote $\b\sim\frac{\pi}{2L}$), so that $$m:=\min\left\{\frac{f'(0)-\d}{d}-\b^2,\frac{-\mu d\b\cos(\b L)}{D\left(d\b\cos(\b L)+\nu\sin(\b L)\right)}\right\}>0.$$ As a consequence, if $\o^2\leq m$, $(\un{u},\un{v})$ satisfies . Moreover, reducing $\e$ if necessary, we can assume that $\e(\un{u},\un{v})<(u(x,1),v(x,y,1))$, because, thanks to Proposition \[prcp1\] and the Hopf lemma, we have that $$u(x,1)>0, \quad v_y(x,0,1)>0 \quad \text{ and } \quad v(x,y,1)>0 \quad \text{ for all } (x,y)\in\O,$$ and, in addition, $v(x,L,1)>0$ for every $x\in{{\mathbb{R}}}$, since if there were $x_0$ such that $v(x_0,L,1)=0$, then from the third equation in we would have $$dv_y(x_0,L,1)=\mu u(x_0,1)>0,$$ which is impossible, since, again by Proposition \[prcp1\], $v(x,y,1)\geq 0$ in $\ov{\O}$.
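The choice of $\d$, $\b$ and $\o$ above can be illustrated numerically. The sketch below uses hypothetical parameter values $d=D=\mu=\nu=L=1$ and $f'(0)=4$ (so that $d(\pi/2L)^2<f'(0)$), takes $\b$ slightly above $\pi/2L$, and checks that $m>0$ and that $\o=\sqrt{m}/2$ satisfies both reduced inequalities:

```python
import math

d = D = mu = nu = L = 1.0
fp0 = 4.0                          # hypothetical f'(0), with d*(pi/(2L))**2 < f'(0)
delta = 0.5
beta = math.pi / (2 * L) * 1.02    # beta slightly above pi/(2L)

q = d * beta * math.cos(beta * L) + nu * math.sin(beta * L)  # > 0 below beta_bar
m = min((fp0 - delta) / d - beta**2,
        -mu * d * beta * math.cos(beta * L) / (D * q))
assert q > 0 and m > 0

omega = math.sqrt(m) / 2
# the two reduced inequalities for (36)
assert D * omega**2 <= -mu * d * beta * math.cos(beta * L) / q
assert d * omega**2 + d * beta**2 <= fp0 - delta
```

Note that $\cos(\b L)<0$ for $\b>\pi/2L$, so the second quantity in the minimum is positive as long as $d\b\cos(\b L)+\nu\sin(\b L)>0$.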
By Proposition \[prcp2\], the solution of with $\e(\un{u},\un{v})$ as initial datum converges, increasingly, to a stationary solution $(U_2,V_2)$ of locally uniformly in $\overline{\O}$ and, moreover, $$U_2\leq\liminf_{t\to+\infty}u(x,t) \quad \text{ and } \quad V_2\leq\liminf_{t\to+\infty}v(x,y,t).$$ As before we have, for all $(x,y)\in\O$, $$U_2(x)>0, \quad V_{2,y}(x,0)>0, \quad V_2(x,y)>0 \quad \text{ and } \quad V_2(x,L)>0$$ and, since $\e(\un{u},\un{v})$ is continuous and compactly supported and, thanks to the above-mentioned monotonicity, does not touch $(U_2,V_2)$, there exists $k>0$, $k\sim 0$, such that $\e(\un{u}(x-h),\un{v}(x-h,y))$, which is still a subsolution of , lies below $(U_2(x),V_2(x,y))$ for all $h\in(-k,k)$. Then, by the uniqueness of the Cauchy problem associated to , the solution of with the translated subsolution as initial datum converges to the corresponding translation of $(U_2,V_2)$ and, by comparison, we have that $(U_2,V_2)$ is smaller than small translations in the $x$ direction of itself, which entails that the partial derivatives of $(U_2,V_2)$ with respect to $x$ are $0$.
We are now able to give the
*Proof of Theorem \[thltb\].* If holds, we obtain from Propositions \[pr31\] and \[pr32\], since Proposition \[prcp1\] guarantees that $(u,v)\geq(0,0)$. On the other hand, if and hold, follows from Propositions \[pr31\], \[pr32\] and \[pr34\].
Since we are interested in the speed of propagation towards positive steady states of , we will assume and throughout the rest of the paper, so that holds.
Supersolutions in the moving framework {#sec4}
======================================
In this section we construct positive supersolutions to moving at appropriate speeds in the $x$-direction. This will be the key to finding an upper bound for the asymptotic speed of propagation of Theorem \[thmain\] (see Section \[sec6\]). Observe that solutions of the linearized problem $$\label{41}
\left\{ \begin{array}{lll}
u_t(x,t)-Du_{xx}(x,t)=\nu v(x,L,t)-\m u(x,t) & \text{for } x\in{{\mathbb{R}}}, &\!\! t>0 \\
v_t(x,y,t)-d\D v(x,y,t)=f'(0)v(x,y,t) & \text{for } (x,y)\in\O, &\!\! t>0 \\
dv_y(x,L,t)=\m u(x,t)-\nu v(x,L,t) & \text{for } x\in{{\mathbb{R}}}, &\!\! t>0 \\
v(x,0,t)=0 & \text{for } x\in{{\mathbb{R}}}, &\!\! t>0, \end{array}\right.$$ provide us with supersolutions to , thanks to the second condition of . We start by looking for solutions of of the form $$\label{42}
(u(x,t),v(x,y,t))=e^{\a(x+ct)}(1,\g \sin (\b y))$$ with positive $\a,\g, c$ and $\b\in(0,\frac{\pi}{L})$, in order for $u$ and $v$ to be positive in $\O$. Notice that is a solution of if and only if $$\label{43}
\left\{ \begin{array}{l}
-D\a^2+c\a=\n\g\sin(\b L)-\m \\
-d\a^2+d\b^2+c\a=f'(0) \\
\g=\frac{\m}{d\b \cos(\b L)+\n \sin (\b L)}. \end{array}\right.$$ In order for $\g$ to be positive, $\b$ must lie in $(0,\ov{\b})$, where $\ov{\b}\in(\frac{\pi}{2L},\frac{\pi}{L})$ is the first positive value of $\b$ for which $$\label{44}
d\b\cos(\b L)+\n \sin(\b L)=0.$$
By substituting the expression of $\g$ given by the third equation of into the first one, we get $$-D\a^2+c\a+\frac{\m}{1+\frac{\n\tan(\b L)}{d\b}}=0,$$ whose solutions are $$\label{eqdef}
\a_D^{\pm}(c,\b)=\frac{1}{2D}\left(c\pm \sqrt{c^2+\chi(\b)}\right),$$ where we have set $$\chi(\b)=\frac{4\m D}{1+\frac{\n\tan(\b L)}{d\b}}=\frac{4\m dD\b\cos(\b L)}{d\b\cos(\b L)+\n \sin(\b L)}.$$ It is easy to see that $\chi(\b)$ is continuous, even, decreases for $\b\in(0,\ov{\b})$ and satisfies $$\lim_{\b\da 0} \chi(\b)=\frac{4\m D}{1+\frac{\n L}{d}}, \qquad \chi\left(\frac{\pi}{2L}\right)=0, \qquad \lim_{\b\ua\ov{\b}} \chi(\b)=-\infty,$$ where $\ov{\b}$ is the one defined in . Hence, for every $c>0$, there exists a unique $\tilde{\b}(c)\in(\frac{\pi}{2L},\ov{\b})$, for which $$\label{45}
c^2=-\chi(\tilde{\b}(c)).$$ Moreover it satisfies $$\lim_{c\da 0} \tilde{\b}(c)=\frac{\pi}{2L}, \qquad \lim_{c\ua \infty} \tilde{\b}(c)=\ov{\b}.$$
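The definition of $\tilde{\b}(c)$ in lends itself to a simple numerical computation. The following sketch (with hypothetical values $d=D=\mu=\nu=L=1$) first locates $\ov{\b}$ as the root of in $(\frac{\pi}{2L},\frac{\pi}{L})$ and then solves $c^2=-\chi(\tilde{\b}(c))$ by bisection, using the fact that $\chi$ decreases from $0$ to $-\infty$ on $(\frac{\pi}{2L},\ov{\b})$:

```python
import math

d = D = mu = nu = L = 1.0

def q(b):                      # d*b*cos(b*L) + nu*sin(b*L); first positive zero = beta_bar
    return d * b * math.cos(b * L) + nu * math.sin(b * L)

def chi(b):                    # chi(beta) = 4*mu*d*D*beta*cos(beta*L) / q(beta)
    return 4 * mu * d * D * b * math.cos(b * L) / q(b)

def bisect(f, lo, hi, it=200):
    """Plain bisection, assuming a sign change of f on [lo, hi]."""
    for _ in range(it):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

beta_bar = bisect(q, math.pi / (2 * L), math.pi / L)

def beta_tilde(c):             # unique solution of c**2 = -chi(beta) in (pi/2L, beta_bar)
    return bisect(lambda b: c**2 + chi(b),
                  math.pi / (2 * L) * 1.000001, beta_bar * 0.999999)

bt = beta_tilde(1.0)
assert math.pi / (2 * L) < bt < beta_bar
assert abs(1.0**2 + chi(bt)) < 1e-6
assert beta_tilde(0.01) < bt < beta_tilde(10.0)   # beta_tilde increases with c
```

The last assertion reflects the stated limits $\tilde{\b}(c)\to\frac{\pi}{2L}$ as $c\da 0$ and $\tilde{\b}(c)\to\ov{\b}$ as $c\ua\infty$.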
As a consequence of these properties, for fixed $c>0$, $\a_D^{+}(c,\b)$ is a regular even function, which decreases in $(0,\tilde{\b}(c))$ and satisfies $$\begin{gathered}
\a_D^+(c,0)\!=\!\frac{1}{2D}\left(c\!+\!\sqrt{c^2+\frac{4\m D}{1+\frac{\n L}{d}}}\right), \quad \a_D^{+}\left(c,\frac{\pi}{2L}\right)\!=\!\frac{c}{D}, \quad \a_D^{+}(c,\tilde{\b}(c))\!=\!\frac{c}{2D}, \label{eqstar} \\
\p_\b\a_D^+(c,0)=0, \qquad \lim_{\b\ua\tilde{\b}(c)} \p_\b\a_D^+(c,\b)=-\infty. \notag\end{gathered}$$ In addition it is easy to verify that $$\label{46}
\p_c\a_D^+(c,\b)>0.$$ Since $\a_D^-$ is symmetric to $\a_D^+$ with respect to the line $\a=\frac{c}{2D}$, analogous properties can be established for $\a_D^-$. In particular observe that $$\a_D^{-}\left(c,\frac{\pi}{2L}\right)=0$$ and that, for fixed $\b\in(\frac{\pi}{2L},\ov{\b})$, we have $$\begin{aligned}
\lim_{c\ua\infty} \a_D^{-}(c,\b)=0,\end{aligned}$$ $\a_D^{-}$ decreasing monotonically in $c$. As a consequence, the bounded region of the first quadrant in the $(\b,\a)-$plane delimited by the curve $$\label{47}
\S_D(c):=\left\{(\b,\a_D^-(\b)):\b\in\left[\frac{\pi}{2L},\tilde{\b}(c)\right]\right\}\cup\left\{(\b,\a_D^+(\b)):\b\in\left(0,\tilde{\b}(c)\right]\right\},$$ which has been represented in Figure \[fig21\], invades monotonically, as $c\ua\infty$, the half-strip $\{(\b,\a): \b\in(0,\ov{\b}), \a>0\}$.
![The curve $\S_D(c)$ defined in .[]{data-label="fig21"}](fig1.pdf)
As for the monotonicity of the curves with respect to $D$, the other parameters being fixed, it follows from the definition of $\a_D^{\pm}$ that $$\label{48}
\begin{split}
D&\mapsto \a_D^+(c,\b) \; \text{ is decreasing for every $\b\in[0,\tilde{\b}(c)]$} \\
D&\mapsto \a_D^-(c,\b) \; \text{ is increasing for every $\b\in\left(\frac{\pi}{2L},\tilde{\b}(c)\right]$.}
\end{split}$$
On the other hand (see Figure \[fig22\]), the second equation of represents a hyperbola whose branches are given by $$\label{49}
\a_d^{\pm}(c,\b)=\frac{1}{2d}\left(c\pm\sqrt{c^2-\eta(\b)}\right),$$ where we have set $$\eta(\b):=4d(f'(0)-d\b^2)={c_{\operatorname{KPP}}}^2-4d^2\b^2,$$ $c_{\operatorname{KPP}}$ being the one defined in . The function $\eta$ is decreasing and satisfies $$\eta(0)={c_{\operatorname{KPP}}}^2, \qquad \eta\left(\sqrt{\frac{f'(0)}{d}}\right)=0, \qquad \lim_{\b\ua\infty}\eta(\b)=-\infty.$$ Therefore, the functions $\a_d^{\pm}(c,\b)$ are defined for every $\b\in(0,+\infty)$ if $c\geq {c_{\operatorname{KPP}}}$, while, if $c\in(0,{c_{\operatorname{KPP}}})$, there exists $\hat{\b}(c)>0$ such that their domain (within the positive part of the real line) is $[\hat{\b}(c),+\infty)$, where $\hat{\b}(c)$ satisfies $$\label{410}
c^2=\eta(\hat{\b}(c)).$$ As a consequence, $$\lim_{c\da 0} \hat{\b}(c)=\sqrt{\frac{f'(0)}{d}}, \qquad \lim_{c\ua {c_{\operatorname{KPP}}}} \hat{\b}(c)=0.$$ It can be easily seen, as $\a_d^-$ is the reflection of $\a_d^+$ about the line $\a=\frac{c}{2d}$, that $$\a_d^-\left(c,\sqrt{\frac{f'(0)}{d}}\right)=0, \qquad \lim_{\b\ua\infty}\a_d^+(c,\b)=+\infty,$$ and, in the proper domain of definition, $$\label{411}
\p_\b\a_d^+(c,\b)>0, \qquad \p_c\a_d^+(c,\b)>0, \qquad \p_c\a_d^-(c,\b)<0.$$ Moreover, if $0<c<{c_{\operatorname{KPP}}}$, we have that $$\a_d^+(c,\hat{\b}(c))=\frac{c}{2d}, \qquad \lim_{\b\da\hat{\b}(c)}\p_\b\a_d^{+}(c,\b)=+\infty,$$ while, for $c={c_{\operatorname{KPP}}}$, the hyperbolas degenerate into the straight lines with equations $$\label{ipdeg}
\a_d^{\pm}({c_{\operatorname{KPP}}},\b)=\pm\b+\frac{{c_{\operatorname{KPP}}}}{2d}.$$ Finally, for $c>{c_{\operatorname{KPP}}}$, we have that $$\a_d^{\pm}(c,0)=\frac{c\pm\sqrt{c^2-{c_{\operatorname{KPP}}}^2}}{2d}, \qquad \p_\b\a_d^{\pm}(c,0)=0$$ and, for fixed $\b$, $$\lim_{c\ua\infty} \a_d^{-}(c,\b)=0.$$ As a consequence of all the aforementioned properties, if we set $\hat{\b}(c)=0$ for $c\geq {c_{\operatorname{KPP}}}$, which is consistent with the previous notation, and define $$\S_d(c):=\left\{(\b,\a_d^-(\b)):\b\in\left[\hat{\b}(c),\sqrt{\frac{f'(0)}{d}}\right]\right\}\cup\left\{(\b,\a_d^+(\b)):\b\geq\hat{\b}(c)\right\},$$ we have that the region of the first quadrant in the $(\b,\a)-$plane bounded by $\S_d(c)$ and containing the point $\left(\sqrt{\frac{f'(0)}{d}},\frac{c}{2d}\right)$ invades monotonically, as $c\ua\infty$, the first quadrant in the $(\b,\a)-$plane. All these features have been represented in Figure \[fig22\].
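Two of the properties above are easy to confirm numerically: at $c={c_{\operatorname{KPP}}}$ the square root in collapses to $2d\b$, giving the straight lines of , and the lower branch vanishes at $\b=\sqrt{f'(0)/d}$, where $\eta=0$. A sketch with hypothetical values $d=1$, $f'(0)=1$:

```python
import math

d, fp0 = 1.0, 1.0
ckpp = 2 * math.sqrt(d * fp0)            # c_KPP = 2*sqrt(d*f'(0))

def eta(b):
    return ckpp**2 - 4 * d**2 * b**2     # eta(beta) = c_KPP^2 - 4 d^2 beta^2

def alpha_d(c, b, sign):
    return (c + sign * math.sqrt(c**2 - eta(b))) / (2 * d)

# at c = c_KPP the hyperbola degenerates into alpha = +/- beta + c_KPP/(2d)
for b in (0.1, 0.7, 1.3):
    assert math.isclose(alpha_d(ckpp, b, +1), b + ckpp / (2 * d))
    assert math.isclose(alpha_d(ckpp, b, -1), -b + ckpp / (2 * d))

# the lower branch vanishes at beta = sqrt(f'(0)/d)
b0 = math.sqrt(fp0 / d)
assert abs(alpha_d(3.0, b0, -1)) < 1e-12
assert abs(alpha_d(ckpp, b0, -1) - (ckpp / (2 * d) - b0)) < 1e-12
```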
![The hyperbola defined by the second equation of , for $0<c<{c_{\operatorname{KPP}}}$ (A), $c={c_{\operatorname{KPP}}}$ (B) and $c>{c_{\operatorname{KPP}}}$ (C).[]{data-label="fig22"}](fig221bis.pdf "fig:")
![](fig222bis.pdf "fig:")
![](fig223bis.pdf "fig:")
Other candidates for constructing supersolutions to for $c>{c_{\operatorname{KPP}}}$ are the functions $$\label{412}
(u(x,t),v(x,y,t))=e^{\a(x+ct)}(1,\phi(y))$$ with $$\phi(y):=\g(e^{\b y}-e^{-\b y}),$$ where $\a,\b,\g$ are positive constants. By plugging into we are led to $$\label{413}
\left\{ \begin{array}{l}
-D\a^2+c\a=\n\phi(L)-\m \\
-d\a^2-d\b^2+c\a=f'(0) \\
\g=\frac{\m}{d\b(e^{\b L}+e^{-\b L})+\n (e^{\b L}-e^{-\b L})}>0. \end{array}\right.$$ Using the expression of $\phi(L)$ given by the third equation of , the first one reduces to $$-D\a^2+c\a+\frac{\m d\b(e^{\b L}+e^{-\b L})}{d\b(e^{\b L}+e^{-\b L})+\n (e^{\b L}-e^{-\b L})}=0,$$ whose solutions are $$\label{tipo2}
\tilde{\a}_D^{\pm}(c,\b)=\frac{1}{2D}\left(c\pm \sqrt{c^2+\tilde{\chi}(\b)}\right),$$ where we have set $$\tilde{\chi}(\b)=\frac{4\m D}{1+\frac{\n\tanh(\b L)}{d\b}}.$$ This function is positive and therefore $\tilde{\a}_D^{+}(c,\b)$ is defined for every $\b\in{{\mathbb{R}}}$. It can be easily seen that it is even and satisfies the following monotonicity conditions $$\label{417}
\p_{\b}\tilde{\a}_D^+(c,\b)>0, \quad \p_{c}\tilde{\a}_D^+(c,\b)>0, \quad \p_{D}\tilde{\a}_D^+(c,\b)<0$$ for every positive $c,\b$. Moreover we have $\tilde{\a}_D^+(c,0)=\a_D^+(c,0)$, for every $c>0$. Naturally, similar properties hold for $\tilde{\a}_D^-$, taking into account that it is symmetric to $\tilde{\a}_D^+$ with respect to $\a=\frac{c}{2D}$. Finally, observe that, for every $\b\geq 0$, $$\lim_{c\ua+\infty}\tilde{\a}_D^+(c,\b)=+\infty$$ and, therefore, the regions of the first quadrant in the $(\b,\a)-$plane delimited by $$\label{418}
\tilde{\S}_D(c):=\left\{(\b,\tilde{\a}_D^-(c,\b)):\b>0\right\}\cup\left\{(\b,\tilde{\a}_D^+(c,\b)):\b>0\right\}$$ and containing $\left(1,\frac{c}{2D}\right)$ invade monotonically the first quadrant of the plane $(\b,\a)$.
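The claimed properties of $\tilde{\a}_D^+$ are easy to confirm numerically. A sketch with hypothetical values $d=\mu=\nu=L=1$, $D=2$, $c=1$: $\tilde{\chi}>0$, $\tilde{\a}_D^+(c,0)=\a_D^+(c,0)$ (in the limit $\b\da 0$, where $\tanh(\b L)/\b\to L$), and $\tilde{\a}_D^+$ increases in $\b$ and $c$ and decreases in $D$.

```python
import math

d, mu, nu, L, c = 1.0, 1.0, 1.0, 1.0, 1.0

def chi_tilde(b, D):
    return 4 * mu * D / (1 + nu * math.tanh(b * L) / (d * b))

def alpha_tilde_D_plus(c, b, D):
    return (c + math.sqrt(c**2 + chi_tilde(b, D))) / (2 * D)

def alpha_D_plus_at_0(c, D):        # alpha_D^+(c, 0), using lim tanh(b*L)/b = L
    return (c + math.sqrt(c**2 + 4 * mu * D / (1 + nu * L / d))) / (2 * D)

D = 2.0
assert chi_tilde(0.7, D) > 0
# tilde{alpha}_D^+(c, 0) = alpha_D^+(c, 0), as a limit beta -> 0
assert abs(alpha_tilde_D_plus(c, 1e-8, D) - alpha_D_plus_at_0(c, D)) < 1e-8
# monotonicity: increasing in beta and c, decreasing in D
assert alpha_tilde_D_plus(c, 0.5, D) < alpha_tilde_D_plus(c, 1.5, D)
assert alpha_tilde_D_plus(1.0, 0.5, D) < alpha_tilde_D_plus(2.0, 0.5, D)
assert alpha_tilde_D_plus(c, 0.5, 3.0) < alpha_tilde_D_plus(c, 0.5, D)
```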
![The curves $\tilde{\a}_D^+$ and $\tilde{\a}_d^{\pm}$.[]{data-label="fig24"}](fig24.pdf)
On the other hand, the second equation of describes, for $c>{c_{\operatorname{KPP}}}$, as we are assuming, a circle in the $(\b,\a)-$plane, with center at $\left(0,\frac{c}{2d}\right)$ and radius $$r(c)=\frac{\sqrt{c^2-{c_{\operatorname{KPP}}}^2}}{2d}$$ (see Figure \[fig24\]). Precisely, the part of the graph of the circle which lies in the first quadrant is given by $$\tilde{\S}_d(c):=\left\{(\b,\tilde{\a}_d^{\pm}(c,\b)):\b\in[0,r(c)]\right\},$$ where $$\label{419}
\tilde{\a}_d^{\pm}(c,\b)=\frac{1}{2d}\left(c\pm\sqrt{c^2-{c_{\operatorname{KPP}}}^2-4d^2\b^2}\right).$$ The function $\tilde{\a}_d^-$ satisfies $\p_{\b}\tilde{\a}_d^-(c,\b)>0$, $\p_{c}\tilde{\a}_d^-(c,\b)<0$, for $c>{c_{\operatorname{KPP}}}$ and $\b\in(0,r(c))$, while its value at $\b=0$ satisfies $\tilde{\a}_d^-(c,0)=\a_d^-(c,0)$. Moreover, it can be easily seen that $$\tilde{\S}_d({c_{\operatorname{KPP}}})=\left\{\left(0,\frac{{c_{\operatorname{KPP}}}}{2d}\right)\right\}$$ and $$\lim_{c\ua+\infty}\tilde{a}_d^-(c,\b)=0.$$ As a consequence of these properties, the half-disks delimited by $\tilde{\S}_d(c)$ and contained in the first quadrant of the $(\b,\a)-$plane invade it monotonically as $c$ increases. With these ingredients we are able to give the following result
\[pr41\] There exists $c^*:=c^*_L(D,d,\mu,\nu)$ such that, for every $c>c^*$, Problem admits supersolutions either of the form or , with positive $\a,\b,\g$. Moreover the function $D\mapsto c^*(D)$ is increasing.
By the previous discussion, in order to find a solution of , and therefore a supersolution to , it is sufficient to find an intersection between either $\S_D(c)$ and $\S_d(c)$ or $\tilde{\S}_D(c)$ and $\tilde{\S}_d(c)$ lying in the interior of the first quadrant of the $(\b,\a)$-plane. Due to the monotonicity properties of these curves with respect to $c$ shown above, $c^*$ will be the infimum of the values of $c$ for which such an intersection exists; by monotonicity, the intersection then exists for every $c>c^*$.
Let us start by examining the case $D$ small (relative to $d$), in which we consider the curves $\S_D(c)$ and $\S_d(c)$. Recalling that we are assuming , it is clear from the above discussion that, for $c\sim 0$, they are disjoint, since so are their domains of definition. On the contrary, for sufficiently large $c$ such an intersection exists, since $$\a_d^-(c,0)<\a_D^+(c,0), \quad \a_d^-\left(c,\sqrt{\frac{f'(0)}{d}}\right)=0, \quad \a_D^-\left(c,\frac{\pi}{2L}\right)=0.$$
![The tangency between $\S_D(c)$ and $\S_d(c)$, for $D\leq D_{\operatorname{KPP}}$.[]{data-label="fig23"}](fig231.pdf "fig:")
![](fig232.pdf "fig:")
![](fig233.pdf "fig:")
![](fig234.pdf "fig:")
Actually, the first value of $c$, denoted by $\un{c}=\un{c}(D)$, for which an intersection can exist is the one for which the domains of $\a_D^{\pm}$ and $\a_d^{\pm}$ intersect in one point, i.e. the one for which $$\left\{ \begin{array}{l}
c^2=-\chi(\b) \\
c^2=\eta(\b) \end{array}\right.$$ admits a solution. Indeed, as shown in Figure \[fig23\](A), the curves $\S_D(\un{c})$ and $\S_d(\un{c})$ are tangent if $D=d$, their common tangent being vertical. Therefore in this case $c^*=\un{c}$. If $D<d$, there exists $c^*$ satisfying $\un{c}<c^*<{c_{\operatorname{KPP}}}$ and such that $\a_D^-$ and $\a_d^+$ are tangent, as described in Figure \[fig23\](B). If $D>d$ the situation is more complex, because we have to take into account the change in the nature of $\a_d^-(c,\b)$ as $c$ crosses ${c_{\operatorname{KPP}}}$. From , it follows that $$\lim_{D\ua+\infty}\tilde{\b}(c_{\operatorname{KPP}},D)=\frac{\pi}{2L}$$ and, together with the first and third relations of , this implies that there exists a value of $D$, denoted by $D_{\operatorname{KPP}}$, for which, for $c=c_{\operatorname{KPP}}$, $\a_{D_{\operatorname{KPP}}}^+$ and the straight line $\a_d^-$ are tangent (see Figure \[fig23\](C)). Observe that, for $D=2d$, we have $$\a_{D}^+(c_{\operatorname{KPP}},0)>\a_{D}^+\left(c_{\operatorname{KPP}},\frac{\pi}{2L}\right)=\frac{c_{\operatorname{KPP}}}{D}=\frac{c_{\operatorname{KPP}}}{2d}=\a_{d}^-(c_{\operatorname{KPP}},0),$$ which implies that $D_{\operatorname{KPP}}>2d$. Thanks to , for $d<D<D_{\operatorname{KPP}}$ and $c=c_{\operatorname{KPP}}$, $\a_D^+$ and $\a_d^-$ will be secant, therefore in this case the tangency will occur between $\a_D^+$ and $\a_d^-$ for $\un{c}<c^*<{c_{\operatorname{KPP}}}$, as represented in Figure \[fig23\](D).
![Different configurations of the tangency between $\S_D(c)$ and $\S_d(c)$ (continuous lines) or $\tilde{\S}_D(c)$ and $\tilde{\S}_d(c)$ (dashed lines), for $c>c_{\operatorname{KPP}}$.[]{data-label="fig26"}](fig261.pdf "fig:")
![](fig262.pdf "fig:")
![](fig263.pdf "fig:")
Finally, when $D>D_{\operatorname{KPP}}$ we consider at the same time the pairs of curves $\S_D(c)$, $\S_d(c)$ and $\tilde{\S}_D(c)$, $\tilde{\S}_d(c)$. We define $c^*$ to be the smallest value of $c$ for which either the former two curves or the latter two are tangent in a positive $\b$ (see Figure \[fig26\](A) and (B) respectively). The last case to be examined is the one in which all the four curves touch for the first time, being tangent, at $\b=0$ for $c={c_{\operatorname{int}}}$, where ${c_{\operatorname{int}}}$ is $$\label{420}
{c_{\operatorname{int}}}^2={c_{\operatorname{int}}}^2(D)=\frac{\left((d+\nu L)D{c_{\operatorname{KPP}}}^2+ 4\mu d^3\right)^2}{4d(D-d)(d+\nu L)\left((d+\nu L){c_{\operatorname{KPP}}}^2 + 4\mu d^2\right)}$$ (see Figure \[fig26\](C)). In this case, thanks to and , we have that $\S_D(c)$ and $\S_d(c)$ intersect for every $c>{c_{\operatorname{int}}}$ and a certain $\b>0$. Therefore we set $c^*={c_{\operatorname{int}}}$.
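The value ${c_{\operatorname{int}}}$ can be checked numerically: at $c={c_{\operatorname{int}}}$ the curves $\a_D^+$ and $\a_d^-$ (and hence, by the equalities $\tilde{\a}_D^+(c,0)=\a_D^+(c,0)$ and $\tilde{\a}_d^-(c,0)=\a_d^-(c,0)$, also $\tilde{\a}_D^+$ and $\tilde{\a}_d^-$) take the same value at $\b=0$. A sketch with hypothetical parameters $d=\mu=\nu=L=1$, $f'(0)=1$, $D=5$:

```python
import math

d, mu, nu, L, D, fp0 = 1.0, 1.0, 1.0, 1.0, 5.0, 1.0
ckpp2 = 4 * d * fp0                       # c_KPP^2

# the displayed formula for c_int^2
A = d + nu * L
cint2 = (A * D * ckpp2 + 4 * mu * d**3) ** 2 / (
    4 * d * (D - d) * A * (A * ckpp2 + 4 * mu * d**2))
c = math.sqrt(cint2)

# values of the two curves at beta = 0
alpha_D_plus_0 = (c + math.sqrt(c**2 + 4 * mu * D / (1 + nu * L / d))) / (2 * D)
alpha_d_minus_0 = (c - math.sqrt(c**2 - ckpp2)) / (2 * d)

assert c > math.sqrt(ckpp2)               # c_int > c_KPP
assert math.isclose(alpha_D_plus_0, alpha_d_minus_0, rel_tol=1e-12)
```

For these parameters the common value is $\a=\sqrt{6}/4$, and the two quadratics defining $\a_D^+$ and $\a_d^-$ share this root exactly at $c={c_{\operatorname{int}}}$.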
The monotonicity of the function $D\mapsto c^*(D)$ follows from the monotonicity of the curves $\a_D^{\pm}$ and $\tilde{\a}_D^+$, given by and the last relation of respectively.
\[remark42\]
- The proof of Proposition \[pr41\] shows indeed that it is possible to construct a supersolution to of type or not only for every $c>c^*$, but also for $c=c^*$, except for the case represented in Figure \[fig26\](C). Actually it is possible to construct a supersolution also in this case, when $c^*={c_{\operatorname{int}}}$, by taking $$\label{421}
(u(x,t),v(x,y,t))=e^{\a(x+ct)}\left(1,\frac{\mu y}{d+\nu L}\right).$$ This can be heuristically seen by taking the limit of or as $\b\da 0$ (notice that $\g=\g(\b)\to 0$), while a formal proof consists in plugging into and observing that the resulting algebraic system in $\a,c$ has a solution for $c={c_{\operatorname{int}}}$.
- The last part of the proof of Proposition \[pr41\] raises the natural question of characterizing which pair of curves touches first: $\S_D(c)$ and $\S_d(c)$, or $\tilde\S_D(c)$ and $\tilde\S_d(c)$. This analysis can be performed following the same ideas as in the proof of Theorem \[thmain\](iii) (see Section \[sec8\]), i.e. by studying, for $c={c_{\operatorname{int}}}$, the second derivatives of the curves at $\b=0$, and adapting a fine result given in [@RTV Proposition 4.1], which is based on comparison principles and characterizes the total number of possible intersections between the curves. Since this analysis is quite technical in general (indeed, some of the arguments provided in Section \[sec8\] do not adapt to the general situation) and not strictly necessary for the results of this work, we refer the reader to [@RTV] for the details.
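The claim in item (i) of the remark can be verified directly: plugging into , the boundary conditions hold identically, and the remaining two equations form the algebraic system $-D\a^2+c\a=-\frac{\mu d}{d+\nu L}$, $-d\a^2+c\a=f'(0)$, which admits a common root $\a>0$ for $c={c_{\operatorname{int}}}$. A sketch with hypothetical values $d=\mu=\nu=L=1$, $f'(0)=1$, $D=5$:

```python
import math

d, mu, nu, L, D, fp0 = 1.0, 1.0, 1.0, 1.0, 5.0, 1.0
A = d + nu * L
ckpp2 = 4 * d * fp0
cint = math.sqrt((A * D * ckpp2 + 4 * mu * d**3) ** 2 /
                 (4 * d * (D - d) * A * (A * ckpp2 + 4 * mu * d**2)))

def residuals(alpha, c):
    """Residuals of the two nontrivial equations obtained from the linear profile."""
    r1 = -D * alpha**2 + c * alpha + mu * d / A     # road equation
    r2 = -d * alpha**2 + c * alpha - fp0            # field equation
    return r1, r2

# smaller root of the field equation, then check the road equation as well
alpha = (cint - math.sqrt(cint**2 - ckpp2)) / (2 * d)
r1, r2 = residuals(alpha, cint)
assert alpha > 0 and abs(r1) < 1e-12 and abs(r2) < 1e-12
```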
Generalized subsolutions with compact support {#sec5}
=============================================
In this section we construct stationary, compactly supported, generalized (in the sense of Proposition \[prcp2\]) subsolutions in a framework moving in the $x$-direction at speeds slightly smaller than the speed $c^*$ of Proposition \[pr41\]. Provided that Proposition \[prcp2\] can be applied, this will yield a lower bound for the asymptotic speed of propagation and will be the second and last ingredient in the proof of Theorem \[thmain\] (see Section \[sec6\]). The result is the following
\[pr51\] Let $c^*$ be the speed constructed in Proposition \[pr41\]. Then, for every $c<c^*$, $c\sim c^*$, and $\d>0$, $\d\sim 0$, there exists $\e_0>0$ such that, for every $0<\e<\e_0$, $\e(\un{U}(x),\un{V}(x,y))$ is a compactly supported generalized (in the sense of Proposition \[prcp2\]) subsolution of $$\label{51}
\left\{ \begin{array}{lll}
u_t-Du_{xx}+c u_x=\nu v(x,L,t)-\m u & \text{for } x\in{{\mathbb{R}}}, & t>0 \\
v_t-d\D v+c v_x=f(v) & \text{for } (x,y)\in\O, & t>0 \\
dv_y(x,L,t)=\m u-\nu v(x,L,t) & \text{for } x\in{{\mathbb{R}}}, & t>0 \\
v(x,0,t)=0 & \text{for } x\in{{\mathbb{R}}}, & t>0, \end{array}\right.$$ satisfying , where $(\un{U}(x),\un{V}(x,y))$ is a truncation of a solution of $$\label{52}
\left\{ \begin{array}{ll}
-D U_{xx}(x)+c U_x(x)=\nu V(x,L)-\m U(x) & \text{for } x\in{{\mathbb{R}}}, \\
-d\D V(x,y)+c V_x(x,y)=(f'(0)-\d)V(x,y) & \text{for } (x,y)\in\O, \\
d V_y(x,L)=\m U(x)-\nu V(x,L) & \text{for } x\in{{\mathbb{R}}}, \\
V(x,0)=0 & \text{for } x\in{{\mathbb{R}}}. \end{array}\right.$$
Once the solution $(\un{U},\un{V})$ of has been constructed, the existence of $\e_0$ such that $\e(\un{U}(x),\un{V}(x,y))$ is a subsolution to for every $0<\e<\e_0$ follows immediately from . From the construction of $c^*$ carried out in the proof of Proposition \[pr41\], we know that, for $c<c^*$, no real solution of of type or exists. However, we will show that exhibits, for $c<c^*$, $c\sim c^*$, complex solutions, which will be the starting point for the construction of $(\un U,\un V)$. Actually, we will consider the case $\d=0$; the existence of $(\un{U},\un{V})$ for $\d\sim 0$ will then follow by a perturbation argument, which consists in repeating the same construction as in the proof of Proposition \[pr41\] for the curves with $f'(0)$ replaced by $f'(0)-\d$ (observe that the curves depend continuously on $f'(0)$).
We will give the details only in the case in which $c^*$ was obtained in Proposition \[pr41\] as the value for which $\a_D^+(c^*,\b)$ and $\a_d^-(c^*,\b)$ are tangent at a point $\b=\b^*>0$. The other cases related to supersolutions like are analogous; the case of supersolutions like was treated in [@BRR1], and the one related to supersolutions like will follow from the case presented here by passing to the limit as $\b^*\to 0$, as in Remark \[remark42\](i).
Let us consider, for $(c,\b)$ in a neighborhood of $(c^*,\b^*)$, the function $$\label{53}
g(c,\b)=\a_d^-(c,\b)-\a_D^+(c,\b).$$ Our goal is to find, for $c<c^*$, $c\sim c^*$, a root $\b\in{{\mathbb{C}}}\setminus {{\mathbb{R}}}$ of . In this way we will obtain a solution $(\a,\b,\g)=(\a_1+i\a_2,\b_1+i\b_2,\g_1+i\g_2)\in({{\mathbb{C}}}\setminus {{\mathbb{R}}})^3$ of . It is easily seen that $(\ov{\a},\ov{\b},\ov{\g})$ also solves and, therefore, by taking the real part in , we can set $$\begin{gathered}
\label{54}
\un{U}(x)=\max\{e^{\a_1x}\cos(\a_2x), 0\}, \\
\begin{split}
\label{55}
\un{V}(x,y)=&\max\{e^{\a_1x}(\sin(\b_1 y)\cosh(\b_2 y)(\g_1\cos(\a_2x)-\g_2\sin(\a_2x))+ \\
&- \cos(\b_1 y)\sinh(\b_2 y)(\g_1\sin(\a_2x)+\g_2\cos(\a_2x))), 0\},
\end{split}\end{gathered}$$ where $$\label{56}
\g_1=\frac{\mu\t_1}{\t_1^2+\t_2^2}, \qquad
\g_2=\frac{\mu\t_2}{\t_1^2+\t_2^2},$$ being $$\label{57}
\begin{split}
\t_1& = ( d\b_1\cos( \b_1 L ) + \nu\sin( \b_1 L ) )\cosh( \b_2 L )+d\b_2\sin( \b_1 L )\sinh( \b_2 L ) \\
\t_2& = ( d\b_1\sin( \b_1 L ) - \nu\cos( \b_1 L ) )\sinh( \b_2 L ) - d\b_2\cos( \b_1 L )\cosh( \b_2 L ).
\end{split}$$ After the change of variables $$\xi=\b-\b^*, \qquad \tau=c-c^*$$ the search for zeros of is equivalent to the search for zeros of the function $$\label{58}
h(\xi,\tau):=g(c^*+\tau,\b^*+\xi)$$ with $(\xi,\tau)$ in a neighborhood of $(0,0)$. Since $(c^*,\b^*)$ is the first contact point between $\a_D^+$ and $\a_d^-$, we have that there exists $n\in{{\mathbb{N}}}\setminus\{0\}$ such that $$h(0,0)=\dots=\p^{2n-1}_\xi h(0,0)=0 \quad \text{ and } a_{2n}:=\frac{\p^{2n}_\xi h(0,0)}{(2n)!}>0,$$ while from and it follows that $$a_1:=\p_\tau h(0,0)<0.$$ By considering the Taylor series of in a neighborhood of $(0,0)$ we have that $h(\xi,\tau)=0$ is equivalent to $$\label{59}
a_1\tau+a_{2n}\xi^{2n}=p(\xi,\tau)\tau+o(\xi^{2n+1})$$ where $p(\xi,\tau)$ is a polynomial which is either identically $0$ or of degree at least $1$. Thanks to the signs of the coefficients determined above, we know that the left-hand side of , $$h_1(z):=a_{2n}z^{2n}+a_1\tau,$$ has, for $\tau<0$, the $2n$ complex roots $$z_j:=z_j(\tau)=\left(\frac{a_1 \tau}{a_{2n}}\right)^{\frac{1}{2n}}e^{i\frac{(2j-1)\pi}{2n}}, \quad j=1,\dots,2n.$$ Consider now the ball $$B:=B_r(z_1)\subset{{\mathbb{C}}}\qquad \text{ with } r=\s|\tau|^{\frac{1}{2n}}, \quad \s\sim 0.$$ From geometrical considerations we have that, on $\p B$, $$|h_1(z)|=a_{2n}\prod_{j=1}^{2n}|z-z_j|\geq r\prod_{j=2}^{2n}C_j|\tau|^{\frac{1}{2n}}>C|\tau|,$$ while the right-hand side $\varphi$ of , considered as a function of $\xi$, satisfies, on $\p B$, $$|\varphi(z)|\leq|p(z,\tau)||\tau|+o(|z|^{2n+1})<\tilde{C}|\tau|^{1+\frac{1}{2n}}.$$ Therefore, by choosing $\tau$ negative and sufficiently small, we can make $|\varphi|<|h_1|$ on $\p B$, and Rouché’s theorem can be applied, guaranteeing the existence of complex roots of and therefore of for $c<c^*$, $c\sim c^*$. This same analysis also shows that $\b=\b_1+i\b_2=\b_1(c)+i\b_2(c)$ satisfies $$\label{510}
\b_1(c)\to \b^*, \quad \b_2(c)\to 0 \qquad \text{ as } c\ua c^*.$$ As a consequence, from and we have that $$\cosh(\b_2 L)\to 1, \quad \sinh(\b_2 L)\to 0, \quad \g_1\to\frac{\mu}{d\b^*\cos(\b^*L)+\nu\sin(\b^*L)}>0, \quad \g_2\to 0,$$ as $c\ua c^*$, since $0<\b^*<\ov{\b}<\pi/L$. Moreover, when $n>1$, the second equations in systems and ensure that $\a_2\neq 0$, since both $\b_1$ and $\b_2$ are positive for $c\sim c^*$. This follows from by continuity in the case $\b^*\neq 0$ and, when $\b^*=0$, it can be shown directly by analyzing the equation for $\a$ which is obtained by plugging into . On the other hand, when $n=1$, it can be proved as in Section \[sec8\] (see ) that $\b^*>0$ and then $\a_2\neq 0$ follows as in [@BRR1 Lemma 6.1]. In conclusion, it is apparent from and that, by taking $c$ sufficiently close to $c^*$, it is possible to select a component of the sets $\{\un{U}>0\}$ and $\{\un{V}>0\}$ in such a way that is satisfied, obtaining compactly supported generalized subsolutions satisfying .
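The root structure invoked in the Rouché argument can be illustrated concretely: for $a_{2n}>0$, $a_1<0$ and $\tau<0$, the polynomial $h_1(z)=a_{2n}z^{2n}+a_1\tau$ has exactly the $2n$ non-real roots $z_j$ listed above. A sketch with hypothetical coefficients and $n=2$:

```python
import cmath, math

n = 2                      # so h1 has degree 2n = 4
a2n, a1, tau = 3.0, -2.0, -0.25    # a2n > 0, a1 < 0, tau < 0

def h1(z):
    return a2n * z ** (2 * n) + a1 * tau

# z_j = (a1*tau/a2n)^(1/2n) * exp(i*(2j-1)*pi/(2n)), j = 1, ..., 2n
roots = [(a1 * tau / a2n) ** (1 / (2 * n))
         * cmath.exp(1j * (2 * k - 1) * math.pi / (2 * n))
         for k in range(1, 2 * n + 1)]

for z in roots:
    assert abs(h1(z)) < 1e-12     # each z_j is a root of h1
    assert abs(z.imag) > 1e-9     # and none of them is real
```

The angles $(2j-1)\pi/2n$ are never multiples of $\pi$, which is why all the roots are non-real: this is what produces the complex $\b$, and hence the oscillating truncated profiles and .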
Asymptotic speed of propagation {#sec6}
===============================
We are now able to give the proof of the first part of Theorem \[thmain\].
*Proof of Theorem \[thmain\].* To prove the first condition of Definition \[deasp\] we will use the supersolutions of constructed in Proposition \[pr41\] and Remark \[remark42\](i). We recall that they solve the linear system and are of type $$(\bar{u}(x,t),\bar{v}(x,y,t))=e^{\a(x+c^*t)}(1,\g\phi(y))$$ with $\a,\g>0$ and $\phi(y)>0$ in $(0,L]$ satisfying $\phi'(0)>0$. As a consequence, since $(u_0,v_0)$ has compact support, there exists $k>0$ such that $$(u_0(x),v_0(x,y))<k(\bar{u}(x,0),\bar{v}(x,y,0))$$ for every $(x,y)\in{{\mathbb{R}}}\times(0,L]$. Moreover, $(0,0)$ is a strict subsolution of and, from Proposition \[prcp1\], we have that $$\label{61}
(0,0)\leq(u(x,t),v(x,y,t))<k(\bar{u}(x,t),\bar{v}(x,y,t))$$ for every $t>0$. Observe that $$(\bar{\bar{u}}(x,t),\bar{\bar{v}}(x,y,t))=e^{\a(-x+c^*t)}(1,\g\phi(y))$$ is also a supersolution of satisfying $(u_0(x),v_0(x,y))<k(\bar{\bar{u}}(x,0),\bar{\bar{v}}(x,y,0))$ for large $k$.
Fix now $c>c^*$, $t>0$, $|x|\geq ct$ and $y\in[0,L]$. We distinguish the cases $x\leq -ct<0$ and $x\geq ct>0$. In the first case, it follows that $$e^{\a(x+c^*t)}\leq e^{\a(c^*-c)t}$$ and this, together with , implies $$(0,0)\leq(u(x,t),v(x,y,t))<k e^{\a(c^*-c)t}\left(1,\g\phi(y)\right)$$ for every $(x,y)\in{{\mathbb{R}}}\times(0,L]$ and follows. The second case can be treated by comparing $(u,v)$ with $(\bar{\bar{u}},\bar{\bar{v}})$ in a similar fashion.
By adapting the arguments of the proof of Proposition \[pr34\] to the case of Problem , using the subsolution constructed in Proposition \[pr51\], it can be shown that $$\label{62}
\lim_{t\to+\infty}(u(x+ct,t),v(x+ct,y,t))=\left(\frac{\nu}{\mu}V(L),V(y)\right)$$ locally uniformly in $\overline{\O}$, where $V(y)$ is the unique solution of . Property now follows from and by using [@RTV Lemma 4.4].
Properties $(ii)$ and $(iii)$ of Theorem \[thmain\] will be proved in the next sections.
Limits for small and large diffusion on the road {#sec7}
================================================
In this section we analyze the behavior of $c^*=c^*(D)$ as the diffusion on the road $D$ tends to $0$ and to $+\infty$, giving the proof of Theorem \[thmain\]$(ii)$. The result regarding the first case is the following
\[pr71\] We have that $$\label{71}
\lim_{D\da 0} c^*(D)=\ell_0>0,$$ where $\ell_0$ is the asymptotic speed of propagation of the problem $$\label{72}
\left\{ \begin{array}{lll}
u_t(x,t)=\nu v(x,L,t)-\m u(x,t) & \text{for } x\in{{\mathbb{R}}}, & t>0 \\
v_t(x,y,t)-d\D v(x,y,t)=f(v) & \text{for } (x,y)\in\O, & t>0 \\
d v_y(x,L,t)=\m u(x,t)-\nu v(x,L,t) & \text{for } x\in{{\mathbb{R}}}, & t>0 \\
v(x,0,t)=0 & \text{for } x\in{{\mathbb{R}}}, & t>0. \end{array}\right.$$
Observe first of all that the limit exists thanks to Theorem \[thmain\]$(i)$. Fix now $D<d$. By the discussion of Sections \[sec4\] and \[sec5\], we have that $c^*(D)$ is the value of $c$ for which $\a_{D}^-(c,\b)$ given by is tangent to $\a_{d}^+(c,\b)$ defined in . Moreover, the tangency necessarily occurs for $\b^*>\frac{\pi}{2L}$ (see Figure \[fig23\](B)). By passing to the limit for $D\da 0$ in , we get that $\ell_0$ is the unique value of $c$ for which $\a_{d}^+(c,\b)$ is tangent to $$\label{73}
\a_0^-(c,\b)=\frac{-\mu d\b\cos(\b L)}{c\left(d\b\cos(\b L)+\nu\sin(\b L)\right)}, \quad \b\in\left[\frac{\pi}{2L},\ov{\b}\right),$$ where $\ov{\b}$ is the one defined in . The existence of such $c$ follows from the fact that $$\begin{gathered}
\a_0^-\left(c,\frac{\pi}{2L}\right)=0, \qquad \a_0^-(c,\b)>0 \text{ for } \b\in\left(\frac{\pi}{2L},\ov{\b}\right), \qquad \lim_{\b\ua\ov{\b}}\a_0^-(c,\b)=+\infty, \\
\p_\b\a_0^-(c,\b)>0, \qquad \p_c\a_0^-(c,\b)<0, \\
\lim_{c\da 0}\a_0^-(c,\b)=+\infty, \qquad \lim_{c\ua +\infty}\a_0^-(c,\b)=0, \end{gathered}$$ together with the properties of $\a_d^+(c,\b)$ already described in Section \[sec4\].
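These properties are elementary to verify analytically; as a quick numerical illustration, the following Python sketch (the parameters $d=\nu=L=\mu=1$ are made up) checks them on a grid, after locating $\ov{\b}$ by bisection as the first zero of $d\b\cos(\b L)+\nu\sin(\b L)$ past $\pi/(2L)$.

```python
import math

# Illustrative check of the listed properties of
#   alpha_0^-(c, b) = -mu*d*b*cos(b*L) / ( c*(d*b*cos(b*L) + nu*sin(b*L)) )
# on [pi/(2L), ov_beta); the parameters d = nu = L = mu = 1 are made up.
d = nu = L = mu = 1.0

def a0m(c, b):
    return (-mu * d * b * math.cos(b * L)
            / (c * (d * b * math.cos(b * L) + nu * math.sin(b * L))))

# ov_beta: first zero of the denominator in (pi/(2L), pi/L), by bisection
g = lambda b: d * b * math.cos(b * L) + nu * math.sin(b * L)
lo, hi = math.pi / (2 * L), math.pi / L
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
ov_beta = 0.5 * (lo + hi)

bs = [math.pi / (2 * L) + k * (ov_beta - 1e-6 - math.pi / (2 * L)) / 200
      for k in range(201)]
vals = [a0m(1.0, b) for b in bs]
assert abs(vals[0]) < 1e-9                                # vanishes at pi/(2L)
assert all(v2 > v1 for v1, v2 in zip(vals, vals[1:]))     # increasing in b
assert vals[-1] > 1e3                                     # blows up near ov_beta
assert all(a0m(2.0, b) < a0m(1.0, b) for b in bs[1:])     # decreasing in c
```

The monotonicity in $c$ is immediate here since $\a_0^-$ is positive and proportional to $1/c$; the checks in $\b$ confirm the shape of the curve used in the tangency argument.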
To see that $\ell_0$ coincides with the asymptotic speed of propagation of , it is sufficient, as in Section \[sec4\], to construct supersolutions of of the form for $c>\ell_0$ by intersecting the curves $\a_{d}^+(c,\b)$ and and to proceed as in Section \[sec5\] to construct compactly supported subsolutions to for every $c<\ell_0$. Of course one has to prove the corresponding comparison principles for system , which couples a strongly parabolic equation with a degenerate one. They essentially hold because, if there were a first contact point at a positive time between a supersolution and a subsolution, either it would be for the $v$ component, which is impossible since a classical comparison principle holds, or for the $u$ component at $y=L$. In such a case, the time derivative of the difference between the super- and the subsolution would be negative, while the right-hand side of the first equation of would be positive, again obtaining a contradiction (for a more detailed treatment of the comparison principles for such degenerate system in a similar context see [@RTV Proposition 2.5]).
We now pass to the case $D\to+\infty$.
\[pr72\] We have that $c^*(D)$ is unbounded as $D\to+\infty$ and $$\label{74}
\lim_{D\ua \infty} \frac{c^*(D)}{\sqrt{D}}=\ell_\infty>0$$ where $\ell_\infty$ is the asymptotic speed of propagation of the problem $$\label{75}
\left\{ \begin{array}{lll}
u_t(x,t)-u_{xx}(x,t)=\nu v(x,L,t)-\m u(x,t) & \text{for } x\in{{\mathbb{R}}}, & t>0 \\
v_t(x,y,t)-d v_{yy}(x,y,t)=f(v) & \text{for } (x,y)\in\O, & t>0 \\
dv_y(x,L,t)=\m u(x,t)-\nu v(x,L,t) & \text{for } x\in{{\mathbb{R}}}, & t>0 \\
v(x,0,t)=0 & \text{for } x\in{{\mathbb{R}}}, & t>0. \end{array}\right.$$
Recall from the proof of Proposition \[pr41\] that, for large $D$, $$\label{76}
c^*(D)=\min\{c^{*,1}(D),c^{*,2}(D)\},$$ where $c^{*,1}$ is the first value of $c$ for which $\a_D^+(c,\b)$ and $\a_d^-(c,\b)$ intersect, being tangent at such a value, and $c^{*,2}$ is defined analogously for the curves $\tilde{\a}_D^+(c,\b)$ and $\tilde{\a}_d^-(c,\b)$. We will prove that holds both for $c^{*,1}(D)$ and $c^{*,2}(D)$, and therefore will follow from .
We start with the case of $c^{*,1}(D)$ (for convenience, we will denote it by $c^{*}(D)$ when there is no possibility of confusion), which is increasing in $D$ thanks to Proposition \[pr41\] and admits a limit as $D\ua\infty$. It is obvious (see Figure \[fig26\](A)) that $$\label{77}
\a_d^-(c^*(D),\tilde{\b}(c^*(D),D))<\a_D^+(c^*(D),0),$$ where $\tilde{\b}(c^*(D),D)\in(\frac{\pi}{2L},\ov{\b})$ is the quantity defined in (we have explicitly indicated its dependence on $D$). Relation can be written as $$\label{78}
\frac{1}{d}\left(\!\!1\!-\!\sqrt{1-\frac{{c_{\operatorname{KPP}}}^2-4d^2\tilde{\b}^2(c^*(D),D)}{{c^*}^2(D)}}\right)\!<\frac{1}{D}\left(1+\sqrt{1+\frac{4\mu d}{d+\nu L}\frac{D}{{c^*}^2(D)}}\right).$$ Assume by contradiction that $c^*(D)$ is bounded. Then, from , we have that $$\label{79}
\frac{c^*(D)^2}{D}=\frac{-4\mu d\tilde{\b}(c^*(D),D)\cos(\tilde{\b}(c^*(D),D) L)}{d\tilde{\b}(c^*(D),D)\cos(\tilde{\b}(c^*(D),D) L)+\nu\sin(\tilde{\b}(c^*(D),D) L)},$$ from which we get $$\lim_{D\ua\infty} \tilde{\b}(c^*(D),D)=\frac{\pi}{2L}.$$ By passing now to the limit as $D\ua\infty$ in , we get a contradiction and, therefore, $$\label{710}
\lim_{D\ua\infty}c^*(D)=+\infty.$$ As the curves are tangent for the first time, we also have $\a_D^+(c^*,0)<\a_d^-(c^*,0)$, which reads $$\frac{1}{D}\left(1+\sqrt{1+\frac{4\mu d}{d+\nu L}\frac{D}{{c^*}^2}}\right)<\frac{1}{d}\left(1-\sqrt{1-\frac{{c_{\operatorname{KPP}}}^2}{{c^*}^2}}\right).$$ Using , we derive from the previous relation $$\label{711}
\frac{1}{D}\left(1+\sqrt{1+\frac{4\mu d}{d+\nu L}\frac{D}{{c^*}^2}}\right)<{c^*}^{-2}\left(\frac{{c_{\operatorname{KPP}}}^2}{2d}+o(1)\right),$$ where, as usual, $o(1)$ denotes a quantity that goes to $0$ as $D\ua\infty$. Solving now for ${c^*}^2/D$, we obtain $$\frac{{c^*}^2}{D}<\frac{(d+\nu L)f'(0)^2}{(d+\nu L)f'(0)+\mu d}+o(1),$$ from which we conclude $$\label{712}
\limsup_{D\ua\infty}\frac{{c^*}^2}{D}\leq\frac{(d+\nu L)f'(0)^2}{(d+\nu L)f'(0)+\mu d}.$$ We now observe that $$\label{713}
\tilde{\b}(c^*(D),D)<\sqrt{\frac{f'(0)}{d}}$$ for every $D$, because otherwise $$\a_D^+(c^*(D),\tilde{\b}(c^*(D),D))>0\geq\a_d^-(c^*(D),\tilde{\b}(c^*(D),D))$$ and there would be another intersection between $\S_D(c^*)$ and $\S_d(c^*)$ apart from $\b^*$, which contradicts the construction of $c^*$. As a consequence of we have that $$\label{714}
\liminf_{D\ua\infty}\tilde{\b}(c^*(D),D)\leq
\limsup_{D\ua\infty}\tilde{\b}(c^*(D),D)\leq\sqrt{\frac{f'(0)}{d}}.$$ We now distinguish two cases. If $$\label{715}
\liminf_{D\ua\infty}\tilde{\b}(c^*(D),D)=\sqrt{\frac{f'(0)}{d}},$$ we have from that $$\lim_{D\ua\infty}\tilde{\b}(c^*(D),D)=\sqrt{\frac{f'(0)}{d}}>\frac{\pi}{2L}$$ and, therefore, using and taking the limsup for $D\ua\infty$ in , we get that $\lim_{D\ua\infty}c^*(D)^2/D$ exists and is positive. On the other hand, if $$\label{716}
\liminf_{D\ua\infty}\tilde{\b}(c^*(D),D)<\sqrt{\frac{f'(0)}{d}},$$ we consider relation , which, by using and , becomes $$2{c^*}^{-2}\left(f'(0)-d\tilde{\b}^2(c^*(D),D)+o(1)\right)<\frac{1}{D}\left(1+\sqrt{1+\frac{4\mu d}{d+\nu L}\frac{D}{{c^*}^2}}\right)$$ for large $D$. Solving for ${c^*}^2/D$, we now get $$\frac{{c^*}^2}{D}>\!\min\!\left\{\frac{(d+\nu L)(f'(0)-d\tilde{\b}^2(c^*(D),D))^2}{(d+\nu L)(f'(0)-d\tilde{\b}^2(c^*(D),D))+\mu d},2\left(f'(0)\!-\!d\tilde{\b}^2(c^*(D),D)\right)\right\}+o(1)$$ and, thanks to , we have $$\label{717}
\liminf_{D\to+\infty}\frac{{c^*}^2}{D}>0.$$ Summing up, holds both in case and . This, together with , implies that ${c^*}^2/D$ is bounded and bounded away from $0$. It is therefore natural to perform in the change of variables $$\label{718}
\hat{\a}=\sqrt{D}\a, \qquad \hat{c}=\frac{c}{\sqrt{D}},$$ obtaining $$\label{719}
\left\{\begin{array}{l}
-\hat{\a}^2+\hat{c}\hat{\a}+\frac{\mu}{1+\frac{\nu \tan(\b L)}{d \b}}=0 \\
-\frac{d\hat{\a}^2}{D}+\hat{c}\hat{\a}=f'(0)-d\b^2,
\end{array}\right.$$ where $\hat{c}$ is bounded and bounded away from $0$. The first equation describes, in the plane $(\b,\hat{\a})$, the curve $\S_1(\hat{c})$ defined in ; therefore the function $\hat{\a}_1^+(\hat{c},\b)$, in which we will be interested, is bounded and bounded away from $0$ for all $c>c_{\operatorname{KPP}}$ and $\b$ in the proper domain of definition. Hence, by taking the limit for $D\ua\infty$ in , we get $$\label{720}
\left\{\begin{array}{l}
-\hat{\a}^2+\hat{c}\hat{\a}+\frac{\mu}{1+\frac{\nu \tan(\b L)}{d \b}}=0 \\
\hat{\a}=\frac{f'(0)-d\b^2}{\hat{c}}.
\end{array}\right.$$ The second equation describes a concave parabola which is symmetric with respect to the $\hat{\a}$-axis, passes through $\left(\sqrt{f'(0)/d},0\right)$ and has vertex $\left(0,f'(0)/\hat{c}\right)$.
Now we pass to the case of $c^{*,2}$, which, as above, will be simply denoted by $c^*$. In this case, as is apparent from Figure \[fig26\](B), we have $$\tilde{\a}_d^-(c^*(D),0)<\lim_{\b\to+\infty}\tilde{\a}_D^+(c^*(D),\b),$$ which reads as $$\label{721}
\frac{1}{d}\left(1-\sqrt{1-\frac{{c_{\operatorname{KPP}}}^2}{{c^*}^2}}\right)<\frac{1}{D}\left(1+\sqrt{1+4\mu \frac{D}{{c^*}^2}}\right)$$ and gives that $$\lim_{D\to+\infty}c^*(D)=+\infty,$$ because, if the limit were finite, say $\ell$, passing to the limit in would lead to $$0<\frac{1}{d}\left(1-\sqrt{1-\frac{{c_{\operatorname{KPP}}}^2}{\ell^2}}\right)\leq 0,$$ which is impossible. Therefore, gives $${c^*}^{-2}\left(\frac{{c_{\operatorname{KPP}}}^2}{2d}+o(1)\right)<\frac{1}{D}\left(1+\sqrt{1+4\mu \frac{D}{{c^*}^2}}\right)$$ and, solving for ${c^*}^2/D$ and taking the liminf as $D\ua\infty$, we get $$\liminf_{D\to+\infty}\frac{{c^*}^2}{D}\geq\min\left\{\frac{f'(0)^2}{f'(0)+\mu},2f'(0)\right\}.$$ On the other hand, in this situation we also have that $\tilde{\a}_D^+(c^*(D),0)<\tilde{\a}_d^-(c^*(D),0)$, which provides us with and, therefore, with the upper bound for ${c^*}^2/D$ given by . By performing the change of variables in and passing to the limit for $D\to+\infty$ we obtain $$\label{722}
\left\{\begin{array}{l}
-\hat{\a}^2+\hat{c}\hat{\a}+\frac{\mu}{1+\frac{\nu \tanh(\b L)}{d \b}}=0 \\
\hat{\a}=\frac{f'(0)+d\b^2}{\hat{c}}.
\end{array}\right.$$ The first equation describes, in the plane $(\b,\hat{\a})$, the curve $\tilde{\S}_1(\hat{c})$ defined in , while the second one describes a parabola, which is the reflection of the parabola of the second equation of with respect to the line $\hat{\a}=f'(0)/\hat{c}$.
By reasoning similar to that of Section \[sec4\], it is easily seen that there is a smallest value of $\hat{c}$ for which the two curves of either or are tangent; this value provides $\ell_\infty$.
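For concreteness, such a borderline $\hat{c}$ can be located numerically. The Python sketch below treats the second limit system only (the tanh-system); all parameters ($d=L=\nu=\mu=f'(0)=1$) are made up, and $\hat{\a}$ is eliminated from the first equation as $\hat{\a}=\frac12\big(\hat{c}+\sqrt{\hat{c}^2+4\mu/(1+\nu\tanh(\b L)/(d\b))}\big)$, a reconstruction from the displayed system. The critical $\hat{c}$ for the full problem is then the smaller of this value and the analogous one for the tan-system.

```python
import math

# Borderline hat-c for the tanh-limit system: on one side the curve
#   A(hc, b) = ( hc + sqrt(hc^2 + 4*mu/(1 + nu*tanh(b*L)/(d*b))) ) / 2
# and the parabola P(hc, b) = (f'(0) + d*b^2)/hc intersect; on the other
# they do not. All parameters (d = L = nu = mu = f'(0) = 1) are made up.
d = L = nu = mu = fp0 = 1.0

def gap(hc):
    # max over a beta-grid of A - P: negative iff the parabola lies above
    # the curve everywhere, i.e. the two graphs do not intersect
    worst = -1e18
    for k in range(1, 2001):
        b = 10.0 * k / 2000
        A = 0.5 * (hc + math.sqrt(hc**2
                                  + 4 * mu / (1 + nu * math.tanh(b * L) / (d * b))))
        P = (fp0 + d * b**2) / hc
        worst = max(worst, A - P)
    return worst

lo, hi = 0.01, 50.0          # gap(lo) < 0 < gap(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gap(mid) < 0 else (lo, mid)
print(round(0.5 * (lo + hi), 4))   # tangency value for these sample data
```

The bisection is legitimate because $A$ is increasing and $P$ is decreasing in $\hat{c}$, so the gap is monotone in $\hat{c}$.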
To see that this limit coincides with the asymptotic speed of propagation of Problem it suffices to repeat the construction of Sections \[sec4\]-\[sec5\] for this problem, starting from supersolutions of type and and using comparison principles analogous to the ones of Section \[sec2\], which can be proved for by applying the parabolic maximum principle in $y$ on every slice, with fixed $x$ (see [@RTV] for a detailed proof in a similar context and [@D], in the context of travelling waves, for comparison principles related to this degenerate system with Neumann boundary conditions at $y=0$).
Influence of the road and limit for large field {#sec8}
===============================================
To examine the influence of the road in Problem , it is appropriate to compare its asymptotic speed of propagation with the one of the following problem $$\label{81}
\left\{ \begin{array}{lll}
v_t(x,y,t)-d\D v(x,y,t)=f(v) & \text{for } (x,y)\in\O, & t>0 \\
dv_y(x,L,t)=-\nu v(x,L,t) & \text{for } x\in{{\mathbb{R}}}, & t>0 \\
v(x,0,t)=0 & \text{for } x\in{{\mathbb{R}}}, & t>0, \end{array}\right.$$ which models a classical Fisher-KPP diffusion in the strip $\O$ in which part of the population $v$ simply leaves the field at the level $y=L$.
By using the same techniques as in Section \[sec3\], it is possible to show that Problem admits a unique positive steady state if and only if $$\label{82}
\frac{f'(0)}{d}>\ov{\b}^2>\left(\frac{\pi}{2L}\right)^2,$$ where $\ov{\b}=\ov{\b}(d,\nu,L)$ is, as in Section \[sec4\], the first positive value for which vanishes. By comparing with , it is apparent that one effect of the road is to enhance the persistence of the species, since the condition for persistence is less restrictive in the presence of the road. On the other hand, when holds, by taking $$\ov{v}(x,y,t)=e^{\a(x+ct)}\sin(\ov{\b} y)$$ as a supersolution and following the lines of Sections \[sec4\]–\[sec6\], it is possible to show that Problem admits an asymptotic speed of propagation $$c_{\operatorname{KPP}}^{\operatorname{DR}}=2\sqrt{d\left(f'(0)-d\ov{\b}^2\right)}<c_{\operatorname{KPP}}$$ (here $\operatorname{DR}$ stands for the Dirichlet-Robin boundary conditions associated to the Fisher-KPP equation in ).
Recalling the monotonicity property of $c^*(D)$ given by Theorem \[thmain\]$(i)$, we have that $c^*(D)>\ell_0$, where $\ell_0$ is the value constructed in Proposition \[pr71\] as the smallest $c$ for which the curves and $\a_d^+(c,\b)$ intersect. Observe that the former is defined for $\b<\ov{\b}$, while, recalling , $\a_d^+(c_{\operatorname{KPP}}^{\operatorname{DR}},\b)$ is defined for $\b\geq\ov{\b}$. This means that, for every $D$, $$\label{83}
c^*(D)\geq\lim_{D\da 0}c^*(D)>c_{\operatorname{KPP}}^{\operatorname{DR}}.$$ Therefore, a second effect of the road is to speed up the propagation in the field. Moreover, from the second relation of Theorem \[thmain\]$(ii)$, we have that this effect can be arbitrarily enhanced, provided that $D$ is sufficiently large.
We conclude with the proof of Theorem \[thmain\](iii), which concerns the limit of a large field. We will emphasize the dependence of $c^*$ on the width $L$ of the strip by writing $c^*_L$.
*Proof of Theorem \[thmain\](iii).* We recall that in [@BRR1] it was proved that $$c^*_{\infty}(D)\begin{cases}
={c_{\operatorname{KPP}}}&\text{if } D\leq 2d \\
>{c_{\operatorname{KPP}}}&\text{if } D> 2d.
\end{cases}$$ We distinguish the same two cases, starting with $D\leq 2d$. From and the discussion of Section \[sec4\] we have that $$\label{84}
c_{\operatorname{KPP}}^{\operatorname{DR}}(L)< c^*_L<{c_{\operatorname{KPP}}}.$$ Recalling from that $\ov{\b}(L)\in\left(\frac{\pi}{2L},\frac{\pi}{L}\right)$, we have that implies $$\lim_{L\to+\infty}c^*_L=c_{\operatorname{KPP}}.$$
We now assume $D>2d$ and recall (see [@BRR1]) that $c^*_{\infty}$ is the value of $c$ for which the curve $$\tilde{\a}_D^{\infty}(c,\b):=\frac{1}{2D}\left(c+\sqrt{c^2+\frac{4\mu D}{1+\frac{\nu}{d\b}}}\right),$$ and the curve $\tilde{\a}_d^-(c,\b)$ defined in are tangent.
In our case, a crucial role will be played by the behavior of the curves $\S_D(c)$, $\S_d(c)$, $\tilde\S_D(c)$, $\tilde\S_d(c)$ for $c={c_{\operatorname{int}}}$, where ${c_{\operatorname{int}}}$ is the one of . It is possible to show with direct computations that $D\mapsto {c_{\operatorname{int}}}(D)$ is increasing for $D>2d$. As a consequence, there exists $\tilde D$ such that ${c_{\operatorname{int}}}(D)>{c_{\operatorname{KPP}}}$ for every $D>\tilde D$. Due to the monotonicity of ${c_{\operatorname{int}}}(D)$, we deduce that $\tilde D$ is the unique value of $D$ for which $\a_D^+({c_{\operatorname{KPP}}},0)=\a_d^\pm({c_{\operatorname{KPP}}},0)$, which, taking into account and , provides $$\tilde D=\tilde D(L)=2d+\frac{4\mu d^3}{(d+\nu L){c_{\operatorname{KPP}}}^2}.$$ In particular, since we are taking $D>2d$, we have that, for $L$ large enough, $D>\tilde D(L)$ and, therefore, ${c_{\operatorname{int}}}>{c_{\operatorname{KPP}}}$. Moreover, recalling and , $$\label{85}
\p_{\b\b}\left(\tilde{\a}_{D}^+({c_{\operatorname{int}}},0)-\tilde{\a}_{d}^-({c_{\operatorname{int}}},0)\right)=2d\left(\frac{\psi}{3\zeta\sqrt{{c_{\operatorname{int}}}^2+\frac{4\mu d D}{d+\nu L}}}-\frac{1}{\sqrt{{c_{\operatorname{int}}}^2-{c_{\operatorname{KPP}}}^2}}\right),$$ where we have set $\psi:=L^3\mu\nu$ and $\zeta:=(d+\nu L)^2$. It is possible to check, by direct computations, that has the same sign as $$\begin{gathered}
(\psi^2-9\zeta^2) \left((d+\nu L)D{c_{\operatorname{KPP}}}^2+ 4\mu d^3\right)^2+\\
-4d(D-d)\left((d+\nu L){c_{\operatorname{KPP}}}^2 + 4\mu d^2\right)\left(36\zeta^2\mu d D+\psi^2(d+\nu L){c_{\operatorname{KPP}}}^2\right),\end{gathered}$$ which is positive for large $L$, since it is a polynomial of degree $8$ in $L$ with leading coefficient equal to $\left(\mu\nu^2c_{\operatorname{KPP}}^2(D-2d)\right)^2>0$. This implies that $$\tilde{\a}_{D}^+({c_{\operatorname{int}}},\b)>\tilde{\a}_{d}^-({c_{\operatorname{int}}},\b) \qquad \text{ for } \b>0, \quad \b\sim0$$ and, since the curve $\tilde\S_d({c_{\operatorname{int}}})$ intersects the $\a-$axis at another point, which lies above the intersection with $\tilde\S_D({c_{\operatorname{int}}})$, we obtain that $\tilde{\S}_D({c_{\operatorname{int}}})$ and $\tilde{\S}_d({c_{\operatorname{int}}})$ intersect, apart from $\b=0$, at a positive value of $\b$ as well.
On the other hand, using and defining $$\mathfrak{c}^2:=\lim_{L\to+\infty}{c_{\operatorname{int}}}^2(L)=\frac{D^2c_{\operatorname{KPP}}^2}{4d(D-d)},$$ it is easy to see that the curve $\S_D(c^*_L)$ introduced in approaches, in the $(\b,\a)$-plane, as $L$ goes to $+\infty$, the vertical segment $\{0\}\times [0,\mathfrak{c}/D)$. As a consequence of these considerations and the discussion of Section \[sec4\], we have that, for large $L$, $c^*_L$ is obtained as the value for which $\tilde{\a}_D^+(c,\b)$ and $\tilde{\a}_d^-(c,\b)$ are tangent. Observe that $\tilde{\a}_D^+$ is decreasing in $L$ and, therefore, $c^*_L$ is increasing. In addition $\tilde{\a}_D^+>\tilde{\a}_D^{\infty}$, which entails that $c^*_L<c^*_\infty$. Finally, $\tilde{\a}_D^+$ tends, as $L\to+\infty$, to $\tilde{\a}_D^{\infty}$, together with its derivatives, locally uniformly in ${{\mathbb{R}}}_+\times{{\mathbb{R}}}_+$. This implies that $$\lim_{L\to+\infty}c^*_L=c^*_\infty$$ also in this case, which concludes the proof.
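The explicit formula for $\tilde D(L)$ used in the proof can be double-checked numerically. The Python sketch below uses made-up parameters, and the expression $\a_D^+(c,0)=\frac{1}{2D}\big(c+\sqrt{c^2+4\mu dD/(d+\nu L)}\big)$ is a reconstruction from the relations displayed earlier (e.g. the two sides of the inequality defining the tangency at $\b=0$); with $\tilde D=2d+4\mu d^3/((d+\nu L)c_{\operatorname{KPP}}^2)$, it equals the double root $c_{\operatorname{KPP}}/(2d)$ of the field dispersion relation.

```python
import math

# Illustrative check (made-up parameters) of
#   tilde D = 2d + 4*mu*d^3 / ((d + nu*L) * c_KPP^2),
# obtained by imposing alpha_D^+(c_KPP, 0) = c_KPP/(2d), where
# alpha_D^+(c, 0) = (c + sqrt(c^2 + 4*mu*d*D/(d + nu*L))) / (2D)
# is the reconstructed value of the road curve at beta = 0.
d, mu, nu, L, fp0 = 1.0, 1.0, 1.0, 2.0, 1.0
cKPP = 2.0 * math.sqrt(d * fp0)
Dtil = 2.0 * d + 4.0 * mu * d**3 / ((d + nu * L) * cKPP**2)

alpha_road = (cKPP + math.sqrt(cKPP**2 + 4 * mu * d * Dtil / (d + nu * L))) / (2 * Dtil)
alpha_field = cKPP / (2.0 * d)   # double root of the field dispersion at c_KPP
assert abs(alpha_road - alpha_field) < 1e-12
```

The same computation, run symbolically, reproduces $\tilde D$ by squaring the identity $d\sqrt{c_{\operatorname{KPP}}^2+4\mu dD/(d+\nu L)}=(D-d)c_{\operatorname{KPP}}$.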
At first glance, one may think that the result of Theorem \[thmain\](iii) yields a contradiction for $D\in(2d,\limsup_{L\to+\infty}D_{\operatorname{KPP}})$, since holds for $D<D_{\operatorname{KPP}}$ and one could repeat the argument of the first part of the proof to show that $c^*_L$ converges to ${c_{\operatorname{KPP}}}$ as $L\to+\infty$. However, this is not the case, as the previous interval is empty, since $$\label{86}
\lim_{L\to+\infty}D_{\operatorname{KPP}}(L)=2d.$$ This follows as a byproduct of the proof of Theorem \[thmain\](iii), but for clarity we provide a direct proof here. Thanks to the monotonicity in $L$ and $D$ of $\S_D({c_{\operatorname{KPP}}})$ and by definition of $D_{\operatorname{KPP}}$, it follows easily that $L\mapsto D_{\operatorname{KPP}}(L)$ is decreasing and therefore the limit in exists. Let us denote it by $\ov D$ and assume by contradiction that it satisfies $\ov D>2d$. As shown in the last part of the proof of Theorem \[thmain\](iii), the curve $\S_{D_{\operatorname{KPP}}(L)}({c_{\operatorname{KPP}}})$ converges, as $L\to+\infty$, to the segment $\{0\}\times [0,{c_{\operatorname{KPP}}}/\ov D)$, which, since we are assuming $\ov D>2d$, lies at a positive distance from $\S_d({c_{\operatorname{KPP}}})$. As a consequence, there would be no intersection between the latter curve and $\S_{D_{\operatorname{KPP}}(L)}({c_{\operatorname{KPP}}})$ for large $L$ either, contradicting the definition of $D_{\operatorname{KPP}}$.
Acknowledgement {#acknowledgement .unnumbered}
===============
The author is deeply grateful to Professors Henri Berestycki, Jean-Michel Roquejoffre and Luca Rossi for the broad discussions that helped him in the preparation of this work. Moreover he wishes to thank all the people at CAMS (*Centre d’analyse et de mathématique sociales*, Paris), where most of the work was developed, in 2013, for their very warm hospitality.
[5]{}
D.G. Aronson, H.F. Weinberger, *Multidimensional nonlinear diffusion arising in population genetics*, Adv. in Math. **30** (1978), 33–76.
H. Berestycki, J.-M. Roquejoffre, L. Rossi, *The influence of a line with fast diffusion in Fisher-KPP propagation*, J. Math. Biology **66** (2013), 743–766.
H. Berestycki, J.-M. Roquejoffre, L. Rossi, *Fisher-KPP propagation in the presence of a line: further effects*, Nonlinearity **26** (2013), 2623–2640.
L. Dietrich, *Velocity enhancement of reaction-diffusion fronts by a line of fast diffusion*, to appear in Trans. Amer. Math. Soc. (2015), preprint arXiv:1410.4738 \[math.AP\].
A.N. Kolmogorov, I.G. Petrovskii, N.S. Piskunov, *Étude de l’équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique*, Bull. Univ. État. Moscou Sér. Intern. A **1** (1937), 1–26.
H.W. McKenzie, E.H. Merrill, R.J. Spiteri, M.A. Lewis, *How linear features alter predator movement and the functional response*, Interface focus **2** (2012) 205–216.
C.G. Moore, C.J. Mitchell, *Aedes albopictus in the United States: ten-year presence and public health implications*, Emerging Infectious Diseases **3** (1997), 6 pages.
L. Rossi, A. Tellini, E. Valdinoci, *The effect on Fisher-KPP propagation in a cylinder with fast diffusion on the boundary*, arXiv:1504.04698 \[math.AP\].
A.J. Tatem, D.J. Rogers, S.I. Hay, *Global transport networks and infectious disease spread*, Adv. Parasitol. **62** (2006), 293–343.
[^1]: This work was mainly supported by the Spanish Ministry of Economy and Competitiveness through grants BES-2010-039030 and EEBB-I-13-05962. Partial funding was also provided by European Union’s ERC Grant Agreement n. 321186 - ReaDi - Reaction-Diffusion Equations, Propagation and Modelling and by the Research Project “Stabilità asintotica di fronti per equazioni paraboliche” of the University of Padova (2011), coordinated by Luca Rossi.
---
bibliography:
- 'disorderbib.bib'
---
$$\begin{tikzpicture}
\draw (0.5\textwidth, -3) node[text width = \textwidth] {\huge \textsf{\textbf{Hydrodynamic transport in strongly coupled \\ \vspace{0.07in} disordered quantum field theories}} };
\end{tikzpicture}$$
$$\begin{tikzpicture}
\draw (0.5\textwidth, 0.1) node[text width=\textwidth] {\large \color{black} $\text{\textsf{Andrew Lucas}}$};
\draw (0.5\textwidth, -0.5) node[text width=\textwidth] {\small \textsf{Department of Physics, Harvard University, Cambridge, MA 02138, USA}};
\end{tikzpicture}$$
$$\begin{tikzpicture}
\draw (0, -13.1) node[right, text width=0.5\paperwidth] {\texttt{lucas@fas.harvard.edu}};
\draw (\textwidth, -13.1) node[left] {\textsf{\today}};
\end{tikzpicture}$$
$$\begin{tikzpicture}
\draw[very thick, color={violet}] (0.0\textwidth, -5.75) -- (0.99\textwidth, -5.75);
\draw (0.12\textwidth, -6.25) node[left] {\color{{violet}} \textsf{\textbf{Abstract:}}};
\draw (0.53\textwidth, -6) node[below, text width=0.8\textwidth, text justified] {\small We compute direct current (dc) thermoelectric transport coefficients in strongly coupled quantum field theories without long lived quasiparticles, at finite temperature and charge density, and disordered on long wavelengths compared to the length scale of local thermalization. Many previous transport computations in strongly coupled systems are interpretable hydrodynamically, despite formally going beyond the hydrodynamic regime. This includes momentum relaxation times previously derived by the memory matrix formalism, and non-perturbative holographic results; in the latter case, this is subject to some important subtleties. Our formalism may extend some memory matrix computations to higher orders in the perturbative disorder strength, as well as give valuable insight into non-perturbative regimes. Strongly coupled metals with quantum critical contributions to transport generically transition between coherent and incoherent metals as disorder strength is increased at fixed temperature, analogous to mean field holographic treatments of disorder. From a condensed matter perspective, our theory generalizes the resistor network approximation, and associated variational techniques, to strongly interacting systems where momentum is long lived.};
\end{tikzpicture}$$
$$\begin{tikzpicture}
\draw[very thick, color={violet}] (0.0\textwidth, -5.75) -- (0.99\textwidth, -5.75);
\end{tikzpicture}$$
Introduction
============
One of the most exotic and mysterious systems in condensed matter physics is the strange metal, non-Fermi liquid phase of the high $T_{\mathrm{c}}$ superconductors [@taillefer; @keimer]. The transport data in these materials – including, most famously, the linear in temperature dc electrical resistivity – defies clear explanation by a theory of long lived quasiparticles [@kasahara]. Alternatively, the effectively relativistic plasma in graphene may provide an experimental realization of a strongly interacting quantum fluid [@muller]. Finally, recent advances in ultracold atomic gases have paved the way to realizing strongly interacting fluids [@adams]. In all of the above systems, the absence of long lived quasiparticles on experimentally appropriate time scales (e.g., in the computation of dc transport coefficients) poses a challenge for traditional, quasiparticle-based approaches to condensed matter physics.
From a theoretical perspective, a generic strongly interacting quantum field theory (QFT) in more than one spatial dimension has only a few quantities (energy, charge and momentum) that are long lived, and so hydrodynamics may be a sensible description of the low energy physics at finite temperature and density of all of the above systems. Though hydrodynamics is an old theory [@landau; @kadanoff], its implications for transport have only been understood comparatively recently [@hkms], in weakly disordered systems near quantum criticality, in external magnetic fields [@hkms; @bhaseen; @bhaseen2], and in some simple examples of disordered, non-relativistic electron fluids [@andreev]. This is because “textbook” hydrodynamics is utterly inappropriate for most metals, where the electron-impurity scattering length is short compared to the electron-electron scattering length. Momentum and energy rapidly decay, and the only hydrodynamic variable is the charge density. Note that in contrast to this canonical lore, [@zaanen] proposed that observing viscous hydrodynamics in some metals may be possible.
In most ways, hydrodynamics is a far simpler theory to understand (and perform computations in) than quasiparticle based approaches, such as kinetic theory. The difficulty in studying these systems theoretically lies in the fact that hydrodynamics does not completely solve the transport problem: the coefficients in the hydrodynamic equations must be related to Green’s functions in a microscopic model. Nonetheless, if hydrodynamics is valid, it does provide universal constraints on transport, and a transparent physical picture to interpret the results. There are two tractable approaches that can compute the requisite microscopic Green’s functions, without reference to quasiparticles. The first is methods from (perturbative) QFT, combined with the memory matrix approach [@zwanzig; @mori; @forster], which has recently been used in many microscopic models of strange metals, reasonable for describing cuprate strange metals [@raghu1; @raghu2; @patel; @debanjan]. These approaches rely on properly resumming certain families of Feynman diagrams to all orders. The second approach is holography [@review1; @review2; @review3], which reduces the computation of Green’s functions to solving classical differential equations in a black hole background. This can be done numerically [@santoslat1; @santoslat2; @santoslat3; @chesler; @ling; @donos1409; @rangamani], though in some cases analytic insight can be obtained [@ugajin; @chesler; @btv; @lss; @donos1409; @lucas1411; @peet; @rangamani; @grozdanov], sometimes by employing the memory matrix method [@hkms; @hartnollimpure; @hartnollhofman].
Surprisingly, many of the above transport theories from recent years completely match hydrodynamic predictions, at least superficially, despite being formally beyond the regime of validity of hydrodynamics. We take this as an indication that a thorough understanding of hydrodynamic implications for transport in disordered theories is worthwhile, though we will also carefully describe the regime of validity of the approach. In addition, while almost every citation above aims to address the strange metal phase [@taillefer; @keimer; @kasahara], the “hydrodynamic insight" gained from these methods may be applicable to a much broader set of experimentally realized interacting quantum systems.
Motivation: Incoherent Metals and Holography
--------------------------------------------
Let us begin with the main quantitative motivation for the present paper, which is the physical interpretation of a large body of recent holographic work on transport in QFTs without translational symmetry.
[@hkms] proposed a simple hydrodynamic framework for dc transport which has been quite predictive of both holographic and memory function results in subsequent works, at weak disorder. As we previously mentioned, this framework has been surprisingly good at describing many holographic models that treat disorder at a mean field level. A natural conjecture is that disordered hydrodynamic dc transport can describe holographic systems with explicitly broken translational symmetry, and so it is worthwhile to fully flesh out the disordered hydrodynamic formalism.
We begin with a generic hydrodynamic framework for zero frequency transport calculations in Section \[sec2\]. Our emphasis is on a clear presentation of the assumptions and regime of validity of a hydrodynamic description of transport.
We exactly solve the transport problem in Section \[sec3\], at leading order in the strength of disorder, in the limit where translational symmetry is only broken weakly. These systems describe coherent metals, in the language of [@hartnoll1] – henceforth we will also adopt this terminology. We show that our resulting computations exactly equal the results found by the memory function formalism, under the assumption that the momentum is long-lived, justifying that our approach is sensible, as well as providing a physically transparent derivation of memory function based formulas for conductivities (at least, in the mutual regime of validity of the two methods).
We further show in Section \[sec4\] that the hydrodynamic framework can be used to interpret exact, non-perturbative analytic results for dc transport found using holography. This is subject to the important subtlety that the transport coefficients are computed in terms of a new emergent horizon fluid with distinct, emergent (but somewhat sensible) equations of state.
We then proceed to study hydrodynamic transport in non-perturbatively disordered QFTs. Though this regime is not amenable to exact analytic techniques, we develop a combination of rigorous variational approaches and heuristic approximations, outlined in Section \[sec5\], to calculate dc transport in this regime. One might expect that transport becomes dominated by dissipative hydrodynamics, as momentum may become a “bad" conserved quantity; such a state is an incoherent metal, in the language of [@hartnoll1]. We find further evidence for this qualitative picture.
To date, all models of incoherent metals are holographic massive gravity models [@vegh; @davison; @blake1; @dsz; @thermoel1; @thermoel2], or similarly inspired holographic approaches [@hartnolldonos; @donos1; @andrade; @donos2; @gouteraux; @thermoel3; @donos1406; @kim; @gouteraux2; @davison15; @blake2] which we will henceforth lump together under the label of mean-field disorder. These models break translational symmetry phenomenologically, but not explicitly.[^1] These models always predict dc transport which, at all disorder strengths, can be interpreted in terms of the hydrodynamic results of [@hkms], or the slight generalization of [@lucasMM]. A simple example of this is the exact formula for dc electrical conductivity in an isotropic system [@blake1]: $$\sigma = \sigma_{\textsc{q}} + \frac{\mathcal{Q}^2 \tau}{\mathcal{M}}, \label{drude}$$ with $\sigma_{\textsc{q}}$ a transport coefficient independent of disorder strength, $\mathcal{Q}$ a charge density, and $\mathcal{M}$ an analogue of the mass density. The parameter $\tau$ is analogous to a momentum relaxation time, and related to the phenomenological graviton mass. This formula was already known from quantum critical hydrodynamics [@hkms], using computations valid as $\tau \rightarrow \infty$. Indeed, the latter term is nothing more than the Drude formula, valid in a system without quasiparticles, and the former is a quantum effect that can be important close to a particle-hole symmetric point [@patel2]. Mean field models always predict that (\[drude\]) holds even as $\tau\rightarrow 0$, or in the non-perturbative, strong disorder regime. 
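The Drude form of the second term in (\[drude\]) can be recovered from a one-line relaxation-time estimate (a heuristic sketch of ours, treating $\mathcal{M}v_i$ as the momentum density, driven by the field and relaxed at rate $1/\tau$):

```latex
% Steady-state momentum balance with phenomenological relaxation time tau:
\begin{aligned}
0 \;=\; \partial_t (\mathcal{M} v_i) \;=\; \mathcal{Q} E_i - \frac{\mathcal{M} v_i}{\tau}
\quad\Longrightarrow\quad
J_i \;=\; \mathcal{Q} v_i \;=\; \frac{\mathcal{Q}^2 \tau}{\mathcal{M}}\, E_i .
\end{aligned}
```

The disorder-independent diffusive contribution $\sigma_{\textsc{q}}$ then adds to this convective piece.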
In this limit $\tau$ cannot be interpreted as the momentum relaxation time directly, but importantly, $\sigma$ stays larger than $\sigma_{\textsc{q}}$, which is the conductivity when $\mathcal{Q}=0$ (an uncharged theory).[^2] And while mean field models do agree with approaches that explicitly break translational symmetry weakly [@btv; @lss; @lucas1411], this is simply a consequence of the perturbative equivalence between holographic and memory function computations of transport, proven in many cases in [@lucas].
One might suspect that the fact that (\[drude\]) holds as $\tau\rightarrow 0$ is a sign that mean field physics is a poor description of a strongly disordered QFT, even in holography. For example, it is well known that mean field descriptions of disorder can completely fail to capture even basic thermodynamics of strongly disordered spin models in classical statistical physics – instead, the emergent phases are spin glasses and must be treated using much more delicate technologies [@spinglass].
Our work in Section \[sec5\] demonstrates that our hydrodynamic framework gives an independent framework in which the qualitative picture of dc transport given in (\[drude\]) *is correct at all disorder strengths* until the hydrodynamic description fails. As an important example, we argue that for an isotropic quantum critical system where viscous transport may be neglected, $$\sigma_{\textsc{q}1}(u) + \sigma_{\textsc{q}2}(u)\frac{\mathcal{Q}_0^2}{u^2} \le \sigma \lesssim \sigma_{\textsc{q}3}(u) + \sigma_{\textsc{q}4}(u) \frac{\mathcal{Q}_0^2}{u^2}, \label{eq2}$$ with $\sigma$ the dc electrical conductivity, $\mathcal{Q}_0$ the spatial average of the charge density $\mathcal{Q}$, $u$ the typical size of fluctuations in $\mathcal{Q}$ about this average, and $\sigma_{\textsc{q}1,2,3,4}$ are related to the “quantum critical" diffusive conductivity, $\sigma_{\textsc{q}}$.[^3] As $u\rightarrow 0$, they are all proportional to the constant $\sigma_{\textsc{q}}$ associated with the translationally invariant QFT. At stronger $u$, $\sigma_{\textsc{q}1,2,3,4}$ may be complicated (spatially-averaged) correlations between $\mathcal{Q}$ and $\sigma_{\textsc{q}}$: see (\[saa\]), (\[sq2\]) and (\[eq125\]).
We have written (\[eq2\]) in the manner we did to emphasize asymptotic behavior. When $u\ll \mathcal{Q}_0$, $\sigma \sim u^{-2}$ if $\mathcal{Q}_0 \ne 0$; this is a direct consequence of the fact that currents can only decay through the very slow relaxation of momentum: in a translationally invariant (momentum-conserving) system at finite $\mathcal{Q}_0$, Galilean invariance imposes $\sigma=\infty$.[^4] In contrast, when $u\gg \mathcal{Q}_0$, $\sigma$ is sensitive to the typical behavior of the local $\sigma_{\textsc{q}}$ and is not parametrically larger. Remarkably, $\sigma_{\textsc{q}1}>0$ in any system where the local quantum critical conductivity never vanishes, so any such system is provably a conductor. The physical intuition is that a current can always flow locally due to the finite $\sigma_{\textsc{q}}$, and these local currents can be patched together into a global current flow. The upper bound is simply a statement that (up to subtleties involving conservation laws) we can bound the power dissipated (and accordingly the conductance), with the average electric field fixed, by assuming that the electric field within the system is uniform. When $u\ll \mathcal{Q}_0$, we can make contact with the memory matrix formalism and compute $\sigma$ exactly at leading order in perturbation theory, as we discuss in Section \[sec3\], and so in this regime we can do better than the bounds in (\[eq2\]). Indeed, in this regime, we can identify $\tau/\mathcal{M} \approx \sigma_{\textsc{q}}/u^2$, and so our bounds justify (\[drude\]) in our class of hydrodynamic models. Analogous phenomena may be responsible for the finite conductivity of mean field holographic models at all “disorder strengths".
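To make the asymptotics concrete, here is a minimal numerical sketch of ours; the interpolating form below is an illustrative assumption consistent with (\[eq2\]), not a derived result:

```python
# Illustrative crossover between the coherent (u << Q0) and incoherent
# (u >> Q0) regimes, using the simplest form compatible with (eq2):
#   sigma(u) = sigma_Q * (1 + Q0^2 / u^2).
# This interpolation is our own assumption, for illustration only.
def sigma(u, sigma_q=1.0, q0=1.0):
    return sigma_q * (1.0 + (q0 / u) ** 2)

weak = sigma(0.01)    # u << Q0: dominated by the Drude-like u^{-2} term
strong = sigma(100.0) # u >> Q0: saturates near sigma_Q
```

Halving $u$ deep in the coherent regime quadruples $\sigma$, while in the incoherent regime $\sigma$ is pinned near $\sigma_{\textsc{q}}$, as in Figure \[fig1\].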
A pictorial summary of (\[eq2\]) is shown in Figure \[fig1\], and the main quantitative result of this paper is the justification for Figure \[fig1\] without any mean-field treatment of disorder, and the development of new techniques to address the strongly disordered regime.
![A qualitative sketch of the coherent-incoherent transition realizable in our framework. $\sigma$ denotes the value of a transport coefficient, such as electrical conductivity, and $u$ denotes the “strength of randomness". The solid black line shows our perturbative analytic computation of $\sigma \sim u^{-2}$ as $u\rightarrow 0$. The dashed red line is the qualitative prediction of mean field models that $\sigma$ saturates at a finite value at strong disorder in a theory with quantum critical transport; in particular, $\sigma\ge \sigma^*$. The gray shaded region corresponds to the region of $\sigma$ allowed by variational bounds on $\sigma$, in generic agreement with mean field models. $u\sim \mathcal{Q}_0$ is the scale of the crossover between a coherent and incoherent metal. []{data-label="fig1"}](fig1hydro.pdf){width="3.5in"}
After we had completed this work, [@donos1506; @donos1507] appeared; these papers have some overlap with the ideas in Section \[sec4\].
Motivation: Beyond Resistor Lattices
------------------------------------
Though the main quantitative focus of this work is a set of computational tools to study hydrodynamic transport in relativistic fluids, such as in holography, we also emphasize that the framework we are developing (with suitable generalizations) provides a sensible description of transport in strongly interacting condensed matter systems, without any reference to holography. A common approximation made in condensed matter is what we will refer to below as the “resistor lattice" approximation, which in physical terms is the statement that the slow, hydrodynamic sector of the theory consists only of a conserved charge. One may then model the emergent hydrodynamics – a simple diffusion equation for charge – as a local resistor network: see e.g. [@ruzin; @halperin]. As mentioned before, this is sensible if electrons scatter more frequently from impurities than they do from each other.
However, we will point out in Section \[sec32\] that this approximation fails in a clean hydrodynamic system: the necessary resistor lattice becomes nonlocal. This is not a surprise. What this paper clarifies is the technique that unifies the computation of transport in the weakly disordered (memory matrix) regime and the strongly disordered regime. In doing so, we generalize well-known variational techniques from resistor networks to account for convective transport. In very special cases, [@andreev] performed similar calculations, though it did not elucidate the connections with the memory matrix formalism, or with the resistor lattice technologies which we generalize directly in the continuum in this paper. Such resistor lattice methods – commonly with an additional approximation called effective medium theory [@landauer; @kirkpatrick] – have been used recently to study transport in a variety of experimentally realizable systems [@meir; @sarma; @demler]. Our approach can generalize these computations to the regime when disorder is weak, and may result in interesting new experimental predictions.
We emphasize that the calculations in [@meir; @sarma; @demler] typically include non-relativistic effects such as Coulomb screening, or additionally approximate that electron-hole recombination is slow enough that both the electron and hole densities are hydrodynamic quantities. We will not make either assumption in this paper, but the general framework and many computational methods we develop almost certainly extend quite naturally to account for these effects.
Steady-State Hydrodynamics {#sec2}
==========================
We consider a strongly coupled QFT in $d$ spatial dimensions at finite temperature and density, on a flat spacetime. It is necessary to generalize to curved spaces to connect with the results of Section \[sec4\], but every result in this paper generalizes in the obvious way (replacing partial derivatives with covariant derivatives, $\int \mathrm{d}^d\mathbf{x} \rightarrow \int \mathrm{d}^d\mathbf{x} \sqrt{g}$, etc.), and so we will not do so explicitly for ease of presentation. Without quasiparticles, the long time dynamics are that of charge, energy and momentum. In this section, we will work with relativistic notation, though the techniques work for non-relativistic theories as well. We focus on theories with a single conserved charge, but the techniques straightforwardly generalize to theories with multiple conserved charges. Note that we will work in units with $\hbar=1$.
We deform the microscopic Hamiltonian $H$ by an external chemical potential: $$H \rightarrow H - \int \mathrm{d}^{d}\mathbf{x} \; \bar A_\mu J^\mu ,$$ with $J^\mu$ a conserved electrical current, $\bar F=\mathrm{d}\bar A$, and $$\bar A = \bar \mu(\mathbf{x})\, \mathrm{d}t,$$ so if $\mathcal{Q}(\mathbf{x})$ is the local charge density operator, $$H \rightarrow H - \int \mathrm{d}^d\mathbf{x} \; \bar\mu(\mathbf{x}) \mathcal{Q}(\mathbf{x}).$$ The chemical potential in the fluid is thus $\bar \mu$. We also assume that, in our background state, the temperature is uniformly $T$ and there is no fluid velocity. This forms the basis of a consistent solution to the hydrodynamic equations, driven by the coupling $\bar\mu$ to an external bath, as we will derive below. The steady-state hydrodynamic equations read (in relativistic notation) [@hkms]
\[hydroeq\]$$\begin{aligned}
\partial_i T^{i\mu} &= \bar F^{\mu\nu}J_\nu, \\
\partial_i J^i &= 0, \end{aligned}$$
where Greek indices denote spacetime indices, Latin indices denote spatial indices, and $T^{\mu\nu}$ is the energy-momentum current. We have implicitly taken expectation values of all operators in (\[hydroeq\]) and will do so for the remainder of the paper. Because we have sourced disorder in our fluid entirely through $\bar\mu(\mathbf{x})$, we do not need to couple any other dynamical sectors to the theory, though we will point out how this may be done perturbatively in Section \[sec:scalar\], when additional scalars contribute to disorder. The coupling of the fluid to an external chemical potential means that both energy and momentum may be exchanged with the external bath.
In order for hydrodynamics to be valid, it is necessary that $\bar\mu$ vary slowly in space, on a length scale $\xi$ which is large compared to the (possibly position-dependent) mean free path of the fluid $l$. In our strongly interacting fluid, $l$ is the analogue of the electron-electron scattering length in traditional solid-state physics. Without quasiparticles, it is best interpreted as the minimal length scale at which a hydrodynamic description is sensible. The requirement that $\bar\mu$ vary slowly is often written as $$\left|\frac{\partial_x \bar\mu}{\bar\mu}\right| \ll \frac{1}{l}, \label{eq5}$$ though this should not be taken literally ($\bar \mu$ may vary slowly through $\bar \mu=0$). The requirement we will assume henceforth in calculations is that, in Fourier space, $\bar\mu(\mathbf{k})$ is only non-negligible for $|\mathbf{k}|\xi \lesssim 1$. It is not necessary that $\bar \mu$ be approximately the same at all points in space:[^5] disorder can be non-perturbative, with hydrodynamic coefficients such as viscosity and charge density, contained within $T^{\mu\nu}$ and $J^\mu$ in (\[hydroeq\]), varying substantially over distances large compared to $l$; see Figure \[figfluid\]. This was noted in [@andreev] as well.
![We employ a separation of 3 length scales in this paper. $\bar\mu$, and the local fluid properties such as entropy density $\mathcal{S}$, may vary substantially over the distance scale $\xi$. We require $l \ll \xi$ for a hydrodynamic description to be sensible. We will often put our fluids in a large but finite box of length $L\gg \xi$ as well.[]{data-label="figfluid"}](fig2fluid.pdf){width="5in"}
In a quantum critical theory of dynamical exponent $z$, one finds $$l \sim T^{-1/z} \label{lT1z}$$ by dimensional analysis [@sachdev]. Note that (\[eq5\]) does not hold as $T\rightarrow 0$ – it is thus important that we are considering the finite temperature response of the QFT. (\[lT1z\]) may be modified in models where the hydrodynamic limit can persist in regimes where $\bar\mu \gg T$, such as in holography [@davisoncold] (in this particular model one seems to find $l\sim \bar\mu^{-1}$), or when the expectation value of neutral scalar fields is large. Explicit computations of $l$ are beyond the scope of this paper but are necessary to properly understand the regime of validity of hydrodynamics. A conservative requirement is certainly to fix the background temperature $T$ to be uniform and large enough that $\xi \gg T^{-1/z}$ at all points in space, but this may be too strict, as we will see in holographic models.
More liberally, one could only require that $\xi \gg l$ hold locally, with short wavelength disorder in “hot" regions of space with small $l$, and long wavelength disorder in “cold" regions of space with large $l$. So long as the solution to the hydrodynamic equations of motion itself varies slowly in the cold regions with large $l$, then our hydrodynamic formalism should be an acceptable description of transport.
When (\[eq5\]) holds, it is a sensible approximation (and standard in condensed matter physics) to assume that thermodynamic and hydrodynamic coefficients, such as viscosity $\eta$ or charge density $\mathcal{Q}$, are *local* and depend only on $\bar \mu(\mathbf{x})$. We then – very crudely speaking – put together pieces of homogeneous fluid of width $\xi$, whose equations of state are translation invariant, and smooth over the fluctuations from piece to piece. Our approach to transport is to focus on the response of the low energy, hydrodynamic degrees of freedom, exactly treating their evolution across the slowly varying background fluid, as we now detail.
For holographic theories, we do *not* need to make the assumption $\xi \gg l$, or the assumption that all transport coefficients are functions of $\bar\mu$ alone. It is remarkable that the mathematical framework we develop in this paper is nonetheless applicable to so many holographic computations.
Linear Response: A Warm-Up
--------------------------
Let us begin with some simple calculations to get an intuitive feeling for hydrodynamic transport. We work with first order hydrodynamics, and will justify this later in the section. We will also assume that our theory is isotropic, another assumption which we relax later.
A first simple case to consider is when the only slow dynamics in the system are of charge. As we mentioned previously, in this case the dynamics reduce to the solution of a diffusion equation: $$\partial_i \delta J_i = \partial_i \left(\sigma^{\mathrm{loc}}(\delta E_i-\partial_i \delta\tilde\mu)\right) = 0, \label{eqjsimple}$$ where $\delta E_i$ is the infinitesimal, constant, externally applied electric field, $\delta J_i$ is the infinitesimal electric current, and $\delta\tilde\mu$ is the infinitesimal local chemical potential, excited in response to $\delta E_j$. $\sigma^{\mathrm{loc}}(\mathbf{x})$ is a transport coefficient which is inhomogeneous in space, and can be interpreted as the local conductivity of the theory. This approximation is well known in condensed matter physics, and we mentioned it in the introduction. Note that chemical potential gradients are equivalent to electric fields in the hydrodynamic equations of motion. The electrical conductivity of such a system is defined as $$\mathbb{E}[\delta J_i] =\sigma_{ij}\delta E_j,$$ where $\delta J_i$ is evaluated on the unique solution to (\[eqjsimple\]) where $\delta\mu$ obeys sensible boundary conditions (e.g., periodicity when disorder is periodic with period $L$), and we have denoted with $\mathbb{E}[\cdots]$ a uniform spatial average. If the disorder is isotropic, then $\sigma_{ij} = \sigma \delta_{ij}$. Note that $\sigma$ is not equivalent to $\mathbb{E}[\sigma^{\mathrm{loc}}]$. Henceforth, we will define $$\delta\mu \equiv \delta\tilde\mu - x_j \delta E_j,$$ so that (\[eqjsimple\]) can be written compactly as $$\partial_i \delta J_i = -\partial_i \left(\sigma^{\mathrm{loc}}\partial_i \delta\mu\right).$$
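The inequivalence $\sigma \ne \mathbb{E}[\sigma^{\mathrm{loc}}]$ is easiest to see in one dimension, where $\partial_x \delta J = 0$ forces the current to be uniform and (\[eqjsimple\]) is solved by quadrature: $\sigma$ is the *harmonic* mean of $\sigma^{\mathrm{loc}}$. A minimal numerical sketch of ours (the sinusoidal profile is an arbitrary toy choice):

```python
import numpy as np

# 1D periodic medium: d(delta J)/dx = 0 forces delta J = const, with
# delta J = sigma_loc(x) * (E - d(delta mu)/dx).  Periodicity of
# delta mu then fixes delta J = E / E[1/sigma_loc]: the harmonic mean
# of the local conductivity, not the arithmetic mean E[sigma_loc].
N = 4096
x = np.linspace(0.0, 1.0, N, endpoint=False)
sigma_loc = 1.0 + 0.8 * np.sin(2.0 * np.pi * x)  # strictly positive toy profile
E = 1.0

J = E / np.mean(1.0 / sigma_loc)   # the uniform current
dmu_dx = E - J / sigma_loc         # local chemical potential gradient
delta_mu = np.cumsum(dmu_dx) / N   # reconstructed (periodic) delta mu

sigma_eff = J / E                  # harmonic mean = sqrt(1 - 0.8^2) = 0.6
sigma_avg = np.mean(sigma_loc)     # arithmetic mean = 1.0
```

For this profile the harmonic mean is $\sqrt{1-0.8^2}=0.6$, strictly below $\mathbb{E}[\sigma^{\mathrm{loc}}]=1$; the endpoint of `delta_mu` returning to zero confirms that the reconstructed solution is periodic.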
Let us now account for convective transport – this means that momentum is a long-lived quantity and must be included in hydrodynamics. If we neglect thermal transport, then we must modify (\[eqjsimple\]) to account for the convective contributions to charge: $$\partial_i \delta J_i = \partial_i \left(\mathcal{Q}\delta v_i-\sigma^{\textsc{q}}\partial_i \delta\mu\right) = 0,$$ where $\mathcal{Q}$ is the charge density, and $\delta v_i$ is the velocity field in the fluid. $\sigma^{\textsc{q}} \ne \sigma^{\mathrm{loc}}$ is a “quantum critical" transport coefficient corresponding to the flow of a current in the absence of any velocity field [@hkms]. The momentum conservation equation allows us to determine $\delta v_i$, and we will show more carefully below that this equation is the following analogue of the Navier-Stokes equation: $$\mathcal{Q} \partial_i \delta\mu = \partial_j \left( \eta \partial_j \delta v_i + \eta \partial_i\delta v_j+ \delta_{ij}\left(\zeta - \frac{2\eta}{d}\right)\partial_k \delta v_k \right)$$ with $\eta$ the shear viscosity and $\zeta$ the bulk viscosity. As before, $\mathcal{Q}$, $\sigma^{\textsc{q}}$, $\zeta$ and $\eta$ can all depend on position $\mathbf{x}$, though not in an arbitrary way. As we discussed previously, the $\mathbf{x}$-dependence of all these coefficients is fixed by their dependence on $\bar\mu(\mathbf{x})$, as determined in a locally homogeneous fluid.
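As a sanity check of our own on these two equations: in a strictly homogeneous fluid (constant $\mathcal{Q}$, $\sigma^{\textsc{q}}$, $\eta$, $\zeta$), a constant $\delta\mu$ and a uniform $\delta v_i$ solve both identically, carrying the current

```latex
% Homogeneous check: both sides of the Navier-Stokes analogue vanish,
% and the charge equation is trivially satisfied, with
\delta J_i \;=\; \mathcal{Q}\, \delta v_i \;\ne\; 0
\qquad \text{while} \qquad \partial_i \delta\mu = 0 ,
```

so a finite current flows with no applied gradient: momentum never relaxes, and the clean fluid has $\sigma=\infty$, the coherent limit discussed in the introduction.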
Linear Response: Complete Theory
--------------------------------
Let us now describe the complete linear response theory which includes the response of temperature, chemical potential and velocity to external fields.
About our background fluid, we perturb the system with an infinitesimal electric field $$\delta E_i = \mathbb{E}\left[- \partial_i \delta\mu\right],$$ and an infinitesimal temperature gradient $$\delta \zeta_i = \mathbb{E}\left[-\frac{1}{T} \partial_i \delta T\right].$$ Defining the heat current $$Q^i \equiv T^{it}- \bar\mu J^i,$$ we find that the heat current is conserved (divergenceless): $$\partial_i Q^i = \partial_i T^{it} - \partial_i (\bar \mu J^i) = - \bar F_{ti}J_i -J_i \partial_i \bar \mu = 0.$$ The thermoelectric response of the theory is given by the matrices: $$\left(\begin{array}{c} \mathbb{E}[ \delta J_i] \\ \mathbb{E}[\delta Q_i] \end{array}\right) = \left(\begin{array}{cc} \sigma_{ij} &\ \alpha_{ij} T \\ \bar\alpha_{ij} T &\ \bar\kappa_{ij} T\end{array}\right) \left(\begin{array}{c} \delta E_j \\ \delta \zeta_j \end{array}\right). \label{jq}$$
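As a quick consistency check on the conventions in (\[jq\]), one can extract the open-circuit thermal conductivity $\kappa = \bar\kappa - T\alpha\bar\alpha/\sigma$, the standard combination measured at zero electric current. A scalar (isotropic) sketch, with toy numbers of our own choosing:

```python
# Isotropic toy numbers (ours, for illustration) for the matrix in (jq):
#   E[dJ] = sigma*dE + alpha*T*dzeta
#   E[dQ] = alphabar*T*dE + kappabar*T*dzeta
sigma, alpha, alphabar, kappabar, T = 2.0, 0.5, 0.5, 3.0, 1.0

# Open-circuit thermal conductivity: impose E[dJ] = 0, so that
# dE = -(alpha*T/sigma)*dzeta, and read off the heat response
# E[dQ] = kappa*T*dzeta with:
kappa = kappabar - T * alpha * alphabar / sigma
```

Since the correction term is positive whenever $\alpha = \bar\alpha$, the open-circuit $\kappa$ never exceeds $\bar\kappa$.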
Let us define
$$\begin{aligned}
\delta \Phi^\alpha &\equiv \left(\begin{array}{c} \delta\mu \\ T^{-1} \delta T\end{array}\right), \\
\delta F^\alpha_i &\equiv \left(\begin{array}{c} \delta E_i \\ \delta \zeta_i \end{array}\right), \\
\delta \mathcal{J}^\alpha_i &\equiv \left(\begin{array}{c} \delta J_i \\ \delta Q_i \end{array}\right), \end{aligned}$$
where the $\alpha$ vector index denotes charge (q) or heat (h). Note that we may write bold-face vectors below, but this always refers to spatial indices only – we will always write out the $\alpha$ index explicitly in equations. We then may write (\[jq\]) as $$\mathbb{E}[\delta \mathcal{J}^\alpha_i] = \sigma^{\alpha\beta}_{ij} \delta F^\beta_j.$$
We write down the gradient expansion of hydrodynamics to first order in derivatives acting on $\delta T$, $\delta\mu$ and $\delta v_i$, by expanding the stress tensor and charge current in terms of the linear response $\delta T$, $\delta\mu$ and $\delta v_i$ of the fluid. The charge and heat conservation equations of the fluid may be written as
$$\begin{aligned}
0 &= \partial_i \left(\mathcal{Q}\delta v_i - \sigma^{\textsc{q}}_{ij}\partial_j \delta\mu - \alpha^{\textsc{q}}_{ij}\partial_j \delta T\right), \\
0 &= \partial_i \left(T\mathcal{S}\delta v_i - T \bar\alpha^{\textsc{q}}_{ij}\partial_j \delta\mu - \bar\kappa^{\textsc{q}}_{ij}\partial_j \delta T\right),\end{aligned}$$
which we henceforth package into the more compact form $$0 = \partial_i \delta\mathcal{J}^\alpha_i = \partial_i \left[\rho^\alpha \delta v_i - \Sigma^{\alpha\beta}_{ij} \partial_j \delta\Phi^\beta\right], \label{maineq1}$$ where $$\rho^\alpha \equiv \left(\begin{array}{c} \mathcal{Q} \\ T\mathcal{S} \end{array}\right),$$ with $\mathcal{Q}$ the electric charge density and $\mathcal{S}$ the entropy density. The $\Sigma^{\alpha\beta}_{ij}$ correspond to diffusive transport coefficients that couple charge and heat flows to gradients in $\delta\mu$ and $\delta T$, even in the absence of any convective (non-vanishing $v_i$) fluid motion. In particular, $\Sigma^{\mathrm{qq}}_{ij} = \sigma^{\textsc{q}}_{ij}$ corresponds to the “quantum critical" conductivity and is typically assumed to vanish in a non-relativistic theory without particle-hole symmetry, as in [@andreev]. $\Sigma^{\mathrm{qh}}_{ij} = \alpha^{\textsc{q}}_{ij}$ corresponds to an intrinsic diffusive conductivity that couples charge and heat flows. In standard non-relativistic theories, only $\Sigma^{\mathrm{hh}}_{ij} = \bar\kappa^{\textsc{q}}_{ij}$ is nonvanishing, as in [@andreev]. All three are non-zero in relativistic systems [@hkms]. We assume that $$\Sigma^{\alpha\beta}_{ij} = \Sigma^{\beta\alpha}_{ji}, \label{diffsym1}$$ and that locally $\Sigma$ is a positive definite matrix (note that $\alpha i$ and $\beta j$ group together for purposes of matrix inversion). This is sensible from the point of view of the second law of thermodynamics. Indeed, in isotropic theories, the second law provides more constraints on these transport coefficients than (\[diffsym1\]) alone [@hkms], but we can and will relax these constraints in the technical formalism we develop without substantially altering any physical content. This loose treatment of entropic constraints proves useful in Section \[sec4\].
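The grouping of $\alpha i$ and $\beta j$ into single matrix indices can be made concrete in a few lines (the numerical entries below are ours, purely for illustration):

```python
import numpy as np

# Package Sigma^{alpha beta}_{ij} as one (2d) x (2d) matrix over the
# combined index A = (alpha, i), with alpha in {q, h} and i = 1..d.
d = 2
sq = 1.2 * np.eye(d)  # Sigma^qq_ij = sigma^Q_ij   (toy values)
aq = 0.3 * np.eye(d)  # Sigma^qh_ij = alpha^Q_ij   (toy values)
kq = 2.0 * np.eye(d)  # Sigma^hh_ij = kappabar^Q_ij (toy values)
Sigma = np.block([[sq, aq],
                  [aq.T, kq]])  # lower-left block fixed by (diffsym1)

# (diffsym1) is overall symmetry of the packaged matrix, and the second
# law requires it to be positive definite in this grouped sense:
onsager = bool(np.allclose(Sigma, Sigma.T))
pos_def = bool(np.all(np.linalg.eigvalsh(Sigma) > 0))
```

Inverting `Sigma` as a single $(2d)\times(2d)$ matrix is exactly the operation meant by "$\alpha i$ and $\beta j$ group together".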
The momentum conservation equation becomes[^6] $$\begin{aligned}
\partial_i \delta P + \partial_j \delta\mathcal{T}_{ij} &= \delta\left[\mathcal{S}\partial_i T + \mathcal{Q}\partial_i \mu \right]+ \partial_j \delta\mathcal{T}_{ij}= \partial_i \delta\left( \Phi^\alpha \rho^\alpha \right) - \partial_j \left[ \eta_{ijkl} \partial_l \delta v_k\right] = \delta\mathcal{Q} \partial_i \bar\mu,\end{aligned}$$ where $\mathcal{T}_{ij}$ is the viscous stress tensor, $P$ is the pressure, and $\eta_{ijkl}$ is the viscosity tensor with symmetries $$\eta_{ijkl}= \eta_{jikl} = \eta_{ijlk} = \eta_{klij}, \label{diffsym2}$$ and we have used the fact that thermodynamic relations imply $$\partial_i P = \mathcal{S} \partial_i T + \mathcal{Q} \partial_i \mu. \label{dip}$$ Now, since $T$ is constant on the background, and the background $\mu$ is simply given by $\bar \mu$, we cancel the two $\delta\mathcal{Q}$ terms, and we are left with $$0 = \rho^\alpha\partial_i \delta\Phi^\alpha - \partial_j \left[ \eta_{ijkl} \partial_k \delta v_l\right]. \label{maineq2}$$ In the above equations, $\rho$, $\Sigma$ and $\eta$ are all smooth functions of $\bar\mu(\mathbf{x})$, varying on large length scales compared to $l$.
(\[dip\]), along with (\[hydroeq\]) and the fact that $\mathcal{J}^\alpha_i = \mathcal{T}_{ij}=0$ on the background solution, demonstrates that the background solution to the hydrodynamic equations indeed exists. $\mathcal{J}^\alpha_i=0$ on the background because the hydrodynamic equations only depend on $\bar\mu - \mu$, which identically vanishes [@hkms]. Though we expect that disorder implies that $\rho^\alpha$, $\Sigma^{\alpha\beta}_{ij}$ and $\eta_{ijkl}$ are all functions of $\bar\mu$ alone, we will not comment further on the precise nature of this dependence; a microscopic computation is necessary in general.
(\[maineq1\]) and (\[maineq2\]) are linear and have a unique solution when subject to appropriate boundary conditions. These boundary conditions will be periodic boundary conditions in a large box of size $L$ in all directions, up to non-trivial gradients $\delta F^\alpha_i = \mathbb{E}[-\partial_i \delta\Phi^\alpha]$. We also stress that $\delta\Phi^\alpha$ only enters the equations of motion through derivatives – this is crucial in order for the linear response problem to be well posed on spaces that are periodic or compact. Henceforth we will drop the $\delta$ so as to avoid clutter, with a few exceptions.
Having imposed these boundary conditions, we prove in Appendix \[apponsager\] that, for any hydrodynamic transport computation, $$\sigma^{\alpha\beta}_{ij} = \sigma^{\beta\alpha}_{ji}.$$ This is referred to as Onsager reciprocity, and is a non-trivial consistency check on this framework. Note that this condition is violated when time-reversal symmetry (in the microscopic Hamiltonian $H$) is broken, e.g. by a background magnetic field [@hkms]. We do not consider this possibility in this paper.
As mentioned previously, we have truncated the hydrodynamic gradient expansion at first order. Let us give some sensible, though non-rigorous, justifications for this. The hydrodynamic gradient expansion can be organized as follows:
$$\begin{aligned}
\mathcal{T}_{ij} &\sim l \mathcal{T}^{(1)}_{ij} + l^2 \mathcal{T}^{(2)}_{ij} + \cdots, \\
\mathcal{J}^\alpha_i - \rho^\alpha v_i &\sim l \mathcal{J}^{(1)\alpha}_i+ l^2 \mathcal{J}^{(2)\alpha}_i + \cdots
\end{aligned}$$
$\mathcal{T}^{(n)}_{ij}$ corresponds to the coefficient of the stress tensor carrying $n$ spatial derivatives; similarly for $\mathcal{J}^{(n)\alpha}_i$. This is a qualitative statement – the basic idea is that $\mathcal{T}_{ij} \sim l^n \epsilon/v$ at $n^{\mathrm{th}}$ order in derivatives, with $\epsilon$ the energy density and $v$ a velocity scale such as the speed of sound, and so we have extracted out the overall scaling in $l$ above. Assuming that the solutions $\Phi$ and $v_i$ vary over the length scale $\xi \gg l$, we see that higher derivative corrections to the charge, heat and momentum currents are suppressed by powers of $l/\xi$, and thus can be neglected. In the special case where diffusive charge and heat transport dominates, this argument can be made rigorous. When the convective contributions cannot be ignored, this argument is not rigorous – not all of the coefficients $\rho$, $\Sigma$ and $\eta$ scale as the same power of $l$, in general, and so it is not obvious that $\Phi$ and $v_i$ must vary on the length scale $\xi$. However, this is still a plausible assumption – rapid oscillations of $\Phi$ and $v_i$ on length scales short compared to $\xi$ seem unphysical in a static solution, since static solutions to dissipative hydrodynamics tend to be “as close as possible” to equilibrium, given the boundary conditions; the variational methods we will develop in this paper also suggest that fast variations of $\Phi$ and $v_i$ are unlikely. This general framework readily generalizes to account for higher derivative corrections to hydrodynamics, if one wishes to directly include them, but we will not do so in this paper. Note that (\[maineq1\]) and (\[maineq2\]) are not well-posed until we include first order corrections to hydrodynamics, so it is necessary to work at least to this order in the gradient expansion.
In the absence of other dynamical sectors of the theory, it is necessary that either $\mathcal{S}$ or $\mathcal{Q}$ be position dependent in order to obtain finite thermoelectric conductivities. Indeed, if both $\mathcal{S}$ and $\mathcal{Q}$ are constants, there is a zero mode in (\[maineq1\]) and (\[maineq2\]) corresponding to uniform shifts in $v_i$ and $\mathcal{J}^\alpha_i$. This zero mode is responsible for infinite dc transport coefficients in a fully translation invariant theory. Mathematically, we could break translation invariance only in $\Sigma$ or $\eta$ and still have this zero mode. However, in a microscopic theory $\Sigma$, $\eta$, $\mathcal{S}$ and $\mathcal{Q}$ are not arbitrary but are fixed by equations of state that relate these parameters to $\bar\mu$, so in general both will be inhomogeneous. Alternatively, as we will discuss in Section \[sec:scalar\], it is possible to add other dynamical disordered sectors of the theory which lead to finite conductivities even when $\mathcal{S}$ and $\mathcal{Q}$ are constants.
Let us also briefly mention the issue of momentum relaxation times. In many holographic mean field models of disorder, momentum relaxation can be parametrically fast [@gouteraux2]. In the hydrodynamic models considered here, the “momentum relaxation time” is parametrically slower than the mean free time ($1/T$ in most quantum critical models). We do not explicitly compute this momentum relaxation time, and a single momentum relaxation time will not be easily definable when disorder is non-perturbative. We will see that it is possible to spoil coherent transport despite parametrically long-lived local momentum currents.
It is also worth stressing that henceforth, when we refer to “hydrodynamic” transport, we refer to the transport equations being written in terms of (\[maineq1\]) and (\[maineq2\]). Remarkably, essentially all of our results rely only on the structure of these equations being obeyed, and not on $\mathcal{S}$ and $\mathcal{Q}$ obeying thermodynamic Maxwell relations – holographic horizon fluids do not obey any obvious Maxwell relations. If one has a microscopic system of interest, with known equations of state, one may simply take the above equations and solve them numerically. So the point of this paper is less to describe complicated (numerical) solutions to these equations of motion, and more to elucidate simple and universal physical consequences of hydrodynamic transport: first through exact results for weakly disordered theories, and then through a combination of rigorous bounds and heuristic arguments for strongly disordered theories. Numerical solutions to these equations, and a discussion of their relevance to realistic quantum critical systems, will be presented elsewhere.
Weak Disorder Limit {#sec3}
===================
In this section, we specialize to the weak disorder limit in which slow momentum relaxation dominates the conductivities. In this limit we can make direct contact with the memory matrix formalism [@zwanzig; @mori; @forster], and provide physically transparent derivations of many previous results derived within this formalism, in the overlapping regime of validity of hydrodynamics and the memory function formalism. We simply quote the results of this approach (see e.g. [@lucasMM]): if the Hamiltonian of our weakly disordered system is $$H = H_0 - \int \mathrm{d}^d\mathbf{x}\; h(\mathbf{x})\mathcal{O}(\mathbf{x}),$$ with $H_0$ translation invariant, and $\mathcal{O}$ an operator in the theory coupled to the field $h$, then the memory matrix formalism predicts, at leading order in perturbation theory: $$\sigma^{\alpha\beta}_{ij} \approx \mathbb{E}\left[ \rho^\alpha
\right]\mathbb{E}\left[\rho^\beta\right] \left[\sum_{\mathbf{k}} \; k_i k_j |h(\mathbf{k})|^2\left[ \lim_{\omega\rightarrow 0} \frac{\mathrm{Im}\left(G_{\mathcal{OO}}(\mathbf{k},\omega)\right)}{\omega}\right] \right]^{-1} \equiv \mathbb{E}\left[ \rho^\alpha
\right]\mathbb{E}\left[\rho^\beta\right] \Gamma^{-1}_{ij}. \label{eqmm}$$ In some models of strange metals appropriate for real world modeling, some care is required in defining $\mathcal{Q}$ [@patel]. Formally, the memory matrix formalism is exact, but it does not appear to be tractable in practice beyond leading order in higher-dimensional models.
Let us briefly note our conventions in Fourier space. Fourier transforms are defined as $$\mathcal{O}(\mathbf{k}) = \frac{1}{L^d} \int \mathrm{d}^d\mathbf{x} \; \mathrm{e}^{-\mathrm{i}\mathbf{k}\cdot\mathbf{x}}\mathcal{O}(\mathbf{x}).$$ We will often assume that disordered sectors of the fluid have (zero-mean) Gaussian fluctuations: e.g., quenched disorder on $\mathcal{O}$ would scale as
$$\begin{aligned}
\mathbb{E}_{\mathrm{d}}[\mathcal{O}(\mathbf{k})] &= 0, \\
\mathbb{E}_{\mathrm{d}}[\mathcal{O}(\mathbf{k})\mathcal{O}(\mathbf{q})] &= \frac{V^2_{\mathcal{O}}}{N}\delta_{\mathbf{k},-\mathbf{q}}, \;\;\;\; (|\mathbf{k}|\xi \lesssim 1)\end{aligned}$$
where $N\gg 1$ represents the number of Fourier modes which “meaningfully contribute" to $\mathcal{O}(\mathbf{x})$, and $\mathbb{E}_{\mathrm{d}}$ denotes an average over quenched disorder: $$N \sim \left(\frac{L}{\xi}\right)^d.$$These definitions are chosen so that $\mathbb{E}[\mathcal{O}(\mathbf{x})^2] = V_{\mathcal{O}}^2$. We will use these conventions throughout the paper. Such disorder is consistent with (\[eq5\]). At a typical point in the fluid, $$\left|\frac{\partial_x \mathcal{O}}{\mathcal{O}}\right|^2 \sim \frac{\mathbb{E}[(\partial_x \mathcal{O})^2]}{\mathbb{E}[\mathcal{O}^2]} \sim \frac{\xi^{-2} V_{\mathcal{O}}^2}{V_{\mathcal{O}}^2} \sim \frac{1}{\xi^2}.$$ To obtain the numerator in the third step above, it is helpful to go to Fourier space, and note that $|\mathbf{k}| \lesssim \xi^{-1}$ for all non-negligible modes.
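As a sanity check of these conventions (a sketch with illustrative parameter values, in $d=1$ for simplicity), one can synthesize such a disorder realization and verify both $\mathbb{E}[\mathcal{O}^2]\approx V_{\mathcal{O}}^2$ and the gradient estimate $\mathbb{E}[(\partial_x\mathcal{O})^2]/\mathbb{E}[\mathcal{O}^2]\sim \xi^{-2}$:

```python
import numpy as np

# Build O(x) from N Fourier modes with |k| <~ 1/xi and Gaussian amplitudes
# normalized so that E[O(x)^2] = V^2. Parameter values are illustrative only.
rng = np.random.default_rng(1)
L, xi, V = 2000.0, 5.0, 1.0                      # box size, correlation length, strength
x = np.linspace(0.0, L, 20000, endpoint=False)
ks = 2 * np.pi * np.arange(1, int(L / (2 * np.pi * xi)) + 1) / L
N = 2 * len(ks)                                  # +k and -k both count as modes

# Complex amplitudes with E[|O(k)|^2] = V^2 / N; reality fixes O(-k) = O(k)^*.
amps = (rng.standard_normal(len(ks)) + 1j * rng.standard_normal(len(ks))) * V / np.sqrt(2 * N)
O = sum(2 * np.real(a * np.exp(1j * k * x)) for a, k in zip(amps, ks))

var_O = np.mean(O**2)                            # should be close to V^2
ratio = np.mean(np.gradient(O, x)**2) / var_O    # should be of order 1 / xi^2
```

For a single realization both quantities fluctuate at the $N^{-1/2}$ level, but the scalings $\mathrm{var}(\mathcal{O})\approx V^2$ and $\text{ratio}=\mathrm{O}(\xi^{-2})$ are already visible.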
In [@hkms] and many subsequent works, within the hydrodynamic approach to transport, the momentum transport equation is modified to $$\partial_j T^{ji} = -\frac{T^{ti}}{\tau} + \cdots,$$where $T^{ti}$ is the momentum density, and $\tau$ is a phenomenological relaxation time that is subsequently computed using memory functions.[^7] However, as we will see in this section, at least for dc transport, it is actually not necessary to add in $\tau$ by hand. With weak disorder, the dc transport can be accounted for exactly from hydrodynamics, and (\[eqmm\]) recovered provided that the equations of motion properly account for disorder.
Interestingly, our hydrodynamic approach requires that the disorder always be long wavelength compared to $l$; the memory matrix formalism does not require this restriction. Nonetheless, at leading order in perturbation theory, we will recover the exact memory matrix formula for the transport coefficients from hydrodynamic considerations. It is also worth noting that the memory function approach is equivalent to holographic computations of transport in their overlapping regime of validity [@lucas]. Thus, all three approaches give the same picture of transport, which is most easily understood physically in terms of this simple hydrodynamic framework (within its regime of validity).
Disorder Sourced by Scalar Operators {#sec:scalar}
------------------------------------
Let us begin with the case where the operator $\mathcal{O}$ is a scalar field. We assume that all hydrodynamic coefficients are $\mathbf{x}$-independent – $h$ is the only disordered parameter. In this case, (\[maineq2\]) must be modified: $$\rho^\alpha \partial_j \Phi^\alpha + \partial_i \mathcal{T}_{ij} = \delta\mathcal{O} \partial_j h . \label{maineq2scalar}$$ We place a $\delta$ on $\mathcal{O}$ to distinguish the response due to the electric field from the background. The scalar’s static equation of motion is [@kadanoff] $$\int \mathrm{d}^d\mathbf{y}\; G_{\mathcal{OO}}^{-1}(\mathbf{x}-\mathbf{y},\omega=0) \mathcal{O}(\mathbf{y}) = h(\mathbf{x}).$$ The Green’s function $G_{\mathcal{OO}}$ is the retarded Green’s function of the translationally invariant Hamiltonian $H_0$: in position space, $$G_{\mathcal{OO}}(\mathbf{x},t) \equiv \mathrm{i}\mathrm{\Theta}(t) \langle [\mathcal{O}(\mathbf{x},t),\mathcal{O}(\mathbf{0},0)]\rangle,$$ with the average $\langle \cdots \rangle$ taken over quantum and thermal fluctuations, and $\mathrm{\Theta}$ the Heaviside step function. We emphasize that while $G_{\mathcal{OO}}$ is the true quantum Green’s function of $\mathcal{O}$, and an intricate quantum mechanical computation may be necessary to compute it, $G_{\mathcal{OO}}$ plays the role of the coefficient of proportionality in the linear response of macroscopic, thermal expectation values.
Let us make an ansatz for the solution to the hydrodynamic equations, and show that it is consistent with all conservation laws. Our ansatz is that the only divergent terms in linear response, as $h\rightarrow 0$, are $\mathbf{v} \sim h^{-2}$ and $\delta\mathcal{O} \sim h^{-1}$. Furthermore, at leading order $\mathbf{v}=\mathbf{v}_0$ is a constant. All other spatially dependent response is $\mathrm{O}(h^0)$ and we will see that it can be neglected in the computation of $\sigma_{ij}$ at leading order.
The leading order response of $\mathcal{O}$ in the $h\rightarrow 0$ limit is best computed in the rest frame of the fluid, which has shifted. It is simplest to Fourier transform to momentum space as well: $$\mathcal{O}(\mathbf{k},-\mathbf{k}\cdot\mathbf{v}_0)_{\mathrm{co-moving}} = G_{\mathcal{OO}}(\mathbf{k}, -\mathbf{k}\cdot\mathbf{v}_0) h(\mathbf{k},-\mathbf{k}\cdot\mathbf{v}_0)_{\mathrm{co-moving}}.$$ Here, everything is measured in the co-moving frame of the fluid, and so the only non-vanishing $h$ (and therefore $\mathcal{O}$) will have this special relation between $\mathbf{k}$ and $\omega$. Note, of course, that $\mathcal{O}(\mathbf{k},-\mathbf{k}\cdot\mathbf{v}_0)_{\mathrm{co-moving}} = \mathcal{O}(\mathbf{k})$, as measured in the original rest frame, and similarly for $h(\mathbf{k})$. We wish to keep only the linear response coefficient, proportional to $\mathbf{v}_0$:[^8] $$\delta\mathcal{O}(\mathbf{k}) = -h(\mathbf{k})\frac{\partial G_{\mathcal{OO}}(\mathbf{k},0)}{\partial \omega} \mathbf{k}\cdot\mathbf{v}_0 = -\mathrm{i}h(\mathbf{k}) \left[ \lim_{\omega\rightarrow 0} \frac{\mathrm{Im}\left(G_{\mathcal{OO}}(\mathbf{k},\omega)\right)}{\omega}\right] \mathbf{k}\cdot \mathbf{v}_0.$$ In the latter equality we have used reality properties of Green’s functions, and assumed analyticity near the real axis for $\mathbf{k}\ne \mathbf{0}$.
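To see why the two expressions for $\delta\mathcal{O}(\mathbf{k})$ agree, one can check the identity $\partial_\omega G(\mathbf{k},0) = \mathrm{i}\lim_{\omega\to 0}\mathrm{Im}(G)/\omega$ on a toy Green's function; the diffusive form below is our choice for illustration only, not a Green's function taken from the paper:

```python
import numpy as np

# Toy diffusive Green's function G(k, w) = chi * D k^2 / (D k^2 - i w), used only
# to illustrate dG/dw|_{w=0} = i * lim_{w->0} Im G / w for a real, analytic G.
chi, D, k = 1.0, 0.5, 0.7

def G(w):
    return chi * D * k**2 / (D * k**2 - 1j * w)

eps = 1e-6
dG_dw = (G(eps) - G(-eps)) / (2 * eps)   # numerical w-derivative at w = 0
imG_over_w = np.imag(G(eps)) / eps       # lim_{w->0} Im G / w
assert np.allclose(dG_dw, 1j * imG_over_w, atol=1e-4)
```

The derivative is purely imaginary, which is the reality property invoked above: the $\mathrm{O}(\omega^0)$ part of $G$ is real, so the leading dissipative response is captured by $\mathrm{Im}(G)/\omega$.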
Now, let us study the momentum conservation equation, averaged over space, so that the derivatives of the stress tensor do not contribute: $$\begin{aligned}
0 &= \sum_{\mathbf{k}} \delta\mathcal{O}(\mathbf{k}) (-\mathrm{i}k_i h(-\mathbf{k}))+ \rho^\alpha F^\alpha_i \notag \\
&= -\sum_{\mathbf{k}} \; k_i k_j |h(\mathbf{k})|^2\left[ \lim_{\omega\rightarrow 0} \frac{\mathrm{Im}\left(G_{\mathcal{OO}}(\mathbf{k},\omega)\right)}{\omega}\right] v_{0j} + \rho^\alpha F^\alpha_i = -\Gamma_{ij}v_{0j} + \rho^\alpha F^\alpha_i.\end{aligned}$$ At leading order, the electric current is uniform: $$\mathcal{J}^\alpha_i \approx \rho^\alpha v_{0i} = \rho^\alpha \rho^\beta \Gamma^{-1}_{ij}F^\beta_j, \label{jpv}$$ which gives us (\[eqmm\]). It is straightforward to generalize to the case where there are multiple types of scalar fields.
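A toy numerical rendering of this logic (the wavevectors, disorder amplitudes, densities and the spectral weight `W` below are all placeholder choices, not derived from any microscopic theory) assembles $\Gamma_{ij}$ as in (\[eqmm\]) and confirms that the resulting $\sigma^{\alpha\beta}_{ij}$ automatically obeys Onsager reciprocity, since $\Gamma_{ij}$ is symmetric:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 2
rho = np.array([1.3, 0.7])   # E[rho^alpha] = (charge, heat) densities; toy values

# Toy disorder modes: wavevectors k, amplitudes |h(k)|^2, and a model spectral
# weight W(k) = lim_{w->0} Im G_OO(k, w) / w (a positive placeholder function).
k = rng.standard_normal((50, d))
h2 = rng.exponential(size=50)
W = 1.0 / (1.0 + np.einsum('ni,ni->n', k, k))

# Momentum relaxation matrix Gamma_ij = sum_k k_i k_j |h(k)|^2 W(k), as in (eqmm)
Gamma = np.einsum('n,ni,nj->ij', h2 * W, k, k)

# dc conductivities sigma^{ab}_{ij} = rho^a rho^b (Gamma^{-1})_{ij}
sigma = np.einsum('a,b,ij->abij', rho, rho, np.linalg.inv(Gamma))

# Onsager reciprocity sigma^{ab}_{ij} = sigma^{ba}_{ji} holds automatically,
# because Gamma (and hence its inverse) is symmetric.
assert np.allclose(sigma, sigma.transpose(1, 0, 3, 2))
```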
Let us now argue that the ansatz (and thus the results) we have found is self-consistent. If we do not average the momentum conservation equation over space, then the $\delta\mathcal{O}\partial_j h$ term is not translationally invariant, and this will induce corrections to $T$, $\mu$ and $\mathbf{v}$ which are spatially varying. However, $\delta\mathcal{O} \sim h^{-1}$ and so these spatially varying corrections will be $\sim h^0$. Indeed, it is easy to see that (\[maineq1\]) and (\[maineq2scalar\]) are consistent with the leading order inhomogeneous response (except in $\delta\mathcal{O}$) arising at this subleading order. Thus, our ansatz is indeed correct in the asymptotic limit $h\rightarrow 0$, and we have derived from hydrodynamic principles the momentum relaxation times derived via the memory function formalism. The computation above is completely analogous to the holographic computation of [@lucas].
Disorder Sourced by Chemical Potential {#sec32}
--------------------------------------
In this section, we consider the case where $$\bar\mu = \mu_0 + \epsilon \hat\mu,$$ with $\mathbb{E}[\hat\mu]=0$, and $\epsilon \ll 1$ a small perturbative parameter. Alternatively, we can write $$\bar\Phi^\alpha = \Phi_0^\alpha + \epsilon \hat\Phi^\alpha$$ with $\bar\Phi^{\mathrm{h}}=T\mathcal{S}$ and $\hat\Phi^{\mathrm{h}}=0$ – this will be more compact notation for subsequent manipulations. We will denote by $\hat\rho^\alpha$ the fluctuations in the charge and entropy densities associated with $\hat\mu$ – in general both will be non-zero. For simplicity, we assume that the background fluid is isotropic, though the technique certainly generalizes (with more cumbersome calculations). [@dsz] studied similar problems employing hydrodynamic Green’s functions in the memory matrix formalism.
Again, we make the ansatz that the only response at $\mathrm{O}(\epsilon^{-2})$ is a constant velocity field $\mathbf{v}_0$, so that $\mathcal{J}^\alpha_i$ is again approximated by (\[jpv\]). At $\mathrm{O}(\epsilon^{-1})$, there are $\mathbf{x}$-dependent corrections to $T$, $\mu$, and $\mathbf{v}$. A similar calculation to before gives $$\Gamma_{ij} = \sum_{\mathbf{k}} \; k_ik_j \hat\rho^{\alpha}(-\mathbf{k}) \left[\frac{1}{\eta^\prime}\rho^\alpha_0 \rho^\beta_0 + k^2 \Sigma^{\alpha\beta}\right]^{-1} \hat \rho^\beta(\mathbf{k}) \equiv\sum_{\mathbf{k}} \; k_ik_j \hat\rho^{\alpha}(-\mathbf{k}) (\mathfrak{m}^{-1})^{\alpha\beta} \hat \rho^\beta(\mathbf{k}), \label{gammaij32}$$with $$\eta^\prime \equiv \eta \left(2-\frac{2}{d}\right) +\zeta,$$ with $\eta$ the shear viscosity and $\zeta$ the bulk viscosity. We provide more details in Appendix \[apppert\].
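A direct numerical evaluation of (\[gammaij32\]) can be sketched as follows. The background densities, the matrix $\Sigma^{\alpha\beta}$, the value of $\eta^\prime$ and the disorder modes are all arbitrary placeholders, and the checks only verify structural properties (symmetry and positive semi-definiteness) that $\Gamma_{ij}$ must have:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 2
rho0 = np.array([1.0, 0.5])                  # background (Q, TS); toy values
Sigma = np.array([[0.8, 0.1], [0.1, 0.6]])   # toy diffusion matrix Sigma^{ab}
eta_p = 2.0                                  # eta' = eta (2 - 2/d) + zeta; toy value

ks = rng.standard_normal((40, d))
# hat rho^alpha(k) for a real field, so hat rho(-k) = hat rho(k)^*
rho_hat = rng.standard_normal((40, 2)) + 1j * rng.standard_normal((40, 2))

Gamma = np.zeros((d, d))
for k, r in zip(ks, rho_hat):
    m = np.outer(rho0, rho0) / eta_p + (k @ k) * Sigma   # frak{m}^{ab}(k)
    # Each mode adds k_i k_j * rho(-k)^a (m^{-1})^{ab} rho(k)^b (real and positive)
    Gamma += np.outer(k, k) * np.real(np.conj(r) @ np.linalg.inv(m) @ r)
```

Each term is a positive multiple of $k_ik_j$, so $\Gamma_{ij}$ is a symmetric, positive matrix, as required for finite, positive conductivities.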
Let us briefly discuss some simplistic limiting cases of (\[gammaij32\]) and give some analytic insight into the solutions – in general, the solutions will be more complicated than what we write here.
First, let us begin with the case with $\eta^\prime \rightarrow \infty$ and $\hat\rho^{\mathrm{h}} \approx 0$. Suppose further $\hat\rho^{\mathrm{q}}$ are Gaussian disordered random variables:
$$\begin{aligned}
\mathbb{E}_{\mathrm{d}}[\hat\rho^\alpha(\mathbf{k})] &= 0, \\
\mathbb{E}_{\mathrm{d}}[\hat\rho^{\mathrm{q}}(\mathbf{k})\hat\rho^{\mathrm{q}}(\mathbf{q})] &= \frac{u^2}{N}\delta_{\mathbf{k},-\mathbf{q}},
\end{aligned}$$
with $\mathbb{E}_{\mathrm{d}}[\cdots]$ denoting averages over the distribution of quenched disorder modes. Then we find $$\mathbb{E}_{\mathrm{d}}[\Gamma_{ij}] \approx \frac{1}{Nd}\delta_{ij} \sum_{\mathbf{k}} \frac{u^2}{\Sigma^{\mathrm{qq}}} = \delta_{ij} \frac{u^2}{d\Sigma^{\mathrm{qq}}} = \delta_{ij} \mathbb{E}\left[\frac{\hat{\mathcal{Q}}^2}{d\Sigma^{\mathrm{qq}}} \right]. \label{gamma40}$$ Fluctuations of this quantity are suppressed in the limit $V_d\rightarrow\infty$ [@lucas1411], so $\mathbb{E}_{\mathrm{d}}[\Gamma_{ij}] \approx \Gamma_{ij}$.
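(\[gamma40\]) is easy to verify by Monte Carlo in this limit: sampling Gaussian mode amplitudes with the variance $u^2/N$ assumed above and isotropic wavevector directions, the mode sum concentrates around $\delta_{ij}u^2/(d\Sigma^{\mathrm{qq}})$. A sketch with placeholder parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)
d, N, u, Sqq = 2, 4000, 0.3, 0.9   # dimensions, modes, disorder strength, Sigma^qq

# N isotropically distributed wavevector directions; Gaussian mode amplitudes
# with E|rho_q(k)|^2 = u^2 / N, as in the disorder conventions above.
khat = rng.standard_normal((N, d))
khat /= np.linalg.norm(khat, axis=1, keepdims=True)
r2 = (rng.standard_normal(N)**2 + rng.standard_normal(N)**2) * u**2 / (2 * N)

# eta' -> infinity limit of (gammaij32): Gamma_ij = sum_k (k_i k_j / k^2) |rho_q|^2 / Sqq
Gamma = np.einsum('n,ni,nj->ij', r2 / Sqq, khat, khat)

pred = u**2 / (d * Sqq)            # prediction (gamma40): Gamma_ij ~ delta_ij u^2/(d Sqq)
assert np.allclose(Gamma, pred * np.eye(d), atol=0.2 * pred)
```

The off-diagonal entries average to zero and the diagonal entries concentrate at the predicted value, with relative fluctuations of order $N^{-1/2}$, consistent with their suppression at large volume.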
An alternative simple case is thermal transport with $\mathcal{Q}=0$, and $\mathcal{S}\approx \mathcal{S}_0$ with small variations. In this case, we may approximately neglect the $\Sigma$ contributions to $\mathfrak{m}$ as $\xi \rightarrow \infty$, and we find by a similar calculation to above: $$\mathbb{E}_{\mathrm{d}}[\Gamma_{ij}] \approx \delta_{ij} \mathbb{E}\left[\frac{\eta^\prime}{d} \frac{(\partial_i \mathcal{S})^2}{\mathcal{S}_0^2}\right],$$ which is similar to results discussed in [@dsz].
To make contact between (\[gammaij32\]) and the memory function framework, of course, we need to compute the retarded Green’s functions for charge and heat. Unfortunately, the Green’s functions coupling charge and heat flow in a relativistic hydrodynamic system are quite messy [@kovtunlec], and so we will prove the equivalence with (\[eqmm\]) in a more abstract manner. We proceed analogously to the previous case – as above, we have shown that the leading order response of the fluid that contributes to $\mathcal{J}^\alpha_i$ at $\mathrm{O}(\epsilon^{-2})$ is a constant shift to the velocity.
The retarded Green’s function is *defined* as: $$\hat\rho^\alpha(\mathbf{k}) + \delta\rho^\alpha(\mathbf{k}) = G^{\alpha\beta}(\mathbf{k},-\mathbf{k}\cdot\mathbf{v}_0) \hat\Phi^\beta(\mathbf{k}) \approx \chi^{\alpha\beta}\hat\Phi^\beta(\mathbf{k}) - \mathbf{k}\cdot\mathbf{v}_0 \frac{\partial G^{\alpha\beta}(\mathbf{k},0)}{\partial \omega} \hat\Phi^\beta(\mathbf{k}).$$ Of course, just as in Section \[sec:scalar\], we must use the boosted Green’s function to obtain the linear response $\delta\Phi^\alpha$: the $\mathrm{O}(v^0)$ term is the response of the background fluid, and the $\mathrm{O}(v)$ term is the linear response contribution of interest for the computation of transport – we focus on the latter ($\delta\rho^\alpha$) henceforth. We can also relate $\delta\rho^\alpha$ to $\delta\Phi^\alpha$ by the static susceptibilities, since we are perturbing about a translationally invariant state: $$\delta\rho^\alpha = \chi^{\alpha\beta} \delta\Phi^\beta.$$ Again, we can relate $\mathbf{v}_0$ to $\mathbf{F}^\alpha$ by spatially averaging (\[maineq2\]): $$\begin{aligned}
\rho_0^\alpha F^\alpha_i &= \sum_{\mathbf{k}} \; \hat \rho^\alpha(-\mathbf{k}) \mathrm{i}k_i \delta\Phi^\alpha(\mathbf{k}) = \sum_{\mathbf{k}} \; \hat \rho^\alpha(-\mathbf{k}) \mathrm{i}k_i \left(\chi^{-1}\right)^{\alpha\beta}\delta\rho^\beta \notag \\
&= \sum_{\mathbf{k}} \; k_ik_j \hat\Phi^\alpha(\mathbf{-k}) \left[\lim_{\omega\rightarrow 0} \frac{\mathrm{Im}(G^{\alpha\beta}(\mathbf{k},\omega))}{\omega}\right]\hat\Phi^\beta(\mathbf{k}) v_{0j}.\end{aligned}$$ It is straightforward to read off $\Gamma_{ij}$ from this equation, and we see that it agrees with the generalization of (\[eqmm\]) to multiple disordered quantities – though of course at this point we use the fact that only $\hat\Phi^{\mathrm{q}} \ne 0$. Since $\mathcal{J}^\alpha_i\approx \rho^\alpha v_{0i}$, we reproduce the results of the memory matrix formalism.
We have worked through two specific examples of deriving (\[eqmm\]) from hydrodynamics. Of course one may need to generalize further, but it should be quite evident from the derivations above that the agreement between the memory matrix formalism and our hydrodynamic framework will persist.
It is possible to compute the transport coefficients at higher orders in perturbation theory, where the memory matrix formalism has become unwieldy enough that such a calculation has not yet been attempted. Even at next order in perturbation theory, the corrections to the conductivity become quite messy. We discuss the general structure of higher order computations in Appendix \[apppert\]. The key point is that organizing the perturbative expansion in a hydrodynamic framework is straightforward in principle, albeit messy to carry out in practice. This framework provides quantitative predictions for memory matrix calculations at higher orders in $\epsilon$, for classes of models perturbed by $\bar\mu$.
### Breakdown of the Resistor Lattice Approximation
Finally, let us compare the results of this subsection with the “resistor lattice” approximation: $$J_i \approx - \sigma_{ij}(\mathbf{x}) \partial_j \mu(\mathbf{x}), \label{emteq}$$ with $\sigma_{ij} \ne \Sigma^{\mathrm{qq}}_{ij}$ taken to be a local function, determined in terms of local properties of the fluid, and $-\partial_j \mu$ the local electric field in the sample. Of course such a function $\sigma$ may be found by solving a linear algebra problem at each point in space, and so the question is whether this is a useful statement – namely, whether $\sigma_{ij}(\mathbf{x})$ can be computed by appealing to local properties of the disordered QFT (on length scales large compared to $l$). Essentially, can we integrate out $v_i$ and $T$, and be left with a local, dissipative description of electrical transport in terms of $\mu$ alone?
This is impossible in the weak disorder limit, though our comments appear to have broader validity whenever viscosity cannot be neglected. $\partial_j \mu$ is actually inhomogeneous at leading order $\epsilon^{-1}$, and must be expressed in terms of a non-local integral over $\mathcal{Q}(\mathbf{x}) = \rho^{\mathrm{q}}(\mathbf{x}) \approx \chi^{\mathrm{qq}}\epsilon \hat\mu(\mathbf{x})$, with $\chi^{\mathrm{qq}}$ the charge-charge susceptibility, assumed to be constant at this order in perturbation theory. This is derived in (\[eqphim1\]), with the $X$ and $Y$ corrections in (\[eqphim1\]) vanishing at leading order in $\epsilon$; in this equation, the leading order behavior of $\mu$ is “local in Fourier space”, and becomes non-local in position space, in terms of the original disorder $\hat\mu$. It is therefore generally impossible to find a function $\sigma_{ij} \sim \epsilon^{-1}$ expressible in terms of $\hat\mu$ or its (low order) derivatives, such that $J_i = \sigma_{ij}(-\partial_j \mu) = \text{constant} \sim \epsilon^{-2}$ (at leading order).
Comparing to Holography {#sec4}
=======================
\[sec:holostripe\] Let us now compare with holographic results. Many holographic results, valid in the weak disorder limit, are equivalent to the memory matrix results [@lucas] – and therefore our hydrodynamic framework. So our focus here will be on non-perturbative holographic results.
Our discussion of holography is brief – for further details, consult the excellent reviews [@review1; @review2; @review3]. Holography refers to a conjectured duality between a classical gravity theory in $d+2$ spacetime dimensions, in an emergent anti-de Sitter (AdS) space, and a strongly coupled QFT in $d$ spatial dimensions. The strongly coupled QFTs (in every case where we know them explicitly) are large-$N$ matrix models, and can be thought of as “living” at the boundary of AdS. Making the gravity theory classical is equivalent to sending the bulk Newton’s gravitational constant to zero, which makes bulk quantum gravity fluctuations negligible. This corresponds to the limit $N\rightarrow \infty$ in the dual theory. However, unlike vector models, these matrix models do not behave at all like free theories, and encode rich quantum critical dynamics. The nonlinear dynamics of gravity is dual to the stress-energy sector of the boundary theory. Furthermore, studying finite temperature dynamics reduces to studying dynamics in a black hole background. These black holes will be assumed to have the same planar (or toroidal) topology as the boundary theory. Adding a finite charge density in the boundary theory is dual to adding a bulk U(1) gauge field, and charging the black hole under the associated U(1) charge. Of interest for us in this paper is that holographic models can further be used to add strong disorder in addition to finite temperature and density. We are interested in modeling disorder explicitly, and so the bulk geometry becomes inhomogeneous and rugged. At small temperatures, the black hole gets pushed “farther back” into the emergent bulk direction, and the bulk fields become significantly renormalized, with higher momentum modes usually decaying away, as depicted in Figure \[fig2\].
![A qualitative sketch of holography. A finite temperature $T$ and density boundary theory is dual to an emergent gravitational theory in one extra spatial dimension (depicted above). Strong disorder in the boundary theory (depicted in green) backreacts and leads to the formation of a lumpy charged black hole of Hawking temperature $T$. The emergent black hole horizon is curved and is denoted in black. The membrane paradigm suggests that dc transport can be computed in an emergent fluid living on the horizon, which can undergo renormalization relative to the “bare fluid" in the boundary theory.[]{data-label="fig2"}](fig2ads.pdf){width="2.75in"}
The precise duality allows us to compute correlation functions in our unknown QFT by solving gravitational equations instead. The basic idea is as follows: correlation functions of the stress-energy tensor $T^{\mu\nu}$ and the U(1) current $J^\mu$ are respectively related to bulk Green’s functions of the metric $g_{MN}$ and a gauge field $A_M$, all computed at (classical) tree level. We are using $MN$ etc. to denote all coordinates, including the bulk radial coordinate. For example, to compute the electrical conductivity, we add an explicit infinitesimal source for the bulk field $A_i$ at the AdS boundary, and then compute the expectation value of the current[^9] in the field theory in the background perturbed by $A_i$.
These computations are often intractable analytically. However, in the simple case of dc transport, many analytic computations in holography can be performed using the membrane paradigm [@iqbal]. In this case one can show that there is an analogous “electric current" flowing on the black hole horizon, which is also conserved and whose average value equals that of $J^\mu$ in the boundary theory (an analogous, more complicated story holds for the heat current). The resulting computation of the conductivities then depends only on black hole horizon data. It is natural to conjecture that we should solve the hydrodynamic transport problem using an emergent “fluid" whose equations of state are related to local properties of the black hole horizon. Indeed – up to some subtleties we will see shortly – this is the case.
It is remarkable that *first order* hydrodynamics captures the transport problem on the emergent black hole horizons in holography, in all nonperturbative computations to date. As we will see, however, this emergent horizon fluid is not locally equivalent to a fluid in the boundary QFT – instead it has undergone non-local renormalization, through the radial evolution of the bulk fields and geometry. In addition, while we previously had to make assumptions that the disorder correlation length was large to justify our hydrodynamic formalism, no such justification is necessary (at least, a priori) for the holographic results to be valid. When $\xi\rightarrow \infty$ in a holographic model, the differences between the horizon and boundary fluids become negligible, as was found in [@herzog].
Let us begin with the striped models studied in [@donos1409],[^10] which consider gravitational solutions to the AdS-Einstein-Maxwell system: $$S = \int \mathrm{d}^4x\; \sqrt{-g} \left(R+6 - \frac{F^2}{4}\right).$$ (We have set Newton’s constant, the bulk charge, and the AdS radius equal to 1, following their notation.) Above, $R$ is the Ricci scalar, and $F$ is the Maxwell tensor associated with the bulk gauge field. In these models, translation invariance is only broken in the $x$ direction, but the boundary theory lives in $d=2$. Let us summarize their results briefly. They found that (after inverting their thermoelectric conductivity matrix):
\[donosdata1\]$$\begin{aligned}
T\bar\kappa_{xx} = \sigma^{\mathrm{hh}}_{xx} &= \frac{16\pi^2T^2}{X} \mathbb{E}\left[\mathrm{e}^{B}\right], \\
T\alpha_{xx} = \sigma^{\mathrm{qh}}_{xx} &=\frac{4\pi T}{X} \mathbb{E}\left[ \mathrm{e}^{B}\frac{a_t}{H_{tt}}\right], \\
\sigma_{xx} = \sigma^{\mathrm{qq}}_{xx} &= \frac{1}{X} \mathbb{E}\left[ \mathrm{e}^{B}\left(\frac{a_t}{H_{tt}}\right)^2 + \frac{1}{S}\left(\partial_x B - \frac{\partial_xS}{S}\right)^2\right]\end{aligned}$$
where $$X = \mathbb{E}\left[ \mathrm{e}^{B}\left(\frac{a_t}{H_{tt}}\right)^2 + \frac{1}{S}\left(\partial_x B - \frac{\partial_xS}{S}\right)^2\right] \mathbb{E}\left[\mathrm{e}^{B}\right] - \mathbb{E}\left[ \mathrm{e}^{B}\frac{a_t}{H_{tt}}\right]^2$$ and $B$, $a_t$, $H_{tt}$ and $S$ are data associated with the solution of classical Einstein-Maxwell gravity, near the horizon of a black hole, as detailed below. The near horizon geometry in their coordinate system was $$\label{ds21}
\mathrm{d}s^2 \approx \frac{H_{tt}(x)}{4\pi Tr} \mathrm{d}r^2 -4\pi TrH_{tt}(x)\mathrm{d}t^2 + S(x) \left[\mathrm{e}^{B(x)} \mathrm{d}x^2 + \mathrm{e}^{-B(x)}\mathrm{d}y^2\right],$$ with $r$ the radial coordinate, and $r=0$ denoting the location of the black hole horizon. The bulk gauge field only has a time-like component, whose value is $a_t$. $T$ denotes the temperature of the boundary theory.
Let us postulate the following equations of state for the emergent fluid on the horizon:
\[donosdata2\]$$\begin{aligned}
\eta &= \frac{\mathcal{S}}{4\pi} = S, \\
\mathcal{Q} &= \frac{Sa_t}{H_{tt}}, \\
\Sigma^{\mathrm{qq}} &= 1, \\
\Sigma^{\mathrm{qh}} &= \Sigma^{\mathrm{hh}} = 0.\end{aligned}$$
The first of these equations is the canonical result for $\eta/\mathcal{S}$ in a strongly coupled theory [@kss], which also holds for the charged black holes here [@mas; @kss2], in the translationally invariant limit, though this universal ratio can be different in mean-field disordered black holes [@gouteraux2]. The last of these equations was argued to occur in holographic models in [@lucasMM], by matching $\omega=0$ results of massive gravity. More recently, it has been pointed out that this is not the correct interpretation of $\Sigma^{\alpha\beta}$ in the boundary theory, and this becomes discernible at finite $\omega$ [@davison15]. However, we will see that this prescription correctly describes an emergent fluid, associated with data on the horizon, whose hydrodynamic response is equivalent to (\[donosdata1\]). The inequivalence of the boundary fluid and this emergent “horizon fluid" is an important subtlety, and one we will not resolve in this paper.
We also need to make two more assumptions. The first is rather simple – let us suppose (\[donosdata2\]) is valid for the disordered model, with $x$-dependence trivially put in: e.g., $\eta(x) = S(x)$. This is in accordance with our logic in Section \[sec2\]. The second assumption is that the boundary fluid lives on a curved space with metric $$\mathrm{d}s^2 \equiv \gamma_{ij} \mathrm{d}x^i\mathrm{d}x^j = \mathrm{e}^{B(x)} \mathrm{d}x^2 + \mathrm{e}^{-B(x)}\mathrm{d}y^2,$$ which differs from (\[ds21\]) by a conformal rescaling. Intuitively this can be argued for on the grounds that $S$ determines $\mathcal{S}$ and $\eta$, and therefore should not determine the boundary metric $\gamma_{ij}$, since we expect that $\gamma_{ij}=\delta_{ij}$ in a translationally invariant isotropic model, even though $S\ne 1$ in general. We compute the thermoelectric conductivities for such a fluid using special techniques for striped systems, discussed in Appendix \[appstripe\], and we find
\[donosresult\]$$\begin{aligned}
\left(\sigma^{-1}\right)^{\mathrm{qq}}_{xx} &= \mathbb{E}\left[\mathrm{e}^{B}\right], \\
\left(\sigma^{-1}\right)^{\mathrm{qh}}_{xx} &= -\frac{1}{4\pi T}\mathbb{E}\left[ \mathrm{e}^{B}\frac{a_t}{H_{tt}}\right] , \\
\left(\sigma^{-1}\right)^{\mathrm{hh}}_{xx} &= \frac{1}{(4\pi T)^2} \mathbb{E}\left[ \mathrm{e}^{B}\left(\frac{a_t}{H_{tt}}\right)^2 + \frac{1}{S}\left(\partial_x B - \frac{\partial_xS}{S}\right)^2\right].\end{aligned}$$
Inverting this matrix returns (\[donosdata1\]), the exact result found holographically in [@donos1409].
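This inversion is easy to verify numerically. The sketch below builds the resistivity matrix (\[donosresult\]) from spatial averages of horizon data and checks that its inverse matches the closed forms (\[donosdata1\]). The periodic profiles for $B$, $S$, $H_{tt}$ and $a_t$ are arbitrary smooth placeholders, not actual solutions of the Einstein-Maxwell equations:

```python
import numpy as np

# Arbitrary smooth periodic "horizon data" on x in [0, 2*pi); placeholders,
# not solutions of the Einstein-Maxwell equations.
x = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
dx = x[1] - x[0]
T = 0.3                                  # boundary temperature
B = 0.4 * np.cos(x)
S = 1.0 + 0.2 * np.sin(2 * x)
Htt = 1.0 + 0.1 * np.cos(x + 0.5)
at = 0.7 + 0.3 * np.sin(x)

E = lambda f: f.mean()                   # spatial average E[.]
dBdx = np.gradient(B, dx)
dSdx = np.gradient(S, dx)

# Building blocks appearing in the horizon formulas
e1 = E(np.exp(B))
e2 = E(np.exp(B) * at / Htt)
e3 = E(np.exp(B) * (at / Htt) ** 2 + (dBdx - dSdx / S) ** 2 / S)

# Resistivity matrix (sigma^{-1}), in the (q, h) basis
inv_sigma = np.array([[e1, -e2 / (4 * np.pi * T)],
                      [-e2 / (4 * np.pi * T), e3 / (4 * np.pi * T) ** 2]])
sigma = np.linalg.inv(inv_sigma)

# Direct evaluation of the conductivity formulas, with X as defined above
X = e3 * e1 - e2 ** 2
sigma_direct = np.array([[e3 / X, 4 * np.pi * T * e2 / X],
                         [4 * np.pi * T * e2 / X,
                          16 * np.pi ** 2 * T ** 2 * e1 / X]])

assert np.allclose(sigma, sigma_direct)
```

Note that $X$ appears as $(4\pi T)^2$ times the determinant of the resistivity matrix, and is strictly positive by the Cauchy-Schwarz inequality, so the inversion is always well defined.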
More recently [@rangamani] has generalized the results of [@donos1409] to include the effects of a dynamical scalar field in the dual theory. In general this scalar must be consistently included within hydrodynamics, and so we must consider a more general theory to connect with these results in the generic case.
In the case where translational symmetry is broken in multiple directions, there is an important subtlety. It turns out that the local “current" in the emergent horizon fluid is *not* equivalent to $\langle J\rangle$ in the boundary theory; the two agree only after taking a spatial average [@donos1506]. To relate the “current" in the horizon fluid to the current in the boundary fluid, one must add a non-local integral over the bulk direction.
It was recently shown [@donos1506] in a more general context that dc transport in holography reduces to solving “hydrodynamic" equations on the black hole horizon. The authors of [@donos1506] interpreted the resulting fluid equations as an incompressible Navier-Stokes equation; [@grozdanov] points out that these equations can also be interpreted in the framework of the present paper.
These examples suggest that – while the hydrodynamic framework of this paper is extremely helpful in providing physical intuition for these non-perturbative holographic results – this story is not complete. Importantly, however, much of the variational technology that we develop can be directly applied to holographic models.
Strong Disorder {#sec5}
===============
We cannot be as rigorous in the strong disorder limit and give closed form expressions for the conductivity matrix. Nonetheless, we will develop simple but powerful variational methods that allow us to get a flavor of transport at strong disorder, by providing lower and upper bounds on the conductivity matrix. We focus on the discussion of $\sigma^{\mathrm{qq}}_{ij}$ in an isotropic theory in this section. However, the techniques developed below may be used to compute all thermoelectric transport coefficients. Our discussion is therefore not an exhaustive treatment of all possible physics contained in the hydrodynamic formalism, but simply a demonstration of what we believe is a general feature of hydrodynamic transport: a crossover from coherent (Drude) physics to incoherent behavior as disorder strength increases.
We present the mathematical formalism in the subsections below – explicit examples of calculations may be found in Appendix \[appa\].
Power Dissipated
----------------
Define “voltage drops" $V^\alpha_i$ of each conserved quantity in each direction as $$V^\alpha_i \equiv \Phi^\alpha (x_i=0, \mathbf{x}_{\perp i}) - \Phi^\alpha(x_i=L, \mathbf{x}_{\perp i}).$$ Recall our boundary conditions on $\Phi^\alpha$: it must be periodic up to linear terms. We also define the net currents flowing in the $i$ direction via $$I^\alpha_i \equiv \int\limits_{\text{fixed }x_i} \mathrm{d}^{d-1}\mathbf{x}\; \mathcal{J}^\alpha_i.$$Note that by current conservation, $I^\alpha_i$ can be evaluated at any $x_i$. Since $V$ and $I$ are determined by the solution to a linear response problem, we can relate them via a conductance matrix (do not confuse this $G^{\alpha\beta}$ with the retarded Green’s function defined previously)$$I^\alpha_i \equiv G^{\alpha\beta}_{ij} V^\beta_j,$$or via its inverse, the resistance matrix $$V^\alpha_i = R^{\alpha\beta}_{ij} I^\beta_j.$$ $G^{\alpha\beta}$ is by definition related to the dc transport coefficients: $$G^{\alpha\beta}_{ij} = L^{d-2} \sigma^{\alpha\beta}_{ij}.$$ We claim that the power dissipated in the system is simply given by $$\mathcal{P} = I^\alpha_i R^{\alpha\beta}_{ij} I^\beta_j = V^\alpha_i G^{\alpha\beta}_{ij} V^\beta_j.$$
Let us verify this. Energy is dissipated[^11] locally via the dissipative ($\Sigma$ and $\eta$) terms in hydrodynamics: $$\mathcal{P} = \int \mathrm{d}^d\mathbf{x} \left(\Sigma_{ij}^{\alpha\beta} \partial_i\Phi^\alpha \partial_j \Phi^\beta + \eta_{ijkl} \partial_j v_i \partial_l v_k\right). \label{eqppd}$$ We integrate by parts on the second term and use (\[maineq2\]) (recall $v_i$ obeys periodic boundary conditions): $$\mathcal{P} = \int \mathrm{d}^d\mathbf{x} \left(\Sigma_{ij}^{\alpha\beta} \partial_i\Phi^\alpha \partial_j \Phi^\beta - v_i \rho^\alpha \partial_i\Phi^\alpha \right) = - \int \mathrm{d}^d\mathbf{x} \; \mathcal{J}^\alpha_i \partial_i\Phi^\alpha.$$But $\mathcal{J}^\alpha_i$ is a conserved current, and so $$\mathcal{P} = -\oint \mathrm{d}^{d-1}\mathbf{x} \; \Phi^\alpha n_i \mathcal{J}^\alpha_i = \sum_i I^\alpha_i(\Phi^\alpha(x_i=0) - \Phi^\alpha(x_i=L)) = I^\alpha_i V^\alpha_i.$$with $n_i$ the outward pointing normal.
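The discrete analogue of this identity is familiar from circuit theory: summing the local dissipation $I^2R$ over every element of a network reproduces the terminal expression $\mathcal{P}=IV=I^2R_{\mathrm{eff}}$. A minimal sketch (the network topology and resistor values are arbitrary illustrative choices):

```python
# Two parallel branches between the terminals, each branch a pair of
# resistors in series; values are arbitrary.
R_top = [1.0, 2.0]
R_bot = [4.0, 1.5]
V = 3.0                 # voltage drop imposed across the terminals

I_top = V / sum(R_top)  # Ohmic current in each branch
I_bot = V / sum(R_bot)
I = I_top + I_bot       # net current through the network

# Local dissipation summed over every resistor...
P_local = sum(I_top**2 * r for r in R_top) + sum(I_bot**2 * r for r in R_bot)

# ...equals the "global" terminal expression P = I V = I^2 R_eff.
R_eff = 1.0 / (1.0 / sum(R_top) + 1.0 / sum(R_bot))
assert abs(P_local - I * V) < 1e-12
assert abs(P_local - I**2 * R_eff) < 1e-12
```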
Lower Bounds
------------
Let us begin by discussing the lower bounds on conductivities. These are by far the more important bounds to obtain, because – as we will see – they allow us to rule out insulating behavior in a wide variety of strongly disordered hydrodynamic systems.
We obtain lower bounds on conductivities analogously to how one obtains upper bounds on the resistance of a disordered resistor network, via Thomson’s principle [@levin]. Similar approaches are also used in kinetic theory [@ziman]. Thomson’s principle states that if we run any set of “trial" currents through a resistor network, subject to appropriate boundary conditions, then we can upper bound the inverse conductivity by simply computing the power dissipated by our trial currents. The power dissipated in the resistor network is minimal on the true distribution of currents, which is compatible with Ohm’s Law and a single-valued voltage function. We will see that, remarkably, this simple approach generalizes immediately.
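Before generalizing, Thomson's principle for a plain resistor network can be checked directly. Below, an equal split of the injected current between two branches is conserved but not Ohmic, so it dissipates more power than the true current distribution and thereby upper-bounds the resistance (the network and its values are arbitrary illustrative choices):

```python
# Two parallel branches of series resistors; values are arbitrary.
R_top = [1.0, 2.0]
R_bot = [4.0, 1.5]
I = 1.0              # unit net current injected at the terminals

# True answer: current divides in inverse proportion to branch resistance.
R_true = 1.0 / (1.0 / sum(R_top) + 1.0 / sum(R_bot))

# Trial guess: split the current equally between the branches. This is
# conserved (current in = current out at every node) but not Ohmic.
P_trial = (I / 2)**2 * sum(R_top) + (I / 2)**2 * sum(R_bot)
R_trial = P_trial / I**2

# Thomson's principle: trial dissipation upper-bounds the true resistance.
assert R_trial >= R_true
```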
Let us propose a trial set of charge and heat currents, $\tilde{\mathcal{J}}^\alpha_i$, which are periodic functions, and exactly conserved: $$\partial_i \tilde{\mathcal{J}}^\alpha_i =0.$$ In general, this trial function will not be compatible with a single-valued (well-defined) $\Phi^\alpha$. We write $$\tilde{\mathcal{J}}^\alpha_i = \bar{\mathcal{J}}^\alpha_i + \hat{\mathcal{J}}^\alpha_i$$ with overbars denoting the true solution of the hydrodynamic equations subject to our boundary conditions, tildes denoting our trial “guesses" at the true solution, and hats denoting the deviations, on all variables henceforth. We also impose $$\int \mathrm{d}^{d-1}\mathbf{x}\; \hat{\mathcal{J}}^\alpha_i(x_i=0,L) = 0, \label{hatjaeq}$$ so that the true solution $\bar{\mathcal{J}}^\alpha_i$ carries the same net currents $I^\alpha_i$ as our trial $\tilde{\mathcal{J}}^\alpha_i$. We also propose a completely arbitrary periodic velocity field $\tilde{v}_i$. Define $$\tilde{\mathcal{P}} = \int \mathrm{d}^d\mathbf{x}\left[\left(\Sigma^{-1}\right)^{\alpha\beta}_{ij}\left(\tilde{\mathcal{J}}^\alpha_i - \rho^\alpha \tilde{v}_i\right)\left(\tilde{\mathcal{J}}^\beta_j - \rho^\beta \tilde{v}_j\right) + \eta_{ijkl} \partial_j \tilde{v}_i \partial_l \tilde{v}_k \right], \label{tildeplower}$$ which, on the true solution, is analogous to (\[eqppd\]). We define $\bar{\mathcal{P}}$ (the true power dissipated) and $\hat{\mathcal{P}}$ analogously.
Recall $\bar{\mathcal{P}},\;\tilde{\mathcal{P}},\;\hat{\mathcal{P}}\ge 0$, and expand out $\tilde{\mathcal{P}}$: $$\tilde{\mathcal{P}} = \hat{\mathcal{P}} + \bar{\mathcal{P}} +2 \int \mathrm{d}^{d}\mathbf{x}\; \left[\left(\Sigma^{-1}\right)^{\alpha\beta}_{ij}\left(\hat{\mathcal{J}}^\alpha_i - \rho^\alpha \hat{v}_i\right)\left(\bar{\mathcal{J}}^\beta_j - \rho^\beta \bar{v}_j\right) + \eta_{ijkl} \partial_j \hat{v}_i \partial_l \bar{v}_k \right] \equiv \hat{\mathcal{P}} + \bar{\mathcal{P}} +2\mathcal{K}$$Now $$\left(\Sigma^{-1}\right)^{\alpha\beta}_{ij}\left(\hat{\mathcal{J}}^\alpha_i - \rho^\alpha \hat{v}_i\right)\left(\bar{\mathcal{J}}^\beta_j - \rho^\beta \bar{v}_j\right) = -\left(\Sigma^{-1}\right)^{\alpha\beta}_{ij}\left(\hat{\mathcal{J}}^\alpha_i - \rho^\alpha \hat{v}_i\right) \Sigma^{\beta\gamma}_{jk} \partial_k \bar{\Phi}^\gamma = -\partial_i \bar\Phi^\alpha\left(\hat{\mathcal{J}}^\alpha_i - \rho^\alpha \hat{v}_i\right)$$and so we obtain, integrating by parts: $$\begin{aligned}
\mathcal{K} &= \int \mathrm{d}^d\mathbf{x} \left[ - \partial_i \bar\Phi^\alpha \hat{\mathcal{J}}^\alpha_i + \rho^\alpha \hat{v}_i \partial_i \bar{\Phi}^\alpha + \eta_{ijkl} \partial_j \hat{v}_i \partial_l \bar{v}_k\right] \notag \\ &= -\oint \mathrm{d}^{d-1}\mathbf{x} \bar{\Phi}^\alpha n_i \hat{\mathcal{J}}_i^\alpha + \int \mathrm{d}^d\mathbf{x} \left[ \rho^\alpha \partial_i \bar{\Phi}^\alpha - \partial_j ( \eta_{ijkl} \partial_l \bar{v}_k)\right]\hat{v}_i = 0.\end{aligned}$$ The first term vanishes since $\hat{\mathcal{J}}^\alpha_i$ is periodic, and the constant gradient terms of $\bar\Phi^\alpha$ vanish due to (\[hatjaeq\]); the second vanishes by (\[maineq2\]). We conclude that $\tilde{\mathcal{P}} \ge \bar{\mathcal{P}}$. If we define $\tilde{\mathcal{P}} = \tilde{R}^{\alpha\beta}_{ij} I^\alpha_i I^\beta_j$, then we obtain $$I^\alpha_i \tilde{R}^{\alpha\beta}_{ij} I^\beta_j \ge I^\alpha_i R^{\alpha\beta}_{ij} I^\beta_j.$$
In particular, we can immediately obtain bounds for all diagonal entries of $R^{\alpha\beta}_{ij}$. Suppose that we have a large, isotropic disordered metal, in which case we find $R^{\alpha\beta}_{ij} = R^{\alpha\beta}\delta_{ij}$ and $\tilde R^{\alpha\beta}_{ij} = \tilde R^{\alpha\beta}\delta_{ij}$. Then we have the generic bounds
\[eqlowerbound\]$$\begin{aligned}
\frac{1}{\tilde R^{\mathrm{qq}}} &\le \frac{1}{R^{\mathrm{qq}}} = \frac{G^{\mathrm{hh}}G^{\mathrm{qq}}- (G^{\mathrm{hq}})^2}{G^{\mathrm{hh}}} \le G^{\mathrm{qq}}, \\
\frac{1}{\tilde R^{\mathrm{hh}}} &\le G^{\mathrm{hh}}.\end{aligned}$$
It is also straightforward to convert from $R^{\alpha\beta}_{ij}$ into $(\sigma^{-1})^{\alpha\beta}_{ij}$: $$\mathcal{P} = I^\alpha_i R^{\alpha\beta}_{ij}I^\beta_j = L^{2d-2} \mathbb{E}\left[\mathcal{J}^\alpha_i \right] R^{\alpha\beta}_{ij} \mathbb{E}\left[\mathcal{J}^\beta_j \right] = L^d \mathbb{E}\left[\mathcal{J}^\alpha_i \right] (\sigma^{-1})^{\alpha\beta}_{ij} \mathbb{E}\left[\mathcal{J}^\beta_j \right].$$ Obtaining bounds on off-diagonal elements is slightly more subtle. Information about off-diagonal elements can be found by studying linear combinations of various components of $I^\alpha_i$, but the procedure is not as clear-cut as (\[eqlowerbound\]).
Upper Bounds
------------
Obtaining upper bounds on conductivities is in principle simpler, but quite a bit more subtle in practice. Let us write $$\mathcal{P} = \int \mathrm{d}^d\mathbf{x}\left(\Sigma_{ij}^{\alpha\beta}\partial_i \Phi^\alpha \partial_j \Phi^\beta + \eta^{-1}_{ijkl} \mathcal{T}_{ij}\mathcal{T}_{kl}\right) \equiv V^\alpha_i G^{\alpha\beta}_{ij} V^\beta_j, \label{pupperbound}$$ where $\mathcal{T}_{ij} = -\eta_{ijkl} \partial_l v_k$ is the viscous stress tensor, which has $d(d+1)/2$ independent components; $\eta^{-1}$ is a matrix inverse with the first two indices grouped together and the last two indices grouped together (but only in symmetric combinations). It is possible that $\eta$ may not be invertible, but in this case it is straightforward to regulate the zero eigenvalue with an infinitesimal positive eigenvalue and then take the inverse.[^12] To compute the conductance, we need to demand (as stated previously) that $\Phi^\alpha$ obeys $\Phi^\alpha(x_i=0) - \Phi^\alpha(x_i=L) = V^\alpha_i$.
We are going to guess a single-valued trial function $\tilde{\Phi}^\alpha = \bar\Phi^\alpha + \hat\Phi^\alpha$, with $\bar\Phi^\alpha$ the exact solution as before, and $\hat\Phi^\alpha$ a periodic function (recall that $\Phi^\alpha$ should be periodic up to the linear gradient terms). We will also guess a trial $\tilde{\mathcal{T}}_{ij} = \bar{\mathcal{T}}_{ij} + \hat{\mathcal{T}}_{ij}$, which must be a symmetric tensor, and a periodic function. We do *not* require that $\tilde{\mathcal{T}}_{ij}$ be expressible in terms of a velocity function, or that $\partial_i\tilde{\mathcal{J}}^\alpha_i = 0$. Let us verify the circumstances under which we can nevertheless find $\tilde{\mathcal{P}} \ge \bar{\mathcal{P}}$, as before. We find that $\tilde{\mathcal{P}} = \bar{\mathcal{P}} + \hat{\mathcal{P}} + 2\mathcal{K}$, with $$\begin{aligned}
\mathcal{K} &= \int \mathrm{d}^d\mathbf{x} \left( \Sigma^{\alpha\beta}_{ij} \partial_i \hat\Phi^\alpha \partial_j \bar\Phi^\beta + \eta^{-1}_{ijkl} \hat{\mathcal{T}}_{ij} \bar{\mathcal{T}}_{kl}\right) = \int \mathrm{d}^d\mathbf{x} \left( \left(\rho^\alpha \bar{v}_i - \bar{\mathcal{J}}^\alpha_i\right) \partial_i \hat\Phi^\alpha - \eta^{-1}_{ijkl} \hat{\mathcal{T}}_{ij} \eta_{klmn} \partial_n \bar{v}_m\right) \notag \\
&= \int \mathrm{d}^d\mathbf{x} \left(\rho^\alpha \partial_i \hat{\Phi}^\alpha + \partial_j \hat{\mathcal{T}}_{ij} \right)\bar{v}_i - \oint \mathrm{d}^{d-1}\mathbf{x} \; n_i \bar{\mathcal{J}}^\alpha_i \hat{\Phi}^\alpha\end{aligned}$$ We have used the periodicity of $\tilde{\mathcal{T}}_{ij}$ to integrate that term in $\mathcal{K}$ by parts. The first term vanishes if we require that $$\rho^\alpha \partial_i \tilde{\Phi}^\alpha + \partial_j \tilde{\mathcal{T}}_{ij} =0 \label{upcons}$$ for our trial functions. The second term vanishes because $\hat\Phi^\alpha$ and $\bar{\mathcal{J}}^\alpha_i$ are periodic, with $n_i \bar{\mathcal{J}}^\alpha_i$ taking opposite signs on opposite faces.
Since the integrand in (\[pupperbound\]) is positive semi-definite, $\hat{\mathcal{P}}\ge 0$. We conclude that this forms the basis of a variational principle for the computation of $G^{\alpha\beta}_{ij}$. If we define $$\tilde{\mathcal{P}} = V^\alpha_i \tilde{G}^{\alpha\beta}_{ij} V^\beta_j,$$ then $\tilde{G}^{\alpha\beta}_{ij}$ bounds $G^{\alpha\beta}_{ij}$ from above, analogously to before. As before, diagonal elements of $G^{\alpha\beta}_{ij}$ can be straightforwardly upper bounded, and off-diagonal elements require more care.
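In the simplest setting this dual principle can be checked explicitly. Consider a toy one-dimensional medium with only the charge sector, no convective transport ($\rho=0$) and viscosity neglected, so the trial data is just a potential. A uniform-gradient trial potential yields $\tilde{G}=\mathbb{E}[\Sigma]$, which indeed upper-bounds the exact conductance, the harmonic mean of $\Sigma(x)$. The conductivity profile below is an arbitrary positive function:

```python
import numpy as np

# Toy 1d medium: local conductivity sigma(x), no convection, no viscosity.
x = np.linspace(0, 1, 4000, endpoint=False)
sigma = 1.0 + 0.8 * np.sin(2 * np.pi * x) ** 2   # arbitrary positive profile
V = 1.0                                          # unit voltage drop

# Trial potential with a uniform gradient, Phi = -V x; the dissipation
# density is sigma * (dPhi/dx)^2, so the trial conductance is E[sigma].
G_trial = np.mean(sigma * V**2) / V**2

# Exact 1d answer: constant current forces sigma_eff = harmonic mean.
G_true = 1.0 / np.mean(1.0 / sigma)

assert G_trial >= G_true   # trial potential upper-bounds the conductance
```

The inequality $\mathbb{E}[\Sigma] \ge 1/\mathbb{E}[\Sigma^{-1}]$ is the arithmetic-harmonic mean inequality, with equality only for constant $\Sigma$.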
Discussion of Variational Results
---------------------------------
Here we present a summary of the calculations performed in Appendix \[appa\]. For simplicity, let $\mathbb{E}[\mathcal{Q}]=\mathcal{Q}_0$ and $\mathrm{Var}[\mathcal{Q}] = \mathbb{E}[\mathcal{Q}^2] - \mathbb{E}[\mathcal{Q}]^2 = u^2$. The electrical conductivity will, in a disordered isotropic fluid without parametrically large fluctuations in $\Sigma$ or $\eta$, be bounded from above and below by the following schematic bounds: $$\sigma_{\textsc{q}1}(u) + \sigma_{\textsc{q}2}(u) \frac{\mathcal{Q}_0^2}{u^2} \le \sigma \lesssim \sigma_{\textsc{q}3}(u) + \sigma_{\textsc{q}4}(u) \frac{\mathcal{Q}_0^2}{u^2} + \frac{\xi^2\mathcal{Q}_0^4}{\eta_1(u) u^2} + \frac{\xi^2 u^2}{\eta_2(u)}+ \frac{\xi^2 \mathcal{Q}_0^2}{\eta_3(u)} \label{boundseq}$$with each $\sigma_{\textsc{q}}$ and $\eta$ factor above related to “typical" behavior of $\Sigma^{\mathrm{qq}}$ and $\eta_{ijkl}$ respectively. In particular the upper bounds are quite subtle (see (\[eq125\])), and so each $\sigma_{\textsc{q}1,2,3,4}$ may have complicated $u$ dependence for $u\gtrsim\mathcal{Q}_0$, and we have written (\[boundseq\]) as we did to emphasize qualitative behavior, as discussed after (\[eq2\]).
(\[boundseq\]) proves that there is a crossover at $u\sim \mathcal{Q}_0$ between a coherent regime when $u \ll \mathcal{Q}_0$ (translational symmetry is weakly broken) and an incoherent regime when $u\gg\mathcal{Q}_0$ (translational symmetry is strongly broken), as depicted in Figure \[fig1\]. As discussed in the introduction, this is the physics found by mean field holographic models, and demonstrating it without a mean field treatment of disorder is a primary quantitative result of this paper.
Many of the statements which lead to (\[boundseq\]) can be made quite rigorously. The lower bounds on conductivity are derived carefully and will be valid in a wide variety of theories. The upper bounds which we derive are much more challenging to evaluate analytically when viscosity is not neglected, and so we have made heuristic arguments to understand the qualitative physics; these are non-rigorous and may break down in some cases. Theories with very large fluctuations in $\Sigma$, $\eta$, or $\rho^\alpha$ could render the upper and lower bounds far enough apart (perhaps parametrically so) for (\[boundseq\]) to not be useful. Still, we propose that the coherent-to-incoherent crossover described by (\[boundseq\]) is generic, and will provide some intuition into why this occurs.
In (\[eq2\]), we ignored viscous effects, and in Figure \[fig1\], we depicted $\sigma$ saturating at $\sigma^*$ when $u\gg \mathcal{Q}_0$. (\[boundseq\]) generically confirms this picture, with $\sigma_{\textsc{q}1} \lesssim \sigma^* \lesssim \sigma_{\textsc{q}3}$, so long as $$\sigma_{\textsc{q}}\eta \gg \xi^2u^2. \label{5bound1}$$ This inequality may or may not be satisfied, and determines whether transport may become sensitive to viscous effects. For example, in a strongly interacting quantum critical system of dynamical exponent $z$, we expect $\sigma_{\textsc{q}} \sim T^{(d-2)/z}$, $\eta \sim \mathcal{S}\sim T^{d/z}$ [@kss], and $\xi\gg T^{-1/z}$. The requirement that (\[5bound1\]) is violated is $$u \gtrsim \sqrt{\frac{\sigma_{\textsc{q}}}{T^{(d-2)/z}}} \frac{T^{d/z}}{T^{1/z}\xi} \sim \sqrt{\frac{\sigma_{\textsc{q}}}{T^{(d-2)/z}}} \frac{\mathcal{S}}{T^{1/z}\xi}. \label{5bound2}$$ When $\mu\lesssim T$ (the regime of validity[^13] of the hydrodynamic approach in a typical quantum critical model), then it is reasonable to expect that $u \lesssim \mathcal{S}$, as most of the entropy will be associated with a background charge neutral plasma, and not with the deformation by a chemical potential. Rearranging (\[5bound2\]) we find $$1 \gtrsim \frac{u}{\mathcal{S}} \gtrsim \sqrt{\frac{\sigma_{\textsc{q}}}{T^{(d-2)/z}}} \frac{1}{T^{1/z}\xi}.$$ Recalling that $T^{1/z}\xi \gg 1$, this sequence of inequalities is satisfied for disorder on the longest wavelengths. However, if $u\ll \mathcal{S}$ then it may be possible to have disorder on short enough wavelengths that this sequence of inequalities is not satisfied. It is in this regime that (\[eq2\]) and Figure \[fig1\] are valid. A viscous-dominated transport regime is not well understood, and a further understanding of this regime is an important goal for future work.
A second assumption that went into (\[eq2\]) is that $(\Sigma^{-1})^{\mathrm{qq}}$ is finite. This is true in the effective horizon fluid of holographic models, but need not be true in other quantum critical models. Transport in models where $(\Sigma^{-1})^{\mathrm{qq}}$ is infinite will be discussed elsewhere.
Similar bounds can be found for other transport coefficients. In particular, for bounds on $\bar\kappa$, one must simply replace $\sigma_{\textsc{q}}$ with $T\bar\kappa_{\textsc{q}}$, $\mathcal{Q}_0$ with $T\mathcal{S}_0=\mathbb{E}[T\mathcal{S}]$, and $u^2$ with $T^2 \mathrm{Var}[\mathcal{S}]$. It is more likely that thermal transport is sensitive to viscosity in a quantum critical system, as $\bar\kappa_{\textsc{q}} \rightarrow 0$ when $\bar \mu \ll T$ [@hkms].
One of the most important results we find is an exact inequality for an isotropic fluid: $$\sigma^{\alpha\alpha} \ge \frac{1}{\mathbb{E}\left[\left(\Sigma^{-1}\right)^{\alpha\alpha}_{ii}\right]} , \;\;\;\text{(no summation on }\alpha\text{ or } i). \label{saa}$$ This can be interpreted simply as the statement that a uniform charge or heat current could flow through the fluid, with no convective transport, encountering this effective conductivity. So long as a current can flow everywhere locally with a finite conductivity, a current can flow globally. (\[saa\]) is incredibly powerful – in particular, if $\Sigma^{\mathrm{qq}}$ is strictly positive at all points in space, we have *proven* that the QFT described by this framework is a conductor. As mentioned above, the lower bounds in (\[boundseq\]) are derived quite carefully, and essentially follow from generalizations of (\[saa\]). Our proof that these fluids are conductors when $\Sigma^{\alpha\beta}_{ij}$ is finite generalizes to anisotropic theories, though the bounds are more easily expressed as upper bounds on the matrix $\sigma^{-1}$.
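The content of (\[saa\]) is easiest to see in one dimension, where the bound is saturated: current conservation forces $\mathcal{J}$ to be uniform, and the effective conductivity is exactly the harmonic mean $1/\mathbb{E}[\Sigma^{-1}]$. A sketch with an arbitrary, strongly inhomogeneous but strictly positive $\Sigma(x)$:

```python
import numpy as np

# Strongly disordered but strictly positive local conductivity profile;
# the log-normal distribution is an arbitrary illustrative choice.
rng = np.random.default_rng(0)
Sigma = np.exp(rng.normal(0.0, 1.0, size=100000))

bound = 1.0 / np.mean(1.0 / Sigma)   # the lower bound of (saa)

# "Solve" the 1d transport problem: current conservation makes J uniform,
# so the total voltage drop is J * E[Sigma^{-1}] and the bound is saturated.
J = 1.0
voltage_drop = J * np.mean(1.0 / Sigma)
sigma_eff = J / voltage_drop

assert abs(sigma_eff - bound) < 1e-12
assert bound > 0    # finite Sigma everywhere implies a conductor
```

In higher dimensions the current can additionally detour around badly conducting regions, so $1/\mathbb{E}[\Sigma^{-1}]$ is a strict lower bound rather than an equality.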
The new approaches advocated in this paper, along with the existing mean-field literature, suggest that the fate of most holographic models at strong disorder – at fixed temperature $T$, and arbitrarily strong disorder – is to become an incoherent conductor, and not an interacting quantum glass. This is a remarkable and highly non-trivial prediction. In contrast, in metals described by (fermionic) free quantum field theories, there is a transition to an insulating phase at some critical disorder strength [@anderson], which is zero in $d\le 2$ [@abrahams]. Generically, interactions do lead to delocalized, conducting phases at weak disorder, with localization and insulating physics arising at stronger disorder strengths, in any $d$ [@giamarchi; @basko]. It is possible that this localization transition is not observable in classical holography, which only captures the leading order in $N\rightarrow \infty$, and so has taken the “coupling strength $\rightarrow\infty$" limit before the “disorder $\rightarrow\infty$" limit.
We have always referred to these hydrodynamic models as incoherent metals. Holographic “insulators" discussed in the literature typically rely on $\Sigma^{\mathrm{qq}}$ scaling as a positive power of $T$, in a homogeneous model. In [@rangamani], it is likely due to stripes of such decreasing $\Sigma^{\mathrm{qq}}$ arising at low $T$. More generally, such insulators arise from the percolation of locally insulating $\Sigma^{\mathrm{qq}}$ regions through the effective horizon fluid.[^14] This is not unlike the “metal-insulator" transition of a classical disordered resistor lattice, associated with percolation of $R=\infty$ resistors across the lattice [@kirkpatrick; @derrida]. This is a different mechanism from Anderson localization in typical condensed matter systems, which is related to destructive interference of quasiparticles scattering off of disorder. Of course, in holographic models, the percolation phenomenon on the horizon could emerge from “benign" disorder on the boundary, but from the point of view of the emergent horizon fluid the metal-insulator transition is simply a percolation transition. We emphasize that our hydrodynamic formalism is still mathematically valid for dc transport in holographic insulators, due to the remarkable mathematical results of [@donos1506; @donos1507]. The physical interpretation of such a fluid is an important question for future work, as emphasized in the previous section. There is to date no construction of a holographic metal-insulator transition that is unambiguously driven by (non-striped) disorder, and interpreting any such model in terms of hydrodynamic transport may lead to interesting insights. In simple holographic models, it has recently been shown that such a transition is impossible [@grozdanov], and so more complicated models with bulk scalar fields will be necessary.
In a non-holographic context, it is less clear whether or not our hydrodynamic formalism will be valid in a quantum system undergoing a metal-insulator transition, as the validity of hydrodynamics rests on the disorder being long wavelength. The classical “metal-insulator" transition realized by resistor networks [@kirkpatrick; @derrida] is a crude example of this phenomenon, but relies on the only hydrodynamic degree of freedom being charge.
Finally, as we are studying a strongly disordered system, it is also worthwhile to think about fluctuations in the transport coefficients between different realizations of the quenched disorder. As in [@lucas1411], we expect that these fluctuations are suppressed as $L^{-d/2}$ as the sample size $L$ increases, with possible deviations when the distributions of the random coefficients $\rho$, $\Sigma$ and $\eta$ are heavy tailed. Such fluctuations are classical, but this is not surprising since the dc response of our QFTs is governed by classical hydrodynamics. This is analogous to weakly interacting theories at finite temperature [@leestone2]. In contrast, a free quantum field theory has universal conductance fluctuations at $T=0$ [@leestone; @altshuler; @imry], so it would be interesting to ask if the $T\rightarrow 0$ limit of holographic models (where hydrodynamics can still be a sensible approach [@davisoncold]) has anomalous fluctuations in transport coefficients, in disordered models.
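This central-limit scaling can be checked in a minimal $d=1$ toy model of independent random resistors in series (nothing holographic is assumed here; the bond distribution and sample counts are arbitrary): the relative sample-to-sample spread of the effective conductance falls as the inverse square root of the number of bonds.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_spread(n_bonds, n_samples=4000):
    """Relative sample-to-sample fluctuation of the conductance of a
    chain of n_bonds independent random resistors in series (d = 1)."""
    g = rng.uniform(0.5, 1.5, size=(n_samples, n_bonds))   # bond conductances
    sigma_eff = n_bonds / (1.0 / g).sum(axis=1)            # series addition
    return sigma_eff.std() / sigma_eff.mean()

# quadrupling the sample size should halve the fluctuations: L^{-d/2} with d = 1
ratio = relative_spread(64) / relative_spread(256)
```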
Localization
============
As we previously mentioned, at strong disorder many free or weakly interacting quantum systems are described by a “localized" phase in which transport is exponentially suppressed at low temperatures [@anderson]. Naively, one might think that a strong coupling analogue of localization – with the associated reduction in transport – would exist at strong disorder. Indeed, [@saremi] provided evidence for a possible connection in a holographic model. In seeming contrast to this, we have rigorously ruled out any insulating, localized phase in our framework (which includes many such holographic models), so long as the quantum critical conductivity is finite everywhere; most of the simple holographic models studied in the literature to date are described by our framework, with finite $\Sigma^{\mathrm{qq}}$ everywhere in space.
This is consistent with known results in elastic networks and other random resistor networks. Even when the classical eigenfunctions of the linearized hydrodynamic equations are localized [@anderson2; @john; @ludlam], diffusion and transport remain possible; this has been shown in similar models without convective transport [@halperin2; @ziman2; @amir1; @amir2]. Localization is more subtle in these systems because exact conservation laws force the hydrodynamic operators to have zero modes. Together with the existence of modes of arbitrarily long correlation length and finite eigenvalues, this allows transport despite classical localization, and so the signatures observed in [@saremi] need not be important for dc transport.
The finite momentum or finite frequency response of the system may be more sensitive to localization. In a simple model of disordered RC circuits, interesting new universal phenomena arise [@amir3]. It is worthwhile to understand finite $\omega$ transport in the class of models in this paper as well. In particular, it is interesting to ask whether at strong disorder, the Drude peak found via memory matrices [@lucasMM] broadens out enough to look “incoherent" [@hartnoll1], at least at small frequencies, or whether more exotic phenomena emerge.
There is one other point worth making about localized eigenmodes. In a translationally invariant fluid, long-time tails in hydrodynamic correlation functions in $d\le 2$ spoil hydrodynamic descriptions of dc transport [@kovtunlec]. In particular, in $d=2$, the conductivity $\sigma(\omega)$ in an uncharged, translationally invariant system picks up a correction $\sim \log (T/|\omega|)$, which diverges as $\omega\rightarrow 0$ [@willwk]. In holography, it is known that such long time tails are quantum bulk effects [@caronhuot], and are thus completely suppressed in the models described in Section \[sec4\]. Since it has been argued that the memory matrix approach gives sensible predictions for realistic strange metal physics [@raghu2; @patel; @debanjan], and the memory matrix framework employed there can be interpreted hydrodynamically, one might be, a priori, concerned about whether long time tails can spoil dc transport in these models. If in the thermodynamic limit, all modes (except for the two zero modes) of the classical hydrodynamic operators are localized, as is believed in $d\le 2$ (where long time tails are problematic), then the standard argument for long time tails [@kovtunlec] seems to fail. It would be interesting to explore this point further in future work.
Conclusion
==========
In this paper, we have explored the consequences of hydrodynamics for the transport coefficients of a strongly coupled QFT, disordered on large length scales. We demonstrated that hydrodynamics can be used to understand the memory function computations of momentum relaxation times, which have previously been derived using an abstract and opaque formalism. It is also straightforward – at least in principle – to compute transport coefficients at higher orders in perturbation theory, whereas memory function formulas only give leading order transport coefficients. Remarkably, we also demonstrated that many non-perturbative holographic dc transport computations can be interpreted entirely by solving a hydrodynamic response problem of a new emergent horizon fluid. Thus, the technology of Appendix \[appa\] may be applied to these models, when exact solutions are not available. We still need specific microscopic theories to compare with $T$-scaling laws in experiments, but this work provides important physical transparency to a large body of recent literature on transport in strongly coupled QFTs. We emphasize again that the memory function formalism and holography are valid in regimes where hydrodynamics should formally break down, and so it is strange (but useful) that hydrodynamic technology (which is readily understandable) can be used to help interpret these results nonetheless.
The fact that this hydrodynamic framework can be used to interpret such a wide variety of results from memory function or holographic computations is suggestive of the fate of such theories at strong disorder. Shortly after this paper was released, it was proved in [@grozdanov] that all $\mathrm{AdS}_4$-Einstein-Maxwell holographic models are electrical conductors, using these hydrodynamic techniques (which are valid so long as the bulk black hole horizon is connected). Thus, we do *not* expect a holographic analogue of a many-body localized phase [@basko] to exist in many strongly disordered holographic systems. Strongly disordered black holes have been numerically constructed recently [@santosdisorder1; @santos; @santosdisorder2]; hence, dc transport coefficients in these backgrounds, along with finite momentum or frequency response, may be numerically computable in the near future.
More generally, we have also demonstrated – without recourse to mean field treatments of disorder, or to holography – a framework which generically gives rise to both a coherent metal at weak disorder, and an incoherent metal at strong disorder. Incoherent metallic physics has been proposed recently to be responsible for some of the exotic thermoelectric properties of the cuprate strange metals [@cupratescale1; @cupratescale2]. One can easily imagine taking realistic scaling laws from microscopic models appropriate for cuprates [@raghu2; @patel; @debanjan; @patel2] and making quantitative scaling predictions about the strong disorder regime using insights from our framework.
Hydrodynamics provides a valuable framework for interpreting more specific microscopic calculations. There are many natural extensions of this work: two examples are the study of hydrodynamic transport in disordered superfluids and superconductors, and the study of systems perturbed by deformations beyond $\bar\mu$. Still, this framework has limitations. High frequency transport (in particular, $\omega \gtrsim T$) cannot be captured by hydrodynamics, and provides a unique opportunity for holography in particular to make experimentally relevant predictions about quantum critical dynamics [@willwk; @prokofev].
Acknowledgements {#acknowledgements .unnumbered}
================
I would like to thank Ariel Amir, Richard Davison, Blaise Goutéraux, Sarang Gopalakrishnan, Bertrand Halperin, Sean Hartnoll and Michael Knap for helpful discussions, and especially Subir Sachdev for critical discussions on presenting these ideas in a more transparent way.
Onsager Reciprocity {#apponsager}
===================
In this appendix we prove that the thermoelectric conductivity matrix $\sigma^{\alpha\beta}_{ij}$ is symmetric. This follows entirely from the symmetries of the diffusive transport coefficients (\[diffsym1\]) and (\[diffsym2\]), as well as the equations of motion (\[maineq1\]) and (\[maineq2\]).
To do this, we look for solutions $\Phi^\alpha$ and $v_i$ which are periodic, up to the terms in $\Phi^\alpha$ linear in $x_i$. In particular, we write
$$\begin{aligned}
\Phi^\alpha &= -F^\alpha_i x_i + \Phi^{\alpha\beta}_j F^\beta_j, \\
v_i &= v^\beta_{ij}F^\beta_j,
\end{aligned}$$
which we may always do, as the equations of motion are linear. (\[maineq1\]) and (\[maineq2\]) become:
\[onsagereq\]$$\begin{aligned}
\partial_i \left(\rho^\alpha v^\beta_{ij} - \Sigma^{\alpha\gamma}_{ik}\partial_k \Phi^{\gamma\beta}_j\right) &= -\partial_i \Sigma^{\alpha\beta}_{ij}, \\
\rho^\alpha \partial_i \Phi^{\alpha\beta}_j - \partial_m\left(\eta_{imkl}\partial_l v_{kj}^\beta\right) &= \rho^\beta\delta_{ij}.
\end{aligned}$$
We further have $$\sigma^{\alpha\beta}_{ij} = \mathbb{E}\left[\rho^\alpha v^\beta_{ij} - \Sigma^{\alpha\gamma}_{ik}\partial_k \Phi^{\gamma\beta}_j + \Sigma^{\alpha\beta}_{ij}\right] = \mathbb{E}\left[\rho^\alpha v^\beta_{ij} + \Phi^{\gamma\beta}_j \partial_k\Sigma^{\alpha\gamma}_{ik} + \Sigma^{\alpha\beta}_{ij}\right] ,$$ as we can always integrate by parts inside of spatial averages. Now, let us employ (\[onsagereq\]) and (\[diffsym1\]) and write $$\sigma^{\alpha\beta}_{ij} = \mathbb{E}\left[ v^\beta_{kj} \rho^\gamma \partial_k \Phi^{\gamma \alpha}_{i} + \eta_{klmn}\partial_l v_{kj}^\beta \partial_n v_{mi}^\alpha +\partial_k\Phi^{\gamma\beta}_j \left(\rho^\gamma v^\alpha_{ki} - \Sigma^{\gamma\delta}_{kl}\partial_l \Phi^{\delta\alpha}_i\right)+ \Sigma^{\alpha\beta}_{ij} \right].$$ Using (\[diffsym1\]) and (\[diffsym2\]) it is straightforward to see from the previous equation that $\sigma^{\alpha\beta}_{ij}$ is symmetric.
Perturbative Expansions {#apppert}
=======================
Let us describe how to extend the weak disorder calculations of Section \[sec3\] to arbitrarily high orders in perturbation theory, in the special case where the disorder is introduced entirely through $\bar \mu$. This also gives a flavor for how to “extend the memory matrix formalism" beyond leading order in perturbation theory.
Let us write $\mu = \mu_0 +\epsilon \hat\mu(\mathbf{x})$, with $\epsilon \ll 1$ a perturbatively small number. Within linear response, the fields $\Phi^\alpha$ and $v_i$ may be written as follows:
$$\begin{aligned}
\Phi^\alpha &= -F^\alpha_ix_i + \sum_{n=-1}^\infty \epsilon^n \Phi^\alpha_{(n)}, \\
v_i &= \sum_{n=-2}^\infty \epsilon^n \bar{v}_{i(n)} + \sum_{n=-1}^\infty \epsilon^n \tilde{v}_{i(n)}\end{aligned}$$
where $\mathbb{E}[\tilde{\mathbf{v}}]=\mathbf{0}$, $\bar{\mathbf{v}}$ a constant, and $\Phi^\alpha_{(n)}$ single-valued. We will justify the powers of $\epsilon$ above in our computation below, but for now let us emphasize that $\bar v_{i(n-1)}$, $\tilde{v}_{i(n)}$ and $\Phi^\alpha_{(n)}$ enter the computation at the same order. In addition, the hydrodynamic background becomes disordered:
$$\begin{aligned}
\rho^\alpha &= \rho_0^\alpha + \sum_{n=1}^\infty \epsilon^n \rho_{(n)}^\alpha, \\
\Sigma^{\alpha\beta}_{ij} &= \Sigma^{\alpha\beta}_0 \delta_{ij} + \sum_{n=1}^\infty \epsilon^n \Sigma_{ij(n)}^{\alpha\beta}, \\
\eta_{ijkl} &= \eta_{0ijkl} + \sum_{n=1}^\infty \epsilon^n \eta_{ijkl(n)}.\end{aligned}$$
The background at $\mathrm{O}(\epsilon^0)$ is translation invariant, but not at higher orders. As in the main text, we will assume isotropy of the leading order transport coefficients.
(\[maineq1\]) and (\[maineq2\]) may be perturbatively expanded in powers of $\epsilon$. We find the following equations in Fourier space:
$$\begin{aligned}
\mathrm{i}k_i \rho_{(1)}^\alpha(\mathbf{k}) \bar{v}_{i(n-1)} + \mathrm{i}k_i \rho^\alpha_0 \tilde{v}_{i(n)}(\mathbf{k}) + k^2\Sigma_0^{\alpha\beta}\Phi^\beta_{(n)}(\mathbf{k}) &= -X^{\alpha}_{(n)}(\mathbf{k}), \\
\mathrm{i}k_i \rho_0^\alpha \Phi^\alpha_{(n)}(\mathbf{k}) + \eta_0 k_i \left(k_i \tilde{v}_{j(n)}(\mathbf{k}) + k_j \tilde{v}_{i(n)}(\mathbf{k}) - \frac{2}{d}\delta_{ij}k_l\tilde{v}_{l(n)}(\mathbf{k})\right) + \zeta_0 \delta_{ij}k_l\tilde{v}_{l(n)}(\mathbf{k}) &= -Y_{i(n)}(\mathbf{k}), \\
\mathrm{i}\sum_{\mathbf{k}} k_i \Phi^\alpha_{(n)}(\mathbf{k}) \rho_{(1)}^\alpha(-\mathbf{k}) &= -Z_{i(n)}\end{aligned}$$
with the third equation the zero mode of the second, and
$$\begin{aligned}
X^\alpha_{(-1)} &= 0 \\
Y_{i(-1)} &= 0 \\
Z_{i(-1)} &= -\rho_0^\alpha F^\alpha_i\end{aligned}$$
and, for $n\ge 0$:
$$\begin{aligned}
X^\alpha_{(n)} &= \mathrm{i}k_i \sum_{m=-2}^{n-2} \rho^\alpha_{(n-m)}(\mathbf{k}) \bar{v}_{i(m)}
+ \mathrm{i}k_i \sum_{m=-1}^{n-1} \sum_{\mathbf{q}} \rho^\alpha_{(n-m)}(\mathbf{k}-\mathbf{q}) \tilde{v}_{i(m)}(\mathbf{q}) \notag \\
&+ k_i \sum_{m=-1}^{n-1} \sum_{\mathbf{q}} \Sigma_{ij(n-m)}^{\alpha\beta}(\mathbf{k}-\mathbf{q}) q_j \Phi^\beta_{(m)}(\mathbf{q}) \\
Y_{i(n)} &= \sum_{m=-1}^{n-1} \sum_{\mathbf{q}} \left[ \mathrm{i}q_i \Phi^\alpha_{(m)}(\mathbf{q}) \rho^\alpha_{(n-m)}(\mathbf{k}-\mathbf{q}) + \eta_{(n-m)}(\mathbf{k}-\mathbf{q}) k_j q_j \tilde{v}_{i(m)}(\mathbf{q}) \right. \notag \\
&\left. + \left(\eta^\prime_{(n-m)}(\mathbf{k}-\mathbf{q}) - \eta_{(n-m)}(\mathbf{k}-\mathbf{q})\right) k_i q_j \tilde{v}_{j(m)}(\mathbf{q}) \right] \\
Z_{i(n)} &= -\mathrm{i}\sum_{\mathbf{k}} \sum_{m=-1}^{n-1} k_i \Phi^\alpha_{(m)}(\mathbf{k}) \rho^\alpha_{(n+1-m)}(-\mathbf{k}).\end{aligned}$$
Order by order in perturbation theory, these equations may be solved exactly:
$$\begin{aligned}
\bar{v}_{i(n-1)} &= -\Gamma^{-1}_{ij} \left(Z_{j(n)} -\mathrm{i} \sum_{\mathbf{k}} k_j \rho^\beta_{(1)}(-\mathbf{k})(\mathfrak{m}(k)^{-1})^{\alpha\beta} \left(X^\alpha_{(n)}(\mathbf{k}) - \mathrm{i}\rho_0^\alpha \frac{k_l Y_{l(n)}(\mathbf{k})}{\eta^\prime_0 k^2}\right)\right), \\
\Phi^\alpha_{(n)}(\mathbf{k}) &= -\mathrm{i}(\mathfrak{m}(k)^{-1})^{\alpha\beta}\left(k_i \bar{v}_{i(n-1)} \rho_{(1)}^\beta(\mathbf{k}) + X_{(n)}^\beta(\mathbf{k}) - \mathrm{i}\rho_0^\beta \frac{k_i Y_{i(n)}(\mathbf{k})}{\eta^\prime_0 k^2}\right), \label{eqphim1} \\
\tilde{v}_{i(n)} &= -\frac{\mathrm{i}}{\eta^\prime_0 k^2} k_i \rho_0^\alpha \Phi^{\alpha}_{(n)}(\mathbf{k}) -\frac{1}{\eta_0 k^2} \left(\delta_{ij} - \frac{\eta^\prime_0 - \eta_0}{\eta^\prime_0}\frac{k_ik_j}{k^2}\right) Y_{j(n)}(\mathbf{k}) . \end{aligned}$$
with $\Gamma_{ij}$ and $\mathfrak{m}^{\alpha\beta}$ given by (\[gammaij32\]), with $\hat\rho^\alpha\rightarrow \rho^\alpha_{(1)}$. These equations have a clearly nested structure and can be iteratively solved. At leading order, it is readily seen that the response of the fluid is simply $$\mathcal{J}^\alpha_{i(-2)} = \rho_0^\alpha \rho_0^\beta \Gamma^{-1}_{ij} F^\beta_j,$$ as claimed in the main text. We also stress that even at leading order, $\Phi^\alpha$ and $\tilde{v}_i$ are non-local functions.
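The nested structure is that of a Born-like geometric series for a linear operator split into a homogeneous piece and an $\mathrm{O}(\epsilon)$ disorder piece. A stripped-down sketch of the iteration, with generic matrices standing in for the hydrodynamic operators (the matrices, dimension, and value of $\epsilon$ below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 6, 0.1
M0 = 2.0 * np.eye(n)                       # translation-invariant part
M1 = 0.3 * rng.standard_normal((n, n))     # disorder, entering at O(eps)
b = rng.standard_normal(n)

# solve (M0 + eps*M1) x = b order by order:
#   x_0 = M0^{-1} b,   x_n = -M0^{-1} M1 x_{n-1}
x = np.zeros(n)
term = np.linalg.solve(M0, b)
for order in range(30):
    x += eps**order * term                  # accumulate the O(eps^order) piece
    term = -np.linalg.solve(M0, M1 @ term)  # source term for the next order

err = np.linalg.norm(x - np.linalg.solve(M0 + eps * M1, b))
```

Each order only requires the solution at lower orders, exactly as in the $X$, $Y$, $Z$ recursion above; only the homogeneous operator is ever inverted.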
At higher orders, the $X$, $Y$ and $Z$ corrections must be systematically accounted for, and this is overwhelming to process by hand, especially without specific equations of state. However, this tedious procedure does seem easier than attempting to generalize the memory matrix formalism to higher orders in perturbation theory, and indeed makes predictions for such an effort. So let us at least comment qualitatively on what happens at higher orders in perturbation theory. Many terms that contribute to $\mathcal{J}^\alpha_i$ at higher orders in $\epsilon$ are related to $\rho^\alpha_{(n)}$, $\Sigma^{\alpha\beta}_{ij(n)}$, and $\eta_{(n)}$ and $\zeta_{(n)}$, at larger $n$. As we mentioned in Section \[sec32\], we can interpret $$\rho^\alpha_{(1)}(\mathbf{k}) = \mathrm{Re}\left[G^{\alpha\mathrm{q}}(\mathbf{k},\omega=0)\right] \hat\mu(\mathbf{k}) = \chi^{\alpha\mathrm{q}} \hat\mu(\mathbf{k}).$$ Namely, the response coefficients above are related to certain Green’s functions that can be computed in a microscopic model. Recall that the disorder is on such long wavelengths that we may neglect $\mathbf{k}$-dependence in the hydrodynamic Green’s functions. So it is tempting to interpret $$\begin{aligned}
\rho^\alpha_{(n)}(\mathbf{k}) &= \frac{1}{n!}\sum_{\mathbf{k}_1,\ldots,\mathbf{k}_n} \mathrm{Re}\left[ G^{\alpha\mathrm{q}\cdots\mathrm{q}}(\mathbf{k}_1,\ldots,\mathbf{k}_n, \omega=0)\right] \hat\mu(\mathbf{k}_1)\cdots \hat\mu(\mathbf{k}_n) \delta_{\mathbf{k},\mathbf{k}_1+\cdots+\mathbf{k}_n} \notag \\
&\approx \frac{\chi^{\alpha\mathrm{q}\cdots\mathrm{q}}}{n!} \sum \hat\mu(\mathbf{k}_1)\cdots \hat\mu(\mathbf{k}_n) \delta_{\mathbf{k},\mathbf{k}_1+\cdots+\mathbf{k}_n},\end{aligned}$$ with $G^{\alpha\mathrm{q}\cdots\mathrm{q}}$ an appropriate $n$-point Green’s function in the microscopic theory. Similar statements may be made for $\Sigma$ and $\eta$ by relating them properly to Green’s functions of $J_\mu$ and $T_{\mu\nu}$, as in [@kovtunlec]. In the last step, we have used the fact that disorder is long wavelength, and so we expect $\rho^\alpha_{(n)}$, $\Sigma^{\alpha\beta}_{ij(n)}$ and $\eta_{ijkl(n)}$ to be local functions of $\hat\mu$ in position space.
This provides a prediction of our hydrodynamic framework which may be compared with a memory matrix calculation at higher orders in perturbation theory (or another method). Of course, we should stress that in principle, memory matrix calculations can account for corrections beyond the regime of validity of hydrodynamics, though in the limits we identified in Section \[sec2\], one should find that only the contributions described above contribute to the conductivities.
In the above framework, it does not seem as though there are any natural cancellations between various terms at higher orders in perturbation theory. So this approach becomes rapidly unwieldy for computing transport coefficients past leading order in $\epsilon$. Holographic mean field phenomenology suggests that these corrections are all related to a single phenomenological coefficient – the Drude relaxation time $\tau \sim \epsilon^{-2}$ in (\[drude\]). It would be interesting to understand further under what circumstances the Green’s functions above undergo similar universal cancellations, and whether this is a sensible prediction of holography.
Examples of Variational Calculations {#appa}
====================================
Upper Bounds on the Resistance Matrix
-------------------------------------
A simple set of trial functions is
$$\begin{aligned}
\tilde{\mathcal{J}}^\alpha_i &= \text{constant}, \\
\tilde{v}_i &= 0.\end{aligned}$$
This is a guess corresponding to strong momentum relaxation, as the response of the metal is entirely in the diffusive sector. Employing (\[eqlowerbound\]) we obtain (\[saa\]).
In cases with weak disorder this bound is not strong enough to be useful, and we can do better by allowing for $\tilde{v}_i$ to be a constant ($\mathbf{x}$-independent) variational parameter. In this case, we obtain $$\tilde{\mathcal{P}}(\tilde{v}_i) = L^d\left[ A^{\alpha\beta}_{ij} \mathcal{J}^\alpha_i \mathcal{J}^\beta_j + 2 B^\beta_{ij} \mathcal{J}^\beta_j \tilde{v}_i + C_{ij}\tilde{v}_i \tilde{v}_j\right]$$where
$$\begin{aligned}
A^{\alpha\beta}_{ij} &= \mathbb{E}\left[\left(\Sigma^{-1}\right)^{\alpha\beta}_{ij}\right], \\
B_{ij}^\beta &= -\mathbb{E}\left[\left(\Sigma^{-1}\right)^{\alpha\beta}_{ij} \rho^\alpha\right], \\
C_{ij} &= \mathbb{E}\left[\left(\Sigma^{-1}\right)^{\alpha\beta}_{ij} \rho^\alpha \rho^\beta\right].\end{aligned}$$
Minimizing $\tilde{\mathcal{P}}(\tilde{v}_i)$, we find $$\mathcal{J}^\alpha_i \left(\sigma^{-1}\right)^{\alpha\beta}_{ij} \mathcal{J}^\beta_j \le \mathcal{J}^\alpha_i\left[ A^{\alpha\beta}_{ij} - B^\alpha_{ik} B^\beta_{jl} C^{-1}_{kl} \right]\mathcal{J}^\beta_j \equiv \mathcal{J}^\alpha_i (\tilde{\sigma}^{-1})^{\alpha\beta}_{ij} \mathcal{J}^\beta_j . \label{difflower}$$
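Minimizing the quadratic form over the constant $\tilde v_i$ is a standard Schur-complement computation; here is a quick numerical sanity check with arbitrary matrices (flattening the $(\alpha, i)$ index pairs into single matrix indices, so $A$, $B$, $C$ are ordinary matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
k = 3
A = 5.0 * np.eye(k)                            # stands in for the A block
B = rng.standard_normal((k, k))                # stands in for the B block
C0 = rng.standard_normal((k, k))
C = C0 @ C0.T + k * np.eye(k)                  # C block, positive definite
J = rng.standard_normal(k)

# P(v) = J.A.J + 2 J.B.v + v.C.v is minimized at v* = -C^{-1} B^T J
v_star = -np.linalg.solve(C, B.T @ J)
P_min = J @ A @ J + 2 * J @ B @ v_star + v_star @ C @ v_star

# Schur-complement expression quoted in the text: A - B C^{-1} B^T
P_schur = J @ (A - B @ np.linalg.solve(C, B.T)) @ J
```

The minimum agrees with the Schur-complement form, and is never larger than the $\tilde v_i = 0$ trial value, which is how the improved bound beats (\[saa\]).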
It is straightforward to see that the smallest eigenvalue of $\tilde \sigma^{-1}$ must be larger than the smallest eigenvalue of $\sigma^{-1}$. A generic consequence of this result is that if some components of $\tilde \sigma^{-1}$ are parametrically small in the weak disorder limit, the corresponding components of $\sigma$ may be parametrically large in this limit. A simple example analogous to Section \[sec32\] is the case where $\Sigma$ is a constant, isotropic matrix, and $\rho^\alpha = \rho_0^\alpha + \hat\rho^\alpha$, with $\rho_0^\alpha$ a constant and $\hat\rho^\alpha$ a small perturbation with $\mathbb{E}[\hat\rho^\alpha]=0$. In this case we find $$(\tilde{\sigma}^{-1})^{\alpha\beta}_{ij} = (\Sigma^{-1})^{\alpha\beta} \delta_{ij} - \frac{(\Sigma^{-1})^{\alpha\gamma}(\Sigma^{-1})^{\beta\delta} \rho_0^\gamma \rho_0^\delta }{(\Sigma^{-1})^{\eta\zeta}\left(\rho_0^\eta \rho_0^\zeta + \mathbb{E}\left[\hat\rho^\eta\hat\rho^\zeta\right]\right)}\delta_{ij} .$$ As $\hat\rho^\alpha \rightarrow 0$, one can check that $\rho_0^\beta$ becomes an eigenvector of $\tilde{\sigma}^{-1}$ with a parametrically small eigenvalue. Exactly at $\hat\rho^\alpha=0$, $\sigma^{\alpha\beta}$ will have an eigenvalue of $\infty$; as discussed previously, this follows on quite general principles from the fact that $\mathcal{S}$ and $\mathcal{Q}$ become constant. If we invert this matrix, we find $$\tilde{\sigma}^{\alpha\beta} \approx \frac{\rho_0^\alpha \rho_0^\beta}{\mathbb{E}[\hat\rho^\eta \hat\rho^\zeta] (\Sigma^{-1})^{\eta\zeta}} + \cdots \equiv \frac{\rho_0^\alpha \rho_0^\beta}{\tilde{\mathcal{C}}} + \cdots \label{sq2}$$ To compute this eigenvalue,[^15] it is easiest to compute $\rho_0^\alpha (\tilde{\sigma}^{-1})^{\alpha\beta}\rho_0^\beta$, and take the leading order coefficient in $\hat\rho^\alpha$. The subleading corrections correspond to diffusive transport and stay finite in the $\hat\rho^\alpha\rightarrow 0$ limit.
Let us compare to the exact results in the perturbative limit in Section \[sec32\]. Technically speaking, we are not guaranteed that $\tilde{\sigma}^{\alpha\beta} \le \sigma^{\alpha\beta}$, though this inequality is satisfied in this limit (assuming that $\rho_0^\alpha>0$). For we can write $\sigma^{\alpha\beta} \approx \rho_0^\alpha \rho_0^\beta/\mathcal{C}$, with $\mathcal{C}$ given by (\[gammaij32\]), and $$\mathcal{C} = \frac{1}{d}\sum_{\mathbf{k}} \hat\rho^\alpha(-\mathbf{k}) \left(\frac{\rho_0^\alpha \rho_0^\beta}{\eta^\prime k^2} + \Sigma^{\alpha\beta}\right)^{-1} \hat\rho^\beta(\mathbf{k}) \le \frac{1}{d}\sum_{\mathbf{k}} \hat\rho^\alpha(-\mathbf{k}) (\Sigma^{-1})^{\alpha\beta} \hat\rho^\beta(\mathbf{k}) = \frac{\tilde{\mathcal{C}}}{d}.$$ The first inequality here follows from the fact that for any vector $u_i$, and two positive definite matrices $A_{ij}$ and $B_{ij}$, the following inequality holds: $$u_i (A+B)^{-1}_{ij} u_j \le u_i A^{-1}_{ij}u_j. \label{matproof1}$$To prove this, let $\lambda>0$ be a positive coefficient, and $$\frac{\mathrm{d}}{\mathrm{d}\lambda} u_i (A+\lambda B)^{-1}_{ij} u_j = -u_i (A+\lambda B)^{-1}_{ik} B_{kl} (A+\lambda B)^{-1}_{lj} u_j < 0, \label{matproof2}$$with the latter inequality following from positive-definiteness of sums and products of positive definite matrices. Integrating (\[matproof2\]) from $\lambda=0$ to 1 proves (\[matproof1\]).
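The monotonicity argument above can be spot-checked by brute force on random positive definite matrices (the dimension and sample count below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

def random_spd(n):
    m = rng.standard_normal((n, n))
    return m @ m.T + 0.1 * np.eye(n)          # strictly positive definite

# check u.(A+B)^{-1}.u <= u.A^{-1}.u for positive definite A and B
ok = all(
    u @ np.linalg.solve(A + B, u) <= u @ np.linalg.solve(A, u) + 1e-10
    for A, B, u in ((random_spd(4), random_spd(4), rng.standard_normal(4))
                    for _ in range(200))
)
```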
It is also possible to find viscosity-limited bounds on the resistance matrix which can be smaller than the diffusion-limited bound (\[difflower\]), where viscosity plays no role. For simplicity, let us focus on the specific case of computing thermal transport in an isotropic theory with $\mathcal{Q}=0$. (\[difflower\]) gives us that $$(\sigma^{-1})^{\mathrm{hh}} \le \mathbb{E}\left[(\Sigma^{-1})^{\mathrm{hh}}\right] -\frac{\mathbb{E}[(\Sigma^{-1})^{\mathrm{hh}}\mathcal{S}]^2}{\mathbb{E}[(\Sigma^{-1})^{\mathrm{hh}}\mathcal{S}^2]}. \label{thermb1}$$ A natural guess for a viscous-dominated bound is to assume that
$$\begin{aligned}
\tilde{\mathcal{J}}^{\mathrm{h}}_i &= \text{constant}, \\
\tilde{v}_i &= \frac{\tilde{\mathcal{J}}^{\mathrm{h}}_i}{T\mathcal{S}}.\end{aligned}$$
This directly leads to $$(\sigma^{-1})^{\mathrm{hh}} \le \mathbb{E}\left[ \frac{\eta^\prime}{d} \left(\frac{\partial_i\mathcal{S}}{T\mathcal{S}^2}\right)^2 \right]. \label{thermb2}$$ We may employ whichever of (\[thermb1\]) or (\[thermb2\]) is smaller.
Upper Bounds on the Conductivity Matrix
---------------------------------------
For simplicity, we focus on the bounding of $G^{\mathrm{qq}}_{ij}$; $G^{\mathrm{hh}}_{ij}$ may be bounded with an exactly analogous ansatz. Let us write the background charge density as $$\mathcal{Q} = \mathcal{Q}_0 + \hat{\mathcal{Q}},$$ with $\mathbb{E}[\mathcal{Q}]=\mathcal{Q}_0 \ne 0$ and $\mathbb{E}[\hat{\mathcal{Q}}]=0$. Let us also split $\Phi^{\mathrm{q}}$ into a linear term sourcing a background electric field, and a periodic response $\varphi$: $$\Phi^{\mathrm{q}} = \varphi + \sum_i V^{\mathrm{q}}_i \left(1-\frac{x_i}{L}\right).$$ It is easier to deal with (\[upcons\]) in Fourier space, so let us write (with $E_i = V^{\mathrm{q}}_i/L$): $$\mathrm{i}E_i \mathcal{Q}(\mathbf{k}) + \sum_{\mathbf{q}} q_i \varphi(\mathbf{q})\mathcal{Q}(\mathbf{k}-\mathbf{q}) = k_j \mathcal{T}_{ij}(\mathbf{k}).$$
Let us begin by assuming that $\eta\rightarrow\infty$, so that we may ignore the response of $\mathcal{T}_{ij}$ to $\Phi$ when computing (\[pupperbound\]). The only constraint we must impose in this limit is $$\mathbb{E}[\mathcal{Q} \partial_i \tilde\Phi^{\mathrm{q}}] = 0.$$ A natural guess for $\varphi$, inspired by exact results in the weak disorder limit in Section \[sec32\], is $$\varphi(\mathbf{k}) = -\mathrm{i} \frac{k_i E_i }{Ak^2} \hat{\mathcal{Q}}(\mathbf{k}). \label{eqcalf}$$with $A$ a positive constant constrained by (\[upcons\]): $$E_i \mathcal{Q}_0 = E_j \sum_{\mathbf{q}} \frac{q_iq_j}{Aq^2} |\hat{\mathcal{Q}}(\mathbf{q})|^2 \label{fconstraint}$$ So far, up to neglecting viscous contributions to $\mathcal{P}$, this is completely rigorous.
For simplicity, suppose that $\hat{\mathcal{Q}}(\mathbf{k})$ are disordered random variables, that are not drawn from a heavy-tailed distribution:
$$\begin{aligned}
\mathbb{E}_{\mathrm{d}}[\hat{\mathcal{Q}}(\mathbf{k})] &= 0, \\
\mathbb{E}_{\mathrm{d}}[\hat{\mathcal{Q}}(\mathbf{k})\hat{\mathcal{Q}}(\mathbf{q})] &= \frac{u^2}{N} \delta_{\mathbf{k},-\mathbf{q}}.
\end{aligned}$$
This will allow us to extract meaningful qualitative information out of bounds which are otherwise quite opaque in Fourier space. $A$ is fixed by (\[fconstraint\]) as $N\rightarrow\infty$: $$A = \frac{u^2}{d\mathcal{Q}_0}.$$ Fluctuations in $A$ are suppressed as $N^{-1/2}$ [@lucas1411].
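The normalization $A = u^2/(d\mathcal{Q}_0)$ follows from the angular average $\mathbb{E}[q_iq_j/q^2] = \delta_{ij}/d$; a quick Monte Carlo check (the values of $u$ and $\mathcal{Q}_0$ are arbitrary, and $|\hat{\mathcal{Q}}(\mathbf{k})|^2$ is set to its mean $u^2/N$):

```python
import numpy as np

rng = np.random.default_rng(5)
d, N, u, Q0 = 2, 20000, 0.7, 1.3
q = rng.standard_normal((N, d))               # N random Fourier modes
Qhat2 = np.full(N, u**2 / N)                  # |Qhat(k)|^2 at its mean value
A = u**2 / (d * Q0)                           # claimed solution of the constraint

# M_ij = sum_q (q_i q_j / A q^2) |Qhat(q)|^2 should approach Q0 * delta_ij,
# so that E_i Q0 = M_ij E_j is satisfied for every direction of E
M = np.einsum('ni,nj,n->ij', q, q, Qhat2 / (A * (q**2).sum(axis=1)))
```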
Plugging this $\Phi$ into (\[pupperbound\]) we obtain $$\begin{aligned}
G^{\mathrm{qq}}_{ij} E_i E_j &\le L^{d-2} \mathbb{E}\left[ \Sigma^{\mathrm{qq}}_{ij}(\mathbf{x}) (E_i - \partial_i \varphi)(E_j - \partial_j \varphi) \right] \notag \\
&= L^{d-2}\left[ \mathbb{E}\left[\Sigma^{\mathrm{qq}}_{ij}\right] E_i E_j - 2\mathbb{E}\left[\Sigma^{\mathrm{qq}}_{ij}\partial_i \varphi \right] E_j + \mathbb{E}\left[\Sigma^{\mathrm{qq}}_{ij}\partial_i \varphi \partial_j \varphi \right]\right] .\end{aligned}$$ Now, recall that we expect $\Sigma^{\mathrm{qq}}$ is a function of $\bar \mu$, and $\hat{\mathcal{Q}}$ is a function of $\bar \mu$ as well, so let us just consider $\Sigma^{\mathrm{qq}}$ to be a function of $\hat{\mathcal{Q}}$. Then we obtain (exploiting isotropy): $$\begin{aligned}
\mathbb{E}_{\mathrm{d}}[\sigma^{\mathrm{qq}}_{ij}] &\le \frac{\delta_{ij}}{d} \mathbb{E}_{\mathrm{d}} \left[d\Sigma^{\mathrm{qq}}(\hat{\mathcal{Q}}) - 2\sum_{\mathbf{k}}\Sigma^{\mathrm{qq}}(-\mathbf{k})\frac{\hat{\mathcal{Q}}(\mathbf{k})}{A} + \sum_{\mathbf{k}_1,\mathbf{k}_2}\Sigma^{\mathrm{qq}}(-\mathbf{k}_1-\mathbf{k}_2)\frac{\hat{\mathcal{Q}}(\mathbf{k}_1)\hat{\mathcal{Q}}(\mathbf{k}_2)}{A^2}\frac{(\mathbf{k}_1\cdot\mathbf{k}_2)^2}{k_1^2k_2^2}\right] \notag \\
&\le \frac{\delta_{ij}}{d}\mathbb{E}_{\mathrm{d}} \left[d\Sigma^{\mathrm{qq}}(\hat{\mathcal{Q}}) - 2\sum_{\mathbf{k}}\Sigma^{\mathrm{qq}}(-\mathbf{k})\frac{\hat{\mathcal{Q}}(\mathbf{k})}{A} + \sum_{\mathbf{k}_1,\mathbf{k}_2}\Sigma^{\mathrm{qq}}(-\mathbf{k}_1-\mathbf{k}_2)\frac{\hat{\mathcal{Q}}(\mathbf{k}_1)\hat{\mathcal{Q}}(\mathbf{k}_2)}{A^2}\right] \notag \\
&= \mathbb{E}_{\mathrm{d}}\left[\Sigma^{\mathrm{qq}} - \frac{2\mathcal{Q}_0\hat{\mathcal{Q}}\Sigma^{\mathrm{qq}}}{u^2} + \frac{d\mathcal{Q}_0^2\hat{\mathcal{Q}}^2 \Sigma^{\mathrm{qq}}}{u^4}\right]\delta_{ij} . \label{eq125}\end{aligned}$$ This equation holds for arbitrary functions $\Sigma^{\mathrm{qq}}(\hat{\mathcal{Q}})>0$, so long as viscosity is negligible. Our disorder-averaged bound on $\sigma_{ij}$ is also manifestly positive, as it must be.
In a theory with $\mathcal{Q}_0 \rightarrow 0$, and viscosity still negligible, our upper bound collapses to $\mathbb{E}[\Sigma^{\mathrm{qq}}]$. This bound may also be found at $\mathcal{Q}_0=0$ by directly plugging the constraint-satisfying ansatz $\Phi^{\mathrm{q}} = -E_ix_i$ into (\[pupperbound\]), so our bound is actually valid for all $\mathcal{Q}_0$.
Now, let us consider the effects of finite viscosity. Henceforth, the discussion will be more qualitative, and we will not be particularly concerned with O(1) prefactors, as it turns out to be quite difficult to write down a good non-perturbative analytic solution to (\[upcons\]).
Let us see what happens if we simply use (\[eqcalf\]), along with a sensible ansatz for $\mathcal{T}_{ij}$. Denoting $$\mathrm{i}E_i \mathcal{Q}(\mathbf{k}) + \sum_{\mathbf{q}}q_i \varphi(\mathbf{q}) \mathcal{Q}(\mathbf{k}-\mathbf{q}) = \mathcal{A}_i(\mathbf{k}),$$ we pick in $d=1$: $$\mathcal{T}_{xx} = \frac{\mathcal{A}_x}{k_x},$$in $d=2$:
$$\begin{aligned}
\mathcal{T}_{xx} &= - \mathcal{T}_{yy} = \frac{k_x \mathcal{A}_x - k_y\mathcal{A}_y}{k_x^2+k_y^2}, \\
\mathcal{T}_{xy} &= \frac{k_y\mathcal{A}_x + k_x\mathcal{A}_y}{k_x^2+k_y^2},\end{aligned}$$
and in $d=3$:
$$\begin{aligned}
\mathcal{T}_{xy} &= \frac{k_x\mathcal{A}_{x}+k_y\mathcal{A}_{y}-k_z\mathcal{A}_{z}}{2k_xk_y}, \\
\mathcal{T}_{xz} &= \frac{k_x\mathcal{A}_{x}-k_y\mathcal{A}_{y}+k_z\mathcal{A}_{z}}{2k_xk_z}, \\
\mathcal{T}_{yz} &= \frac{-k_x\mathcal{A}_{x}+k_y\mathcal{A}_{y}+k_z\mathcal{A}_{z}}{2k_yk_z}.\end{aligned}$$
In all of the above cases, the equations are only valid at $\mathbf{k}\ne\mathbf{0}$, and we take the zero modes to vanish. In $d=3$, we make the stronger assumption that, e.g., $\mathcal{T}_{xy}=0$ whenever $k_x=0$ or $k_y=0$. It is now straightforward to (qualitatively) see what happens. The first contribution to the conductivity is unchanged from (\[eq125\]), and the average viscous power dissipated scales as $$\begin{aligned}
\mathbb{E}\left[\eta^{-1}_{ijkl}\mathcal{T}_{ij}\mathcal{T}_{kl}\right] &\sim \sum_{\mathbf{k}\ne \mathbf{0}} \eta^{-1}(\mathbf{0}) \frac{\mathcal{A}(\mathbf{k})\mathcal{A}(-\mathbf{k})}{k^2} + \sum_{\mathbf{k}_1,\mathbf{k}_2\ne \mathbf{0}} \eta^{-1}(-\mathbf{k}_1-\mathbf{k}_2)\frac{\mathcal{A}(\mathbf{k}_1)\mathcal{A}(\mathbf{k}_2)}{k_1k_2} \notag \\
&\sim \left[\frac{\xi^2 u^2}{\eta} + \frac{\xi^2\mathcal{Q}_0^2}{\eta} + \frac{\xi^2 \mathcal{Q}_0^4}{\eta u^2} \right]E_i E_i \label{eq130}\end{aligned}$$ where we have been schematic and neglected tensor indices on $\eta$. To obtain the final scaling law above, we have used that $\mathcal{A}(\mathbf{k}\ne \mathbf{0}) \sim \eta^{-1}(\mathbf{k}\ne \mathbf{0}) \sim 1/\sqrt{N}$. We have neglected in the above scaling the possibility that fluctuations in $\eta$ may be large.
Of course, the general framework can certainly account for this possibility, and one can directly plug our ansatzes into (\[pupperbound\]) – however, we do not see general simplifications that can be made, other than the crude scaling arguments here. Given our ansatzes, the expression in (\[eq130\]) is generally nonlocal and thus will not be as elegant as (\[eq125\]). Putting all of this together, we find (\[boundseq\]).
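As a quick consistency check, the $d=2$ and $d=3$ ansatzes above satisfy the algebraic identity $k_i\mathcal{T}_{ij}(\mathbf{k})=\mathcal{A}_j(\mathbf{k})$ (with the diagonal components taken to vanish in $d=3$). A minimal numerical sketch of this identity, with a random momentum and source vector:

```python
import numpy as np

rng = np.random.default_rng(0)
kx, ky, kz = rng.normal(size=3)
Ax, Ay, Az = rng.normal(size=3)

# d = 2: traceless symmetric ansatz
k2 = kx**2 + ky**2
T2 = np.array([[(kx * Ax - ky * Ay) / k2, (ky * Ax + kx * Ay) / k2],
               [(ky * Ax + kx * Ay) / k2, -(kx * Ax - ky * Ay) / k2]])
assert np.allclose(np.array([kx, ky]) @ T2, [Ax, Ay])

# d = 3: purely off-diagonal ansatz
Txy = (kx * Ax + ky * Ay - kz * Az) / (2 * kx * ky)
Txz = (kx * Ax - ky * Ay + kz * Az) / (2 * kx * kz)
Tyz = (-kx * Ax + ky * Ay + kz * Az) / (2 * ky * kz)
T3 = np.array([[0, Txy, Txz], [Txy, 0, Tyz], [Txz, Tyz, 0]])
assert np.allclose(np.array([kx, ky, kz]) @ T3, [Ax, Ay, Az])
```

The $d=1$ case, $\mathcal{T}_{xx}=\mathcal{A}_x/k_x$, satisfies the same identity trivially.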
Essential to the scaling laws in (\[eq130\]) was that $\sum k^{-2} \sim N\xi^2$ above. However, this is not true in $d\le 2$, where the sum will diverge at small $k$. We now give a heuristic argument that the typical scaling behavior we found above need not be parametrically different in these dimensions, consistent with what we found previously (in Section \[sec32\], the form of the conductivities is the same in all dimensions at weak disorder). To do so, we need to argue that there is a small modification that we can make to $\varphi$ (so the contribution to the bound on conductivity from $\Sigma^{\mathrm{qq}}$ is qualitatively unchanged), yet which can remove the IR divergence from the viscous contribution. The natural guess is to modify $\varphi(\mathbf{k})$ so that $\mathcal{T}_{ij}(\mathbf{k}) =0 $ for $|\mathbf{k}| \xi \lesssim \delta$, with $\delta \ll 1$ a small constant. In this case, we predict that $$\sum \frac{1}{k^2} \sim \left\lbrace\begin{array}{ll} N/\delta &\ d=1 \\ N\log(1/\delta) &\ d=2\end{array}\right.,$$ and so if we choose a “reasonable” $\delta$ (especially in $d=2$), we can get acceptably finite viscous contributions to the conductivity. This can be accomplished by writing down $$\varphi = -\mathrm{i}\frac{k_iE_i}{Ak^2}\left(\hat{\mathcal{Q}}(\mathbf{k}) + \delta \cdot Q(\mathbf{k})\right),$$with the $\delta\to 0$ piece $\varphi_0$ given by (\[eqcalf\]), and $Q\sim \hat{\mathcal{Q}}$, carrying no anomalous powers of $N$ or $\delta$. As typical elements of $\varphi$ change by an amount $\sim \delta$, we expect that the conductivity in (\[eq125\]) has only changed by a small amount.
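The claimed IR scalings can be checked directly on a discrete momentum lattice. The sketch below (with $\xi$ and all prefactors purely illustrative) sums $1/k^2$ over the modes with $|\mathbf{k}|\xi > \delta$, and confirms that halving $\delta$ roughly doubles the sum in $d=1$:

```python
import numpy as np

def inv_k2_sum(N, delta, d=1, xi=1.0):
    """Sum 1/k^2 over lattice momenta k_n = 2*pi*n/(N*xi) with |k|*xi > delta."""
    n = np.arange(-N // 2, N // 2)
    k1 = 2 * np.pi * n / (N * xi)
    if d == 1:
        k2 = k1**2
    else:  # d == 2
        kgx, kgy = np.meshgrid(k1, k1)
        k2 = kgx**2 + kgy**2
    k2 = k2[np.sqrt(k2) * xi > delta]   # IR cutoff; also drops the zero mode
    return np.sum(1.0 / k2)

# d = 1: the sum scales as N/delta, so this ratio should be close to 2
ratio = inv_k2_sum(20000, 0.005) / inv_k2_sum(20000, 0.01)
```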
Let us verify this is possible. We wish to find a solution to the highly overdetermined equation $$\delta_{ij} \hat{\mathcal{Q}}(\mathbf{k}) (1-\delta_{\mathbf{k},\mathbf{0}}) - \mathcal{Q}_0 \frac{q_iq_j}{Aq^2}\hat{\mathcal{Q}}(\mathbf{k}) - \sum_{\mathbf{q}} \hat{\mathcal{Q}}(\mathbf{k}-\mathbf{q}) \frac{q_iq_j}{Aq^2} \hat{\mathcal{Q}}(\mathbf{q}) =\sum_{\mathbf{q}} \mathcal{Q}(\mathbf{k}-\mathbf{q}) \frac{q_iq_j}{Aq^2} \delta Q(\mathbf{q}),\;\;\;\; 0\le |\mathbf{k}| \le \frac{\delta}{\xi}. \label{eq128}$$ The left hand side of the above equation scales as $1/\sqrt{N}$ for typical disorder in $\hat{\mathcal{Q}}$. In the third term, this scaling follows from the lack of correlations between $\hat{\mathcal{Q}}$ at different momenta. So long as we choose $Q(\mathbf{k})=0$ for $|\mathbf{k}|\xi \lesssim 2\delta$, then the “matrix elements” $\hat{\mathcal{Q}}$ on the right hand side are also small.
We now look for $Q(\mathbf{q})$ by solving a constrained optimization problem of the schematic form: $$\mathbf{a} = \mathsf{B}\mathbf{x},$$ where $\mathbf{a} \in \mathbb{R}^{n_1}$ and $\mathbf{x} \in \mathbb{R}^{n_2}$, $n_1 \sim \delta N \ll n_2\sim (1-2^d\delta)N\sim N$, and $\mathsf{B}$ is a rectangular matrix, such that $|\mathbf{x}|$ is smallest. $\mathbf{a}$ is analogous to the (known) left hand side of (\[eq128\]), $\mathsf{B}$ is a known, truncated convolution-like matrix in a Fourier basis, and $\mathbf{x}$ plays the role of the undetermined, high momentum modes of $\delta Q(\mathbf{q})$. This is a classic problem in constrained optimization with the minimum-norm solution [@boyd] $$\delta Q(\mathbf{q}) = \mathsf{B}^{\mathrm{T}}(\mathsf{BB}^{\mathrm{T}})^{-1} \mathbf{a}.$$ Given that elements of $\mathbf{a}$ and $\mathsf{B}$ each scale as $\sqrt{1/N}$, we roughly estimate that $\delta Q(\mathbf{q}) \sim \delta$, and so indeed it is possible to pick a small correction to $\varphi$ which eliminates the divergence in the viscous contribution to the conductivity. Note that the matrix $\mathsf{BB}^{\mathrm{T}}$ is nearly diagonal (the off diagonal elements involve uncorrelated sums of random variables and so scale as $1/\sqrt{N}$ instead of 1), and so there are no concerns about parametrically small eigenvalues of $\mathsf{BB}^{\mathrm{T}}$.
Striped Models {#appstripe}
==============
Thinking about resistivities turns out to be most convenient for models in $d=1$, or with translational symmetry only broken in a single direction $x$, as noted in [@andreev]. This follows from the general arguments that we make in Appendix \[appa\]. Since we know that the current flow $\mathcal{J}^\alpha_x$ is a constant, to solve the variational problem exactly we need only vary (\[tildeplower\]) with respect to arbitrary $\tilde{v}$ – the global minimum will correspond to the true velocity field $\bar v$. We find that $\bar v$ obeys the differential equation $$\partial_x \left(\eta_{xxxx}\partial_x \bar v\right) = \left(\Sigma^{-1}\right)_{xx}^{\alpha\beta}\left(\mathcal{J}^\beta_x - \rho^\beta \bar v\right)\rho^\alpha.$$ This second order linear differential equation cannot be solved exactly in general. We do not believe that closed form solutions exist in general for $\sigma^{\alpha\beta}_{xx}$, though they can be found in special cases – for example, if $\Sigma^{\mathrm{hh}}$ is the only non-vanishing diffusive transport coefficient (as in [@andreev], for a non-critical fluid), or if $\Sigma^{\mathrm{qq}}$ is the only non-vanishing coefficient, as in Section \[sec:holostripe\]. In both of these cases, $\Sigma^{\alpha\beta}$ is not an invertible matrix – the zero eigenvector then provides a constraint which fixes $v_x$ in terms of $\mathcal{J}^\alpha_x$.
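For a single conserved charge the equation above reduces to a scalar periodic boundary value problem that is straightforward to solve by finite differences. A minimal sketch (all coefficient profiles below are hypothetical, chosen only to illustrate the structure; the multi-charge case would replace the scalar $\rho^2/\Sigma$ by the contraction $\rho^\alpha(\Sigma^{-1})^{\alpha\beta}\rho^\beta$):

```python
import numpy as np

def solve_stripe(eta_half, rho, Sigma, J, h):
    """Solve d/dx(eta dv/dx) = (J - rho*v)*rho/Sigma on a periodic grid.

    eta_half[i] holds eta at the midpoint between grid points i and i+1,
    giving a conservative discretization of the viscous term.
    """
    n = len(rho)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (eta_half[i] + eta_half[i - 1]) / h**2 + rho[i]**2 / Sigma[i]
        A[i, (i + 1) % n] = -eta_half[i] / h**2
        A[i, i - 1] = -eta_half[i - 1] / h**2
    return np.linalg.solve(A, rho * J / Sigma)

n = 256
h = 2 * np.pi / n
x = np.arange(n) * h
v = solve_stripe(1 + 0.3 * np.cos(x + h / 2),  # eta(x) on the half-grid
                 1 + 0.5 * np.cos(x),          # rho(x)
                 0.8 + 0.2 * np.sin(x),        # Sigma(x)
                 1.0, h)
```

In the uniform limit the viscous term drops out and the solver returns the constant drift $v = \mathcal{J}/\rho$, as expected from the differential equation.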
In particular, let us carry out this computation explicitly for the holographic striped models with equations of state given in Section \[sec:holostripe\]. We must generalize the discussion to curved spaces, but this is not so difficult. The heat conservation equation implies (on a curved space) that $$\mathcal{J}^{\mathrm{h}} = \sqrt{\gamma}\gamma^{xx} T\mathcal{S} v_x = \mathrm{e}^{-B}T\mathcal{S}v_x = \text{constant}.$$ Note that $\sqrt{\gamma}=1$, which simplifies calculations. After an appropriate generalization to curved space, we use (\[tildeplower\]) (on the true solution) to compute the inverse thermoelectric conductivity matrix: $$\begin{aligned}
\left(\sigma^{-1}\right)^{\alpha\beta}\mathcal{J}^\alpha\mathcal{J}^\beta &= \mathbb{E}\left[\sqrt{\gamma}\left(\gamma_{xx}\left( \mathcal{J}^{\mathrm{q}} - \mathcal{Q} v_x\gamma^{xx}\sqrt{\gamma}\right)^2 + \frac{\eta}{2}\gamma^{ij}\gamma^{kl}s_{ik}s_{jl}\right)\right] \end{aligned}$$ where $$s^{ij} \equiv \gamma^{ik}\gamma^{jl} \left(\nabla_k v_l + \nabla_l v_k\right) - \gamma^{ij}\gamma^{kl}\nabla_k v_l.$$ For our set-up, parity symmetry in the $y$ direction ensures that $v_y=0$, and that the only non-vanishing components of $s^{ij}$ are $$\gamma_{xx} s^{xx} = -\gamma_{yy}s^{yy} = \mathrm{e}^{-B}\partial_x v_x.$$ Putting this together and using (\[donosdata2\]) to determine $\eta$, $\mathcal{S}$ and $\mathcal{Q}$, we obtain $$\left(\sigma^{-1}\right)^{\alpha\beta}_{xx}\mathcal{J}^\alpha\mathcal{J}^\beta = \mathbb{E}\left[\mathrm{e}^B \left(\mathcal{J}^{\mathrm{q}} - \frac{Sa_t}{4\pi TSH_{tt}}\mathcal{J}^{\mathrm{h}}\right)^2 + S \left(\mathrm{e}^{-B}\partial_x \left(\frac{\mathrm{e}^B}{4\pi TS}\right)\right)^2 \left(\mathcal{J}^{\mathrm{h}}\right)^2\right],$$which gives (\[donosresult\]).
[^1]: This is technically not quite right – there is one (set of) bulk scalar fields in these models which is of the form $\phi_i = kx_i$, but this choice maintains homogeneity in the sectors of the theory of interest.
[^2]: See [@davison15; @blake2] for recent updates on this particular holographic model.
[^3]: In graphene, for example, $\sigma_{\textsc{q}} \sim e^2/h$, with $e$ the charge of the electron.
[^4]: We can maintain an electric current without adding any energy by simply shifting to a moving reference frame.
[^5]: $|\bar\mu(\mathbf{x}_1)-\bar\mu(\mathbf{x}_2)|$ can be comparable to, or larger than, $|\bar\mu(\mathbf{x}_1)|$, so long as $|\mathbf{x}_1-\mathbf{x}_2|\gg l$.
[^6]: Note that $\delta\mathcal{Q} = (\partial \mathcal{Q}/\partial \mu)\, \delta\mu + (\partial \mathcal{Q}/\partial T)\, \delta T$.
[^7]: In contrast, [@lucasMM] has recently used the memory matrix formalism to recover hydrodynamic transport, with expressions for the phenomenological $\tau$ and other thermodynamic coefficients expressed in terms of microscopic Green’s functions. This is a more complete approach to the problem, but does not generalize easily to higher orders in perturbation theory.
[^8]: Note that this linear term in perturbations is parametrically large in $h$ – but it is linear in $F^\alpha_i$. Our perturbative parameter is first and foremost $F^\alpha_i$, since we are computing a linear response transport coefficient. And while the background may also be treated perturbatively in disorder strength, we must take $F^\alpha_i\rightarrow 0$ *before* $h\rightarrow 0$.
[^9]: This expectation value is also encoded near the boundary of AdS.
[^10]: Note that their results simplify, in some special cases, to analytic results derived in [@chesler; @peet]. The case where $\mathcal{Q}=0$ was also studied in [@ugajin].
[^11]: Of course, this would lead to temperature growth at second order in perturbation theory, so that the energy conservation equation (up to external sources) exactly holds at all orders.
[^12]: For example, this may correspond in a conformal fluid to deforming the equation of state with a non-zero bulk viscosity.
[^13]: When $\mu \gg T$, generally Fermi liquid theory is valid.
[^14]: As the percolation problem is trivial in $d=1$, the study of holographic insulators may be quite a bit richer in models with translational symmetry broken in multiple spatial dimensions.
[^15]: To compute an eigenvalue, one should first properly “re-dimensionalize" $\hat\rho^\alpha$ so that all matrix elements of $\tilde{\sigma}^{\alpha\beta}$ have the same dimension.
---
abstract: 'We consider colour-electric screening as expressed by the quark contribution to the Debye mass calculated in a PNJL model with emphasis on confining and chiral symmetry breaking effects. We observe that the screening mass is entirely determined by the nonperturbative quark distribution function and temperature dependent QCD running coupling. The role of the gluon background (Polyakov loop) is to provide strong suppression of the number of charge carriers below the transition temperature, as an effect of confinement, while the temperature dependent dynamical quark mass contributes additional suppression, as an effect of chiral symmetry breaking. An alternative derivation of this result from a modified kinetic theory is given, which allows for a slight generalization and explicit contact with perturbative QCD. This gives the possibility to gain insights into the colour screening mechanism in the region near the QCD pseudocritical temperature and to provide a guideline for the interpretation of lattice QCD data.'
author:
- 'J. Jankowski'
- 'D. Blaschke'
- 'O. Kaczmarek'
title: Debye mass at the QCD transition in the PNJL model
---
Introduction {#Intro}
============
Theoretical and experimental investigations of quantum chromodynamics (QCD) at finite temperatures are performed with the aim to gain insight into the mechanisms of chiral symmetry restoration and deconfinement. From this perspective the heavy-quark (HQ) potential is one of the most important probes to be studied. In the vacuum the heavy quark-antiquark system is well described in terms of effective field theories, owing to the energy scale separation [@Brambilla:1999xf; @Brambilla:2004jw] and the fact that the HQ potential can be properly defined in terms of large Wilson loops. Then the spectrum of charmonium and bottomonium states as solutions of the Schrödinger equation for heavy quarkonia can be extracted and confronted with experiment. Finite temperature studies rely on the extraction of a static potential from [*ab initio*]{} simulations of lattice QCD (lQCD), considering the singlet free energy due to a pair of static color charges as a function of their distance by means of Polyakov loop correlators [@Kaczmarek:2005ui]. However, the systematic field theoretic description within the NRQCD framework [@Brambilla:2004jw] is much more subtle and delicate because new variable energy scales like the temperature $T$ or the screening mass $m_D$ enter the problem, distorting the energy hierarchy. In addition, from the definition of a real-time potential [@Laine:2006ns] it was realized that also an imaginary part of the potential appears at finite temperature [@Laine:2007qy; @Beraudo:2007ky; @Brambilla:2008cx]. Using lattice QCD studies of this complex-valued static potential it was recently found that the real part is well described by the color singlet free energy of a static quark-antiquark pair [@Burnier:2014ssa].
While those results shed light on the important question about the identification of the HQ potential with the colour singlet free energy [@Wong:2004zr] another important question is that of microphysics insights into the screening mechanism. Based on some first principle motivations [@Megias:2007pq], Riek and Rapp have proposed an ansatz for the HQ potential [@Riek:2010fk] in the form of a screened Cornell potential where the Coulomb and the linear parts are subject to two different screening masses $m_D$ and $\tilde {m}_D$, respectively. The fit of the temperature dependence of these parameters provided in Ref. [@Riek:2010fk] using available lQCD data has revealed some unexpected aspects. First, the coulombic Debye mass $m_D$ has a linear behaviour with very small slope (smaller than expected from pQCD). Second, the screening mass of the confining part, $\tilde {m}_D$, shows a strong suppression for temperatures below $T_c$ and a linear rise for high temperatures (higher than expected from pQCD).
This is somewhat different from the standard approach, where the lQCD data for the large distance part of this HQ potential are fitted either to a Debye screened Coulomb ansatz or to a form motivated by a Debye-Hückel theory [@Kaczmarek:2007pb; @Digal:2005ht; @Burnier:2015nsa]. An ansatz taking into account real and imaginary parts of the HQ potential has been recently considered in [@Burnier:2015nsa] in the context of quenched lattice data. Here we use $m_D(T)$ from lQCD data with dynamical fermions obtained from fitting the heavy quark-antiquark free energy at large separations to the standard Debye screened potential [@Kaczmarek:2007pb] to compare with the model predictions. While its behaviour well above the pseudocritical temperature $T_c$ of the QCD phase transition can qualitatively be understood in terms of perturbation theory, the interpretation of the lattice data in the vicinity of $T_c$ requires essentially nonperturbative approaches addressing effects of confinement and chiral symmetry breaking. The leading order perturbative result reads $m_D=(1+\frac{N_f}{6})^\frac{1}{2}gT$, while the next-to-leading order can be obtained by resummation of the leading contribution of the high temperature expansion [@Rebhan:1993az; @Rebhan:1994mx; @Braaten:1994pk]. A detailed understanding of the physics behind the Debye mass in the nonperturbative domain is the subject of many current studies.
Approaches based on the operator product expansion (OPE) [@Chakraborty:2011uw], gauge/gravity duality [@Bak:2007fk; @Finazzo:2014zga] or phenomenological models [@Megias:2007pq] make an attempt to give a microscopic description of the screening phenomenon. However, the very definition and numerical determination of a screening mass is obscured by the complications of the non-abelian nature of QCD and the large value of the coupling constant near the QCD transition region [@Nadkarni:1986cz; @Arnold:1995bh].
In this paper we investigate screening effects in a PNJL model which proved successful in reproducing various aspects of hadronic excitations in the medium [@Hansen:2006ee] and of lQCD thermodynamics [@Ratti:2005jh]. We will evaluate the one-loop polarization function using PNJL propagators with QED like vertices thus extending a previous calculation made for massless quarks [@Jankowski:2009kr]. In this rough way we implement confinement and chiral symmetry breaking effects which in turn allows a comparison to the lattice data for the Debye mass. We will show that one can reproduce the correct shape of its temperature dependence. However, due to the absence of dynamical gluons in our PNJL model calculation, we lack dynamical degrees of freedom and therefore stay below the lQCD result.
It is well known that the screening mass in QED or pQCD can also be derived from kinetic theory [@Mrowczynski:1998nc; @Blaizot:2001nr]. Interrelations between plasma physics and the quark gluon plasma are known to provide relevant insights and physical motivation for field theory calculations [@Mrowczynski:2007hb]. We also explore this possibility and modify the standard kinetic theory approach by replacing the usual Fermi-Dirac distribution functions for quarks with those modified by the coupling to the Polyakov loop. In this way we are able to reproduce the result for the Debye mass calculated within our model and furthermore to make contact with QCD by including the effects of perturbative non-Abelian vertices.
The paper is organized as follows. In section \[model\_mD\] we outline the model calculation of the Debye mass within the PNJL model, whereby details are referred to the Appendix. Section \[KinTheo\] presents the kinetic theory approach to the problem where we give a simple and intuitive derivation of the screening mass suggesting a straightforward modification of the standard approach. Section \[LQCD\] is devoted to a comparison with lQCD data and their interpretation while section \[Concl\] gives the conclusions.
PNJL model calculation of the Debye mass {#model_mD}
========================================
In reference [@Jankowski:2009kr] a model was considered where the vacuum HQ potential was screened by a quark loop with internal lines coupled to a temporal background gluon field. For the static interaction potential $V(q)$, $ q^{2} = |{\bf{q}}|^{2} $, the statically screened potential is given by a resummation of one-particle irreducible diagrams (RPA “bubble” resummation) $$V_{\rm sc}(q) = {V(q)}/[{1 + F(0; {\bf q})/q^{2}}]~,
\label{Vsc}$$ where the longitudinal polarization function in the finite temperature case is defined via the projector decomposition of the self energy (in Euclidean space) $$\Pi_{\mu\nu}(i\omega,q) = F(i\omega,q) P^L_{\mu\nu} + G(i\omega,q)P^T_{\mu\nu}~,
\label{eq:Decomposition}$$ where the projectors satisfy $$(P^L)^2=(P^T)^2=1 \hspace{15pt} P^TP^L=P^LP^T=0~,
\label{eq:}$$ $$P^L_{\mu\nu} + P^T_{\mu\nu} = \delta_{\mu\nu} - \frac{q_\mu q_\nu}{Q^2}~,
\label{eq:}$$ $$P^T_{ij} = \delta_{ij} - \frac{Q_iQ_j}{Q^2} \hspace{15pt} P^T_{44}=P^T_{j4}=P^T_{4j}=0~.
\label{eq:}$$ Here, $Q=(\omega,{\bf q})$ is the Euclidean four momentum. From this we get the gauge invariant longitudinal component $$F(i\omega,q) = \frac{Q^2}{q^2}\Pi_{44}(i\omega,q)~,
\label{eq:}$$ and it can be calculated within thermal field theory as
$$\Pi_{44}(i\omega_{l};{\bf q} )
= g^{2} T\sum_{n=-\infty}^{\infty} \int\frac{d^{3}p}{(2\pi)^{3}}
{\textrm{ Tr}} [\gamma_{4}S_{\Phi}(i\omega_{n}; {\bf p})
\gamma_{4}S_{\Phi}(i\omega_{n}-i\omega_{l}; {\bf p} - {\bf q})]~.$$
Here $\omega_{l}=2\pi lT$ are the bosonic and $\omega_{n}=(2n+1)\pi T$ the fermionic Matsubara frequencies of the imaginary-time formalism. The symbol ${\textrm{Tr}}$ stands for traces in color, flavor and Dirac spaces. $S_{\Phi}$ is the fermionic quark propagator coupled to the homogeneous static gluon background field $\varphi_3$. Its inverse is given by [@Ratti:2005jh; @Hansen:2006ee] $$S^{-1}_{\Phi}( {\bf p}; i\omega_{n} ) =
{\bf \gamma\cdot p} - \gamma_{4}\omega_{n} + \gamma_4\lambda_{3}\varphi_3+m~,
\label{coupling}$$ where $\{\gamma_\mu,\gamma_\nu\}=-2\delta_{\mu\nu}$ and $m=m(T)$ is the dynamically generated temperature dependent mass for light quarks as described, e.g., within the NJL model [@Klevansky:1992qe]. The variable $\varphi_3$ is related to the Polyakov loop variable defined by [@Ratti:2005jh] $$\Phi(T) = \frac{1}{3}\rm Tr_c (e^{i\beta\lambda_{3}\varphi_{3}})
= \frac{1}{3}(1 + 2\cos(\beta\varphi_3) )~.$$ The physics of $\Phi(T)$ is governed by the temperature-dependent Polyakov loop potential ${\cal{U}}(\Phi)$, which is fitted to describe the lattice data for the pressure of the pure glue system [@Ratti:2005jh]. After performing the color-, flavor- and Dirac traces and making the fermionic Matsubara summation we obtain [@Kapusta:2006pm] (see Appendix \[App\] for the details) $$\begin{aligned}
\Pi_{44}(i\omega,q) &=& g^2\textrm{Re}\int_0^\infty \frac{p^2dp}{\pi^2} \frac{2f_\Phi(E_p)}{E_p}
\nonumber\\
&&\Bigg\{1 + \frac{4E_pi\omega + q^2+\omega^2-4E_p^2}{4pq}\ln\frac{R_+(\omega)}{R_-(\omega)}\Bigg\}~,
\nonumber\\
\label{eq:}\end{aligned}$$ where $$R_\pm(\omega) = -\omega^2-q^2-2i\omega E_p\pm2pq~,
\label{eq:}$$ and $\textrm{Re}f(\omega):=\frac{1}{2}(f(\omega)+f(-\omega))$. Taking the static, long wavelength limit [@Kapusta:2006pm; @LeBellac; @Hansen:2006ee] we identify, after continuation $i\omega\rightarrow q_0+i\epsilon$ to the Minkowski space, $F(q_0=0,q\rightarrow0)=m_D^2(T)$ and whence $$\begin{aligned}
m_{D}^2(T) = \frac{16\alpha_s}{\pi} \int_{0}^{\infty}dp\,
\left[p^{2} + E_p^2\right] \frac{f_\Phi(E_p)}{E_p} ~.
\label{debyemass}\end{aligned}$$ Here $m_{D}(T)$ is the Debye mass, $E_p=\sqrt{p^2+m^2}$ is the quasiparticle dispersion relation for light quarks with $N_c=3$ colour and $N_f=2$ flavour degrees of freedom; $f_\Phi(E)$ is the quark distribution function [@Hansen:2006ee] $$f_\Phi(E) = 3\frac{\Phi(1+2e^{- \beta E})e^{- \beta E}+e^{-3 \beta E}}
{1 + 3\Phi(1 + e^{- \beta E})e^{- \beta E}+e^{-3 \beta E}}~.
\label{eq:Dist}$$ This result differs from the standard QED case only in that the Fermi-Dirac distribution has been replaced by the function (\[eq:Dist\]). In comparison to the free fermion case [@LeBellac; @Beraudo:2007ky] the coupling to the Polyakov loop variable $\Phi(T)$ gives rise to a modification of the Debye mass, encoded in the modification of the usual Fermi-Dirac distribution function. In the limit of deconfinement ($\Phi = 1$), the case of a massive quark gas is obtained, while for confinement ($\Phi = 0$) one finds a considerable suppression. The temperature dependence of the Debye mass is shown in figure \[Fig.1\] and, as expected, it lies below the free massless case in the confined and transition regions (with a pseudocritical temperature $T_c\approx200~\textrm{MeV}$). For temperatures $T\gg T_c$ the free gas behaviour $m_D^2=2/3g^2T^2$ is reproduced.
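The two limits are transparent algebraically: for $\Phi=1$ the distribution (\[eq:Dist\]) reduces to three times the Fermi-Dirac function, $3/(e^{\beta E}+1)$, while for $\Phi=0$ only the thermally suppressed three-quark term survives, $3/(e^{3\beta E}+1)$. A quick numerical sketch of both limits (temperature value illustrative):

```python
import numpy as np

def f_Phi(E, T, Phi):
    """Polyakov-loop modified quark distribution function, eq. (Dist)."""
    X = np.exp(-E / T)
    return 3 * (Phi * (1 + 2 * X) * X + X**3) / (1 + 3 * Phi * (1 + X) * X + X**3)

E = np.linspace(0.01, 3.0, 300)   # GeV
T = 0.2                           # GeV, illustrative temperature

deconfined = f_Phi(E, T, 1.0)     # = 3 n_F(E): all three colour states contribute
confined = f_Phi(E, T, 0.0)       # = 3/(e^{3E/T}+1): strong thermal suppression
```

The suppression at $\Phi=0$ is precisely what drives the Debye mass toward zero below $T_c$ in figure \[Fig.1\].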
![Debye mass in leading order perturbation theory (dotted line), including the effect of dynamical chiral symmetry breaking without coupling to the Polyakov loop (dash-dotted line) and including the coupling to the Polyakov loop (solid line). Calculated with $\alpha_s=0.471$ fitted to charmonium spectrum at $T=0$. []{data-label="Fig.1"}](mD.pdf){width="49.00000%"}
The above result can be also obtained in a different way. It is well known that in QED the Debye mass is related to the pressure via the second derivative [@Kapusta:2006pm; @LeBellac] $$m_D^2(T,\mu_e) = e^2 \frac{\partial^2 P(T,\mu_e)}{\partial\mu_e^2}~,
\label{eq:mDQED}$$ where here $\mu_e$ is related to the electric charge of the system. This relation is a consequence of the Dyson-Schwinger equation for the photon self energy and the Ward identity relating the electron-photon vertex to the quark propagator. This is only true in abelian gauge theory and breaks down for non-abelian theories like QCD. On the technical level the proper Ward identity (called then Slavnov-Taylor identity) becomes much more complex and does not allow a simple derivation. Because our calculation in this section is similar to a one-loop QED calculation it is interesting to see if a similar relation holds also in the PNJL model. We check this in the finite temperature and zero chemical potential case by directly evaluating right hand side of eq. (\[eq:mDQED\]). Let us recall the quark contribution to the meanfield pressure reads $$\begin{aligned}
P_q(T,\mu) &=& -2N_f T \int \frac{d^3p}{(2\pi)^3}
\nonumber\\
&&\left\{\ln\left[1+3(\bar \Phi +\Phi X_-)X_- +X_-^3\right]
\right.
\nonumber\\
&&
+\left. \ln\left[1+3(\Phi +\bar \Phi X_+)X_+ + X_+^3\right]\right\},
\label{eq:Pq}\end{aligned}$$ where $X_\mp=e^{-\beta (E_p\mp \mu)}$. The vacuum pressure has been subtracted, and at finite $\mu$ the function $\Phi$ is generally different from its complex conjugate $\bar \Phi$. In the small density limit the constituent quark mass and the expectation value of the traced Polyakov loop are $\mu$-independent, so the second derivative of the quark pressure simplifies; noticing that the integrand can be written as a derivative with respect to the quasiparticle energy $E_p$, we obtain $$\begin{aligned}
m_D^2(T) &=& -g^2 \frac{\partial^2 P}{\partial\mu^2}(T,\mu=0)
\nonumber\\
&=&
-12 g^2 N_f \int \frac{d^3p}{(2\pi)^3} \frac{d}{dE_p}\left[ \frac{X^3+\Phi(X+2X^2)}{1+3\Phi(X+X^2)+X^3}\right],
\nonumber\\
\label{eq:}\end{aligned}$$ where we have used $X=e^{-\beta E_p}$ and the fact that $\Phi=\bar \Phi$ for $\mu=0$. The quantity under the integral can be identified with the energy derivative of our modified distribution function bringing us to the following formula $$m_D^2(T) = -\alpha_s \frac{8N_f}{\pi} \int_0^\infty dp p^2 \frac{df_\Phi}{dE_p}(E_p)~,
\label{eq:}$$ which after integration by parts (see section \[KinTheo\]) gives for $N_f=2$ the result (\[debyemass\]).
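The equivalence of the two routes can be verified numerically: a finite-difference second derivative of the quark pressure in $\mu$ reproduces the one-loop integral (\[debyemass\]). The sketch below uses the positive-pressure convention (so that $m_D^2=+g^2\,\partial^2 P/\partial\mu^2$, as in the QED relation (\[eq:mDQED\]); the two minus signs in the text compensate to give the same result). All parameter values are illustrative:

```python
import numpy as np

def trap(y, x):
    # simple trapezoidal rule (avoids NumPy version differences)
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def f_Phi(E, T, Phi):
    X = np.exp(-E / T)
    return 3 * (Phi * (1 + 2 * X) * X + X**3) / (1 + 3 * Phi * (1 + X) * X + X**3)

def P_q(T, mu, Phi, m, Nf=2):
    # Mean-field quark pressure (vacuum subtracted), evaluated at Phi = Phibar
    p = np.linspace(1e-6, 10.0, 4001)
    E = np.sqrt(p**2 + m**2)
    Xm, Xp = np.exp(-(E - mu) / T), np.exp(-(E + mu) / T)
    integrand = (np.log(1 + 3 * Phi * (1 + Xm) * Xm + Xm**3)
                 + np.log(1 + 3 * Phi * (1 + Xp) * Xp + Xp**3))
    return 2 * Nf * T * trap(p**2 * integrand, p) / (2 * np.pi**2)

T, Phi, m, Nf = 0.25, 0.6, 0.3, 2        # GeV units; illustrative values
g2 = 4 * np.pi * 0.471                   # g^2 from alpha_s = 0.471

eps = 1e-3                               # finite-difference step in mu
mD2_pressure = g2 * (P_q(T, eps, Phi, m) - 2 * P_q(T, 0.0, Phi, m)
                     + P_q(T, -eps, Phi, m)) / eps**2

p = np.linspace(1e-6, 10.0, 4001)
E = np.sqrt(p**2 + m**2)
mD2_loop = 2 * Nf * g2 / np.pi**2 * trap((p**2 + E**2) * f_Phi(E, T, Phi) / E, p)
```

Both numbers agree to the accuracy of the quadrature and the finite difference, confirming that the polarization-function and pressure-derivative routes coincide in the small density limit.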
Kinetic theory approach {#KinTheo}
=======================
The usual Debye screening mass in QED or perturbative QCD can be derived within a kinetic theory approach [@Yagi:2005yb; @Mrowczynski:1998nc]. In this section we will modify the standard kinetic theory so that we consistently reproduce our previous Debye mass derivation and generalize it in order to make contact with perturbative calculations for high temperatures. The kinetic theory approach has been widely used in the context of perturbative QCD [@Blaizot:2001nr; @Litim:1999id] providing a physical picture behind the hard thermal loop approximation and some insights into transport properties and collective modes of the quark gluon plasma. The appropriate change with respect to the textbook result is that the usual Fermi Dirac distribution function is replaced by the Polyakov loop modified distribution function (\[eq:Dist\]). Then the charge density induced by the electrostatic potential $A_0(x)=V(x)$ can be written $$\begin{aligned}
\rho_{\textrm{ind}}(x) &=& 2g\int \frac{d^3 p}{(2\pi)^3} \left[f_\Phi(E_p+gV(x))-f_\Phi(E_p-gV(x))\right]
\nonumber\\
&&\approx 4g^2\int\frac{d^3 p}{(2\pi)^3} \frac{df_\Phi}{dE_p}(E_p)V(x)~,
\label{eq:}\end{aligned}$$ where the factor $2$ is due to the fermion spin. The Maxwell equations give $$\begin{aligned}
-\nabla^2 V(x) &=& \rho_{\textrm{ind}}(x)
= \frac{2g^2}{\pi^2}\int_0^\infty dp p^2 \frac{df_\Phi}{dE_p}(E_p)V(x) \nonumber\\
&=& - m_D^2(T)V(x)~,
\label{eq:Maxwell}\end{aligned}$$ where $$\begin{aligned}
m_D^2(T) &=& -\frac{2N_fg^2}{\pi^2}\int_0^\infty dp p^2 \frac{df_\Phi}{dE_p}(E_p)
\nonumber\\
&=& -\frac{8N_f\alpha_s}{\pi} \int_{0}^{\infty}dp\,
p\sqrt{p^{2} + m^2} \frac{df_\Phi(E_p)}{dp} ~,
\label{eq:}\end{aligned}$$ and the relation $E_pdE_p=pdp$ has been used. Integrating by parts, $$\begin{aligned}
m_D^2(T) &=& -\frac{8N_f\alpha_s}{\pi} \int_{0}^{\infty}dp\,
p\sqrt{p^{2} + m^2} \frac{df_\Phi(E_p)}{dp}
\nonumber\\
&=&-\frac{8N_f\alpha_s}{\pi}
\left[p\sqrt{p^2+m^2}f_\Phi(E_p)|_{p=0}^{p=\infty} \right.
\nonumber\\
&& \left.- \int_{0}^{\infty}dp f_\Phi(E_p) \frac{d}{dp}
\left(p\sqrt{p^2+m^2}\right)\right]~,
\label{eq:}\end{aligned}$$ one arrives at the result for the Debye screening mass for $N_f$ flavours of fermions $$m_D^2(T)
= \frac{8N_f\alpha_s}{\pi} \int_{0}^{\infty}dp\,
\left[p^{2} + E_p^2\right] \frac{f_\Phi(E_p)}{E_p} ~.
\label{eq:}$$ For the non-abelian case the induced density reads $$\rho_{\textrm{ind}}(x) = 2g\int \frac{d^3 p}{(2\pi)^3}
\left[f_+^b(p,x)-f_-^b(p,x)\right]\textrm{Tr}[t^bt^a]~,
\label{eq:}$$ where $\textrm{Tr}[t^bt^a]=\frac{1}{2}\delta^{ab}$ and we assume $$f_\pm^a(x,p)=\pm g \frac{df_\Phi}{dE_p}(E_p)V^a(x)~.
\label{eq:}$$ Doing the same steps as before will give the Debye mass ($N_f=2$) $$m_D^{*2}(T) = \frac{8\alpha_s}{\pi} \int_{0}^{\infty}dp\,
\left[p^{2} + E_p^2\right] \frac{f_\Phi(E_p)}{E_p} ~,
\label{eq:mD_QCD}$$ which now reproduces the Debye mass of perturbative QCD for high temperatures.
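The integration by parts above relies only on $\frac{d}{dp}\big(p\sqrt{p^2+m^2}\big)=(p^2+E_p^2)/E_p$ and on the vanishing of the boundary term, and is easy to confirm numerically (parameter values illustrative):

```python
import numpy as np

def f_Phi(E, T, Phi):
    X = np.exp(-E / T)
    return 3 * (Phi * (1 + 2 * X) * X + X**3) / (1 + 3 * Phi * (1 + X) * X + X**3)

def trap(y, x):
    # simple trapezoidal rule (avoids NumPy version differences)
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

T, Phi, m = 0.25, 0.5, 0.35             # GeV; illustrative values
p = np.linspace(1e-4, 8.0, 200001)
E = np.sqrt(p**2 + m**2)
f = f_Phi(E, T, Phi)

lhs = -trap(p * E * np.gradient(f, p), p)   # -int dp p E df/dp
rhs = trap((p**2 + E**2) * f / E, p)        # int dp (p^2 + E_p^2) f_Phi / E_p
```

The boundary term $p E_p f_\Phi$ vanishes both at $p=0$ and, thanks to the exponential decay of $f_\Phi$, at large $p$, so the two integrals agree.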
Comparison with Lattice QCD {#LQCD}
===========================
Within lQCD, the temperature dependent screening masses have been defined from the exponential fall-off of the colour singlet free energies [@Kaczmarek:2005ui], and the results can be represented by a rescaled leading order perturbative form $$\frac{m_D(T)}{T} = A(T)\left(1+\frac{N_f}{6}\right)^{1/2}
g_{2-\textrm{loop}}(T)~,
\label{eq:}$$ where the factor $A(T)$ was introduced to account for nonperturbative effects. The analysis performed in [@Kaczmarek:2007pb] has shown that $A(T) \approx 1.66$ for $N_f=2+1$ and $T \geq 1.2~T_c$. Obviously, $A(T) \rightarrow 1$ for high temperatures in agreement with perturbation theory. In order to compare our model with lQCD data we have to adopt equation (\[eq:mD\_QCD\]), because it makes contact with perturbation theory for high temperatures. The running coupling constant is modelled in two forms. The first one being the two-loop result [@Caswell:1974gg] $$\alpha(T) = \frac{1}{4\pi\left[2\beta_0\ln(\frac{\mu T}{\Lambda})+(\frac{\beta_1}{\beta_0})\ln(2\ln(\frac{\mu T}{\Lambda}))\right]}~,
\label{eq:}$$ where $\mu=\pi$ is the upper bound for the perturbative coupling. For $\Lambda$ we use the ${\overline{\rm MS}}$-scheme value $\Lambda_{\overline{\rm MS}}=260$ MeV and $$\beta_0 = \frac{1}{16\pi^2}\left(11-\frac{2N_f}{3}\right)~,
\label{eq:}$$ $$\beta_1 = \frac{1}{(16\pi^2)^2}\left(102 - \frac{38N_f}{3}\right)~.
\label{eq:}$$ The other possibility is to use a running coupling constant obtained by solving the one-loop renormalization group equation with pole subtraction [@Shirkov:1997wi] $$\alpha_s(T) =
\frac{4\pi}{\widetilde{\beta}_0}\left[\frac{1}{2\ln\frac{\pi T}{\Lambda}}
+ \frac{\Lambda^2}{\Lambda^2-(\pi T)^2}\right]~,
\label{eq:}$$ where $\widetilde{\beta}_0=11-\frac{2}{3}N_f$.
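For orientation, the two parametrizations above are straightforward to evaluate numerically. The following Python sketch (an illustration, not part of the original analysis) uses the values $\Lambda_{\overline{\rm MS}}=260$ MeV and $N_f=2$ quoted in the text; one can check directly that in the pole-subtracted coupling the singularities of the two terms cancel at $\pi T=\Lambda$, leaving the finite value $2\pi/\widetilde{\beta}_0$.

```python
import math

LAMBDA = 0.260  # GeV, Lambda in the MS-bar scheme, as used in the text
NF = 2          # number of flavours

def alpha_two_loop(T, mu=math.pi, Lam=LAMBDA, nf=NF):
    """Two-loop running coupling alpha(T) with the beta coefficients of the text."""
    b0 = (11.0 - 2.0 * nf / 3.0) / (16.0 * math.pi ** 2)
    b1 = (102.0 - 38.0 * nf / 3.0) / (16.0 * math.pi ** 2) ** 2
    L = math.log(mu * T / Lam)
    return 1.0 / (4.0 * math.pi * (2.0 * b0 * L + (b1 / b0) * math.log(2.0 * L)))

def alpha_analytic(T, Lam=LAMBDA, nf=NF):
    """One-loop analytic (pole-subtracted) coupling of Shirkov and Solovtsov."""
    bt0 = 11.0 - 2.0 * nf / 3.0
    x = math.pi * T / Lam
    # the 1/(2 ln x) Landau pole is cancelled by the 1/(1 - x^2) subtraction at x = 1
    return (4.0 * math.pi / bt0) * (1.0 / (2.0 * math.log(x)) + 1.0 / (1.0 - x * x))
```

Both functions decrease with temperature; near $\pi T = \Lambda$ the analytic coupling approaches $2\pi/\widetilde{\beta}_0 \approx 0.65$ for $N_f=2$ instead of diverging.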
In this way we can identify our model predictions for the nonperturbative effects of confinement and chiral symmetry restoration, which are encoded in a temperature dependent factor $A(T)$ that reads (for $N_f=2$) $$A^2(T) = \frac{6}{\pi T^2}\int_0^\infty~dp\frac{f_\Phi(E_p)}{E_p}
\left\{p^2 + E_p^2\right\}~.
\label{eq:}$$ We see that it is entirely controlled by the quark distribution function and by the temperature dependent quark mass, which mimic confinement as well as chiral symmetry breaking aspects of strong interaction dynamics.
For a comparison to lattice QCD data we have chosen the $N_t=6$ data for $2+1$ flavors from [@Kaczmarek:2007pb]. There, two definitions of the running coupling were used: $\alpha_s$ from a fit using a Debye-screened Coulomb Ansatz at large separations, and $\alpha_{\rm max}$ obtained at the maximum of the effective $r$- and $T$-dependent running coupling. Those results are based on an analysis of gauge field configurations generated by the RBC-Bielefeld collaboration in (2+1)-flavor QCD for the calculation of the QCD equation of state [@Cheng:2007jq], where the pion mass is about 220 MeV and the strange quark mass is adjusted to its physical value.
From the lower panel of Fig. \[Fig.2\] we see that the general trend of the lattice data is reproduced. The fact that our result for the Debye mass stays below the lattice data is due to the lack of dynamical gluons, which in full QCD also contribute to the screening effects. Also the smoother behaviour around $T_c$ is due to the form of the running coupling used, which in principle is not applicable in the transition region. Below $T_c$ the suppression of colour charges (which is meant as a rough form of confinement) drives $m_D(T)$ rapidly to zero and overcompensates the increasing interaction strength, which taken alone would tend to increase the screening mass. Note that the quenched results for $m_D$ in [@Kaczmarek:2007pb] and [@Burnier:2015nsa] lie below the lattice data shown in Fig. \[Fig.2\] due to the missing dynamical quark degrees of freedom.
![ Upper panel: Two-loop running coupling and the analytical running coupling [@Caswell:1974gg; @Shirkov:1997wi]. Lower panel: Debye mass with coupling to the Polyakov loop and running coupling constant $\alpha(T)$ compared with the lattice data [@Kaczmarek:2005ui]. []{data-label="Fig.2"}](Alpha "fig:"){width="49.00000%"} ![ Upper panel: Two-loop running coupling and the analytical running coupling [@Caswell:1974gg; @Shirkov:1997wi]. Lower panel: Debye mass with coupling to the Polyakov loop and running coupling constant $\alpha(T)$ compared with the lattice data [@Kaczmarek:2005ui]. []{data-label="Fig.2"}](MD2L "fig:"){width="49.00000%"}
Conclusions {#Concl}
===========
In this short note we have presented a calculation of the quark contribution to the Debye screening mass using the PNJL model which captures chiral dynamics and in a very rough way some aspects of confinement. We have compared our results to the Debye masses extracted from lattice QCD simulations of the static heavy-quark potential and obtain overall agreement for the shape of the temperature dependence. Naturally, our results for the Debye mass stay below the lattice results since in our model the gluon contribution is neglected. However, as the observed gluon contribution on the lattice is of the same shape [@Kaczmarek:2007pb] we are convinced that our model captures the essentials of the influence of chiral dynamics and confinement on the screening of the heavy-quark potential. A further improvement of the calculation would be to include dynamical gluons into the system, to improve the modelling of quark (and gluon) confinement and to elaborate on the behaviour of the running coupling constant. The latter should also be compared with the lattice QCD result for this quantity as it is obtained simultaneously with that for the Debye mass. At this point it is interesting to go back to the different interpretations of the same lattice data. Here we would like to mention the one by Ref. [@Riek:2010fk] where the ansatz of a screened Cornell-type potential was adopted with two different Debye masses, one for the linear confining part ($\widetilde{m}_D$) and one for the Coulombic part ($m_D$). The fit gave a drastically different behaviour of the two screening masses. The temperature dependence obtained for $\widetilde{m}_D$ appears similar to that of the Debye mass in the present approach, calculated for a $T$-independent coupling constant (see Fig. \[Fig.1\]). The physical reason for such a distinction could be that the stringy and the Coulombic parts of the potential act on different length scales so that their screening involves different dynamics.
The linear part should be dominant for larger distances, thus involving stronger interactions and more correlations in the screening mechanism. Thus one could expect that $\widetilde{m}_D>m_D$ for all temperatures, which is the finding of [@Riek:2010fk]. This is, however, only a qualitative argument. Our calculation, since it is at one-loop order with a QED-like interaction, should apply to the screened Coulomb potential part from which the lattice QCD result has been extracted. As has been demonstrated here, this comparison provides a reasonable interpretation of the temperature dependence of the Debye mass.
We thank Alexander Rothkopf and Yannis Burnier for their detailed comments on the first version of the manuscript. We thank Rafał Łastowiecki for providing data for the PNJL model, and Edwin Laermann and Krzysztof Redlich for their discussions. This work has been supported in part by NCN under grant number UMO-2011/02/A/ST2/00306. The work of J.J. was partially supported by the NCN post-doctoral internship grant DEC-2013/08/S/ST2/00547. D.B. and J.J. gratefully acknowledge the hospitality of the University of Bielefeld extended to them during their visits, while O.K. is grateful for the hospitality and productive atmosphere in the Particle Physics group at the Institute for Theoretical Physics of the University of Wroclaw.
[99]{}
N. Brambilla, A. Pineda, J. Soto and A. Vairo, Nucl. Phys. B [**566**]{} (2000) 275.
N. Brambilla, A. Pineda, J. Soto and A. Vairo, Rev. Mod. Phys. [**77**]{} (2005) 1423.
O. Kaczmarek and F. Zantow, Phys. Rev. D [**71**]{} (2005) 114510.
M. Laine, O. Philipsen, P. Romatschke and M. Tassler, JHEP [**0703**]{}, 054 (2007). M. Laine, O. Philipsen and M. Tassler, JHEP [**0709**]{}, 066 (2007). A. Beraudo, J. P. Blaizot and C. Ratti, Nucl. Phys. A [**806**]{} 312 (2008).
N. Brambilla, J. Ghiglieri, A. Vairo and P. Petreczky, Phys. Rev. D [**78**]{}, 014017 (2008). Y. Burnier, O. Kaczmarek and A. Rothkopf, Phys. Rev. Lett. [**114**]{}, no. 8, 082001 (2015).
C. -Y. Wong, Phys. Rev. C [**72**]{} (2005) 034906.
E. Megias, E. Ruiz Arriola and L. L. Salcedo, Phys. Rev. D [**75**]{} (2007) 105019.
F. Riek and R. Rapp, Phys. Rev. C [**82**]{} (2010) 035201.
O. Kaczmarek, PoS [**CPOD07** ]{} (2007) 043.
S. Digal, O. Kaczmarek, F. Karsch and H. Satz, Eur. Phys. J. C [**43**]{}, 71 (2005) \[hep-ph/0505193\]. Y. Burnier and A. Rothkopf, arXiv:1506.08684 \[hep-ph\].
A. K. Rebhan, Phys. Rev. D [**48**]{} (1993) 3967.
A. K. Rebhan, Nucl. Phys. B [**430**]{}, 319 (1994).
E. Braaten and A. Nieto, Phys. Rev. Lett. [**73**]{}, 2402 (1994).
P. Chakraborty, M. G. Mustafa and M. H. Thoma, Phys. Rev. D [**85**]{}, 056002 (2012).
D. Bak, A. Karch, L. G. Yaffe, JHEP [**0708** ]{} (2007) 049.
S. I. Finazzo and J. Noronha, Phys. Rev. D [**90**]{}, no. 11, 115028 (2014).
S. Nadkarni, Phys. Rev. D [**33**]{} (1986) 3738.\
S. Nadkarni, Phys. Rev. D [**34**]{} (1986) 3904.
P. B. Arnold and L. G. Yaffe, Phys. Rev. D [**52**]{}, 7208 (1995)
H. Hansen, W. M. Alberico, A. Beraudo, A. Molinari, M. Nardi and C. Ratti, Phys. Rev. D [**75**]{} (2007) 065004.
C. Ratti, M. A. Thaler and W. Weise, Phys. Rev. D [**73**]{} (2006) 014019
J. Jankowski, D. Blaschke, H. Grigorian, Acta Phys. Polon. Supp. [**3** ]{} (2010) 747-752.
S. Mrowczynski, Phys. Part. Nucl. [**30**]{} (1999) 419 \[Fiz. Elem. Chast. Atom. Yadra [**30**]{} (1999) 954\].
J. -P. Blaizot and E. Iancu, Phys. Rept. [**359**]{} (2002) 355.
S. Mrowczynski and M. H. Thoma, Ann. Rev. Nucl. Part. Sci. [**57**]{} (2007) 61.
K. Yagi, T. Hatsuda and Y. Miake, Camb. Monogr. Part. Phys. Nucl. Phys. Cosmol. [**23**]{} (2005) 1.
D. F. Litim and C. Manuel, Nucl. Phys. B [**562**]{} (1999) 237. S. P. Klevansky, Rev. Mod. Phys. [**64** ]{} (1992) 649-708.
J. I. Kapusta, C. Gale, Cambridge, UK: Univ. Pr. (2006) 428 p.
M. LeBellac, [*Thermal Field Theory*]{}, Cambridge University Press (1996).
W. E. Caswell, Phys. Rev. Lett. [**33**]{} (1974) 244.
D. V. Shirkov, I. L. Solovtsov, Phys. Rev. Lett. [**79** ]{} (1997) 1209-1212. M. Cheng [*et al.*]{}, Phys. Rev. D [**77**]{}, 014511 (2008).
Polarization loop, temporal component {#App}
=====================================
The calculation we have performed follows in all steps the standard QED evaluation of the polarization loop [@LeBellac; @Kapusta:2006pm], with the only difference that the usual Matsubara summation is now equipped with a trace over the colour indices [@Hansen:2006ee]. To start with, let us define the $44$-component of the polarization tensor $$\Pi_{44}=g^2T\sum_n\int \frac{d^3p}{(2\pi)^3}\textrm{Tr}\Bigg\{\left[\gamma_4(m+\gamma_4\omega_n-\vec{\gamma}\vec{p})\gamma_4(m+\gamma_4\omega_n-\gamma_4\omega_l-\vec{\gamma}(\vec{p}-\vec{q}))\right]
\Delta(i\omega_n,\vec{p})\Delta(i\omega_n-i\omega_l,\vec{p}-\vec{q})\Bigg\}~,$$ where Tr stands for the trace in Dirac and color spaces, $\omega_n=(2n+1)\pi T - A_4$, with the temporal gluon field $A_4$, $\omega_l=2\pi lT$. Let us define $$\Delta(i\omega_n,\vec{p}) = \frac{1}{\omega_n^2+p^2+m^2} = \sum_{s=\pm} \frac{s}{2E_p} \frac{1}{i\omega_n+sE_p}~,
\label{eq:}$$ where $E_p=\sqrt{p^2+m^2}$. The first step is to calculate the Dirac trace using $\{\gamma_\mu,\gamma_\nu\}=-2\delta_{\mu\nu}$ $$\textrm{Tr}\left[(m+\gamma_4\omega_n+\vec{\gamma}\vec{p})\gamma_4^2(m+\gamma_4\omega_n-\gamma_4\omega_l-\vec{\gamma}(\vec{p}-\vec{q}))\right] =
-4\left[m^2-\omega_n(\omega_n-\omega_l)+\vec{p}(\vec{p}-\vec{q})\right]~,
\label{eq:}$$ $$\mathcal{N}_V = \omega_n(\omega_n-\omega_l)-\vec{p}(\vec{p}-\vec{q})-m^2 = \omega_n^2-\omega_n\omega_l-p^2+pq-m^2~,
\label{eq:}$$ $$\textrm{Tr}\left[(m+\gamma_4\omega_n+\vec{\gamma}\vec{p})\gamma_4^2(m+\gamma_4\omega_n-\gamma_4\omega_l-\vec{\gamma}(\vec{p}-\vec{q}))\right] =
4\mathcal{N}_V~.
\label{eq:}$$ $$\begin{aligned}
\Pi_{44}(i\omega_{l};{\bf q} )
= \frac{g^2T}{2}
T\sum_{n=-\infty}^{\infty} \int\frac{d^{3}p}{(2\pi)^{3}}
\textrm{Tr}\Bigg\{2\Delta(i\omega_n,\vec{p})
+(4pq-4E_p^2-q^2-\omega_l^2)\Delta(i\omega_n,\vec{p})
\Delta(i\omega_n-\omega_l,\vec{p}-\vec{q})\Bigg\}~.
\label{eq:}\end{aligned}$$ Now we want to decompose (\[eq:\]) so that we have $\Delta(i\omega_n,\vec{p})+\Delta(i\omega_n-i\omega_l,\vec{p}-\vec{q})$ and for that we introduce $$\mathcal{M} = p^2+2m^2+\omega_n^2+(\omega_n-\omega_l)^2+(p-q)^2 = 2(p^2+m^2+\omega_n^2-\omega_n\omega_l-pq)+\omega_l^2+q^2~,
\label{eq:}$$ $$2\mathcal{N}_V = \mathcal{M} + 4pq -4p^2 - 4m^2 - q^2 -\omega_l^2~.
\label{eq:}$$ Further, we evaluate three colour-traced Matsubara sums $$J_3 = T\sum_n\textrm{Tr}_c\left[\Delta(i\omega_n,\vec{p})\Delta(i\omega_n-i\omega_l,\vec{p}-\vec{q})\right]~,
\label{eq:}$$ $$J_1 = T\sum_n\textrm{Tr}_c\Delta(i\omega_n,\vec{p})~.
\label{eq:}$$ $$J_2 = T\sum_n\textrm{Tr}_c\left\{[\vec{p}(\vec{p}-\vec{q})]\Delta(i\omega_n,\vec{p})\Delta(i\omega_n-i\omega_l,\vec{p}-\vec{q})\right\}~,
\label{eq:}$$ following [@Hansen:2006ee] by going to the Polyakov gauge (where the Polyakov loop variable is diagonal) and explicitly calculating the trace. The outcome reads $$J_3 = \sum_{s,s'=\pm} \frac{ss'}{4E_pE_{p-q}}\frac{1}{i\omega+sE_p-s'E_{p-q}}\left[f_\Phi(sE_p)- f_\Phi(s'E_{p-q})\right]~,
\label{eq:}$$ $$\begin{aligned}
J_3 = \frac{1}{4E_pE_{p-q}}\Bigg\{[f_\Phi(E_p)-f_\Phi(E_{p-q})]\left(\frac{1}{i\omega+E_p-E_{p-q}}-\frac{1}{i\omega-E_p+E_{p-q}}\right)+\\
\nonumber
[1-f_\Phi(E_p)-f_\Phi(E_{p-q})]\left(\frac{1}{i\omega+E_p+E_{p-q}}-\frac{1}{i\omega-E_p-E_{p-q}}\right)\Bigg\}~,
\label{eq:}\end{aligned}$$ $$J_2 = \sum_{s,s'=\pm} \frac{ss'}{4E_pE_{p-q}}\frac{\vec{p}(\vec{p}-\vec{q})}{i\omega+sE_p-s'E_{p-q}}\left[f_\Phi(sE_p)-f_\Phi(s'E_{p-q})\right]~,
\label{eq:}$$ $$\begin{aligned}
J_2 = \frac{\vec{p}(\vec{p}-\vec{q})}{4E_pE_{p-q}}\Bigg\{[f_\Phi(E_p)-f_\Phi(E_{p-q})]\left(\frac{1}{i\omega+E_p-E_{p-q}}-\frac{1}{i\omega-E_p+E_{p-q}}\right)+\\
\nonumber
[1-f_\Phi(E_p)-f_\Phi(E_{p-q})]\left(\frac{1}{i\omega+E_p+E_{p-q}}-\frac{1}{i\omega-E_p-E_{p-q}}\right)\Bigg\}~,
\label{eq:}\end{aligned}$$ $$J_1 = \sum_{s=\pm}\frac{s}{2E_p}f_\Phi(-sE_p) = \frac{1}{2E_p}[1-2f_\Phi(E_p)]~,$$ which is in agreement with (6.27) and (6.37) from [@LeBellac], with the only difference that the Fermi-Dirac distribution functions have been replaced with the modified ones [@Hansen:2006ee] $$f_\Phi(E) = T\sum_n\textrm{Tr}_c\left[\frac{1}{i\omega_n-E}\right] = 3\frac{\Phi(1+2e^{-\beta E})e^{-\beta E}+e^{-3\beta E}}{1+3\Phi(1+e^{-\beta E})e^{-\beta E}+e^{-3\beta E}}~.
\label{eq:Dist}$$ To obtain the last equation we use the fact that in this specific gauge the Polyakov loop variable is diagonal and that after a Matsubara summation we get $$f_\Phi(E) = \sum_{j=1}^3 \frac{1}{1+e^{\beta A_{jj}}e^{\beta E}}=-\frac{1}{\beta}\frac{\partial}{\partial E}\sum_{j=1}^3\ln(1+L_{jj}e^{-\beta E})~,
\label{eq:}$$ where $L_{jj}=e^{-\beta A_{jj}}$ and $A$ is to be understood as a temporal component of the gauge field. The evaluation of the colour trace is now trivial and results in $$\sum_{j=1}^3\ln(1+L_{jj}e^{-\beta E})=\ln\left[(1+L_{11}e^{-\beta E})(1+L_{22}e^{-\beta E})(1+L_{33}e^{-\beta E})\right]~.
\label{eq:}$$ Using $L_{11}+L_{22}+L_{33}=\textrm{Tr}L=\textrm{Tr}L^\dagger = 3\Phi$, $L_{11}L_{22}L_{33}=\det L=\det L^\dagger = 1$ we get $$f_\Phi(E) = -\frac{1}{\beta}\frac{\partial}{\partial E}\ln[1+3\Phi(1+e^{-\beta E})e^{-\beta E}+e^{-3\beta E}]~,
\label{eq:}$$ which is the modified distribution function (\[eq:Dist\]). We also show that $$f_\Phi(x+i\omega_l)=f_\Phi(x)~, \hspace{75pt} f_\Phi(-E)=1-f_\Phi(E)~.
\label{eq:}$$ Now, to get rid of the $f_\Phi(E_{p-q})$ terms whenever we meet them, we change variables according to [@Klevansky:1992qe], $p\rightarrow -p+q$, so that $E_{p-q}\rightarrow E_p$, $E_p\rightarrow E_{p-q}$ and $\vec{p}(\vec{p}-\vec{q})$ does not change. We also make the Wick rotation $i\omega\rightarrow\omega$ $$J_3 = \frac{f_\Phi(E_p)}{E_p}\left[\frac{1}{(\omega+E_p)^2-E_{p-q}^2} + \frac{1}{(\omega-E_p)^2-E_{p-q}^2}\right]~,
\label{eq:}$$ $$J_2 = \vec{p}(\vec{p}-\vec{q})\frac{f_\Phi(E_p)}{E_p}\left[\frac{1}{(\omega+E_p)^2-E_{p-q}^2} + \frac{1}{(\omega-E_p)^2-E_{p-q}^2}\right]
=
\vec{p}(\vec{p}-\vec{q})J_3~.
\label{eq:}$$ $$E_p^2 - E_{p-q}^2 = 2\vec{p}\vec{q} - q^2 \hspace{20pt} \vec{p}\vec{q}=pq\lambda \hspace{20pt}
\int \frac{d^3p}{(2\pi)^3} = \int \frac{dpp^2 d\lambda}{(2\pi)^2} ~.
\label{eq:}$$ $$(\omega\pm E_p)^2-E_{p-q}^2 = \omega^2 - q^2 \pm 2\omega E_p + 2pq\lambda~,
\label{eq:}$$ $$2pq\lambda\rightarrow\lambda~.
\label{eq:}$$ We are now left with two angular integrals $$\int_{-2pq}^{2pq} \frac{d\lambda}{\omega^2-q^2\pm2\omega E_p+\lambda}
=
\ln\frac{\omega^2-q^2\pm2\omega E_p+2pq}{\omega^2-q^2\pm2\omega E_p-2pq}~,
\label{eq:}$$ $$\int_{-2pq}^{2pq} \frac{\lambda d\lambda}{\omega^2-q^2\pm2\omega E_p+\lambda}
=
4pq - [\omega^2-q^2\pm2\omega E_p]\ln\frac{\omega^2-q^2\pm2\omega E_p+2pq}{\omega^2-q^2\pm2\omega E_p-2pq}~,
\label{eq:}$$ $$R_\pm(\omega) = \omega^2-q^2-2\omega E_p\pm2pq~.
\label{eq:}$$ Using the definition $\textrm{Re}f(\omega)=\frac{1}{2}[f(\omega)+f(-\omega)]$ we obtain for the longitudinal component of (\[eq:Decomposition\]) the result $$F(\omega,q) = -g^2\frac{\omega^2-q^2}{q^2}\textrm{Re}\int_0^\infty \frac{p^2dp}{\pi^2} \frac{2f_\Phi(E_p)}{E_p}
\Bigg\{1 + \frac{4E_p\omega + q^2-\omega^2-4E_p^2}{4pq}\ln\frac{R_+(\omega)}{R_-(\omega)}\Bigg\}~.
\label{eq:}$$ To get the electric screening mass (Debye mass) we have to compute $F(0,q\rightarrow0)=m_D^2$, i.e., $$F(0,q) = g^2\int_0^\infty \frac{p^2dp}{\pi^2} \frac{f_\Phi(E_p)}{E_p}
\Bigg\{2 + \frac{q^2-4E_p^2}{4pq}\ln\frac{(2p-q)^2}{(2p+q)^2}\Bigg\}~,
\label{eq:}$$ $$\lim_{q\rightarrow0}\frac{1}{q}\ln\frac{(2p-q)^2}{(2p+q)^2} = - \frac{2}{p}~,
\label{eq:}$$ $$F(0,q\rightarrow0) = m_D^2 = g^2\int_0^\infty \frac{dp}{\pi^2}\frac{2f_\Phi(E_p)}{E_p}
\left\{p^2 + E_p^2\right\}~.
\label{eq:}$$ This result has a structure exactly as the QED Debye mass with the only difference that the Fermi-Dirac distribution function is replaced with the Polyakov loop suppressed distribution. After including the factors $N_f=2$ and $\alpha_s=g^2/4\pi$ we get $$m_D^2 = \frac{16\alpha_s}{\pi}\int_0^\infty~dp\frac{f_\Phi(E_p)}{E_p}
\left\{p^2 + E_p^2\right\}~,
\label{eq:}$$ which is the formula (\[debyemass\]) from the text.
---
abstract: 'For a class of polynomials $f \in {\mathbb{Z}}[X]$, which in particular includes all quadratic polynomials, and also trinomials of some special form, we show that, under some natural necessary conditions, the set of primes $p$ such that all iterations of $f$ are irreducible modulo $p$ is of relative density zero. Furthermore, we give an explicit bound on the rate of the decay of the density of such primes in an interval $[1, Q]$ as $Q \to \infty$. This is an unconditional and more precise version of a recent conditional result of A. Ferraguti (2018), which applies to arbitrary polynomials but requires several conjectures about their Galois group. Furthermore, under the Generalised Riemann Hypothesis we obtain a stronger bound on this density.'
address:
- 'L.M.: Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, Altenberger Straße 69, A-4040 Linz, Austria'
- 'A.O.: School of Mathematics and Statistics, University of New South Wales. Sydney, NSW 2052, Australia'
- 'I.E.S.: School of Mathematics and Statistics, University of New South Wales. Sydney, NSW 2052, Australia'
author:
- 'L[á]{}szl[ó]{} M[é]{}rai'
- Alina Ostafe
- 'Igor E. Shparlinski'
title: Dynamical irreducibility of polynomials modulo primes
---
Introduction
============
Motivation
----------
For a polynomial $f\in {\mathbb{K}}[X]$ over a field ${\mathbb{K}}$ we define the sequence of polynomials: $$f^{(0)}(X) = X, \qquad f^{(n)}(X) = f\(f^{(n-1)}(X)\), \quad
n =1, 2, \ldots\,.$$ The polynomial $f^{(n)}$ is called the $n$-th iterate of the polynomial $f$.
Following the established terminology, see [@Ali; @AyMcQ; @Jon1; @JB; @Gomez_Nicolas; @GNOS], one says that a polynomial $f\in {\mathbb{K}}[X]$ is [*stable*]{} if all iterates $ f^{(n)}(X)$, $n =1,2 , \ldots$, are irreducible over ${\mathbb{K}}$. However, we prefer to use the more informative terminology introduced by Heath-Brown and Micheli [@H-BM] and instead we call such polynomials [*dynamically irreducible*]{}.
For a polynomial $f\in{\mathbb{Q}}[X]$ and a prime $p$ we define $f_p\in{\mathbb{F}_p}[X]$ to be the [*reduction*]{} of $f$ modulo $p$. In this paper we consider the following question, see [@BdMIJMST Question 19.12].
Let $f\in{\mathbb{Q}}[X]$ be a dynamically irreducible polynomial of degree $d\geq 2$. Is it true that the set of primes $$\label{eq:stable p}
\{p: f_p \text{ is dynamically irreducible over } {\mathbb{F}_p}\}$$ is a finite set?
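Small cases of this question are easy to explore by machine. The sketch below (illustrative only) uses SymPy to test whether the first few iterates of $f$ remain irreducible modulo $p$; for $f=X^2+1$ it confirms that $f_5$ is already reducible, since $X^2+1=(X+2)(X+3)$ over ${\mathbb{F}}_5$, while the first iterates stay irreducible modulo $3$.

```python
from sympy import symbols, Poly

x = symbols('x')

def iterate(f, n):
    """n-th iterate f^(n) of a sympy expression f in x."""
    g = x
    for _ in range(n):
        g = f.subs(x, g)
    return g

def iterates_irreducible_mod(f, p, n):
    """True if f^(1), ..., f^(n) are all irreducible over F_p."""
    return all(Poly(iterate(f, k), x, modulus=p).is_irreducible
               for k in range(1, n + 1))
```

Of course, such a finite computation can only falsify dynamical irreducibility of $f_p$, never prove it for all iterates.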
For example, Jones [@Jon2 Conjecture 6.3] has conjectured that $x^2+1$ is dynamically irreducible over ${\mathbb{F}_p}$ if and only if $p=3$. Ferraguti [@Fer] has shown that under some Galois-theoretic assumptions, the set of dynamically irreducible primes has density zero. Our aim is to prove such a result unconditionally for a class of polynomials which includes trinomials of the form $f(X)=aX^d+bX^{d-1}+c\in {\mathbb{Z}}[X]$ of even degree, and hence all quadratic polynomials. Furthermore, combining
- some effective results from Diophantine geometry [@BEG],
- the square-sieve of Heath-Brown [@H-B],
- a slightly refined bound of character sums over almost-primes from [@KonShp],
we obtain an explicit saving in our density estimate.
Furthermore, assuming the Generalised Riemann Hypothesis (GRH), we obtain a stronger bound.
We believe these techniques have never been used before in this combination and for similar purposes. Hence we expect that this approach may find several other applications.
Main results {#sec:res}
------------
Clearly, it is enough to consider the distribution of primes for which $f_p$ is dynamically irreducible in dyadic intervals of the form $[Q,2Q]$. Thus, given a polynomial $f(X) \in {\mathbb{Z}}[X]$ we define $$\begin{split}
\label{eq:PfQ}
P_f(Q)=\#\{p\in [Q&, 2Q] \cap \cP:\\
& ~f_p \text{ is dynamically irreducible over } {\mathbb{F}_p}\},
\end{split}$$ where $\cP$ denotes the set of primes.
\[thm:lin sqr\] Let $f(X) \in {\mathbb{Z}}[X]$ be such that the derivative $f'(X)$ is of the form $$f'(X)= g(X)^2 (aX+b), \qquad g(X) \in {\mathbb{Z}}[X], \ a,b \in {\mathbb{Z}}, \ a \ne 0.$$ Assume that $0$ is not a periodic point of $f$ and $\gamma=-b/a$ is not a pre-periodic point of $f$. Then $$P_f(Q) \le \frac{(\log \log\log\log Q)^{2+o(1)}} {\log\log\log Q} \cdot \frac{Q }{\log Q}.$$
Obviously all quadratic polynomials have their derivatives of the form required in Theorem \[thm:lin sqr\]. We now exhibit a larger class of polynomials of higher degree.
\[cor:trinom\] Let $f(X)=r(X-u)^d+s(X-u)^{d-1}+t\in {\mathbb{Z}}[X]$ with some $r,s,t,u \in {\mathbb{Z}}$, $r \ne 0$, be such that $d$ is even, 0 is not a periodic point of $f$ and $$\gamma=u-\frac{(d-1)s}{dr}$$ is not a pre-periodic point of $f$. Then $$P_f(Q) \le \frac{(\log \log\log\log Q)^{2+o(1)}} {\log\log\log Q} \cdot \frac{Q }{\log Q}.$$
We now give conditional (on the GRH) estimates.
\[thm:lin sqr-GRH\] Let $f(X) \in {\mathbb{Z}}[X]$ be such that the derivative $f'(X)$ is of the form $$f'(X)= g(X)^2 (aX+b), \qquad g(X) \in {\mathbb{Z}}[X], \ a,b \in {\mathbb{Z}}, \ a \ne 0.$$ Assume that $0$ is not a periodic point of $f$ and $\gamma=-b/a$ is not a pre-periodic point of $f$. Then, assuming the GRH, $$P_f(Q) = O\( \frac{Q }{\log Q \log\log Q}\),$$ where the implied constant depends only on $f$.
Accordingly, we also have:
\[cor:trinom-GRH\] Let $f(X)=r(X-u)^d+s(X-u)^{d-1}+t\in {\mathbb{Z}}[X]$ with some $r,s,t,u \in {\mathbb{Z}}$, $r \ne 0$, be such that $d$ is even, 0 is not a periodic point of $f$ and $$\gamma=u-\frac{(d-1)s}{dr}$$ is not a pre-periodic point of $f$. Then, assuming the GRH, $$P_f(Q) =O\( \frac{Q }{\log Q \log\log Q}\),$$ where the implied constant depends only on $f$.
We note that $s=0$ is admissible in Corollaries \[cor:trinom\] and \[cor:trinom-GRH\] and thus they apply to binomials of the form $r(X-u)^d +t$, which (for $r=1$ and $u=0$) are commonly studied in arithmetic dynamics. In particular, Corollary \[cor:trinom\] also answers a weakened version of a conjecture of Jones [@Jon2 Conjecture 6.3].
We also remark that the requirement in Theorems \[thm:lin sqr\] and \[thm:lin sqr-GRH\] as well as in Corollaries \[cor:trinom\] and \[cor:trinom-GRH\] that $\gamma$ is not a pre-periodic point is necessary. For example, for the polynomial $f(X)=(X-2)^2+2$ with $\gamma=2$, the reduction $f_p$ is dynamically irreducible for all primes $p\equiv 5 \mod 8$, which follows from Lemma \[lemma:stab\] below.
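This example can also be checked computationally. In the sketch below (SymPy, illustrative), $\gamma=2$ is verified to be a fixed point of $f(X)=(X-2)^2+2$, and the first iterates of $f_p$ are verified to be irreducible for several primes $p\equiv 5 \bmod 8$.

```python
from sympy import symbols, Poly

x = symbols('x')
f = (x - 2)**2 + 2   # gamma = 2 satisfies f(2) = 2, so gamma is pre-periodic

def iterate(expr, n):
    """n-th iterate of a sympy expression in x."""
    g = x
    for _ in range(n):
        g = expr.subs(x, g)
    return g
```

For instance, the discriminant of $f$ is $-8$, which is a quadratic non-residue modulo every prime $p\equiv 5 \bmod 8$, so $f_p$ itself is irreducible for all such $p$.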
Preliminaries
=============
Notation, general conventions and definitions
---------------------------------------------
Throughout the paper, $p$ always denotes a prime number.
For a prime $p$, $v_p : {\mathbb{Q}}^*\to {\mathbb{Z}}$ represents the usual [*$p$-adic valuation*]{}, that is, for $a\in {\mathbb{Z}}\setminus \{0\}$, we let $v_p(a)=k$ if $p^k$ is the highest power of $p$ which divides $a$, and for $a,b\in {\mathbb{Z}}\setminus \{0\}$ we let $v_p(a/b)=v_p(a)-v_p(b)$.
We define the [*Weil logarithmic height*]{} of $a/b\in{\mathbb{Q}}$ as $$h(a/b)=\max \{ \log |a|,\log|b|\},$$ with the convention $h(0)=0$.
We use the Landau symbol $O$ and the Vinogradov symbol $\ll$. Recall that the assertions $U=O(V)$ and $U \ll V$ are both equivalent to the inequality $|U|\le cV$ with some absolute constant $c>0$. To emphasize the dependence of the implied constant $c$ on some parameter (or a list of parameters) $\rho$, we write $U=O_{\rho}(V)$ or $U \ll_{\rho} V$.
Basic properties of resultants {#sec:resultants}
------------------------------
Here we recall the following well known properties of resultants of polynomials, see [@GaGe], that hold over any field ${\mathbb{K}}$.
\[lem:res\] Let $f,g\in{\mathbb{K}}[X]$ be polynomials of degrees $d\ge 1$ and $e\ge 1$, respectively, and let $h\in{\mathbb{K}}[X]$. Denote by $\beta_1,\ldots,\beta_e$ the roots of $g$ in an extension field. Then we have:
1. ${\mathrm{Res}\(f,g\)}=(-1)^{de}g_e^d\prod_{i=1}^e f(\beta_i)$,
2. ${\mathrm{Res}\(fg,h\)}={\mathrm{Res}\(f,h\)}{\mathrm{Res}\(g,h\)}$,
where $g_e$ is the leading coefficient of $g$.
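Both properties are easy to confirm on concrete data. The following SymPy sketch (illustrative) checks (i) on a linear $g$, where the product over roots is a single value, and checks the multiplicativity (ii).

```python
from sympy import symbols, Poly

x = symbols('x')

f = Poly(x**2 + 1, x)       # d = 2
g = Poly(3*x - 6, x)        # e = 1, leading coefficient g_e = 3, root beta = 2
h = Poly(x**2 + x + 1, x)

# (i): Res(f, g) = (-1)^{d e} g_e^d f(beta) = (-1)^2 * 3^2 * f(2) = 9 * 5 = 45
res_fg = f.resultant(g)

# (ii): Res(f g, h) = Res(f, h) Res(g, h)
mult_ok = (f * g).resultant(h) == f.resultant(h) * g.resultant(h)
```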
Dynamically irreducible polynomials
-----------------------------------
The following result gives a necessary condition that a polynomial is dynamically irreducible over a finite field of odd characteristic [@GNOS Corollary 3.3].
\[lemma:stab\] Let $q$ be an odd prime power, and let $f\in{\mathbb{F}}_q[X]$ be a dynamically irreducible polynomial of degree $d\geq 2$ with leading coefficient $f_d$, nonconstant derivative $f'$, $\deg f'=k\le d-1$. Then the followings hold:
1. if $d$ is even, then ${\mathrm{Disc}\(f\)}$ and $f_d^k {\mathrm{Res}\(f^{(n)},f'\)}$, $n\geq 2$, are nonsquares in ${\mathbb{F}_q}$,
2. if $d$ is odd, then ${\mathrm{Disc}\(f\)}$ and $(-1)^{\frac{d-1}{2}}f_d^{(n-1)k+1} {\mathrm{Res}\(f^{(n)},f'\)}$, $n\geq 2$, are squares in ${\mathbb{F}_q}$.
Jacobi symbol
-------------
For $n\geq 3$, $\left(\frac{\cdot}{n}\right)$ denotes the [*Jacobi symbol*]{}, which is identical to the [*Legendre symbol*]{} for prime $n=p$. We recall the following well-known properties, see [@IwKow Section 3.5].
\[lemma:reciprocity\] For odd integers $m,n\geq 3$ we have $$\left(\frac{m}{n}\right)=(-1)^{\frac{m-1}{2}\frac{n-1}{2}}\left(\frac{n}{m}\right)
\mand
\left(\frac{2}{n}\right)=(-1)^{\frac{n^2-1}{8}}.$$
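Both identities can be spot-checked numerically; the sketch below (illustrative) runs over small odd moduli using SymPy's `jacobi_symbol`. Note that when $\gcd(m,n)>1$ both sides of the reciprocity law vanish, so the identity holds trivially in that case.

```python
from sympy import jacobi_symbol

def reciprocity_holds(m, n):
    """Quadratic reciprocity for the Jacobi symbol, odd m, n >= 3."""
    sign = (-1) ** (((m - 1) // 2) * ((n - 1) // 2))
    return jacobi_symbol(m, n) == sign * jacobi_symbol(n, m)

def supplement_holds(n):
    """Second supplement: (2/n) = (-1)^{(n^2 - 1)/8} for odd n >= 3."""
    return jacobi_symbol(2, n) == (-1) ** ((n * n - 1) // 8)
```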
On some character sums over almost-primes {#subsect:char-sum}
-----------------------------------------
For $\eta>0$, let $\cP(\eta,M)$ denote the set of positive integers $m\leq M$ which do not have prime divisors $p\leq M^\eta$. It is well known that for all positive $\eta <1$ and $M\geq 2$ one has $$\label{eq:P bound}
\# \cP(\eta,M)\ll\frac{M}{\eta \log M},$$ see [@Ten Part III, Theorem 6.4 and Equation (6.23)].
One important tool in the proof of Theorem \[thm:lin sqr\] is the following result which is a slightly more precise form of [@KonShp Corollary 10]. Namely, it is easy to trace the dependence of $\eta$ in the second term of the bound of [@KonShp Corollary 10] and see that the term $O_\eta\(M^{1-\eta}\)$ can be refined as $O\(\eta^{-2} M^{1-\eta}\)$. More precisely, we have:
\[lemma:char sum\] For any $\varepsilon>0$ there exists some $\eta_0>0$ such that for any positive $\eta<\eta_0$, integer $M \geq q^{1/3+\varepsilon}$, where $q\geq 2$ is not a perfect square, we have $$\left|\sum_{m \in\cP(\eta,M)}\left(\frac{m}{q}\right) \right|\ll \eta^{\eta^{-1/2}/4-1} \frac{M}{\log M} + \eta^{-2} M^{1-\eta}.$$
Furthermore, under the GRH we have a rather strong bound for sums over primes, see [@MonVau Equation (13.21)]:
\[lemma:char sum-GRH\] For any positive integers $q$ and $M$, where $q\geq 2$ is not a perfect square, we have $$\left|\sum_{p \le M}\left(\frac{p}{q}\right) \right| \ll M^{1/2} \log (qM).$$
Integer solutions to hyperelliptic equations
--------------------------------------------
We also need the following effective result of B[é]{}rczes, Evertse and Győry [@BEG Theorem 2.2], which bounds the height of $\cS$-integer solutions to a hyperelliptic equation. We present it in the form needed for the proof of Theorem \[thm:lin sqr\].
Let $\cS$ be a finite set of primes of cardinality $s=\#\cS$ and define ${\mathbb{Z}}_\cS$ to be the ring of $\cS$-integers, that is, the set of rational numbers $r$ with $v_p(r)\ge 0$ for any $p\not\in \cS$. Put $$Q_\cS=\prod_{p\in \cS} p.$$
\[lemma:BEG\] Let $f\in {\mathbb{Z}}_\cS[X]$ be a polynomial of degree $d\geq 3$ without multiple zeros, and let $b\in{\mathbb{Z}}_\cS$ be a nonzero $\cS$-integer. If $x,y\in {\mathbb{Z}}_\cS$ are solutions to the equation $$f(x)=by^2,$$ then $$h(x),h(y)\leq (4ds)^{212d^4 s} Q_S^{20 d^3} \exp(O_{f}(h(b))).$$
On the height of some iterates and resultants
---------------------------------------------
We need the following simple estimates on the height of some iterates and resultants:
\[lem:iter height\] Let $f\in{\mathbb{Q}}[X]$ be a polynomial of degree $d\ge 1$ and let $\gamma\in\ov{\mathbb{Q}}$ which is not a pre-periodic point of $f$. Then, there exists a constant $C=C(f)$ depending only on $f$ such that for any $n\ge 1$ we have $$h\(f^{(n)}(\gamma)\) \ge d^{n}(h(\gamma)-C).$$
The proof follows by inductively applying [@Silv Theorem 3.11]; this inequality is also given in [@Silv Equation (3.8)].
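The growth rate of Lemma \[lem:iter height\] is visible numerically: for $f=X^2+1$ and $\gamma=1/2$ the ratios $h(f^{(n)}(\gamma))/d^n$ stabilize (they converge to the canonical height of $\gamma$). The sketch below (illustrative) uses exact rational arithmetic, so the heights are computed from fractions in lowest terms.

```python
import math
from fractions import Fraction

def height(r):
    """Weil logarithmic height of a rational r = a/b in lowest terms."""
    r = Fraction(r)  # Fraction normalizes to lowest terms with positive denominator
    return max(math.log(abs(r.numerator)), math.log(r.denominator)) if r else 0.0

def orbit_heights(c, gamma, n):
    """h(f^(k)(gamma)) for f = X^2 + c and k = 1, ..., n."""
    z, out = Fraction(gamma), []
    for _ in range(n):
        z = z * z + c
        out.append(height(z))
    return out
```

For $\gamma=1/2$ the first ratios are roughly $0.80, 0.93, 0.95, 0.95$, consistent with $h(f^{(n)}(\gamma)) \ge d^n(h(\gamma)-C)$.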
\[lem:res height\] Let $f\in{\mathbb{Q}}[X]$ be a polynomial of degree $d\ge 1$. Then, for any $n\ge 1$, we have $$h\({\mathrm{Res}\(f^{(n)},f'\)}\)=O_f\(d^n\).$$
Let $f_d$ be the leading coefficient of $f$ and $\gamma_1,\ldots,\gamma_{d-1}$ be the roots of the derivative $f'$. Then, by Lemma \[lem:res\], ${\mathrm{Res}\(f^{(n)},f'\)}$ is given by $${\mathrm{Res}\(f^{(n)},f'\)}=(-1)^{d^n(d-1)}(df_d)^{d^n}\prod_{i=1}^{d-1}f^{(n)}(\gamma_i).$$ We have $$\label{eq:1}
h\((df_d)^{d^n}\)\le d^n(\log|d|+h(f_d))\ll_fd^n.$$ Applying [@Silv Theorem 3.11], we also have $$\label{eq:2}
h\(f^{(n)}(\gamma_i)\)\ll_f d^n.$$ Putting (\[eq:1\]) and (\[eq:2\]) together, we conclude the proof.
Proof of Theorem \[thm:lin sqr\]
================================
An application of the square-sieve
----------------------------------
We can assume that $f$ is dynamically irreducible over ${\mathbb{Q}}$, as otherwise its reduction $f_p$ can be dynamically irreducible for at most finitely many primes $p$.
Let $d = \deg f$. We can assume that $Q$ is large enough so that $f$ and $f'$ have degrees $d$ and $d-1$, respectively, modulo any prime $p\in[Q,2Q]$. From the shape of $f'$ we see that $d-1$ is odd (and thus $d$ is even).
Let $\varepsilon>0$. All of the constants in this proof may depend on $\varepsilon$ and $f$.
Put $$\label{eq:N t}
N = c_1 \log \log Q \mand t= c_2 \log\log\log Q$$ with some sufficiently small constants $c_1, c_2>0$ to be fixed later.
Write $$f_d \cdot {\mathrm{Res}\(f^{(n)},f'\)}=2^{\nu_n}u_n, \quad v_2(u_n)=0, \quad n\geq 2,$$ where $f_d$ is the leading coefficient of $f$.
By the Dirichlet principle there is a set $\cN\subseteq [N,N+t]$ of size $$\#\cN\geq \frac{1}{4}t$$ such that for all $r,s\in\cN$ we have $$u_r\equiv u_s \mod 4 \mand \nu_r\equiv \nu_s \mod 2.$$ Therefore, since $u_n$, $n\ge 2$, are odd, we have $$\label{eq:u nu}
u_r+u_s \equiv 2 \mod 4 \mand \nu_r+\nu_s\equiv 0 \mod 2.$$ Using that for an odd $m$ we have $2 \mid m-1$ and $8 \mid m^2-1$, we conclude that $$\label{eq:parity}
(-1)^{\frac{u_{r}+u_{s}-2}{2}\frac{m-1}{2}+(\nu_{r}+\nu_{s} ) \frac{m^2-1}{8}} =1.$$
Consider $$S=\sum_{p\in[Q,2Q]} \left|\sum_{n\in\cN} \left(\frac{f_d \cdot {\mathrm{Res}\(f^{(n)},f'\)}}{p} \right)\right|^2.$$
If $f_p$ is dynamically irreducible modulo $p$, then by Lemma \[lemma:stab\], we have $$\begin{aligned}
\sum_{n\in\cN} \left(\frac{f_d \cdot {\mathrm{Res}\(f^{(n)},f'\)}}{p} \right)& =\sum_{n\in\cN} \left(\frac{f_d ^{d-1}\cdot {\mathrm{Res}\(f^{(n)},f'\)}}{p} \right)\\
& = -\#\cN\leq -\frac{t}{4},\end{aligned}$$ as $d-1$ is odd, thus $$\label{eq:Pf S}
P_f(Q)\leq 16\frac{S}{t^2},$$ where $P_f(Q)$ is defined by .
Let $\eta_0$ be as in Lemma \[lemma:char sum\] and let $\eta<\eta_0$ be a parameter to be chosen later. Then we extend the summation to integers $m\in\cP(\eta,2Q)$, where $\cP(\eta,2Q)$ is defined in Section \[subsect:char-sum\], to obtain $$S\leq\sum_{m\in\cP(\eta,2Q)} \left|\sum_{n\in \cN} \left(\frac{f_d \cdot {\mathrm{Res}\(f^{(n)},f'\)}}{m} \right)\right|^2.$$
By Lemma \[lemma:reciprocity\], we have $$S\leq\sum_{m\in\cP(\eta,2Q)} \left|\sum_{n\in\cN} (-1)^{\frac{u_n-1}{2}\frac{m-1}{2}+\nu_n \frac{m^2-1}{8}}\left(\frac{m}{u_n} \right)\right|^2.$$ By opening the square and changing the order of summation, we see from that $$\begin{aligned}
S&\le
\sum_{n_1,n_2\in\cN} \sum_{m\in\cP(\eta,2Q)}
(-1)^{\frac{u_{n_1}+u_{n_2}-2}{2}\frac{m-1}{2}+(\nu_{n_1}+\nu_{n_2} ) \frac{m^2-1}{8}}\left(\frac{m}{u_{n_1}u_{n_2}} \right)\\
&=\sum_{n_1,n_2\in\cN} \sum_{m\in\cP(\eta,2Q)} \left(\frac{m}{u_{n_1}u_{n_2}}\right).\end{aligned}$$
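The reciprocity step above is classical Jacobi-symbol reciprocity combined with the supplementary law for $2$. The sign formula can be checked exhaustively over small odd moduli; a quick sketch using SymPy's `jacobi_symbol`:

```python
from math import gcd

from sympy.ntheory import jacobi_symbol

# check (2^v u / m) = (-1)^{((u-1)/2)((m-1)/2) + v (m^2-1)/8} (m / u)
# for odd positive coprime u, m and v >= 0
ok = True
for u in range(1, 40, 2):
    for m in range(1, 40, 2):
        if gcd(u, m) != 1:
            continue
        for v in range(3):
            sign = (-1) ** (((u - 1) // 2) * ((m - 1) // 2) + v * ((m * m - 1) // 8))
            if jacobi_symbol(2**v * u, m) != sign * jacobi_symbol(m, u):
                ok = False
```

With $u=u_n$, $v=\nu_n$ this is exactly the sign appearing in the sum above, and the parity condition makes the two signs cancel.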
Let $\cZ$ be the set of pairs $(n_1,n_2)\in\cN^2$ such that $u_{n_1}u_{n_2}$ is a square. For a pair $(n_1,n_2)\not\in\cZ$, we have by Lemma \[lem:res height\], that $$\begin{aligned}
|u_{n_1}&u_{n_2}|^{1/3+\varepsilon}\\
&\leq \exp \left((1/3+\varepsilon)\left(h\left({\mathrm{Res}\(f^{(n_1)},f' \)}\right)+h\left({\mathrm{Res}\(f^{(n_2)},f'\)} \right) \right) \right)\\
&\leq \exp\left(O_f(d^{N+t})\right) \leq Q\end{aligned}$$ if $c_1$ and $c_2$ are small enough. Then by Lemma \[lemma:char sum\], and by we have $$S \ll \# \cZ \frac{Q}{ \eta \log Q } + t^2\left( \eta^{\eta^{-1/2}/4-1} \frac{Q}{\log Q} + \eta^{-2}Q^{1-\eta} \right).$$ Recalling , we now conclude $$\label{eq:almost-final}
P_f(Q)\ll \frac{\# \cZ}{t^2} \frac{Q}{ \eta \log Q } + \eta^{\eta^{-1/2}/4-1} \frac{Q}{\log Q} +\eta^{-2} Q^{1-\eta}.$$
In the next section, we give a bound on $\#\cZ$.
Perfect squares in denominators {#sec:nonsquares}
-------------------------------
We show that $\cZ$ does not contain nontrivial (off-diagonal) pairs and hence $$\label{eq:Z}
\# \cZ = \# \cN \le t.$$
Let $n_1\neq n_2$ be a pair of integers in $\cN$ such that $u_{n_1}u_{n_2}$ is a square. We can assume that $n_2>n_1$. Then, since $$u_{n_1}u_{n_2}=2^{-\nu_{n_1}-\nu_{n_2}} f_d ^2 {\mathrm{Res}\(f^{(n_1)},f'\)}{\mathrm{Res}\(f^{(n_2)},f'\)}$$ and, by Lemma \[lem:res\], $${\mathrm{Res}\(f^{(n_1)},f'\)}{\mathrm{Res}\(f^{(n_2)},f'\)}={\mathrm{Res}\(f^{(n_1)}f^{(n_2)},f'\)},$$ we obtain, recalling , that $${\mathrm{Res}\(f^{(n_1)}f^{(n_2)},f'\)}$$ is also a square.
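Lemma \[lem:res\] is the standard multiplicativity of the resultant in its first argument; it can be verified on a small example (the polynomial $f=X^2+X+2$ below is an arbitrary illustrative choice):

```python
import sympy as sp

X = sp.symbols('X')
f = X**2 + X + 2                      # illustrative polynomial
fp = sp.diff(f, X)
f2 = sp.expand(f.subs(X, f))          # f^{(2)}
f3 = sp.expand(f2.subs(X, f))         # f^{(3)}

# Res(f^{(2)}, f') * Res(f^{(3)}, f') == Res(f^{(2)} f^{(3)}, f')
lhs = sp.resultant(f2, fp, X) * sp.resultant(f3, fp, X)
rhs = sp.resultant(sp.expand(f2 * f3), fp, X)
```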
Now, let $$f'(X)= g(X)^2 (aX+b), \qquad g(X) \in {\mathbb{Z}}[X], \ a,b \in {\mathbb{Z}}, \ a \ne 0.$$ Let $\beta_1, \ldots, \beta_m$ be the roots of $g$ (taken with multiplicities, that is, $m = (d-2)/2$).
From here, using again Lemma \[lem:res\], we obtain that $$\begin{split}
{\mathrm{Res}\(f^{(n_1)}f^{(n_2)},f'\)}&=(-1)^{(d-1)(d^{n_1}+d^{n_2})}(df_d )^{d^{n_1}+d^{n_2}}\\
&\quad \cdot \prod_{i=1}^m \(f^{(n_1)}(\beta_i)\)^2 \(f^{(n_2)}(\beta_i)\)^{2} f^{(n_1)}(\gamma)f^{(n_2)}(\gamma),
\end{split}$$ where $\gamma=-b/a\in{\mathbb{Q}}$.
Now, since $d$ is even, the exponent $d^{n_1}+d^{n_2}$ is even, so the sign and the factor $(df_d )^{d^{n_1}+d^{n_2}}$ form a square; as the products over the $\beta_i$ are squares as well, we conclude that $$f^{(n_1)}(\gamma)f^{(n_2)}(\gamma)$$ is a square in ${\mathbb{Q}}$.
Let $\cS$ be the set of primes consisting of the prime divisors of $d$ and $a$. We thus have the equation $$\alpha f^{(n_2-n_1)}(\alpha) =\beta^2,$$ where $\alpha=f^{(n_1)}(\gamma)$ and $\beta$ are $\cS$-integers in ${\mathbb{Q}}$, and $\deg f^{(n_2-n_1)}\le d^t$.
Since $0$ is not periodic, $X\not\mid f^{(n_2-n_1)}(X)$, and since $f$ is dynamically irreducible, $f^{(n_2-n_1)}$ is irreducible, and thus $Xf^{(n_2-n_1)}(X)$ is a polynomial of degree at least $3$ without multiple roots in $\ov{\mathbb{Q}}$. We can now apply Lemma \[lemma:BEG\] with the polynomial $Xf^{(n_2-n_1)}(X)$ to conclude that $$\label{eq:upper}
h(\alpha)\le \exp(O_f(d^{4t})).$$ On the other hand, since $\gamma$ is not a pre-periodic point of the polynomial $f$, by Lemma \[lem:iter height\] we have $$h(\alpha)\gg_f d^{n_1} \ge d^N.$$ We choose now a suitable constant $c_2$ in , depending only on $f$, to obtain a contradiction with . We thus conclude that there is no nontrivial pair $n_1,n_2\in\cN$ such that $ {\mathrm{Res}\(f^{(n_1)},f'\)}{\mathrm{Res}\(f^{(n_2)},f'\)}$ is a square in ${\mathbb{Q}}$ which proves .
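The lower bound $h(\alpha)\gg_f d^{n_1}$ from Lemma \[lem:iter height\] reflects the doubly exponential growth of iterates at a wandering point. A small illustration with the (assumed) choices $f=X^2+1$ and $\gamma=1/2$, using exact rational arithmetic:

```python
import math
from fractions import Fraction

def f(x):
    return x * x + 1          # illustrative polynomial, d = 2

gamma = Fraction(1, 2)        # a wandering (non-preperiodic) point

# h(f^{(n)}(gamma)) / d^n approaches a positive constant
ratios = []
x = gamma
for n in range(1, 9):
    x = f(x)
    h = math.log(max(abs(x.numerator), x.denominator))   # Weil height of a rational
    ratios.append(h / 2**n)
```

The ratios increase towards the canonical height of $\gamma$, so $h\(f^{(n)}(\gamma)\)$ is indeed of exact order $d^n$ in this example.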
Final optimisation
------------------
In order to conclude the proof, observe that and give $$P_f(Q) \ll\frac{1}{t} \frac{Q}{ \eta \log Q } + \eta^{\eta^{-1/2}/4-1} \frac{Q}{\log Q} +\eta^{-2} Q^{1-\eta}. $$ Let us choose $\eta$ to satisfy $$\eta^{\eta^{-1/2}/4} = t^{-1}$$ for which we derive $$\label{eq:pen-ult}
P_f(Q)\ll\frac{1}{t} \frac{Q}{ \eta \log Q} +\eta^{-2} Q^{1-\eta}. $$ Since for the above choice of $\eta$ we have $$\eta^{-1/2} \log \(\eta^{-1}\) = 4 \log t$$ we conclude that $\eta = (\log t)^{-2+o(1)}$. It is easy to check that with the choice of $t$ as in , the second term in never dominates and the result follows.
Proof of Theorem \[thm:lin sqr-GRH\]
====================================
Since the proof is very similar to that of Theorem \[thm:lin sqr\] we only sketch the main steps.
We now put $$\label{eq:N t GRH}
N = c_3 \log Q \mand t= c_4 \log\log Q$$ with some sufficiently small constants $c_3, c_4>0$ fixed later.
We recall the inequality , however this time we do not expand the sum over primes in $S$ to the set $\cP(\eta,2Q)$.
As before, let $\cZ$ be the set of pairs $(n_1,n_2)\in\cN^2$ such that $u_{n_1}u_{n_2}$ is a square. For a pair $(n_1,n_2)\not\in\cZ$, again by Lemma \[lem:res height\], with the choice , we conclude that $$\log |u_{n_1}u_{n_2}| \ll_f d^{N+t} \leq Q^{1/3}$$ if $c_3$ and $c_4$ are small enough. Hence, using Lemma \[lemma:char sum-GRH\] instead of Lemma \[lemma:char sum\], we arrive at the following analogue of $$\label{eq:almost-final-GRH}
P_f(Q)\ll \frac{\# \cZ}{t^2} \frac{Q}{ \log Q } + Q^{5/6}.$$
For $N$ and $t$ in , related similarly to those in , we also have the bound , which after substitution in gives $$P_f(Q)\ll \frac{1}{t} \frac{Q}{ \log Q } + Q^{5/6},$$ and the result follows.
Comments
========
We remark that our method applies to any polynomial for which we can control the existence, or at least the frequency, of perfect squares in the products ${\mathrm{Res}\(f^{(n_1)},f'\)} {\mathrm{Res}\(f^{(n_2)},f'\)}$ with $N \le n_1< n_2 \le N+t$. We also remark that we have a lot of flexibility in selecting the interval $[N, N+t]$ from which $n_1$ and $n_2$ are chosen. Besides, we do not have to use all values from the set $\cN$ in the proofs of Theorems \[thm:lin sqr\] and \[thm:lin sqr-GRH\]: we may limit ourselves to a certain (reasonably large) subset $\widetilde \cN \subseteq \cN$ of integers with some desirable properties, and then use $\widetilde \cN$ in the argument.
Unfortunately, despite the above flexibility of the method, besides the shifted trinomials of Corollaries \[cor:trinom\] and \[cor:trinom-GRH\], we have not found any natural classes of polynomials to which it can be applied. Moreover, it is natural to try to extend Corollaries \[cor:trinom\] and \[cor:trinom-GRH\] to trinomials of odd degree. We note that in this case the same approach as in Theorems \[thm:lin sqr\] and \[thm:lin sqr-GRH\] applies, but using Lemma \[lemma:stab\] (2) instead of Lemma \[lemma:stab\] (1). However, the part of the argument concerning square avoidance breaks down.
Our main goal has been to establish an unconditional result, at least with respect to the polynomials we consider. However we observe that under the celebrated [*$ABC$-conjecture*]{} one can further extend the class of polynomials to which our method applies. To sketch this argument, for an integer $k \ne0$ we define $\rho(k)$ as the product of all distinct prime divisors of $k$, that is, $$\rho(k) = \prod_{p\mid k} p,$$ which is also commonly called the [*radical*]{} of $k$. Next, we recall that Langevin [@Lang] (see also [@Gran Theorem 5]) has shown that under $ABC$-conjecture, for any polynomial $g \in {\mathbb{Z}}[X]$ of degree $e \ge 2$, under some natural conditions, and any $m \in {\mathbb{Z}}$ we have $$\rho\(g(m)\) \ge |m|^{e - 1 + o(1)}, \qquad |m| \to \infty.$$ Hence if we write $|g(m)| = u v^2$ with a squarefree $u$ and an integer $v\ge 1$, we see that $$u v \ge \rho\(g(m)\) \ge |m|^{e - 1 + o(1)}$$ and using $v = \(|g(m)|/u\)^{1/2} \ll |m|^{e/2} u^{-1/2}$, we derive $$u \ge |m|^{e - 2 + o(1)}.$$
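For concreteness, the radical and the squarefree part used in this argument can be computed as follows (a sketch via SymPy's integer factorization; for the huge values $f^{(n)}(\gamma_i)$ this is of course only practical for small $n$):

```python
from sympy import factorint

def radical(k):
    # rho(k): product of the distinct primes dividing k (k != 0)
    r = 1
    for p in factorint(abs(k)):
        r *= p
    return r

def squarefree_part(k):
    # write |k| = u * v^2 with u squarefree; return u
    u = 1
    for p, e in factorint(abs(k)).items():
        if e % 2:
            u *= p
    return u
```

For instance, $360 = 10\cdot 6^2$ gives $\rho(360)=30$ and squarefree part $u=10$.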
Suppose that all roots $$|\gamma_1| > |\gamma_2|\ge \ldots \ge |\gamma_{d-1}|$$ of $f'$ are integers and the largest root $\gamma_1$ is of multiplicity one.
Additionally, assume that the iterations $f^{(n)}(\gamma_i)$ grow as expected, that is, doubly exponentially, $$|f^{(n)}(\gamma_i)| = \exp\(\(1 + o(1)\)\vartheta_i^n \), \qquad i=1, \ldots, d-1,$$ for some constants $\vartheta_i> 1$, and furthermore $$\vartheta_i < \vartheta_1^{(d-3)/d}, \qquad i =2, \ldots, d-1.$$ In this case the squarefree part of $f^{(n_1)}(\gamma_1)$ is so large, namely, it is at least $$|f^{(n_1-1)}(\gamma_1) |^{d - 3 + o(1)}
\ge \exp\(\vartheta_1^{n_1-1}\(d - 3 + o(1)\)\),$$ that the product of the other terms $$\left| \prod_{i=2}^{d-1}f^{(n_1)}(\gamma_i) \prod_{i=1}^{d-1} f^{(n_2)}(\gamma_i) \right|
\le \exp\((1+o(1))\sum_{i=1}^{d-1}\( \vartheta_2^{ n_1} + \vartheta_1^{n_2}\)\)$$ is not large enough to complement it up to a square.
Certainly this argument can be modified in several directions to cover many other scenarios, and with an appropriate generalisation of the $ABC$-conjecture and the argument of [@Lang; @Gran] to number fields, it can work without the assumption of the integrality of the critical points. We do not pursue this avenue here since, as we have mentioned, our goal is deriving a result without any additional conjectures on the class of polynomials we consider.
Finally, we remark on obtaining analogues of Theorems \[thm:lin sqr\] and \[thm:lin sqr-GRH\] when $f$ is an arbitrary polynomial with the property that its derivative is irreducible. In this case, following the same approach as in the proofs of Theorems \[thm:lin sqr\] and \[thm:lin sqr-GRH\], we get to the point when we have to discuss when ${\mathrm{Res}\(f^{(n_1)},f'\)} {\mathrm{Res}\(f^{(n_2)},f'\)}$ is a square. Using the irreducibility assumption on $f'$, we reduce this problem to analysing when $$N_{{\mathbb{Q}}(\gamma)/{\mathbb{Q}}}\(f^{(n_1)}(\gamma)f^{(n_2)}(\gamma)\) = y^2$$ for some $y \in {\mathbb{Q}}$, where $\gamma$ is one of the roots of $f'$ and $N_{{\mathbb{Q}}(\gamma)/{\mathbb{Q}}} : {\mathbb{Q}}(\gamma) \to {\mathbb{Q}}$ is the usual field norm map. To finalise our argument, under some natural assumptions on the polynomial $g\in{\mathbb{Z}}[X]$ (in our case $g=f^{(n_2-n_1)}$), we need an effective result for the height of $S$-integer solutions to the norm equation $$N_{{\mathbb{Q}}(\gamma)/{\mathbb{Q}}}(xg(x))=y^2,$$ similar to those in [@BEG].
Acknowledgement {#acknowledgement .unnumbered}
===============
The authors thank Andrea Ferraguti for feedback on an early version of the paper, for pointing out an imprecision in the initial statement of Theorem \[thm:lin sqr\], and for supplying the example at the end of Section \[sec:res\].
During the preparation of this work, L.M. was supported by the Austrian Science Fund (FWF): Project P31762, A.O. was supported by the Australian Research Council (ARC): Grant DP180100201, and I.S. was supported by the Australian Research Council (ARC): Grant DP170100786.
[99]{}
N. Ali, ‘Stabilité des polynômes’, [*Acta Arith.*]{}, [**119**]{} (2005), 53–63.
M. Ayad and D. L. McQuillan, ‘Irreducibility of the iterates of a quadratic polynomial over a field’, [*Acta Arith.*]{}, [**93**]{} (2000), 87–97; Corrigendum: [*Acta Arith.*]{}, [**99**]{} (2001), 97.
R. Benedetto, L. DeMarco, P. Ingram, R. Jones, M. Manes, J. H. Silverman, T. J. Tucker, ‘Current trends and open problems in arithmetic dynamics’, [*Bull. Amer. Math. Soc.*]{}, (to appear).
A. B[é]{}rczes, J.-H. Evertse and K. Győry, ‘Effective results for hyper- and superelliptic equations over number fields’, [*Publ. Math. Debrecen*]{}, [**82**]{} (2013), 727–756.
A. Ferraguti, ‘The set of stable primes for polynomial sequences with large Galois group’, [*Proc. Amer. Math. Soc.*]{}, [**146**]{} (2018), 2773–2784.
J. von zur Gathen and J. Gerhard. [*Modern computer algebra*]{}, Cambridge University Press, 1999.
D. Gomez-Perez, A. P. Nicol[á]{}s, ‘An estimate on the number of stable quadratic polynomials’, [*Finite Fields Appl.* ]{}, [**16**]{} (2010), no. 6, 401–405.
D. Gomez-Perez, A. P. Nicol[á]{}s, A. Ostafe and D. Sadornil, ‘Stable polynomials over finite fields’, [*Revista Matemática Iberoamericana*]{}, [**30**]{} (2014), 523–535.
A. Granville, ‘ABC allows us to count squarefrees’, [*Intern. Math. Res. Notices*]{} [**19**]{} (1998), 991–1009.
D. R. Heath-Brown, ‘The square sieve and consecutive squarefree numbers’, [*Math. Ann.*]{}, [**266**]{} (1984), 251–259.
D. R. Heath-Brown and G. Micheli, ‘Irreducible polynomials over finite fields produced by composition of quadratics’, [*Rev. Mat. Iberoam.*]{}, (to appear).
H. Iwaniec and E. Kowalski, [*Analytic number theory*]{}, Amer. Math. Soc., Providence, RI, 2004.
R. Jones, ‘The density of prime divisors in the arithmetic dynamics of quadratic polynomials’, [*J. Lond. Math. Soc.*]{}, [**78**]{} (2008), 523–544.
R. Jones, ‘An iterative construction of irreducible polynomials reducible modulo every prime’, [*J. Algebra*]{}, [**369**]{} (2012), 114–128.
R. Jones and N. Boston, ‘Settled polynomials over finite fields,’ [*Proc. Amer. Math. Soc.*]{}, [**140**]{} (2012), 1849–1863.
S. Konyagin and I. E. Shparlinski, ‘Quadratic non-residues in short intervals’, [*Proc. Amer. Math. Soc.*]{}, [**143**]{} (2015), 4261–4269.
M. Langevin, ‘Cas d’[é]{}galit[é]{} pour le th[é]{}or[è]{}me de Mason et applications de la conjecture (abc)’, [*C. R. Acad. Sci. Paris S[é]{}r. I Math.*]{}, [**317**]{} (1993), 441–444.
H. L. Montgomery and R. C. Vaughan, [*Multiplicative number theory I: Classical theory*]{}, Cambridge Univ. Press, Cambridge, 2006.
J. H. Silverman, [*The arithmetic of dynamical systems*]{}, Springer, New York, 2007.
G. Tenenbaum, [*Introduction to analytic and probabilistic number theory*]{}, Grad. Studies Math., vol. 163, Amer. Math. Soc., 2015.
---
abstract: 'Turán type inequalities for modified Bessel functions of the first kind are used to deduce some sharp lower and upper bounds for the asymptotic order parameter of the stochastic Kuramoto model. Moreover, an approximation obtained from the Lagrange inversion theorem and a rational approximation are given for the asymptotic order parameter.'
address:
- 'Department of Mathematics, Nanjing University of Information Science and Technology, 5 Panxin Rd, Pukou, Nanjing, Jiangsu, P.R. China'
- 'Institute of Applied Mathematics, Óbuda University, 1034 Budapest, Hungary'
- 'Department of Economics, Babeş-Bolyai University, Cluj-Napoca 400591, Romania'
author:
- 'István Mező${\dag}$'
- 'Árpád Baricz${\ddag}$'
title: Bounds for the asymptotic order parameter of the stochastic Kuramoto model
---
Dedicated to Boróka, Eszter and Koppány
Introduction
============
The Kuramoto model describes the phenomenon of collective synchronization; more precisely, it describes how the phases of coupled oscillators evolve in time, see [@kur] for more details. Recently Bertini, Giacomin and Pakdaman [@bertini] reviewed some results on the Kuramoto model from a statistical mechanics standpoint and, in particular, gave necessary and sufficient conditions for reversibility. In order to do this, Bertini, Giacomin and Pakdaman [@bertini p. 278] deduced some lower and upper bounds for the asymptotic order parameter, which involve the modified Bessel functions of the first kind of order zero and one. A few years later Sonnenschein and Schimansky-Geier [@bernard] obtained the asymptotic order parameter in closed form, which suggested a tighter upper bound for the corresponding scaling. Moreover, they elaborated the Gaussian approximation in complex networks with distributed degrees. In their study Sonnenschein and Schimansky-Geier [@bernard p. 3] proposed another upper bound for the asymptotic order parameter, but they presented their result without mathematical proof. All the same, by using Bernoulli’s inequality they verified that their upper bound is better than the upper bound of Bertini, Giacomin and Pakdaman [@bertini]. In this paper our aim is to make a contribution to this subject by showing the following:
1. The bounds presented in the above mentioned papers are correct and their proofs are based on some Turán type inequalities for modified Bessel functions of the first kind.
2. The constants in the upper bounds presented by Bertini, Giacomin, Pakdaman [@bertini] and Sonnenschein, Schimansky-Geier [@bernard] are the best, and thus their bounds cannot be improved.
3. The results presented in the above mentioned papers can be extended to modified Bessel functions of the first kind of arbitrary order, based on some interesting new and recently discovered Turán type inequalities for modified Bessel functions of the first kind.
4. It is possible to obtain another approximation for the asymptotic order parameter (than in the above mentioned papers) by means of the Lagrange’s inversion theorem and also a rational approximation.
As far as we know, the above mentioned subject has not yet been studied in detail from the mathematical point of view, and we believe that the obtained results may be useful for people working in statistical physics.
Bounds for the asymptotic order parameter
=========================================
In this section our aim is to discuss, complement and extend the results from [@bertini; @bernard] concerning bounds for the asymptotic order parameter of the stochastic Kuramoto model. Some new and recently discovered Turán type inequalities for modified Bessel functions of the first kind play an important role in this section. For more details on Turán type inequalities for modified Bessel functions of the first kind we refer to [@baricz] and to the references therein.
An alternative proof of a result on asymptotic order parameter.
---------------------------------------------------------------
Let us consider the transcendental equation $r=\Psi(2Kr),$ where $K>1,$ $\Psi(x)=I_1(x)/I_0(x)$ and $I_1,$ $I_0$ stand for the modified Bessel functions of the first kind of order $1$ and $0,$ respectively. Recently, in order to prove their main result about the spectrum of a self-adjoint linear operator, Bertini, Giacomin and Pakdaman [@bertini] presented the inequalities $$\label{ineq1}\sqrt{1-\frac{1}{K}}<r<\sqrt{1-\frac{1}{2K}}.$$ The clever proof of the left-hand side of was based on the well-known Turán type inequality $$I_1^2(x)-I_0(x)I_2(x)>0.$$ In what follows we would like to show that in fact the right-hand side of is also equivalent to a Turán type inequality involving modified Bessel functions of the first kind. To proceed, we use the same notation as in [@bertini]. To prove the right-hand side of we need to show that $$r^2+\frac{1}{2K}-1=\Psi^2(2Kr)+\frac{\Psi(2Kr)}{2Kr}-1<0,$$ that is, for $x>0$ we have $$\label{ineqpsi1}\Psi^2(x)+\frac{1}{x}\Psi(x)-1<0.$$ Now, by applying the identity [@bertini eq. 2.6] $$\label{quot1}\frac{I_1(x)}{I_0(x)}=\frac{x}{2}\left(1+\frac{x}{2}\frac{I_2(x)}{I_1(x)}\right)^{-1}$$ we obtain $$1-\frac{1}{x}\Psi(x)=\left({1+\frac{1}{x}\frac{I_1(x)}{I_2(x)}}\right)\left({1+\frac{2}{x}\frac{I_1(x)}{I_2(x)}}\right)^{-1},$$ which implies that is equivalent to $$\Psi^2(x)\left({1+\frac{2}{x}\frac{I_1(x)}{I_2(x)}}\right)<{1+\frac{1}{x}\frac{I_1(x)}{I_2(x)}},$$ which by means of the recurrence relation $$\label{rec}xI_0(x)-xI_2(x)=2I_1(x),$$ is equivalent to the Turán type inequality $$I_1^2(x)-I_0(x)I_2(x)<\frac{1}{x}I_0(x)I_1(x).$$ But, in view of the well-known Soni inequality $I_1(x)<I_0(x),$ the above Turán type inequality is a consequence of the stronger inequality [@baricz eq. 2.5] $$I_1^2(x)-I_0(x)I_2(x)<\frac{1}{x}I_1^2(x).$$ Since all of the above inequalities are valid for $x>0$ it follows that the right-hand side of is valid.
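The chain of equivalences above can be probed numerically; a quick sketch with mpmath confirms on a sample grid that $\Psi^2(x)+\Psi(x)/x-1$ and the Turán difference $I_1^2(x)-I_0(x)I_2(x)-I_1^2(x)/x$ are both negative:

```python
import mpmath as mp

mp.mp.dps = 30                      # extra precision: the differences are tiny for large x

def I(n, x):
    return mp.besseli(n, x)

xs = [mp.mpf(s) for s in ('0.1', '1', '5', '20')]

# (ineqpsi1): Psi^2(x) + Psi(x)/x - 1 < 0, with Psi = I_1/I_0
psi_vals = [(I(1, x) / I(0, x))**2 + I(1, x) / (x * I(0, x)) - 1 for x in xs]

# the stronger Turán type inequality: I_1^2 - I_0 I_2 < I_1^2 / x
turan_vals = [I(1, x)**2 - I(0, x) * I(2, x) - I(1, x)**2 / x for x in xs]
```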
The proof of a claimed result on asymptotic order parameter.
------------------------------------------------------------
Recently, Sonnenschein and Schimansky-Geier [@bernard] proposed (without proof) an improvement of the right-hand side of as follows $$\label{ineq2}r<\sqrt[4]{1-\frac{1}{K}}.$$ In the sequel we present a proof of , which is based also on a Turán type inequality. Note that to prove we need to show that $$r^4+\frac{1}{K}-1=\Psi^4(2Kr)+\frac{2\Psi(2Kr)}{2Kr}-1<0,$$ that is, for $x>0$ we have $$\label{ineqpsi2}\Psi^4(x)+\frac{2}{x}\Psi(x)-1<0.$$ Now, by applying the identity we obtain $$1-\frac{2}{x}\Psi(x)=\left({1+\frac{2}{x}\frac{I_1(x)}{I_2(x)}}\right)^{-1},$$ which implies that is equivalent to $$\Psi^4(x)\left({1+\frac{2}{x}\frac{I_1(x)}{I_2(x)}}\right)<1,$$ which by means of the recurrence relation is equivalent to the Turán type inequality $$\label{edin}I_1^4(x)<I_0^3(x)I_2(x).$$ Since all of the above inequalities are valid for $x>0$ it follows that indeed the right-hand side of is valid. The Turán type inequality is the limiting case of the next Turán type inequality (see [@baed p. 592]) when $\nu\to-1$ $$I_{\nu+1}^3(x)I_{\nu+3}(x)>I_{\nu}(x)I_{\nu+2}^3(x),\ \ x>0,\ \nu>-1,$$ and it was shown by Idier and Collewet [@idier] that we can take the limit and the inequality remains true.
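Both and the equivalent Turán type inequality are likewise easy to test numerically; a sketch with mpmath (high precision is used because both inequalities are asymptotically sharp):

```python
import mpmath as mp

mp.mp.dps = 30

def I(n, x):
    return mp.besseli(n, x)

xs = [mp.mpf(s) for s in ('0.1', '1', '5', '20')]

# (ineqpsi2): Psi^4(x) + 2 Psi(x)/x - 1 < 0
psi4 = [(I(1, x) / I(0, x))**4 + 2 * I(1, x) / (x * I(0, x)) - 1 for x in xs]

# (edin): I_1^4(x) < I_0^3(x) I_2(x)
edin = [I(1, x)**4 - I(0, x)**3 * I(2, x) for x in xs]
```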
Sharpness of the results concerning the asymptotic order parameter
------------------------------------------------------------------
It is important to mention here that the powers in the left-hand side of the inequality , and in , that is $$\label{ineq3}\sqrt{1-\frac{1}{K}}<r<\sqrt[4]{1-\frac{1}{K}},$$ are the best possible. To show this observe that the inequality is equivalent to $$2<\frac{\log\left(1-\frac{1}{K}\right)}{\log r}<4\ \ \ \ \mbox{or to}\ \ \ \ 2<\frac{\log\left(1-\frac{2}{x}\Psi(x)\right)}{\log\Psi(x)}<4.$$ Here we used the inequality $I_1(x)<I_0(x),$ where $x>0,$ which shows that $\Psi$ maps $(0,\infty)$ into $(0,1),$ and thus $\log\Psi(x)<0$ for $x>0.$ Now we show that in the above inequalities the constants $2$ and $4$ are the best possible in the sense that the inequality $$\alpha<\frac{\log\left(1-\frac{2}{x}\Psi(x)\right)}{\log\Psi(x)}<\beta$$ is valid for all $x>0$ with the optimal parameters $\alpha=2$ and $\beta=4.$ This implies that in the powers $\frac{1}{2}$ and $\frac{1}{4}$ are indeed the best possible. Thus, let $$\lambda(x)=\frac{\log\left(1-\frac2x\Psi(x)\right)}{\log\Psi(x)}.$$ We prove that $\alpha=\lim\limits_{x\to0}\lambda(x)=2.$ Because of the presence of the logarithm, $\lambda(x)$ has no Taylor series around $x=0$, but it still can be expanded as follows $$\lambda(x)=\frac{\log (8)-2 \log (x)}{\log (2)-\log (x)}+x^2\frac{2 \log (x)+4 \log (2)-3 \log (8)}{24 (\log (2)-\log (x))^2}+\cdots,$$ from where the above limit immediately comes, as we stated. Note that by using the Mittag-Leffler expansion of $\Psi(x)$ it is also possible to obtain the above limit. Namely, since $$\Psi(x)=\frac{I_1(x)}{I_0(x)}=\sum_{n\geq1}\frac{2x}{x^2+j_{0,n}^2},$$ where $j_{0,n}$ stands for the $n$th positive zero of the Bessel function $J_0,$ by using the Bernoulli-l’Hospital’s rule twice we obtain that $$\lim_{x\to0}\lambda(x)=8\lim_{x\to0}\frac{\displaystyle\sum_{n\geq1}\frac{j_{0,n}^2-x^2}{(x^2+j_{0,n}^2)^2}\displaystyle\sum_{n\geq1}\frac{x}{(x^2+j_{0,n}^2)^2}+
\displaystyle\sum_{n\geq1}\frac{x}{x^2+j_{0,n}^2}\displaystyle\sum_{n\geq1}\frac{j_{0,n}^2-3x^2}{(x^2+j_{0,n}^2)^3}}{4\displaystyle\sum_{n\geq1}\frac{2x}{(x^2+j_{0,n}^2)^2}
\displaystyle\sum_{n\geq1}\frac{j_{0,n}^2-x^2}{(x^2+j_{0,n}^2)^2}+\left(1-4\displaystyle\sum_{n\geq1}\frac{1}{x^2+j_{0,n}^2}\right)\displaystyle\sum_{n\geq1}\frac{2x(x^2-3j_{0,n}^2)}{(x^2+j_{0,n}^2)^3}}=2.$$
Now, we are going to prove that the best constant $\beta$ equals $\beta=\lim\limits_{x\to\infty}\lambda(x)=4.$ The well-known asymptotic estimation $$\label{assym}I_\nu(x)=\frac{e^x}{\sqrt{2\pi x}}\left(1-\frac{4\nu^2-1}{8x}+O\left(\frac{1}{x^2}\right)\right)$$ yields that as $x$ grows $$\Psi(x)=\frac{8x-4+1+O\left(\frac{1}{x}\right)}{8x+1+O\left(\frac{1}{x}\right)}.$$ Substituting this into the definition of $\lambda(x)$ we can see that asymptotically it equals $$\lambda(x)=\frac{\log(x-2+O\left(\frac{1}{x}\right))-\log(x)}{\log\left(8x-4+1+O\left(\frac{1}{x}\right)\right)-\log\left(8x+1+O\left(\frac{1}{x}\right)\right)}.$$ The differences of logarithms in both the numerator and denominator can be expanded at infinity by the expansion $$\log(ax+b)-\log(cx+d)=\log(a)-\log(c)+\left(\frac{b}{a}-\frac{d}{c}\right)\frac1x+O\left(\frac{1}{x^2}\right),\quad ac\neq0.$$ What we get is the following $$\lambda(x)=\frac{\frac2x+O\left(\frac{1}{x^2}\right)}{\left(\frac{1}{2}\right)\frac1x+O\left(\frac{1}{x^2}\right)}=
\frac{4+O\left(\frac{1}{x}\right)}{1+O\left(\frac{1}{x}\right)}$$ for large $x$, from which the result follows.
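These two limits, together with the bounds $2<\lambda(x)<4$, are easy to observe numerically; a sketch using mpmath (the convergence to $2$ at the origin is only logarithmic, so the value there is still visibly above $2$):

```python
import mpmath as mp

mp.mp.dps = 30

def Psi(x):
    return mp.besseli(1, x) / mp.besseli(0, x)

def lam(x):
    return mp.log(1 - 2 * Psi(x) / x) / mp.log(Psi(x))

near_zero = lam(mp.mpf('0.001'))    # -> 2, very slowly (at a log rate)
at_infinity = lam(mp.mpf('1e6'))    # -> 4
```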
Extension of to the general case
--------------------------------
Let us consider the function $\Psi_{\nu}:(0,\infty)\to(0,\infty),$ defined by $\Psi_{\nu}(x)=I_{\nu+1}(x)/I_{\nu}(x),$ where $I_{\nu}$ stands for the modified Bessel function of the first kind of order $\nu.$ We are going to show that it is possible to extend the right-hand side of to the general case. Thus, we consider the transcendental equation $r=\Psi_{\nu}(2Kr)$ and we show that the next inequality holds true for all $K>\nu+1,$ $\nu\geq 0$ and $r>0$ $$\label{ineq4}r<\sqrt{1-\frac{1}{2K}}.$$ However, we first show that if $\nu\ge 0$ and $K>\nu+1,$ then the equation $$r-\Psi_\nu(2Kr)=0\label{Kcond}$$ has a positive real solution. Observe that our equation has a trivial solution at $r=0$. Moreover, the function $r-\Psi_\nu(2Kr)$ tends to infinity as $r$ grows. Hence, if the slope of the continuous function $r-\Psi_\nu(2Kr)$ is negative at the origin, then surely has a positive real solution. The derivative is as follows $$(r-\Psi_\nu(2Kr))'=1+\frac{K I_{\nu+1}(2 K r) (I_{\nu-1}(2 K r)+I_{\nu+1}(2 K r))}{I_{\nu}(2 K r){}^2}-\frac{K (I_{\nu}(2 K r)+I_{\nu+2}(2 K r))}{I_{\nu}(2 K r)}.$$ We need to take the limit when $r=0$. This can be done by using the Bernoulli-l’Hospital’s rule. After some simplification we obtain $$\left.(r-\Psi_\nu(2Kr))'\right|_{r=0}=1-\frac{K}{\nu+1},$$ from which it follows that if $1-\frac{K}{\nu+1}<0,$ then has a positive real solution.
Now, to prove the inequality we need to show that $$r^2+\frac{1}{2K}-1=\Psi_{\nu}^2(2Kr)+\frac{\Psi_{\nu}(2Kr)}{2Kr}-1<0,$$ that is, for $x>0$ we have $$\label{ineqpsigen}\Psi_{\nu}^2(x)+\frac{1}{x}\Psi_{\nu}(x)-1<0.$$ By applying the identity $$\label{quotgen}\frac{I_{\nu+1}(x)}{I_{\nu}(x)}=\frac{x}{2}\left(\nu+1+\frac{x}{2}\frac{I_{\nu+2}(x)}{I_{\nu+1}(x)}\right)^{-1}$$ we obtain $$1-\frac{1}{x}\Psi_{\nu}(x)=\left({1+\frac{2\nu+1}{x}\frac{I_{\nu+1}(x)}{I_{\nu+2}(x)}}\right)
\left({1+\frac{2(\nu+1)}{x}\frac{I_{\nu+1}(x)}{I_{\nu+2}(x)}}\right)^{-1},$$ which implies that is equivalent to $$\Psi_{\nu}^2(x)\left({1+\frac{2(\nu+1)}{x}\frac{I_{\nu+1}(x)}{I_{\nu+2}(x)}}\right)<{1+\frac{2\nu+1}{x}\frac{I_{\nu+1}(x)}{I_{\nu+2}(x)}},$$ which by means of the recurrence relation $$\label{recgen}xI_{\nu}(x)-xI_{\nu+2}(x)=2(\nu+1)I_{\nu+1}(x),$$ is equivalent to the Turán type inequality $$I_{\nu+1}^2(x)-I_{\nu}(x)I_{\nu+2}(x)<\frac{2\nu+1}{x}I_{\nu}(x)I_{\nu+1}(x).$$ But, in view of the well-known Soni inequality $I_{\nu+1}(x)<I_{\nu}(x),$ $\nu>-\frac{1}{2},$ $x>0,$ the above Turán type inequality is a consequence of the stronger inequality [@baricz eq. 2.5] $$\label{turanb}I_{\nu+1}^2(x)-I_{\nu}(x)I_{\nu+2}(x)<\frac{1}{x}I_{\nu+1}^2(x),$$ which holds for $\nu\geq-\frac{1}{2}$ and $x>0.$ Since all of the above inequalities are valid for $x>0$ and $\nu\geq0,$ it follows that indeed the inequality is valid.
Moreover, it can be shown that the left-hand side of can also be extended to the general case when $\nu\geq\frac{1}{2},$ and the resulting inequality is reversed (compared to the left-hand side of ) and improves the inequality . To show the inequality $$\label{ineq5}r<\sqrt{1-\frac{1}{K}}$$ it is enough to show that for $x>0$ we have $$\label{ineqpsigen2}\Psi_{\nu}^2(x)+\frac{2}{x}\Psi_{\nu}(x)-1<0.$$ By using the above steps in view of we obtain that $$1-\frac{2}{x}\Psi_{\nu}(x)=\left({1+\frac{2\nu}{x}\frac{I_{\nu+1}(x)}{I_{\nu+2}(x)}}\right)
\left({1+\frac{2(\nu+1)}{x}\frac{I_{\nu+1}(x)}{I_{\nu+2}(x)}}\right)^{-1},$$ which implies that is equivalent to $$\Psi_{\nu}^2(x)\left({1+\frac{2(\nu+1)}{x}\frac{I_{\nu+1}(x)}{I_{\nu+2}(x)}}\right)<{1+\frac{2\nu}{x}\frac{I_{\nu+1}(x)}{I_{\nu+2}(x)}},$$ which by means of the recurrence relation is equivalent to the Turán type inequality $$\label{turaninter}I_{\nu+1}^2(x)-I_{\nu}(x)I_{\nu+2}(x)<\frac{2\nu}{x}I_{\nu}(x)I_{\nu+1}(x).$$ But, in view of the above Soni inequality the above Turán type inequality is a consequence of the stronger inequality when $\nu\geq\frac{1}{2}.$
Numerical experiments suggest that when $\nu\in\left(0,\frac{1}{2}\right)$ the equation $$I_{\nu+1}^2(x)-I_{\nu}(x)I_{\nu+2}(x)-\frac{2\nu}{x}I_{\nu}(x)I_{\nu+1}(x)=0$$ has a solution which depends on $\nu.$ Since is equivalent to , this implies that is not true for all $K>\nu+1,$ $\nu\in\left(0,\frac{1}{2}\right)$ and $r>0.$
Sharpness of the extension
---------------------------
Let us consider the extension of $\lambda,$ that is, $$\lambda_\nu(x)=\frac{\log\left(1-\frac2x\Psi_\nu(x)\right)}{\log\Psi_\nu(x)}.$$ We are going to prove that the best constant $\beta_\nu$ for which $\lambda_{\nu}(x)<\beta_{\nu}$ for $x>0$ and $\nu\geq0,$ equals $$\beta_\nu=\lim_{x\to\infty}\lambda_\nu(x)=\frac{4}{2\nu+1}.$$ In particular, $\beta_0=\beta=4.$ The steps are the same as in the special case above when $\nu=0.$ The asymptotic estimation yields that as $x$ grows $$\label{assymp2}\Psi_\nu(x)=\frac{8x-4(\nu+1)^2+1+O\left(\frac{1}{x}\right)}{8x-4\nu^2+1+O\left(\frac{1}{x}\right)}.$$ Substituting this into the definition of $\lambda_\nu(x)$ we can see that asymptotically it equals $$\lambda_\nu(x)=\frac{\log(x-2+O\left(\frac{1}{x}\right))-\log(x)}
{\log\left(8x-4(\nu+1)^2+1+O\left(\frac{1}{x}\right)\right)-\log\left(8x-4\nu^2+1+O\left(\frac{1}{x}\right)\right)},$$ and consequently $$\lambda_\nu(x)=\frac{\frac2x+O\left(\frac{1}{x^2}\right)}{\left(\frac{(\nu+1)^2}{2}-\frac{\nu^2}{2}\right)\frac1x+O\left(\frac{1}{x^2}\right)}=\frac{4+O\left(\frac{1}{x}\right)}{2\nu+1+O\left(\frac{1}{x}\right)}$$ for large $x$, from which the result follows. The above discussion actually shows that when $\nu\geq \frac{1}{2}$ the extension is far from being the best possible one. The next natural extension of improves the extension when $\nu\geq \frac{1}{2}$ and the power $\frac{\nu}{2}+\frac{1}{4}$ appearing in the inequality is the best possible $$\label{ineq6}r<\sqrt[4]{\left(1-\frac{1}{K}\right)^{2\nu+1}}.$$
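The limit $\beta_\nu=4/(2\nu+1)$ can be observed numerically for several orders at once; a sketch with mpmath, evaluating $\lambda_\nu$ at a large sample point:

```python
import mpmath as mp

mp.mp.dps = 30

def lam_nu(nu, x):
    P = mp.besseli(nu + 1, x) / mp.besseli(nu, x)   # Psi_nu(x)
    return mp.log(1 - 2 * P / x) / mp.log(P)

# lambda_nu(x) approaches 4 / (2 nu + 1) as x -> infinity
x = mp.mpf('1e6')
limits = {nu: float(lam_nu(nu, x)) for nu in range(4)}
```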
![The graph of the function $x\mapsto \sqrt[2a+1]{\Psi_{a}^4(x)}+\frac{2}{x}\Psi_{a}(x)-1$ on $[0,10]$ in the case when $a\in\{0,1,2,3\}.$[]{data-label="fig1"}](bounds.eps){width="13cm"}
Sharp extension of to the general case
--------------------------------------
Now, we are going to show that is valid for all $K>\nu+1,$ $\nu\geq 0.3$ and $r>0.$ Note that the inequality is equivalent to $$\label{ineq7}\sqrt[2\nu+1]{\Psi_{\nu}^4(x)}+\frac{2}{x}\Psi_{\nu}(x)-1<0,$$ where $\nu\geq 0,$ $x>0.$ Using the Amos type bound $\Psi_{\nu}(x)<\Omega_{\nu}(x),$ where $\nu\geq0,$ $x>0$ and [@hg p. 94] $$\Omega_{\nu}(x)=\frac{x}{\sqrt{x^2+\left(\nu+\frac{1}{2}\right)\left(\nu+\frac{3}{2}\right)}+\nu+\frac{1}{2}},$$ we obtain that $$\sqrt[2\nu+1]{\Psi_{\nu}^4(x)}+\frac{2}{x}\Psi_{\nu}(x)-1<\sqrt[2\nu+1]{\Omega_{\nu}^4(x)}+\frac{2}{x}\Omega_{\nu}(x)-1<0,$$ where $\nu\geq0.3$ and $x>0.$ Here we used the fact that, based on numerical experiments, the smallest value of $\nu$ for which the expression $\sqrt[2\nu+1]{\Omega_{\nu}^4(x)}+\frac{2}{x}\Omega_{\nu}(x)-1$ is still negative for each $x>0$ is $0.3.$ When $\nu=0.3$ the above expression tends to zero as $x\to0.$ We believe, but were unable to prove that is also valid when $\nu\in(0,0.3),$ $K>\nu+1$ and $r>0.$ Numerical experiments (see also Fig. \[fig1\]) strongly suggest the validity of the above claim.
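The numerical claim about the majorant $\Omega_\nu$ can be reproduced directly, since it involves only elementary functions and no Bessel evaluations; a sketch evaluating $\sqrt[2\nu+1]{\Omega_{\nu}^4(x)}+\frac{2}{x}\Omega_{\nu}(x)-1$ on a small grid (the grid itself is an arbitrary choice):

```python
import mpmath as mp

mp.mp.dps = 30

def Omega(nu, x):
    # Amos type upper bound for I_{nu+1}(x) / I_nu(x)
    half, threehalf = mp.mpf('0.5'), mp.mpf('1.5')
    return x / (mp.sqrt(x**2 + (nu + half) * (nu + threehalf)) + nu + half)

def majorant(nu, x):
    return Omega(nu, x) ** (mp.mpf(4) / (2 * nu + 1)) + 2 * Omega(nu, x) / x - 1

# negative for nu >= 0.3 on the sampled grid, in line with the claim
vals = [majorant(nu, x)
        for nu in (mp.mpf('0.3'), mp.mpf('0.5'), 1, 2)
        for x in (mp.mpf('0.5'), 2, 10, 50)]
```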
It is also worth mentioning that the inequality is actually equivalent to the Turán type inequality $$\sqrt[2\nu+1]{\left(\frac{I_{\nu+1}(x)}{I_{\nu}(x)}\right)^{4}}\cdot\frac{I_{\nu}(x)}{I_{\nu+2}(x)}<1+\frac{2\nu}{x}\frac{I_{\nu+1}(x)}{I_{\nu+2}(x)},$$ where $\nu\geq 0.3$ and $x>0.$ This Turán type inequality is new, and we believe, but were unable to prove, that it also holds when $\nu\in(0,0.3)$ and $x>0.$ The case $\nu=0$ has already been considered above.
Another sharp extension of the inequality to the general case
----------------------------------------------
In this subsection we propose another way of extending the inequality that keeps its sharpness. We believe that if $\nu\geq0,$ $K>\nu+1$ and $r>0,$ then we have $$\label{ineq8}r<\sqrt[4(\nu+1)]{\left(1-\frac{\nu+1}{K}\right)^{2\nu+1}}.$$ It is important to mention here that by using the Bernoulli inequality $(1+x)^a\geq 1+ax,$ for $x=-\frac{1}{K}>-1$ and $a=\nu+1>0,$ we clearly have $$\sqrt[4(\nu+1)]{\left(1-\frac{\nu+1}{K}\right)^{2\nu+1}}<\sqrt[4]{\left(1-\frac{1}{K}\right)^{2\nu+1}},$$ or in other words, the new inequality improves the previous one. To prove it we would need to show that $$\label{ineqpsigen3}\Psi_{\nu}^{\frac{4(\nu+1)}{2\nu+1}}(x)+\frac{2(\nu+1)}{x}\Psi_{\nu}(x)-1<0.$$ By applying the identity \[quotgen\] and the recurrence relation \[recgen\] we obtain $$1-\frac{2(\nu+1)}{x}\Psi_{\nu}(x)=
\left(1+\frac{2(\nu+1)}{x}\frac{I_{\nu+1}(x)}{I_{\nu+2}(x)}\right)^{-1}=\frac{I_{\nu+2}(x)}{I_{\nu}(x)},$$ which implies that it is equivalent to the Turán type inequality $$\label{ineq9}I_{\nu+1}^{4\nu+4}(x)<I_{\nu+2}^{2\nu+1}(x)I_{\nu}^{2\nu+3}(x),$$ which can be written as $$\Psi_{\nu}^{2\nu+3}(x)<\Psi_{\nu+1}^{2\nu+1}(x),$$ where $x>0$ and $\nu\geq0.$ However, we were able to show the Turán type inequality only for small values of $x.$ All the same, we believe that it is true for all $x>0$ and $\nu\geq0,$ and this open problem may be of interest for further research. Using the recurrence relation $$\Psi_{\nu}(x)\left(\frac{2(\nu+1)}{x}+\Psi_{\nu+1}(x)\right)=1$$ and the Amos bound $\Psi_{\nu}(x)>\Gamma_{\nu}(x),$ where $\nu\geq0,$ $x>0$ and [@amos] $$\Gamma_{\nu}(x)=\frac{x}{\sqrt{x^2+\left(\nu+\frac{3}{2}\right)^{2}}+\nu+\frac{1}{2}},$$ we obtain that $$\frac{\Psi_{\nu+1}^{2\nu+1}(x)}{\Psi_{\nu}^{2\nu+3}(x)}=\Psi_{\nu+1}^{2\nu+1}(x)\left(\frac{2(\nu+1)}{x}+\Psi_{\nu+1}(x)\right)^{2\nu+3}>
\Gamma_{\nu+1}^{2\nu+1}(x)\left(\frac{2(\nu+1)}{x}+\Gamma_{\nu+1}(x)\right)^{2\nu+3}>1$$ for $x\in(0,x_{\nu})$ and $\nu\geq0,$ where $x_{\nu}$ is the unique positive root of the equation $$\Gamma_{\nu+1}^{2\nu+1}(x)\left(\frac{2(\nu+1)}{x}+\Gamma_{\nu+1}(x)\right)^{2\nu+3}=1.$$
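The open Turán-type inequality above can at least be probed numerically. The sketch below (our code; $\Psi_{\nu}(x)=I_{\nu+1}(x)/I_{\nu}(x)$ as in the text) finds no counterexample on a grid:

```python
# Numerical evidence for the open Turan-type inequality
#   Psi_nu(x)^{2 nu + 3} < Psi_{nu+1}(x)^{2 nu + 1},  x > 0, nu >= 0,
# equivalently I_{nu+1}^{4nu+4} < I_{nu+2}^{2nu+1} I_nu^{2nu+3}.
# The text proves it only for small x; this is a grid check, not a proof.
import numpy as np
from scipy.special import iv


def psi(nu, x):
    """Bessel ratio I_{nu+1}(x) / I_nu(x)."""
    return iv(nu + 1.0, x) / iv(nu, x)


xs = np.linspace(0.1, 40.0, 1500)
for nu in (0.0, 0.5, 1.0, 2.0, 5.0):
    lhs = psi(nu, xs) ** (2.0 * nu + 3.0)
    rhs = psi(nu + 1.0, xs) ** (2.0 * nu + 1.0)
    assert np.all(lhs < rhs), nu
```

The margin shrinks quickly as $x$ grows (at $\nu=0$, $x=40$ it is already of order $10^{-5}$), which is consistent with the difficulty of proving the inequality beyond small $x$.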
Finally, we mention that the inequality is also sharp. Namely, if we consider another extension of $\lambda,$ that is, $$\xi_\nu(x)=\frac{\log\left(1-\frac{2(\nu+1)}{x}\Psi_\nu(x)\right)}{\log\Psi_\nu(x)},$$ then the best constant $\gamma_\nu$ for which $\xi_{\nu}(x)<\gamma_{\nu}$ for $x>0$ and $\nu\geq0,$ equals $$\gamma_\nu=\lim_{x\to\infty}\xi_\nu(x)=\frac{4(\nu+1)}{2\nu+1}.$$ In particular, $\gamma_0=\beta=4.$ The steps here are also the same as in the special case above when $\nu=0.$ Recall the asymptotic estimation of $\Psi_\nu(x)$ for large $x$; substituting it into the definition of $\xi_\nu(x)$ we can see that asymptotically it equals $$\xi_\nu(x)=\frac{\log(x-2(\nu+1)+O\left(\frac{1}{x}\right))-\log(x)}
{\log\left(8x-4(\nu+1)^2+1+O\left(\frac{1}{x}\right)\right)-\log\left(8x-4\nu^2+1+O\left(\frac{1}{x}\right)\right)},$$ and consequently $$\xi_\nu(x)=\frac{\frac{2(\nu+1)}{x}+O\left(\frac{1}{x^2}\right)}{\left(\frac{(\nu+1)^2}{2}-\frac{\nu^2}{2}\right)\frac1x+O\left(\frac{1}{x^2}\right)}=
\frac{4(\nu+1)+O\left(\frac{1}{x}\right)}{2\nu+1+O\left(\frac{1}{x}\right)}$$ for large $x$, from which the result follows.
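The limiting behavior of $\xi_\nu$ can be confirmed numerically. The sketch below (our code; $\Psi_\nu=I_{\nu+1}/I_\nu$ as before) uses exponentially scaled Bessel functions so that large arguments do not overflow:

```python
# Numerical illustration that xi_nu(x) stays below and approaches
#   gamma_nu = 4 (nu + 1) / (2 nu + 1)   as x -> infinity.
# A sketch; xi_nu is defined exactly as in the text.
import numpy as np
from scipy.special import ive  # exponentially scaled I_nu, safe for large x


def psi(nu, x):
    """I_{nu+1}(x)/I_nu(x); the e^{-x} scaling factors cancel in the ratio."""
    return ive(nu + 1.0, x) / ive(nu, x)


def xi(nu, x):
    p = psi(nu, x)
    return np.log(1.0 - 2.0 * (nu + 1.0) * p / x) / np.log(p)


for nu in (0.0, 1.0, 2.5):
    gamma = 4.0 * (nu + 1.0) / (2.0 * nu + 1.0)
    vals = xi(nu, np.array([10.0, 100.0, 1000.0]))
    assert np.all(vals < gamma)          # gamma_nu is an upper bound
    assert abs(vals[-1] - gamma) < 2e-2  # and it is approached for large x
```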
Approximation of $r(K)$ from Lagrange’s inversion and a rational approximation
==============================================================================
In this section our aim is to propose two other approximations for the asymptotic order parameter of the stochastic Kuramoto model. The first one is deduced by using the Lagrange inversion theorem, and the second one is a rational approximation.
![The graph of the function $K\mapsto r(K)$ on $[0,3]$ together with the bounds and the approximation.[]{data-label="fig2"}](graph2.eps){width="13cm"}
Approximation from Lagrange inversion
-------------------------------------
We introduce the function $f:(0,\infty)\to\mathbb{R},$ defined by $f(r)=r-\Psi(2Kr).$ If $0<K\le1$, then the equation $f(r)=0$ has only one non-negative real root, namely $r=0$. But if $K>1$, this transcendental equation has an additional, non-trivial solution, which we denote by $r(K)$. Recall that the sharper upper estimate $$r(K)<\sqrt[4]{1-\frac{1}{K}}$$ is valid. Thanks to the analyticity of $f$ in a neighborhood of its root, one can use the Lagrange inversion theorem in its simplest form to establish a better approximation to $r(K)$. This approximation for $K>1$ reads as $$L(K)=A(K)+\frac{\Psi(s)-A(K)}{1-\Psi(s)/A(K)+2K\Psi^2(s)-2KI_2(s)/I_0(s)},\label{liestim}$$ where $$A(K)=\sqrt[4]{1-\frac{1}{K}},\quad\mbox{and}\quad s=2KA(K).$$ The above expression is nothing else but the zeroth order approximation plus the first order term in the inverse series of $f$ around the root estimate $A(K)$. The performance of this approximation is shown in Fig. \[fig2\]. Here the upper, blue plot is the upper bound $A(K)=\sqrt[4]{1-\frac1K}$, the bottom red plot is the lower bound $\sqrt{1-\frac{1}{K}}$, the black one is the theoretical solution $r(K)$, while the grey plot is our approximation $L(K)$. One can see that for all $K>1$ the Lagrange estimate approximates the theoretical $r(K)$ best. Numerical calculations show that for $K\ge2.8$ the approximation $L(K)$ already gives six digits of accuracy. That $L(K)$ is really better than $A(K)$ can easily be seen independently of the above graph. Indeed, the series defined by the Lagrange inversion theorem converges to the solution if the center is close enough to the theoretical solution. Since we started the approximation from the point $A(K)$, this latter requirement is satisfied, and $A(K)$ is the zeroth order approximation. Then one more term in the Lagrange formula (resulting in $L(K)$) brings us even closer to $r(K)$.
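To make the construction concrete, here is a short numerical sketch (our code; the formula for $L(K)$ and the bracketing bounds are the ones above, with $\Psi=I_1/I_0$):

```python
# Lagrange-inversion approximation L(K) versus the numerically computed
# root r(K) of r = Psi(2 K r), with Psi = I_1/I_0.  Function names are
# ours; the formulas are the text's.
import numpy as np
from scipy.optimize import brentq
from scipy.special import iv


def Psi(x):
    return iv(1.0, x) / iv(0.0, x)


def A(K):
    """Algebraic upper bound (1 - 1/K)^{1/4}."""
    return (1.0 - 1.0 / K) ** 0.25


def r(K):
    """Positive root of f(r) = r - Psi(2 K r) for K > 1 (r = 0 excluded).

    The known bounds sqrt(1 - 1/K) < r(K) < A(K) bracket the root.
    """
    lo, hi = np.sqrt(1.0 - 1.0 / K), A(K)
    return brentq(lambda t: t - Psi(2.0 * K * t), lo, hi, xtol=1e-14)


def L(K):
    a = A(K)
    s = 2.0 * K * a
    ps = Psi(s)
    denom = 1.0 - ps / a + 2.0 * K * ps**2 - 2.0 * K * iv(2.0, s) / iv(0.0, s)
    return a + (ps - a) / denom


for K in (1.5, 2.0, 5.0, 10.0):
    assert np.sqrt(1.0 - 1.0 / K) < r(K) < A(K)
    assert abs(L(K) - r(K)) < abs(A(K) - r(K))  # L improves on A
```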
![The graph of the function $K\mapsto r(K)$ on $[0,4]$ together with the rational approximation.[]{data-label="fig3"}](PolyApprox2.eps){width="13cm"}
A rational approximation
------------------------
The advantage of $A(K)$ is that it is algebraic. While $L(K)$ approximates the theoretical solution better, it is transcendental since it contains transcendental functions. In this section we find an approximation which is not simply algebraic but *rational*. We use the well-known asymptotic expansion $$I_\nu(x)=\frac{e^x}{\sqrt{2\pi x}}\left(1-\frac{4\nu^2-1}{8x}+\frac{\left(4 \nu^2-1\right) \left(4 \nu^2-9\right)}{128x^2}-\frac{\left(4 \nu^2-1\right) \left(4 \nu^2-9\right) \left(4 \nu^2-25\right)}{3072x^3}+O\left(\frac{1}{x^4}\right)\right).$$ We truncate this at the “$O$” term, and substitute it into Eq. \[liestim\]. Thus we get a fraction in which the numerator and denominator are polynomials in $K$ and $A(K)$. Since $A(K)\approx1$ as $K$ grows, we simply write 1 in place of $A(K)$. After a simplification we get the following expression $$L_{\textrm{pol}}(K)=\frac{4 K \left(1048576 K^5-393216 K^4-276480 K^3+40320 K^2-7560 K-1575\right)}{4194304 K^6-524288 K^5-843776 K^4+376320 K^3+3936 K^2+540 K+3375}.$$
This is not, of course, better than $L(K)$. But $L_{\textrm{pol}}(K)$ offers a rather good rational approximation to $r(K)$, comparable at least with $A(K)$. The performance of $L_{\textrm{pol}}(K)$ is shown in Fig. \[fig3\]. Here the blue plot represents $A(K)$, the grey line is of $L_{\textrm{pol}}(K)$, while the bottom black plot is of $r(K)$. For some values we calculated the differences of $L_{\textrm{pol}}(K)$ and $A(K)$ with the theoretical solution:
---------------------------- ---------- ----------- ------------- -------------- ---------------------
$K$ 1.5 2.0 5.0 10 100
$A(K)-r(K)$ 0.035677 0.009434 0.0001994 0.00001936 $1.59\cdot10^{-8}$
$L_{\textrm{pol}}(K)-r(K)$ 0.02818 0.0042565 $-0.000234$ $-0.0000372$ $-4.25\cdot10^{-8}$
---------------------------- ---------- ----------- ------------- -------------- ---------------------
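The table entries can be reproduced numerically. The sketch below (our code) evaluates $L_{\textrm{pol}}(K)$ from the displayed polynomial and recomputes $r(K)$ as the positive root of $r=\Psi(2Kr)$:

```python
# Reproducing the tabulated differences L_pol(K) - r(K); only the
# polynomial coefficients are taken from the text.
import numpy as np
from scipy.optimize import brentq
from scipy.special import iv


def r(K):
    """Positive root of r = Psi(2 K r), bracketed by the known bounds."""
    f = lambda t: t - iv(1.0, 2.0 * K * t) / iv(0.0, 2.0 * K * t)
    return brentq(f, np.sqrt(1.0 - 1.0 / K), (1.0 - 1.0 / K) ** 0.25, xtol=1e-14)


def L_pol(K):
    num = 4.0 * K * (1048576 * K**5 - 393216 * K**4 - 276480 * K**3
                     + 40320 * K**2 - 7560 * K - 1575)
    den = (4194304 * K**6 - 524288 * K**5 - 843776 * K**4
           + 376320 * K**3 + 3936 * K**2 + 540 * K + 3375)
    return num / den


# First two columns of the table:
assert abs(L_pol(1.5) - r(1.5) - 0.02818) < 5e-4
assert abs(L_pol(2.0) - r(2.0) - 0.0042565) < 1e-4
```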
<span style="font-variant:small-caps;">D.E. Amos</span>, Computation of modified Bessel functions and their ratios, [*Math. Comp.*]{} 28(125) (1974) 239–251.
<span style="font-variant:small-caps;">Á. Baricz</span>, Bounds for modified Bessel functions of the first and second kinds, [*Proc. Edinb. Math. Soc.*]{} 53(3) (2010) 575–599.
<span style="font-variant:small-caps;">Á. Baricz</span>, Bounds for Turánians of modified Bessel functions, [*Expo. Math.*]{} 33(2) (2015) 223–251.
<span style="font-variant:small-caps;">L. Bertini, G. Giacomin, K. Pakdaman</span>, Dynamical aspects of mean field plane rotators and the Kuramoto model, [*J. Stat. Phys.*]{} 138 (2010) 270–290.
<span style="font-variant:small-caps;">K. Hornik, B. Grün</span>, Amos-type bounds for modified Bessel function ratios, [*J. Math. Anal. Appl.*]{} 408 (2013) 91–101.
<span style="font-variant:small-caps;">J. Idier, G. Collewet</span>, Properties of Fisher information for Rician distributions and consequences in MRI, Available at `https://hal.archives-ouvertes.fr/hal-01072813`.
<span style="font-variant:small-caps;">Y. Kuramoto</span>, [*Chemical Oscillations, Waves, and Turbulence*]{}, Springer, Berlin, 1984.
<span style="font-variant:small-caps;">B. Sonnenschein, L. Schimansky-Geier</span>, Approximate solution to the stochastic Kuramoto model, [*Phys. Rev. E*]{} 88 (2013) Art. 052111.
---
abstract: 'We calculate the photon emission rate from a general atomic system in the mass-proportional CSL model. For an isolated charged particle emitting kilovolt gamma rays, our results agree with those obtained by Fu. For a neutral atomic system, photon emission is strongly suppressed for photon wavelengths much larger than the atomic radius. However, for kilovolt gamma rays, Fu’s result is modified by a structure factor that is of order unity, giving no rate suppression. Our calculation is readily generalized to the case of non-white noise, noise couplings that are not mass-proportional, and general (non-Gaussian) spatial correlation functions, and corresponding results are given. We briefly discuss the implications of our calculation for upper bounds on the CSL model parameters.'
author:
- |
Stephen L. Adler\
Institute for Advanced Study\
Einstein Drive\
Princeton, NJ 08540\
Fethi M. Ramazanoǧlu\
Department of Physics\
Princeton University\
Princeton, NJ 08544\
title: Photon emission rate from atomic systems in the CSL model
---
Introduction
============
Stochastic modifications of the Schrödinger equation, such as the continuous spontaneous localization (CSL) model, solve the measurement problem in quantum theory by giving an objective account of state vector reduction \[1\]. To assess the viability of these models, it is necessary to estimate lower and upper bounds on the stochastic model parameters, as surveyed in a recent paper of Adler \[2\]. An important upper bound on the stochastic rate parameter comes from a calculation by Fu \[3\] of the rate of noise-induced gamma radiation from free electrons, which he compares with the observed bound on 11 kilovolt gamma radiation from germanium. Adler suggested in \[2\] that in a neutral atomic system, radiation from protons, in the case of mass-proportional noise couplings, will largely cancel the radiation from electrons. Our aim in this paper is to check this assertion by a detailed calculation of stochastic noise-induced radiation in a general atomic system. We find that the asserted cancellation is present only for very long wavelength photons, whereas for the 11 kilovolt gamma rays figuring in Fu’s bound, the radiation from protons somewhat enhances, rather than reducing, that from electrons. This result can be simply understood as the effect of inclusion of the space coordinate-dependent phase factor for the radiated wave.
Thus for white noise, the upper bound on the CSL rate parameter is six orders of magnitude lower than estimated in \[2\], and hence is three orders of magnitude smaller than the lower bounds estimated in \[2\] from processes of latent image formation, assuming that latent image formation (and not subsequent development) corresponds to state vector reduction. Hence if the assumptions on which these lower bounds are based are correct, the white noise CSL model is disfavored. White noise is of course an idealization, and our calculation can be readily extended to the case of non-white noise. For non-white noise with a spectral cutoff below 11 kilovolts, there is no 11 kilovolt gamma radiation, and so in this case the germanium experiment does not set a bound on the CSL model rate parameter, and there is no conflict with the lower bounds estimated in \[2\].
This paper is organized as follows. In Sec. 2 we outline the basic strategy of the calculation, which is to replace the real noise of the CSL model by an imaginary noise, that can be represented by a perturbation term in the Hamiltonian. We write down the general form of the Hamiltonian, and give the noise structure in the white-noise and non-white noise cases.
In Sec. 3 we use standard atomic physics methods \[4\] to derive a master formula for the noise-induced photon radiation rate, in both the white noise and the non-white noise cases. In Sec. 4 we evaluate this formula for a single free electron, recovering the result of Fu \[3\] when his approximations are made. In Sec. 5 we evaluate the master formula for a hydrogenic atom, and in Sec. 6 for a general atomic system. In Sec. 7 we state the generalization of our results to a noise perturbation with general (not necessarily mass proportional) couplings to the particles, and with general spatial and time correlation functions. We conclude with a brief discussion of the implications of our calculation for CSL model phenomenology.
General strategy, Hamiltonian, and noise structure
==================================================
In the CSL model, the stochastic Schrödinger equation obeyed by the wave function $\psi$ takes the form $d\psi=-(i/\hbar)H\psi dt
+ {\cal N} \psi+...$, with $H$ the usual Hamiltonian, with the noise term ${\cal N}$ real valued, and with the ellipsis ... representing additional nonlinear terms needed to preserve state vector normalization. A real valued choice for the noise term corresponds to an imaginary addition to the Hamiltonian, and is necessary to obtain a model that describes state vector reduction. An alternative stochastic Schrödinger equation can be written with an imaginary noise term, which does not require additional nonlinear terms in the Schrödinger equation for norm preservation. This Schrödinger evolution does not lead to state vector reduction, but for the case of white noise, it is a well known result that the noise average of the density matrix obeys the same evolution equation in the real and imaginary noise cases. Since the mean rate for noise induced transitions can be calculated from the noise averaged density matrix, this implies that one can use the imaginary noise equation to calculate the mean rate for such transitions. Hence, to leading order, one can represent the noise perturbation as a self-adjoint perturbation on the Hamiltonian $H$, and use standard second order perturbation theory to evaluate its effects.
The usual justification for the use of imaginary noise is based on a calculation of the density matrix evolution in the real and imaginary noise cases using the Itô calculus, which as already noted, assumes white noise. Adler and Bassi \[5\] have recently shown, however, that in the case of non-white Gaussian noise, the noise averaged density matrix evolutions are still the same for the real and imaginary noise cases, through second order in the noise term. Hence, in the second order perturbation calculations of this paper, we can use an imaginary noise term to calculate the effects of non-white noise as well as white noise.
We will thus be considering a Hamiltonian of the form $$\label{eq:basicham}
H = H_0 + H_{em} + H_n~~~,$$ with $H_0$ the atomic system Hamiltonian, $H_{em}$ the electromagnetic perturbation describing photon emission, and $H_n$ the perturbation describing the noise. For a system of $N$ particles of charges $e_j$ and masses $m_j$, the electromagnetic perturbation is $$\label{eq:empert1}
H_{em} = \sum_{j=1}^N \frac{ie_j\hbar}{m_jc} \vec{A}({\vec{x}}_j) \cdot
\vec{\nabla}_{x_j} + O(\vec{A}^{\,2})~~~,$$ with the electromagnetic potential, for field quantization in a cubical box of size $L$, given by $$\label{eq:quantA}
\vec{A}(\vec{x}) = \sum_{{\vec{p}}} \sqrt{\frac{2\pi \hbar c^2}{
\omega_p L^3}} \left[ a_p \vec{\epsilon}_p e^{i({\vec{p}}\cdot {\vec{x}}-
\omega_p t)} + a_p^{\dag} \vec{\epsilon}_p e^{-i({\vec{p}}\cdot {\vec{x}}-
\omega_p t)} \right]~~~,$$ where $\omega_p=pc$, and where the numerical value of a unit unrationalized charge $e$ is $e^2/(\hbar c)\simeq 1/137.04$. Since we are only interested in the matrix element for emitting a single photon of wave number ${\vec{p}}$, we pull this term out from Eq. \[eq:quantA\] and, separating off the time dependence, write the electromagnetic perturbation as $$\label{eq:empert2}
\begin{split}
H_{em}=&e^{i\omega_p t} {\mathcal{W}}^p(\{{\vec{x}}\})~~~,\\
{\mathcal{W}}^p(\{{\vec{x}}\}) & = a_p^{\dag} \sqrt{\frac{2\pi \hbar c^2}{\omega_p
L^3}} \, \sum_j\frac{i e_j \hbar}{m_j c} e^{-i \vec{p} \cdot
{\vec{x}}_j} \vec{\epsilon}_p \cdot \vec{\nabla}_j~~~,
\end{split}$$ where $\vec{\nabla}_j$ is an abbreviation for $\vec{\nabla}_{x_j}$.
In the CSL model with mass-proportional couplings, the noise perturbation can be written as $$\label{eq:noisepert}
\begin{split}
H_{n}=&\int d^3z \frac{dW_t({\vec{z}})}{dt}{\mathcal{V}}({\vec{z}},\{x\})~~~,\\
{\mathcal{V}}({\vec{z}},\{{\vec{x}}\})=&-\frac{\hbar}{m_N}\sum_j m_j g({\vec{z}}-{\vec{x}}_j)~~~.
\end{split}$$ Here $g({\vec{x}})$ is a spatial correlation function, conventionally taken as the Gaussian $$\label{eq:correlfn}
g({\vec{x}}) = \left( \frac{\alpha}{2\pi} \right)^{3/2} e^{-\alpha
{\vec{x}}^2/2} = \left( \sqrt{2\pi} r_c \right)^{-3} e^{-{\vec{x}}^2/2
r_c^2}~~~,$$ with $\alpha^{-{1\over 2}}=r_c$, and with $r_c$ conventionally taken as $10^{-5}$ cm. In the case of white noise, $dW_t$ is an Itô calculus differential that obeys $$\label{eq:whitenoise}
dW_t({\vec{x}})dW_t({\vec{y}}) = \gamma dt \delta^3({\vec{x}}-{\vec{y}})~~~,$$ with $\gamma$ the noise strength parameter. The corresponding formula for the case of non-white noise is $$\label{eq:nonwhitenoise}
E\left[ \frac{dW_t({\vec{x}})}{dt} \frac{dW_{t'}({\vec{y}})}{dt'} \right] =
\frac{1}{2 \pi} \int_{-\infty}^{\infty} d\omega \, \gamma(\omega)
e^{-i \omega(t-t')} \delta^3({\vec{x}}-{\vec{y}})~~~,$$ with $E[...]$ denoting the expectation or average over the noise. When $\gamma(\omega)$ is a constant $\gamma$, Eq. \[eq:nonwhitenoise\] reduces, on integration over $t^{\prime}$, to Eq. \[eq:whitenoise\].
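As a quick sanity check (ours, not part of the derivation), the Gaussian correlation function of Eq. \[eq:correlfn\] integrates to unity over $\mathbb{R}^3$, as a smearing kernel should:

```python
# Verify that g(x) = (sqrt(2 pi) r_c)^{-3} exp(-x^2 / (2 r_c^2))
# integrates to 1 over R^3, by radial integration (r_c = 1 for simplicity).
import numpy as np
from scipy.integrate import quad

r_c = 1.0


def g(r):
    return (np.sqrt(2.0 * np.pi) * r_c) ** (-3) * np.exp(-r**2 / (2.0 * r_c**2))


total, err = quad(lambda r: 4.0 * np.pi * r**2 * g(r), 0.0, np.inf)
assert abs(total - 1.0) < 1e-10
```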
Master equation for the radiation rate
======================================
According to Eqs. \[eq:empert2\] and \[eq:noisepert\], the total perturbation on the atomic Hamiltonian $H_0$ is $$\label{eq:pertpot}
V(t) = \int d^3z \frac{dW_t(\vec{z})}{dt} {\mathcal{V}}(\vec{z},\{ {\vec{x}}\}) +
e^{i \omega_p t} {\mathcal{W}}^p(\{ {\vec{x}}\})~~~.$$ Expanding the transition amplitude in a perturbation series following the methods of \[4\], we get $$\label{eq:pertseries}
\begin{split}
{\left\langle f \right|}U_I(t,0){\left|i\right\rangle} & =1 + \mathcal{T}_{fi}^{(1)} +
\mathcal{T}_{fi}^{(2)} + ...~~~,\\
\mathcal{T}_{fi}^{(2)} & = -\frac{1}{\hbar^2} \int_0^t ds \int_0^s
du
\sum_k {\left\langle f \right|}V_I(s){\left|k\right\rangle} {\left\langle k \right|} V_I(u){\left|i\right\rangle}\\
& = -\frac{i}{2 \pi \hbar^2} \int_0^t ds \int_0^t du
\int_{-\infty}^{\infty} dE \, e^{\frac{i}{\hbar}(E_f-E)s}
e^{\frac{i}{\hbar}(E-E_i)u} \sum_k \frac{V_{fk}(s)V_{ki}(u)}{E_i+
i \eta-E_k}~~~,
\end{split}$$ where in the first line of the formula for $\mathcal{T}_{fi}^{(2)}$, $V_I$ denotes the interaction picture perturbation, and in the second line $V_{fk}$ and $V_{ki}$ denote matrix elements of the Schrödinger picture perturbation. To calculate the noise induced radiation, we are only interested in the terms in Eq. \[eq:pertseries\] that are bilinear in the electromagnetic and noise perturbations, so on substituting Eq. \[eq:pertpot\] and dropping irrelevant terms, we get $$\label{eq:2nd_order_matrix_element}
\begin{split}
\mathcal{T}_{fi}^{(2)} & = \frac{-i}{2 \pi \hbar^2}
\int_{-\infty}^{\infty} dE \\
& \times\left( \int_0^t ds \thinspace e^{\frac{i}{\hbar}(E_f-E)s}
\int d^3z \frac{dW_s({\vec{z}})}{ds} \int_0^t du \thinspace
e^{\frac{i}{\hbar}(E-E_i+\hbar
\omega_p)u} \sum_k \frac{{\mathcal{V}}_{fk}({\vec{z}}){\mathcal{W}}^p_{ki}}{E+i \eta -E_k} \right. \\
+ & \left. \int_0^t ds \thinspace e^{\frac{i}{\hbar}(E_f-E+\hbar
\omega_p)s} \int_0^t du \thinspace e^{\frac{i}{\hbar}(E-E_i)u}
\int d^3z \frac{dW_u({\vec{z}})}{du} \sum_k \frac{{\mathcal{W}}^p_{fk} {\mathcal{V}}_{ki}
({\vec{z}})}{E+i \eta -E_k} \right)~~~.
\end{split}$$ Taking the squared modulus of Eq. \[eq:2nd\_order\_matrix\_element\], averaging over the noise, and using the formulas for representations of the Dirac delta function given in \[4\], in the large time limit we obtain in the white noise case, $$\label{eq:2nd_order_white_noise}
E[|\mathcal{T}_{fi}^{(2)}|^2] = \frac{\gamma t}{\hbar^2} \int d^3z
\left| \sum_k \frac{{\mathcal{V}}_{fk}({\vec{z}}){\mathcal{W}}^p_{ki}}{E_i - \hbar \omega_p +i
\eta -E_k} + \frac{{\mathcal{W}}^p_{fk}{\mathcal{V}}_{ki}({\vec{z}})}{E_f + \hbar \omega_p +i
\eta -E_k} \right|^2~~~,$$ with the corresponding equation in the non-white noise case taking the form $$\label{eq:2nd_order_nonwhite_noise}
E[|\mathcal{T}_{fi}^{(2)}|^2] = \frac{t}{\hbar^2} \gamma(
\omega_p+ \frac{E_f - E_i}{\hbar}) \int d^3z \left| \sum_k
\frac{{\mathcal{V}}_{fk}({\vec{z}}){\mathcal{W}}^p_{ki}}{E_i - \hbar \omega_p +i \eta -E_k} +
\frac{{\mathcal{W}}^p_{fk} {\mathcal{V}}_{ki}({\vec{z}})}{E_f + \hbar \omega_p +i \eta -E_k}
\right|^2~~~.$$ Equations \[eq:2nd\_order\_white\_noise\] and \[eq:2nd\_order\_nonwhite\_noise\] are the master equations from which we shall calculate the noise induced radiation rate, by substituting the matrix elements of ${\mathcal{V}}$ and ${\mathcal{W}}^p$ appropriate to the various cases of interest.
Free electron: repeating Fu’s calculation
=========================================
As a first application of Eq. \[eq:2nd\_order\_white\_noise\], and a check, let us repeat the calculation of Fu \[3\] for the case of a single free electron. Assuming that the electron is initially at rest, the initial, final, and intermediate state electron wave functions are $$\label{eq:single_particle_wavefunctions}
\psi_i=\frac{1}{\sqrt{L^3}}~~, \qquad \psi_f=\frac{e^{i {\vec{q}}\cdot
{\vec{x}}}}{\sqrt{L^3}}~~, \qquad \psi_k=\frac{e^{i {\vec{k}}\cdot
{\vec{x}}}}{\sqrt{L^3}}~~~.$$ From Eqs. \[eq:empert2\] and \[eq:noisepert\], as specialized to a single particle of charge $e$ (with $e^2/(\hbar c) \simeq 1/137$) and mass $m$, the needed matrix elements are $$\label{eq:free_electron_W}
\begin{split}
{\mathcal{W}}^p_{ki} & = 0~~~,\\
{\mathcal{W}}^p_{fk} & = -\sqrt{\frac{2\pi \hbar c}{p L^3}} \, \frac{e
\hbar}{m c} \vec{\epsilon}_p \cdot {\vec{q}}\, \delta_{{\vec{k}}-\vec{p}-{\vec{q}}}~~~,
\end{split}$$ and $$\label{eq:free_electron_V}
\begin{split}
{\mathcal{V}}_{ki}(z) & = - \frac{\hbar m}{m_N L^3} \, e^{-i {\vec{k}}\cdot {\vec{z}}-\frac{1}{2} {\vec{k}}^2 r_c^2}~~~,\\
{\mathcal{V}}_{fk}(z) & = - \frac{\hbar m}{m_N L^3} \, e^{i({\vec{k}}-{\vec{q}}) \cdot
{\vec{z}}-\frac{1}{2}({\vec{k}}-{\vec{q}})^2r_c^2}~~~.
\end{split}$$ Substituting these into Eq. \[eq:2nd\_order\_white\_noise\], we get for the noise averaged squared matrix element $$\label{eq:single_particle_T^2}
E[|\mathcal{T}_{fi}^{(2)}|^2] = \frac{\gamma t}{\hbar^2} \int d^3z
\left| \frac{\hbar}{m_N L^3} \sqrt{\frac{2\pi \hbar c}{p L^3}} \,
\frac{e \hbar}{c} \, \vec{\epsilon}_p \cdot {\vec{q}}\,
\frac{e^{-i(\vec{p}+{\vec{q}}) \cdot {\vec{z}}-\frac{1}{2}
(\vec{p}+{\vec{q}})^2r_c^2}}{-\frac{\hbar^2}{2m} (p^2+2 \vec{p} \cdot
{\vec{q}}) +\hbar c p +i \eta} \right|^2~~~.$$
Fu notes that when the photon momentum $p$ is much larger than the inverse correlation length $1/r_c$, the Gaussian factor in Eq. \[eq:single\_particle\_T\^2\] forces the electron and photon to emerge nearly back to back, that is, ${\vec{q}}\simeq -{\vec{p}}$. As a result $$\label{eq:approx}
\frac{\hbar c p}{-\frac{\hbar^2}{2m}(p^2+2{\vec{p}}\cdot {\vec{q}})}\simeq
\frac{2 m c^2}{\hbar pc}~~~,$$ which is of order 100 for $\hbar pc=$ 11 keV. Thus one can to a good approximation keep only the term $\hbar c p$ in the denominator of Eq. \[eq:single\_particle\_T\^2\], which then simplifies to $$\label{eq:single_particle_T^2_approx}
E[|\mathcal{T}_{fi}^{(2)}|^2] = \frac{\gamma t}{\hbar^2}
\left(\frac{\hbar}{m_N L^3}\right)^2 \frac{2\pi \hbar c}{p} \,
\frac{e^2 }{c^4 p^2} \left( \vec{\epsilon}_p \cdot {\vec{q}}\right)^2
e^{-(\vec{p}+{\vec{q}})^2 r_c^2}~~~.$$ Integrating over phase space for the electron and photon, summing over photon polarizations, and dividing by the elapsed time, we get for the radiated power per unit photon momentum space volume and per unit time, $$\label{eq:electron_power1}
\frac{dP}{d^3p} = \left( \frac{L}{2\pi} \right)^6 \int d^3q
\sum_{\epsilon} E[|\mathcal{T}_{fi}^{(2)}|^2] \frac{1}{t}~~~.$$ Carrying out the integrals and polarization sum, and replacing the noise parameter $\gamma$ by a new parameter $\lambda$ defined by $\gamma =8 \pi^{3/2} r_c^3 \lambda$, we get finally for the power radiation rate $$\label{eq:electron_power2}
\frac{dP}{dp} = \frac{\hbar}{c^3} \, \frac{e^2 \lambda}{\pi r_c^2
m_N^2 p}~~~.$$ This is in agreement with the result obtained by Fu \[3\], when our unrationalized charge squared $e^2$ is replaced by $e^2/(4\pi)$, corresponding to Fu’s use of a rationalized charge convention.
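The order-of-magnitude estimate in Eq. \[eq:approx\] that justified dropping the recoil term is easy to verify (our check; the electron rest energy is taken from standard tables):

```python
# Check that 2 m c^2 / (hbar p c) is of order 100 for an 11 keV photon,
# as stated below Eq. (eq:approx).
m_e_c2_keV = 510.999   # electron rest energy in keV
E_gamma_keV = 11.0     # photon energy, hbar p c

ratio = 2.0 * m_e_c2_keV / E_gamma_keV
assert 90.0 < ratio < 95.0  # "of order 100"
```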
Hydrogenic atom
===============
We consider next a hydrogenic atom, with oppositely charged particles of masses $m_1$ and $m_2$. Equation \[eq:noisepert\] for ${\mathcal{V}}({\vec{z}},
\{{\vec{x}}\})$ now takes the form $$\begin{aligned}
\label{eq:perturbation_terms_hydrogen}
{\mathcal{V}}({\vec{z}},\{{\vec{x}}\})=&-\frac{\hbar}{m_N}{\mathcal{M}}({\vec{z}},\{{\vec{x}}\})~~~,\\
{\mathcal{M}}({\vec{z}},\{{\vec{x}}\})=&m_1g({\vec{z}}-{\vec{x}}_1)+m_2g({\vec{z}}-{\vec{x}}_2)~~~.\end{aligned}$$ Introducing the center of mass coordinate ${\vec{X}}$, total mass $M$, relative coordinate ${\vec{x}}$, and reduced mass $\mu$, by $$\begin{aligned}
\label{eq:CoM_coordinates_hydrogen}
\vec{X} =& \frac{m_1}{M}{\vec{x}}_1 + \frac{m_2}{M}{\vec{x}}_2, \quad \quad
\quad {\vec{x}}= {\vec{x}}_1 -{\vec{x}}_2~~~,\\
M=&m_1+m_2, \quad \quad \mu=\frac{m_1m_2}{M}~~~,\end{aligned}$$ we can use the fact that the Bohr radius $a_0$ is much smaller than $r_c$ to approximate ${\mathcal{M}}({\vec{z}},\{{\vec{x}}\})$ as follows, $$\label{eq:M_hydrogen}
\begin{split}
{\mathcal{M}}({\vec{z}},\{{\vec{x}}\}) &= m_1g({\vec{z}}-{\vec{x}}_1) + m_2 g({\vec{z}}-{\vec{x}}_2)\\
& =M \, g({\vec{z}}-\vec{X}) + \frac{m_1m_2}{2M}
\, ({\vec{x}}\cdot \vec{\nabla}_z)^2g({\vec{z}}-\vec{X})\\
& \cong M \, g({\vec{z}}-\vec{X})~~~,
\end{split}$$ giving $$\begin{aligned}
\label{eq:hydrogen_V_W}
{\mathcal{W}}^p(\{{\vec{x}}\}) &= a_p^{\dagger}\sqrt{\frac{2\pi \hbar c}{pL^3}} \,
\frac{ie\hbar}{c} \, \vec{\epsilon}_p \cdot \left( \frac{1}{m_1}
\, e^{-i\vec{p} \cdot {\vec{x}}_1} \, \vec{\nabla}_1 -\frac{1}{m_2} \,
e^{-i\vec{p} \cdot {\vec{x}}_2}
\, \vec{\nabla}_2 \right)~~~,\\
{\mathcal{V}}({\vec{z}},\{{\vec{x}}\}) &\cong - \frac{\hbar M}{m_N} \, g({\vec{z}}-\vec{X})~~~.\end{aligned}$$ The initial, final, and intermediate state atomic wave functions are now $$\label{eq:hydrogen_wavefunctions}
\psi_i=\frac{1}{\sqrt{L^3}}u_{\hat{i}}({\vec{x}})~~, \qquad
\psi_f=\frac{e^{i {\vec{q}}\cdot {\vec{X}}}}{\sqrt{L^3}}u_{\hat{f}}({\vec{x}})~~~,
\qquad \psi_k=\frac{e^{i {\vec{k}}\cdot {\vec{X}}}}{\sqrt{L^3}}u_{\hat{
k}}({\vec{x}})~~~,$$ where we use carets to denote the labels of hydrogenic internal states. Defining $$\label{eq:O_operator}
\mathcal{O}({\vec{k}}) = \frac{i}{M} \left( e^{-i\frac{m_2}{M}\vec{p}
\cdot {\vec{x}}} - e^{i \frac{m_1}{M}\vec{p} \cdot {\vec{x}}} \right)
\vec{\epsilon}_p \cdot {\vec{k}}+ \left( \frac{1}{m_1}
e^{-i\frac{m_2}{M}\vec{p} \cdot {\vec{x}}} +\frac{1}{m_2}
e^{i\frac{m_1}{M}\vec{p} \cdot {\vec{x}}} \right) \vec{\epsilon}_p \cdot
\vec{\nabla}_x~~~,$$ we find that the matrix elements entering the master formula are $$\begin{aligned}
\label{eq:hydrogen_W_no_approximation}
{\mathcal{W}}^p_{ki} & = \sqrt{\frac{2\pi \hbar c}{pL^3}} \, \frac{ie\hbar}{c}
{\left\langle \hat{k} \right|}\mathcal{O}(\vec{0}) {\left|\hat{i}\right\rangle}
\, \delta_{{\vec{k}}+\vec{p}}\\
{\mathcal{W}}^p_{fk} & = \sqrt{\frac{2\pi \hbar c}{pL^3}} \,
\frac{ie\hbar}{c} {\left\langle \hat{f} \right|}\mathcal{O}({\vec{k}}) {\left|\hat{k}\right\rangle} \,
\delta_{{\vec{k}}- \vec{p}-{\vec{q}}}~~~,\end{aligned}$$ and $$\begin{aligned}
\label{eq:hydrogen_V}
{\mathcal{V}}_{ki} &= -\frac{\hbar M}{m_NL^3} \, e^{-i {\vec{k}}\cdot {\vec{z}}-
\frac{1}{2} {\vec{k}}^2 r_c^2} \, \delta_{\hat{k} \hat{i}}\\
{\mathcal{V}}_{fk} &= -\frac{\hbar M}{m_NL^3} \, e^{i ({\vec{k}}-{\vec{q}}) \cdot {\vec{z}}-
\frac{1}{2} ({\vec{k}}-{\vec{q}})^2 r_c^2} \, \delta_{\hat{f} \hat{k}}~~~.\end{aligned}$$ Then without any further approximation we find $$\begin{gathered}
\label{eq:hydrogen_T^2_no_approximation}
E[|\mathcal{T}_{fi}^{(2)}|^2] = \frac{\gamma t}{\hbar^2}
\left(\frac{\hbar M}{m_N L^3}\right)^2 \frac{2\pi \hbar c}{p} \,
\frac{e^2 \hbar^2}{c^2} e^{-(\vec{p}+{\vec{q}})^2 r_c^2}\\
\times \left| - \frac{{\left\langle \hat{f} \right|} \mathcal{O}(\vec{0})
{\left|\hat{i}\right\rangle}}{E_{fi}+\frac{\hbar^2p^2}{2M} + \hbar c p -i\eta} +
\frac{{\left\langle \hat{f} \right|} \mathcal{O}(\vec{p}+{\vec{q}})
{\left|\hat{i}\right\rangle}}{E_{fi}+\frac{\hbar^2q^2}{2M} + \hbar c p
-\frac{\hbar^2 (\vec{p}+{\vec{q}})^2}{2M} +i \eta} \right|^2~~~,\end{gathered}$$ with $E_{fi}\equiv E_{\hat{f}}-E_{\hat{i}}$ the internal energy difference between the final and initial atomic states. The radiated power, per unit photon momentum space volume and per unit time, now requires a sum over the final internal atomic state $\hat{f}$, and is given by $$\label{eq:hydrogen_radiation_power}
\frac{dP}{d^3p} = \left( \frac{L}{2\pi} \right)^6 \int d^3q
\sum_{\hat{f},\, \epsilon} E[|\mathcal{T}_{fi}^{(2)}|^2]
\frac{1}{t}~~~.$$ Note that when ${\vec{p}}+{\vec{q}}=0$, the two terms in Eq. \[eq:hydrogen\_T\^2\_no\_approximation\] cancel. Since the Gaussian $e^{-({\vec{p}}+{\vec{q}})^2 r_c^2}$ constrains $|{\vec{p}}+{\vec{q}}|$ to be not much larger than $1/r_c$, we can make this cancellation explicit by expanding in the small parameter $$\label{eq:hydrogen_series_expansion_term}
\frac{\hbar^2 \vec{p} \cdot (\vec{p}+{\vec{q}})}{M \left(\hbar c p +
\frac{\hbar^2 p^2}{2M} + E_{fi} \right)} \equiv \frac{\hbar^2
\vec{p} \cdot ({\vec{p}}+{\vec{q}})}{M D_0}~~~,$$ which keeping the leading two terms, and writing ${\vec{p}}\cdot {\vec{x}}=pz$, gives $$\label{eq:hydrogen_radiation_power_series_expanded}
\begin{split}
\frac{dP}{dp} =& p \, \frac{\hbar^3}{c}
\left(\frac{M}{m_N}\right)^2 \frac{e^2 \lambda}{\pi r_c^2}
\sum_{\hat{f}} \left\{ \, \frac{1}{M^2D_0^2} \left| {\left\langle \hat{f} \right|}
e^{-i\frac{m_2}{M} pz} - e^{i \frac{m_1}{M} pz}
{\left|\hat{i}\right\rangle} \right|^2 \right. \\
+& \left. \frac{p^2\hbar^4}{M^2D_0^4} \left| {\left\langle \hat{f} \right|} \left(
\frac{1}{m_1} e^{-i\frac{m_2}{M} pz} +\frac{1}{m_2}
e^{i\frac{m_1}{M} pz} \right) \frac{\partial}{\partial x}
{\left|\hat{i}\right\rangle} \right|^2 \right\}~~~.
\end{split}$$ For small $p$, this expression can be further simplified to $$\label{eq:hydrogen_radiation_small_p}
\begin{split}
\frac{dP}{dp} & = p^3 \, \frac{\hbar^3}{c}
\left(\frac{M}{m_N}\right)^2 \frac{e^2 \lambda}{\pi r_c^2}
\sum_{\hat{f}} \left\{ \, \frac{1}{M^2E_{fi}^2} \left|
{\left\langle \hat{f} \right|} z {\left|\hat{i}\right\rangle} \right|^2 + \frac{\hbar^4}{M^2
E_{fi}^4\mu^2} \left| {\left\langle \hat{f} \right|} \frac{\partial}{\partial x}
{\left|\hat{i}\right\rangle} \right|^2
\right\}\\
& =2\, p^3 \,\frac{\hbar^3}{c} \, \frac{1}{m_N^2} \, \frac{e^2
\lambda}{\pi r_c^2} \sum_{\hat{f}} \frac{\left| {\left\langle \hat{f} \right|} z
{\left|\hat{i}\right\rangle} \right|^2}{E_{fi}^2}
\end{split}$$ where we have used the dipole approximation formula $$\label{eq:p_expectation_to_x_expectation}
\left| {\left\langle \hat{f} \right|} \frac{\partial}{\partial x} {\left|\hat{i}\right\rangle}
\right| = \frac{\mu E_{fi}}{\hbar^2} \left| {\left\langle \hat{f} \right|} x
{\left|\hat{i}\right\rangle} \right|~~~,$$ which shows that the two terms in Eq. \[eq:hydrogen\_radiation\_small\_p\] make equal contributions. The sum in Eq. \[eq:hydrogen\_radiation\_small\_p\] has been evaluated in closed form by Dalgarno and Kingston \[6\], with the result $$\label{eq:sum}
\sum_{\hat{f}} \frac{\left| {\left\langle \hat{f} \right|} z {\left|\hat{i}\right\rangle}
\right|^2}{E_{fi}^2}=\frac{43}{8} \frac{\mu^2 a_0^6} {\hbar^4}~~~,$$ giving an explicit expression for the small $p$ radiation rate.
However, for 11 kilovolt photons, the small $p$ approximation does not apply, and instead we can simplify the formulas by making the approximation $D_0 \approx \hbar c p$, as was done by Fu in his calculation. The radiation rate then becomes $$\begin{gathered}
\label{eq:hydrogen_radiation_high_p}
\frac{dP}{dp} = \frac{1}{p} \, \frac{\hbar}{c^3}
\left(\frac{M}{m_N}\right)^2 \frac{e^2 \lambda}{\pi r_c^2}
\sum_{\hat{f}} \left\{ \, \frac{1}{M^2} \left| {\left\langle \hat{f} \right|}
e^{-i\frac{m_2}{M} pz} - e^{i \frac{m_1}{M} pz}
{\left|\hat{i}\right\rangle} \right|^2 \right. \\
+ \left. \frac{\hbar^2}{M^2 c^2} \left| {\left\langle \hat{f} \right|} \left(
\frac{1}{m_1} e^{-i\frac{m_2}{M} pz} +\frac{1}{m_2}
e^{i\frac{m_1}{M} pz} \right) \frac{\partial}{\partial x}
{\left|\hat{i}\right\rangle} \right|^2 \right\}~~~,\end{gathered}$$ which, using the completeness of the hydrogen spectrum, can be simplified to $$\begin{gathered}
\label{eq:hydrogen_radiation_high_p_simplified}
\frac{dP}{dp} = \frac{1}{p} \, \frac{\hbar}{c^3}
\left(\frac{M}{m_N}\right)^2 \frac{e^2 \lambda}{\pi r_c^2}
\sum_{\hat{f}} \left\{ \, \frac{1}{M^2} {\left\langle \hat{i} \right|} 2-2
\cos pz {\left|\hat{i}\right\rangle} \right. \\
+ \left. \frac{\hbar^2}{M^2 c^2} {\left\langle \hat{i} \right|}
\frac{\overleftarrow{\partial}}{\partial x} \left( \frac{1}{m_1^2}
+ \frac{1}{m_2^2} +\frac{2}{m_1 m_2} \cos pz\right)
\frac{\partial}{\partial x} {\left|\hat{i}\right\rangle} \right\}~~~.\end{gathered}$$
The ratio of the second term to the first can be shown to be of order $(e^2/\hbar c)^2$, so the second term can be neglected. Evaluating the first term using the hydrogen atom ground state wave function, we find the final result for high $p$ to be $$\label{eq:hydrogen_radiation_high_p_final}
\frac{dP}{dp} = 2 \left[ 1-\frac{1}{\left[ 1+\left(\frac{p a_0}{2}
\right)^2 \right]^2} \right] \frac{1}{p} \,\frac{\hbar}{c^3} \,
\frac{1}{m_N^2} \, \frac{e^2 \lambda}{\pi r_c^2}~~~.$$ For small $p$ this expression is suppressed with respect to the rate calculated by Fu, but for large $p$ it approaches twice Fu’s rate, because when the photon wave length is much smaller than the atomic radius, the electron and proton radiation rates add incoherently. For 11 kilovolt gamma radiation from hydrogen, the rate given by Eq. \[eq:hydrogen\_radiation\_high\_p\_final\] is about 1.8 times the rate for a free electron. The structure of the first term in Eqs. \[eq:hydrogen\_radiation\_high\_p\] and \[eq:hydrogen\_radiation\_high\_p\_simplified\] can be readily understood in terms of the phase factor that appears in the formula for the radiation rate of a distributed charge system, as in Eqs. (13-33) and (13-37) of the text of Panofsky and Phillips \[7\].
Many-body system
================
We turn next to a general $n$ particle atomic system, for which the electromagnetic and noise perturbations are given by Eqs. 4 and 5, with the sum over $j$ extending from 1 to $n$. In order to take account of overall momentum conservation, we separate the coordinates of the particles into a center of mass coordinate ${\vec{X}}$ and internal coordinates ${\vec{\xi}}_i~,i=1,...,n-1$, by writing
$$\label{eq:many_body_CoM_coordinates}
\begin{split}
M&=\sum_{j=1}^n m_j~~~,\\
{\vec{x}}_i &= {\vec{\xi}}_i + {\vec{X}}\quad \quad i=1,...,n-1~~~,\\
{\vec{x}}_n &= {\vec{X}}-\frac{1}{m_n} \sum_{j=1}^{n-1} m_j {\vec{\xi}}_j~~~,\\
{\vec{\xi}}_i &= {\vec{x}}_i - \sum_{j=1}^n \frac{m_j {\vec{x}}_j}{M}~~~,\\
{\vec{X}}&= \sum_{j=1}^n \frac{m_j {\vec{x}}_j}{M}~~~.
\end{split}$$
In the following equations, $\vec{\nabla}_j$ denotes the partial derivative with respect to the original coordinate ${\vec{x}}_j$, not the derivative with respect to the internal coordinate ${\vec{\xi}}_j$. Straightforward calculations show that the commutator of this partial derivative with an internal coordinate is given by $$\label{eq:commutator}
[\vec{a}\cdot \vec{\nabla}_i,\vec{b}\cdot {\vec{\xi}}_j]=\vec{a}\cdot
\vec{b} (\delta_{ij}-\frac{m_i}{M})~~~,$$ and also that the Jacobian $J$ of the transformation of Eq. \[eq:many\_body\_CoM\_coordinates\] is given by $$\label{eq:jacobian}
J=\frac{\partial({\vec{x}}_1...{\vec{x}}_n)}{\partial({\vec{X}}{\vec{\xi}}_1...{\vec{\xi}}_{n-1})}
=(-1)^{n-1}\left(1+\frac{\sum_{j=1}^{n-1} m_j}{m_n}\right)^3~~~.$$ Moreover, the kinetic term of the unperturbed hamiltonian is separated by the transformation of Eq. \[eq:many\_body\_CoM\_coordinates\] into a center of mass part and an internal part, $$\label{eq:many-body_kinetic_energy_hamiltonian}
\sum_{i=1}^n \frac{\vec{\nabla}_i^2}{2m_i} = \frac{\vec
\nabla_{X}^2}{2M} + \sum_{i=1}^{n-1} \frac{\vec
\nabla_{\xi_i}^2}{2m_i} - \frac{1}{2M} \left( \sum_{i=1}^{n-1}
\vec \nabla_{\xi_i} \right)^2~~~,$$ so that we know that wave functions are of the factorized form $$\label{eq:many-body_wavefunctions}
\psi_i=\frac{1}{\sqrt{L^3}} u_{\hat{i}}(\{ {\vec{\xi}}~\}) \qquad
\psi_f=\frac{e^{i {\vec{q}}\cdot {\vec{X}}}}{\sqrt{L^3}} u_{\hat{f}}(\{ {\vec{\xi}}~\}) \qquad \psi_k=\frac{e^{i {\vec{k}}\cdot {\vec{X}}}}{\sqrt{L^3}}
u_{\hat{k}}(\{ {\vec{\xi}}~\})~~~.$$ Using the center of mass transformation and the factorized wave functions, the noise and radiation matrix elements needed for the master formula of Eq. 12 are calculated to be $$\begin{aligned}
\label{eq:many-body_W_no_approximation}
{\mathcal{W}}^p_{ki} & = \sqrt{\frac{2\pi \hbar c}{pL^3}} \, \frac{i\hbar}{c}
{\left\langle \hat{k} \right|} \sum_j e^{-i \vec{p} \,\cdot \vec{\xi}_j} e_j
\frac{\vec{\epsilon}_p}{m_j} \cdot \vec{\nabla}_j {\left|\hat{i}\right\rangle}
\, \delta_{{\vec{k}}+\vec{p}}\\
{\mathcal{W}}^p_{fk} & = \sqrt{\frac{2\pi \hbar c}{pL^3}} \, \frac{i\hbar}{c}
{\left\langle \hat{f} \right|} \sum_j e^{-i \vec{p}\, \cdot \vec{\xi}_j} e_j \left(
i \frac{\vec{\epsilon}_p \cdot {\vec{k}}}{M} +
\frac{\vec{\epsilon}_p}{m_j} \cdot \vec{\nabla}_j \right)
{\left|\hat{k}\right\rangle} \, \delta_{{\vec{k}}- \vec{p}-{\vec{q}}}\end{aligned}$$ and $$\begin{aligned}
\label{eq:many-body_V}
{\mathcal{V}}_{ki}({\vec{z}}) &= -\frac{\hbar}{m_NL^3} \, e^{-i {\vec{k}}\cdot {\vec{z}}-
\frac{1}{2} {\vec{k}}^2 r_c^2} \, {\left\langle \hat{k} \right|} \sum_j e^{i {\vec{k}}\cdot
{\vec{\xi}}_j} m_j {\left|\hat{i}\right\rangle}\\
{\mathcal{V}}_{fk}({\vec{z}}) &= -\frac{\hbar}{m_NL^3} \, e^{i ({\vec{k}}-{\vec{q}}) \cdot {\vec{z}}- \frac{1}{2} ({\vec{k}}-{\vec{q}})^2 r_c^2} \, {\left\langle \hat{f} \right|} \sum_j e^{-i
({\vec{k}}-{\vec{q}}) \cdot {\vec{\xi}}_j} m_j {\left|\hat{k}\right\rangle}~~~.\end{aligned}$$ We now simplify Eq. 12 by making the approximation that the photon energy $\hbar \omega_p$ is much larger than both the internal energy differences and the center of mass recoil energy, that is, that $\hbar \omega_p$ is much larger than $E_i-E_k$ and $E_f-E_k$. With this approximation (which is analogous to the approximation made by Fu and also made in Eqs. 42-44 of our hydrogen atom calculation), Eq. 12 simplifies to $$\label{eq:2nd_order_simplified}
E[|\mathcal{T}_{fi}^{(2)}|^2] = \frac{\gamma t}{\hbar^2} \int d^3z
\left| \sum_k \frac{ {\mathcal{V}}_{fk}({\vec{z}}){\mathcal{W}}^p_{ki}-{\mathcal{W}}^p_{fk}{\mathcal{V}}_{ki}({\vec{z}}) }
{\hbar \omega_p} \right|^2~~~.$$ Substituting Eqs. 50-53 into Eq. \[eq:2nd\_order\_simplified\], summing over the final state $f$ by the analog of Eq. 36, and using completeness twice together with algebraic simplification using Eq. \[eq:commutator\], we get for the power radiated $$\label{eq:many-body_radiation_power}
\begin{split}
\frac{dP}{dp} &= \frac{2 \gamma}{(2\pi)^4} \,
\frac{\hbar}{m_N^2c^3} \, \frac{1}{p} \, \int \frac{d\Omega_
{\hat{p}}}{4\pi} \int d^3w \,e^{-\vec{w}^2 r_c^2}
[\vec{w}^2-(\vec{w} \cdot \hat{p})^2]
{\left\langle \hat{i} \right|} |\mathcal{N}|^2 {\left|\hat{i}\right\rangle} \\
\mathcal{N} &= \sum_j e^{-i (\vec{p}-\vec{w}) \cdot \vec{\xi}_j} e_j~~~.
\end{split}$$ Note that the internal integration to be used in evaluating the matrix element in this formula includes the Jacobian $J$ of Eq. \[eq:jacobian\], and so is $$\label{eq:internalint}
|J|\prod_{j=1}^{n-1}\int d^3 \xi_j~~~.$$ To check that Eq. \[eq:many-body\_radiation\_power\] reproduces the result of the first term of Eq. 43 for the hydrogen atom, we note first that for a two particle system one has ${\vec{x}}_1={\vec{X}}+{\vec{\xi}}_1\,, {\vec{x}}_2={\vec{X}}+{\vec{\xi}}_2$, and so ${\vec{x}}={\vec{x}}_1-{\vec{x}}_2={\vec{\xi}}_1-{\vec{\xi}}_2$, which by Eq. \[eq:many\_body\_CoM\_coordinates\] reduces to ${\vec{x}}=
{\vec{\xi}}_1(1+m_1/m_2)$. Hence $|J|d^3 \xi_1 = (1+m_1/m_2)^3 d^3 \xi_1
=d^3 x$, so the internal integration involves the conventional internal coordinate used for the hydrogen atom. The expansion in the small parameter of Eq. 37 is equivalent, in the many-body context, to setting ${\vec{w}}=0$ in $\mathcal{N}$ in Eq. \[eq:many-body\_radiation\_power\], an approximation that permits the integration over ${\vec{w}}$ to be easily done, yielding our previous formula for the hydrogen atom radiated power.
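The commutator and Jacobian formulas above can be spot-checked numerically. The following sketch works with one Cartesian component of the transformation, for a hypothetical three-particle system with arbitrary masses:

```python
import numpy as np

m = np.array([1.0, 2.0, 5.0])     # hypothetical masses (arbitrary units)
n = len(m)
M = m.sum()

# One Cartesian component of x -> (X, xi_1, ..., xi_{n-1}):
# rows are x_1..x_n, columns are (X, xi_1, ..., xi_{n-1})
A = np.zeros((n, n))
A[:, 0] = 1.0                                # every x_i contains +X
for i in range(n - 1):
    A[i, i + 1] = 1.0                        # x_i = X + xi_i
    A[n - 1, i + 1] = -m[i] / m[n - 1]       # x_n = X - (1/m_n) sum_j m_j xi_j

# In 3D the Jacobian is the cube of the 1D determinant
J_numeric = np.linalg.det(A) ** 3
J_formula = (-1) ** (n - 1) * (1 + m[:-1].sum() / m[-1]) ** 3
print(J_numeric, J_formula)                  # the two values agree

# The commutator encodes d(xi_j)/d(x_i) = delta_ij - m_i/M, since
# xi_j = x_j - sum_k m_k x_k / M; verify by finite differences
dxi_dx = np.eye(n)[: n - 1] - m[np.newaxis, :] / M

def xi(xvec):
    """Internal coordinates for one Cartesian component."""
    return xvec[: n - 1] - (m @ xvec) / M

x0 = np.array([0.3, -1.2, 2.0])
h = 1e-6
fd = np.array([(xi(x0 + h * np.eye(n)[i]) - xi(x0)) / h for i in range(n)]).T
print(np.allclose(fd, dxi_dx))               # True
```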
One can also apply Eq. \[eq:many-body\_radiation\_power\] to the case of a crystal lattice. Again making the approximation of neglecting ${\vec{w}}$ in $\mathcal{N}$, that is, taking $r_c$ to be large, we define $$\label{eq:single_cell_in_crystal}
f \equiv \sum_{cell} e^{-i \vec{p}\, \cdot \vec{\xi}_i} e_i~~~.$$ We then find that the matrix element appearing in Eq. \[eq:many-body\_radiation\_power\] takes the form (with $\langle...\rangle$ denoting an expectation in the initial state ${\left|\hat{i}\right\rangle} $, and with $\vec{L}_i$ a lattice displacement), $$\label{eq:many-body_N^2}
\begin{split}
\langle |\mathcal{N}|^2\rangle &= N_{cell} \left( \langle |f|^2
\rangle -|\langle f \rangle|^2\right) + \left| \sum_{\vec{L}_i} e^{-i \vec{p}
\cdot \vec{L}_i} \right|^2 |\langle f \rangle|^2\\
& \cong N_{cell} \langle |f-\langle f \rangle |^2\rangle ~~~,
\end{split}$$ since the second term on the first line of Eq. \[eq:many-body\_N\^2\] grows more slowly than $N_{cell}$ for generic values of ${\vec{p}}$. Hence as long as the variance of $f$ over a unit cell is nonzero, the radiated power scales as the size of the crystal lattice (at least for lattice dimensions smaller than $r_c$).
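The scaling argument behind Eq. \[eq:many-body\_N\^2\] can be illustrated with a toy one-dimensional lattice sum (site spacing and wavenumber values here are arbitrary illustrations):

```python
import numpy as np

def coherent_term(N_cell, p, a=1.0):
    """|sum_n exp(-i p n a)|^2 for a toy 1D chain of N_cell sites, spacing a."""
    n = np.arange(N_cell)
    return abs(np.exp(-1j * p * n * a).sum()) ** 2

# Generic (incommensurate) p: the coherent term stays O(1), so the variance
# term, which grows like N_cell, dominates the radiated power
for N in (100, 10_000, 1_000_000):
    print(N, coherent_term(N, p=2.0))

# At a Bragg condition p*a = 2*pi the coherent term instead grows like N^2
print(coherent_term(1000, p=2 * np.pi))
```

For generic $p$ the geometric sum is bounded by $1/\sin^2(pa/2)$ regardless of $N_{cell}$, which is why the variance term dominates in the second line of Eq. \[eq:many-body\_N\^2\].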
Generalizations and discussion
==============================
Several generalizations of the formulas given above can be easily derived. First of all, if the noise Hamiltonian of Eq. 5 involves general couplings $g_i$ that may differ from the masses $m_i$, so that $$\label{eq:noisepertgen}
\begin{split}
H_{n}=&\int d^3z \frac{dW_t({\vec{z}})}{dt}{\mathcal{V}}({\vec{z}},\{x\})~~~,\\
{\mathcal{V}}({\vec{z}},\{{\vec{x}}\})=&-\frac{\hbar}{m_N}\sum_j g_j g({\vec{z}}-{\vec{x}}_j)~~~,
\end{split}$$ then in $\mathcal{N}$ in Eq. \[eq:many-body\_radiation\_power\] one replaces $e_j$ by $e_j g_j/m_j$. Secondly, our calculation, in the non-white noise case, can be viewed as calculating the radiation produced by a random gravitational potential $$\label{eq:randgrav}
V_{grav}({\vec{x}},t)=\sum_i m_i \phi({\vec{x}}_i,t)~~~,$$ with $\langle \phi\rangle_{AV}=0$ and with the correlation function $$\begin{aligned}
\label{eq:corrfn}
\langle \phi({\vec{x}},t)\phi({\vec{x}}^{\,\prime},t')\rangle_{AV}=&
\left(\frac{\hbar}{m_N}\right)^2
\frac{1}{2\pi}\int_{-\infty}^{\infty} d\omega \gamma(\omega)
e^{-i\omega(t-t')}G({\vec{x}}-{\vec{x}}^{\,\prime})~~~,\\
G({\vec{x}}-{\vec{x}}^{\,\prime})=&\int d^3z g({\vec{x}}-{\vec{z}}) g({\vec{z}}-{\vec{x}}^{\,\prime})
~~~.\end{aligned}$$ Since for the Gaussian $g$ of Eq. 6 one has $$\begin{aligned}
\label{eq:fouriertrans}
\int d^3x e^{i{\vec{k}}\cdot {\vec{x}}}
g({\vec{x}})=&e^{-\frac{1}{2}{\vec{k}}^2r_c^2}~~~,\\
\int d^3x e^{i{\vec{k}}\cdot {\vec{x}}} G({\vec{x}})=&\int d^3x e^{i{\vec{k}}\cdot
{\vec{x}}}\int d^3y g({\vec{x}}-{\vec{y}})g({\vec{y}})=e^{-{\vec{k}}^2r_c^2}~~~,\end{aligned}$$ for a general $G({\vec{x}})$ in Eq. \[eq:corrfn\] one simply replaces $e^{-{\vec{w}}^2 r_c^2}$ in the radiated power expressions by $$\label{eq:genfourtran}
G[{\vec{w}}]=\int d^3x e^{i{\vec{w}}\cdot {\vec{x}}} G({\vec{x}})~~~.$$ Finally, for a more general non-white noise that does not have a time-translation invariant correlation function, so that Eq. 8 is replaced by $$\label{eq:gennonwhitenoise}
E\left[ \frac{dW_t({\vec{x}})}{dt} \frac{dW_{t'}({\vec{y}})}{dt'} \right] =
\Delta(t,t') \delta^3({\vec{x}}-{\vec{y}})~~~,$$ the master formula in the non-white noise case is modified by replacing $$\label{eq:orig}
t \gamma( \omega_p+ \frac{E_f - E_i}{\hbar})$$ by $$\label{eq:gen}
\int_0^t ds \int_0^t du \Delta(s,u) e^{i(s-u)[ \omega_p+ \frac{E_f
- E_i}{\hbar}]}~~~.$$ The most general case, in which the correlation function of Eq. 61 does not factorize into a temporal correlation times a spatial correlation, can be obtained by combining results from Eqs. 61-68.
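Two of these generalizations can be verified numerically. First, the Gaussian Fourier-transform identities of Eq. \[eq:fouriertrans\] (a 1D slice suffices, since the 3D Gaussian of Eq. 6 factorizes); second, the statement that for stationary noise, $\Delta(s,u)=\Delta(s-u)$, the double integral of Eq. \[eq:gen\] reduces to $t\,\gamma(\omega)$ when $t$ is much longer than the correlation time. All numerical parameter values below are arbitrary illustrations:

```python
import numpy as np

r_c, k = 0.7, 1.3           # hypothetical correlation length and wavenumber

# Part 1: Fourier transforms of the Gaussian smearing function (1D slice)
x = np.linspace(-30.0, 30.0, 200001)
dx = x[1] - x[0]
g = np.exp(-x**2 / (2 * r_c**2)) / np.sqrt(2 * np.pi * r_c**2)
ft_g = ((np.exp(1j * k * x) * g).sum() * dx).real
print(ft_g, np.exp(-k**2 * r_c**2 / 2))      # transform of g
print(ft_g**2, np.exp(-k**2 * r_c**2))       # transform of G = g*g (convolution theorem)

# Part 2: stationary Gaussian correlation Delta(tau) = exp(-tau^2 / 2 sigma^2);
# its Fourier transform is gamma(omega) = sigma*sqrt(2*pi)*exp(-omega^2 sigma^2/2)
sigma, omega, t = 0.5, 2.0, 50.0
s = np.linspace(0.0, t, 1001)
ds = s[1] - s[0]
S, U = np.meshgrid(s, s, indexing="ij")
tau = S - U
I = (np.exp(-tau**2 / (2 * sigma**2)) * np.exp(1j * omega * tau)).sum() * ds**2
gamma = sigma * np.sqrt(2 * np.pi) * np.exp(-omega**2 * sigma**2 / 2)
print(I.real, t * gamma)    # agree up to edge effects of order sigma/t
```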
To conclude, we consider the implications of our results for CSL model phenomenology. Since we have seen that for a hydrogenic or a general atomic system emitting kilovolt gamma rays, charge neutrality does not imply a corresponding cancellation in the radiation rate, the estimates of Fu \[3\] must be taken as giving the best upper bounds on the CSL parameter $\lambda$ (defined following Eq. 20) for the white noise case. Including \[2\] a factor of $4\pi$ correction to Fu’s evaluation of the electric charge squared $e^2$, as well as \[8\] a factor of roughly 4 increase in the experimental rate limit subsequent to the value used by Fu, Fu’s calculation implies the bound $\lambda < 7 \times 10^{-11}\,{\rm s}^{-1}$, which is $\sim 3 \times 10^{6}$ larger than the standard CSL model value of $\lambda = 2.2 \times 10^{-17} {\rm s}^{-1}$. As we noted in Sec. 1, this upper bound is several orders of magnitude below the lower bound on $\lambda$ set by postulating that latent image formation (as opposed to image development) should correspond to state vector reduction. Although increasing $r_c$ to $10^{-4}$ cm decreases the 11 kilovolt photon radiation rate, and so increases the corresponding upper bound on $\lambda$, by two orders of magnitude, as discussed in \[2\] this increase in $r_c$ also increases the latent image formation lower bound on $\lambda$ by one to two orders of magnitude, and so does not eliminate the potential discrepancy.
By contrast, in the non-white noise case there is not necessarily a conflict, since the relevant radiation rate involves the noise spectral coefficient $\gamma(\omega)$ at a frequency of at least that of the emitted gamma ray, of order $10^{18}~{\rm s}^{-1}$. In fact, in their review \[1\], Bassi and Ghirardi suggest a cutoff in the noise frequency spectrum of order $c/r_c \sim 10^{15}~{\rm s}^{-1}$, which would be more than sufficient. Even a much lower frequency cutoff would suffice to explain reduction in typical measurements with measurement times of order a nanosecond or longer; for example, a cutoff of order $10^{11}~{\rm s}^{-1}$ would be more than adequate. This would correspond to an energy cutoff of order $10^{-4}$ eV, or a noise temperature of order 1 K. So possibly even a non-white cosmic relic background noise field, with suitable correlator structure, coupling as a real-valued noise term ${\cal N}$ in the Schrödinger equation for $d\psi$, could explain state vector reduction in measurement situations, without coming close to violating the upper bound set by Fu’s calculation.
Acknowledgments
===============
One of us (SLA) wishes to thank Philip Pearle and Angelo Bassi for informative conversations. The work of SLA was supported in part by the Department of Energy under grant no DE-FG02-90ER40642, and the manuscript was completed while he was at the Aspen Center for Physics. The other of us (FMR) was supported by the Piel Graduate Fellowship of Princeton University.
Added Note
==========
The use of the term “power” and the symbol $P$ in Eqs. (20), (21), (36), (38), (39), (42), (43), and (55) was inadvertent; we should have used the term “rate” and the conventional symbol $\Gamma$. These formulas all give the photon radiation rate, and do not include the energy per photon factor $\hbar c p$ needed to convert them to radiated power. We wish to thank Angelo Bassi for pointing this out to us.
References
==========
\[1\] Bassi A and Ghirardi G C 2003 Dynamical reduction models [*Phys. Rep.*]{} [**379**]{} 257; Pearle P 1999 Collapse models [*Open Systems and Measurements in Relativistic Quantum Mechanics (Lecture Notes in Physics*]{} vol 526) ed H-P Breuer and F Petruccione (Berlin: Springer)

\[2\] Adler S L 2007 [*J. Phys. A: Math. Theor.*]{} [**40**]{} 2935

\[3\] Fu Q 1997 [*Phys. Rev.*]{} A [**56**]{} 1806

\[4\] Cohen-Tannoudji C, Dupont-Roc J and Grynberg G 1992 [*Atom-Photon Interactions: Basic Processes and Applications*]{} (New York: John Wiley & Sons)

\[5\] Adler S L and Bassi A, in preparation

\[6\] Dalgarno A and Kingston A E 1960 [*Proc. Roy. Soc.*]{} A [**259**]{} 424

\[7\] Panofsky W K H and Phillips M 1955 [*Classical Electricity and Magnetism*]{} (Reading, MA: Addison-Wesley)

\[8\] Pearle P 2007 private communication, based on Pearle P, Ring J, Collar J I and Avignone F T 1999 [*Found. Phys.*]{} [**29**]{} 465
---
abstract: 'We use the ZFIRE[^1] survey to investigate the high mass slope of the initial mass function (IMF) for a mass-complete ($\mathrm{log_{10}(M_*/M_\odot)\sim9.3}$) sample of 102 star-forming galaxies at $z\sim2$ using their [H$\alpha$]{} equivalent widths ([H$\alpha$]{}-EW) and rest frame optical colours. We compare dust-corrected [H$\alpha$]{}-EW distributions with predictions of star-formation histories (SFH) from PEGASE.2 and Starburst99 synthetic stellar population models. We find an excess of high [H$\alpha$]{}-EW galaxies that are up to 0.3–0.5 dex above the model-predicted Salpeter IMF locus and the [H$\alpha$]{}-EW distribution is much broader (10-500Å) than can easily be explained by a simple monotonic SFH with a standard Salpeter-slope IMF. Though this discrepancy is somewhat alleviated when it is assumed that there is no relative attenuation difference between stars and nebular lines, the result is robust against observational biases, and no single IMF (i.e. non-Salpeter slope) can reproduce the data. We show using both spectral stacking and Monte Carlo simulations that starbursts cannot explain the EW distribution. We investigate other physical mechanisms including models with variations in stellar rotation, binary star evolution, metallicity, and the IMF upper-mass cutoff. IMF variations and/or highly rotating extreme metal poor stars (Z$\sim0.1$[Z$_\odot$]{}) with binary interactions are the most plausible explanations for our data. If the IMF varies, then the highest [H$\alpha$]{}-EWs would require very shallow slopes ($\Gamma>-1.0$) with no one slope able to reproduce the data. Thus, the IMF would have to vary stochastically. We conclude that the stellar populations at $z\gtrsim2$ show distinct differences from local populations and there is no simple physical model to explain the large variation in [H$\alpha$]{}-EWs at $z\sim2$.'
author:
- Themiya Nanayakkara
- Karl Glazebrook
- 'Glenn G. Kacprzak'
- Tiantian Yuan
- 'David B. Fisher'
- 'Kim-Vy Tran'
- Lisa Kewley
- Lee Spitler
- Leo Alcorn
- Michael Cowley
- Ivo Labbe
- Caroline Straatman
- Adam Tomczak
bibliography:
- 'bibliography.bib'
title: 'ZFIRE: Using [H$\alpha$]{} Equivalent Widths to Investigate the In Situ Initial Mass Function at $z\sim2$'
---
Introduction {#sec:introduction}
============
The initial mass function (IMF) or the mass distribution of stars formed in a volume of space at a given time is one of the most fundamental empirically derived relations in astrophysics [@Salpeter1955; @Miller1979; @Kennicutt1983; @Scalo1986b; @Kroupa2001; @Chabrier2003; @Baldry2003]. Since the mass of a star is the primary factor that governs its evolutionary path, the collective evolution of a galaxy is driven strongly by its distribution of stellar masses [@Bastian2010]. Therefore, understanding and quantifying the IMF is of paramount importance since it affects galactic star formation rates (SFR), galactic chemical evolution, formation and evolution of stellar clusters, stellar remnant populations, galactic supernova rates, the energetics and phase balance of the interstellar medium (ISM), mass-to-light ratios, galactic dark matter content, and how we model galaxy formation and evolution [@Kennicutt1998b; @Hoversten2007].
The IMF of stellar populations can be investigated either via direct methods [studies that count individual stars and infer stellar ages to compute an IMF, eg., @Bruzzese2015] or indirect methods [studies that model the integrated light from stellar populations to infer an IMF, eg., @Baldry2003]. Due to current observational constraints, the number of extragalactic IMF measurements that utilize direct methods is limited [@Leitherer1998]. Therefore, most studies employ indirect measures of the IMF, which are affected by numerous systematic uncertainties and limitations.
Indirect IMF measures can be insensitive to low mass stellar populations since bright O, B, and red supergiant stars may outshine low mass stars. In contrast, at the highest mass end there can be an insufficient number of massive stars to make a significant contribution to the detected light [@Leitherer1998; @Hoversten2008]. In addition, degeneracies in stellar population models play a significant role in the uncertainties of the derived IMFs, especially since stellar age, stellar metallicity, galactic dust, galactic SFH, and stellar IMF cannot be easily disentangled [@Conroy2013]. Furthermore, indirect IMF results may depend strongly on more sophisticated features of stellar population models \[mainly stellar rotation, binary evolution of O and B stars and the treatment of Wolf–Rayet (W-R) stars [@Wolf2000]\] and dark matter profiles of galaxies. @Smith2014 showed that galaxy by galaxy comparisons of inferred IMF mass factors via dynamical and spectroscopic fitting techniques can lead to inconsistent results due to our limited understanding of element abundance ratios, dark matter contributions, and/or more sophisticated IMF shapes [@McDermid2014; @Smith2014].
The concept of an IMF was first introduced by @Salpeter1955 as a logarithmic slope $\Gamma$ defined by, $$\label{eq:imf_def_salp}
\Phi(\log\ m)=dN/d(\log\ m) \propto m^{\Gamma}$$ where $m$ is the mass of a star, $N$ the number of stars within a logarithmic mass bin, and $\Gamma=-1.35$ is the slope of the IMF. Historic studies of the IMF slope at the high mass end (M $\gtrsim$ 1 [M$_\odot$]{}) showed no statistically significant differences from the value derived by Salpeter giving rise to the concept of IMF universality [@Scalo1986; @Gilmore2001]. Theoretical studies attempt to explain the concept of universal IMF by invoking mechanisms such as fragmentation of molecular clouds [@Larson1973] or feedback from the ISM [@Klishin2016]. However, there is no definitive theoretical model that can predict a given universal IMF from first principles, which limits our theoretical understanding of the fundamental physics that govern the IMF.
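The definition of Eq. \[eq:imf\_def\_salp\] can be illustrated with a short simulation: draw stellar masses from a Salpeter distribution by inverse-CDF sampling and recover $\Gamma$ from the counts per logarithmic mass bin. The mass range, sample size, and binning below are illustrative choices, not values used in this paper:

```python
import numpy as np

rng = np.random.default_rng(42)
Gamma = -1.35                       # Salpeter slope
m_lo, m_hi = 0.1, 100.0             # illustrative mass range in solar masses

# Inverse-CDF sampling of dN/dm ∝ m^(Gamma-1), i.e. dN/dlog(m) ∝ m^Gamma
u = rng.random(1_000_000)
masses = (m_lo**Gamma + u * (m_hi**Gamma - m_lo**Gamma)) ** (1.0 / Gamma)

# Counts per logarithmic mass bin should scale as m^Gamma; recover the slope
counts, edges = np.histogram(np.log10(masses), bins=15)
centers = 0.5 * (edges[:-1] + edges[1:])
good = counts > 0
slope = np.polyfit(centers[good], np.log10(counts[good]), 1)[0]
print(slope)                        # close to the input Gamma = -1.35
```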
Should the IMF vary?
--------------------
We expect the IMF to vary since a galaxy’s metallicity, SFRs, and environment can change dramatically with time [eg., @Schwarzschild1953; @Larson1998; @Larson2005; @Weidner2013a; @Chattopadhyay2015; @Ferreras2015; @Lacey2016]. Lower metallicities, higher SFRs, and high cloud surface densities prominent at high-redshift can favour the formation of high-mass stars [@Chattopadhyay2015] while interactions between gas clumps in dense environments can suppress the formation of low mass stars [@Krumholz2010]. Furthermore, physically motivated models of early-type galaxies (ETGs) suggest scenarios in which star formation occurs in different periods giving rise to variability in the mass of the stars formed [@Vazdekis1996; @Weidner2013a; @Ferreras2015].
Following from theoretical predictions, recent observational studies have started showing increasing evidence for a non-universal IMF [@Hoversten2008; @vanDokkum2010; @Meurer2011; @Gunawardhana2011; @Cappellari2012; @Cappellari2013; @LaBarbera2013; @Ferreras2013; @Conroy2013b; @Navarro2015a; @Navarro2015d; @Navarro2015c; @Navarro2015b]. These studies investigate both early and late-type galaxies in different physical and environmental conditions and use different techniques to probe the IMF at the lower and upper mass end.
IMF studies of ETGs in the local universe have shown a high abundance of low mass stars [@vanDokkum2010; @Ferreras2013; @LaBarbera2013] with strong evidence for IMF variations as a function of galaxy velocity dispersion [@Cappellari2012; @vanDokkum2012; @Cappellari2013; @Conroy2013b], metallicity [@Navarro2015c] and radial distance within a galaxy [@Navarro2015a]. These results suggest that the IMF of ETGs most likely depends on the physical conditions of the galaxy when it formed the bulk of its stars. Local star-forming galaxies show evidence for IMF variation as a function of galaxy luminosity [@Schombert1990; @Lee2004; @Hoversten2008; @Meurer2009; @Meurer2011], metallicity [@Rigby2004], and SFR [@Gunawardhana2011]. Comparisons between HI selected low and high surface brightness galaxies have shown the need for a systematic variation of the upper mass cutoff and/or the slope of the IMF to model the far-ultraviolet (far-UV) and [H$\alpha$]{} luminosities [@Meurer2009; @Meurer2011]. [H$\alpha$]{} EW and optical colour analysis of the SDSS [@York2000] data showed that low-luminosity galaxies may be deficient in high mass stars [@Hoversten2008], while a similar analysis on the GAMA survey [@Driver2009] showed an excess of high mass stars in highly star-forming galaxies [@Gunawardhana2011], both compared to expectations from a Salpeter IMF.
In spite of the IMF being fundamental to galaxy evolution, our understanding of it at higher redshifts ($z\gtrsim2$) is extremely limited. IMF studies of strong gravitational lenses at $z>1$ have shown no deviation from the Salpeter IMF [@Pettini2000; @Steidel2004; @Quider2009], but quiescent galaxies at $z<1.5$ have shown systematic trends for the IMF with stellar mass [@Navarro2015b]. Using local analogues to $z\sim2$ galaxies, @Navarro2015d finds evidence for an abundance of low mass stars in the early universe.
Understanding the *relic* IMF at high redshift requires populations of quiescent galaxies, which are relatively rare at high redshift, extremely long integration times to obtain absorption line/kinematic features, and complicated modelling of stellar absorption line features. Since the IMF defines the mass distribution of formed stars at a given time, in the context of understanding its role in galaxy evolution it should be investigated *in situ*, at an era when most galaxies are in their star forming phase and evolving rapidly to produce the large elliptical galaxies found locally. Furthermore, high mass stars are absent in ETGs, and therefore star-forming galaxies are imperative to study the high mass end of the IMF. Simulations have shown that the $z\sim2$ universe is ideal for such studies [@Hopkins2006]. Rest frame optical spectra of high redshift galaxies are dominated by strong emission lines produced by nebulae associated with high mass stars ($>$15[M$_\odot$]{}) and therefore provide a direct tracer of the high mass end of the IMF [@Bastian2010]. Due to the recent development of sensitive near infra-red (NIR) imagers and multiplexed spectrographs that take advantage of the Y, J, H, and K atmospheric windows, the $z\sim2$ universe is ideal to study rest frame optical features of galaxies.
Investigating the IMF with [H$\alpha$]{} Emission {#sec:method}
-------------------------------------------------
The total flux of a galaxy at the [H$\alpha$]{} emission wavelength is the sum of the [H$\alpha$]{} emission flux and the continuum level at the same wavelength, minus the [H$\alpha$]{} absorption. [H$\alpha$]{} absorption for galaxies at $z\sim2$ is $\leq$ 3% of the flux level [@Reddy2015] and therefore can be ignored. In case B recombination [@Brocklehurst1971], following the Zanstra principle [@Zanstra1927], the [H$\alpha$]{} flux of a galaxy is directly related to the number of Lyman continuum photons emitted by massive young O and B stars with masses $>$10[M$_\odot$]{}. The continuum flux at the same wavelength is dominated by red giant stars with masses between $0.7-3.0$ [M$_\odot$]{}. Therefore, the [H$\alpha$]{} EW, that is, the ratio of the strength of the emission line to the continuum level, can be considered as the ratio of massive O and B stars to $\sim1$ [M$_\odot$]{} stars present in a galaxy.
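This ratio of ionizing to continuum-producing stars is the quantity the [H$\alpha$]{} EW traces, and its sensitivity to the IMF slope can be sketched with a toy integral of $dN/d\log m \propto m^{\Gamma}$ over the two mass windows quoted above ($>$10 [M$_\odot$]{} versus $0.7-3.0$ [M$_\odot$]{}). The function and the 100 [M$_\odot$]{} upper cutoff are illustrative choices:

```python
import numpy as np

def number_ratio(Gamma, m_lo=0.7, m_hi=3.0, m_OB=10.0, m_max=100.0):
    """Ratio of stars above m_OB to stars in [m_lo, m_hi] solar masses,
    for dN/dlog(m) ∝ m**Gamma; the mass windows follow the text above."""
    def count(a, b):
        # integrate dN/dlog(m) over [log a, log b] with a simple Riemann sum
        logm = np.linspace(np.log10(a), np.log10(b), 20001)
        return ((10.0**logm) ** Gamma).sum() * (logm[1] - logm[0])
    return count(m_OB, m_max) / count(m_lo, m_hi)

print(number_ratio(-1.35))   # Salpeter: roughly 0.03 O/B stars per continuum star
print(number_ratio(-1.0))    # a shallower slope nearly triples that ratio
```

The strong dependence of this ratio on $\Gamma$ is what makes the [H$\alpha$]{} EW a useful probe of the high-mass IMF slope.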
The rest frame optical colours of a galaxy are tightly correlated with its [H$\alpha$]{} emission. The [H$\alpha$]{} flux probes the specific SFR (sSFR) of the shorter lived massive stars, while the optical colours probe the sSFR of the longer lived less massive stars. Therefore, in a smooth exponentially declining SFH, the optical colour of a galaxy will transition from bluer to redder colours with time due to the increased abundance of older less massive red stars. Similarly, with declining SFR the [H$\alpha$]{} flux will decrease and the continuum contribution of the older redder stars will increase, which will act to decrease the [H$\alpha$]{} EW in a similar SFH. The [H$\alpha$]{} EW and optical colour parameter space is degenerate in such a way that changing the slope of the locus is equivalent to lowering the highest mass stars that are formed and/or increasing the fraction of intermediate-mass stars.
Multiple studies have investigated possibilities for IMF variation in galaxies using Balmer line flux in the context of probing SFHs [@Meurer2009; @Weisz2012; @Zeimann2014; @Guo2016; @Smit2016]. Modelling effects of IMF variation using [H$\alpha$]{} or [H$\beta$]{} to UV flux ratios have strong dependence on the assumed SFH and dust extinction of the galaxies and is only sensitive to the upper end of the high mass IMF. Apart from IMF variation [@Boselli2009; @Meurer2009; @Pflamm-Altenburg2009], stochasticity in SFH [@Boselli2009; @Fumagalli2011; @Guo2016], non-constant SFHs [@Weisz2012], and Lyman leakage [@Relano2012] can provide viable explanations to describe offsets between expected Balmer line to UV flux ratios and observed values.
@Kauffmann2014 used SFRs derived via multiple nebular emission line analysis with the 4000Å break and H$\delta_A$ absorption to probe the recent SFHs of SDSS galaxies with $\mathrm{log_{10}(M_*/M_\odot)<10}$ and infer possibilities for IMF variation. They did not find conclusive evidence for IMF variation, with discrepancies between the 4000Å features and @Bruzual2003 stellar templates attributed to errors in the spectro-photometric calibration. However, using absorption line analysis to probe possible IMF variations in actively star-forming galaxies suffers from strong Balmer line emission that dominates and fills the absorption features. Furthermore, absorption lines probe older stellar populations, and linking them with current star-formation requires further assumptions about the SFH.
@Smit2016 used SED fitting techniques to probe discrepancies between the [H$\alpha$]{} to UV SFR ratios of $z\sim4-5$ galaxies and local galaxies. They inferred an excess of ionizing photons in the $z\sim4-5$ galaxies, but could not distinguish whether its origin was a shallow high-mass IMF or a metallicity dependent ionizing spectrum. Using broad-band imaging and SED fitting techniques to infer [H$\alpha$]{} flux carries underlying uncertainties from the contamination by other emission lines that fall within the same observed filter (eg., [\[\]]{}, [\[\]]{}) and from the assumed IMF, SFH, metallicity, and dust law of the SED templates.
In this paper we use the MOSFIRE NIR spectra of galaxies obtained as a part of the ZFIRE survey [@Nanayakkara2016] along with multi-band photometric data from the ZFOURGE survey [@Straatman2016] to study the high-mass IMF of a mass complete sample of star-forming galaxies at $z\sim2$. MOSFIRE spectra have higher resolution and probe redder wavelengths (up to the K band) than *Hubble Space Telescope (HST)* grism spectra [eg., @Zeimann2014], thus allowing us to detect clear [H$\alpha$]{} nebular emission up to $z\sim2.7$. Galaxies follow nearly the same locus in the [H$\alpha$]{} EW vs optical colour plane as long as their SFHs are smoothly increasing or decreasing [@Kennicutt1983], as we show in later figures. The change in galaxy evolutionary tracks due to the IMF is largely orthogonal to the change in the tracks due to dust extinction. Therefore, our method allows stronger constraints on the high-mass IMF than studies that probe Balmer line to UV flux ratios, and is an improvement on the recipe first implemented by @Kennicutt1983 and subsequently used by @Hoversten2008 and @Gunawardhana2011 to study the IMF at $z\sim0$.
The paper is organized in the following way. Section \[sec:EW\_calc\] describes the sample selection, the continuum fitting procedure, the [H$\alpha$]{} EW calculation, the optical colour derivation, and the completeness of our selected sample. Section \[sec:PEGASE\_models\] shows how the synthetic stellar population (SSP) models were computed. In Section \[sec:results\] we show the first results of our IMF study and identify stellar population effects that could describe the distribution of our sample. We discuss the effects of dust on our analysis in Section \[sec:dust\], observational bias in Section \[sec:observational\_bias\], and starbursts in Section \[sec:star\_bursts\]. In Section \[sec:other\_exotica\] we discuss the effects on our analysis of other properties such as stellar rotation, binary stars, metallicity, and the high mass cutoff of the stellar models. Section \[sec:other\_observables\] investigates dependencies of our parameters on other observables. Section \[sec:discussion\] gives a thorough discussion of our results, investigating IMF change and other possible scenarios. We conclude the paper in Section \[sec:summary\] by describing further improvements needed in the field to determine the IMF of galaxies in the high redshift universe. Throughout the paper we refer to the IMF slope of $\Gamma=-1.35$ computed by @Salpeter1955 as the Salpeter IMF. We assume various IMFs and a cosmology with $H_0= 70$ km/s/Mpc, $\Omega_\Lambda=0.7$ and $\Omega_m= 0.3$. All magnitudes are expressed using the AB system.
Observations & Data {#sec:EW_calc}
===================
Galaxy Sample Selection {#sec:sample_selection}
-----------------------
The sample used in this study was selected from the ZFIRE [@Nanayakkara2016] spectroscopic survey and is complemented by photometric data from the ZFOURGE survey [@Straatman2016]. In this section, we describe the selection of our sample from the ZFIRE survey.
ZFIRE is a spectroscopic redshift survey of star-forming galaxies at $1.5<z<2.5$, which utilized the MOSFIRE instrument [@McLean2012] on Keck-I to primarily study galaxy properties in rich environments. ZFIRE has observed $\sim300$ galaxy redshifts with typical absolute accuracy of $\mathrm{\sim15\ kms^{-1}}$ and derived basic galaxy properties using multiple emission line diagnostics. @Nanayakkara2016 give full details on the ZFIRE survey. In this study we use the subset of ZFIRE galaxies observed in the COSMOS field [@Scoville2007] based on a stellar mass limited sample reaching up to 5$\sigma$ emission line flux limits of $\mathrm{\sim3\times10^{-18}erg/s/cm^2}$ selected from deep NIR data $\mathrm{K_{AB}<25}$ obtained by the ZFOURGE survey.
ZFOURGE[^2] (PI I. Labbé) is a Ks selected deep 45 night photometric legacy survey carried out using the purpose-built FourStar imager [@Persson2013] on the 6.5 meter Magellan Telescopes located at Las Campanas Observatory in Chile. The survey covers 121 arcmin$^2$ in each of the COSMOS, UDS [@Beckwith2006], and CDFS [@Giacconi2001] legacy fields. Deep FourStar medium band imaging (5$\sigma$ depth of Ks$\leq$25.3 AB) and the wealth of public multi-wavelength photometric data (UV to far infra-red) available in these fields were used to derive photometric redshifts with accuracies of $\lesssim1.5\%$ using EAZY [@Brammer2008]. Galaxy masses, ages, SFRs, and dust properties were derived using FAST [@Kriek2009] with a @Chabrier2003 IMF, exponentially declining SFHs, and the @Calzetti2000 dust law. At $z\sim2$ the public ZFOURGE catalogues are 80% mass complete to $\sim10^9$[M$_\odot$]{} [@Nanayakkara2016]. Refer to @Straatman2016 for further details on the ZFOURGE survey.
ZFIRE and ZFOURGE are ideal surveys for this study since both provide mass complete samples. The total ZFIRE sample in the COSMOS field contains 142 [H$\alpha$]{} detected \[$>5\sigma$, redshift quality flag (Q$_z$)=3\] star-forming galaxies and is mass complete down to [$\mathrm{log_{10}(M_*/M_\odot)}$]{}$>9.30$ (at 80% for $Ks=24.11$). Thus, our [H$\alpha$]{} selected sample contains no significant systematic biases with respect to SFH, stellar mass, or magnitude. Furthermore, ZFIRE contains a large cluster at $z=2$ with 51 members with $5\sigma$ [H$\alpha$]{} detections [@Yuan2014], and therefore we are able to examine whether the IMF is affected by the local environment of galaxies.
For this study, we apply the following additional selection criteria to the 142 [H$\alpha$]{} detected galaxies.\
$\bullet$ We remove active galactic nuclei (AGN) using photometric [@Cowley2016] and emission line ($\mathrm{log_{10}}$(f([\[\]]{})/f([H$\alpha$]{}))$>-0.5$; @Coil2015) criteria, identifying 26 AGN and leaving a revised sample of $N=116$ galaxies. We note that all galaxies selected as AGN from ZFOURGE photometry by @Cowley2016 are also flagged as AGN by the @Coil2015 selection. We further discuss contamination of [H$\alpha$]{} by sub-dominant AGN in Appendix \[sec:AGN\].\
$\bullet$ Galaxies must have a matching ZFOURGE counterpart such that we can obtain galaxy properties, resulting in N=109 galaxies.\
$\bullet$ We compute the total spectroscopic flux for these galaxies and remove 4 galaxies with negative fluxes, resulting in N=105 galaxies. We perform stringent [H$\alpha$]{} emission quality cuts on the spectra of these 105 galaxies and remove 2 galaxies due to strong sky line subtraction issues. We further remove 1 galaxy due to an overlap of the galaxy spectrum with a secondary object that falls within the same slit.\
Our final sample used for the IMF analysis in this paper comprises 102 galaxies. Henceforth we refer to this sample of galaxies as the ZFIRE stellar population (SP) sample.\
The redshift distribution of the [ZFIRE-SP sample]{} is shown in Figure \[fig:specz\_distribution\]. The [ZFIRE-SP sample]{} is divided into continuum detected and non-detected galaxies as described in Section \[sec:cont\_fit\]. Galaxies in our sample lie within redshifts of $1.97<z<2.46$, corresponding to $\Delta t \sim 650$ Myr.
Completeness {#sec:completeness}
------------
In order to identify any significant detection biases in our [ZFIRE-SP sample]{}, we evaluate the completeness of the galaxies selected for this analysis. We define a redshift window for analysis between $1.90<z<2.66$ ($\sim8.6$ Gpc), which corresponds to the redshifts for which [H$\alpha$]{} emission falls within the MOSFIRE K band. Note that here we discuss galaxies with [H$\alpha$]{} detections and Q$_z>1$, whereas in Section \[sec:sample\_selection\] we discussed the Q$_z=3$ [H$\alpha$]{} detected sample.
In the ZFOURGE catalogues used for the ZFIRE sample selection (see @Nanayakkara2016 for details), there were 1159 galaxies (star-forming and quiescent) in the COSMOS field with photometric redshifts ($z_{\mathrm{photo}}$) within $1.90<z_{\mathrm{photo}}<2.66$. 160 of these galaxies were targeted in K band, out of which 128[^3] were detected with at least one emission line with SNR $>5$. None of the [H$\alpha$]{} detected galaxies had spectroscopic redshifts outside the considered redshift interval. However, 3 additional galaxies (1 object with Q$_z=2$, 2 objects with Q$_z=3$) fell within $1.90<z<2.66$ due to inaccurate photometric redshifts. There were 8 galaxies targeted in K band that did not have [H$\alpha$]{} detections but do have other emission line detections (i.e. no [H$\alpha$]{} but [\[\]]{}, [\[\]]{}, [H$\beta$]{} etc.). Furthermore, no galaxy targeted in K band expecting [H$\alpha$]{} resulted only in other emission line detections.
There were 151 objects within $1.90<z<2.66$ with [H$\alpha$]{} detections (Q$_z>1$), 26 of which were flagged as AGN following the selection criteria of @Coil2015 and @Cowley2016. Of the remaining 125 galaxies, 8 did not have matching ZFOURGE counterparts and 8 had low confidence redshifts (Q$_z=2$) from @Nanayakkara2016; we removed those 16 galaxies from the sample. Of the 109 remaining galaxies, seven were removed for the following reasons: four galaxies due to negative spectroscopic flux, one galaxy due to multiple objects overlapping in the spectrum, and two galaxies due to extreme sky line interference.
Our sample consists of the remaining 102 galaxies, of which 46 have continuum detections (see Section \[sec:cont\_fit\]). Furthermore, 38 galaxies (16 continuum detected) are confirmed cluster members [@Yuan2014] and the remaining 64 (30 continuum detected) are field galaxies. 32 galaxies targeted with photometric redshifts between $1.90<z<2.66$ show no [H$\alpha$]{} emission detection. We divide our sample into 3 mass bins, $\mathrm{log_{10}(M_*/M_\odot)<9.5}$, $\mathrm{9.5\leq log_{10}(M_*/M_\odot)\leq 10.0}$, and $\mathrm{log_{10}(M_*/M_\odot)>10.0}$, and show the corresponding data as described above in Table \[tab:sample\_details\].
We define observing completeness as the percentage of detected galaxies (Q$_z>1$) among those targeted with photometric redshifts between $1.90<z<2.66$ and calculate it to be $\sim80\%$. However, it is possible that the 32 null detections with photometric redshifts within $1.90<z<2.66$ could have been detected if the ZFIRE survey were more sensitive. We stack the photometric redshift likelihood functions ($P(z)$) of the ZFIRE targeted galaxies within this redshift range to compute the expected number of detections given the photometric redshift accuracies (see @Nanayakkara2016 Section 3.2 for further details on how the $P(z)$ stacking is performed). The calculated expectation for [H$\alpha$]{} to be detected within the K band is $\sim80\%$, very similar to the observed completeness. Therefore, the non-detection rate is consistent with the uncertainties in the photometric redshifts. To further account for any detection bias, we employ a stacking technique on the non-detected spectra in order to calculate a lower limit to the stacked EW values. This is further discussed in Section \[sec:EW\_stacking\].
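The completeness expectation from the stacked $P(z)$ described above can be sketched as follows. This is an illustrative Python fragment, not the ZFIRE pipeline; function and variable names are ours, and each likelihood is assumed to be tabulated on a common, uniformly spaced redshift grid:

```python
import numpy as np

def expected_detection_fraction(z_grid, pz_list, z_lo=1.90, z_hi=2.66):
    """Expected fraction of targets whose true redshift places H-alpha in
    the MOSFIRE K band, from the individual P(z) likelihoods.
    Assumes a uniform z_grid, so the bin width cancels in each ratio."""
    in_window = (z_grid > z_lo) & (z_grid < z_hi)
    fracs = [pz[in_window].sum() / pz.sum() for pz in pz_list]
    return float(np.mean(fracs))
```

Comparing such an expectation with the observed detection rate is what allows the non-detections to be attributed to photometric redshift uncertainties rather than to survey depth.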
[ccccccccccc]{} $<9.5$ & 568 & 59 & 44 & 0 & 2 & 1 & 41 & 34 & 9 &\
$9.5-10.0$ & 318 & 47 & 39 & 0 & 1 & 0 & 35 & 16 & 6 &\
$10.0<$ & 273 & 54 & 45 & 0 & 20 & 1 & 26 & 6 & 5 &\
Total &1159 &160 & 128 & 0 & 23 & 2 &102 & 56 & 20 &\
Continuum fitting and [H$\alpha$]{} EW calculation {#sec:cont_fit}
--------------------------------------------------
In this section, we describe the continuum fitting method for our 102 [H$\alpha$]{} detected galaxies selected from the ZFIRE survey. Fitting a robust continuum level to a spectrum requires nebular emission lines and sky line residuals to be masked. Furthermore, the wavelength interval used for the continuum fit should be large enough to enable an effective fit but small enough not to be influenced by the intrinsic SED shape. After extensive testing of various approaches, we find the method outlined below to be the most effective for fitting a continuum level to our sample.
Using visual inspection and the spectroscopic redshifts of the galaxies in our sample, we mask out the [H$\alpha$]{} and [\[\]]{} emission line regions in the spectra. We further mask all known sky lines by selecting a region $\times 2$ the spectral resolution ($\pm5.5$Å) of the MOSFIRE K band. We then use the `astropy` [@Astropy2013] sigma-clipping algorithm to mask out the remaining strong features in the spectra. These spectra are then used to perform an inverse variance weighted constant fit, which we take as the continuum level of the galaxy. Three objects fail to give a positive continuum level with this method; for these we perform a $3\sigma$ clip with two iterations without masking nebular emission lines and sky lines. Using this method we are able to fit positive continuum levels to all galaxies in our sample. We further investigate the robustness of our measured continuum levels in Appendix \[sec:cont\_Halpha\_test\] using ZFOURGE photometric data and conclude that our measured continuum levels are consistent with the photometry.
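A minimal version of this continuum estimate can be sketched as below. This is illustrative Python under simplifying assumptions (1D flux and error arrays, a pre-built boolean mask for emission and sky lines); the actual procedure uses the `astropy` sigma-clipping utilities:

```python
import numpy as np

def fit_continuum(flux, err, mask, nsigma=3.0, niter=2):
    """Inverse-variance weighted constant continuum fit with iterative
    sigma clipping; a simplified stand-in for the procedure above.
    `mask` is True for pixels to exclude (emission lines, sky lines)."""
    good = ~mask & np.isfinite(flux) & (err > 0)
    for _ in range(niter):
        w = 1.0 / err[good] ** 2
        level = np.sum(w * flux[good]) / np.sum(w)
        scatter = np.std(flux[good])
        if scatter == 0:
            break
        good &= np.abs(flux - level) < nsigma * scatter
    w = 1.0 / err[good] ** 2
    return np.sum(w * flux[good]) / np.sum(w)
```

The clipping step removes residual outliers (e.g. unmasked sky residuals) so that the weighted constant tracks the true continuum level.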
We use two approaches to calculate the [H$\alpha$]{} line flux: 1) direct flux measurement and 2) Gaussian fits to sky-line blended and kinematically unresolved emission lines. The two methods give consistent results for emission lines that are not blended with sky lines (see Appendix \[sec:cont\_Halpha\_test\]). By visual inspection, we selected kinematically resolved (due to galaxy rotation etc.) [H$\alpha$]{} emission lines that were not blended with sky lines and computed the EW by integrating the line flux. Within the defined emission line region, we calculated the [H$\alpha$]{} flux by subtracting from the flux at each pixel ($F_i$) the corresponding continuum level of the pixel ($F_{cont_i}$). For the remaining sample, which comprises galaxies with no strong velocity structure and galaxies whose [H$\alpha$]{} emission shows little velocity structure and/or is contaminated by sky lines, we perform Gaussian fits to the emission lines to calculate the [H$\alpha$]{} flux values. We then subtract the continuum level from the computed [H$\alpha$]{} line flux.
Next, we use the calculated [H$\alpha$]{} flux along with the fitted continuum level to calculate the observed [H$\alpha$]{} EW ($H\alpha_{EW_{obs}}$) as follows:
$$\label{eq:Ha_EW_obs}
H\alpha\ EW_{obs} = \sum_{i} \frac{F_i - F_{cont_i}}{F_{cont_i}} \times \Delta \lambda_i$$
where $\Delta \lambda_i$ is the increment of wavelength per pixel. Finally, using the spectroscopic redshift ($z$) we calculate the rest frame [H$\alpha$]{} EW ($H\alpha_{EW_{rest}}$), which we use throughout the paper: $$\label{eq:Ha_EW_rest}
H\alpha\ EW = \frac{H\alpha\ EW_{obs}}{1+z}$$
We calculate EW errors by bootstrap re-sampling the flux of each spectrum randomly within the limits specified by its error spectrum. We re-calculate the EW 1000 times and use the 16$\mathrm{^{th}}$ and 84$\mathrm{^{th}}$ percentiles of the distribution of EW measurements as the lower and upper limits of the EW error, respectively. Since the main uncertainty arises from the continuum fitting, we do not include the error of the [H$\alpha$]{} flux calculation in our bootstrap process.
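The EW measurement and its bootstrap limits can be sketched as follows. This is an illustrative Python fragment with hypothetical function names, using the standard emission-line definition $EW=\sum_i (F_i-F_{cont_i})/F_{cont_i}\,\Delta\lambda_i$; as a simplification, only the continuum level is perturbed, since the text notes it dominates the error budget:

```python
import numpy as np

def halpha_ew_rest(flux, cont, dlam, z):
    """Rest-frame H-alpha EW from the per-pixel flux, the fitted
    continuum level, the wavelength increment per pixel, and z."""
    ew_obs = np.sum((flux - cont) / cont * dlam)   # observed-frame EW
    return ew_obs / (1.0 + z)                      # shift to rest frame

def bootstrap_ew_limits(flux, cont, dlam, z, cont_err, n=1000, seed=1):
    """16th/84th percentile EW limits from perturbing the continuum
    level within its error (illustrative only)."""
    rng = np.random.default_rng(seed)
    ews = [halpha_ew_rest(flux, cont + rng.normal(0.0, cont_err), dlam, z)
           for _ in range(n)]
    return np.percentile(ews, [16, 84])
```

The percentile interval is asymmetric by construction, which matches the use of separate lower and upper EW error limits in the text.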
The robustness of an EW measurement relies on the clear identification of the nebular emission line and the underlying continuum level. The latter becomes increasingly hard to quantify at high redshift for faint star forming sources whose continuum is not detected. Therefore we derive continuum detection limits to separate robustly measured continua from non-detections.
In order to establish the limit to which our method can reliably measure the continuum, we select 14 2D slits with no continuum or nebular emission line detections to extract 1D spectra. We define extraction apertures using a representative profile of the flux monitor star and perform multiple extractions per slit depending on their spatial size. A total of 93 1D sky spectra are extracted and their continuum level is measured by masking out sky lines and performing a sigma-clipping algorithm. The error of the sky continuum fit is calculated by bootstrap re-sampling of the sky fluxes 1000 times. We consider the $1\sigma$ scatter of the bootstrapped continuum values to be the error of the sky continuum fit and $1\sigma$ scatter of the flux values used for the continuum fit as the RMS of the flux.
The comparison between the continuum levels of the sky spectra and the [ZFIRE-SP sample]{} spectra is shown in Figure \[fig:noise\_levels\]. The median and the 2$\sigma$ standard deviation of the continuum levels of the sky spectra are $\mathrm{-2.3\times 10^{-21} ergs/s/cm^2/\AA}$ and $\mathrm{5.4 \times 10^{-20} ergs/s/cm^2/\AA}$ respectively. We consider the horizontal blue dashed line in Figure \[fig:noise\_levels\], which is $2\sigma$ above the median sky level, to be our lower limit for continuum detections in our sample. The [46]{} galaxies in our [ZFIRE-SP sample]{} with continuum levels above this threshold are considered to have robust continuum detections. For the remaining [56]{} galaxies we treat the continuum measurement as a limit and use it to calculate a lower limit to the [H$\alpha$]{} EW values. The redshift distribution of these galaxies is shown in Figure \[fig:specz\_distribution\].
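The detection threshold described above amounts to a simple cut, sketched below in illustrative Python (hypothetical helper; the real analysis uses the bootstrapped sky continuum statistics from the 93 extracted sky spectra):

```python
import numpy as np

def split_continuum_sample(cont_levels, sky_levels, nsigma=2.0):
    """Flag galaxies as continuum detected if their fitted continuum
    exceeds the median sky continuum level by nsigma standard
    deviations, mirroring the threshold defined above."""
    threshold = np.median(sky_levels) + nsigma * np.std(sky_levels)
    detected = np.asarray(cont_levels) > threshold
    return detected, threshold
```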
Calculating optical colours {#sec:col_calculation}
---------------------------
Rest frame optical colours for the [ZFIRE-SP sample]{} are computed using an updated version of EAZY[^4] [@Brammer2008], which derives the best-fitting SEDs for the galaxies from the high quality ZFOURGE photometry and uses them to compute the colours. We investigate the robustness of the EAZY rest frame colour calculation in Appendix \[sec:EAZY colour comparision\]. The main analysis of our sample is carried out using optical colours derived with two idealized synthetic box-car filters, which probe the bluer and redder regions of the rest frame SEDs. We select these filters to avoid regions of strong nebular emission lines, as explained in Section \[sec:PEGASE\_models\] and Appendix \[sec:box car filters\].
In order to allow a direct comparison of ZFIRE $z\sim2$ galaxies with the $z=0.1$ SDSS galaxies from HG08, we further calculate optical colours for the [ZFIRE-SP sample]{} at $z=0.1$ using blue-shifted SDSS g and r filters. Blueshifting the filters simplifies the (g$-$r) colour calculation at $z=0.1$ ([$\mathrm{(g-r)_{0.1}}$]{}) by avoiding the additional uncertainties from K corrections that would arise if we were to redshift the galaxy spectra from $z=0$ to $z=0.1$.
Galaxy Spectral Models {#sec:PEGASE_models}
======================
In this section, we describe the theoretical galaxy stellar spectral models employed to investigate the effects of the IMF, SFH, and other fundamental galaxy properties in the [H$\alpha$]{} EW vs optical colour parameter space. We use PEGASE.2, detailed in @Fioc1997, as our primary spectral synthesis code and further employ Starburst99 [@Leitherer1999] and BPASSv2 [@Eldridge2016] models to investigate the effects of other exotic stellar features.
PEGASE is a publicly available spectral synthesis code developed by the Institut d’Astrophysique de Paris. Once the input parameters are provided, PEGASE produces galaxy spectra for varying time steps, which can be used to evaluate the evolution of fundamental galaxy properties over cosmic time.
Model Parameters
----------------
In this paper, we primarily focus on the effect of varying the IMF, SFH, and metallicity on [H$\alpha$]{} EW and optical colour of galaxies. A thorough description of the behaviour of PEGASE models in this parameter space can be found in [@Hoversten2008]. The parameters we vary are as follows.
- [**The IMF :**]{} we follow HG08 and use an IMF with a single power law as shown by Equation \[eq:imf\_def\_salp\]. Models were calculated with IMF slopes ($\Gamma$) varying between -0.5 and -2.0 in logarithmic space. The lower and upper mass cutoffs were set to 0.5[M$_\odot$]{} and 120[M$_\odot$]{}, respectively. The IMF diagnostic used in this analysis depends on the ability of a star of a given mass to influence the [H$\alpha$]{} emission and the optical continuum level. Stars below 1[M$_\odot$]{} cannot strongly influence the optical continuum and hence this method is not sensitive to the IMF below 1[M$_\odot$]{}. Furthermore, stars below $\sim0.5$[M$_\odot$]{} give no significant variation in the parameters investigated by this method [@Hoversten2008]. Therefore we leave the lower mass cutoff at 0.5[M$_\odot$]{}. The upper mass cutoff of 120[M$_\odot$]{} is the maximum mass allowed by PEGASE. Although the high mass end has a stronger influence on the IMF identified with this method, we justify the 120[M$_\odot$]{} cutoff by the ambiguity of the stellar evolution models above this mass. Varying the upper mass cutoff has a strong effect on the [H$\alpha$]{} EW and optical colours; as HG08 showed, this is strongly degenerate with changing $\Gamma$. In this work we focus on the $\Gamma$ parametrization, noting that changing the cutoff could produce similar effects. We further discuss the degeneracy between the high mass cutoff and the [H$\alpha$]{} EW vs optical colour slope in Section \[sec:mass\_cutoff\].
- [**The SFH :**]{} exponentially increasing/declining SFHs, constant SFHs, and starbursts are used. Exponentially declining SFHs have the form $\mathrm{SFR(t) = p_2\ exp(-t/p_1) / p_1 }$, with p$_1$ varying from 500 to 1500 Myr. Starbursts are added on top of constant SFHs with varying burst strengths and time-scales. Further details are provided in Section \[sec:simulations\].
- [**Metallicity :**]{} models with consistent metallicity evolution and models with fixed metallicity of 0.02 are used.
The other parameters used for the PEGASE models are as follows. We use supernova ejecta model B from @Woosley1995 with the stellar wind option turned on. The fraction of close binary systems is left at 0.05 and the initial metallicity of the ISM is set to 0. We turn off the galactic in-fall function and keep the mass fraction of substellar objects formed with star formation at 0. Galactic winds are turned off, nebular emission is turned on, and we do not include extinction in the models since we extinction correct our data.
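For intuition on the $\Gamma$ parametrization, the sketch below evaluates a single power-law IMF, $\xi(\log m)\propto m^{\Gamma}$ (so $dN/dm \propto m^{\Gamma-1}$), between the 0.5[M$_\odot$]{} and 120[M$_\odot$]{} cutoffs adopted above, together with the exponentially declining SFH from the list above. This is illustrative Python, not PEGASE code:

```python
import numpy as np

def high_mass_fraction(gamma, m_lo=0.5, m_hi=120.0, m_cut=10.0):
    """Fraction of stellar mass formed above m_cut for a single
    power-law IMF with logarithmic slope gamma, dN/dm ~ m^(gamma-1)."""
    m = np.linspace(m_lo, m_hi, 200_000)
    mass = m * m ** (gamma - 1.0)          # mass-weighted IMF, m * dN/dm
    return mass[m >= m_cut].sum() / mass.sum()  # uniform grid: dm cancels

def sfr_exp_decline(t_myr, p1=1000.0, p2=1.0):
    """Exponentially declining SFH, SFR(t) = p2 * exp(-t/p1) / p1."""
    return p2 * np.exp(-t_myr / p1) / p1
```

For the Salpeter slope roughly a quarter of the stellar mass forms above 10[M$_\odot$]{}, and this fraction rises steeply as $\Gamma$ flattens, which is what drives the [H$\alpha$]{} EW offsets between the model tracks.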
As a comparison with HG08, in Figure \[fig:PEG\_neb\_comp\_gr\] we show the evolution of 4 model galaxies from PEGASE in the [H$\alpha$]{} EW vs [$\mathrm{(g-r)_{0.1}}$]{} colour space. The models, computed with exponentially declining SFHs with p$_{1}=1000$ Myr, varying IMFs, and nebular emission lines, agree well with the SDSS data. However, the evolution of the [$\mathrm{(g-r)_{0.1}}$]{} colour shows a strong dependence on the nebular emission contribution, especially for shallower IMFs. HG08 did not consider the effect of emission lines on the [$\mathrm{(g-r)_{0.1}}$]{} colours, and the significant effect at younger ages/bluer colours is likely to be important for $z\sim2$ galaxies.
![ The evolution of PEGASE SSP galaxies in the [H$\alpha$]{} EW vs [$\mathrm{(g-r)_{0.1}}$]{} colour space. We show four galaxy models with exponentially declining SFHs computed using identical parameters but varying IMFs. The thick dark green tracks show, from top to bottom, galaxies with $\Gamma$ values of -0.5, -1.0, -1.35, and -2.0, respectively. The thin light green tracks follow the same evolution as the thick ones, but the nebular line contribution is not considered for the [$\mathrm{(g-r)_{0.1}}$]{} colour calculation. All tracks commence at the top left of the figure and are divided into three time bins. The dotted section of each track corresponds to the first 100 Myr of evolution of the galaxy. The solid section of the tracks shows the evolution between 100 Myr - 3100 Myr ($z\sim2$) and the final dashed section shows the evolution of the galaxy up to 13100 Myr ($z\sim0$). The distribution of the galaxies from the SDSS HG08 sample is shown by the 2D histogram. []{data-label="fig:PEG_neb_comp_gr"}](figures/PEGASE_EW_vs_gr_neb_comp.pdf)
Figure \[fig:PEG\_spectra\] shows an example of a synthetic galaxy spectrum generated by PEGASE. The galaxy is modelled with an exponentially declining SFH with $p_1=1000$ Myr and a [$\Gamma=-1.35$]{} IMF. Due to the declining SFR, the stellar and nebular contributions to the galaxy spectrum decrease with cosmic time. We overlay the response functions of the $g_{z=0.1}$ and $r_{z=0.1}$ filters used in the analysis by HG08. As evident from the spectrum, the spectral region covered by the $g_{z=0.1}$ and $r_{z=0.1}$ filters includes strong emission lines such as [\[\]]{} and [H$\beta$]{}. Therefore, the computed [$\mathrm{(g-r)_{0.1}}$]{} colours have a strong dependence on the photo-ionization properties of the galaxies.
To mitigate uncertainties from photo-ionization models in our analysis, we employ synthetic filters specifically designed to avoid regions with strong nebular emission lines. We design two box-car filters centred at 3400Å and 5500Å, each with a width of 450Å. The rest frame wavelength coverage of these filters corresponds to the region covered by the FourStar $\mathrm{J_1}$ and $\mathrm{H_{long}}$ filters in the observed frame for galaxies at $z=2.1$, and therefore requires negligible K corrections. Further details on this filter choice are provided in Appendix \[sec:filter choice 340\]. Henceforth, we refer to the blue filter as \[340\], the redder filter as \[550\], and the blue minus red colour as [$\mathrm{[340]-[550]}$]{}. The [$\mathrm{[340]-[550]}$]{} colour evolution of a galaxy is independent of the nebular emission lines.
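A synthetic [$\mathrm{[340]-[550]}$]{} colour through these top-hat filters can be computed as in the sketch below (illustrative Python; it assumes a rest frame $f_\lambda$ spectrum in arbitrary units on a uniform wavelength grid in Å, and omits the AB zero point, which cancels in the colour):

```python
import numpy as np

def boxcar_colour(lam, flam, c_blue=3400.0, c_red=5500.0, width=450.0):
    """[340]-[550] colour of a rest-frame spectrum f_lambda(lam) through
    two top-hat filters of the given centres and width (Angstrom)."""
    def band_mag(centre):
        band = np.abs(lam - centre) <= width / 2.0
        fnu = flam[band] * lam[band] ** 2   # f_nu proportional to f_lambda * lam^2
        return -2.5 * np.log10(np.mean(fnu))  # AB zero point omitted
    return band_mag(c_blue) - band_mag(c_red)
```

Because the two bands avoid strong nebular lines, this colour responds to the stellar continuum shape alone, unlike [$\mathrm{(g-r)_{0.1}}$]{}.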
![An example of a model galaxy spectrum generated by PEGASE. Here we show the evolution of the optical region of a galaxy spectrum with an exponentially declining SFH and [$\Gamma=-1.35$]{}, with no metallicity evolution. The time steps of the models from top to bottom are: 100 Myr (dark green), 1100 Myr (green), 2100 Myr (limegreen), and 3100 Myr (lightgreen). The $g_{z=0.1}$ and $r_{z=0.1}$ filter response functions are overlaid on the figure. []{data-label="fig:PEG_spectra"}](figures/PEGASE_spectra.pdf)
We also compare results using Starburst99 (S99) [@Leitherer1999] models in Appendix \[sec:SSP comparision\]. We find that PEGASE and S99 models show similar evolution, and that our conclusions about the IMF of the [ZFIRE-SP sample]{} at $z\sim2$ are largely independent of the choice of SSP model (PEGASE or S99). However, stellar libraries that include rotating and/or binary stars do influence the [H$\alpha$]{} EWs and [$\mathrm{[340]-[550]}$]{} colours, which we discuss in detail in Section \[sec:ssp\_issues\].
Comparison to [H$\alpha$]{} EW & optical colours at $z\sim2$ {#sec:results}
============================================================
We explore the IMF of $z\sim2$ star-forming galaxies using [H$\alpha$]{} EW values from ZFIRE spectra and rest frame optical colours from ZFOURGE photometry. The observed sample used in our analysis is shown in Figure \[fig:EW\_no\_dust\_corrections\]. The left panel shows the distribution of [H$\alpha$]{} EWs and [$\mathrm{[340]-[550]}$]{} colours of the [ZFIRE-SP sample]{} before dust corrections are applied. We overlay model galaxy tracks generated by PEGASE for various IMFs. All models are computed using exponentially declining SFHs with varying time constants (p$_1$), as shown in the figure caption. For a given IMF, smoothly varying monotonic SFHs occupy very similar loci in this parameter space. The thick set of models (third from top) has $\Gamma=-1.35$, similar to the Salpeter slope. Galaxies above these tracks are expected to contain a higher fraction of high mass stars than the mass distribution of a Salpeter IMF would imply; similarly, galaxies below these tracks are expected to contain a lower fraction of high mass stars. Galaxies show a large spread in this parameter space, but we expect this scatter to decrease when dust corrections are applied to the data, as outlined in Section \[sec:dust\_corrections\].
We note the large scatter of the [H$\alpha$]{} EW values with respect to the Salpeter IMF, especially the large number of high EW objects ($\gtrsim0.5$ dex above the Salpeter locus). Could this simply be due to the [ZFIRE-SP sample]{} only detecting [H$\alpha$]{} emission in bright objects, i.e. a sample bias? First, we note our high completeness of $\sim80\%$ for [H$\alpha$]{} detections (Section \[sec:completeness\]). Second, our [H$\alpha$]{} flux limits are actually quite faint. To show this explicitly, we define [H$\alpha$]{} flux detection limits for our sample using $1\sigma$ detection thresholds for each galaxy, parametrised by the integration of the error spectrum over the same width as the emission line. Figure \[fig:EW\_no\_dust\_corrections\] (right panel) shows the [H$\alpha$]{} EWs calculated using these [H$\alpha$]{} flux detection limits, illustrating the distribution of the [ZFIRE-SP sample]{} if the [H$\alpha$]{} flux were barely detected. The [H$\alpha$]{} EWs of the continuum detected galaxies decrease by $\sim1$ dex, which suggests that our EW detection threshold is not biased towards higher [H$\alpha$]{} EW values.\
In addition to the IMF, a number of effects may account for the clear disagreement between the observed data and the models. In subsequent sections we explore effects from\
$\bullet$ dust (Section \[sec:dust\]),\
$\bullet$ observational bias (Section \[sec:observational\_bias\]),\
$\bullet$ star bursts (Section \[sec:star\_bursts\]),\
$\bullet$ stellar rotation (Section \[sec:stellar\_rotation\]),\
$\bullet$ binary stellar systems (Section \[sec:binaries\]),\
$\bullet$ metallicity (Section \[sec:model\_Z\]), and\
$\bullet$ high mass cutoff (Section \[sec:mass\_cutoff\])\
in SSP models to explain the distribution of [H$\alpha$]{} EW vs optical colours of the [ZFIRE-SP sample]{} without invoking IMF change.
Is Dust the Reason? {#sec:dust}
===================
As summarized by @Kennicutt1983, the dust vector is nearly orthogonal to the IMF change vector, and therefore we expect the tracks in the [H$\alpha$]{} EW vs optical colour parameter space to be largely independent of galaxy dust properties. In this section, we describe the galaxy dust properties, explain how dust corrections were applied to the data and their IMF dependence, and explore the difference in reddening between stellar and nebular emission line regions as quantified by @Calzetti2000 for $z\sim0$ star-forming galaxies.
We use FAST [@Kriek2009] with ZFIRE spectroscopic redshifts from @Nanayakkara2016 and multi-wavelength photometric data from ZFOURGE [@Straatman2016] to generate estimates for stellar attenuation (Av) and stellar mass for our galaxies. FAST uses SSP models from @Bruzual2003 and a $\chi^2$ fitting algorithm to derive ages, star-formation time-scales, and dust content of the galaxies. All FAST SED templates have been calculated assuming solar metallicity, @Chabrier2003 IMF, and @Calzetti2000 dust law. We refer the reader to @Straatman2016 for further information on the use of FAST to derive stellar population properties in the ZFOURGE survey.
Applying SED derived dust corrections to data {#sec:dust_corrections}
---------------------------------------------
We use stellar attenuation values calculated by FAST to perform dust corrections to our data. First, we consider the dust corrections for rest frame [H$\alpha$]{} EWs and then we correct the [$\mathrm{[340]-[550]}$]{} colours.
By using [@Cardelli1989] and @Calzetti2000 attenuation laws to correct the nebular emission lines and the stellar continuum, respectively, we derive the following equation for the dust corrected [H$\alpha$]{} EW ($EW_i$): $$\label{eg:EW_dust_corrected}
\log_{10}(EW_i) = \log_{10}(EW_{obs}) + 0.4A_c(V)(0.62f-0.82)$$ where $EW_{obs}$ is the observed EW, $A_c$ is the SED derived continuum attenuation, and $f$ parametrizes the extra reddening of the nebular emission lines relative to the continuum.
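The correction is a one-line transformation; as a sketch (function name ours):

```python
def dust_corrected_log_ew(log_ew_obs, a_cont, f):
    """Dust-corrected log10(Halpha EW) from the relation above:
    a_cont is the SED-derived continuum attenuation A_c(V), and f is
    the nebular-to-continuum reddening factor."""
    return log_ew_obs + 0.4 * a_cont * (0.62 * f - 0.82)
```

Note that for $f=1$ the correction is negative ($-0.08\,A_c(V)$ dex, lowering the EW), while for $f=2$ it is positive ($+0.168\,A_c(V)$ dex).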
[@Calzetti2000] found $f\sim2$ for $z\sim0$ star-forming galaxies, which we use for our analysis under the assumption that actively star-forming galaxies at $z\sim0$ are analogues of star-forming galaxies at $z\sim2$. Henceforth, for convenience we refer to the $f=1/0.44$ [@Calzetti2000] value as $f=2$. We further show key plots in this analysis using a dust correction of $f=1$, i.e. equal dust extinction between stellar and ionized gas regions. This choice is motivated by the possibility that the A and G stars that contribute to the continuum of $z\sim2$ star-forming galaxies are, like the O and B stars, still associated with their original birthplaces, since the $<3$ Gyr age of the universe leaves insufficient time for the stars to move away from their parent birth clouds.
Similarly, using @Calzetti2000 attenuation law we obtain dust corrected fluxes for the \[340\] and \[550\] filters as follows:
$$\label{eq:BC340 dust corrected}
f([340]) = f([340]_{obs}) \times 10^{0.4 \times 1.56 A_c(V)}$$
$$\label{eq:BC550 dust corrected}
f([550]) = f([550]_{obs}) \times 10^{0.4 \times 1.00 A_c(V)}$$
A complete derivation of the dust corrections presented here is given in Appendix \[sec:dust\_derivation\].
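The flux corrections above translate directly into a corrected colour; a sketch (function name ours; we assume equal zero-points for the two boxcar filters, so only the flux ratio matters):

```python
import math

def dust_corrected_colour(f340_obs, f550_obs, a_cont):
    """Correct the [340] and [550] fluxes with the Calzetti-law
    coefficients quoted above, then return the corrected [340]-[550]
    colour (equal zero-points assumed for the two boxcar filters)."""
    f340 = f340_obs * 10 ** (0.4 * 1.56 * a_cont)
    f550 = f550_obs * 10 ** (0.4 * 1.00 * a_cont)
    return -2.5 * math.log10(f340 / f550)
```

The net effect is to make the colour bluer by $2.5\times0.4\times(1.56-1.00)\,A_c(V) = 0.56\,A_c(V)$ mag.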
Figure \[fig:EW\_with\_dust\_corrections\] shows the distribution of our sample before and after dust corrections are applied. In the left panels we show our sample before any dust corrections are applied, with cyan arrows denoting dust vectors for varying $f$ values. It is evident from the figure that the positions of the galaxies in this parameter space depend strongly on the $f$ value used. For $f$ values of 1 and 2, the effect of dust is orthogonal to IMF change, while values above 2 may influence the interpretation of the IMF. We note that $f>2$ makes the problem of high [H$\alpha$]{} EW objects worse, so we do not consider such values further.
The right panels of Figure \[fig:EW\_with\_dust\_corrections\] show the dust corrections applied to both the [H$\alpha$]{} EWs and the [$\mathrm{[340]-[550]}$]{} colours of the [ZFIRE-SP sample]{}. Without the effect of dust, we expect the young star forming galaxies to show similarly blue colours, and therefore the narrower [$\mathrm{[340]-[550]}$]{} colour space occupied by our dust corrected sample is expected. With a dust correction of $f=1$, the majority of the galaxies lie below the $\Gamma=-1.35$ IMF track, with only $\sim1/5$ of the galaxies showing higher [H$\alpha$]{} EWs. However, with an $f\sim2$ dust correction, there is a significant population of galaxies with extremely high [H$\alpha$]{} EW values for a given [$\mathrm{[340]-[550]}$]{} colour inferred from a $\Gamma=-1.35$ IMF, and [$\sim$]{}60% of the galaxies lie above this IMF track.
Even [H$\alpha$]{} EW measurement errors $\sim2\times$ larger than estimated cannot account for the galaxies with the largest deviations from the Salpeter tracks. Changing $f$ from 2 to 1 decreases the median [H$\alpha$]{} EW value by $\sim0.2$ dex; however, galaxies still show a large scatter in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour parameter space, with points lying well above the Salpeter IMF track.
   
The form of the attenuation law of galaxies at $z>2$ differs between studies. Observations from the Atacama Large Millimeter Array have indicated the presence of galaxies with low infra-red (IR) luminosities, suggesting attenuation similar to the Small Magellanic Cloud [SMC; @Capak2015; @Bouwens2016b]. @Reddy2015 found an SMC like attenuation curve for $z\sim2$ galaxies at $\lambda\gtrsim2500$Å and a @Calzetti2000 like attenuation curve at shorter wavelengths. However, *HST* grism and SED fitting analysis of galaxies at $z\sim2-6$ has shown no deviation from the attenuation law derived by @Calzetti2000 for local star-forming galaxies. Such conflicts are also apparent in simulation studies, where @Mancini2016 found evidence for SMC like attenuation with clumpy dust regions while @Cullen2017 found dust properties similar to those inferred by @Calzetti2000.
In order to understand the role of dust laws in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour parameter space, we compare the results using other dust laws, namely the @Pei1992 SMC dust law and the @Reddy2015 $z\sim2$ dust law, to correct the stellar contributions ([H$\alpha$]{} continuum and optical colours). A comparison between the distributions of galaxies obtained with the different dust laws for a given $f$ is shown in Figure \[fig:EW\_with\_dust\_corrections\_with\_various\_dust\_laws\]. The fractions of galaxies with $\Delta$EW $>2\sigma$ from the $\Gamma=-1.35$ IMF track with $f=1$ ($f=2$) dust corrections are $\sim20\%$ ($\sim45\%$), $\sim35\%$ ($\sim75\%$), and $\sim15\%$ ($\sim55\%$) for the @Calzetti2000, @Pei1992 SMC, and @Reddy2015 dust laws, respectively. However, we refrain from interpreting the differences in the distributions of the sample between the considered dust laws because the attenuation values used in the ZFIRE/ZFOURGE surveys have been derived from SED fitting by FAST using a @Calzetti2000 dust law. Compared to the adopted dust law, the change in the value of $f$ has a stronger influence on the galaxies in our parameter space and can significantly affect the EW values, as discussed further in Section \[sec:calzetti\_factor\].
![Here we show the distribution of the [ZFIRE-SP sample]{} in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour parameter space with the [H$\alpha$]{} continuum and optical colours dust corrections applied following [**Top:**]{} [@Calzetti2000] attenuation law, [**Centre:**]{} [@Pei1992] SMC attenuation law, and [**Bottom:**]{} [@Reddy2015] attenuation law. In all panels [@Cardelli1989] attenuation law has been used to dust correct the nebular emission lines with equal amount of extinction applied to continuum and nebular emission line regions $(f=1)$. The arrows in the bottom left corner show the dust vector for a galaxy with Av=0.5 but with varying [@Calzetti1994] factors, which is shown as *f* next to each arrow. []{data-label="fig:EW_with_dust_corrections_with_various_dust_laws"}](figures/EW_plot_BC340-BC550_dust_corrected_f=1_calzettiNone.pdf "fig:") ![](figures/EW_plot_BC340-BC550_dust_corrected_f=1_peiSMC.pdf "fig:") ![](figures/EW_plot_BC340-BC550_dust_corrected_f=1_reddyNone.pdf "fig:")
To investigate differences between our $z\sim2$ sample and the HG08 $z\sim0$ sample, we derive dust corrections to the [$\mathrm{(g-r)_{0.1}}$]{} colours. Using the following equations to apply dust corrections to the g$_{0.1}$ and r$_{0.1}$ fluxes, we recalculate the [$\mathrm{(g-r)_{0.1}}$]{} colours for the [ZFIRE-SP sample]{}.
$$\label{eq:g dust corrected}
f(g_i)_{0.1} = f(g_{obs})_{0.1} \times 10^{0.4 \times 1.25 A_c(V)}$$
$$\label{eq:r dust corrected}
f(r_i)_{0.1} = f(r_{obs})_{0.1} \times 10^{0.4 \times 0.96 A_c(V)}$$
We show the [H$\alpha$]{} EW vs [$\mathrm{(g-r)_{0.1}}$]{} colour comparison between the ZFIRE and SDSS samples in Figure \[fig:EW\_HG08\_comp\]. The dust corrections for the [ZFIRE-SP sample]{} have been performed using $f=1$ and $f=2$. Similar to the [$\mathrm{[340]-[550]}$]{} colour relationship, there is a significant population of galaxies with extremely high [H$\alpha$]{} EW values, and [$\sim$]{}60% of the galaxies lie above the Salpeter IMF track when dust corrections are applied with $f=2$. Furthermore, the $z\sim2$ sample shows much bluer colours than the HG08 sample, which we attribute to the younger ages ($\sim850$ Myr, inferred from tracks with a Salpeter IMF) and the higher SFRs of galaxies at $z\sim2$.
![Comparison of the [H$\alpha$]{} EW and [$\mathrm{(g-r)_{0.1}}$]{} colours of the $z\sim2$ [ZFIRE-SP sample]{} with the HG08 $z\sim0$ sample. The HG08 sample is shown by the 2D grey histogram. The PEGASE models shown correspond to varying IMFs: from top to bottom $\Gamma=-0.5,-1.0,-1.35,\ \mathrm{and}\ -2.0$. Similar to Figure \[fig:EW\_no\_dust\_corrections\], for each IMF we show three models with exponentially declining SFHs with varying p$_1$ values (from top to bottom p$_1$=1500 Myr, 1000 Myr, and 500 Myr). Model tracks at t$>3200$ Myr are shown by the dashed lines. [**Top:**]{} [ZFIRE-SP sample]{} with dust corrections applied with a $f=1$. [**Bottom:**]{} [ZFIRE-SP sample]{} with dust corrections applied with a $f=2$. Note that HG08 uses a $f=2$ in their dust corrections. []{data-label="fig:EW_HG08_comp"}](figures/EW_plot_gr_z01_dust_corrected_f=1.pdf "fig:") ![](figures/EW_plot_gr_z01_dust_corrected_f=2.pdf "fig:")
In Figure \[fig:EW\_deviations\_from\_salp\] we use the $\Gamma=-1.35$ IMF tracks to compute the deviation of the observed [H$\alpha$]{} EW values from a canonical Salpeter like IMF. For each [$\mathrm{(g-r)_{0.1}}$]{} galaxy colour we calculate the expected [H$\alpha$]{} EW using the standard PEGASE model computed with an exponentially declining SFH with p$_1=1000$ Myr, and then calculate the deviation of the observed values from the expected values. Only the $f=2$ scenario is considered here, to be consistent with the dust corrections applied by HG08. The deviations of the ZFIRE sample follow a log-normal distribution with a mean and standard deviation of 0.090 and 0.321 dex, respectively; for the HG08 sample the corresponding values are -0.032 and 0.250 dex. Compared to HG08, the [ZFIRE-SP sample]{} therefore shows a larger scatter and favours higher [H$\alpha$]{} EW values for a given Salpeter like IMF. A simple two sample K-S test between the [ZFIRE-SP sample]{} and HG08 gives a K-S statistic of 0.37 and a $p$-value of $1.32\times10^{-12}$, which suggests that the two samples are drawn from distinctly different distributions. In subsequent sections, we further explore whether the differences between the $z\sim0$ and $z\sim2$ populations are driven by IMF change or by other stellar population parameters.
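The two-sample K-S comparison can be reproduced with any statistics package; a dependency-free sketch of the statistic itself (the sample values in the usage note are illustrative, not our data):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample K-S statistic: the maximum vertical distance between
    the two empirical CDFs, evaluated at every data point."""
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.max(np.abs(cdf_a - cdf_b))
```

Identical samples give a statistic of 0, and fully separated samples give 1; the associated $p$-value quantifies how often a statistic this large arises by chance for the given sample sizes.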
![ Deviations of the observed [H$\alpha$]{} from the canonical Salpeter like IMF tracks in the [H$\alpha$]{} EW vs [$\mathrm{(g-r)_{0.1}}$]{} colour space. We show the $z\sim0$ SDSS HG08 sample (grey–black) and the $z\sim2$ [ZFIRE-SP sample]{}(cyan–red). Both histograms are normalized to an integral sum of 1 and best-fitting Gaussian functions are overlaid. The parameters of the Gaussian functions are shown in the legend. []{data-label="fig:EW_deviations_from_salp"}](figures/EW_deviations_from_salp.pdf)
IMF dependence of extinction values {#sec:further_dust}
-----------------------------------
Dust corrections applied to the [ZFIRE-SP sample]{}, as explained in Section \[sec:dust\_corrections\], are derived from FAST [@Kriek2009] using best-fitting model SEDs to ZFOURGE photometric data. FAST uses a grid of SED template models to fit galaxy photometric data to derive the best fit redshift, metallicity, SFR, age, and Av values for the galaxies via a $\chi^2$ fitting technique. Even though these derived properties may show degeneracy with each other (see @Conroy2013 for a review), in general FAST successfully describes observed galaxy properties of deep photometric redshift surveys [@Whitaker2011; @Skelton2014; @Straatman2016]. FAST has a limited variety of stellar templates, and therefore, we cannot explore the effect of varying IMFs on the FAST derived extinction values.
In order to examine the role of IMF on derived extinction values, we compare the distribution of ZFIRE rest frame UV and optical colours with PEGASE model galaxies. Following the same procedure used to derive the \[340\] and \[550\] filters, we design two boxcar filters centred at 1500Å (\[150\]) and 2600Å (\[260\]) with a length of 675Å. The wavelength regime covered by these two filters approximately correspond to the B and I filters in the observed frame for galaxies at $z\sim2$ (further information is provided in Appendix \[sec:filter choice 150\]). Therefore, K corrections are small and the computed values are robust.
By binning galaxies in stellar mass, we find massive galaxies to be dustier than their less massive counterparts. We show the distribution of our sample in the rest frame UV vs rest frame optical parameter space in Figure \[fig:Av\_ZFIRE\] (left panel). PEGASE model galaxies with $\Gamma=-1.35$ and varying SFHs are shown by the solid model tracks. When we apply [$\mathrm{A_v}$]{}=1 of extinction, the models show a strong diagonal shift due to reddening of the colours on both axes. For each set of tracks, we fit a best-fitting line to the varying SFH models. The dust vector (shown by the arrow) joins the two best-fitting lines drawn to the models with [$\mathrm{A_v}$]{}=0 and [$\mathrm{A_v}$]{}=1 at time $t$. We define [$\mathrm{A_{v}(ZF)}$]{} to be the correction needed to bring each individual galaxy down, parallel to the dust vector, to the best-fitting line with [$\mathrm{A_v}$]{}=0; it is parametrized by the following equation:
$$\label{eq:Av_ZFIRE}
\begin{split}
A_{v}(\mathrm{ZF}) = -0.503\times ([340]-[550])\\
+ 1.914\times ([150]-[260])+ 0.607
\end{split}$$
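As a sketch, the plane fit above is a direct evaluation (function name ours):

```python
def av_zfire(colour_340_550, colour_150_260):
    """Empirical extinction estimate A_v(ZF) from the plane fit above,
    given the rest-frame optical ([340]-[550]) and UV ([150]-[260])
    colours."""
    return -0.503 * colour_340_550 + 1.914 * colour_150_260 + 0.607
```

For example, a galaxy with both colours equal to zero would be assigned [$\mathrm{A_{v}(ZF)}$]{}=0.607.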
Our simple method of dust parametrization is similar to the technique used by FAST, which fits SED templates to the UV continuum to derive the extinction values. The [$\mathrm{[150]-[260]}$]{} colour probes the UV continuum slope, which is highly sensitive to dust, while the [$\mathrm{[340]-[550]}$]{} colour probes the optical continuum slope, which is less sensitive to dust. The [H$\alpha$]{} emission line does not fall within these filters, and hence the method is not strongly sensitive to the SFRs of the galaxies.
In the right panel of Figure \[fig:Av\_ZFIRE\], we compare the extinction values derived from our method ([$\mathrm{A_{v}(ZF)}$]{}) with the extinction values derived by FAST ([$\mathrm{A_{v}(SED)}$]{}). Since the @Chabrier2003 IMF at $\mathrm{M_*>1M_\odot}$ has a similar slope to the Salpeter IMF ($\Gamma=-1.35$), the comparison is largely independent of the IMF. The median offset and [$\sigma_{\mathrm{NMAD}}$]{} scatter between the Av values derived via FAST and via our method are $\sim-0.3$ and $\sim0.3$, respectively, so the values agree within $1\sigma$. There is a systematic bias for [$\mathrm{A_{v}(ZF)}$]{} to overestimate the extinction at lower [$\mathrm{A_{v}(SED)}$]{} values and to underestimate it at higher [$\mathrm{A_{v}(SED)}$]{} values. We attribute this residual pattern to the age--metallicity degeneracy, which is not considered in the derivation of [$\mathrm{A_{v}(ZF)}$]{}.
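The [$\sigma_{\mathrm{NMAD}}$]{} statistic used here can be computed as follows (a sketch assuming the standard normalization that matches the Gaussian $\sigma$):

```python
import numpy as np

def sigma_nmad(x):
    """Normalized median absolute deviation: a robust analogue of the
    standard deviation (the 1.4826 factor makes it consistent with the
    Gaussian sigma)."""
    x = np.asarray(x, dtype=float)
    return 1.4826 * np.median(np.abs(x - np.median(x)))
```

Being median-based, it is insensitive to the catastrophic outliers that would inflate an ordinary standard deviation.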
The choice of IMF will affect dust corrections derived from UV photometry (using FAST or our empirical method), as there is a modest dependence of the rest frame UV continuum slope on the IMF for star-forming populations. We are primarily interested in IMF slopes shallower than the Salpeter slope ($\Gamma> -1.35$) to explain our population of high [H$\alpha$]{} EW galaxies. For $\Gamma=-0.5$ we find that the best-fitting model line in the left panel of Figure \[fig:Av\_ZFIRE\] shifts down by [$\sim$]{}0.1 mag. This increases the magnitude of the dust corrections, extending the arrows in Figure \[fig:EW\_with\_dust\_corrections\] to bluer colours and higher EWs, and so does not explain the presence of high [H$\alpha$]{} EW objects. For the purpose of comparing with our default hypothesis (a universal IMF with $\Gamma=-1.35$) we adopt the FAST derived dust corrections.
 
Balmer decrements {#sec:balmer_decrements}
-----------------
Stellar attenuation values computed by fitting a slope to galaxy SEDs in the UV estimate the extinction of the older stellar populations that primarily contribute to the galaxy continuum. Nebular emission lines originate from hot ionized gas around young, short-lived O and B stars. Given their short lifetimes ($\sim10-20$ Myr), O and B stars are not expected to move far from their birthplaces (dusty clouds); thus, the nebular emission lines are expected to have high levels of extinction. Next, we investigate the dust properties of the stars in different star-forming environments using the luminosity ratios of nebular hydrogen lines and the observed UV colours.
Luminosity ratios of nebular hydrogen lines are insensitive to the underlying stellar population and IMF parameters for a fixed electron temperature [@Osterbrock1989]. These line ratios are governed by quantum mechanics, and therefore, can be used to probe the reddening of nebular emission lines and dust geometry under the assumption that ionized gas attenuation resembles that of the underlying stellar population.
With the recent development of sensitive NIR imagers and multi-object spectrographs, studies have now started to investigate the properties of dust at $z\sim2$ [@Shivaei2015; @Reddy2015; @deBarros2015]. These studies show conflicting results on the ratio of stellar to nebular attenuation of galaxies at $z\sim2$. Here we show Balmer decrement results for the sub-sample of our [ZFIRE-SP sample]{} with SNR $>5$ detections for both [H$\alpha$]{} and [H$\beta$]{}. The data presented herein are a combination of the ZFIRE data release [@Nanayakkara2016] and additional MOSFIRE observations carried out during January 2016. Our sample comprises 42 galaxies with both [H$\alpha$]{} and [H$\beta$]{} emission line detections with SNR $>5$, of which 35 are part of the [ZFIRE-SP sample]{}. Further details on the [H$\beta$]{} detection properties are given in Appendix \[sec:Balmer decrement extended\].
We show the [H$\beta$]{} flux vs [H$\alpha$]{} flux for all our ZFIRE galaxies in Figure \[fig:balmer\_decrement\] (left panel). The diagonal dashed line in the left panel shows the Balmer decrement for Case B recombination models with [H$\alpha$]{}/[H$\beta$]{}= 2.86 [@Osterbrock1989]. Galaxies that fall below this line have [H$\alpha$]{}/[H$\beta$]{}$>2.86$, as expected for physical dust attenuation. In Figure \[fig:balmer\_decrement\] (right panel) we show the comparison between the extinction computed for stars by FAST and the extinction computed for the ionized gas regions using the Balmer decrement. The colour excess is computed from the Balmer decrement using: $$\label{eq:balmer_dec}
E(B-V)_{neb} = \frac{2.5}{1.163} \times \mathrm{log}_{10} \left \{ \frac{f(H\alpha)}{2.86 \times f(H\beta)} \right \}$$ The distribution of our sample in these panels is similar to the @Reddy2015 results shown by the 2D density histogram, which highlights the complicated dust properties of $z\sim2$ star-forming galaxy populations.
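The colour excess equation above can be sketched directly (function name ours):

```python
import math

def ebv_nebular(f_halpha, f_hbeta):
    """Nebular colour excess from the Balmer decrement, assuming Case B
    recombination with an intrinsic Halpha/Hbeta ratio of 2.86."""
    return (2.5 / 1.163) * math.log10(f_halpha / (2.86 * f_hbeta))
```

A dust-free Case B ratio returns $E(B-V)_{neb}=0$, and a larger observed decrement gives a positive reddening.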
The difference in extinction between stellar and nebular regions {#sec:calzetti_factor}
----------------------------------------------------------------
In this section, we investigate how differences in dust properties between stellar and ionized gas regions can affect the distribution of our galaxies in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space. [@Calzetti2000] showed that, for $z\sim0$ star-forming galaxies, the nebular lines are $\sim2\times$ more attenuated than the continuum regions, but at $z\sim2$ studies show conflicting results [@Shivaei2015; @Reddy2015; @deBarros2015]. Using the ZFIRE data in Figure \[fig:balmer\_decrement\], we show that galaxies occupy a large range of $f$ values in our [H$\beta$]{} detected sample. We attribute the scatter in extinction to the properties of the sight-lines to the nebular line regions.
Since the universe is only $\sim3$ Gyr old at $z\sim2$, the dense molecular clouds that collapsed to form stars would only have had limited time to evolve into homogeneous structures within galaxies. This can give rise to differences in the dust geometries within ionizing clouds, resulting in non-uniform dust sight-lines for galaxies at $z\sim2$. By varying the value of $f$ as a free parameter, we calculate the $f$ values required for our galaxies to be consistent with a universal IMF with slope $\Gamma=-1.35$. For each dust corrected [$\mathrm{[340]-[550]}$]{} colour, we compute the [H$\alpha$]{} EW of the PEGASE $\Gamma=-1.35$ IMF track with p$_1=1000$ Myr. We then use the observed and required [H$\alpha$]{} EW values to compute $f$ by inverting the dust-correction relation of Section \[sec:dust\_corrections\]: $$f = \frac{\mathrm{log}_{10}(\mathrm{H\alpha\ EW})_{\Gamma=-1.35} - \mathrm{log}_{10}(\mathrm{H\alpha\ EW})_{obs}}{0.4 \times 0.62 \times A_c(V)}
+ \frac{0.82}{0.62}
\label{eq:varying_f}$$ where log$_{10}(\mathrm{H\alpha\ EW})_{\Gamma=-1.35}$ is the [H$\alpha$]{} EW of the PEGASE model galaxy at the dust corrected [$\mathrm{[340]-[550]}$]{} colour of each of our galaxies and log$_{10}(\mathrm{H\alpha\ EW})_{obs}$ is the observed [H$\alpha$]{} EW.
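As a sketch, the required $f$ follows from solving the EW dust-correction relation, $\log_{10}(EW_i) = \log_{10}(EW_{obs}) + 0.4A_c(V)(0.62f-0.82)$, for $f$ (function name ours):

```python
def required_f(log_ew_model, log_ew_obs, a_cont):
    """f needed for the observed EW to land on the Gamma=-1.35 model
    track, obtained by solving
    log_EW_model = log_EW_obs + 0.4 * A_c * (0.62 * f - 0.82) for f."""
    return (log_ew_model - log_ew_obs) / (0.4 * 0.62 * a_cont) + 0.82 / 0.62
```

By construction, applying the forward correction with a given $f$ and then inverting it recovers the same $f$.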
In Figure \[fig:varying\_f\], we show the $f$ values required for our galaxies to agree with a universal IMF with a slope of $\Gamma=-1.35$. Of the 46 continuum detected galaxies, $\sim30\%$ require $f<1$. It is extremely unlikely that galaxies at $z\sim2$ have $f<1$, since this would imply that the ionizing dust clouds from which the nebular emission lines originate are less dusty than the regions containing the older stellar populations. Furthermore, the galaxies that lie above the Salpeter track (log$_{10}$([H$\alpha$]{} EW) $>2.2$) require $f<1$, and therefore even a varying $f$ hypothesis cannot account for the high EW galaxies. Moreover, $\sim17\%$ of the continuum detections require $f<0$, which is not physically feasible since it would require dust to have the opposite effect to attenuation. Therefore, we reject the hypothesis that varying $f$ values could explain the high [H$\alpha$]{} EWs of our galaxies.
![ The distribution of $f$ values required for galaxies in the [ZFIRE-SP sample]{} to agree with a $\Gamma=-1.35$ Salpeter like IMF. For each dust corrected [$\mathrm{[340]-[550]}$]{} colour of the [ZFIRE-SP sample]{} galaxies, we use the corresponding [H$\alpha$]{} EW of the $\Gamma=-1.35$ PEGASE track with an exponentially declining SFH with a p$_1=1000$ Myr to compute the $f$ value required for the observed [H$\alpha$]{} EW to agree with the $\Gamma=-1.35$ IMF. The vertical dashed line is the f=0 line. []{data-label="fig:varying_f"}](figures/f_hist.pdf)
Observational bias {#sec:observational_bias}
------------------
The [ZFIRE-SP sample]{} spans a large range of [H$\alpha$]{} EWs, suggesting a considerable variation in the sSFRs of the ZFIRE galaxies at $z\sim2$. A high [H$\alpha$]{} EW can result for two reasons:
1. High line flux: suggests a higher SFR in time-scales of [$\sim$]{}10 Myr.
2. Lower continuum level: suggests lower stellar mass for galaxies.
These two scenarios should be considered together: i.e., a higher line flux combined with a lower continuum level would suggest that the galaxy is going through an extreme star-formation phase. We investigate any detection bias that could explain our distribution of [H$\alpha$]{} EWs.
In @Nanayakkara2016, we show that the ZFIRE COSMOS K band detections are mass complete to $\mathrm{\log_{10}(M_*/M_\odot)\sim9.3}$. In Figure \[fig:Ha\_MS\_and\_cont\] we show the distribution of the [H$\alpha$]{} flux and continuum levels of our sample. It is evident from Figure \[fig:Ha\_MS\_and\_cont\] (left panel) that our galaxies evenly sample the star-forming main-sequence described by @Tomczak2014 without significant bias towards extreme [H$\alpha$]{} flux values. Therefore, we conclude that the [H$\alpha$]{} fluxes we detect are typical of star-forming galaxies at $z=2.1$.
In Figure \[fig:Ha\_MS\_and\_cont\] (right panel), we compare our [H$\alpha$]{} flux values with the derived continuum levels. Continuum detected galaxies show continuum levels that are of the order of $\sim2$ mag fainter than the [H$\alpha$]{} fluxes, and therefore the high [H$\alpha$]{} EWs in our sample are primarily driven by the low continua. Note that our continuum detection level is $\sim-2.3$ log flux units. Therefore, for galaxies with only a line detection, the difference between the [H$\alpha$]{} flux and the continuum level is even larger, which implies even higher [H$\alpha$]{} EWs.
Several studies have investigated the [H$\alpha$]{} EW of galaxies at higher redshifts ($z\geq1.5$) [@Erb2006b; @Shim2011; @Fumagalli2012; @Kashino2013; @Stark2013; @Masters2014; @Sobral2014; @Speagle2014; @Marmol-Queralto2016; @Rasappu2016] using SED fitting techniques and/or grism spectra. Our [H$\alpha$]{} EWs show good agreement with the EWs expected at $z\sim2$ [@Marmol-Queralto2016], and we conclude that our observed [H$\alpha$]{} EW values are typical of $z\sim2$ galaxies.
However, no studies have used high quality spectra to study the [H$\alpha$]{} EW at $z\sim2$. Even though our [H$\alpha$]{} fluxes and EWs are typical of $z\sim2$ galaxies, in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space a large fraction of our galaxies show high EWs for a given [$\mathrm{[340]-[550]}$]{} colour compared to the expectation from a Salpeter like IMF. Our high EWs are driven by low continuum levels, for which we consider two possible explanations.
1. Most galaxies have quenched their starburst phase on a time-scale of $\sim10$ Myr. Therefore, the old stellar populations are still being built up, explaining the low continuum level from the older stars.
2. Stars are being formed continuously at $z\sim2$ with a higher fraction of high mass stars.
In Section \[sec:star\_bursts\], we investigate the effects of starbursts on our study to examine how probable it is for $\sim1/3$rd of our galaxies to have quenched their star-formation within a time-scale of $\lesssim10$ Myr.
 
Can Star bursts Explain the high [H$\alpha$]{}-EWs? {#sec:star_bursts}
===================================================
Galaxies at $z\sim2$ are at the peak of the cosmic star formation history [@Hopkins2006]. We expect these galaxies to be rapidly evolving, with multiple stochastic star formation episodes within their stellar populations. If our sample contains a significant population of starburst galaxies, it may introduce significant systematic biases into our IMF analysis.
In this section, we investigate the effects of bursts on the SFHs of the galaxies. We study how the distribution of galaxies in [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space may be affected by such bursts and how we can mitigate their effects. We demonstrate that our final conclusions are not affected by starbursts.
Effects of starbursts
---------------------
A starburst event would abruptly increase the [H$\alpha$]{} EW of a galaxy within a very short time-scale ($\lesssim5$ Myr). The increase in ionizing photons is driven by the extra O and B stars present during a starburst, which increase the amount of Lyman continuum photons. Assuming that a constant fraction of Lyman continuum photons is converted to [H$\alpha$]{} photons via multiple scattering events, we expect the number of [H$\alpha$]{} photons to increase in proportion to the number of O and B stars. Furthermore, the extra O and B stars would make the galaxy bluer, causing the [$\mathrm{[340]-[550]}$]{} colour to decrease.
The ability of a starburst to drive points away from the monotonic Salpeter track is limited. The deviations are driven by the burst fraction, which we define as the burst strength divided by the length of the starburst. If the burst fraction is small, it has a small effect. However, if it is very large it dominates both the [H$\alpha$]{} and the optical light, the older population is ‘masked’, and the galaxy heads back towards the track, albeit at a younger age; i.e. one is seeing the monotonic history of the burst component. The maximum deviation in our study occurs for burst mass fractions of 20–30% occurring on time-scales of 100–200 Myr or fractions thereof, which can cause excursions of up to $\sim 1$ dex. However, as we show below, this only occurs for a short time.
We show the effect of a starburst on a PEGASE model galaxy with a monotonic SFH in Figure \[fig:burst\_model\_EW\_BC\]. A starburst with a time-scale of $\tau_b=100$ Myr and a strength $f_m=0.2$ (the fraction of the mass at $\sim3000$ Myr generated during the burst) is overlaid on the constant SFH model at $t=1500$ Myr. The starburst drives a rapid increase of the [H$\alpha$]{} EW: in Figure \[fig:burst\_model\_EW\_BC\] the galaxy deviates from the constant SFH track as soon as the burst occurs and reaches a maximum [H$\alpha$]{} EW within 4 Myr. At this point, the extremely high-mass stars made during the burst start to leave the main sequence. This increases the number of red giant stars, resulting in a higher continuum level around the [H$\alpha$]{} emission line; therefore, the [H$\alpha$]{} EW starts to decrease slowly after $\sim4$ Myr. Once the burst stops, the [H$\alpha$]{} EW drops rapidly to values lower than the pre-burst level. The galaxy track eventually rejoins the $\Gamma=-1.35$ smooth SFH track at a later time than expected from a smooth SFH model.
We further investigate the effect of starbursts with smaller time-scales ($\tau_b<20$ Myr) and find that the evolution of the [H$\alpha$]{} EW in the aftermath of the burst is more extreme for similar $f_m$ values. This is driven by the more intense star-formation required to generate the same amount of mass within $\sim1/10$th of the time-scale. Since both the [H$\alpha$]{} EW and the [$\mathrm{[340]-[550]}$]{} colour are measures of sSFR, we expect their evolution to depend strongly on the $f_m$ and $\tau_b$ of the burst and to be correlated with each other.
To consider the effects of bursts, we adopt two complementary approaches. First, we stack the data in stellar mass and [$\mathrm{[340]-[550]}$]{} colour bins. Stellar populations are approximately additive, so by stacking we smooth over the stochastic SFHs of individual galaxies and also account for the effect of galaxies with no [H$\alpha$]{} detections. Secondly, we use PEGASE to model starbursts and generate Monte Carlo simulations that predict the distribution of galaxies in [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space. Using the simulations we investigate whether the observed discrepancy is likely to be driven by starbursts, and also double-check whether stacking galaxies recovers the behaviour of smooth SFH models.
Stacking {#sec:EW_stacking}
--------
In order to remove the effects of stochastic SFHs of individual galaxies in our sample, we employ a spectral stacking technique. We first divide the galaxies into three stellar mass bins and three dust-corrected [$\mathrm{[340]-[550]}$]{} colour bins as follows.
- Mass bins: log$\mathrm{_{10}(M_*/M_\odot)}$ $\leq9.5$, $9.5<$ log$\mathrm{_{10}(M_*/M_\odot)} <10$, log$\mathrm{_{10}(M_*/M_\odot)}\geq10$
- [$\mathrm{[340]-[550]}$]{} colour bins: ([$\mathrm{[340]-[550]}$]{}) $\leq0.56$, $0.56<$ ([$\mathrm{[340]-[550]}$]{}) $<0.65$, ([$\mathrm{[340]-[550]}$]{}) $\geq0.65$
We select a wavelength interval of [$\sim$]{}1500Å centred around the [H$\alpha$]{} emission for each spectrum and mask out the sky lines with approximately $2\times$ the spectral resolution. In order to avoid systematic biases arising from narrowing down the sampled wavelength region in the rest frame, we instead redshift all spectra to a common $z=2.1$, around which most of the galaxies reside. We sum all the spectra at this redshift in their respective bins. The error spectra are stacked in quadrature following standard error propagation techniques.
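A minimal sketch of this stacking step (the function name and gridding choices are ours, not the survey pipeline's): each spectrum is shifted to the common redshift, interpolated onto a shared wavelength grid, and summed, with the error spectra added in quadrature.

```python
import numpy as np

def stack_spectra(waves_obs, fluxes, errors, redshifts, z_common=2.1,
                  grid=None):
    """Shift each spectrum to a common redshift and sum on a shared grid.

    Illustrative sketch: `grid` defaults to a rest-frame window around
    H-alpha, redshifted to z_common.
    """
    if grid is None:
        grid = np.linspace(6000.0, 7000.0, 500) * (1 + z_common)
    total_flux = np.zeros_like(grid)
    total_var = np.zeros_like(grid)
    for wave, flux, err, z in zip(waves_obs, fluxes, errors, redshifts):
        # Move the spectrum from its observed redshift to z_common.
        wave_shifted = wave * (1 + z_common) / (1 + z)
        total_flux += np.interp(grid, wave_shifted, flux)
        # Errors are stacked in quadrature, i.e. summed as variances.
        total_var += np.interp(grid, wave_shifted, err) ** 2
    return grid, total_flux, np.sqrt(total_var)
```

Summing (rather than averaging) is what makes the stack behave like a single composite stellar population, since stellar populations are approximately additive.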
We mask out the nebular emission-line regions of the stacked spectra and use a sigma-clipping algorithm to fit a continuum (c1). The error in the continuum is assigned as the standard deviation of the continuum values of 1000 bootstrap iterations.
We visually inspect the stacked spectra to identify the [H$\alpha$]{} emission line profiles to calculate the integrated flux. Stacked [H$\alpha$]{} EW is calculated following equations \[eq:Ha\_EW\_obs\] and \[eq:Ha\_EW\_rest\].
To estimate the error on the stack due to stochastic variations between galaxies, we use a bootstrapping technique to calculate the error of the stacked [H$\alpha$]{} EW values. We bootstrap galaxies with replacement in each bin to produce 1000 stacked spectra, for each of which we calculate the [H$\alpha$]{} EW. The standard deviation of the logarithmic EW values for each bin is taken as the error of the [H$\alpha$]{} EW of the stacked spectra. We expect the bootstrap errors to include stochastic variations in the SFHs between galaxies. If our sample comprises galaxies undergoing extreme starbursts, the effects of such bursts should be captured within these error limits.
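The bootstrap step can be sketched as follows. For simplicity this toy version resamples scalar EW values and takes the mean as the "stack"; the actual analysis resamples full spectra before measuring the EW.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ew_error(ew_per_galaxy, n_boot=1000):
    """Scatter of the stacked log EW over bootstrap resamples.

    Draw galaxies with replacement, 'stack' them (here: mean), and take
    the standard deviation of the resulting log10 EW values as the error.
    """
    ew_per_galaxy = np.asarray(ew_per_galaxy, dtype=float)
    n = len(ew_per_galaxy)
    stacked_log_ew = np.empty(n_boot)
    for i in range(n_boot):
        sample = rng.choice(ew_per_galaxy, size=n, replace=True)
        stacked_log_ew[i] = np.log10(np.mean(sample))
    return np.std(stacked_log_ew)
```

A bin of identical galaxies yields zero bootstrap error, while galaxy-to-galaxy SFH variations (including bursts) inflate it, which is exactly why bursts should be captured within these error limits.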
We stack the individual \[340\] and \[550\] fluxes of the galaxies in similar mass and [$\mathrm{[340]-[550]}$]{} colour bins. The average extinction value of the galaxies in each bin is considered as the extinction of the stacked spectra. We use this extinction value to dust correct the [H$\alpha$]{} EW and [$\mathrm{[340]-[550]}$]{} colours of the stacked spectra, following recipes explained in Section \[sec:dust\_corrections\].
Figure \[fig:EW\_stacked\] shows the distribution of the stacked spectra in [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space before and after dust corrections are applied. We consider dust corrections with $f=1$ and $f=2$. For the $f=1$ dust-correction scenario, the bluest colour bin and the medium- and high-mass stacked data points agree with the $\Gamma=-1.35$ track. However, the redder colour bins and the lowest-mass galaxies on average prefer shallower IMFs. With the $f=2$ dust correction, the distribution of stacked data points, even with bootstrap errors, suggests values $\sim0.1-0.4$ dex above the Salpeter IMF, if interpreted as an IMF variation. In both scenarios, redder galaxies show a larger deviation from the canonical Salpeter IMF. If starbursts drove the distribution of the galaxies, we would expect the bluer galaxies on average to show the larger deviation from the $\Gamma=-1.35$ tracks.
To further account for any detection bias arising from [H$\alpha$]{}-undetected galaxies, we use the ZFIRE COSMOS field K band targeted galaxies with no [H$\alpha$]{} emission detections to compute a continuum (c2) contribution to the stacked spectra. We use the photometric redshifts to select 37 galaxies within $1.90<z<2.66$, the redshift interval in which the [H$\alpha$]{} emission line falls within the MOSFIRE K band. Next we perform a cut to select galaxies with stellar masses ($9.04<\mathrm{log_{10}(M_*/M_\odot)}<10.90$) and [$\mathrm{[340]-[550]}$]{} colours ($-0.12<$([$\mathrm{[340]-[550]}$]{})$<1.46$) similar to the galaxies in the [ZFIRE-SP sample]{}. The final sample comprises 21 galaxies, which we use to stack the 1D spectra in mass and [$\mathrm{[340]-[550]}$]{} colour bins.
In order to stack the spectra, we first mask out the sky regions and assume that all galaxies are at a common $z=2.1$. We mask out the [H$\alpha$]{} and [\[\]]{} emission line regions and fit a continuum similar to how c1 was derived. We add c1+c2 to re-calculate the [H$\alpha$]{} EW for each of the mass and colour bins. Since the continuum level is increased by the addition of c2, the [H$\alpha$]{} EW of the spectra is reduced. We note that the highest mass bin contains no [H$\alpha$]{}-undetected galaxies. Figure \[fig:EW\_c1c2\_stacked\] shows the change in the stacked data points when the [H$\alpha$]{} non-detected continuum is considered with a dust correction of $f=1$ and $f=2$. The maximum deviation of the stacked [H$\alpha$]{} EW values is [$\sim$]{}0.2 dex, and the lowest mass and the reddest [$\mathrm{[340]-[550]}$]{} colour bins show the largest deviation. This is driven by the higher number of lower mass, redder galaxies which have been targeted but not detected by the ZFIRE survey. The magnitude of the deviations is independent of the $f$ value used for the dust corrections, and for both $f=1$ and $f=2$ the galaxies that show an excess of [H$\alpha$]{} EW compared to the $\Gamma=-1.35$ tracks still show an excess when the added c2 continuum contribution is considered. For $f=2$ dust corrections, even after accounting for the non-detected continuum levels, the majority of the stacked galaxies in our sample are significantly offset from the canonical Salpeter-like IMF value.
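The direction and size of this shift follow directly from the definition of the EW: adding continuum while keeping the line flux fixed divides the EW down, and the change in dex depends only on the continuum ratio. A toy calculation (the numbers are illustrative, not measured fluxes):

```python
import math

def ew_with_extra_continuum(line_flux, c1, c2):
    """EW before/after adding the continuum (c2) of H-alpha-undetected
    galaxies to the detected-galaxy continuum (c1).

    EW = line flux / continuum level, so the dex change is
    log10((c1 + c2) / c1), independent of the line flux itself.
    """
    ew_before = line_flux / c1
    ew_after = line_flux / (c1 + c2)
    dex_change = math.log10(ew_before / ew_after)
    return ew_before, ew_after, dex_change
```

For example, a c2 contribution of $\sim$58% of c1 lowers the EW by $\sim$0.2 dex, the maximum deviation quoted above.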
![ Here we show the effect of adding the continuum contribution (c2) of galaxies with no [H$\alpha$]{} detections to the [ZFIRE-SP sample]{} stacks. Galaxies are stacked in mass and colour bins and the arrows show the change in EW when the contribution from c2 is added to the data. All tracks shown are similar to Figure \[fig:EW\_stacked\]. [**Top:**]{} Dust corrections applied with $f=1$. The errors for the data points are similar to Figure \[fig:EW\_stacked\] (centre panel). [**Bottom:**]{} Dust corrections applied with $f=2$. The errors for the data points are similar to Figure \[fig:EW\_stacked\] (right panel). []{data-label="fig:EW_c1c2_stacked"}](figures/EW_plot_stack_dust_corrected_c1+c2_f=1.pdf "fig:") ![ Here we show the effect of adding the continuum contribution (c2) of galaxies with no [H$\alpha$]{} detections to the [ZFIRE-SP sample]{} stacks. Galaxies are stacked in mass and colour bins and the arrows show the change in EW when the contribution from c2 is added to the data. All tracks shown are similar to Figure \[fig:EW\_stacked\]. [**Top:**]{} Dust corrections applied with $f=1$. The errors for the data points are similar to Figure \[fig:EW\_stacked\] (centre panel). [**Bottom:**]{} Dust corrections applied with $f=2$. The errors for the data points are similar to Figure \[fig:EW\_stacked\] (right panel). []{data-label="fig:EW_c1c2_stacked"}](figures/EW_plot_stack_dust_corrected_c1+c2_f=2.pdf "fig:")
Simulations of starbursts {#sec:simulations}
-------------------------
By employing spectral stacking and bootstrap techniques, we showed in Section \[sec:EW\_stacking\] that our galaxies on average favour shallower IMFs than the universal Salpeter IMF. In this section, we use PEGASE SSP models to generate simulations with starbursts and calculate the likelihood that half of the [ZFIRE-SP sample]{} is undergoing simultaneous starbursts at $z\sim2$. Furthermore, by selecting galaxies from the simulation at random times, we stack the galaxies in mass and [$\mathrm{[340]-[550]}$]{} colour bins to make comparisons with the stacked properties of the [ZFIRE-SP sample]{}.
We empirically tune our burst parameters to produce the largest number of galaxies above the Salpeter track. A single starburst with a time-scale of $t_b\sim100-300$ Myr and $f_m\sim0.1-0.3$ is overlaid on constant SFH models, with the starburst occurring at any time between $0-3250$ Myr of the galaxies’ lifetime. Simulation properties and the evolution of [H$\alpha$]{} EW and [$\mathrm{[340]-[550]}$]{} colours during starbursts are further discussed in Appendix \[sec:simulation\_properties\].
Our final simulation grid contains 8337 possible time steps, which we use to randomly select galaxies within $2.0<z<2.5$ (similar to the time window where our observed sample lies) to perform a density distribution study and a stacking technique similar to the method described in Section \[sec:EW\_stacking\].
To quantify the probability of starbursts dominating the scatter in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space, we select 10,000 galaxies randomly from the simulated sample and calculate the relative probability of galaxies occupying each region of [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space. Figure \[fig:simulation\_density\] (top panels) shows the density distribution of the selected galaxy sample. The relative probability is calculated by normalizing the highest density bin to 100%. To keep the logarithmic densities finite, we shift the distribution by 0.01 units. As evident from the figure, for both $f=1$ and $f=2$ dust corrections there is a higher probability for galaxies to be sampled during the pre- or post-burst phase, due to the very short time the tracks take to reach a maximum [H$\alpha$]{} EW value during a starburst. $\sim90\%$ of the galaxies in the [ZFIRE-SP sample]{} lie in regions with $\lesssim0.1\%$ probability. Therefore, we conclude that it is extremely unlikely that $\sim1/5$th ($f=1$) or $\sim1/3$rd ($f=2$) of the galaxies in the [ZFIRE-SP sample]{} are undergoing a starburst simultaneously, and rule out the hypothesis that starbursts could explain the distribution of the [ZFIRE-SP sample]{} in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour parameter space.
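The normalization described above can be sketched as follows (our own helper, not the paper's plotting code): bin the simulated sample in 2D, scale so the densest bin reads 100%, and add the small shift so that empty bins have a finite logarithm.

```python
import numpy as np

def relative_probability_map(colour, ew, bins=50, shift=0.01):
    """2D occupancy of simulated galaxies in colour-EW space, normalized
    so the densest bin is 100%. The small additive shift keeps the
    logarithm finite for empty bins (log10(0 + 0.01) = -2)."""
    hist, _, _ = np.histogram2d(colour, ew, bins=bins)
    rel = 100.0 * hist / hist.max()
    log_rel = np.log10(rel + shift)
    return rel, log_rel
```

With this convention, the $\lesssim0.1\%$ contours quoted above correspond to bins holding fewer than one-thousandth of the galaxies in the densest bin.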
In Figure \[fig:simulation\_stacks\], we use our burst machinery to validate our stacking method. We select 100 galaxies randomly with replacement from the parent population of 100 galaxies. For each galaxy we select a random time and extract the galaxy spectrum at the closest sampled time to retrieve stellar population parameters. We then stack the selected galaxies in stellar mass and [$\mathrm{[340]-[550]}$]{} colour bins. The bins are generated in such a way that the selected galaxies are distributed evenly across the bins. We repeat the galaxy selection and stacking process 100 times to calculate bootstrap errors for the stacked data points. Galaxies with bursty SFHs, stacked in mass and [$\mathrm{[340]-[550]}$]{} colour bins, show a distribution similar to galaxy tracks with constant SFHs. Even with large $t_b$ values, the time over which the tracks deviate significantly above the $\Gamma=-1.35$ track is of the order of 1–5 Myr, and therefore it is extremely unlikely ($\lesssim5$ selected in the stacked sample of 100 galaxies) to preferentially select a large number of galaxies during this phase. Furthermore, stacked errors from bootstrap re-sampling do not deviate significantly from the $\Gamma=-1.35$ tracks. This further strengthens the point that repetitive sampling of galaxies does not yield stacks with higher [H$\alpha$]{} EW values for a given [$\mathrm{[340]-[550]}$]{} colour.
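The improbability of catching many galaxies in the elevated phase can be made concrete by treating each random draw as a Bernoulli trial. The numbers below are our own illustrative assumptions (an elevated phase of $\sim$5 Myr within a 3250 Myr lifetime), not values taken from the simulation grid:

```python
from math import comb

def prob_at_least_k(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p): the chance that at least k of
    n randomly sampled galaxies are caught during their elevated-EW phase."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: a ~5 Myr elevated phase inside a 3250 Myr lifetime.
p_elevated = 5.0 / 3250.0
```

Under these assumptions, the expected number caught among 100 draws is only $\sim$0.15, and `prob_at_least_k(100, p_elevated, 5)` is many orders of magnitude below unity, consistent with the $\lesssim5$-of-100 statement above.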
### Smaller bursts {#sec:small_bursts}
Galaxies at $z\sim2$ appear to be clumpy [eg., @Tadaki2013]. Therefore, it is possible for a single clump to have a high SFR that will add a significant contribution to the ionizing flux of a galaxy resulting in higher [H$\alpha$]{} EW values. To account for such scenarios, we perform the starburst simulations with smaller burst time-scales ($t_b\sim10-30$ Myr) and large burst strengths ($f_m\sim0.1-0.3$) and allow the galaxies to commence their constant SFH at a random time between 0 and 2500 Myr. We further constrain the bursts to redshifts between $2.0<z<2.5$ corresponding to a $\Delta t\sim650$ Myr, which is similar to the redshift distribution of our galaxies.
As described previously, we randomly select 10,000 galaxies from our simulated sample, but constrain the selection to the redshift window $2.0<z<2.5$. We show the density distribution of our randomly selected sample together with the observed [ZFIRE-SP sample]{} in Figure \[fig:simulation\_density\] (bottom panels). A large fraction of galaxies are now selected during the post-burst phase, thus with lower [H$\alpha$]{} EWs compared to the reference IMF track, especially with $f=1$ dust corrections. Since the starbursts are now short-lived but have to generate the same fraction of stellar mass as the longer-lived bursts, the mass generated by the burst per unit time is extremely high. Therefore, changes in [H$\alpha$]{} EW and optical colours are much more drastic as a function of time, which makes it even less likely to select galaxies with high [H$\alpha$]{} EWs. We further test scenarios including smaller bursts within short time-scales (see Table \[tab:simulation\_param\]) and find the distribution of selected galaxies to be similar to Figure \[fig:simulation\_density\] (top panels). We conclude that even limiting the starbursts to a narrow redshift window does not yield a distribution of galaxies that would explain our high [H$\alpha$]{} EW sample.
Considering other exotica {#sec:other_exotica}
=========================
In the previous sections we have shown that the distribution of the [ZFIRE-SP sample]{} galaxies in [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space cannot be described solely by dust or starbursts within a universal IMF framework. In this section, we investigate whether other exotic SSP model parameters, such as stellar rotation, binary stars, metallicity, and the high mass cutoff, could influence the distribution of the galaxies in our parameter space and mimic a varying IMF.
Stellar rotation {#sec:stellar_rotation}
----------------
First, we consider the effects of implementing stellar rotation in SSP models. Rotating stellar models produce harder ionizing spectra with higher numbers of photons capable of ionizing Hydrogen. This is driven by rotationally induced larger Helium surface abundances and higher stellar luminosities [@Leitherer2012], which result in $\sim\times5$ higher ionizing photon output by massive O stars at solar metallicity [@Leitherer2014] and can reach $\gtrsim\times10$ towards the end of the main sequence evolution [@Szecsi2015]. The minimum initial mass necessary to form W-R stars is also lowered by stellar rotation, resulting in longer-lived W-R stars [@Georgy2012] and increasing the number of ionizing photons. Therefore, stars with rotation show higher [H$\alpha$]{} fluxes compared to systems with no rotation, resulting in higher [H$\alpha$]{} EW values. Additionally, stellar rotation also leads to higher mass loss in stars, which results in bluer stars in the red supergiant phase [@Salasnich1999]. Furthermore, stellar models with rotation result in lifetimes longer by $10\%-20\%$ [eg., @Levesque2012; @Leitherer2014]. This allows a larger build-up of short-lived O and B stars compared with similar IMF and SFH models without rotation, resulting in higher [H$\alpha$]{} flux values and bluer stellar populations.
S99 supports stellar tracks from the Geneva group (explained in detail in @Ekstrom2012 and @Georgy2013 and references therein), which allow the user to compute models with and without invoking stellar rotation. Models with stellar rotation assume an initial stellar rotation velocity ([$v_{ini}$]{}) of 40% of the break-up velocity at the zero-age main-sequence ([$v_{crit}$]{}).
@Leitherer2014 notes that [$v_{ini}$]{}$=0.4$[$v_{crit}$]{} is extreme for stellar systems and should be considered an upper boundary for initial stellar rotation values. [$v_{ini}$]{} is defined as the rotational velocity the star possesses when it enters the zero-age main-sequence. Depending on stellar properties and interactions with other stars [eg., @deMink2013], the initial rotational velocity of the star will be regulated with time (see Figure 12 of @Szecsi2015, where the evolution of stellar rotation is investigated as a function of time for models with different stellar masses and initial velocities).
A realistic stellar population will contain a distribution of [$v_{ini}$]{}/[$v_{crit}$]{} values. @Levesque2012 investigated galaxy models with 70% of stars rotating at [$v_{ini}$]{}=0.4[$v_{crit}$]{} and 30% with no stellar rotation, thus allowing more realistic conditions. They found that such models show $\sim0.5$ dex fewer Hydrogen ionizing photons compared to a stellar population in which all stars rotate at [$v_{ini}$]{}=0.4[$v_{crit}$]{}, and a $\sim1.4$ dex higher number of Hydrogen ionizing photons compared to a stellar population with no stellar rotation.
The extent of stellar rotation required to describe the observed properties of stellar populations is not well understood. Gravitational torques have been shown to prevent stars from rotating at $>50\%$ of their break-up velocity during formation [@Lin2011]. @Martins2013 showed that Geneva models with stellar rotation do not reproduce the distribution of massive, evolved stars accurately and require less convective overshooting, thus lowering the required [$v_{ini}$]{}. However, recent studies demonstrate the need for populations of stars with extreme rotation in low-metallicity scenarios to explain the origin of narrow He emission in galaxies [@Grafener2015; @Szecsi2015] and long Gamma-ray bursts [@Woosley2006; @Yoon2006]. Stellar populations of the Large Magellanic Cloud have been shown to follow a two-peak rotational velocity distribution, with $\sim50\%$ of stars rotating at $\sim20\%$ of their critical velocities and $\sim20\%$ of the population having near-critical velocities [@Ramirez-Agudelo2013]. Furthermore, populations of Be stars [@Secchi1866; @Rivinius2013], which are near-critically rotating main-sequence B stars observed in local stellar populations [@Lin2015; @Yu2016; @Bastian2017], provide evidence for the existence of rapidly rotating stars in massive stellar clusters [eg., @Bastian2017].
We show the evolution of galaxy properties in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space in Figure \[fig:rotation\_and\_binary\] (top panels). Due to limitations in rotational stellar tracks, the metallicity of the stars is kept at $Z=0.014$, while the stellar atmospheres are kept at $Z=0.020$. Invoking stellar rotation increases the [H$\alpha$]{} EW by [$\sim$]{}0.1 dex for similar IMFs and shows slightly bluer colours for a given time *t*. Further analysis of the sub-components shows that these changes in [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour are driven by the increase in [H$\alpha$]{} flux and bluer optical colours.
Implementing stellar rotation has effects similar to those of a shallower IMF ($\Gamma>-1.35$), but the deviations are not sufficient to explain the $f=2$ dust corrected [ZFIRE-SP sample]{} within a universal IMF scenario. With $f=1$ dust corrections, however, only $\sim5\%$ of the sample lie above the $\Gamma=-1.35$ track with stellar rotation models, with the majority of galaxies suggesting steeper IMFs.
Having a large fraction of stars with extreme rotation would lead to a higher number of ionizing photons and bluer colours and could potentially explain the high-EW objects in our sample. Sustaining such high rotation requires extremely low metallicities, which we discuss further in Section \[sec:model\_Z\]. Even though we expect the actual variation of [H$\alpha$]{} EW and [$\mathrm{[340]-[550]}$]{} colours due to stellar rotation at near-solar metallicity to be much smaller than what is shown in Figure \[fig:rotation\_and\_binary\], we cannot rule out extreme stellar rotation being dominant at low metallicities (Z$\sim0.002$). Therefore, extreme stellar rotation may provide one explanation, independent of the IMF, for the distribution of our galaxies in the [H$\alpha$]{} EW and [$\mathrm{[340]-[550]}$]{} colour space (see Section \[sec:model\_Z\]). Furthermore, stellar rotation can introduce fundamental degeneracies into IMF determination, which we discuss further in Section \[sec:ssp\_issues\].
Binary system evolution {#sec:binaries}
-----------------------
We consider the effect of implementing the evolution of binary stellar systems on our study. All SSP models described thus far only considered single stellar populations, i.e. there were no interactions between stars in a stellar population. However, recent observational studies in our Galaxy have shown that [$\sim$]{}50% of massive O stars are in binary systems and that the environment may have a strong influence on the dynamical and/or stellar evolution [@Langer2012; @Sana2012; @Sana2013]. Only a minority of O stars would have undisturbed evolution leading to supernovae [@Leitherer2014], thus introducing additional complexities to SSP models and strong implications for studies using these models to infer observed stellar properties. Furthermore, @Steidel2016 demonstrated the necessity of invoking models with massive star binaries to fit rest frame UV and optical features of star-forming galaxies at $z\sim2.5$.
We use the BPASS v2.0 models [@Stanway2016] to investigate the effects of invoking stellar binary evolution in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space. The computed models have been released by the BPASS team only for a limited set of IMF models. We use IMF models with Z=0.02 and $\Gamma=-1.00, -1.35,\ \mathrm{and}\ -1.70$ with lower and upper mass cutoffs at 0.5[M$_\odot$]{} and 100[M$_\odot$]{}, respectively. The IMF slope for stellar masses between 0.1[M$_\odot$]{}$-$0.5[M$_\odot$]{} is kept at $\Gamma=-0.30$ for all the models. We remind the reader that stars with $M_*\lesssim0.5M_\odot$ have a negligible effect on the [H$\alpha$]{} EW vs optical colour parameter space. Figure \[fig:rotation\_and\_binary\] (bottom panels) compares the effect of considering stellar binary system evolution in this parameter space. Binary models with simple prescriptions for stellar rotation slightly increase the [H$\alpha$]{} EW (the maximum increase for a given time is [$\sim$]{}0.2 dex) and make galaxies look bluer for a given IMF at a time *t*. These changes are more prominent for galaxies with steeper IMFs and are driven by the [H$\alpha$]{} flux and optical colours of the galaxies. Furthermore, unlike the effects of rotation, we see a trend in which the steeper IMFs show larger changes (up to $\sim\times2$) in [H$\alpha$]{} flux and [$\mathrm{[340]-[550]}$]{} colours compared to shallower IMFs.
Due to the higher ionizing flux and longer lifetimes of massive O type stars in binary systems, galaxies look bluer at an older age compared to what is predicted by single-star models [@Eldridge2016]. The increase in ionizing flux is driven by transfer of mass between stars causing rejuvenation, the generation of massive stars via stellar mergers, and the stripping of Hydrogen envelopes to form more hot Helium or W-R stars. Mass transfer and mergers between stars also result in larger, bluer stars at later times, making the stellar population bluer. The change of [H$\alpha$]{} EW and [$\mathrm{[340]-[550]}$]{} colours due to binary system evolution at Z=0.02 is not sufficient to explain the distribution of the [ZFIRE-SP sample]{} galaxies and is significantly smaller than the contribution from stellar rotation.
Note that BPASS single stellar evolutionary models do not consider any form of stellar rotation. BPASS binary models do consider stellar rotation, but only if a secondary star accretes material from a companion. In such scenarios at Z $>0.004$ the secondary star is spun up, fully mixed, and rejuvenated, resulting in a zero-age main-sequence star. However, it is assumed that the star is spun down quickly, and stellar rotation is not considered for the rest of its evolution [@Eldridge2012a; @Stanway2016 J. J. Eldridge, private communication]. Since the current version of the BPASS binary models does not consider aspects of stellar rotation in the context of the reduction in surface gravity and the driving of extra mixing beyond the expectations of standard mixing-length theory, as discussed in Section \[sec:stellar\_rotation\] (also see the papers in the series by @Meynet2000 and @Potter2012), comparisons between S99 Geneva models and BPASS cannot be used to constrain the net effect of introducing binary stellar evolution to SSP models that consider stellar rotation.
Stellar metallicity {#sec:model_Z}
-------------------
@Hoversten2008 showed that the evolution of galaxies in the [H$\alpha$]{} EW vs optical colour parameter space is largely independent of the metallicity of the galaxies. However, these predictions were made using PEGASE models and did not account for the increase in mass loss via stellar winds and the increase in ionizing flux predicted in low metallicity scenarios by models that consider stellar rotation and binary interactions.
The lack of elements such as Fe, which dominate the opacity in radiation-driven stellar winds, stellar interiors, and atmospheres, results in the generation of a higher number of ionizing photons in low metallicity stars [eg., @Pauldrach1986; @Vink2005; @Steidel2016]. Furthermore, at lower metallicities the mass-loss rate is low due to weaker stellar winds, thus the most massive stars retain their luminosity and continue to shine for an extended time.
When stellar rotation is introduced to single stellar population models, due to the higher fraction of W-R stars in higher metallicity environments, rotating stellar models with higher metallicity show a larger increment ($\Delta$EW) in ionizing flux compared to the increment seen in lower metallicity models [@Leitherer2014]. However, at $t<3100$ Myr low metallicity rotating stellar models on average show a higher ionizing flux than their higher metallicity counterparts.
When binary interactions are considered, mass transfer between the binaries increases the angular momentum of the stars, causing an increase in stellar rotation [@deMink2013]. Additionally, at Z $\leq0.004$, if a star with [M$_*$]{} $>20$[M$_\odot$]{} has accreted $>5\%$ of its original mass, BPASS assumes that the star maintains its rapid rotation throughout its main-sequence lifetime [@Stanway2016]. This is driven by weaker stellar winds that allow the stars to maintain their rapid rotation for a prolonged period. Furthermore, rotationally induced mixing of stellar layers makes Hydrogen burning efficient, resulting in rejuvenation of main sequence stars. As we show in Section \[sec:stellar\_rotation\], stellar rotation increases the production of ionizing photons, and therefore lower metallicity systems with binary interactions show higher [H$\alpha$]{} EW values. The lower cooling efficiencies prominent in lower metallicity environments also result in bluer and brighter stars. Comparisons between S99 Geneva models with Z=0.002 and Z=0.014 suggest that metallicity has a more prominent effect in increasing the [H$\alpha$]{} EWs than stellar rotation. BPASS models also show metallicity effects to be prominent compared to the effects of stellar rotation and binary interactions.
Therefore, we conclude that, within the scope of current stellar models, metallicity is the dominant driver in increasing the [H$\alpha$]{} EWs, with stellar rotation and binary interactions contributing to a lesser degree.
In Figure \[fig:Z\_and\_high\_mass\] (top panels), we show the evolution of $\Gamma=-1.35$ IMF constant SFH stellar tracks from BPASS with varying metallicities. The variation with metallicity in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space is degenerate with IMF variation. Models with lower metallicities favour higher EWs and bluer colours compared to their higher metallicity counterparts.
Next, we explore whether the gas phase metallicities computed for our galaxies [@Kacprzak2015; @Kacprzak2016] suggest stellar metallicities sufficiently low to produce the ionizing flux required to explain the high-EW galaxies within a $\Gamma=-1.35$ IMF scenario. Converting gas-phase oxygen abundance to stellar iron abundance in high-z galaxies is nontrivial. First, there are considerable systematic uncertainties in the gas phase metallicities measured using [\[\]]{}/[H$\alpha$]{} ratios. There are uncalibrated interrelations between ionization parameter, electron density, and radiation field hardness at $z\sim2$ [@Kewley2013a]. For example, at a fixed metallicity, the [\[\]]{}/[H$\alpha$]{} ratio can be enhanced by a lower ionization parameter or the presence of shocks [eg., @Yuan2012; @Morales-Luis2013], and it is unknown whether the N/O ratio evolves with redshift [@Steidel2014]. From @Kacprzak2015, the gas-phase oxygen abundance of our sample at $\mathrm{log_{10}(M_*/M_\odot)}=9.5$ is $\sim0.5$ [Z$_\odot$]{}; however, the systematic uncertainty can be a factor of $\times$2 because of the unknown calibrations. Because of this, we emphasize that metallicity can be compared reliably in a relative sense, but not yet on an absolute scale [@Kewley2008].
Second, there is limited knowledge of how iron abundances relative to $\alpha$-element (e.g., O, Mg, Si, S, and Ca) abundances change over cosmic time and in different galactic environments [eg., @Wolfe2005; @Kobayashi2006; @Yuan2015]. In addition, there is a lack of consistency in the abundance scales used in stellar atmosphere modelling, stellar evolutionary tracks, and nebular models [@Nicholls2016]. There are considerable variations in the \[O/Fe\] ratios that are not well calibrated at the low metallicity end. For example, at \[Fe/H\] $< - 1.0$ the extrapolated \[O/Fe\] ratio based on Milky Way data is 0.5 [@Nicholls2016], with a $\sim0.3$ dex uncertainty in the conversion of individual values [eg., @Stoll2013]. @Steidel2016 argued for an average \[O/Fe\] ratio of 0.74 for $z\sim2$ UV selected galaxies at an oxygen nebular metallicity of $\sim0.5$ [Z$_\odot$]{}, suggesting a substantially lower stellar metallicity of \[Fe/H\] $\sim-1.0$. If we adopt the \[O/Fe\] ratio of @Steidel2016, then we reach the same conclusion as @Steidel2016, that our stellar abundance is \[Fe/H\] $\sim -1.0$. In this case, we cannot completely rule out extremely low metallicity scenarios to explain the distribution of galaxies in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space. With $f=2$ ($f=1$) dust corrections, moving from BPASS binary models with a stellar metallicity of Z=0.02 to Z=0.002 changes the fraction of objects that lie $2\sigma$ above the reference $\Gamma=-1.35$ track from $\sim40\%$ ($\sim9\%$) to $\sim9\%$ ($\sim2\%$).
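The conversion underlying this estimate is simple in abundance-ratio notation: \[Fe/H\] = \[O/H\] $-$ \[O/Fe\]. A sketch of the arithmetic (our own helper function, using the values quoted above):

```python
import math

def feh_from_oxygen(o_abundance_solar, o_fe):
    """[Fe/H] from a gas-phase oxygen abundance (in solar units) and an
    assumed [O/Fe] offset: [Fe/H] = [O/H] - [O/Fe]."""
    o_h = math.log10(o_abundance_solar)  # [O/H] in dex
    return o_h - o_fe
```

With $\sim0.5$ Z$_\odot$ oxygen and \[O/Fe\] = 0.74, this gives \[Fe/H\] $\approx -1.04$, consistent with the \[Fe/H\] $\sim -1.0$ quoted above.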
Given all the uncertainties mentioned above, we think it is premature to convert our gas-phase oxygen abundance to stellar iron abundance and draw meaningful conclusions. We further note that there are significant uncertainties in massive star evolution in SSP codes and the treatment of stellar rotation and binary stars, which we discuss further in Section \[sec:ssp\_issues\].
High mass cutoff {#sec:mass_cutoff}
----------------
@Hoversten2008 showed that the high mass cutoff is degenerate with the IMF in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour plane. In Figure \[fig:Z\_and\_high\_mass\] (bottom panels) we show various IMF slopes with constant SFHs computed for varying values of the high mass cutoff. The deviation between tracks with 80[M$_\odot$]{} and 120[M$_\odot$]{} high mass cutoffs varies as a function of IMF slope. Shallower IMFs show a larger effect when the high mass cutoff is increased, because a greater number of stars populate the high mass region.
The maximum deviation between the high mass cutoff tracks for $\Gamma=-1.35$ is 0.17 dex, which cannot account for the scatter we observe in the [H$\alpha$]{} EWs of our sample. Furthermore, at $z\sim2$ we expect the molecular clouds forming the stars to be of low metallicity [@Kacprzak2016], which favours the formation of high mass stars. Therefore, we would require the high mass cutoff to increase, but we are limited by the maximum individual stellar mass allowed by PEGASE. BPASSv2 does allow stars up to 300[M$_\odot$]{}; however, we do not employ such high mass limits due to our poor understanding of the evolution of massive stars. We conclude that it is extremely unlikely for the high mass cutoff to have a strong influence on the distribution of the [ZFIRE-SP sample]{} galaxies in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour parameter space.
Dependencies with other observables {#sec:other_observables}
===================================
In this section, we investigate whether the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour distribution shows any relationship with the environment, stellar mass, SFR, and metallicity of the galaxies.
ZFIRE surveyed the @Spitler2012 and @Yuan2014 structure in great detail to probe the effects of environment on galaxy evolution. To date, there are 51 spectroscopically confirmed cluster candidates with ZFOURGE counterparts, of which 38 galaxies are included in our IMF analysis. The other 13 galaxies are removed from our analysis for the following reasons: eight galaxies are flagged as AGN, two galaxies do not meet the $Q_z$ quality cut for our study, two galaxies give negative spectroscopic flux values, and one object suffers extreme sky line interference. We perform a 2-sample K-S test on the [H$\alpha$]{} EW values and [$\mathrm{[340]-[550]}$]{} colours of the continuum detected cluster and field galaxies in our [ZFIRE-SP sample]{} and find the cluster and field samples to be consistent with sharing a parent distribution. Therefore, we conclude that there are no strong environmental effects on the distribution of galaxies in our parameter space.
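For reference, the statistic behind the 2-sample K-S test is the maximum absolute difference between the two empirical CDFs. A minimal stdlib-only sketch (in practice one would use `scipy.stats.ks_2samp`, which also returns a p-value; the sample values below are illustrative):

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic D = max |F_a(x) - F_b(x)|."""
    a, b = sorted(a), sorted(b)

    def ecdf(sample, x):
        # fraction of the sample with values <= x
        return bisect.bisect_right(sample, x) / len(sample)

    # D is attained at one of the observed data points
    xs = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in xs)

print(ks_statistic([1, 2, 3], [1, 2, 3]))  # 0.0 (identical samples)
print(ks_statistic([0, 1], [5, 6]))        # 1.0 (disjoint samples)
```

A small D (relative to the critical value for the sample sizes) is what "similar parent properties" means here.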
For the 22 continuum detected galaxies in common between the @Kacprzak2015 sample and the [ZFIRE-SP sample]{}, we find no statistically significant differences between the high and low metallicity samples in [H$\alpha$]{} EWs or [$\mathrm{[340]-[550]}$]{} colours.
We further use the Salpeter IMF tracks with constant SFHs to compare the EW excess of our continuum detected sample with stellar mass and SFR. In Figure \[fig:delta\_EW\_checks\] (left panels) we show the EW excess as a function of stellar mass. We divide the sample into low mass ($\mathrm{log_{10}(M_*/M_\odot)<10.0}$) and high mass ($\mathrm{log_{10}(M_*/M_\odot)\geq10.0}$) bins and compute the scatter in EW excess, finding a greater tendency for low mass galaxies to show larger scatter in EW offsets.
Looking for IMF change as a function of SFR is inherently problematic, especially if the SFR is itself computed from the [H$\alpha$]{} flux assuming a universal IMF. Nevertheless, in order to compare with @Gunawardhana2011, we show this in Figure \[fig:delta\_EW\_checks\] (centre panels) and confirm the trend they found of larger EW offsets for higher "SFR" objects. However, we refrain from interpreting this as a systematic trend for IMF variation. Using best fit SEDs from ZFOURGE, we compute the UV+IR SFRs [@Tomczak2014] and find a greater tendency for low UV+IR SFR galaxies to show larger EW offsets, as shown in Figure \[fig:delta\_EW\_checks\] (right panels). We note that the difference in SFR between [H$\alpha$]{} and UV+IR is driven by the different time-scales of star-formation probed by the two methods.
Discussion {#sec:discussion}
==========
Comparison with local studies {#sec:HG08_comp}
-----------------------------
Our study follows a method first outlined by @Kennicutt1983 and later implemented on large data sets by @Hoversten2008 and @Gunawardhana2011 to study the IMF of star-forming galaxies. We find that the distribution of [H$\alpha$]{} EWs and optical colours at $z\sim2$ is unlikely to be driven by a sample of galaxies with a universal Salpeter like IMF. @Hoversten2008 found a trend with galaxy luminosity, with low luminosity galaxies in SDSS favouring a steeper IMF and the highest luminosity ones showing a Salpeter slope. @Gunawardhana2011 found a systematic variation of the IMF as a function of SFR in GAMA galaxies, with the highest-SFR galaxies lying above the Salpeter track. However, we note that the use of the [$\mathrm{(g-r)_{0.1}}$]{} colour by the $z\sim0$ studies may have introduced additional complexities into the analysis through significant emission line contamination, and the use of SFR as a variable in IMF change is problematic because its calculation depends on the IMF and the [H$\alpha$]{} luminosity.
Comparing our results with the local galaxies of HG08 shows distinct differences in the [H$\alpha$]{} EW vs [$\mathrm{(g-r)_{0.1}}$]{} colour distribution. Since galaxies at $z\sim2$ have had only $\sim3.1$ Gyr to evolve, we observe younger, bluer stellar populations giving rise to tighter [$\mathrm{(g-r)_{0.1}}$]{} colours (distributed around 0.082 mag with a standard deviation of 0.085 mag). In contrast, the HG08 galaxy sample comprises much redder colours with a larger scatter in [$\mathrm{(g-r)_{0.1}}$]{}. In a smooth star-formation scenario, we interpret the large scatter of the HG08 sample in [$\mathrm{(g-r)_{0.1}}$]{} colour space as driven by the wide variety of galaxy ages.
Galaxies at $z\sim2$ show a large range in [H$\alpha$]{} EWs compared to $z\sim0$ results. In our analysis, we investigated several key factors which may contribute to the large scatter of [H$\alpha$]{} EW at $z\sim2$. Compared to $z\sim0$ galaxy populations, we expect galaxies at $z\sim2$ to be young and actively star-forming, in environments and physical conditions that may be distinctly different from local ones. Therefore, effects such as starbursts may be prominent and dust properties may vary significantly, both of which can influence the observed [H$\alpha$]{} EW values. Galaxy mergers and multiple starburst phases in the evolutionary histories of $z\sim0$ galaxies add further layers of complexity. Furthermore, the presence of old stellar populations requires [H$\alpha$]{} absorption to be corrected for, which we expect to be negligible at $z\sim2$. Due to the limited evolutionary time-scale at $z\sim2$ (only $\sim3$ Gyr), we consider most of these effects to have no significant influence on our analysis. However, we cannot completely rule out the effects of dust sight-lines on our analysis, which we discuss further in Section \[sec:dust\_discussion\].
The development of more advanced stellar tracks and a greater understanding of stellar properties allow us to explore uncertainties related to stellar modelling that may significantly influence the observed parameters of galaxies at $z\sim2$.
What do we really find?
-----------------------
In Section \[sec:observational\_bias\], we showed that our ZFIRE selected sample was not preferentially biased towards extremely high star-forming galaxies and that our mass-complete $z\sim 2$ sample is sensitive to quite low EWs. Since observational bias appears not to be the explanation, we can investigate the physical factors that drive the difference of the [ZFIRE-SP sample]{} from universal Salpeter like IMF scenarios in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour plane.
[H$\alpha$]{} flux is a direct probe of the SFR on time-scales of $\sim10$ Myr, while the continuum level at 6563Å provides an estimate of the mass of the older stellar populations; therefore, the [H$\alpha$]{} EW is a proxy for the sSFR. Similarly, for monotonic SFHs, the optical colours change smoothly with time, so the [$\mathrm{[340]-[550]}$]{} colour is a second proxy for the sSFR, but with a different IMF sensitivity. Of the two sSFR measures, the [H$\alpha$]{} EW is the more sensitive to the highest mass stars, so one way to express our result is to state that there is an excess of ionising photons (i.e., [H$\alpha$]{}) at a given SFR compared to a Salpeter-slope model.
In our sample, with $f=2$ dust corrections, [$\sim$]{}50% of galaxies show an excess of high mass stars for a given sSFR compared to the expectation from a Salpeter like IMF. By stacking galaxies in mass and [$\mathrm{[340]-[550]}$]{} colour bins, we can average out stochastic variations in SFHs between galaxies. Our stacking results further confirmed that on average, for all masses and sSFR values, a universal IMF cannot produce the observed galaxy distribution in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space. We performed further analysis to understand what other mechanisms may drive this excess in [H$\alpha$]{} EW for a given sSFR.
Dust and starbursts {#sec:dust_discussion}
-------------------
The dust extinction values in our analysis were derived using FAST, which relies on underlying assumptions about the IMF and SFH to produce best fit stellar parameters from galaxy observables. Our own analysis of dust showed that SED derived extinction values from the UV slope have a strong dependence on the assumed IMF. However, for the purposes of testing consistency with a universal Salpeter slope, this suffices.
We further found that differential dust extinction between the stellar continuum and nebular emission line regions can introduce significant scatter among galaxies in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour parameter space. By analysing Balmer decrement values for a subset of galaxies in our sample, we found significant scatter in the relation between the extinction of nebular and stellar continuum regions ($f$), which can be attributed to differences in dust sight-lines between galaxies. @Reddy2015 showed this scatter in extinction to be a function of [H$\alpha$]{} SFR, with galaxies with higher star-forming activity showing larger nebular extinction than galaxies with low SFRs. We test this by allowing $f$ to vary as a free parameter in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space for each galaxy to force agreement with a universal Salpeter like IMF. The resulting distribution of $f$ contains extreme values, including unphysical negative ones, suggesting that it is extremely unlikely that the scatter in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space is driven solely by variation in $f$.
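To make the role of $f$ concrete, the sketch below (our own illustration, not the paper's fitting code) shows how differential extinction rescales an observed EW, assuming $f$ is parametrized as the ratio of nebular to stellar continuum attenuation, $A_\mathrm{neb} = f\,A_\mathrm{cont}$; the numerical values are invented:

```python
def intrinsic_ew(ew_obs, a_cont, f):
    """Correct an observed EW for differential extinction.

    Assumes A_neb = f * A_cont (magnitudes at the line wavelength).
    EW_int = (F_line * 10^{0.4 A_neb}) / (F_cont * 10^{0.4 A_cont})
           = EW_obs * 10^{0.4 (A_neb - A_cont)}.
    """
    a_neb = f * a_cont
    return ew_obs * 10 ** (0.4 * (a_neb - a_cont))

# f = 1: equal extinction, the observed EW is already intrinsic
print(intrinsic_ew(100.0, 0.5, 1.0))  # 100.0
# f = 2: nebular regions more obscured, so the intrinsic EW is larger
print(intrinsic_ew(100.0, 0.5, 2.0) > 100.0)  # True
```

Forcing agreement with a universal IMF amounts to solving this relation for $f$ per galaxy, which is why unphysical (negative) $f$ values rule that scenario out.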
Starbursts in galaxies can introduce significant scatter in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space. We implemented a stacking procedure for the galaxies in mass and colour bins to remove the stochastic SFHs of individual galaxies, treating them as an ensemble stellar population with a smooth SFH prior to $z\sim 2$. We found that our stacks on average ($50\%$ of the $f=1$ stacks and $100\%$ of the $f=2$ stacks) favour shallower IMF slopes compared to the traditional Salpeter value of $\Gamma =-1.35$. By performing Monte Carlo simulations of starbursts using PEGASE SSP models, we found that the time-scales of bursts make it extremely unlikely for them to account for the galaxies which lie significantly above the $\Gamma=-1.35$ track.
Dependencies on SSP models and stellar libraries {#sec:ssp_issues}
------------------------------------------------
We compared the evolution of model galaxies in [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour using the PEGASE and Starburst99 SSP codes and conclude that the evolution of these parameters is largely independent of the SSP model used for a given stellar library. The [$\mathrm{[340]-[550]}$]{} colours were designed to avoid strong emission line regions in the rest frame optical spectrum, which avoids the need for complicated photo-ionization codes to generate nebular emission lines. The [H$\alpha$]{} flux is generated using a constant conversion factor from Lyman continuum photons to [H$\alpha$]{} photons, which is similar between PEGASE and S99.
We found that stellar libraries play a vital role in determining the evolutionary tracks of galaxies in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour parameter space. Stellar libraries with rotation produce larger amounts of ionizing flux, which results in higher [H$\alpha$]{} EW values for a given [$\mathrm{[340]-[550]}$]{} colour. [@Leitherer2014] further showed that rotation leads to larger convective cores in stars, increasing the total bolometric luminosity, which can mimic a shallower IMF. At [$\mathrm{[340]-[550]}$]{}= 0.61, introducing stellar rotation via Geneva stellar tracks with Z=0.014 results in $\Delta\mathrm{log_{10}[EW (log_{10}(\AA))]\sim0.09}$. Therefore, we found that rotation by itself cannot account for the scatter of our sample in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour parameter space at near solar metallicities.
Consideration of binary stellar systems is imperative to understanding the stellar properties of $z\sim2$ galaxies [@Steidel2016]. However, added complexity arises from angular momentum transfer during binary star interactions. This may influence the rotation of the stars, and therefore it is necessary to consider the evolution of binary stars with detailed prescriptions of stellar rotation. Stellar metallicity becomes important in such scenarios, as it is a strong factor regulating the evolution of stellar rotation. However, adding extra degrees of freedom to SSP models makes their values harder to constrain, resulting in additional uncertainties [@Leitherer2014]. At [$\mathrm{[340]-[550]}$]{}= 0.61, introducing the effect of binaries via BPASS models resulted in $\Delta\mathrm{log_{10}[EW (log_{10}(\AA))]\sim0.01}$. Comparing results between S99 (single stellar population tracks with and without rotation) and BPASS (single and binary stellar tracks with rotation), we found stellar rotation to make a larger contribution to the $\Delta$EW than binaries. Direct comparisons require further work to investigate differences in the treatment of stellar evolution between the S99 and BPASS SSP codes.
We found low stellar metallicities (Z$\sim$0.002) to have a strong influence in increasing the [H$\alpha$]{} EWs for a given [$\mathrm{[340]-[550]}$]{} colour. At [$\mathrm{[340]-[550]}$]{}= 0.61, reducing the metallicity of BPASS binary models from Z=0.02 to Z=0.002 resulted in $\Delta\mathrm{log_{10}[EW (log_{10}(\AA))]\sim0.36}$. This was largely driven by the increase in the number of ionizing photons in the stellar populations due to lower opacities, lower mass loss via stellar winds, and sustained stellar rotation. Interactions between stars also contribute to an increase in ionising flux. When considering the ionizing energy generated by a stellar population, the effects of stellar rotation are degenerate with the abundance of high mass stars (see Figure 16 of @Szecsi2015). Therefore, we cannot completely rule out extremely low metallicity stars as an explanation for the distribution of our galaxies in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour parameter space. In Section \[sec:model\_Z\], we provided a thorough analysis of the gas phase metallicities derived for the ZFIRE sample by @Kacprzak2015 and @Kacprzak2016 and inferred the metal abundances of the stellar systems, which are a primary regulator of ionising photons. However, uncertainties in deriving gas phase elemental abundances via nebular emission line ratios (driven by our limited understanding of the ionization parameter at low metallicities), uncertainties in computing the relative abundances of $\alpha$ elements in stellar systems, and our limited understanding of how gas phase metallicities link to stellar metallicities in $z\sim2$ stellar populations constrain our ability to distinguish between the effects of metallicity and the IMF.
Case for the IMF {#sec:IMF_discussion}
----------------
So far we have investigated various scenarios (summarised in Table \[tab:summary\_table\]) that could explain the distribution of the [ZFIRE-SP sample]{} galaxies in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour plane without invoking changes in the IMF. However, none of these scenarios by itself can adequately describe the distribution of our galaxies.
The galaxies in our sample have stellar masses between $\log_{10}(M_*/M_\odot)=9-10$, and we expect these galaxies to grow in stellar mass over cosmic time to become galaxies with stellar masses of $\sim\log_{10}(M_*/M_\odot)=10-11$ at $z\sim0$ [@DeLucia2007; @vanDokkum2013b; @Genel2014; @Papovich2015]. Recent studies of ETGs with physically motivated models have shown the possibility of two-phase star-formation [eg., @Ferreras2015]. Furthermore, recent semi-analytic models have shown that a varying IMF best reproduces the observed chemical abundances of ETGs [eg., @Lacey2016; @Fontanot2017 and references therein]. According to these models, ETGs are expected to produce a higher fraction of high mass stars (shallower IMFs) during their starburst phases at high-redshift. @Gunawardhana2011 showed that $z\sim0$ star-forming galaxies also show an IMF dependence, in which highly star-forming galaxies prefer shallower IMFs.
If we consider a varying IMF hypothesis, our results are consistent with a scenario in which star-forming galaxies form stars with a higher fraction of high mass stars than their local ETG counterparts. With the lower metallicities and higher SFRs prominent at $z\sim2$, we expect the fragmentation of molecular clouds to favour the formation of more massive stars, due to lower cooling efficiencies and stronger heating from the radiation of young massive stars [@Larson2005]. @Krumholz2010 showed that the radiation trapping prominent in high star-forming regions of dense gas surface density can also favour the formation of massive stars. If we allow the IMF to vary in our analysis, the distribution of the [ZFIRE-SP sample]{} in [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space can be explained; however, values as shallow as $\Gamma=0.5$ could be required. This could be problematic for chemical evolution models and has implications for how galaxies form and evolve [@Romano2005]. We note that invoking extremely shallow IMFs can have a significant influence on the inferred evolution of the universe. Therefore, it is imperative to fully understand these observations and test alternate explanations.
Effect of IMF variation on fundamental quantities
-------------------------------------------------
If the IMF does vary, we need to consider the potential effect on the basic parameters of our input ZFIRE survey, which were calculated using a Chabrier IMF [@Chabrier2003]. First we consider possible effects on the calculation of our rest frame [$\mathrm{[340]-[550]}$]{} colours. These should not be significantly affected, for several reasons. First, we are only using the spectral models as an interpolator, and by design we are interpolating only across a small redshift range; at $z=2.1$ the interpolated and observed colours agree well, as discussed in Appendix B. Second, the main effect is an increased scatter in the EW axis (Figure \[fig:EW\_HG08\_comp\]); once dust corrected, the colours are quite tight. Finally, at these young ages everything is quite blue, hence quite flat spectra are being interpolated.
Next is the effect on SFR and stellar mass, which have been used in many of the previous ZFIRE papers [@Yuan2014; @Tran2015; @Kacprzak2015; @Alcorn2016; @Kacprzak2016; @Kewley2016; @Nanayakkara2016], and here in our own mass selection. To quantify the change in mass we run PEGASE for a range of $\Gamma$ values with constant star-formation history models and estimate the change in $R$-band mass-to-light ratio ($\simeq$K-band at $z\simeq 2$) for ages 1–3 Gyr. We find that for $-1.35<\Gamma<-0.5$ the change in mass-to-light ratio is $<0.7$ dex, with shallower IMFs resulting in a lower stellar mass. Thus we conclude that our stellar mass selection is only slightly affected by the possible IMF variations we have identified.
The effect is much more severe for [H$\alpha$]{} derived star-formation rates [@Tran2015; @Tran2017], as these directly count the number of the most massive stars, a sensitivity we have exploited in this paper to measure the IMF. For $-1.35<\Gamma<-0.5$ the change is $\sim1.3$ dex. UV and far-IR derived SFRs are more complicated. The rest frame UV is more sensitive to intermediate mass stars; at 1500Å the change in flux is $\sim0.4$ dex for $-1.35<\Gamma<-0.5$. The far-IR arises from younger stars in deeper dust-enshrouded regions, at least in local galaxies [@Kennicutt1998]. It is common at high-redshift to use an indicator that combines UV and far-IR [eg., @Tomczak2014]. These are often calibrated using stellar population models with idealized SFHs and traditional IMFs, and for a fixed dust mass the balance between UV and IR luminosities will depend on dust geometry, IMF, and SFH [@Kennicutt1998; @Calzetti2013]. Therefore, IMF change could lead to difficulties in predicting the true underlying SFR of stellar populations.
[**Dust**]{} & \[sec:dust\] & PEGASE & Exp declining ($\tau=1000$Myr) & 0.020 & -0.13 & 0.10 & 20% & 46% & unlikely\
[**Observational bias**]{} & \[sec:observational\_bias\] & PEGASE & Exp declining ($\tau=1000$Myr) & 0.020 & $--$ & $--$ & $--$ & $--$ & excluded\
[**Starbursts**]{} & \[sec:star\_bursts\] & PEGASE & Constant & 0.020 & $--$ & $--$ & $--$ & $--$ & excluded\
[**Stellar rotation**]{} & \[sec:stellar\_rotation\] & S99 & Constant & 0.020 & -0.39 & -0.15 & 2% & 13% & probable\
[**Binaries**]{} & \[sec:binaries\] & BPASS & Constant & 0.020 & -0.18 & 0.05 & 9% & 39% & future work\
Single stellar systems & \[sec:model\_Z\] & BPASS & Constant & 0.020 & -0.17 & 0.06 & 13% & 37% & unlikely\
(with rotation) & & & Constant & 0.010 & -0.29 & -0.06 & 4% & 22% & probable\
& & & Constant & 0.002 & -0.47 & -0.23 & 0% & 6% & probable\
Binary stellar systems & \[sec:model\_Z\] & BPASS & Constant & 0.020 & -0.18 & 0.05 & 9% & 39% & unlikely\
(with rotation) & & & Constant & 0.010 & -0.32 & -0.08 & 4% & 22% & probable\
& & & Constant & 0.002 & -0.51 & -0.28 & 2% & 9% & probable\
[80[M$_\odot$]{}]{} & \[sec:mass\_cutoff\] & PEGASE & Constant & 0.020 & -0.01 & 0.22 & 28% & 54% & excluded\
[120[M$_\odot$]{}]{} & \[sec:mass\_cutoff\] & PEGASE & Constant & 0.020 & -0.14 & 0.09 & 17% & 39% & excluded\
Summary & Future Work {#sec:summary}
=====================
We have used data from the ZFIRE survey along with multi-wavelength photometric data from ZFOURGE to study the properties of a sample of star-forming galaxies in cluster and field environments at $z\sim2$. Using the [H$\alpha$]{} EW and rest frame optical colours of the galaxies, we performed a thorough analysis to understand what physical properties could drive the distribution of galaxies in this parameter space. We have improved on earlier analyses by deriving synthetic rest frame filters that remove emission line contamination. We analysed effects from dust, starbursts, metallicity, stellar rotation, and binary stars in order to investigate whether the distribution of the [ZFIRE-SP sample]{} galaxies could be explained within a universal IMF framework.\
We found the following:\
- [ZFIRE-SP sample]{} galaxies show a large range of [H$\alpha$]{} EW values, with $\sim1/3$ of the sample showing extremely high values compared to the expectation from a $\Gamma=-1.35$ Salpeter like IMF. Compared to the HG08 SDSS sample, galaxies at $z\sim2$ show bluer colours with a larger scatter in [H$\alpha$]{} EW values.
- The difference in extinction between nebular emission line and stellar continuum regions ($f$) can have a strong influence on the distribution of galaxies in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space. Our Balmer decrement study of a sub-sample of galaxies showed a large scatter in $f$ values. However, we showed that treating $f$ as a free parameter cannot explain the distribution of galaxies in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour space.
- Starbursts can increase the [H$\alpha$]{} EW to extreme values, providing an alternative to IMF explanations for the subset of our galaxies with high [H$\alpha$]{} EW values. By implementing a stacking technique to remove the stochastic SFHs of individual galaxies, we concluded that on average our [ZFIRE-SP sample]{} still shows higher [H$\alpha$]{} EW values for a given [$\mathrm{[340]-[550]}$]{} colour than a $\Gamma=-1.35$ Salpeter like IMF predicts. We further used Monte Carlo simulations of starburst time-scales to conclude that it is extremely unlikely that starbursts could explain the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour distribution of a large fraction of our galaxies.
- Stellar rotation, binaries, and the high mass cutoff of SSP models could influence the distribution of galaxies in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour parameter space. However, the individual effects of these were not sufficient to explain the distribution of the observed galaxies.
- Considering multiple effects together can explain the galaxies in our parameter space. We showed that the fraction of galaxies above the $\Gamma=-1.35$ tracks reduces to $\sim5\%$ when considering stellar tracks with high initial rotation ([$v_{ini}$]{}=0.4[$v_{crit}$]{}) and equal dust extinction between nebular and stellar regions.
- Including single or binary stars with stellar rotation in extreme low metallicity scenarios can significantly increase the [H$\alpha$]{} EWs and is another possible explanation for the distribution of our galaxies in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour parameter space. However, gas phase metallicity analysis of the ZFIRE sample by @Kacprzak2015 and @Kacprzak2016 rules out such low metallicities for our sample. We note that the calibration of emission line ratios and differences between stellar and ionized gas metallicities at $z\sim2$ are uncertainties that may impact our inference about the stellar metallicity of our sample.
- A non-universal high-mass IMF, varying between galaxies, could explain the distribution of galaxies in this parameter space. The [H$\alpha$]{} excess shows a broad trend, with larger offsets for the less massive $z\sim 2$ galaxies. We also confirm the systematic trend of IMF slope with Chabrier-derived SFR shown by @Gunawardhana2011, but we refrain from interpreting it.
- Within the scope of our study, for $-1.35<\Gamma<-0.5$ the variation in the high-mass IMF slope can lead to changes in mass-to-light ratios of up to $\sim0.7$ dex. Furthermore, ignoring calibration offsets, we compute that [H$\alpha$]{} SFRs can deviate by up to $\sim1.3$ dex.
IMF change is an important topic, as the IMF determines basic parameters such as stellar mass and star-formation rate, which are used to draw broad conclusions about galaxy evolution. What we observe is a population of galaxies with high [H$\alpha$]{} equivalent widths, i.e. an excess of ionising photons for a given colour, and we have ruled out intermittent starbursts and alternative stellar population models as an explanation. Such high-EW objects appear to become more common at high-redshift; for example, similar observations have been reported at $z\sim4$ by multiple studies [eg., @Malhotra2002; @Finkelstein2011b; @McLinden2011; @Hashimoto2013; @Stark2013] and have even been invoked at $z>5$ as an explanation for cosmological re-ionisation [@Labbe2013; @Labbe2015; @Schenker2015_thesis; @Stark2017]. It seems reasonable to hypothesize that the abundance of high-EW objects increases towards high-redshift and that we are seeing this at $z\sim 2$.
Is IMF change responsible? This currently seems to be the only explanation that cannot be ruled out, but we do not yet understand what would drive the IMF to vary between individual galaxies. Further study is required to fully understand the stellar population parameters of $z\sim2$ galaxies and to determine whether the IMF is the main driver of the distribution of galaxies in the [H$\alpha$]{} EW vs rest frame optical colour parameter space.
Future work should consider a more thorough statistical analysis using all the broad-band colour information and multiple spectral diagnostics, including simultaneous modelling of the possible effects of dust, starbursts, metallicity, stellar rotation, binary star evolution, and the high mass cutoff, together with systematic variations of the IMF. A new generation of stellar models is allowing many of these parameters to be varied and tested. The launch of the James Webb Space Telescope in 2018 will provide the opportunity to probe rest frame optical and near-infrared stellar populations via high signal-to-noise absorption lines and will revolutionise our understanding of the processes of star-formation in the $z\sim 2$ universe.
The data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain and without the generous hospitality of the indigenous Hawaiian community this study would have not been possible. T.N., K.G., and G.G.K. acknowledge Swinburne-Caltech collaborative Keck time, without which this study would have not been possible. We thank the anonymous referee for the constructive comments on our analysis. We thank Madusha Gunawardhana, Elisabete da Cunha, Richard McDermid, Luca Cortese, John Eldridge, and Selma de Mink for insightful discussions. We thank Colin Jacobs for writing a Python wrapper around PEGASE.2, which was instrumental to perform the SSP simulations. We thank Erik Hoversten for providing us with the SDSS data used in our analysis. We thank Naveen Reddy for providing us with the MOSDEF data used in his @Reddy2015 analysis. We thank the Lorentz Centre and the scientific organizers of the "The Universal Problem of the Non-Universal IMF" workshop held at the Lorentz Centre in December 2016, which promoted useful discussions among the wider community on a timely concept. K.G. acknowledges the support of the Australian Research Council through Discovery Proposal awards DP1094370, DP130101460, and DP130101667. G.G.K. acknowledges the support of the Australian Research Council through the award of a Future Fellowship (FT140100933).\
Facilities:
Robustness of continuum fit and [H$\alpha$]{} flux {#sec:cont_Halpha_test}
==================================================
The study presented in this paper relies on accurate computation of the [H$\alpha$]{} flux and the underlying continuum level. In this section, we compare our computed continuum level and [H$\alpha$]{} flux values using two independent methods to investigate any systematic biases in our [H$\alpha$]{} EW values.
We examine the robustness of our continuum fit by using ZFOURGE imaging data to estimate a continuum level from photometry. Using a slit-box of size 0.7$''\times 2.8''$ overlaid on the $0.7''$ point spread function convolved FourStar Ks image, we calculate the photometric flux expected from each galaxy within the finite slit aperture. Justification for this slit size comes from the spectrophotometric calibration of the ZFIRE data, which is explained in detail in @Nanayakkara2016. Since we remove slits that contain multiple galaxies within the selected aperture, only 38 continuum detected galaxies and 39 galaxies with continuum limits are used in this comparison.
We then convert the magnitude to $f_\lambda$ as follows: $$f_\lambda = 10^{-0.4(mag +48.60)} \times \frac{3\times10^{18}}{\lambda_{c}^2}\ \mathrm{erg/s/cm^2/\AA}$$ where $\lambda_c$ is the central wavelength of the MOSFIRE K band, which we set to 21757.5Å, and the numerator is the speed of light in Å/s. Next we compute the [H$\alpha$]{} flux contribution to $f_\lambda$ by using the photometric bandwidth of the FourStar Ks band ($\Delta \lambda_{FS}$=3300Å). $$F_{H\alpha_{cont}} = \frac{F_{H\alpha}}{\Delta \lambda_{FS}}$$ We then remove the [H$\alpha$]{} flux contribution from the photometric flux to compute the inferred continuum level from photometry as shown below: $$F_{cont_{photo}} = f_\lambda-F_{H\alpha_{cont}}$$ Since [H$\alpha$]{} is the dominant emission line for star-forming non-AGN galaxies, we ignore any contributions from other nebular emission lines to the photometric continuum level.
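A minimal sketch of this conversion, assuming the standard AB zero-point relation with $c \approx 3\times10^{18}$ Å s$^{-1}$; the input magnitude and line flux below are illustrative placeholders, not measurements from the paper:

```python
# AB zero-point conversion: f_nu [erg/s/cm^2/Hz] -> f_lambda [erg/s/cm^2/A],
# using c ~ 3e18 A/s (an assumption of this sketch) and the quoted constants.
C_ANG = 3.0e18            # speed of light in Angstrom/s
LAMBDA_C = 21757.5        # central wavelength of the MOSFIRE K band [A]
DELTA_LAMBDA_FS = 3300.0  # FourStar Ks photometric bandwidth [A]

def continuum_from_photometry(ks_mag, f_halpha):
    """Continuum level inferred from a Ks AB magnitude after removing the
    H-alpha line contribution (F_cont_photo = f_lambda - F_Halpha_cont)."""
    f_nu = 10.0 ** (-0.4 * (ks_mag + 48.60))    # AB mag -> f_nu
    f_lam = f_nu * C_ANG / LAMBDA_C ** 2        # f_nu -> f_lambda
    f_halpha_cont = f_halpha / DELTA_LAMBDA_FS  # line flux spread over the band
    return f_lam - f_halpha_cont

# Illustrative inputs only (not values from the paper):
cont = continuum_from_photometry(22.0, 5e-17)
```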
We compare this photometrically derived continuum level with the spectroscopic continuum level in Figure \[fig:continuum\_Halpha\_checks\] (left panel). The median deviation of the detected continuum values is [$\sim$]{}0.026 in logarithmic flux with a 1[$\sigma_{\mathrm{NMAD}}$]{} scatter of 0.12, which leads us to conclude that the photometrically derived continuum values agree well with the spectroscopic continuum detections, thus confirming the robustness of our continuum calculations. We further note that the large scatter of galaxies below the continuum detection level is driven by the increased fraction of sky noise, which is expected and further confirms that we have robustly established the continuum detection limit. There is no strong dependence of the [H$\alpha$]{} EW on the continuum detection levels, which suggests that the [H$\alpha$]{} EW values are not purely driven by weak continuum levels. Further analysis of such detection biases is presented in Section \[sec:observational\_bias\].
In order to test the robustness of the measured [H$\alpha$]{} flux, we compare the [H$\alpha$]{} fluxes between this study and the ZFIRE catalogue. Nebular line fluxes in the ZFIRE catalogue are measured by integrating a Gaussian fit to the emission lines [@Nanayakkara2016]. We follow a similar technique to calculate [H$\alpha$]{} fluxes for emission lines unless they show strong velocity structures.
It is vital to ascertain whether Gaussian fits to emission lines would give drastically different [H$\alpha$]{} flux values compared to our visually integrated fluxes. In Figure \[fig:continuum\_Halpha\_checks\] (right panel), we show this comparison for [ZFIRE-SP sample]{} galaxies which do not have strong sky line residuals. For the continuum detections in the above subset, the median deviation between the manual limits and Gaussian fits is 0.19$\mathrm{\times 10^{-17} ergs/s/cm^2/A}$ with [$\sigma_{\mathrm{NMAD}}$]{}= 0.20$\mathrm{\times 10^{-17} ergs/s/cm^2/A}$. Therefore, the [H$\alpha$]{} flux values agree with each other within error limits with minimal scatter. Single Gaussian fits would fail to describe the [H$\alpha$]{} emission profiles of galaxies with strong rotation or galaxies that have undergone mergers. These features require complicated multi-Gaussian fits to accurately recover the underlying [H$\alpha$]{} flux. All 3[$\sigma_{\mathrm{NMAD}}$]{} outliers of continuum detections contain profiles that cannot be described using single Gaussian fits. Therefore, we expect the direct integration to be the most accurate method to calculate the [H$\alpha$]{} flux for galaxies with velocity structures because it is independent of the shape of the [H$\alpha$]{} emission. We note that all galaxies with sky line contamination show profiles that are well described by single Gaussian fits.
Based on the above tests, we are confident that neither the [H$\alpha$]{} flux nor the continuum level calculations give rise to systematic errors in our analysis. Therefore, we conclude that the [H$\alpha$]{} EW values derived for our continuum detected [ZFIRE-SP sample]{} are robust.
AGN contamination to [H$\alpha$]{} flux {#sec:AGN}
---------------------------------------
As described in Section \[sec:sample\_selection\], we flag AGN of the ZFIRE sample following @Coil2015 selection criteria. However, it is possible that weak AGN that are not flagged by our selection may still contaminate the [ZFIRE-SP sample]{} and contribute to higher [H$\alpha$]{} emission. In order to investigate effects from such AGN, we use @Coil2015 selection and the measured [\[\]]{} fluxes to compute upper limits to [H$\alpha$]{} fluxes required for the galaxies to be flagged as AGN as follows: $$\label{eq:AGN_Ha_limit}
f(H\alpha)_{inf} < \frac{f([NII])}{0.316}$$ where f([\[\]]{}) is the measured [\[\]]{} flux for our galaxies. We find that our measured [H$\alpha$]{} fluxes are $\sim$2$\times$ higher than the inferred [H$\alpha$]{} fluxes ($f(H\alpha)_{inf}$) computed using the above equation. Using the ratio of the measured and inferred [H$\alpha$]{} fluxes, we find the upper limit to the fraction of [H$\alpha$]{} photons produced by the strongest possible AGN that would not be flagged by the @Coil2015 selection to be $\sim0.4$. Therefore, if our sample is contaminated by weak sub-dominant AGN, we expect the AGN to contribute at most $\sim50\%$ of the observed [H$\alpha$]{} flux.
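Equation \[eq:AGN\_Ha\_limit\] can be recast as an upper limit on the AGN contribution; a minimal sketch with hypothetical flux values (not measurements from the paper):

```python
def max_agn_halpha_fraction(f_nii, f_halpha_measured):
    """Upper limit on the fraction of the measured H-alpha flux that an AGN
    evading the log10([NII]/Halpha) > -0.5 cut could contribute."""
    f_halpha_inferred = f_nii / 0.316  # strongest AGN not flagged by the cut
    return f_halpha_inferred / f_halpha_measured

# Illustrative: measured H-alpha ~2x the inferred limit gives a fraction ~0.5
frac = max_agn_halpha_fraction(f_nii=0.316, f_halpha_measured=2.0)
```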
Derivation of box-car filters for IMF and dust analysis {#sec:box car filters}
=======================================================
The choice of 340 and 550 filters {#sec:filter choice 340}
---------------------------------
Because nebular emission lines strongly affect the [$\mathrm{(g-r)_{0.1}}$]{} colours, we shift our analysis to synthetic box-car optical filters that avoid regions of strong emission lines. Figure \[fig:filter\_coverage\] shows the wavelength coverage of our purpose-built \[340\] and \[550\] box-car filters along with the wavelength coverage of the FourStar filters in the rest frame of a galaxy at $z=2.1$. It is evident from the figure that the $\mathrm{J1_{z=2.1}}$, $\mathrm{J3_{z=2.1}}$, and $\mathrm{Hl_{z=2.1}}$ filters avoid wavelengths with strong emission lines. We choose the median wavelengths of the $\mathrm{J1_{z=2.1}}$ (3400Å) and $\mathrm{Hl_{z=2.1}}$ (5500Å) filters to develop box-car filters with a wavelength coverage of 4500Å. These box-car filters are used to compute optical colours for the ZFIRE-IMF sample.
The choice of 150 and 260 filters {#sec:filter choice 150}
---------------------------------
To be consistent with our IMF analysis, which employs the [$\mathrm{[340]-[550]}$]{} colours, we compute UV filters employing a technique similar to that described in Appendix \[sec:filter choice 340\]. The @Bessell1990 B and I filters, which sample the optical wavelength regime, are chosen for this purpose. For a typical galaxy at $z=2.1$, these filters sample the UV wavelength regime. Therefore, by dividing the wavelength coverage of the B and I filters by 3.1, we define a filter set that samples the UV region in the rest frame of a galaxy at $z=2.1$.
We then define two box-car filters with a wavelength coverage of $\sim6700$Å, similar to that of the blue-shifted B and I filters. The bluer filter is centred at 1500Å while the redder filter is centred at 2600Å, both with a width of 673Å. We name these filters \[150\] and \[260\] respectively, and use them in our analysis to investigate the IMF dependence of the dust parameters derived by FAST.
![ [**Top:**]{} The wavelength coverage of the \[340\] and \[550\] filters compared with the wavelength coverage of rest frame FourStar NIR filters for a galaxy at $z=2.1$. We also show the wavelength coverage of the $g_{z=0.1}$ and $r_{z=0.1}$ filters used by the HG08 analysis. The arrows denote locations of strong emission lines. [**Bottom:**]{} The wavelength coverage of the \[150\] and \[340\] filters along with @Bessell1990 filters de-redshifted from $z=2.1$. []{data-label="fig:filter_coverage"}](figures/box_car_filter_comp_JH.pdf "fig:") ![ [**Top:**]{} The wavelength coverage of the \[340\] and \[550\] filters compared with the wavelength coverage of rest frame FourStar NIR filters for a galaxy at $z=2.1$. We also show the wavelength coverage of the $g_{z=0.1}$ and $r_{z=0.1}$ filters used by the HG08 analysis. The arrows denote locations of strong emission lines. [**Bottom:**]{} The wavelength coverage of the \[150\] and \[340\] filters along with @Bessell1990 filters de-redshifted from $z=2.1$. []{data-label="fig:filter_coverage"}](figures/box_car_filter_comp_BI.pdf "fig:")
Comparison between observed colours and EAZY derived rest frame colours {#sec:EAZY colour comparision}
-----------------------------------------------------------------------
We use the observed FourStar J1 and Hl fluxes and the best-fitting SED fits of our [ZFIRE-SP sample]{} to test the consistency of the observed colours with the EAZY derived rest frame colours. In Figure \[fig:rest\_frame\_colour\_comparision\] (left panel), we show the differences between the observed (J1$-$Hl) colours and the rest frame [$\mathrm{[340]-[550]}$]{} colours (as described in Appendix \[sec:filter choice 340\]) computed from the best-fitting EAZY SED templates. We compare the (J1$-$Hl) with the [$\mathrm{[340]-[550]}$]{} colours and expect them to approximately agree by construction at $z=2.1$.
Using a PEGASE model spectrum, we compare the redshift evolution of the difference between the (J1$-$Hl) and [$\mathrm{[340]-[550]}$]{} colours with the expectation from SED templates. The lines pass through zero at $z=2.1$, as expected. The model spectrum is extracted at $t = 3100$ Myr from a galaxy with an exponentially declining SFH with p$_1$ = 1000 Myr, a $\gamma=-1.35$ IMF, and no metallicity evolution. We use the model spectrum to compute the [$\mathrm{[340]-[550]}$]{} colour. We then make a grid of redshifts between $z=1.8$ and $z=2.7$ with $\Delta z = 0.01$ and redshift the model spectrum to each grid redshift by multiplying the wavelength by $(1+z)$. For each redshift we compute the (J1$-$Hl) colour; since we only investigate the colour difference, there is no need to consider cosmological dimming, K-corrections, etc.
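The redshifting-and-colour loop described above can be sketched with idealized top-hat filters; the power-law spectrum and filter edges below are illustrative placeholders (roughly J1 and Hl at $z=2.1$), not the PEGASE spectrum or the survey filter curves:

```python
import numpy as np

def tophat_mag(wave, flux, lo, hi):
    """Mean flux through an idealized box-car filter, expressed as a
    magnitude (the zero-point cancels in colour differences)."""
    mask = (wave >= lo) & (wave <= hi)
    return -2.5 * np.log10(flux[mask].mean())

def colour_vs_redshift(rest_wave, rest_flux, z_grid, blue, red):
    """Observed-frame box-car colour at each redshift. Only the wavelengths
    are shifted by (1+z); colour *differences* need no cosmological dimming
    or K-correction, as noted in the text."""
    colours = []
    for z in z_grid:
        obs_wave = rest_wave * (1.0 + z)
        colours.append(tophat_mag(obs_wave, rest_flux, *blue)
                       - tophat_mag(obs_wave, rest_flux, *red))
    return np.array(colours)

# Toy power-law spectrum and hypothetical filter edges:
w = np.linspace(1000.0, 9000.0, 8000)
f = (w / 5000.0) ** -1.5
z_grid = np.arange(1.8, 2.71, 0.01)
cols = colour_vs_redshift(w, f, z_grid,
                          blue=(10540.0, 11440.0), red=(16500.0, 17600.0))
```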
Since the rest frame filters assume that the galaxies are at $z=2.1$, we expect the observed colours and rest frame colours to agree at this redshift. Figure \[fig:rest\_frame\_colour\_comparision\] (left panel) shows that for the model galaxy this expectation holds with a maximum deviation of [$\sim$]{}$\pm0.1$ mag in colour difference between $z=1.8$ and $z=2.7$. Galaxies in the [ZFIRE-SP sample]{} show a much larger deviation of [$\sim$]{}$\pm0.5$ mag, which we attribute to errors in photometry, as evident from the large error bars. Furthermore, zero-point corrections in the SED fitting techniques can give rise to additional systematic variations.
A similar analysis is performed on the (B$-$I) colours using the observed fluxes from the ZFOURGE survey and the [$\mathrm{[150]-[260]}$]{} colours (as described in Appendix \[sec:filter choice 150\]) on the best-fitting EAZY SED templates. The same PEGASE model galaxy used for the (J1$-$Hl) comparison is used to derive the (B$-$I) colours by red-shifting the spectra to redshifts between $z=1.8$ and $z=2.7$. Figure \[fig:rest\_frame\_colour\_comparision\] (right panel) shows the comparison between observed and EAZY derived colours along with the $ideal$ expectation computed from PEGASE spectra. Due to the intrinsic shape of the SED, the redshift evolution of $\Delta$(B$-$I) is opposite to that of $\Delta$(J1$-$Hl).
![ The colour difference between observed colours and rest frame colours derived from best-fitting SED templates from EAZY. Rest frame colours are computed in such a way that the wavelength coverage in the observed frame at $z=2.1$ is approximately similar to the wavelength coverage of the rest frame filters at $z=0$. [**Left:**]{} The colour difference between the observed (J1$-$Hl) colours and the [$\mathrm{[340]-[550]}$]{} colours of the galaxies in our IMF sample. Galaxies with continuum detections are shown as magenta stars while galaxies with only [H$\alpha$]{} emission are shown as blue triangles. The error bars are from the errors in the J1 and Hl filters from the ZFOURGE survey photometry. The green line shows the evolution of the colour difference of a PEGASE model galaxy. The vertical dashed line denotes $z=2.1$, which is the redshift used to de-redshift the NIR filters in order to compute rest frame colours. The horizontal dashed line is the $\Delta(colour)=0$ line. Galaxies lying on this line show no difference between the observed colours and the rest frame colours derived via EAZY. [**Right:**]{} Similar to left but for (B$-$I) and [$\mathrm{[150]-[260]}$]{} colours. []{data-label="fig:rest_frame_colour_comparision"}](figures/EAZY_derived_J1-Hl_col_diff.pdf "fig:") ![ The colour difference between observed colours and rest frame colours derived from best-fitting SED templates from EAZY. Rest frame colours are computed in such a way that the wavelength coverage in the observed frame at $z=2.1$ is approximately similar to the wavelength coverage of the rest frame filters at $z=0$. [**Left:**]{} The colour difference between the observed (J1$-$Hl) colours and the [$\mathrm{[340]-[550]}$]{} colours of the galaxies in our IMF sample. Galaxies with continuum detections are shown as magenta stars while galaxies with only [H$\alpha$]{} emission are shown as blue triangles. The error bars are from the errors in the J1 and Hl filters from the ZFOURGE survey photometry. The green line shows the evolution of the colour difference of a PEGASE model galaxy. The vertical dashed line denotes $z=2.1$, which is the redshift used to de-redshift the NIR filters in order to compute rest frame colours. The horizontal dashed line is the $\Delta(colour)=0$ line. Galaxies lying on this line show no difference between the observed colours and the rest frame colours derived via EAZY. [**Right:**]{} Similar to left but for (B$-$I) and [$\mathrm{[150]-[260]}$]{} colours. []{data-label="fig:rest_frame_colour_comparision"}](figures/EAZY_derived_B-I_col_diff.pdf "fig:")
Do SSP models give identical results? {#sec:SSP comparision}
=======================================
In order to investigate whether there is a strong dependence of the [H$\alpha$]{} EW and/or [$\mathrm{(g-r)_{0.1}}$]{} colour evolution of model galaxies on the SSP models used, we compare the galaxy properties from PEGASE with that of Starburst99 [@Leitherer1999]. S99 models support the use of multiple stellar libraries. For this analysis we consider the Padova AGB stellar library which is an updated version of the @Guiderdoni1988 stellar tracks that includes cold star parameters and thermally pulsating asymptotic giant branch (AGB) and post-AGB stars.
We compute PEGASE models using a constant SFR of $1\times10^{-4}$[M$_\odot$]{}/Myr with various $\Gamma$ values. PEGASE models are scale free and are generated using a baryonic mass reservoir normalized to 1[M$_\odot$]{}. Higher SFRs in PEGASE would result in the SFR exceeding the maximum possible for the amount of gas available in the galaxy reservoir before reaching $z\sim2$. The other parameters of the PEGASE models are kept as mentioned previously.
S99 models employ a different approach to computing synthetic galaxy spectra: the SFR is not normalized, and should therefore be kept at a level that populates the HR diagram with a sufficient number of stars during the time steps at which the models are executed. For S99, we use a SFR of 1[M$_\odot$]{}/yr with the Padova AGB stellar libraries, a $Z$ of 0.02, and IMFs similar to the PEGASE models. We do not change any other parameters in the S99 models from their default values. We state the S99 parameters below.\
$\bullet$ Supernova cutoff mass is kept at 8[M$_\odot$]{}.\
$\bullet$ Black hole cutoff mass is kept at 120[M$_\odot$]{}.\
$\bullet$ Initial time is set to 0.01 Myr and 1000 time steps are computed with logarithmic spacing up to 3100 Myr.\
$\bullet$ We consider the full isochrone for mass interpolation.\
$\bullet$ We leave the indices of the evolutionary tracks at 0.\
$\bullet$ PAULDRACH/HILLIER option is used for the atmosphere for the low resolution spectra.\
$\bullet$ Metallicity of the high resolution models are kept at 0.02.\
$\bullet$ Solar library is used for the UV line spectrum.\
$\bullet$ In order to generate spectral features in the NIR, we use microturbulent velocities of 3 km s$^{-1}$ and solar abundance ratios for the alpha-element to Fe ratio.
To account for the difference in the SFRs between the SSP models, we scale the PEGASE [H$\alpha$]{} flux and the corresponding continuum level to the 1[M$_\odot$]{}/yr value by multiplying by $10^{10}$. The scaling process assumes that the [H$\alpha$]{} luminosity $\propto$ SFR as shown by @Kennicutt1983. By interpolating the S99 models to the PEGASE time grid, we calculate the difference in the parameters between the two models for a given time.
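A sketch of this scaling-and-interpolation step: the factor of $10^{10}$ converts the PEGASE SFR of $10^{-10}$ [M$_\odot$]{}/yr to the S99 value of 1 [M$_\odot$]{}/yr (assuming the [H$\alpha$]{} luminosity scales linearly with SFR), and linear interpolation via `np.interp` puts both tracks on a common time grid:

```python
import numpy as np

# PEGASE runs use SFR = 1e-4 Msun/Myr = 1e-10 Msun/yr, so scaling the
# H-alpha flux and continuum to the S99 value of 1 Msun/yr multiplies
# both by 1e10 (H-alpha luminosity assumed proportional to SFR).
SCALE = 1e10

def model_difference_dex(t_pegase, f_pegase, t_s99, f_s99):
    """Interpolate the S99 track onto the PEGASE time grid and return the
    PEGASE-minus-S99 offset in dex at each PEGASE time step."""
    f_pegase_scaled = np.asarray(f_pegase) * SCALE
    f_s99_on_grid = np.interp(t_pegase, t_s99, f_s99)
    return np.log10(f_pegase_scaled) - np.log10(f_s99_on_grid)
```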
All models are computed using a constant SFR. Since the number of O and B stars, which contribute strongly to the [H$\alpha$]{} flux, is replenished at a constant rate, the [H$\alpha$]{} flux reaches a constant value within a very short time-scale and maintains this value. The lifetime of these O and B stars is of the order of $\sim10$ Myr, and therefore there is no effect on the [H$\alpha$]{} flux from the accumulation of these stars. The [H$\alpha$]{} fluxes of the two SSP models show good agreement, with shallower IMFs showing larger [H$\alpha$]{} flux values. This is driven by the increase in the fraction of massive O and B stars, which increases the number of ionizing photons and boosts the [H$\alpha$]{} flux. The [H$\alpha$]{} flux generated by the two SSP models agrees within [$\sim$]{}0.03 dex. The discrepancy is slightly higher for steeper IMFs, perhaps driven by minor differences in the mass distribution of the stars in the SSP models.
The continuum level at 6563Å also shows good agreement between the SSP models, with differences of $\lesssim 0.1$ dex. Unlike the [H$\alpha$]{} flux, the continuum level does not reach a constant value within 3 Gyr. This is driven by the longer lifetimes of the A and G stars, which contribute most of the galaxy continuum. The rate at which the continuum level increases depends on the IMF: galaxies with steeper IMFs take longer to reach a constant continuum level. However, a higher fraction of smaller stars eventually leads to a higher continuum level than in a scenario with a shallower IMF. Since the fraction of A and G stars is higher for a steeper IMF, the higher continuum level is expected. PEGASE and S99 models follow different time-scales for stellar evolution: for a given IMF, the PEGASE continuum level evolves faster and reaches a higher value than in the S99 models. The discrepancy between the models increases up to [$\sim$]{}1500 Myr, after which it decreases to a constant value.
The change of the [H$\alpha$]{} EW between the two SSP models (shown in the left panel of Figure \[fig:PEGASE\_S99\_comp\]) is driven by the differences in the [H$\alpha$]{} flux and the continuum level. Both models behave similarly, with the [H$\alpha$]{} EW decreasing with time. Shallower IMFs show higher EW values, driven by higher [H$\alpha$]{} fluxes and lower continuum values, and the shape of the $\Delta$EW function is driven by the differences in the continuum evolution.
Furthermore, in Figure \[fig:PEGASE\_S99\_comp\] (centre panel) we investigate the evolution of the [$\mathrm{[340]-[550]}$]{} colours derived from PEGASE and S99. Since the wavelength regime covered by the \[340\] and \[550\] filters does not include emission lines, a direct [$\mathrm{[340]-[550]}$]{} colour comparison between S99 models (with no emission lines) and PEGASE models (with emission lines) is possible. Models with different IMFs show distinctive differences between the derived [$\mathrm{[340]-[550]}$]{} colours. Driven by the excess of higher mass blue O and B stars, galaxies with shallower IMFs show bluer colours than galaxies with steeper IMFs at a given time. Steeper IMFs show a better agreement between the two SSP models. Both PEGASE and S99 use the same stellar tracks from the Padova group; we therefore attribute the differences between the SSP models to the methods used by PEGASE and S99 to produce the composite stellar populations.
In Figure \[fig:PEGASE\_S99\_comp\] (right panel), we compare the evolution of the [H$\alpha$]{} EW with the [$\mathrm{[340]-[550]}$]{} colours for the PEGASE and S99 models. Given the close agreement between the evolution of the [H$\alpha$]{} EW and the [$\mathrm{[340]-[550]}$]{} colours in the two SSP models, galaxies from both PEGASE and S99 show similar evolution in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour plane. Therefore, our conclusions in this study are not affected by the choice of SSP model (PEGASE or S99), but we note that stellar libraries play a more prominent role, which we discuss in detail in Section \[sec:other\_exotica\].
Nebular extinction properties of ZFIRE $z\sim2$ sample
======================================================
[H$\beta$]{} detection properties {#sec:Balmer decrement extended}
---------------------------------
Figure \[fig:balmer\_decrement\_properties\] (left panel) shows the UVJ diagram [@Spitler2014] of the ZFIRE [H$\beta$]{} targeted and detected sample. Rest frame UVJ analysis shows that our [H$\beta$]{} detected sample is a reasonably representative subset of our star forming galaxies.
Figure \[fig:balmer\_decrement\_properties\] (right panel) shows Balmer decrement values as a function of stellar mass. We calculate the median Balmer decrement of our sample to be 3.9. Using a least-squares polynomial fit to the data, we find that galaxies with higher mass are biased towards high Balmer decrement values. There are 9 galaxies with Balmer decrements below 2.86, the theoretical minimum value for case B recombination at $T=10,000$ K [@Osterbrock1989].
Derivation of the dust corrections to the [ZFIRE-SP sample]{} {#sec:dust_derivation}
-------------------------------------------------------------
In this section, we show how we used the @Calzetti2000 and @Cardelli1989 attenuation laws to derive extinction values for the [ZFIRE-SP sample]{}.
We first calculate the starburst reddening curve at $0.6563\mu$m using the following equation:
$$\label{eq:starburst_curve_calzetti_IR}
k'(\lambda) = 2.659(-1.857+ \frac{1.040}{\lambda}) +R'_v$$
where $\lambda$ is in $\mu m$. This equation is only valid for wavelengths $0.63\mu m<\lambda<2.2\mu m$. Following [@Calzetti2000], the total-to-selective attenuation ($R^{'}_{v}$) is set to 4.05. We use the derived value of the reddening curve to calculate the attenuation of the continuum at $0.6563\mu$m ($A_c(0.6563)$). $$\label{eq:cont attenuation}
A_{c}(0.6563) = k'(0.6563) \times \frac{A_v}{R'_v} = 0.82A_{c}(V)$$
Next we use the [@Cardelli1989] prescription to calculate the attenuation of the nebular emission lines. This law is valid for both diffuse and dense regions of the ISM and therefore we expect it to provide a reasonable approximation to the ISM of galaxies at $z\sim2$. We use the following equations to evaluate the extinction curve at 6563Å.
$$x = 1/\lambda$$
where $\lambda$ is in $\mu$m and $x$ lies in the range $\mathrm{1.1\mu m^{-1} \leq x \leq 3.3 \mu m^{-1}}$. The wavelength dependent values $a(x)$ and $b(x)$ are defined as follows: $$\begin{split}
a(x) = 1 + (0.17699 \times y) - (0.50447\times(y^2)) \\
- (0.02427\times(y^3))+ (0.72085\times(y^4)) + \\
(0.01979\times(y^5))-(0.77530\times(y^6))+ \\
(0.32999\times(y^7))
\end{split}$$ $$\begin{split}
b(x) = (1.41338\times y)+(2.28305\times(y^2))+(1.07233\times(y^3))-\\
(5.38434\times(y^4))-(0.62251\times(y^5))+\\
(5.30260\times(y^6))-(2.09002\times(y^7))
\end{split}$$ where $y = x-1.82$. Using the $a(x)$ and $b(x)$ values, the attenuation of the nebular emission line at 0.6563$\mu m$ ($A_{n}(0.6563)$) can be expressed as follows: $$\label{eq:A_n}
A_{n}(0.6563) = A_{n}(V)\left[a(0.6563^{-1}) + \frac{b(0.6563^{-1})}{R''_v}\right] = 0.82A_{n}(V)$$ $R^{''}_{v}$ is set to 3.1 following [@Cardelli1989].
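The coefficient of 0.82 can be verified directly from the polynomials above; a minimal sketch:

```python
def cardelli_a_b(x):
    """Cardelli et al. optical/NIR coefficients a(x), b(x), valid for
    1.1 <= x <= 3.3 inverse microns, with y = x - 1.82."""
    y = x - 1.82
    a = (1 + 0.17699*y - 0.50447*y**2 - 0.02427*y**3 + 0.72085*y**4
         + 0.01979*y**5 - 0.77530*y**6 + 0.32999*y**7)
    b = (1.41338*y + 2.28305*y**2 + 1.07233*y**3 - 5.38434*y**4
         - 0.62251*y**5 + 5.30260*y**6 - 2.09002*y**7)
    return a, b

R_V = 3.1
a, b = cardelli_a_b(1.0 / 0.6563)  # x at the H-alpha wavelength
ratio = a + b / R_V                # A_n(0.6563) / A_n(V), ~0.82
```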
Colour excess is defined as: $$\label{eq:colour excess}
E(B-V) = A(V) /R_v$$ [@Calzetti1994] show that at $z\sim0$ newly formed hot ionizing stars reside in dustier regions of a galaxy than the old stellar populations. Ionizing stars mainly contribute to the nebular emission lines, while the old stellar populations contribute the stellar continuum of a galaxy. Therefore, they find the nebular emission lines of a galaxy to be $\sim2$ times more attenuated by dust than the stellar continuum. Here, we denote this correction factor as $f$. Using $n$ and $c$ subscripts to denote the nebular and continuum parts respectively, $$\label{eq:cal fac}
E_n(B-V) = f \times E_c(B-V)$$ Substituting Equation \[eq:colour excess\] into Equation \[eq:A\_n\]:
$$A_{n}(0.6563) = 0.82 \times\ R''_v \times E_n(B-V)$$
Using Equation \[eq:cal fac\]: $$A_{n}(0.6563) = 0.82 \times R''_v \times f \times E_c(B-V)$$ $E_c(B-V)$ is computed using the @Calzetti2000 dust law, $$A_{n}(0.6563) = 0.82 \times f \times A_c(V) \frac{R''_v}{R'_v}$$
Therefore, we express the dust corrected nebular line ($f_i$([H$\alpha$]{})) and continuum flux ($f_i$(cont)) as follows:
$$\label{eq:Halpha dust corrected}
f_i(H\alpha)= f_{obs}(H\alpha) \times 10^{0.4(0.62\times f\times A_{c}(V))}$$
$$\label{eq:cont dust corrected}
f_i(cont)= f_{obs}(cont) \times 10^{0.4(0.82\times A_{c}(V))}$$
where the subscript $obs$ refers to the observed quantity while $i$ refers to the intrinsic quantity.
Since $EW_i =f_i(H\alpha)/f_i(cont)$, the dust corrected [H$\alpha$]{} EW can finally be expressed as follows: $$\label{eg:EW dust corrected}
\log_{10}(EW_i) = \log_{10}(EW_{obs}) + 0.4A_c(V)(0.62f-0.82)$$\
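The correction can be wrapped in a small helper; $f$ is left as a free parameter ($\sim2$ per the text), and the inputs below are illustrative:

```python
def dust_corrected_log_ew(log_ew_obs, a_c_v, f=2.0):
    """Intrinsic log10(H-alpha EW) from the observed value, the continuum
    attenuation A_c(V), and the nebular-to-continuum reddening factor f
    (~2 following the text); implements log10(EW_i) = log10(EW_obs)
    + 0.4 * A_c(V) * (0.62 f - 0.82)."""
    return log_ew_obs + 0.4 * a_c_v * (0.62 * f - 0.82)
```

Note that for $f=1$ (equal nebular and continuum reddening) the correction term is negative, so the sign of the correction depends entirely on $f$.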
Next we consider the dust correction for the $z=0.1$ optical colours. Using the [@Calzetti2000] attenuation law, we calculate the starburst reddening curve at these wavelengths using the following equation: $$\label{eq:starburst_curve_calzetti_optical}
k'(\lambda) = 2.659(-2.156+ \frac{1.509}{\lambda} - \frac{0.198}{\lambda^2} +\frac{0.011}{\lambda^3}) +R'_v$$ This equation differs from Equation \[eq:starburst\_curve\_calzetti\_IR\], since it is valid for bluer wavelengths, $0.12\mu m<\lambda<0.63\mu m$. Similar to Equation \[eq:cont attenuation\], we work out the attenuation at the median wavelengths of the \[340\] and \[550\] filters (by definition the filter medians are at $0.34\mu$m and $0.55\mu$m respectively) as follows:
$$\label{eq:BC340 dust corrected}
f([340]) = f([340]_{obs}) \times 10^{0.4 \times 1.56 A_c(V)}$$
$$\label{eq:BC550 dust corrected}
f([550]) = f([550]_{obs}) \times 10^{0.4 \times 1.00 A_c(V)}$$
Dust corrected fluxes are used to recalculate the [$\mathrm{[340]-[550]}$]{} colours.
The median wavelengths of the g$_{0.1}$ and r$_{0.1}$ filters are respectively at 0.44$\mu m$ and 0.57$\mu m$. Similar to \[340\] and \[550\] filters, we use Equation \[eq:starburst\_curve\_calzetti\_optical\] to calculate the attenuation for g$_{0.1}$ and r$_{0.1}$ filters as follows:
$$\label{eq:g_dust_corrected_derivation}
f(g_i)_{0.1} = f(g_{obs})_{0.1} \times 10^{0.4 \times 1.25 A_c(V)}$$
$$\label{eq:r_dust_corrected_derivation}
f(r_i)_{0.1} = f(r_{obs})_{0.1} \times 10^{0.4 \times 0.96 A_c(V)}$$
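The four attenuation coefficients used above (1.56, 1.00, 1.25, and 0.96) all follow from Equation \[eq:starburst\_curve\_calzetti\_optical\] via $A(\lambda)/A_c(V) = k'(\lambda)/R'_v$; a minimal numerical check:

```python
R_V_PRIME = 4.05  # Calzetti et al. (2000) total-to-selective attenuation

def calzetti_k(lam_um):
    """Starburst reddening curve k'(lambda) for 0.12 < lambda < 0.63 um."""
    return 2.659 * (-2.156 + 1.509 / lam_um - 0.198 / lam_um**2
                    + 0.011 / lam_um**3) + R_V_PRIME

# A(lambda)/A_c(V) at the median wavelengths of the [340], [550],
# g_0.1 and r_0.1 filters, reproducing the coefficients in the text:
coeffs = {lam: calzetti_k(lam) / R_V_PRIME for lam in (0.34, 0.55, 0.44, 0.57)}
```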
PEGASE simulations of starburst galaxies {#sec:simulation_properties}
========================================
Here we describe the PEGASE simulations discussed in Section \[sec:simulations\], which model the effects of starbursts in the [H$\alpha$]{} EW vs [$\mathrm{[340]-[550]}$]{} colour parameter space. We tune the empirical burst parameters to maximize the number of high [H$\alpha$]{} EW objects. We consider four scenarios in our simulations, as shown in Table \[tab:simulation\_param\]. For each scenario we model 100 galaxies and superimpose a single starburst on PEGASE model tracks with an IMF slope of $\Gamma=-1.35$ and a constant SFH.
PEGASE model galaxies are generated from an initial gas reservoir of 1[M$_\odot$]{}. Therefore, we normalize the total mass generated by the constant SFH and the starburst to 1[M$_\odot$]{} in order to calculate the SFR for each time step. These values are used to calculate SSPs with an IMF slope of $\Gamma=-1.35$ and lower and upper mass cutoffs of 0.5[M$_\odot$]{} and 120[M$_\odot$]{} respectively. We use a constant metallicity of 0.02 for all our simulations. The other parameters are kept at their default values as described in Section \[sec:PEGASE\_models\]. Following this recipe, we generate the simulated galaxies with finer time sampling around the time of the burst to better resolve its effects.
  Scenario   Start of SFH (Myr)   Time of burst (Myr)   $\tau_b$ (Myr)   $f_m$
  ---------- -------------------- --------------------- ---------------- -----------
  1          0                    0–3250                100–300          0.10–0.30
  2          0–2500               2585–3250             10–30            0.01–0.03
  3          0–2500               2585–3250             10–30            0.10–0.30
  4          0–2500               2585–3250             100–300          0.01–0.03

  : Parameter ranges for the four simulated starburst scenarios. []{data-label="tab:simulation_param"}
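A sketch of how such a burst SFH can be constructed and normalized to a 1 [M$_\odot$]{} reservoir; the top-hat burst profile is an assumption of this sketch, since the text does not specify the burst shape:

```python
import numpy as np

def burst_sfh(t, t_start, t_burst, tau_b, f_m, total_mass=1.0):
    """Constant SFH beginning at t_start plus a single burst of duration
    tau_b forming a mass fraction f_m, normalized so the whole history
    forms total_mass (1 Msun, as in the PEGASE setup). A top-hat burst
    profile is assumed here."""
    t = np.asarray(t, dtype=float)
    base = np.where(t >= t_start,
                    (1.0 - f_m) * total_mass / (t.max() - t_start), 0.0)
    burst = np.where((t >= t_burst) & (t < t_burst + tau_b),
                     f_m * total_mass / tau_b, 0.0)
    return base + burst

# Scenario-1-like illustration: burst at 2000 Myr, tau_b = 200 Myr, f_m = 0.2
t = np.linspace(0.0, 3250.0, 3251)
sfr = burst_sfh(t, t_start=0.0, t_burst=2000.0, tau_b=200.0, f_m=0.2)
```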
We use the simulated galaxies from Scenario 1 (Table \[tab:simulation\_param\]) to further investigate the evolution of the [H$\alpha$]{} EW and [$\mathrm{[340]-[550]}$]{} colour of galaxies during a starburst phase. 40 galaxies are chosen at random from the simulations and shown in Figure \[fig:simulations\_delta\_EW\_vs\_col\_time\] to illustrate the deviation of the [H$\alpha$]{} EW values from the $\Gamma=-1.35$ IMF track. For a smooth SFH, the [$\mathrm{[340]-[550]}$]{} colours are correlated with the age of the galaxies, since stellar populations moving off the main sequence make the galaxy redder with time. However, galaxies undergo bursts at random times, causing them to deviate from the smooth SFH track at random times. Galaxies with higher burst fractions per unit time ($f_m/\tau_b$) show larger deviations, due to the extreme SFRs required to generate a larger amount of mass within a shorter time period. The time-scale over which galaxies populate the region above the reference IMF track is small (of the order of $\lesssim50$ Myr) compared to the total observable time window of the galaxies at $z\sim2$ ($\sim3$ Gyr). The [H$\alpha$]{} EW increases significantly soon after a burst, within a short time-scale (of the order of a few Myr), and then decreases rapidly to become deficient in [H$\alpha$]{} EW compared to the reference constant SFH model. Afterwards, the [H$\alpha$]{} EW increases at a slower pace until the tracks rejoin the smooth SFH model after the burst has passed.
In Figure \[fig:simulations\_delta\_EW\_vs\_col\_time\] (right panel), we select all simulated galaxies to calculate the amount of time galaxies spend with elevated [H$\alpha$]{} EW values ($0.05<\mathrm{\Delta[\log_{10}(H\alpha\ EW)]}$) compared to models with smooth SFHs. We bin the galaxies according to the burst fraction per unit time and find that $\Delta$Time shows no strong dependence on it.
[^1]: <http://zfire.swinburne.edu.au>
[^2]: <http://zfourge.tamu.edu>
[^3]: Note that this is different from the 142 galaxies mentioned in Section \[sec:sample\_selection\] because the sample of 142 galaxies has a Q$_z=3$, includes galaxies with no ZFOURGE counterparts (see @Nanayakkara2016 for further details) and galaxies with non-optimal ZFOURGE photometry (see @Straatman2016 for further details).
[^4]: Development version: <https://github.com/gbrammer/eazy-photoz/>
---
abstract: 'The session on effective Hamiltonians and chiral dynamics is overviewed, combined with a review of the bound-state problem. The progress during this session makes it possible to remove all dependence on regularization in an effective interaction, and thus to renormalize a Hamiltonian for the first time, and to solve front-form equations as if they were instant-form equations, with all the advantages implied. HCP 4/11/20 November 2001'
address: 'Max-Planck-Institut für Kernphysik, D-69029 Heidelberg, Germany'
author:
- 'Hans-Christian Pauli'
title: 'On the effective Hamiltonian for QCD: An overview and status report'
---
Introduction
============
This community formed in 1991 at the first light-cone meeting in Heidelberg, with the ambitious aim of solving the bound-state problem in gauge field theory, particularly QCD.
How far did we get? Is it fair to say that we have not yet solved our homework problem?
However, the important contributions at this [@Leutwyler01; @Roberts01; @FredericoFP01; @Schweiger01; @Plessas01; @Krassnigg01; @Mangin01; @FrewerPF01; @Walhout01; @Karmanov01; @Ligterink01; @Sugihara01; @vanIersel01] and the last meeting [@Ash00; @Wegner00; @Hil00; @Schierholz00; @TriPau00; @Pau00] suggest a faster pace in the foreseeable future. Over 80 participants show how alive the field continues to be, even after 11 years. Its richness became apparent last week: thirteen speakers are in this session alone. I therefore attempt to combine an overview of the present session on “Effective Hamiltonians and chiral dynamics” with a review of effective interactions in general. This seems appropriate in view of the progress at this meeting, particularly on renormalization [@FredericoFP01; @FrewerPF01] and the possibility of solving instant-form rather than front-form equations [@Krassnigg01] when working on the light cone.
Why work on the light cone?
==============================
The Hamiltonian approach to a field theory was a no-go topic for over fifty years. But combined with light-cone quantization and periodic boundary conditions [@pab85a], certain advantages inherent to the light-cone Hamiltonian approach were clear right from the outset, particularly
- the simple kinematical boosts and
- the simple vacuum properties.
This continues to be so. It was equally clear that a number of extremely difficult problems lay ahead, among them zero modes, gauge invariance and gauge artifacts, the field-theoretical many-body problem and Fock-space truncation, non-perturbative renormalization, confinement, and chiral phase transitions, just to name a few. Some of them have been solved, or better understood, as reviewed in [@BroPauPin98].
What is the homework problem? Starting from the Lagrangian density $\mathcal{L}_\mathit{QCD}$, the light-cone approach to the bound-state problem[@BroPauPin98] aims at solving the eigenvalue equation $$H_{LC}\vert\Psi\rangle = M^2\vert\Psi\rangle
.$$ If one disregards possible zero modes and works in the light-cone gauge, the (light-cone) Hamiltonian $H_{LC}=P^\mu P_\mu$ is a well defined Fock-space operator and given in [@BroPauPin98]. Its eigenvalues are the invariant mass-squares $M^2$ of physical particles associated with the eigenstates $\vert\Psi\rangle$. In general, these are superpositions of all possible Fock states with their many-particle configurations. For a meson, for example, one has
$$\begin{aligned}
\vert\Psi_{\rm meson}\rangle &=
\sum\limits_{i}\ \Psi_{q\bar q}(x_i,\vec k_{\!\perp_i},\lambda_i)
\,\vert q\bar q\rangle
+ \sum\limits_{i}\ \Psi_{g g}(x_i,\vec k_{\!\perp_i},\lambda_i)
\,\vert g g\rangle
\\&+ \sum\limits_{i}\ \Psi_{q\bar q g}(x_i,\vec k_{\!\perp_i},\lambda_i)
\,\vert q\bar q g\rangle
+ \sum\limits_{i}\ \Psi_{q\bar q q\bar q}(x_i,\vec k_{\!\perp_i},\lambda_i)
\,\vert q\bar q q\bar q\rangle
+ \dots
\end{aligned}$$
If all wave functions $\Psi_n(x_i,\vec k_{\!\perp_i},\lambda_i)$ are available, one can analyze hadronic structure in terms of quarks and gluons. For example, one can calculate the space-like form factor of a hadron quite straightforwardly by a sum of overlap integrals analogous to the corresponding non-relativistic formula [@BroPauPin98].
What are possible alternatives?
===============================
This community tries hard to get feedback to and from other fields and activities. All these approaches have their own virtues and merits. The advantages are usually emphasized by the proponents, and therefore I shall play the devil's advocate, passing them briefly in review with a sometimes over-critical attitude to make the point clear.
*Phenomenological models*. Practically all our knowledge of hadron structure comes from phenomenological models. The constituent quark model in particular continues to have great success. Phenomenological approaches usually do not address the lighter mesons like the pion, but they are extremely successful for the heavier hadrons and for baryons. A particularly beautiful example was presented by Plessas [@Plessas01]. I do not care so much that his model has about twenty parameters, with only three of them fitted explicitly. Such work is a useful guideline to experiment. I dream of the day when front-form based work produces similar results. His work also shows the extreme difficulty of relating wave functions of constituents to actual cross sections.\
*Chiral perturbation theory*. Leutwyler [@Leutwyler01] demonstrates the precision to which a well founded formalism can be driven. To some extent this also holds for the similar NJL models. I cannot quote the huge body of literature, but I mention in passing that these models are not renormalizable, that their relation to QCD is unclear, and that they deal mostly with the very light mesons. Heavy flavors cannot be treated; see also [@Leutwyler01].\
*Schwinger-Dyson approaches* are potentially able to cope with the bound-state problem. Roberts [@Roberts01] emphasizes the chiral aspects: free quarks have a small current mass at large momentum, increasing to the large constituent mass at small momentum. Does this feature prevail in a bound-state problem, and how?\
*DLCQ and LC approaches*. Hiller [@Hil00] aims to diagonalize the light-cone Hamiltonian by DLCQ in physical space-time (3+1). His renormalization à la Pauli-Villars yields promising results, but he needs a super-computer to produce them. He works in a truncated Fock space. But Ligterink [@Ligterink01] concludes that Fock-space suppression is less dangerous than believed, and both Mangin-Brinet [@Mangin01] and Karmanov [@Karmanov01] report good stability of bound-state calculations in truncated spaces. The separation of soft and hard aspects by Schweiger [@Schweiger01] continues to be an important aspect of light-cone quantization.\
*Technical problems*. Basis optimization by Sugihara [@Sugihara01] and a new algorithm by van Iersel [@vanIersel01], applied here to the Yukawa model, are very important facets. In fact, the break-through in non-perturbative renormalization by Frederico [@FredericoFP01] and Frewer [@FrewerPF01], and a new insight into the nature of Melosh transforms by Krassnigg [@Krassnigg01], represent progress on a technical level as well.
*Lattice Gauge Calculations* use practically all the computer power in this world to generate a potential energy between quarks, but then a *non-relativistic* Schrödinger equation is used to calculate bound states [@Schierholz00]. This is unsatisfactory. It is not generally appreciated that LGCs carry considerable uncertainty when extrapolating their results down to mesons as light as the pion. It is equally little known that lattice gauge calculations *always yield strict and linear confinement*, even for QED, where we know the ionization threshold. The ‘breaking of the string’, or in a more physical language the ionization threshold, is one of the hot topics at the lattice conferences [@Schilling2000]. Moreover, in order to get the size of the pion, and thus the form factor, another generation of computers is required, as well as physicists to run them. Such considerations and the lacking perspectives on precision have motivated Wilson, among others, to quit.
*The new Wilson approach* to QCD is based almost entirely on the front form and renormalization group analysis [@Wilson6]. Walhout [@Walhout01] gives an example of it. The original hope was to assemble the operators in an effective interaction according to their relevance with respect to the renormalization group. It is not unfair to state that not much concrete hardcore technology has thus far emerged, despite the immense efforts over the years. In developing the formalism, the similarity transform of Wilson and Glazek has played a major role [@GlazekWilson12]. The basic idea is similar to that of the preceding method, the *Hamiltonian flow* of Wegner [@Weg94]. But as emphasized repeatedly by the latter, the similarity transform has a serious defect [@Wegner00]: as a built-in feature, it cannot account for the block structures in a gauge-field-theoretical many-body Hamiltonian and therefore should be abandoned.
In conclusion, I believe that there is little choice but to proceed with the more conventional methods. In the sequel, I will briefly review only one of them, the method of iterated resolvents.
\[t\]
The method of iterated resolvents
=================================
Instead of diagonalizing the Hamiltonian by DLCQ, one might wish to reduce the many-body problem behind a field theory to an effective one-body problem. The derivation of the effective interaction then becomes the key issue.
Because of the inherent divergencies in a gauge field theory, the QCD-Hamiltonian in 3+1 dimensions must be regulated from the outset. One of the few practical ways is vertex regularization [@BroPauPin98; @Pau98], in which every Hamiltonian matrix element, particularly those of the vertex interaction (the Dirac interaction proper), is multiplied by a convergence-enforcing momentum-dependent function. It can be viewed as a form factor [@BroPauPin98]. The precise form of this function is unimportant here, as long as it is a function of a cut-off scale ($\Lambda$).
By definition, an effective Hamiltonian acts only in the lowest sector of the theory (here: in the Fock space of one quark and one anti-quark). And, again by definition, it has the same eigenvalue spectrum as the full problem. I have derived such an effective interaction by the method of iterated resolvents [@Pau98], that is by systematically expressing the higher Fock-space wave functions as functionals of the lower ones. In doing so the Fock-space is not truncated and all Lagrangian symmetries are preserved. The projections of the eigenstates onto the higher Fock spaces can be retrieved systematically from the $q\bar q$-projection, with explicit formulas given in [@Pau98].
\[t\]
Let me sketch the method briefly; details may be found in [@Pau98]. DLCQ with its periodic boundary conditions has the advantage that the LC-Hamiltonian is a matrix with a finite number of Fock-space sectors, which we enumerate by $n$, with $1 < n\leq N$. The so-called harmonic resolution $K= L P^+/(2\pi)$ acts as a natural cut-off of the particle number. As shown in Figure \[fig:1\], $K=3$ allows for $N=8$, and $K=4$ for $N=13$ Fock-space sectors, for example. The Hamiltonian matrix is sparse: most of the matrix elements are zero, particularly if one includes only the vertex interaction $V$. For $n$ sectors, the eigenvalue problem in terms of block matrices reads $$\begin{aligned}
\sum _{j=1} ^{n} \langle i \vert H _n (\omega)\vert j \rangle
\langle j \vert\Psi (\omega)\rangle
= E (\omega)\ \langle i \vert\Psi (\omega)\rangle
,\label{eq:2}\end{aligned}$$ for $i=1,2,\dots,n$. I can always invert the quadratic block matrix of the Hamiltonian in the last sector to define the $n$-space resolvent $G _ n$, that is $$\begin{aligned}
G _ n (\omega) = {1\over \omega- H_n (\omega)}
.\end{aligned}$$ Using $G _ n$, I can express the projection of the eigenfunction in the last sector by $$\begin{aligned}
\langle n \vert \Psi (\omega)\rangle
= G _ n (\omega)
\sum _{j=1} ^{n-1} \langle n \vert H _n (\omega)\vert j \rangle
\ \langle j \vert \Psi (\omega) \rangle
,\end{aligned}$$ and substitute it in Eq.(\[eq:2\]). I then get an effective Hamiltonian in which the number of sectors is diminished by 1: $$H _{n -1} (\omega) = H _n (\omega)
+ H _n(\omega) G _ n (\omega) H _n (\omega)
.$$ This is a recursion relation, which can be repeated until one arrives at the $q\bar q$-space. The fixed point equation $ E (\omega ) = \omega $ determines all eigenvalues.
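The reduction step and the fixed-point condition $E(\omega)=\omega$ can be illustrated numerically. The sketch below is my own illustration, not taken from [@Pau98]: it applies the projected one-step reduction (the block form of $H_{n-1}=H_n+H_n G_n H_n$) to a generic two-sector Hermitian toy matrix and locates the fixed point by bisection. Since $\lambda_{\min}(H_\mathit{eff}(\omega))-\omega$ is strictly decreasing for $\omega$ below the spectrum of the higher sector, the root reproduces the exact ground-state eigenvalue of the full matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 2, 4                       # toy "lower" and "higher" Fock sectors
H = rng.normal(size=(n1 + n2, n1 + n2))
H = (H + H.T) / 2                   # generic Hermitian toy Hamiltonian

H11, H12 = H[:n1, :n1], H[:n1, n1:]
H21, H22 = H[n1:, :n1], H[n1:, n1:]

def f(omega):
    """lambda_min(H_eff(omega)) - omega; a zero is an exact eigenvalue of H."""
    G2 = np.linalg.inv(omega * np.eye(n2) - H22)   # higher-sector resolvent
    H_eff = H11 + H12 @ G2 @ H21                   # projected one-step reduction
    return np.min(np.linalg.eigvalsh(H_eff)) - omega

# f is strictly decreasing below the spectrum of H22, so bisection on
# that interval converges to the ground state of the full matrix H
lo = -np.linalg.norm(H, 2) - 1.0                   # below the whole spectrum
hi = np.min(np.linalg.eigvalsh(H22)) - 1e-6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)

ground = np.min(np.linalg.eigvalsh(H))
assert abs(0.5 * (lo + hi) - ground) < 1e-8
```

Applying the same step sector by sector reduces an $N$-sector block matrix to the lowest sector without any truncation, which is the content of the recursion above.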
\[t\]
For the block matrix structure as in Figure \[fig:1\], with its many zero matrices, the reduction is particularly easy and transparent. For $K=3$ one has a sequence of effective interactions: $$\begin{aligned}
\begin{array} {@{}l@{\,}c@{\,} l l@{\,}c@{\,}l@{}l@{}}
H_8 &=& T_{8} ,
& H_7 &=& T_{7} + V G _8 V ,
\\
H_6 &=& T_{6} + V G _7 V ,
& H_5 &=& T_{5} + V G _6 V.
\end{array}
\label{eq:6}\end{aligned}$$ The remaining ones get more complicated, *i.e.* $$\begin{aligned}
\begin{array} {@{}l@{\,}c@{\,}l@{}}
H_4 &=& T_{4} + V G _7 V + V G _7 V G _6 V G _7 V
,\\
H_3 &=& T_{3} + V G _6 V + V G _6 V G _5 V G _6 V + V G _4 V
,\\
H_2 &=& T_{2} + V G _3 V + V G _5 V
,\\
H_1 &=& T_{1} + V G _3 V + V G _3 V G _2 V G _3 V .
\end{array}
\hspace{-5em}\label{eq:7}\end{aligned}$$ For $K=4$, the effective interactions in Eq.(\[eq:6\]) are different (see for example [@Pau98]), but it is quite remarkable that those in Eq.(\[eq:7\]) remain the same. In fact, the effective interactions in sectors 1-4 are independent of $K$: the *continuum limit $K\rightarrow\infty$ is then trivial* and will be taken in the sequel.
In the continuum limit, the effective Hamiltonian in the $q\bar q$-space $H_1=H_\mathit{eff}$ is thus $$\begin{aligned}
\begin{array} {@{}l@{\,}c@{\,} l@{\,} l @{\,}l @{\,}l @{\,}l@{\,}}
H_\mathit{eff} &=& T &+& V G _3 V &+& V G _3 V G _2 V G _3 V ,
\\
&=& T &+& U_\mathit{conser}
&+& U_\mathit{change} .
\end{array}
\label{eq:8}\end{aligned}$$ The effective interaction has two contributions: a flavor-conserving piece $U_\mathit{conser}$ and a flavor-changing piece $U_\mathit{change}$. The flavor-changing interaction cannot become active in flavor-off-diagonal mesons.
The dressed propagators in Eq.(\[eq:8\]) and Figure \[fig:2\] are exact. The iterated resolvents resum perturbative diagrams to all orders.
Their conversion to free propagators with effective vertices $\alpha\rightarrow\overline\alpha (Q)$, represented in Figure \[fig:3\] by the thin lines and the open circles, respectively, is an approximation based on four well-specified assumptions [@Pau98].
The wavy line in Figure \[fig:3\] should not be mistaken as a single gluon exchange. The effective gluon corresponds to a particular resummation of infinitely many gluons.
Henceforward I deal only with the flavor conserving interaction.
The one-body equation
=====================
The effective one-body equation for flavor off-diagonal mesons (mesons with a different flavor for quark and anti-quark) becomes thus [@Pau00; @Pau98]: $$\begin{aligned}
M^2\psi_{\lambda_1\lambda_2}(x,\vec k_{\!\perp})
= \sum _{ \lambda_1',\lambda_2'}
\!\!\int\!\! dx' d^2 \vec k_{\!\perp}'
\nonumber\\ \times
\ U_{\lambda_1\lambda_2;\lambda_1'\lambda_2'}
(x,\vec k_{\!\perp};x',\vec k_{\!\perp}')
\ \psi_{\lambda_1'\lambda_2'}(x',\vec k_{\!\perp}') +
\nonumber\\ +
\left[
\frac{\overline m^2_{1} + \vec k_{\!\perp}^{\,2}}{x} +
\frac{\overline m^2_{2} + \vec k_{\!\perp}^{\,2}}{1-x}
\right]
\psi_{\lambda_1\lambda_2}(x,\vec k_{\!\perp})
,\label{eq:9}\end{aligned}$$ an integral equation with the kernel $$\begin{aligned}
\nonumber
U_{\lambda_1\lambda_2;\lambda_1'\lambda_2'}
(x,\vec k_{\!\perp};x',\vec k_{\!\perp}') = -
\frac{4\overline m_1 \overline m_2}{3\pi^2}
\\ \times
\frac{\overline\alpha(Q)}{Q^2} \overline R(Q)
\frac{S_{\lambda_1\lambda_2;
\lambda_1'\lambda_2'}(x,\vec k_{\!\perp};x',\vec k_{\!\perp}')}
{\sqrt{ x(1-x) x'(1-x')}}
.\label{eq:10}\end{aligned}$$ Here, $M ^2$ is the eigenvalue of the invariant-mass squared. The associated eigenfunction $\psi\equiv\Psi_{q\bar q}$ is the probability amplitude $\langle x,\vec k_{\!\perp};\lambda_{1},\lambda_{2}\vert\psi\rangle$ for finding a quark with momentum fraction $x$, transversal momentum $\vec k_{\!\perp}$ and helicity $\lambda_{1}$, and correspondingly the anti-quark with $1-x$, $-\vec k_{\!\perp}$ and $\lambda_{2}$. The (effective) quark masses $\overline m _1$ and $\overline m _2$ and the (effective) coupling constant $\overline\alpha$ are given below. The mean Feynman-momentum transfer of the quarks is denoted by $Q^2=Q ^2 (x,\vec k_{\!\perp};x',\vec k_{\!\perp}')$, $$\begin{aligned}
Q ^2 =
-\frac{1}{2}\left[(k_{1}-k_{1}')^2 + (k_{2}-k_{2}')^2\right]
,\end{aligned}$$ the spinor factor $S=S(x,\vec k_{\!\perp};x',\vec k_{\!\perp}')$ by $$\begin{aligned}
&&\hspace{-1.8em}S_{\lambda_1\lambda_2;\lambda_1'\lambda_2'}
(x,\vec k_{\!\perp};x',\vec k_{\!\perp}')\!=\!
\label{eq:12}\\
&\hspace{-1em}=&\!\!\!\!\!\left[\overline u(k_1,\lambda_1)\gamma^\mu
u(k_1',\lambda_1')\right]
\left[ \overline v(k_2',\lambda_2') \gamma_\mu
v(k_2,\lambda_2)\right]
\nonumber\\
&\hspace{-1em}=&\!\!\!\!\!\left[\overline u(k_1,\lambda_1)\gamma^\mu
u(k_1',\lambda_1')\right]
\left[\overline u (k_2,\lambda_2)\gamma_\mu
u(k_2',\lambda_2')\right]
,\nonumber\end{aligned}$$ since $
\overline v(k_2',\lambda_2') \gamma_\mu
v(k_2,\lambda_2) =
\overline u (k_2,\lambda_2)\gamma_\mu
u(k_2',\lambda_2')
$ holds as a general identity. One thus deals only with the $u$-spinors. As opposed to the earlier conventions [@BroPauPin98], they are normalized in Eq.(\[eq:12\]): $
\overline u (k,\lambda) u (k,\lambda') =
\delta_{\lambda\lambda'}
.$ The spinor factor $S$ is tabulated explicitly in [@Pau00]. The regulator function $\overline R(x',\vec k'_{\!\perp};\Lambda)$ restricts the range of integration as a function of some mass scale $\Lambda$. Note that Eq.(\[eq:9\]) is a fully relativistic equation. I have derived essentially the same equation also with Wegner’s Hamiltonian flow equations; see [@Pau00].
Renormalization
===============
The effective Hamiltonian in Eq.(\[eq:9\]) depends on a regulator scale $\Lambda$ through three quantities.\
First, it depends on $\Lambda$ through the effective quark masses $\overline m _f \equiv \overline m _f (\Lambda)$ which are hidden also in the Dirac spinors. They are given in Eq.(90) of [@Pau98] in terms of the bare $\alpha$ and $m_f$: $$\begin{aligned}
\overline m_f^2 &=& m_f^2
\left(1+\frac{\alpha}{\pi}\frac{n_c^2-1}{2n_c}
\ln{\frac{\Lambda^2}{m_g^2}}\right)
.\label{eq:13}\end{aligned}$$ In this expression a second regularization parameter $m_g$ appears, which is conceptually a kinematical gluon mass. The corresponding gluon mass diagram gives $$\begin{aligned}
\overline m_g^2 &=& m_g^2 - \frac{\alpha}{4\pi}
\sum_{f=1}^{n_f} m_f^2 \ln{\frac{\Lambda^2}{4m_f^2}}
,\label{eq:14}\end{aligned}$$ see Eq.(91) of [@Pau98]. The physical gluon mass must vanish due to gauge invariance, thus $\overline m_g=0$, which expresses $m_g$ in terms of $m_f$.\
Second, it depends on $\Lambda$ through the effective coupling $\overline \alpha(Q) \equiv \overline \alpha(\Lambda,Q)$. The expression in Eq.(100) of [@Pau98] is rewritten here conveniently in terms of an arbitrary scale $\kappa$: $$\begin{aligned}
\frac{1}{\overline{\alpha}(\Lambda,Q)}
&=& \frac{1}{\alpha}
- \frac{11n_c-2n_f}{12\pi}\ln{(\Lambda^2/\kappa^2)}
\nonumber\\
&+& \frac{11n_c}{12\pi} \ln{\left((\mu_g^2+Q^2)/\kappa^2\right)}
\nonumber\\
&-& \frac{2}{12\pi} \sum_{f=1}^{n_f}
\ln{\left((\mu_f^2+Q^2)/\kappa^2\right)}
,\label{eq:15}\end{aligned}$$ with $\mu_f = 2 m_f$ and $\mu_g = 2 m_g$.\
Third, the Hamiltonian depends on $\Lambda$ through the regularization function which here is the soft cut-off $$\begin{aligned}
\overline R(Q) \equiv \overline R(\Lambda,Q)
= \frac{\Lambda^2}{\Lambda^2+Q^2}
.\end{aligned}$$ The dependence on the unphysical parameter $\Lambda$ must be removed, $$\begin{aligned}
\frac{d}{d\Lambda} H_\mathit{LC} \left(
\overline m(\Lambda),
\overline \alpha(\Lambda),
\overline R(\Lambda) \right) = 0
,\end{aligned}$$ as required by renormalization theory, but how? The non-perturbative renormalization of $H$ was stalled for many years by the fact that the vertex function $\overline \alpha(\Lambda)$ and the regulator $\overline R(\Lambda)$ are so intimately coupled in Eq.(\[eq:9\]). It was always clear that one could add non-local counterterms [@Wilson6], but it was utterly unclear how to construct them. The progress comes from the recent work on the $\uparrow\downarrow$-model [@FredericoFP01; @FrewerPF01]: adding to $\overline R(\Lambda,Q)$ a counterterm $C(\Lambda,Q)$ and requiring that the sum $ R(\Lambda,Q)=\overline R(\Lambda,Q)+C(\Lambda,Q)$ be independent of $\Lambda$ determines $C(\Lambda,Q)$. One is left with $$\begin{aligned}
R(\Lambda,Q) = \overline R(\Lambda,Q)+C(\Lambda,Q) =
\frac{\mu^2}{\mu^2+Q^2}
.\end{aligned}$$ In line with renormalization theory, one can then go to the limit $\Lambda\longrightarrow\infty$, and $\mu$ becomes a parameter of the theory.
The cut-off dependence in $\overline \alpha(\Lambda,Q)$, Eq.(\[eq:15\]), can then be removed by replacing the bare coupling constant $\alpha$ by the cut-off dependent running coupling constant $\alpha_\Lambda$, *i.e.* $$\begin{aligned}
\alpha_\Lambda=
\frac{6\pi}{11n_c-2n_f}\ \frac{1}{\ln{(\Lambda/\kappa)}}
.\label{eq:19}\end{aligned}$$ The *renormalized* vertex function, $$\begin{aligned}
\frac{1}{\overline{\alpha}(Q)}
&=& \frac{11n_c}{12\pi} \ln{\left((\mu_g^2+Q^2)/\kappa^2\right)}
\nonumber\\
&-& \frac{2}{12\pi} \sum_{f=1}^{n_f}
\ln{\left((\mu_f^2+Q^2)/\kappa^2\right)}
,\label{eq:20}\end{aligned}$$ does not depend explicitly on $\Lambda$, and the scale $\kappa$ becomes another parameter of the theory.
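The cancellation of the $\ln\Lambda$ terms between Eq.(\[eq:15\]) and the running coupling of Eq.(\[eq:19\]) is exact and easy to verify numerically. In the sketch below the parameter values ($\kappa$, $\mu_g$, and equal flavor masses) are illustrative assumptions only; the point is that the cut-off drops out of $1/\overline\alpha(\Lambda,Q)$ to machine precision, leaving Eq.(\[eq:20\]).

```python
import math

nc, nf = 3, 4
kappa = 0.1531                       # GeV; n_f = 4 value quoted in the text
mu_g, mu_f = 0.40, [0.70] * nf       # GeV; 2*m_g and 2*m_f, illustrative

def inv_alpha_bare(Lam):             # Eq.(19): running bare coupling
    return (11 * nc - 2 * nf) / (6 * math.pi) * math.log(Lam / kappa)

def inv_alpha(Lam, Q):               # Eq.(15), with alpha -> alpha_Lambda
    v = inv_alpha_bare(Lam)
    v -= (11 * nc - 2 * nf) / (12 * math.pi) * math.log(Lam**2 / kappa**2)
    v += 11 * nc / (12 * math.pi) * math.log((mu_g**2 + Q**2) / kappa**2)
    for m in mu_f:
        v -= 2 / (12 * math.pi) * math.log((m**2 + Q**2) / kappa**2)
    return v

# the two logarithms in Lambda cancel exactly, leaving Eq.(20)
assert abs(inv_alpha(10.0, 1.0) - inv_alpha(1e4, 1.0)) < 1e-12
```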
In completing renormalization for the masses, Eqs.(\[eq:13\]) and (\[eq:14\]) are first rewritten for $n_c=3$ as $$\begin{aligned}
\overline m_f^2 &=&
m_f^2+\frac{8m_f^2}{3\pi}
\left(1-\frac{\ln{m_g/\kappa}}{\ln{\Lambda/\kappa}}\right)
\alpha \ln{\frac{\Lambda}{\kappa}}
,\\
m_g^2 &=&
\sum_{f=1}^{n_f} \frac{m_f^2 }{2\pi}
\left(1-\frac{\ln{2m_f/\kappa}}{\ln{\Lambda/\kappa}}\right)
\alpha \ln{\frac{\Lambda}{\kappa}}
.\end{aligned}$$ Inserting the running coupling constant from Eq.(\[eq:19\]) leaves us with $$\begin{aligned}
\begin{array} {@{}l @{\,}c@{\,} l@{}}
\displaystyle
\overline m_f^2 &=&
\displaystyle
m_f^2+\frac{8m_f^2}{3\pi} \frac{6\pi}{33-2n_f}
\left(1-\frac{\ln{m_g/\kappa}}{\ln{\Lambda/\kappa}}\right) ,
\\
\displaystyle
m_g^2 &=&
\displaystyle
\sum_{f=1}^{n_f} \frac{m_f^2 }{2\pi}\frac{6\pi}{33-2n_f}
\left(1-\frac{\ln{2m_f/\kappa}}{\ln{\Lambda/\kappa}}\right) .
\end{array}\hspace{-5em}\end{aligned}$$ Finally, I go to the limit $\Lambda\rightarrow\infty$ and express the bare masses in terms of the dressed ones: $$\begin{aligned}
m_f^2 = \frac{33-2n_f}{49-2n_f} \overline m_f^2
,\ m_g^2 = \frac{3}{49-2n_f} \sum_{f=1}^{n_f}\overline m_f^2
.\end{aligned}$$ This completes the program of renormalization: for the first time ever, the dependence on a cut-off $\Lambda$ has been removed completely from a field theoretical Hamiltonian. Notice that this step rests on the contributions [@FredericoFP01; @FrewerPF01] to this meeting.
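As a consistency check, the closed-form relations between bare and dressed masses follow from the $\Lambda\rightarrow\infty$ limit of the two gap equations above. The exact-arithmetic sketch below (equal dressed masses assumed purely for illustration) verifies both relations:

```python
from fractions import Fraction as F

nf = 4
mbar2 = [F(1)] * nf                  # equal dressed masses, illustrative units

# renormalized relations for the bare masses
m2  = [F(33 - 2 * nf, 49 - 2 * nf) * mb for mb in mbar2]
mg2 = F(3, 49 - 2 * nf) * sum(mbar2)

# Lambda -> infinity limit of the two gap equations:
#   mbar_f^2 = m_f^2 * (1 + 16/(33 - 2 n_f)),  m_g^2 = 3/(33 - 2 n_f) * sum_f m_f^2
assert all(m * (1 + F(16, 33 - 2 * nf)) == mb for m, mb in zip(m2, mbar2))
assert mg2 == F(3, 33 - 2 * nf) * sum(m2)
```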
\[t\]
\[t\]
The locking of the coupling constant
====================================
The Lagrangian for QCD has 7 parameters: the 6 flavor quark masses $m_f$ and the coupling constant $\alpha$. The renormalized effective Hamiltonian has one more parameter: the 6 flavor masses $\overline m_f$ and the two scales $\kappa$ and $\mu$. This is in full accord with renormalization theory, since whatever the model, one has a scale at which one experiments.
The renormalized vertex function of Eq.(\[eq:20\]) deserves some further discussion. Most importantly, it has a *finite value at $Q=0$*. The coupling constant locks itself, as one says.
One should think that $\kappa$ is entirely fixed by the coupling constant measured at sufficiently high $Q$. But taking, as usual, the value $\overline \alpha(M_Z)=0.118$ at the Z mass $M_Z=91.2\mbox{ GeV}$, one observes a rather dramatic dependence of $\kappa$ on the number of flavors included: $$\begin{aligned}
\begin{array} {@{}ccccc@{}}
n_f & \kappa & \alpha_0 & \mu_g & \mu_b \\
- &[\mbox{MeV}]& - &[\mbox{MeV}]&[\mbox{MeV}] \\
4 & 153.1 & 0.9566 & 387.7 & 336.7 \\
5 & 87.84 & 0.5446 & 434.1 & 359.6 \\
6 & 45.33 & 0.3848 & 488.2 & 467.2
\end{array}
\label{eq:24}\end{aligned}$$ Changing $n_f$ from 4 to 6 changes $\kappa$ by a factor of 4! The dependence of $\alpha_0 \equiv \overline \alpha(0)$ on $n_f$ is less pronounced, even if one puts all flavor masses equal to $\overline m_f = 350 \mbox{ MeV}$, as is conveniently done in Eq.(\[eq:24\]). The corresponding functions $\overline \alpha (Q)$ are displayed in Figure \[fig:4\]. The $n_f+1$ parameters in Eq.(\[eq:20\]) are unpleasant to work with, and it is useful to introduce the approximate expression $$\begin{aligned}
\overline \alpha _b(Q)
&=& \frac{12\pi}{33-2n_f}
\frac{1}{\ln{\left((\mu_b^2+Q^2)/\kappa^2\right)}}
.\end{aligned}$$ The only parameter, $\mu_b$, is fixed by $\alpha _0$ and given in Eq.(\[eq:24\]) as well. As shown in Figures \[fig:4\] and \[fig:5\] by the dashed line, $\overline \alpha _b(Q)$ is almost indistinguishable from $\overline \alpha (Q)$.
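The quality of the one-parameter fit can also be checked without the figures. In the sketch below, $\mu_b$ is fixed in closed form by matching $\alpha_0$; the $n_f=4$ values of $\kappa$ and $\mu_g$ are taken from Eq.(\[eq:24\]) with equal flavor masses $\overline m_f=350$ MeV (so $\mu_f=700$ MeV), all purely illustrative. Over $0.1$-$10$ GeV the two vertex functions agree to within a few percent:

```python
import math

nc, nf = 3, 4
kappa, mu_g = 0.1531, 0.3877         # GeV; n_f = 4 values from Eq.(24)
mu_f = [0.700] * nf                  # 2 * 350 MeV, equal flavor masses

b0 = (11 * nc - 2 * nf) / (12 * math.pi)

def inv_alpha(Q):                    # renormalized vertex function, Eq.(20)
    v = 11 * nc / (12 * math.pi) * math.log((mu_g**2 + Q**2) / kappa**2)
    for m in mu_f:
        v -= 2 / (12 * math.pi) * math.log((m**2 + Q**2) / kappa**2)
    return v

mu_b = kappa * math.exp(inv_alpha(0.0) / (2 * b0))   # matched at alpha_0

def inv_alpha_b(Q):                  # one-parameter approximation
    return b0 * math.log((mu_b**2 + Q**2) / kappa**2)

dev = max(abs(inv_alpha_b(Q) / inv_alpha(Q) - 1)
          for Q in [0.1 * i for i in range(1, 101)])
assert dev < 0.1                     # agreement within ten percent
```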
Using the more physical mass parameters from Eq.(\[eq:31\]) produces $$\begin{aligned}
\begin{array} {@{}ccccc@{}}
n_f & \kappa & \alpha_0 & \mu_g & \mu_b \\
- &[\mbox{MeV}]& - &[\mbox{GeV}]&[\mbox{MeV}] \\
4 & 153.1 & 0.4002 & .9941 & 1.007 \\
5 & 87.88 & 0.2133 & 2.981 & 4.099 \\
6 & 75.23 & .09844 & 99.14 & 685.8
\end{array}\end{aligned}$$ and the corresponding curves $\overline \alpha (Q)$ in Figure \[fig:5\]. At this point, at the latest, I have to abandon my earlier conjecture [@Pau98] that a momentum-dependent vertex function could be related to confinement in any way. In fact, the above curves have so little structure that one can replace them in a bound-state calculation by the constant $\alpha_0$. Henceforward I will thus give up $\kappa$ in favor of $\alpha=\alpha_0$ and change notation from $\overline m_f$ to $m_f$.
The $\uparrow\downarrow$-model as an application
================================================
\[t\]
$\overline u$ $\overline d$ $\overline s$ $\overline c$ $\overline b$
----- --------------- --------------- --------------- --------------- --------------- --
$u$ 768 871 2030 5418
$d$ 140 871 2030 5418
$s$ 494 494 2124 5510
$c$ 1865 1865 1929 6580
$b$ 5278 5278 5338 6114
: \[tab:2.2\] The calculated mass eigenvalues in MeV. Those for singlet-1s states are given in the lower, those for singlet-2s states in the upper triangle.
In light-cone parametrization, the quarks are at relative rest when $\vec k _{\!\perp}= 0$ and $ x = \overline x \equiv m_1/(m_1+m_2)$. For very small deviations from these equilibrium values the spinor matrix is proportional to the unit matrix, with [@Pau00] $$\begin{aligned}
\langle\lambda_1,\lambda_2\vert S\vert\lambda_1'\lambda_2'\rangle
\sim 4 m_1 m_2
\ \delta_{\lambda_1,\lambda_1'}
\ \delta_{\lambda_2,\lambda_2'}
.\end{aligned}$$ For very large deviations from equilibrium, particularly for $\vec k_{\!\perp}^{\prime\,2} \gg \vec k_{\!\perp} ^{\,2}$, holds $$\begin{aligned}
Q ^2 \simeq\vec k_{\!\perp}^{\prime\,2}
,\hskip2em\mbox{ and } \hskip2em
\langle\uparrow\downarrow\vert S\vert\uparrow\downarrow\rangle
\simeq 2\vec k_{\!\perp}^{\prime\,2}
.\end{aligned}$$ Both extremes are combined in the $\uparrow\downarrow$-model [@Pau00]: $$\begin{aligned}
&&\frac{S} {Q ^2} \equiv
\frac{4 m_1 m_2}{Q ^2} + 2
\Longrightarrow
\frac{4 m_1 m_2}{Q ^2} + 2 R(\Lambda,Q)
,\nonumber\\
&&\mathrm{with}\quad
R(\Lambda,Q) = \frac{\mu^2}{\mu^2+Q^2}
.\end{aligned}$$ It interpolates between two extremes: For small momentum transfer, the ‘2’ generated by the hyperfine interaction is unimportant and the dominant Coulomb aspects of the first term prevail. For large momentum transfers the Coulomb aspects are unimportant and the 2 dominates. Eq.(\[eq:9\]) therefore is replaced by $$\begin{aligned}
\lefteqn{
M^2
\psi(x,\vec k_{\!\perp}) = \left[
\frac{ m^2_{1} + \vec k_{\!\perp}^{\,2}}{x} +
\frac{ m^2_{2} + \vec k_{\!\perp}^{\,2}}{1-x}
\right]\psi(x,\vec k_{\!\perp}) }
\nonumber\\
\lefteqn{
-
\frac{\alpha}{3\pi^2} \!\int
\frac{ dx' d^2 \vec k_{\!\perp}'}
{\sqrt{ x(1-x) x'(1-x')}}\ \psi(x',\vec k_{\!\perp}')\ \times}
\nonumber\\ & & \ \times
\left(\frac{4 m_1 m_2}{Q ^2} +
\frac{2\mu^2}{\mu^2+Q^2}\right),
\label{eq:30}\end{aligned}$$ where $\psi(x,\vec k_{\!\perp})\equiv
\langle x,\vec k_{\!\perp}; \uparrow,\downarrow
\vert \psi\rangle $. With the canonical 8 parameters of the $\uparrow\downarrow$-model [@Pau00], $$\begin{aligned}
\begin{array} {@{}c@{\,}c@{\quad}c@{\,}c@{\,}c@{\ \ }c@{\,}c@{\,}c@{}}
\alpha & \mu & m_u & m_d & m_s & m_c & m_b & m_t \\
0.690 & 1.33 & 406 & 406 & 508 & 1.67 & 5.05 & 174 \\
- &\mbox{GeV}&\mbox{MeV}&\mbox{MeV}&\mbox{MeV}
&\mbox{GeV}&\mbox{GeV}&\mbox{GeV},
\end{array}
\label{eq:31}\end{aligned}$$ all masses of the physical mesons have been calculated according to Eq.(\[eq:30\]). They are compiled in Table \[tab:2.2\]. The empirical masses are compiled in Table \[tab:1.2\]. The agreement between the two is amazing. To the best of my knowledge there is no other model which can describe *all mesons* quantitatively from the $\pi$ up to the $\Upsilon$ from a common point of view, which here is QCD.
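To indicate how an equation of the type of Eq.(\[eq:30\]) is turned into numbers, the sketch below sets up a deliberately crude matrix discretization. The one-dimensional grids, the azimuthally averaged toy momentum transfer $Q^2$, and the small regulator added to the Coulomb pole are my own ad-hoc simplifications, not the actual procedure of [@Pau00]; the sketch only demonstrates that the attractive kernel pulls the lowest mass-squared eigenvalue below the free minimum.

```python
import numpy as np

# parameters of Eq.(31) for a u-dbar pair, in GeV
alpha, mu, m1, m2 = 0.690, 1.33, 0.406, 0.406

xs = np.linspace(0.05, 0.95, 19)        # momentum-fraction grid
ks = np.linspace(0.05, 2.00, 20)        # |k_perp| grid, GeV
X, K = [a.ravel() for a in np.meshgrid(xs, ks, indexing="ij")]
w = (xs[1] - xs[0]) * (ks[1] - ks[0]) * 2 * np.pi * K   # dx' d^2k' weights

T = (m1**2 + K**2) / X + (m2**2 + K**2) / (1 - X)       # free (kinetic) part

# ad-hoc azimuthally averaged momentum transfer with a small regulator
Q2 = (K[:, None] - K[None, :])**2 \
     + m1 * m2 * (X[:, None] - X[None, :])**2 + 1e-3
U = -(alpha / (3 * np.pi**2)) * (4 * m1 * m2 / Q2 + 2 * mu**2 / (mu**2 + Q2)) \
    / np.sqrt(X[:, None] * (1 - X[:, None]) * X[None, :] * (1 - X[None, :]))

A = np.diag(T) + np.sqrt(w[:, None] * w[None, :]) * U   # symmetrized kernel
M2_min = np.linalg.eigvalsh(A)[0]
assert M2_min < T.min()   # the attraction pulls M^2 below the free minimum
```

A realistic calculation would instead treat the Coulomb singularity and the angular integration carefully; the inequality checked here holds for any such attractive kernel.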
The proposed pion of the $\uparrow\downarrow$-model is rather different from the pions in the literature. I have found no evidence that vacuum condensates are important, but I conclude that the pion is describable by a QCD-inspired theory: the very large coupling constant, in conjunction with a very strong hyperfine interaction, makes it an ultra-strongly bound system of constituent quarks. More than 80 percent of the constituent quark mass is eaten up by binding effects. No other physical system has such a property.
The numerical wavefunction $\psi(x,\vec k_{\!\perp})$ can be fitted with only one free parameter, *i.e.* $$\begin{aligned}
&&\psi(x,\vec k _{\!\perp}) =
\frac{\mathcal{N}}{\sqrt{x(1-x)}}
\nonumber\\ &&\times
\frac
{\left(1+\displaystyle
\frac{m^2\left(2x-1\right)^2+\vec k_{\!\perp}^{\,2}}
{4x(1-x)\ m^2} \right)^{\frac{1}{2}}}
{\left(1+\displaystyle
\frac{m^2\left(2x-1\right)^2+\vec k_{\!\perp}^{\,2}}
{4x(1-x)\ p_a^2} \right)^2}
,\label{eq:25}\end{aligned}$$ with $p_a=1.338 m$ [@Pau00]. The explicit form of the wavefunction can be used to calculate the form factor and thus the exact root-mean-square radius $\langle r^2\rangle = -6\left.{dF_2(Q^2)}/{dQ^2}\right\vert_{Q^2=0}$ analytically [@PaM01]. The size of the $q\bar q$ wavefunction turns out to be $\langle r^2\rangle = (0.33\mbox{ fm})^2$, half as large as the empirical value $\langle r^2\rangle_{\mathrm{exp}} = (0.67\mbox{ fm})^2$.
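The analytic fit of Eq.(\[eq:25\]) is simple to evaluate. The grid scan below is my own check, with $m=m_u=406$ MeV and $p_a=1.338\,m$ taken from the text; it confirms that the wavefunction peaks at the equilibrium point $x=1/2$, $\vec k_{\!\perp}=0$ (the normalization $\mathcal{N}$ is dropped, since it does not affect the location of the maximum).

```python
import numpy as np

m = 0.406                 # GeV, up-quark mass from Eq.(31)
p_a = 1.338 * m           # effective Bohr momentum quoted in the text

x = np.linspace(0.01, 0.99, 99)      # grid contains x = 0.5 exactly
k = np.linspace(0.0, 2.0, 81)        # |k_perp| grid, GeV
X, K = np.meshgrid(x, k, indexing="ij")

# Eq.(25) without the overall normalization N
U = (m**2 * (2 * X - 1)**2 + K**2) / (4 * X * (1 - X))
psi = (1 + U / m**2)**0.5 / ((1 + U / p_a**2)**2 * np.sqrt(X * (1 - X)))

i, j = np.unravel_index(np.argmax(psi), psi.shape)
assert abs(x[i] - 0.5) < 1e-9 and k[j] == 0.0
```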
\[t\]
$\overline u$ $\overline d$ $\overline s$ $\overline c$ $\overline b$
----- --------------- --------------- --------------- --------------- --------------- --
$u$ 768 892 2007 5325
$d$ 140 896 2010 5325
$s$ 494 498 2110 —
$c$ 1865 1869 1969 —
$b$ 5278 5279 5375 —
: \[tab:1.2\] Empirical masses of the flavor-off-diagonal physical mesons in MeV. Vector mesons are given in the upper, scalar mesons in the lower triangle.
The parameter $p_a=1.338\,m$ in Eq.(\[eq:25\]) plays the role of an effective Bohr momentum of the constituents in the pion. The mean momentum of the constituents is thus 40 percent larger than their mass, which means that they move highly relativistically, quite in contrast to the constituents of atoms or nuclei. No wonder that potential models have thus far failed for the pion.
This completes one of my goals: I have a pion with the correct mass, and I have an analytic expression for its light-cone wave function. Eq.(\[eq:25\]) could thus be used as a baseline for calculating the higher Fock-space amplitudes, as explained in [@Pau98]. It could well be that the wavefunction obtained from such a simple model already suffices to be consistent with recent experiments [@Ash00].
Discussion: Front or instant form?
==================================
Eq.(\[eq:9\]) is a frame-independent, covariant, and fully relativistic front-form equation, with certain boosts being kinematic and trivial [@BroPauPin98]. One pays for these advantages with the fact that the transversal components of the total angular momentum $\vec J = \vec L + \vec S$ (the spin $\vec S=\frac{1}{2}(\sigma_1+\sigma_2)$ is not to be confused with the spinor factor $S$) are complicated dynamical operators in the front form; see for example [@BroPauPin98]. Only $J_z$ is simple and kinematic. The eigenvalues and eigenfunctions of Eq.(\[eq:9\]) can thus not be labeled with $J$. Despite this, Trittmann and Pauli [@TriPau00], in their numerical solution of the QED version of Eq.(\[eq:9\]) for different $J_z$, have done so by using the standard (non-relativistic) spectroscopic terms $^{2S+1} L _{J} ^{J_z}$. By inspection of the numerical results they found that the eigenvalues can be arranged in multiplets which are ($2J+1$)-fold degenerate modulo numerical accuracy. The authors could not find a plausible explanation for this in terms of the light-cone formalism.
Now we seem to understand this better. In the contribution to this session, Krassnigg [@Krassnigg01] shows that there exists a unitary transformation $\Omega$ which transforms the typical combination of Lepage-Brodsky spinors in Eq.(\[eq:9\]) into another combination with only Bjorken-Drell spinors.
Unitary transformations do not change the eigenvalues, and Eq.(\[eq:9\]) is identically transcribed into an equation for the reduced wave function $\varphi_{s_1 s_2}(\vec k)$, which reads for equal masses $$\begin{aligned}
\begin{array}{l@{}l@{}l@{}l@{}}
&&\left[M^2-4\left(m^2+\vec k^{\,2}\right)\right]
\varphi _{s_1 s_2}(\vec k)
\label{eq:33}\\
&=&
\displaystyle
\sum _{s'_1, s'_2}
\!\int\!\! d^3 \vec k'
\ \widetilde U _{s_1 s_2;s'_1 s'_2} (\vec k;\vec k')
\ \varphi_{s'_1 s'_2}(\vec k') .
\end{array}
\hspace{-6em}\end{aligned}$$ Since the spinors $u(k,s)$ in the kernel $$\begin{aligned}
\begin{array}{l@{}l@{}l@{}l@{}}
&\ &
\displaystyle
\widetilde U _{s_1 s_2 ;s'_1 s'_2} =
-\frac{8m}{3\pi^2}
\frac{\overline\alpha(Q)}{Q^2}R(Q) \times
\\ &&
\displaystyle %\times
\left[\overline u(k_1,s_1)\gamma^\mu
u(k_1',s_1')\right]
\left[\overline u (k_2,s_2)\gamma_\mu
u(k_2',s_2')\right] ,
\end{array}
\hspace{-6em}\end{aligned}$$ by definition are Bjorken-Drell spinors, one cannot recognize the front-form origin of this equation; see [@Krassnigg01] for further details. These are a set of four coupled integral equations in the usual momentum space. Formally speaking, they are instant-form equations in the rest frame, and, by inspection, they are invariant under spatial rotations. Their eigenfunctions can therefore be labeled as $\varphi_{J(L)S}$, as usual, with eigenvalues $M_{J(L)S}^2$ forming strictly ($2J+1$)-fold degenerate multiplets.
This procedure can also be reversed. Suppose that some phenomenological model yields momentum space wavefunctions $\varphi_{s_1 s_2}(\vec k)$. The transformations in [@Krassnigg01], which lead to Eq.(\[eq:33\]), can be inverted and used to generate light-cone wave functions $\psi_{\lambda_1\lambda_2}(x,\vec k _{\!\perp})$ with helicities $\lambda_1$ and $\lambda_2$. These can then be used as a reasonable approximation in existing formulas for the cross sections.
Perspectives
============
The light-cone community has not yet solved its homework problem, but it has gone a long way: (1) The role of zero modes and vacuum structure is better understood. (2) Effective interactions can be formulated and even renormalized. (3) Bound-state wavefunctions can be calculated with a technical effort comparable to or less than in the instant form. (4) Simple models can be generated which are not in conflict with experiments. (5) Last but not least, much of the work can be done analytically.\
Despite the limited progress, one should not be discouraged from continued efforts on the homework problem. After all, the Hamiltonian approach to *any* field theory in *any* form was disrupted in 1949, when Feynman's action-oriented approach appeared on the scene.
The aspects of confinement remain unsolved; at present one does not understand its origin. The aspects of the chiral phase transition are also unsolved. But one should emphasize that the solution of the bound-state problem takes place at temperature zero, possibly after a phase transition. Due to the fit to experiment, quark masses are finite and actually large. The present approach thus cannot contribute to questions like “What happens if quark masses vanish?” It starts where other approaches end.
In QED, hyperfine interactions and the Lamb shift are comparable in size. For QCD, the importance of the hyperfine interaction has been quantified by the $\uparrow\downarrow$-model, but the Lamb shift is a completely open question.
In QED, the Lamb shift arises from a photon in flight moving relative to the hydrogen bound state. That alone suffices to give part of the answer for QCD, by Eq.(\[eq:8\]). Going to the diagonal representation gives $$\begin{aligned}
U_\mathit{Lamb} = VG_3V =
\sum_n V\vert n\rangle\frac{1}{\omega-M_n^2 }\langle n\vert V\end{aligned}$$ The symbol $\sum_n $ refers to a summation (or integration) over all meson states and gluon states. The invariant mass squared $$\begin{aligned}
M_n^2 = \frac{M_b^2 +q_{\!\perp}^2}{1-y} + \frac{q_{\!\perp}^2}{y} \end{aligned}$$ corresponds to a free colored gluon with longitudinal momentum fraction $y$ and transversal momentum $\vec q_{\!\perp}$. It moves back to back with a colored meson bound state of mass $M_b$. We have little idea of the bound-state spectrum of colored mesons, either experimentally or theoretically. We don't even know whether such mesons are bound at all, but I see no immediate objection to giving it a try with the above methods, particularly a suitably adjusted $\uparrow\downarrow$-model.
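As a quick numerical illustration, the free invariant mass squared above can be evaluated directly; the values chosen below for $M_b$, $y$ and $q_\perp$ are purely hypothetical, since nothing in the text fixes them:

```python
def invariant_mass_squared(M_b, y, q_perp):
    """Invariant mass squared of the intermediate state: a colored
    meson of mass M_b recoiling against a free colored gluon with
    longitudinal momentum fraction y and transversal momentum q_perp."""
    return (M_b**2 + q_perp**2) / (1.0 - y) + q_perp**2 / y

# Hypothetical values (in GeV), for illustration only:
M2 = invariant_mass_squared(M_b=0.5, y=0.25, q_perp=0.3)
```

Note that $M_n^2$ diverges as $y\to 0$ or $y\to 1$, the familiar end-point behavior of light-cone energy denominators.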
I conclude that the calculation of the Lamb shift in QCD is an interesting and important problem, particularly for the pion. Can we challenge the lattice community for help with this?
[99]{}
H. Leutwyler, these proceedings.
C. Roberts, *l.c.*
T. Frederico *et al.*, *l.c.*, hep-ph/0111136
A. Schweiger, *l.c.*
W. Plessas, *l.c.*
A. Krassnigg *et al.*, *l.c.*, hep-ph/0111260
M. Mangin-Brinet, *l.c.*
M. Frewer *et al.*, hep-ph/0111039
T. Walhout, *l.c.*
T. Karmanov, *l.c.*
R. Ligterink, *l.c.*
T. Sugihara, *l.c.*
M. van Iersel, *l.c.*
D. Ashery, Nucl. Phys. B (Proc.Suppl.) **90** (2000) 67; see also *l.c.*
F. Wegner, Nucl. Phys. B (Proc.Suppl.) **90** (2000) 141; H.C. Pauli, Nucl. Phys. B (Proc.Suppl.) **90** (2000) 147.
J. Hiller, Nucl. Phys. B (Proc.Suppl.) **90** (2000) 170, and *l.c.*
G. Schierholz, Nucl. Phys. B (Proc.Suppl.) **90** (2000) 207.
U. Trittmann and H.C. Pauli, Nucl. Phys. B (Proc.Suppl.) **90** (2000) 161.
H.C. Pauli, Nucl. Phys. B (Proc.Suppl.) **90** (2000) 147, 259.
H.C. Pauli and S.J. Brodsky, Phys. Rev. [**D32**]{} (1985) 1993, 2001.
S.J. Brodsky, H.C. Pauli, and S.S. Pinsky, Phys. Rep. **301** (1998) 299-486.
K. Schilling, Nucl. Phys. B(Proc.Suppl.) **83** (2000) 140.
K.G. Wilson, T.S. Walhout, A. Harindranath, W.M. Zhang, R.J. Perry, S.D. G[ł]{}azek, Phys. Rev. **D49** (1994) 6720-6766.
S.D. G[ł]{}azek and K.G. Wilson, Phys. Rev. **D48** (1993) 5863; *ibid.* **D49** (1994) 4214.
F. Wegner, Ann. Physik **3** (1994) 77.
H.C. Pauli, Eur. Phys. J. **C7** (1998) 289, hep-th/9809005; and in: New Directions in Quantum Chromodynamics, C.R. Ji and D.P. Min, Eds., American Institute of Physics, 1999, p. 80-139, hep-ph/9910203.
H.C. Pauli and A. Mukherjee, to appear in Int. J. Mod. Phys. (2001), hep-ph/0104175; H.C. Pauli, submitted to Nucl. Phys. A (2001).
---
abstract: 'Using the adaptive optics assisted near infrared integral field spectrometer SINFONI on the VLT, we have obtained observations of the Circinus galaxy on parsec scales. The morphologies of the H$_2$ (1-0)S(1) 2.12$\mu$m and Br$\gamma$ 2.17$\mu$m emission lines are only slightly different, but their velocity maps are similar and show a gradient along the major axis of the galaxy, consistent with rotation. Since $V_{\mathrm{rot}}$/$\sigma$ $\approx$ 1 suggests that random motions are also important, it is likely that the lines arise in a rotating spheroid or thickened disk around the AGN. Comparing the Br$\gamma$ flux to the stellar continuum indicates that the star formation in this region began $\sim$10$^8$ yr ago. We also detect the \[Si[vi]{}\] $1.96\mathrm{\mu m}$, \[Al[ix]{}\] $2.04\mathrm{\mu m}$ and \[Ca[viii]{}\] $2.32\mathrm{\mu m}$ coronal lines. In all cases we observe a broad blue wing, indicating the presence of two or more components in the coronal line region. A correlation between the ionisation potential and the asymmetry of the profiles was found for these high excitation species.'
address: 'Max-Planck-Institut für extraterrestrische Physik, Postfach 1312, D-85741 Garching, Germany'
author:
- 'F. Mueller Sánchez, R. I. Davies, F. Eisenhauer,'
- 'L. J. Tacconi, and R. Genzel'
title: 'Near IR diffraction-limited integral-field SINFONI spectroscopy of the Circinus galaxy'
---
Integral Field Spectroscopy; SINFONI; Active galaxies; Circinus; starburst
Introduction
============
The star formation history, the mass distribution, and the stellar and gas dynamics on scales of less than a few parsecs in the nuclei of Seyfert galaxies are some of the main debated issues in the context of active galactic nuclei. The Circinus galaxy, at a distance of $4.2\pm0.8$ Mpc (Freeman et al. [@freeman]), is an ideal subject to study because of its proximity (1${\hbox{$^{\prime\prime}$}}$ = 20 pc). It is a large, highly inclined ($i=65^\circ$), spiral galaxy that hosts both a typical Seyfert 2 nucleus and a circumnuclear starburst. This work is a summary of the results obtained after analyzing the SINFONI datacube of the Circinus galaxy presented in Mueller Sánchez et al. [@muellersan06].
The instrument: SINFONI
=======================
SINFONI comprises an Adaptive Optics facility (AO-Module), developed by ESO (Bonnet et al. [@bonnet03]), and SPIFFI, a NIR integral field spectrograph developed by MPE (Eisenhauer et al. [@eis03]). The AO-Module consists of an optical relay from the telescope’s Cassegrain focus to the SPIFFI entrance focal plane, which includes a deformable mirror conjugated to the telescope pupil. The curvature is updated on the 60 actuators at 420 Hz, with a closed-loop bandwidth of 30–60 Hz, to compensate for the aberrations produced by the turbulent atmosphere. Under good atmospheric conditions an adequate correction can be obtained over the full $1\times2$ arcmin$^2$ FOV with stars up to $R = 17$ mag. The SPIFFI integral field spectrometer records simultaneously the spectra of all image points in a two-dimensional field of view. The image scale of SPIFFI allows sampled imaging at the diffraction limit of the telescope (0.0125${\hbox{$^{\prime\prime}$}}$/pix), seeing-limited observations (0.125${\hbox{$^{\prime\prime}$}}$/pix), and an intermediate image scale (0.05${\hbox{$^{\prime\prime}$}}$/pix), over a FOV of $0.8{\hbox{$^{\prime\prime}$}}\times0.8{\hbox{$^{\prime\prime}$}}$, $8{\hbox{$^{\prime\prime}$}}\times8{\hbox{$^{\prime\prime}$}}$, and $3.2{\hbox{$^{\prime\prime}$}}\times3.2{\hbox{$^{\prime\prime}$}}$ respectively. The spectral resolution of the spectrometer ranges between 2000 and 5000 for the three covered atmospheric bands: $J$ (1.1$\mu$m–1.4$\mu$m), $H$ (1.45$\mu$m–1.85$\mu$m) and $K$ (1.95$\mu$m–2.45$\mu$m). The instrument is fully cryogenic, and it is equipped with a Rockwell 2k$\times$2k HAWAII-2RG array. Figure \[spiffi\] shows an inside view of the main components of SPIFFI. The light enters from the top. The pre-optics with a filter wheel re-image the object plane from the AO module onto the image slicer, providing the three different image scales. The image slicer rearranges the two-dimensional field onto a one-dimensional pseudo longslit. A grating wheel disperses the light, and a short focal length camera then images the spectra on the detector.
After some processing of the raw data, the outcome is a data cube with two spatial and one spectral dimensions.
Nuclear dust emission
=====================
Images of the 2$\mathrm{\mu m}$ continuum and the Br$\gamma$ and $\mathrm{H_2}$ (1-0)S(1) line emission, as well as the $^{12}$CO(2-0) band flux, are presented in Figure \[linemaps\]. The SINFONI PSF was found to be well represented by a symmetric Moffat function, with a FWHM spatial resolution of 0.2${\hbox{$^{\prime\prime}$}}$.
In Circinus, although the AGN is highly obscured, it is revealed indirectly by the compact $K$-band non-stellar core. The fraction of stellar light in the nucleus is deduced from comparing the $EW(^{12}$CO(2-0)) absorption bandhead with star-cluster models. The intrinsic equivalent width is largely independent of the star formation history for an ensemble of stars, as predicted by the models, and has an almost constant value of $\sim12$ Å (Davies et al. [@davies06]). The stellar continuum, which accounts for only $\sim$15% of the total $K$-band luminosity in the central arcsec, shows a much broader distribution than the non-stellar part. This extended morphology is also reflected in the $\mathrm{H_2}$ (1-0)S(1) and Br$\gamma$ line profiles. On these scales, less than 20 pc, an exponential profile with $r_{\mathrm{d}}$ = $4^{+0.5}_{-0.1}$ is an excellent match for each of the lines, consistent with the hypothesis that the stars and the molecular gas reside in a thickened disk or rotating spheroid, as suggested by the kinematics (see Section \[starformation\]).
We have partially resolved the non-stellar $K$-band source. A quadrature correction of its FWHM with that of the PSF yields an intrinsic size of $\sim$2 pc, consistent with that found by Prieto et al. [@pri04] and also with the sizes predicted by the unification model, in which the hot dust ($\sim$1000 K) lies $\sim$1 pc away from the AGN.
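The quadrature correction can be sketched as follows; the observed FWHM used in the example is hypothetical (only the 0.2 arcsec PSF and the 20 pc per arcsec scale appear in the text), chosen so that the result reproduces the quoted $\sim$2 pc:

```python
import math

def intrinsic_size_pc(fwhm_obs, fwhm_psf, pc_per_arcsec=20.0):
    """Remove the PSF contribution in quadrature and convert the
    intrinsic FWHM from arcsec to parsecs (1 arcsec = 20 pc at the
    distance of Circinus)."""
    return math.sqrt(fwhm_obs**2 - fwhm_psf**2) * pc_per_arcsec

# With the 0.2" PSF, a hypothetical observed FWHM of ~0.224" gives
# the quoted ~2 pc intrinsic size:
size = intrinsic_size_pc(fwhm_obs=0.224, fwhm_psf=0.2)
```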
Star formation activity and gas kinematics {#starformation}
==========================================
As the AGN is highly obscured, no broad-line region is visible. This, together with the symmetry of the narrow-line emission and its uniform velocity field, suggests that the Br$\gamma$ emission is associated with star formation activity surrounding the Seyfert nucleus. This picture is supported by the similar morphologies of the Br$\gamma$, $^{12}$CO(2-0) and H$_2$ maps, and by the consistency of the Br$\gamma$ velocity fields and dispersion maps with those of the H$_2$. By comparing the evolution of $EW(\mathrm{Br\gamma})$ and $\nu_{\mathrm{SN}}$/$L_{\mathrm{K}}^*$ with time, a moderate star formation timescale of $t_{scl} = 10^8$ yr and an age of $8\times 10^7$ yr best fit our observational constraints of $EW(\mathrm{Br\gamma})$ = 30 Å and $\nu_{\mathrm{SN}}$/$L_{\mathrm{K}}^*$ = $1.47\times10^{-10}$ $L_{\odot}^{-1}$ yr$^{-1}$, indicating the presence of a young stellar population within a few parsecs ($R<8$ pc) of the active nucleus.
The projected velocity maps of the H$_2$ (1-0)S(1) and Br$\gamma$ lines show a velocity gradient consistent with simple rotation about the same major axis as the large-scale galaxy, as can be seen in Figure \[velmaps\]. After correcting the velocity maps for inclination ($i$ = 65$^{\circ}$), a rotation velocity of 75 km s$^{-1}$ at 8 pc from the nucleus was found. The intrinsic velocity dispersion of the stars in the nuclear region is almost constant, with a value of $\sim$70 km s$^{-1}$. Using these values we obtain an intrinsic velocity-to-dispersion ratio of $V_{\mathrm{rot}}$/$\sigma$ $\approx$ 1.1. Because this ratio is $\sim$1, it indicates that, in addition to rotation, random motions are also significant in the nuclear region.
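The inclination correction is a simple deprojection by $\sin i$; in the example below the projected velocity is back-computed from the quoted intrinsic 75 km s$^{-1}$ at $i=65^\circ$, so the numbers are illustrative only:

```python
import math

def deproject_velocity(v_los, inclination_deg):
    """Correct a line-of-sight (projected) rotation velocity for the
    inclination of the disk: v_rot = v_los / sin(i)."""
    return v_los / math.sin(math.radians(inclination_deg))

# A projected ~68 km/s at i = 65 deg corresponds to the intrinsic
# 75 km/s quoted in the text:
v_rot = deproject_velocity(68.0, 65.0)
ratio = v_rot / 70.0   # sigma ~ 70 km/s from the dispersion maps
```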
The coronal lines {#coronal}
=================
In our observations of Circinus the highly ionised \[Si[vi]{}\], \[Ca[viii]{}\] and \[Al[ix]{}\] emission lines are detected. In all three profiles we observe asymmetric and broadened lines, indicating the presence of two or more components within the coronal line region. Indeed, the three asymmetric coronal lines were best fitted by the superposition of two Gaussians, as can be seen in Figure \[clprofiles\] for the case of the \[Ca[viii]{}\] emission line. The kinematics of the coronal gas were also analyzed by comparing the emission line profiles of the three coronal lines. Figure \[clprofiles\] also shows the results for the three high-excitation lines detected with SINFONI. The blueshifted ($-100$ to $-200$ km s$^{-1}$) broad ($>$300 km s$^{-1}$) wing is spatially extended. The narrow component, on the other hand, is at the systemic velocity and spatially compact. These characteristics suggest that the two components originate in different regions and could even be excited by different mechanisms. In the case of the broad component, its blueshift indicates that part of the gas must arise in outflows around the AGN (Rodríguez-Ardila et al. [@rod04]). The most likely scenario is that this component comprises a multitude of outflowing cloudlets producing many narrow components at different velocities, which combine to produce the observed morphology and profile. It has been suggested that similar outflowing cloudlets in NGC 1068 might arise from ablation of the larger clouds (Cecil et al. [@cecil02]). In the case of the narrow component, the fact that it is compact, centered on the nucleus, and at the systemic velocity suggests that it originates physically close to the AGN. The narrow line width ($\sim$180 km s$^{-1}$) indicates that it must be excited by photoionization from a hard ionising continuum (Oliva [@oliva99]).
Interestingly, we observe correlations between the ionisation potential (IP) and both the blueshift and relative strength of the broad wing, indicating that different species (with different IP) originate in different regions. We suggest that the higher ionization species have faster outflow velocities because they are closer to the AGN where the radiative acceleration is stronger (and the cloudlets have not been slowed by travelling a long distance through the interstellar medium), and weaker fluxes because there are fewer photons hard enough to ionise them.
Bonnet, H., et al. 2003, SPIE, 4839, 329
Cecil, G., Dopita, M. A., Groves, B., Wilson, A. S., Ferruit, P., Pécontal, E., Binette, L. 2002, ApJ, 568, 627
Davies, R. I., et al. 2006, in preparation
Freeman, K. C., Karlsson, B., Lyngå, G., Burrell, J. F., van Woerden, H., Goss, W. M., Mebold, U. 1977, A&A, 55, 445
Eisenhauer, F., et al. 2003, SPIE, 4844, 1548
Mueller Sánchez, F., et al. 2006, in preparation
Oliva, E., Marconi, A., Moorwood, A. F. M. 1999, A&A, 342, 87
Prieto, A., Meisenheimer, K., Marco, O., et al. 2004, ApJ, 614, 135
Rodríguez-Ardila, A., Pastoriza, M., Viegas, S., Sigut, T., Pradhan, A. 2004, A&A, 425, 457
---
abstract: 'Shnirelman’s theorem is applied to solving Diophantine equations, and also to discussing the problem of representing Gaussian integers as a sum of odd Gaussian primes.'
author:
- Felix Sidokhine
bibliography:
- 'references.bib'
title: 'Shnirelman’s Theorem Applications'
---
Introduction
============
In this note we study systems of Diophantine equations and connect this problem to Shnirelman’s theorem: “Every integer $n > 7$ is a sum of at most $s_0$ (Shnirelman’s constant) odd primes” [@Ramare:2013aa]. The problem of representing Gaussian integers as a sum of odd Gaussian primes is explored from the angle of Shnirelman’s theorem.
Diophantine Equations and Shnirelman’s Theorem
==============================================
\[th1\] The system of Diophantine equations: $$\begin{cases}
x_{11} + x_{12} + x_{13} + x_{14} = a \\
x_{21} + x_{22} + x_{23} + x_{24} = b \\
\forall s \text{ } x_{1s} + x_{2s} = p_s \text{ where } p_s \text{ is an odd prime}
\end{cases}$$ has a solution $(x_{11},...,x_{24})$ where $x_{ij} \in \mathbb{Z}_+$[^1] if $a > 10$, $a \geq b > 0$ and $a + b \equiv 0 {\ (\mathrm{mod}\ 2)}$.
The proof uses induction. Assume the theorem holds up to some $n - 2 = a + b$, and suppose it fails for $n = a' + b'$. By Shnirelman’s theorem with $s_0 = 4$, there exist $p \geq q \geq r \geq l \in \pi_\mathbb{Z}$[^2] such that $n = a' + b' = p + q + r + l$. Let us consider four cases:
- Case 1: $b' \leq l$, then we have a solution: $\begin{cases}
p + q + r + (l - b') = a' \\
0 + 0 + 0 + b' = b' \\
\end{cases}$
- Case 2: $l < b' \leq r + l$, then we have a solution: $\begin{cases}
p + q + (r + l - b') + 0 = a' \\
0 + 0 + (b' - l) + l = b'
\end{cases}$
- Case 3: $r + l < b' \leq q +r + l$, then we have a solution: $\begin{cases}
p + (q + r + l - b') + 0 + 0 = a' \\
0 + (b'-r-l) +r + l = b'
\end{cases}$
- Case 4: $q + r +l < b' < p + q+ r + l$, then we have a solution: $\begin{cases}
(p + q + r + l - b') + 0 + 0 + 0 = a' \\
(b' - q - r - l) + q + r + l = b'
\end{cases}$
In each case we have exhibited a solution, contradicting the assumption; hence the theorem is true.
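The four-case bookkeeping in the proof is mechanical and can be sketched in code; the Shnirelman decomposition $n = p+q+r+l$ is taken as given input:

```python
def split_rows(p, q, r, l, b):
    """Given odd primes p >= q >= r >= l with p+q+r+l = a+b and
    0 < b <= a, return rows (x_11,...,x_14) and (x_21,...,x_24)
    following the four cases of the proof; the column sums are the
    primes p_s."""
    if b <= l:                       # Case 1
        return (p, q, r, l - b), (0, 0, 0, b)
    if b <= r + l:                   # Case 2
        return (p, q, r + l - b, 0), (0, 0, b - l, l)
    if b <= q + r + l:               # Case 3
        return (p, q + r + l - b, 0, 0), (0, b - r - l, r, l)
    return (p + q + r + l - b, 0, 0, 0), (b - q - r - l, q, r, l)  # Case 4

# Example: n = 16 = 7+3+3+3 with (a, b) = (11, 5) falls into case 2:
row_a, row_b = split_rows(7, 3, 3, 3, b=5)   # (7, 3, 1, 0), (0, 0, 2, 3)
```

All entries are non-negative and each column sums to one of the primes, as the cases require.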
\[th2\] Given a system of Diophantine equations where $a > 6$ and $a \geq b > 0$: $$\begin{cases}
\sum_{i=1}^k x_{1i} = a \\
\sum_{j=1}^k x_{2j} = b \\
\forall s \text{ } x_{1s} + x_{2s} = p_s \text{ where } p_s \text{ is an odd prime}
\end{cases}$$ there exists $k \leq k_0$ ($k_0$ a constant) such that the system has a solution $(x_{11},...,x_{2n})$ where $x_{ij} \in \mathbb{Z}_+$.
The proof mimics the one used for theorem \[th1\].
Based on these two theorems, we can propose the following conjecture, which will be useful when working with Gaussian integers.
\[cj1\] Given the system of equations where $a > 6$ and $a \geq b > 0$: $$\begin{cases}
\sum_{i=1}^k x_{1i} = a \\
\sum_{j=1}^k x_{2j} = b \\
\forall s \text{ } x_{1s}^2 + x_{2s}^2 = t_s \text{ where $t_s$ is either a prime } p_s \equiv 1 {\ (\mathrm{mod}\ 4)} \text{ or } t_s = p_s^2 \text{ where } p_s \equiv 3 {\ (\mathrm{mod}\ 4)}.
\end{cases}$$ there exists $k \leq k_1$ ($k_1$ a constant) such that the system has a solution $(x_{11},...,x_{2n})$ where $x_{ij} \in \mathbb{Z}_+$.
Let $z = a + ib \in \mathbb{Z}[i]$. If the above conjecture is true, it implies that $z$ is a sum of at most $k_1$ odd Gaussian primes with non-negative real and imaginary parts [@Knill:aa].
Gaussian Integers, Odd Gaussian Primes and Shnirelman’s Theorem
===============================================================
Let us introduce the notion of “odd” and “even” Gaussian integers, where $z$ is odd if $z \equiv 1 {\ (\mathrm{mod}\ 1+i)}$ and even if $z \equiv 0 {\ (\mathrm{mod}\ 1+i)}$.
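This parity is easy to test in code: modulo $1+i$ one has $i \equiv 1$ and $2 \equiv 0$, so $a+ib \equiv a+b \pmod{1+i}$, i.e. $z$ is odd exactly when $a+b$ is odd. A minimal sketch:

```python
def is_odd_gaussian(z):
    """z = a+bi is odd (z = 1 mod 1+i) iff a+b is odd: modulo (1+i)
    one has i = 1 and 2 = 0, hence a+bi reduces to a+b mod 2."""
    return (int(z.real) + int(z.imag)) % 2 == 1

odd_example = is_odd_gaussian(2 + 1j)    # 2+i is odd
even_example = is_odd_gaussian(1 + 1j)   # 1+i itself is even
```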
Our global experiment was to check whether every element from the set $\Gamma_G = \{ z \in \mathbb{Z}[i] | {\operatorname{Re}}(z) > 0 \land -{\operatorname{Re}}(z) < {\operatorname{Im}}(z) \leq {\operatorname{Re}}(z) \}$ can be represented as a sum of at most $k_0$ elements from the set $\Gamma_\pi = \{ z \in \pi_{\mathbb{Z}[i]} | {\operatorname{Re}}(z) > 0 \land -{\operatorname{Re}}(z) < {\operatorname{Im}}(z) \leq {\operatorname{Re}}(z) \}$. Some of our computations and results are presented in the appendix.
Our experimental work has allowed us to observe some interesting facts about Gaussian integers: notably, given $z \in \Gamma_G$ with either ${\operatorname{Im}}(z) = {\operatorname{Re}}(z) - 1$ or ${\operatorname{Im}}(z) = {\operatorname{Re}}(z)$, $z$ cannot be represented as a sum of odd primes from $\Gamma_\pi$. This has led us to consider a different choice for our set, $\Gamma_G = \{ z \in \mathbb{Z}[i] | {\operatorname{Re}}(z) > 0 \land {\operatorname{Im}}(z) \geq 0 \}$, and the following set $K_\pi = \{ z \in \pi_{\mathbb{Z}[i]} | {\operatorname{Re}}(z) \geq 0 \land {\operatorname{Im}}(z) \geq 0 \}$. It is worth noting that in both cases $\Gamma_G$ is a maximal set of non-associated Gaussian integers.
Studying the results from [@Knill:aa] and our experimental data, we have formulated the following conjecture:
\[cj2\] Given $A = \{ z \in \mathbb{Z}[i] | {\operatorname{Re}}(z) > 0 \land {\operatorname{Im}}(z) > 0 \}$ then for all $z$ in $A$, such that $\max({\operatorname{Re}}(z),{\operatorname{Im}}(z)) \geq 7 $, $z$ is a sum of at most 3 odd primes from the set $K_\pi$.
In our study of this conjecture, we have formulated the following proposition:
Suppose that for any $z_0 \in A$ and some integer $k \geq 2$ such that $z_0 \equiv k {\ (\mathrm{mod}\ 1+i)}$ and $\max({\operatorname{Re}}(z_0),{\operatorname{Im}}(z_0)) \geq c_0 > 0$, the equation $\sum_{i=1}^k x_i = z_0$ has a solution $(x_1,...,x_k)$ with $x_i \in K_\pi$. Then for any $w_0 \in A$ with $w_0 \equiv k+1 {\ (\mathrm{mod}\ 1+i)}$ and $\max({\operatorname{Re}}(w_0),{\operatorname{Im}}(w_0)) \geq c_0 + 3$, the equation $\sum_{i=1}^{k+1} x_i = w_0$ has a solution $(x_1,...,x_{k+1})$ with $x_i \in K_\pi$.
The proof is by induction. Let the proposition be true for some $k_0$ and suppose it is false for $k_0 + 1$; then there exists $w$ with $w \equiv k_0 + 1 {\ (\mathrm{mod}\ 1+i)}$ and $\max({\operatorname{Re}}(w),{\operatorname{Im}}(w)) \geq c_0 + 3$ for which the proposition fails. If ${\operatorname{Im}}(w) \geq c_0 + 3$, then $w - 3i \equiv k_0 {\ (\mathrm{mod}\ 1+i)}$ and $\max({\operatorname{Re}}(w -3i),{\operatorname{Im}}(w -3i)) \geq c_0$, contradicting the inductive hypothesis. The case ${\operatorname{Re}}(w) \geq c_0 + 3$ is analogous.
If we assume that every odd Gaussian integer $z$ where ${\operatorname{Re}}(z)>0$, ${\operatorname{Im}}(z)>0$ and $\max({\operatorname{Re}}(z),{\operatorname{Im}}(z)) \geq c_0$ is a sum of 3 odd Gaussian primes with non-negative real and imaginary parts, then we can use proposition 1 to get the following theorem:
Any element $a \in \{ z \in \mathbb{Z}[i] | {\operatorname{Re}}(z) > 0 \land {\operatorname{Im}}(z) > 0 \land \max({\operatorname{Re}}(z),{\operatorname{Im}}(z)) > c_1 \}$ is a sum of at most 4 odd Gaussian primes belonging to the set $K_\pi = \{ z \in \pi_{\mathbb{Z}[i]} | {\operatorname{Re}}(z) \geq 0 \land {\operatorname{Im}}(z) \geq 0 \}$
If we consider $\mathbb{Z}$ as a subset of $\mathbb{Z}[i]$, there are even elements of $\mathbb{Z}$ which cannot be represented as a sum of 2 elements from the set $\pi_{\mathbb{Z}} \cap \pi_{\mathbb{Z}[i]}$. However, if one of the hypotheses below is true, then the elements of $\mathbb{Z}$, as a subset of $\mathbb{Z}[i]$, can be represented as a sum of at most $s_0$ primes from $\pi_{\mathbb{Z}} \cap \pi_{\mathbb{Z}[i]}$.
- Hypothesis 1: There exists $c_0 > 0$ such that every integer $n \geq c_0$, $n \equiv 2 {\ (\mathrm{mod}\ 4)}$ is a sum of 2 primes where $p_i \equiv 3 {\ (\mathrm{mod}\ 4)}$
- Hypothesis 2: There exists $c_0 > 0$ such that every integer $n \geq c_0$, $n \equiv 1 {\ (\mathrm{mod}\ 4)}$ is a sum of 3 primes where $p_i \equiv 3 {\ (\mathrm{mod}\ 4)}$
- Hypothesis 3: There exists $c_0 > 0$ such that every integer $n \geq c_0$, $n \equiv 0 {\ (\mathrm{mod}\ 4)}$ is a sum of 4 primes where $p_i \equiv 3 {\ (\mathrm{mod}\ 4)}$
- Hypothesis 4: There exists $c_0 > 0$ such that every integer $n \geq c_0$, $n \equiv 3 {\ (\mathrm{mod}\ 4)}$ is a sum of 5 primes where $p_i \equiv 3 {\ (\mathrm{mod}\ 4)}$
Indeed if at least one of the above hypotheses is true, then the following theorem is also true:
\[th130\] For all $n \in \mathbb{Z}$ with $n \geq c_1$, where $c_1$ is a constant, there exists $m$, with $2 \leq m \leq s_1$ ($s_1$ a constant), such that ${\sum_{i=1}^m x_i = n}$ has a solution $(p_1,...,p_m)$ where each $p_i$ is a prime with $p_i \equiv 3 \mod 4$.
Let hypothesis 2 be true and consider the following 3 cases:
- Every $n$ with $n \equiv 0 {\ (\mathrm{mod}\ 4)}$ and $n \geq c_0 + 3$ is a sum of four odd primes: since $n - 3 \equiv 1 {\ (\mathrm{mod}\ 4)}$ and $n - 3 \geq c_0$, this follows from hypothesis 2 by adding the prime 3.
- Every $n$ with $n \equiv 3 {\ (\mathrm{mod}\ 4)}$ and $n \geq c_0 + 6$ is a sum of five odd primes, by the previous case (again subtracting the prime 3).
- Every $n$ with $n \equiv 2 {\ (\mathrm{mod}\ 4)}$ and $n \geq c_0 + 9$ is a sum of six odd primes, by the previous case.
From these cases, we can conclude that $c_1 = c_0 + 9$ and $s_1 = 6$.
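Assuming hypothesis 2, the case chain above can be spot-checked numerically with a brute-force search over primes $\equiv 3 \pmod 4$ (illustrative only; the constants $c_0$, $c_1$ are not determined here, and the search bound is arbitrary):

```python
from itertools import combinations_with_replacement

def primes_3_mod_4(limit):
    """Primes below `limit` that are congruent to 3 mod 4."""
    ps = [p for p in range(3, limit)
          if all(p % d for d in range(2, int(p**0.5) + 1))]
    return [p for p in ps if p % 4 == 3]

def representable(n, m, limit=200):
    """Is n a sum of exactly m primes that are all = 3 (mod 4)?"""
    ps = primes_3_mod_4(limit)
    return any(sum(c) == n for c in combinations_with_replacement(ps, m))

# e.g. 21 = 1 (mod 4) is a sum of three such primes: 3 + 7 + 11
ok = representable(21, 3)
```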
Using conjecture \[cj2\] and theorem \[th130\] we can formulate the following hypothesis:
For all $z \in \Gamma_G$, where $\max({\operatorname{Re}}(z),{\operatorname{Im}}(z)) \geq c_0$, $z$ is a sum of at most $k_0$ odd primes from the set $K_\pi = \{ z \in \pi_{\mathbb{Z}[i]} | {\operatorname{Re}}(z) \geq 0 \land {\operatorname{Im}}(z) \geq 0\}$, where $k_0$ is a constant.
Based on our experimental evidence, we can also propose the following conjecture:
For all $z \in \Gamma_G$, where $\max({\operatorname{Re}}(z),{\operatorname{Im}}(z)) \geq 6$, $z$ is a sum of at most 3 odd Gaussian primes from the set $S_\pi = \{ z \in \pi_{\mathbb{Z}[i]} | {\operatorname{Re}}(z) \geq 0 \land -{\operatorname{Re}}(z) < {\operatorname{Im}}(z)\}$ (i.e. $z = \sum_{l=1}^k p_l$ where $p_l \in S_\pi$ is an odd Gaussian prime and $||p_l|| < || z||)$.
Discussion and Conclusion
=========================
In the present note we have constructed a solution of a system of Diophantine equations of the first degree in several variables over the set of non-negative integers by means of Shnirelman’s theorem. We have demonstrated this approach on Gaussian integers. Based on experimental data on the representation of Gaussian integers by sums of Gaussian primes, we have shown that there are infinitely many elements of a maximal set of non-associated Gaussian integers $\Gamma_G$ which cannot be represented as a sum of two or three Gaussian primes belonging to a set $K_\pi$ $(\Gamma_G \subset K_\pi \subset \mathbb{Z}[i])$. In this case we have formulated the Goldbach conjecture for $\mathbb{Z}[i]$ as an analogue of Shnirelman’s theorem. However, according to our experimental evidence, we expect that every Gaussian integer of $\Gamma_G$, except for a finite number, can be represented as a sum of at most three odd Gaussian primes belonging to a set $S_\pi$ $(\Gamma_G \subset K_\pi \subset S_\pi \subset \mathbb{Z}[i])$, where $S_\pi$ is a minimal subset of $\mathbb{Z}[i]$ satisfying these conditions.
Appendix
========
[|l|l|l|l|l|]{}
\
$z \in \Gamma$ & $p$ & $q$ & $r$ & Representation\
$z=9$ & $p=3$ & $q=3$ & $r=3$ & $z=p+q+r$\
$z=13$ & $p=7$ & $q=3$ & $r=3$ & $z=p+q+r$\
$z=17$ & $p=11$ & $q=3$ & $r=3$ & $z=p+q+r$\
$z=21$ & $p=11$ & $q=7$ & $r=3$ & $z=p+q+r$\
$z=25$ & $p=11$ & $q=11$ & $r=3$ & $z=p+q+r$\
$z=29$ & $p=19$ & $q=7$ & $r=3$ & $z=p+q+r$\
$z=33$ & $p=19$ & $q=7$ & $r=7$ & $z=p+q+r$\
$z=37$ & $p=19$ & $q=11$ & $r=7$ & $z=p+q+r$\
\
$z=7$ & $p=3+2i$ & $q=2-i$ & $r=2-i$ & $z=p+q+r$\
$z=11$ & $p=7$ & $q=2+i$ & $r=2-i$ & $z=p+q+r$\
$z=15$ & $p=8+3i$ & $q=5-2i$ & $r=2-i$ & $z=p+q+r$\
$z=19$ & $p=11$ & $q=4+i$ & $r=4-i$ & $z=p+q+r$\
$z=23$ & $p=13+2i$ & $q=7$ & $r=3-2i$ & $z=p+q+r$\
$z=27$ & $p=13+2i$ & $q=11$ & $r=3-2i$ & $z=p+q+r$\
$z=31$ & $p=23$ & $q=4+i$ & $r=4-i$ & $z=p+q+r$\
$z=35$ & $p=23$ & $q=6+i$ & $r=6-i$ & $z=p+q+r$\
\
$z=6+i$ & $p=2+i$ & $q=2+i$ & $r=2-i$ & $z=p+q+r$\
$z=6+3i$ & $p=2+i$ & $q=2+i$ & $r=2+i$ & $z=p+q+r$\
$z=7+2i$ & $p=3$ & $q=2+i$ & $r=2+i$ & $z=p+q+r$\
$z=7+4i$ & $p=3+2i$ & $q=2+i$ & $r=2+i$ & $z=p+q+r$\
$z=8+i$ & $p=2+i$ & $q=3$ & $r=3$ & $z=p+q+r$\
$z=8+3i$ & $p=3+2i$ & $q=3$ & $r=2+i$ & $z=p+q+r$\
$z=8+5i$ & $p=3+2i$ & $q=3+2i$ & $r=2+i$ & $z=p+q+r$\
$z=9+2i$ & $p=3+2i$ & $q=3$ & $r=3$ & $z=p+q+r$\
$z=9+4i$ & $p=5+2i$ & $q=2+i$ & $r=2+i$ & $z=p+q+r$\
$z=9+6i$ & $p=5+4i$ & $q=2+i$ & $r=2+i$ & $z=p+q+r$\
$z=19+16i$ & $p=10+9i$ & $q=6+5i$ & $r=3+2i$ & $z=p+q+r$\
$z=27+24i$ & $p=23+22i$ & $q=2+i$ & $r=2+i$ & $z=p+q+r$\
$z=39+36i$ & $p=35+34i$ & $q=2+i$ & $r=2+i$ & $z=p+q+r$\
$z=48+45i$ & $p=25+24i$ & $q=20+19i$ & $r=3+2i$ & $z=p+q+r$\
$z=50+i$ & $p=23$ & $q=20+i$ & $r=7$ & $z=p+q+r$\
$z=50+3i$ & $p=40+i$ & $q=6+i$ & $r=4+i$ & $z=p+q+r$\
$z=50+7i$ & $p=40+i$ & $q=5+4i$ & $r=5+2i$ & $z=p+q+r$\
$z=50+11i$ & $p=36+5i$ & $q=9+4i$ & $r=5+2i$ & $z=p+q+r$\
$z=50+15i$ & $p=35+8i$ & $q=10+3i$ & $r=5+4i$ & $z=p+q+r$\
$z=50+19i$ & $p=35+16i$ & $q=10+i$ & $r=5+2i$ & $z=p+q+r$\
$z=50+25i$ & $p=35+18i$ & $q=10+3i$ & $r=5+4i$ & $z=p+q+r$\
$z=50+41i$ & $p=45+38i$ & $q=3+2i$ & $r=2+i$ & $z=p+q+r$\
$z=50+43i$ & $p=35+34i$ & $q=10+7i$ & $r=5+2i$ & $z=p+q+r$\
$z=50+45i$ & $p=25+24i$ & $q=20+19i$ & $r=5+2i$ & $z=p+q+r$\
$z=50+47i$ & $p=25+24i$ & $q=20+19i$ & $r=5+4i$ & $z=p+q+r$\
\
$z=6+5i$ & $p=3+2i$ & $q=2+i$ & $r=2-i$ & $z=p+q+ir$\
$z=7+6i$ & $p=3+2i$ & $q=3+2i$ & $r=2-i$ & $z=p+q+ir$\
$z=8+7i$ & $p=5+4i$ & $q=2+i$ & $r=2-i$ & $z=p+q+ir$\
$z=9+8i$ & $p=5+4i$ & $q=3+2i$ & $r=2-i$ & $z=p+q+ir$\
$z=11+10i$ & $p=8+3i$ & $q=2+i$ & $r=6-i$ & $z=p+q+ir$\
$z=19+18i$ & $p=10+9i$ & $q=8+5i$ & $r=4-i$ & $z=p+q+ir$\
$z=27+26i$ & $p=19+16i$ & $q=5+2i$ & $r=8-3i$ & $z=p+q+ir$\
$z=39+38i$ & $p=35+34i$ & $q=3+2i$ & $r=2-i$ & $z=p+q+ir$\
$z=48+47i$ & $p=23+22i$ & $q=20+19i$ & $r=6-5i$ & $z=p+q+ir$\
$z=49+48i$ & $p=25+24i$ & $q=20+19i$ & $r=5-4i$ & $z=p+q+ir$\
$z=50+49i$ & $p=25+24i$ & $q=20+19i$ & $r=6-5i$ & $z=p+q+ir$\
[|l|l|l|l|]{}
\
$z \in \Gamma$ & $p$ & $q$ & Representation\
$z =6$ & $p=3$ & $q=3$ & $z=p+q$\
$z =10$ & $p=7$ & $q=3$ & $z=p+q$\
$z =14$ & $p=11$ & $q=3$ & $z=p+q$\
$z =18$ & $p=11$ & $q=7$ & $z=p+q$\
$z =30$ & $p=19$ & $q=11$ & $z=p+q$\
$z =42$ & $p=23$ & $q=19$ & $z=p+q$\
\
$z = 8$ & $p=5+2i$ & $q=3-2i$ & $z=p+q$\
$z =12$ & $p=6+i$ & $q=6-i$ & $z=p+q$\
$z =16$ & $p=13+2i$ & $q=3-2i$ & $z=p+q$\
$z =20$ & $p=10+i$ & $q=10-i$ & $z=p+q$\
$z =24$ & $p=12+7i$ & $q=12-7i$ & $z=p+q$\
$z =28$ & $p=15+2i$ & $q=13-2i$ & $z=p+q$\
$z =32$ & $p=17+2i$ & $q=15-2i$ & $z=p+q$\
$z =36$ & $p=21+4i$ & $q=15-4i$ & $z=p+q$\
$z =44$ & $p=27+2i$ & $q=17-2i$ & $z=p+q$\
\
$z = 6+2i$ & $p=3+2i$ & $q=3$ & $z=p+q$\
$z = 6+4i$ & $p=3+2i$ & $q=3+2i$ & $z=p+q$\
$z = 7+i$ & $p=4+i$ & $q=3$ & $z=p+q$\
$z = 7+3i$ & $p=5+2i$ & $q=2+i$ & $z=p+q$\
$z = 7+5i$ & $p=5+4i$ & $q=2+i$ & $z=p+q$\
$z = 9+7i$ & $p=6+5i$ & $q=3+2i$ & $z=p+q$\
$z= 19+17i$ & $p=13+12i$ & $q=6+5i$ & $z=p+q$\
$z= 27+25i$ & $p=25+24i$ & $q=2+i$ & $z=p+q$\
$z =36+2i$ & $p=26+i$ & $q=10+i$ & $z=p+q$\
$z= 39+37i$ & $p=36+35i$ & $q=3+2i$ & $z=p+q$\
$z= 48+46i$ & $p=25+24i$ & $q=23+22i$ & $z=p+q$\
$z= 49+47i$ & $p=26+25i$ & $q=23+22i$ & $z=p+q$\
$z= 50+48i$ & $p=25+24i$ & $q=25+24i$ & $z=p+q$\
$z =51+i$ & $p=47$ & $q=4+i$ & $z=p+q$\
$z =51+3i$ & $p=45+2i$ & $q=6+i$ & $z=p+q$\
$z =51+7i$ & $p=45+2i$ & $q=6+5i$ & $z=p+q$\
$z =51+9i$ & $p=47+8i$ & $q=4+i$ & $z=p+q$\
$z =51+25i$ & $p=48+23i$ & $q=3+2i$ & $z=p+q$\
$z= 51+31i$ & $p=49+30i$ & $q=2+i$ & $z=p+q$\
$z =51+49i$ & $p=48+47i$ & $q=3+2i$ & $z=p+q$\
\
$z = 6+6i$ & $p=5+4i$ & $q=2-i$ & $z=p+iq$\
$z = 7+7i$ & $p=5+4i$ & $q=3-2i$ & $z=p+iq$\
$z = 8+8i$ & $p= 7+2i$ & $q=6-i$ & $z=p+iq$\
$z = 9+9i$ & $p=5+4i$ & $q=5-4i$ & $z=p+iq$\
$z =10+10i$ & $p=6+5i$ & $q=5-4i$ & $z=p+iq$\
$z =11+11i$ & $p=6+5i$ & $q=6-5i$ & $z=p+iq$\
$z=12+12i$ & $p=11+6i$ & $q=6-i$ & $z=p+iq$\
$z=16+16i$ & $p=14+i$ & $q=15-2i$ & $z=p+iq$\
$z=19+19i$ & $p=18+17i$ & $q=2-i$ & $z=p+iq$\
$z=27+27i$ & $p=25+24i$ & $q=3-2i$ & $z=p+iq$\
$z=39+39i$ & $p=35+34i$ & $q=5-4i$ & $z=p+iq$\
$z=48+48i$ & $p=46+41i$ & $q=7-2i$ & $z=p+iq$\
$z=49+49i$ & $p=25+24i$ & $q=25-24i$ & $z=p+iq$\
$z=50+50i$ & $p=48+47i$ & $q=3-2i$ & $z=p+iq$\
$z=51+51i$ & $p=26+25i$ & $q=26-25i$ & $z=p+iq$\
[^1]: $\mathbb{Z}_+ = \{ n \in \mathbb{Z} | n \geq 0\}$
[^2]: $\pi_\mathbb{Z} = \{ n \in \mathbb{Z} | n \text{ is prime} \}$
---
abstract: 'Jointly using data from multiple similar sources for the training of prediction models is an increasingly important task in many fields of science. In this paper, we propose a framework for [*generalist and specialist*]{} predictions that leverages multiple datasets, with potential heterogeneity in the relationships between predictors and outcomes. Our framework uses ensembling with stacking, and includes three major components: 1) training of the ensemble members using one or more datasets, 2) a no-data-reuse technique for estimating the stacking weights, and 3) task-specific utility functions. We prove that under certain regularity conditions, our framework produces a stacked prediction function with an oracle property. We also derive analytically the conditions under which the proposed no-data-reuse technique increases the prediction accuracy of the stacked prediction function compared to using the full data. We perform a simulation study to numerically verify and illustrate these results and apply our framework to predicting mortality based on a collection of variables including long-term exposure to common air pollutants.'
author:
- Boyu Ren
- Prasad Patil
- Francesca Dominici
- Giovanni Parmigiani
- Lorenzo Trippa
bibliography:
- 'reference.bib'
title: |
Cross-study Learning\
for Generalist and Specialist Predictions
---
Introduction
============
New advances in technologies, for example biomarker assays in biomedical studies, enable the generation of rich datasets. It is increasingly common for researchers to have access to multiple ($K > 1$) studies, or more generally sets of data, able to answer the same or similar scientific questions [@klein2014data; @kannan2016public; @manzoni2018genome]. Although datasets from multiple studies may contain the same outcome variable $Y$ and covariates $X$ (for example, patient survival and pre-treatment prognostic profiles in clinical studies), the $(X, Y)$ joint distributions $P_1, \ldots, P_K$ are typically different, due to distinct study populations, study designs and technological artifacts [@simon2003pitfalls; @rhodes2004large; @patil2015test; @sinha2017assessment]. In this article, we focus on the task of developing prediction models using multiple datasets, accounting for the heterogeneity across the $(P_k, k = 1,\ldots,K)$ study-specific distributions. We introduce a distinction between two classes of *prediction functions* (PFs) depending on the goal of the prediction problem in the multi-study setting: *generalist* and *specialist* prediction functions.
Generalist predictions are directed to hypothetical future studies $K+1, K+2, \ldots$. The training strategy to develop a generalist prediction function depends on relations and similarities between studies. For example, the study-specific geographic areas or assays can be relevant in the development of prediction models. If studies are considered exchangeable, i.e. joint analyses are invariant to permutations of the study indices, then a model which predicts accurately and consistently across the available $K$ studies is a good candidate for generalist use, to predict $Y$ in future studies $k > K$. This class of prediction functions has been studied in the literature [@sutton2008recent; @tseng2012comprehensive; @pasolli2016machine] and several contributions are based on hierarchical models [@warn2002bayesian; @babapulle2004hierarchical; @higgins2009re]. Similarly, when the exchangeability assumption is inadequate, joint models for multiple studies can incorporate information on relevant relations between studies to construct generalist models [@moreno2012unifying]. For example, when $K$ studies are collected at different time points $t_1 < t_2 < \cdots < t_K$, the development of a generalist model can incorporate potential cycles or short-term trends.
Specialist predictions are, in contrast, directed to predicting future outcomes $Y$ based on covariates $X$ in the context of a specific study $k$ in $\{ 1, \ldots, K \}$ (for example, a geographic area) represented by one of the $K$ datasets. Bayesian models can be used to borrow information and leverage the $K-1$ remaining datasets in addition to the targeted study $k$. Typically, the degree of heterogeneity of the distributions $(P_1, \ldots , P_K)$ affects the extent of improvement in accuracy that one achieves with multi-study models compared to simpler models developed using only data from study $k$.
Recently, the use of ensemble methods has been proposed to develop generalist prediction functions based on multi-study data collections [@patil2018training; @zhang2019robustifying; @loewinger2019covariate]. In particular, stacking [@wolpert1992stacked; @breiman1996stacked] is used to combine prediction functions $\{\hat Y_k(\cdot), k = 1,\ldots , K\}$, each trained on a single study $k$, into a single generalist prediction function that targets contexts $k > K$. The weights assigned to each model $\hat Y_k$ in stacking are often derived by maximizing a utility function representative of the performance of the resulting prediction function. In this manuscript, our focus will be on collections of exchangeable studies. Nonetheless, the application of stacking does not require this exchangeability assumption, and the optimization of the ensemble weights can be tailored to settings where exchangeability is implausible. Importantly, stacking allows investigators to capitalize on multiple machine learning algorithms, such as random forests or neural networks, to train the study-specific functions $\hat Y_k$. We investigate within the stacking framework [@patil2018training] the optimization of the ensemble weights assigned to a collection of single-set prediction functions (SPFs), generated with arbitrary machine learning methods. Each SPF is trained on a single study $k$ or on a combination of multiple studies. The ensemble weights will approximately maximize a utility function $U$ which we estimate using the entire collection of $K$ studies (generalist prediction) or only data in study $k$ (specialist prediction). Notably, stacking as currently implemented in multi-study learning can potentially suffer from over-fitting due to data reuse (DR): the same datasets generate SPFs and contribute (with others) to guiding the optimization of the stacking weights.
With the aim of mitigating overfitting, we introduce a no-data-reuse (NDR) procedure that retains the three key components of the stacking methodology: the training of SPFs, the estimation of the utility function $U$, and the optimal choice of the ensemble weights.
In this manuscript we compare procedures to weight SPFs with and without data reuse. We use the mean squared error (MSE) as our primary metric to evaluate prediction accuracy. Our results prove that, when the number of studies $K$ and the sample sizes $n_k$ become large, both stacking with DR and NDR achieve a performance similar to that of an oracle benchmark. The oracle is defined as the linear combination of the SPFs’ limits ($\lim_{n_k\to\infty} \hat Y_k$) that minimizes the MSE in future studies $k > K$. Our results bound the MSE difference between the oracle ensembles and two stacking procedures, with and without data reuse. We use these asymptotic results to describe similarities between stacking and multi-study Bayesian hierarchical models when the SPFs are linear. Related bounds have been studied for the single-study setting in [@vdl1] and in the functional aggregation literature [@juditsky2000functional; @juditsky2008learning]. We also illustrate that if the oracle predictions lie within the convex hull of the SPFs’ limits $(\lim_{n_k\to\infty} \hat Y_k;\ k=1,\ldots,K)$, then stacking produces prediction functions that are asymptotically equivalent to the oracle. We finally provide finite sample comparisons of stacking with DR and NDR. We identify a threshold value for the number of datasets $K$, which depends on the cross-study heterogeneity, below which NDR stacking reduces the MSE.
We apply our NDR and DR stacking procedures to predict mortality in Medicare beneficiaries enrolled before 2002. The datasets contain demographic and health-related information on the beneficiaries at the zipcode level and measurements of air pollutants. We are interested in predicting the number of deaths per 10,000 person-years. In distinct analyses, we partitioned the database into state-specific datasets ($K=50$) and into county-specific datasets ($K=58$). We compare the relative performance of NDR and DR stacking. The results are aligned with our analytic results. Indeed, with hold-out data we verified that in the first analysis, with $K=10$ state-level datasets (high heterogeneity; the remaining 40 are used as validation datasets), NDR produced generalist predictions with better accuracy than DR. In contrast, with SPFs developed on county-specific datasets (low cross-study heterogeneity), DR stacking predictions are more accurate than NDR stacking predictions. These comparisons were confirmed by iterated analyses with random sets of $K=10$ states and $K=10$ counties.
Generalist and Specialist Predictions
=====================================
Notation
--------
We observe $K$ studies $k = 1, \ldots, K$, with sample sizes $n_k$. For individual $i$ in study $k$ we have a vector of features $x_{i,k} \in \mathcal X$ and the individual outcome $y_{i,k} \in \mathbb R$. We use $\mathcal S = \{(x_{i,k}, y_{i,k});i =1, \ldots, n_k, k = 1, \ldots, K\}$ to indicate the collection of all $K$ datasets. Based on these, we define a library of training sets (LTS), denoted as $\mathcal D$, which includes $T$ members $D_1,\ldots ,D_T$. Each $D_t$ is a set of $(i, k)$ indices, where $i \in \{1, 2, \ldots, n_k\}$ is the sample index within a study $k$. The set $D_t$ can include indices with different $k$ values (see for example $D_3$ in Figure \[fig:D-example\]). We call a collection $\mathcal D$ study-specific if $T = K$ and $D_t = \{(1, t), \ldots,(n_t, t)\}$, with $t = 1, \ldots, K$.
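The indexing above can be made concrete with a small sketch; the sample sizes and the mixed set below are hypothetical, chosen only to illustrate the notation.

```python
# Hypothetical illustration of the indexing notation: K = 3 studies with
# made-up sample sizes n_k, and training sets D_t given as lists of
# (i, k) index pairs.
n = {1: 3, 2: 2, 3: 2}  # assumed sample sizes n_k for studies k = 1, 2, 3

# Study-specific LTS: T = K and D_t = {(1, t), ..., (n_t, t)}.
D_study_specific = [[(i, k) for i in range(1, n[k] + 1)] for k in sorted(n)]

# A training set may also mix studies, as D_3 does in Figure 1:
D_mixed = [(1, 1), (2, 1), (1, 3)]
```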
![An illustration of the relation between studies and training sets, where $\mathcal D=\{D_1,D_2,D_3\}.$[]{data-label="fig:D-example"}](subsampling.pdf)
We consider $L$ different learners —a learner is a method of generating a prediction function, such as linear regression, random forest, or a neural network. Training learner $\ell$ on set $D_t \in \mathcal D$ generates a single-set prediction function (SPF), denoted $\hat Y_t^\ell: \mathcal X \to \mathbb R$. The set of all SPFs is $\hat{\mathcal Y} = \{\hat Y_t^\ell(\cdot);\ell=1,\ldots,L,t = 1,\ldots,T\}$. Let $W$ be a subset of $\mathbb R^{TL}$. With stacking [@wolpert1992stacked], we combine the $\hat{\mathcal Y}$ components into $\hat Y_w:\mathcal X \to \mathbb R$ via:
$$\hat Y_w (\cdot) = \sum_{\ell=1}^L\sum_{t=1}^Tw_{\ell, t} \hat Y_t^\ell(\cdot),$$
where $w = (w_{\ell,t};\ell\leq L, t\leq T)$ is a vector of weights in $W$.
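A minimal sketch of this weighted combination, with two stand-in SPFs (the functions below are illustrative, not trained learners):

```python
# Sketch of the stacked prediction function
#   Y_w(x) = sum_{l,t} w_{l,t} * Yhat_t^l(x).
# The SPFs below are illustrative stand-ins for trained learners.
def stack(spfs, w):
    """Return the function x -> sum_j w[j] * spfs[j](x)."""
    def Y_w(x):
        return sum(w_j * f(x) for w_j, f in zip(w, spfs))
    return Y_w

spfs = [lambda x: 2.0 * x, lambda x: x + 1.0]  # two hypothetical SPFs
Y_w = stack(spfs, [0.5, 0.5])                  # equal ensemble weights
```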
We want to use $\hat Y_w$ for prediction in a target population with unknown joint $(X, Y)$ distribution $\pi$. The performance of $\hat Y_w$ is measured by its expected utility $U$, which quantifies accuracy in the target population: $$U(w;\pi) = \int_{(x,y)} u(\hat Y_w(x), y) \; d\pi(x,y),$$ where $u(\hat y, y)$ is a utility function, e.g. $u(\hat y, y) = -(\hat y - y)^2$. The distribution $\pi$ is unknown and we estimate $U(w; \pi)$ with $$\hat U(w;\nu) = \sum_{k = 1}^K \frac{\nu_k}{n_k}\sum_{i=1}^{n_k} u(\hat Y_w(x_{i,k}), y_{i,k}).
\label{eq:U-est}$$
The weights $\nu_k\geq 0$, $\sum_k \nu_k = 1$, are user-specified and are designed to capture the relation between the target $\pi$ and the set of distributions $P_1,\ldots , P_K$. In this paper, we are interested in generalist prediction, which corresponds to $\nu_k = \frac{1}{K}$ for $k = 1, \ldots, K$, and specialist prediction, which corresponds to $\nu_k=1$ for the target study $k$ and $\nu_{k'}=0$ for the remaining $K - 1$ studies.
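The estimate in (\[eq:U-est\]) can be sketched as follows, with $u(\hat y, y) = -(\hat y - y)^2$ and toy data standing in for the $K$ studies:

```python
# Sketch of the utility estimate U_hat(w; nu) in (eq:U-est), with
# u(yhat, y) = -(yhat - y)^2. `studies` maps each study k to its
# (x_i, y_i) pairs; `Y_w` is any stacked prediction function.
def utility_estimate(Y_w, studies, nu):
    total = 0.0
    for k, data in enumerate(studies):
        per_study = sum(-(Y_w(x) - y) ** 2 for x, y in data) / len(data)
        total += nu[k] * per_study
    return total

studies = [[(1.0, 1.0), (2.0, 2.0)], [(1.0, 0.0)]]  # two toy studies
Y_w = lambda x: x                                    # perfect on study 1
# Generalist choice nu = (1/2, 1/2): study 1 contributes 0, study 2
# contributes -(1 - 0)^2 = -1, so U_hat = -0.5.
U_hat = utility_estimate(Y_w, studies, [0.5, 0.5])
```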
Generalist prediction
---------------------
The target distribution $\pi$ coincides with a future sequence of heterogeneous studies $K + 1, K + 2, \ldots$, and the utility of a generalist prediction function $\hat Y_w$ can be represented as $$U_g(w) = \lim_{I\to\infty}I^{-1}\sum_{i=1}^I U(w;P_{K+i}),$$ where the subscript $g$ reminds us that the limit is taken in the generalist case. We will consider scenarios where the above limit is well defined for any $w \in W$ with probability 1. If $P_1,P_2,\ldots$, are exchangeable, i.e. there exists $Q$ such that $P_k|Q\overset{iid}{\sim} Q, k = 1, 2, \ldots,$ then $U_g(w)$ can be rewritten as $$U_g(w) = \int \left[\int_{(x,y)} u(\hat Y_w(x), y)dP(x,y)\right]dQ.$$ Changing the order of integration, $$U_g(w) = \int_{(x,y)} u(\hat Y_w(x), y) dP_0(x,y),$$ where $P_0$ is the mean of $Q$, i.e. $P_0(\cdot) = \int P(\cdot)dQ$.
When $\pi = P_0$, we can use $\nu_k = 1/K$ for $k = 1,\ldots, K$ in expression (\[eq:U-est\]) to approximate $U_g(w)$. Note that in several applications the sequence $P_1,P_2,\ldots$ may not be exchangeable. For example, it can be better modeled by a Markov chain [@shumway2017time], i.e. the conditional distribution of $P_k$ given $P_1,\ldots,P_{k-1}$ coincides with the conditional distribution of $P_k$ given $P_{k-1}$. Throughout this manuscript we will not need to specify the model $Q$, but we will assume the exchangeability of the sequence $P_1,P_2,\ldots$.
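The identity $U_g(w) = \int u \, dP_0$ can be checked numerically in a one-dimensional toy model; the constant prediction $c$ and the hyper-distribution below are illustrative assumptions, not the paper's setting:

```python
import numpy as np

# Numerical check that, under exchangeability, averaging U(w; P_{K+i})
# over future studies approaches the utility under the mean distribution
# P_0. Toy model: P_k is y ~ N(mu_k, 1) with mu_k ~ N(0, 1), and the
# prediction is a constant c, so U(c; P_k) = -((c - mu_k)^2 + 1) and the
# utility under P_0 (i.e. y ~ N(0, 2)) is -(c^2 + 2).
rng = np.random.default_rng(4)
c = 0.3
mus = rng.normal(0.0, 1.0, 200000)      # means of future studies
U_avg = np.mean(-((c - mus) ** 2 + 1.0))
U_P0 = -(c ** 2 + 2.0)
# U_avg should be close to U_P0
```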
Specialist prediction
---------------------
In this case, the target population distribution $\pi$ coincides with $P_k$, for a single $k \in \{1, 2, \ldots , K\}$. The expected utility of a specialist prediction function is $$U_s(w;k) = U(w; P_k) = \int_{(x,y)} u(\hat Y_w(x), y)dP_k(x,y).$$ We can use the empirical distribution of study $k$ to estimate $P_k$, and the implied specification of $\nu$ in (\[eq:U-est\]) is $\nu_i = 1$ for $i = k$ and 0 otherwise.
Generalist and specialist stacking
==================================
We use stacking for generalist and specialist predictions in multi-study settings. Recall the definition of a stacked prediction function $\hat Y_w(\cdot) = \sum_{\ell\leq L, t\leq T} w_{\ell, t}\hat Y_t^\ell(\cdot)$ based on a set of SPFs $\hat{\mathcal Y}$ and weights $w\in W$. We indicate as [*oracle*]{} weights $$\begin{gathered}
w_g = \operatorname*{arg\,max}_{w\in W} U_g(w)\\
w_s^{(k)} = \operatorname*{arg\,max}_{w\in W} U_s(w;k).\end{gathered}$$ Note that $W\subset \mathbb R^{TL}$.
Constraining the weights $w$ and penalizing them can lead to identical results. For example, in several optimization problems, constraining a $TL$-dimensional parameter to $W = \{w:\|w\|_2\leq c\}$ is equivalent to the unconstrained optimization with an $L_2$ penalty on the parameter. The use of penalties in the estimation of stacking weights has been discussed in [@breiman1996stacked; @leblanc1996combining]. One of the main arguments is that members of the library of SPFs $\hat{\mathcal Y}$ tend to be correlated, especially those that are trained on the same set $D_t$.
Stacking with data reuse
------------------------
A direct approach to select $w_g$ and $w_s^{(k)}$ consists in optimizing the $\hat U(w; \nu)$ estimates of $U_g(w)$ and $U_s(w; k)$. When the studies are exchangeable, $\hat U(w;K^{-1} {\mathbf{1}}_K)$ can be used to select $w_g$. The estimation of stacking weights attempts to provide values close to the oracle solution $w_g$. If instead we develop a specialist prediction function for study $k$, we can optimize $\hat U(w; e_k)$ to select $w_s^{(k)}$, where $e_k$ is a $K$-dimensional vector with the $k$-th component equal to one and all others zero. This approach reuses data: training an SPF $\hat Y^{\ell}_t$ uses part of the data $D_t\subset \mathcal S$, which are then reused to compute $\hat U(w;\nu)$. Data reuse makes $\hat U(w;\nu)$ a biased estimator of $U_g(w)$ and $U_s(w;k)$. In the next paragraph we illustrate a simple example where the bias of $\hat U(w;\nu)$ due to data reuse makes the selected weights, denoted $\hat w$, erroneously favor those $\hat Y_t^\ell$ generated from studies with large $\nu_k$.
Consider a scenario where $\mathcal D$ is study-specific and $K=2$. Let $u(y,y') = -(y-y')^2$. We only observe $y_{i,k}$ without any covariates and we assume $y_{i,k}\sim N(\mu_k,1)$ for $k=1,2$, where $\mu_k\sim N(0,1)$. Let $n_1=n_2 = n$. In this simple example, we generate a library of SPFs with two constant functions $\hat Y_1(\cdot) = \bar y_1$ and $\hat Y_2(\cdot) = \bar y_2$, where $\bar y_k = n^{-1}\sum_i y_{i,k}$. Under the constraint that $W = \Delta_1$, where $\Delta_1$ is the standard 1-simplex, the weights that optimize $\hat U(w;\nu)$ are $\hat w = (\hat w_1, \hat w_2) = (\nu_1,\nu_2)$, while the oracle weights $w_g = (w_{g,1}, w_{g,2})$ that optimize $U_g(w)$ are $$w_g = \left\{
\begin{array}{cc}
\left(\frac{|\bar y_2|}{|\bar y_1| + |\bar y_2|}, \frac{|\bar y_1|}{|\bar y_1| + |\bar y_2|}\right) & \bar y_1\cdot \bar y_2 < 0, \\
(1,0) & |\bar y_1|\leq |\bar y_2|,~~\bar y_1\cdot \bar y_2 \geq 0,\\
(0,1) & |\bar y_1|> |\bar y_2|,~~\bar y_1\cdot \bar y_2 \geq 0.
\end{array}\right.$$
The oracle weights favor $\hat Y_1$, i.e. $w_{g,1}>w_{g,2}$, if $\text{cMSE}(\hat Y_1) < \text{cMSE}(\hat Y_2)$, where cMSE indicates the conditional MSE of an SPF $\hat Y_{t}^\ell$: $$\text{cMSE}{(\hat Y)} = \int_{(x,y)} \left(y - \hat Y(x)\right)^2 dP_0(x,y).$$ The cMSE measures the actual prediction performance of $\hat Y_t^\ell$ across studies given the observed data $(y_{1,1},\ldots,y_{1,n},y_{2,1},\ldots,y_{2,n})$. Note that in our example, $\text{cMSE}(\hat Y_k) = |\bar y_k|^2 + 2$, $k=1,2$. On the other hand, $\hat w$ favors $\hat Y_k$ whenever $\nu_k>\nu_{k'}$, regardless of the cMSE of each SPF.
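A quick Monte Carlo check of the claim $\hat w = (\nu_1, \nu_2)$; the study means, sample size, and grid resolution below are arbitrary choices for illustration:

```python
import numpy as np

# Check of the two-study example: with constant SPFs Yhat_k = ybar_k and
# weights on the 1-simplex, the data-reuse estimate U_hat(w; nu) is
# maximized at w = (nu_1, nu_2), regardless of each SPF's accuracy.
rng = np.random.default_rng(0)
n = 50
y1 = rng.normal(1.0, 1.0, n)    # study 1 (mu_1 = 1, arbitrary)
y2 = rng.normal(-1.0, 1.0, n)   # study 2 (mu_2 = -1, arbitrary)
ybar = np.array([y1.mean(), y2.mean()])
nu = np.array([0.7, 0.3])

def U_hat(w1):
    m = w1 * ybar[0] + (1 - w1) * ybar[1]   # stacked (constant) prediction
    return -(nu[0] * np.mean((y1 - m) ** 2) + nu[1] * np.mean((y2 - m) ** 2))

grid = np.linspace(0.0, 1.0, 10001)
w1_hat = grid[np.argmax([U_hat(w) for w in grid])]
# w1_hat equals nu_1 = 0.7 up to grid resolution
```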
To understand the discrepancy described above, we examine the bias of $\hat U(w;K^{-1}{\mathbf{1}}_K)$ relative to $U_g(w)$, defined as $\mathbb E\left(\hat U(w;K^{-1}{\mathbf{1}}_K) - U_g(w)\right)$, where the expectation is taken over all observed data $(y_{1,1},\ldots,y_{1,n},y_{2,1},\ldots,y_{2,n})$. By definition, we have $$\begin{aligned}
\hat U(w;K^{-1}{\mathbf{1}}_K) - U_g(w) =& \underbrace{2(\nu_1w_2 + \nu_2w_1)\bar y_1\bar y_2+ 2(\nu_1w_1 \bar y_1^2 + \nu_2w_2 \bar y_2^2)}_{\text{data-reuse}}\\
+& 2 - \nu_1\overline{y_1^2} - \nu_2\overline{y_2^2},\end{aligned}$$ where $\overline{y_k^2} = n^{-1}\sum_{i}y_{i,k}^2$. The first two terms on the right-hand side exist because of data reuse, that is, we evaluate the utility of $w_1\hat Y_1 + w_2\hat Y_2$ using the training data of $\hat Y_1$ and $\hat Y_2$.
It follows that $$\mathbb E\left(\hat U(w;K^{-1}{\mathbf{1}}_K) - U_g(w)\right)
= \frac{2(n+1)}{n}(\nu_1w_1 + \nu_2w_2).$$ We can see that data reuse introduces a non-zero bias to $\hat U(w;K^{-1}{\mathbf{1}}_K)$. This bias term is not always maximized at $w_g$. In fact, if $\nu_1\geq \nu_2$, $\nu_1w_1 + \nu_2w_2$ is maximized at $w = (1,0)$. In this case, the bias term shifts $\hat w_1$ towards $1$, making $\hat w_1$ larger than $w_{g,1}$ whenever $w_{g,1}\neq 1$. The strength of this shift grows with $\nu_1$, which explains why $\hat w_1$ increases as $\nu_1$ increases.
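The bias formula can be verified by simulation; the weights $w = (0.3, 0.4)$ and sample size below are arbitrary illustrative choices:

```python
import numpy as np

# Monte Carlo check of the data-reuse bias
#   E[U_hat - U_g] = 2(n+1)/n * (nu1*w1 + nu2*w2)
# in the two-study example: mu_k ~ N(0,1), y_{i,k} | mu_k ~ N(mu_k, 1),
# constant SPFs ybar_1, ybar_2, and u(yhat, y) = -(yhat - y)^2.
rng = np.random.default_rng(1)
n, reps = 20, 100000
nu1 = nu2 = 0.5
w1, w2 = 0.3, 0.4                      # arbitrary fixed weights

mu = rng.normal(0.0, 1.0, (reps, 2))
y1 = mu[:, :1] + rng.normal(0.0, 1.0, (reps, n))
y2 = mu[:, 1:] + rng.normal(0.0, 1.0, (reps, n))
m = w1 * y1.mean(axis=1) + w2 * y2.mean(axis=1)   # stacked prediction
U_hat = -(nu1 * ((y1 - m[:, None]) ** 2).mean(axis=1)
          + nu2 * ((y2 - m[:, None]) ** 2).mean(axis=1))
U_g = -(2.0 + m ** 2)                  # exact utility under P_0: y ~ N(0, 2)
bias_mc = (U_hat - U_g).mean()
bias_theory = 2 * (n + 1) / n * (nu1 * w1 + nu2 * w2)   # = 0.735
```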
The effect of the bias term on the discrepancy between $\hat w$ and $w_g$ is particularly pronounced when training specialist PFs for study $k$ with $\hat U(w;e_k)$. In our example, $\hat w = e_k$ regardless of the value of the cMSE of $\hat Y_k$, which in our setting also captures the prediction accuracy of $\hat Y_k$ on future samples in study $k$. This result also generalizes to $K>2$ and to the setting where $L>1$ with at least one of the learners using $-u(y,y')$ as its loss function. The specialist PF for study $k$ is then equal to the SPF trained on study $k$, and we do not borrow any information from the other studies, even though they share with study $k$ the same hyper-distribution for the mean of the outcome $Y$.
Stacking without data reuse {#sec:zero-stacking}
---------------------------
A common approach to limit the effects of data reuse is cross-validation (CV). CV in stacking is implemented by using part of the data for the training of the library of PFs $\hat{\mathcal Y}$ and the rest of the data for the estimation of $w$ (see for example [@breiman1996stacked]). How to split the data in multi-study settings is not as obvious as in the single-study setting, due to the multi-level structure of the data. We consider two approaches based on CV. We first introduce their primary characteristics; their precise definitions are deferred to Sections 3.2.1 and 3.2.2.
1. **Within-set** (${\text{CV}_{\text{ws}}}$). For this approach, we assume that sets $D_t$ are mutually exclusive. An $M$-fold ${\text{CV}_{\text{ws}}}$ includes $M$ iterations. At each iteration, we randomly partition each $D_t$ into $D_{t,1}$ and $D_{t,2}$. We use $\{D_{t,1};t\leq T\}$ to generate the class of SPFs and predict outcomes for samples in $\{D_{t,2};t\leq T\}$. The final selection of $w$ maximizes a utility estimate that involves all predictions generated across the $M$ iterations.
2. **Cross-set** (${\text{CV}_{\text{cs}}}$). This approach can handle an LTS with overlapping $D_t$ sets and involves a pre-defined number of iterations. At each iteration, we randomly select $T'$ sets $D_t\in\mathcal D$ to generate the library of SPFs. We then predict outcomes for samples in the remaining $D_t$ sets using each member of the library. The final selection of $w$ maximizes a utility function that involves predictions generated across all iterations.
### Within-set CV
We describe the ${\text{CV}_{\text{ws}}}$ procedure in the multi-study setting. It can be used to estimate generalist and specialist utilities. An $M$-fold ${\text{CV}_{\text{ws}}}$ for stacking includes four steps. Without loss of generality, we assume that $|D_t|$ is divisible by $M$ for $t=1,\ldots, T$, where $|D_t|$ is the cardinality of $D_t$.
1. Randomly partition each index set $D_t$ into $M$ equal-sized subsets and denote them as $D_{t,m}, m = 1,\ldots,M$.
2. For every $m=1,\ldots,M$, train $\hat Y_{t,m}^\ell$ with $\{(x_{i,k},y_{i,k});(i,k)\in D_{t}\cap D_{t,m}^c\}$ for $\ell = 1,\ldots, L$ and $t = 1,\ldots,T$.
3. For a sample with index $(i,k)$, denote the only index $m$ such that $(i,k)\in D_{t,m}$ by $m(i,k)$. The estimated utility function for $w$ is $$\hat U_{\text{ws}}(w;\nu) = \sum_{k=1}^K\frac{\nu_k}{n_k}\sum_{i=1}^{n_k}u\left(\sum_{\ell,t}w_{\ell,t}\hat Y_{t,m(i,k)}^\ell(x_{i,k}), y_{i,k}\right).
\label{eq:cvws-utility}$$ and $$\hat w^{\text{ws}} = \operatorname*{arg\,max}_{w\in W} \hat U_{\text{ws}}(w;\nu).$$
4. The ${\text{CV}_{\text{ws}}}$ stacked PF is $$\hat Y_{w}^{\text{ws}} = \sum_{\ell, t} \hat w^{\text{ws}}_{\ell,t}\hat Y_t^\ell.$$
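The four steps can be sketched as follows for a study-specific LTS with a single stand-in learner (the sample mean); with squared-error utility and $W = \mathbb{R}^K$, step 3 reduces to a least-squares problem:

```python
import numpy as np

# Sketch of M-fold within-set CV (CV_ws) stacking for a study-specific
# LTS (T = K, L = 1), with the sample mean as a stand-in learner,
# u(yhat, y) = -(yhat - y)^2, and nu_k = 1/K (generalist).
rng = np.random.default_rng(2)
M, K, n = 3, 4, 30
Y = [rng.normal(rng.normal(0.0, 1.0), 1.0, n) for _ in range(K)]

# Step 1: partition each study's indices into M folds.
folds = [np.array_split(rng.permutation(n), M) for _ in range(K)]

# Step 2: train Yhat_{t,m} on D_t minus fold m (a constant here).
Yhat = [[np.delete(Y[t], folds[t][m]).mean() for m in range(M)]
        for t in range(K)]

# Step 3: held-out design matrix P[k, i, t] = Yhat_{t, m(i,k)}; with
# W = R^K, maximizing U_ws amounts to (possibly rank-deficient)
# least squares, which lstsq handles via the min-norm solution.
P = np.zeros((K, n, K))
for k in range(K):
    for m in range(M):
        for i in folds[k][m]:
            P[k, i, :] = [Yhat[t][m] for t in range(K)]
w_ws, *_ = np.linalg.lstsq(P.reshape(-1, K),
                           np.concatenate(Y), rcond=None)

# Step 4: the CV_ws stacked PF combines the full-data SPFs with w_ws.
Yhat_full = np.array([y.mean() for y in Y])
stacked_constant = float(w_ws @ Yhat_full)
```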
For specialist predictions, ${\text{CV}_{\text{ws}}}$ can solve the data-reuse problem described in Section 3.1. To illustrate this, we revisit the example of Section 3.1 and assume $\mathcal D$ is study-specific. Denote by $\hat w_{s}^{\text{ws}} = \operatorname*{arg\,max}_{w} \hat U_{\text{ws}}(w;e_1)$ the ${\text{CV}_{\text{ws}}}$-selected weights for the specialist PF of study 1, and let $\hat w_s = \operatorname*{arg\,max}_w \hat U(w;e_1)$. We measure the prediction accuracy of a PF $\hat Y_w$ with the expected MSE on study 1, $\text{MSE}_1(w) = \int_{\mu_1}(\mu_1 - \hat Y_w)^2dP(\mu_1) + 1$, where $P(\mu_1)$ is the distribution of $\mu_1$.
Since there is no analytic expression for $\hat w^{\text{ws}}_s$, we use Monte Carlo simulation (1000 replications) to compare $\text{MSE}_1(\hat w_s^{\text{ws}})$ and $\text{MSE}_1(\hat w_s)$. We set $n = 90$ and $\mu_1\sim N(0,0.1)$, where borrowing information from study 2 is beneficial for the estimation of $\mu_1$. When varying $M$ from 3 to 15, we observe that $\mathbb E\left[\text{MSE}_1(\hat w_s) - \text{MSE}_1(\hat w_s^{\text{ws}})\right]$ first increases from 0.0014 to 0.002, then decreases for $M>8$, reaching $0.0012$ at $M=15$. Here the expectation is taken over $(y_{1,1},\ldots,y_{1,n},y_{2,1},\ldots,y_{2,n})$. This indicates that the ${\text{CV}_{\text{ws}}}$-based approach produces a more accurate PF than stacking with data reuse, but the advantage decreases if $M$ is large.
In contrast, we illustrate that, if we compare ${\text{CV}_{\text{ws}}}$ and stacking with data reuse for generalist predictions, the difference between the resulting estimates of $U_g(w)$, $\hat U(w;\nu) - \hat U_{\text{ws}}(w;\nu)$, converges to zero faster than the difference between $\hat U(w;\nu)$ and its limit as $n\to\infty$ for any fixed $K$, rendering $\hat U_{\text{ws}}(w;\nu)$ asymptotically equivalent to $\hat U(w;\nu)$.
To see this result, we first consider the example in Section 3.1 with fixed $\mu = (\mu_1,\mu_2)^\intercal$ and bounded $W$. Since $u(y,y') = -(y-y')^2$, the negative utility function for stacking with data reuse is $-\hat U(w;(1/2,1/2)) = w^\intercal \hat \Sigma w - 2 \hat b^\intercal w + (\overline{y_1^2} + \overline{y_2^2})/2$, where $$\hat \Sigma = \left[
\begin{array}{cc}
\bar y_1^2&\bar y_1\bar y_2\\
\bar y_1\bar y_2&\bar y_2^2
\end{array}
\right],~~ \hat b = \left[
\begin{array}{c}
\bar y_1 \bar y\\
\bar y_2 \bar y
\end{array}
\right],$$ and $\bar y = (\bar y_1 + \bar y_2)/2$. Let $\bar y_{k,-m} = (n(M-1)/M)^{-1}\sum_{i\notin D_{k,m}} y_{k,i}$ and $\bar y_{k,m} = (n/M)^{-1}\sum_{i\in D_{k,m}} y_{k,i}$. We use a 2-fold ${\text{CV}_{\text{ws}}}$ to select $w$ for generalist predictions. The associated utility function is $\hat U_{\text{ws}}(w;(1/2,1/2))$, with $$-\hat U_{\text{ws}}(w;(1/2,1/2)) = w^\intercal \hat \Sigma_{\text{ws}} w - 2 \hat b_{\text{ws}}^\intercal w + \frac{\overline{y_1^2} + \overline{y_2^2}}{2},$$ where $$\small
\hat \Sigma_{\text{ws}} = M^{-1}\left[
\begin{array}{cc}
\sum\limits_{m=1}^2 \bar y_{1,-m}^2& \sum\limits_{m=1}^2 \bar y_{1,-m}\bar y_{2,-m}\\
\sum\limits_{m=1}^2 \bar y_{1,-m}\bar y_{2,-m}& \sum\limits_{m=1}^2 \bar y_{2,-m}^2\\
\end{array}\right],~~\hat b_{\text{ws}} = (2M)^{-1}\left[
\begin{array}{c}
\sum\limits_{m=1}^2(\bar y_{1,m} + \bar y_{2,m})\bar y_{1,-m}\\
\sum\limits_{m=1}^2(\bar y_{1,m} + \bar y_{2,m})\bar y_{2,-m}
\end{array}\right].
\normalsize$$ Note by construction, $\bar y_k = (M-1)/M\bar y_{k,-m} + 1/M\bar y_{k,m}$. Therefore, $$\sum_{m} \bar y_{k,-m}\bar y_{k',-m} = \frac{M^3 - 2M^2 + M}{(M-1)^2} \bar y_k\bar y_{k'} + \sum_m\left(\bar y_{k,m}\bar y_{k',m} - \bar y_k\bar y_{k'}\right),$$ for any $k,k' \in \{1,2\}$. It is straightforward to show that $${\text{var}}\left(\sum_m\left(\bar y_{k,m}\bar y_{k',m} - \bar y_k\bar y_{k'}\right)\right) = \frac{1}{4}{\text{var}}\left((\bar y_{k,1} - \bar y_{k,2})(\bar y_{k',1} - \bar y_{k',2})\right) = \frac{1 + \mathbb I(k=k')}{n^2}.$$ Therefore $
\sum_{m} \bar y_{k,-m}\bar y_{k',-m} = 2\bar y_k\bar y_{k'} + O_p(1/n).
$ Similarly, we can prove that $\sum_m (\bar y_{k,m} + \bar y_{k',m})\bar y_{k,-m} = 2M\bar y \bar y_k + O_p(1/n)$ for $k,k'\in\{1,2\}$ and $k\neq k'$. Based on these results, if $w$ is bounded by a finite constant, we have $
|\hat U_{\text{ws}}(w;(1/2,1/2)) - \hat U(w;(1/2,1/2))| \leq O_p(1/n).
$
On the other hand, the limit of $-\hat U(w;(1/2,1/2))$ as $n\to\infty$ is $w^\intercal \mu\mu^\intercal w - w^\intercal \mu\mu^\intercal {\mathbf{1}}_2 + \mu^\intercal \mu/2 + 1$. Since $\bar y_k\bar y_{k'} = \mu_k\mu_{k'} + O_p(1/\sqrt{n})$ and $\bar y_{k}\bar y = \mu_k(\mu_1 + \mu_2)/2 + O_p(1/\sqrt{n})$ by the central limit theorem and the delta method, $\hat U(w;(1/2,1/2))$ converges to its limit at rate $1/\sqrt{n}$. Hence $|\hat U_{\text{ws}}(w;(1/2,1/2)) - \hat U(w;(1/2,1/2))|$ is negligible compared to the random fluctuation of $\hat U(w;(1/2,1/2))$ when $n$ is large.
This result on convergence rate also holds when $E(Y|X)$ is linear and $\hat Y_t^\ell$ is trained with an ordinary least squares (OLS) regression. Consider $K$ studies, $$y_{i,k} = \beta_k^\intercal x_{i,k} + \epsilon_{i,k},
\label{eq:diff-sim}$$ where $\beta_k$ is a study-specific regression coefficient and $\epsilon_{i,k}$ are $N(0,\sigma^2)$ noise terms. The $x_{i,k}\sim N(0,I)$ are *iid* $p$-dimensional covariate vectors in all $K$ studies. In the following proposition, we show the respective rates at which $|\hat U_{\text{ws}}(w;1/K{\mathbf{1}}_K)-\hat U(w;1/K{\mathbf{1}}_K)|$ and $|\hat U(w;1/K{\mathbf{1}}_K) - \lim_{n\to\infty}\hat U(w;1/K{\mathbf{1}}_K)|$ converge to $0$.
\[prop:cv-eq\] Assume $\mathcal D$ is study-specific and $n_k=n$ for $k\in\{1,2,\ldots,K\}$, where $n$ is divisible by $M$. Fix $\beta_1,\ldots,\beta_K$ and assume the data are generated with (\[eq:diff-sim\]). Let $L=1$ and the single learner be an OLS procedure. If any sub-matrix $X_k'$ formed by $(1-1/M)n$ rows of $X_k$ has full column rank for every $k$, and $u(\hat y,y) = -(\hat y-y)^2$, then, for $W$ a bounded subset of $\mathbb R^{T}$, the following inequalities hold $$\label{eq:diff-both}
\begin{aligned}
\sup_{w\in W}\left|\hat U(w;K^{-1}{\mathbf{1}}_K) - \hat U_{\text{ws}}(w;K^{-1}{\mathbf{1}}_K)\right| &\leq O_p(1/n),\\
\sup_{w\in W}\left|\hat U(w;K^{-1}{\mathbf{1}}_K) - \lim_{n\to\infty}\hat U(w;K^{-1}{\mathbf{1}}_K)\right| &\leq O_p(1/\sqrt{n}).
\end{aligned}$$
The above proposition indicates that the difference between the utility functions of data-reuse stacking and ${\text{CV}_{\text{ws}}}$ is of smaller order than the random fluctuation of $\hat U(w;K^{-1}{\mathbf{1}}_K)$, which in turn establishes the asymptotic equivalence of the utility functions of stacking with data reuse and ${\text{CV}_{\text{ws}}}$. Since the results in Proposition \[prop:cv-eq\] concern uniform convergence, the near equivalence of $\hat U(w;K^{-1}{\mathbf{1}}_K)$ and $\hat U_{\text{ws}}(w;K^{-1}{\mathbf{1}}_K)$ translates into asymptotic equivalence of $\hat w$ and $\hat w^{\text{ws}}$, provided the limit of $\hat U(w;K^{-1}{\mathbf{1}}_K)$ has a unique maximizer in $W$.
In Figure \[fig:prop-rate\], we plot the estimated $\left|\hat U(w;K^{-1}{\mathbf{1}}_K) - \hat U_{\text{ws}}(w;K^{-1}{\mathbf{1}}_K)\right|$ and $|\hat U(w;K^{-1}{\mathbf{1}}_K) - \lim_{n\to\infty} \hat U(w;K^{-1}{\mathbf{1}}_K)|$ at $w = K^{-1}{\mathbf{1}}_K$ as a function of $n$. We set $K=20$, $p = 10$ and $\beta_k\sim N({\mathbf{1}}_p, I)$. We use a 5-fold ${\text{CV}_{\text{ws}}}$.
### Cross-set CV and stacking with no data reuse
In this section, we focus on leave-one-set-out ${\text{CV}_{\text{cs}}}$ where $T$ iterations are performed. At each iteration a different $D_t$ is held out. We first introduce this CV scheme when $\mathcal D$ is study-specific hence $T = K$:
1. Generate the library of SPFs $\hat{\mathcal Y}$ using every set in $\mathcal D$. Note that this library remains the same across $T$ iterations.
2. At iteration $t$, evaluate the utility of $w$ using $D_t$ with SPFs that are not trained on $D_t$: $$\hat U_{\text{cs}}^{(t)}(w;\nu) = \frac{1}{n_t}\sum_{i = 1}^{n_t} u\left(\sum_{\ell,t'}\mathbb I(t'\neq t) w_{\ell,t'} \hat Y_{t'}^\ell(x_{i,t}), y_{i,t}\right).
\label{eq:loo-ind}$$
3. Combine all $\hat U_{\text{cs}}^{(t)}$ across the $T$ iterations, each evaluated at a scaled $w$, to obtain the utility function $\hat U_{\text{cs}}(w;\nu)$ used for the selection of $w$ in ${\text{CV}_{\text{cs}}}$: $$\hat U_{\text{cs}}(w;\nu) = \sum_t \nu_t\hat U_{\text{cs}}^{(t)}(w;\nu) = \sum_{t=1}^K \frac{\nu_t}{n_t}\sum_i u\left(\sum_{\ell,t'}\frac{\mathbb I(t'\neq t)}{1-\nu_t}w_{\ell,t'}\hat Y_{t'}^\ell(x_{i,t}), y_{i,t}\right).
\label{eq:cvcs-U-study}$$ The scaling factor $(1-\nu_t)^{-1}$ is used to extrapolate the predicted value given by the full ensemble using the prediction from the partial ensemble. For example, in generalist predictions for exchangeable studies with $\nu_t = K^{-1}$, we expect that $\hat Y_k^\ell(x)\approx \hat Y_{k'}^\ell(x)$ for $k'\neq k$ and hence $(\sum_{k,\ell}\hat Y_k^\ell(x))/(\sum_{k\neq k',\ell}\hat Y_k^\ell(x))\approx K/(K-1)$.
4. Let $\hat w^{\text{cs}} = \operatorname*{arg\,max}_{w\in W} \hat U_{\text{cs}}(w;\nu)$; the ${\text{CV}_{\text{cs}}}$ stacked PF is then $$\hat Y_{w}^{\text{cs}} = \sum_{\ell, t} \hat w_{\ell, t}^{\text{cs}}\hat Y_{t}^\ell.$$
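Steps 1-3 above can be sketched as follows for a study-specific $\mathcal D$ with a single OLS learner ($L=1$). The function names and data layout are our own illustration, not the authors' implementation:

```python
import numpy as np

def fit_ols(X, y):
    # Step 1 helper: an OLS single learner returning an SPF \hat Y_t
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return lambda Xnew: Xnew @ beta

def U_cs(w, nu, Xs, ys, u=lambda yhat, y: -(yhat - y) ** 2):
    # Leave-one-set-out utility \hat U_cs(w; nu) for study-specific D (T = K)
    K = len(Xs)
    spfs = [fit_ols(Xs[t], ys[t]) for t in range(K)]      # library, fixed across iterations
    total = 0.0
    for t in range(K):                                    # step 2: hold out D_t
        partial = sum(w[tp] * spfs[tp](Xs[t]) for tp in range(K) if tp != t)
        partial /= 1.0 - nu[t]                            # step 3: (1 - nu_t)^{-1} rescaling
        total += nu[t] * np.mean(u(partial, ys[t]))
    return total
```

For generalist prediction with exchangeable studies one would take `nu = np.ones(K) / K` and maximize `U_cs` over the constraint set $W$.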
To understand the rationale for (\[eq:cvcs-U-study\]), we consider applying ${\text{CV}_{\text{cs}}}$ for generalist predictions. Note that under exchangeable $P_k$ distributions, $k=1,\ldots$, $\hat U_{\text{cs}}^{(t)}(w)$ is an unbiased estimator of $U_g(w^{(t)})$, where $w^{(t)}$ is equal to $w$ except for components $w_{\ell,t}$ $\ell = 1,\ldots,L$, which are set to zero: $$\mathbb E\left[\hat U^{(t)}_{\text{cs}}(w)\right] = \mathbb E\left[U_g(w^{(t)})\right],$$ where the expectation is taken over $\mathcal S$. For $\{\nu_t;t\leq K\}\in \Delta_{K-1}$, it follows that $$\mathbb E\left[\sum_{t=1}^K \nu_t \hat U^{(t)}_{\text{cs}}(w)\right] = \sum_{t=1}^K \nu_t\mathbb E\left[U_g(w^{(t)})\right].$$
Consider the Taylor expansion of $U_g(w^{(t)}),~t=1,\ldots,K$, around ${\mathbf{0}}$. Since $\sum_t \nu_t = 1$, $$\sum_t \nu_t U_g(w^{(t)}) = U_g({\mathbf{0}}) + \frac{\partial U_g}{\partial w^\intercal}({\mathbf{0}}) \sum_t \nu_t w^{(t)} + o(\|w\|).$$ By construction $\sum_t \nu_t w^{(t)} = ((1-\nu_t)w_{\ell,t};\ell=1,\ldots,L,t = 1,\ldots,K)$. Let $S$ be the $KL\times KL$ diagonal matrix whose diagonal entry corresponding to $w_{\ell,t}$ equals $1-\nu_t$; then $\sum_t \nu_t w^{(t)} = Sw$.
Based on the results above, we know that $$\mathbb E\left[\sum_{t=1}^K \nu_t \hat U^{(t)}_{\text{cs}}(w)\right] = \mathbb E U_g({\mathbf{0}}) + \mathbb E\left[\frac{\partial U_g}{\partial w^\intercal}({\mathbf{0}})\right]Sw + o(\|w\|).$$ If $W$ is confined to a neighborhood of $\mathbf{0}$, e.g. $W = \{w:\|w\|_1\leq 1\}$, the linear term in the above expansion dominates the higher-order terms in $w$ and we have $$\mathbb E\left(\sum_t\nu_t\hat U^{(t)}_{\text{cs}}(w)\right) = \mathbb EU_g(Sw) + o(\|w\|).$$ Therefore, a nearly unbiased estimator of $U_g(w)$ is $$\hat U_{\text{cs}}(w;\nu) = \sum_t\nu_t\hat U_{\text{cs}}^{(t)}\left(S^{-1} w\right).$$ Expanding the last display yields the expression in (\[eq:cvcs-U-study\]).
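The identity $\sum_t \nu_t w^{(t)} = Sw$ can be sanity-checked numerically (with arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
K, L = 4, 2
w = rng.normal(size=K * L)
nu = rng.dirichlet(np.ones(K))
study = np.repeat(np.arange(K), L)         # study index t of each component w_{l,t}

# w^{(t)}: w with the L components belonging to study t set to zero
w_t = [np.where(study == t, 0.0, w) for t in range(K)]
lhs = sum(nu[t] * w_t[t] for t in range(K))

S = np.diag(1.0 - nu[study])               # diagonal entry 1 - nu_t for component w_{l,t}
assert np.allclose(lhs, S @ w)
```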
In general, $\mathcal D$ is not necessarily study-specific and might contain $D_t$'s that overlap. The utility function for ${\text{CV}_{\text{cs}}}$ in the general case can be constructed in a similar manner as when $\mathcal D$ is study-specific. First, we modify $\hat U^{(t)}_{\text{cs}}(w)$, which estimates the expected utility of the PF combining the library of SPFs with weight $w^{(t)}$, into $$\hat U^{(t)}_{\text{cs}}(w) = \frac{1}{|D_t|} \sum_{(i,k) \in D_t} u\left( \sum_{\ell, t'}\mathbb I(s_t\cap s_{t'}=\emptyset)w_{\ell,t'}\hat Y_{t'}^\ell(x_{i,k}), y_{i,k}\right),$$ where $s_t = \{k:(i,k)\in D_t\text{ for some }i=1,\ldots,n_k\}$ is the list of studies with at least one sample in $D_t$. This modified $\hat U^{(t)}_{\text{cs}}$ guarantees no data reuse even if the $D_t$'s overlap, since the set of studies involved in evaluating the utility of the stacked PF is disjoint from the set of studies used to train the SPFs entering that stacked PF.
With the no-data-reuse property, it follows that with exchangeable distributions $P_1,P_2,\ldots$ $$\mathbb E\left[\hat U_{\text{cs}}^{(t)}(w)\right] = \mathbb E\left[U_g(w^{(t)})\right],$$ where $w^{(t)}$ is equal to $w$ except for all elements $w_{\ell,t'}$, such that $s_{t'}\cap s_t\neq \emptyset$ and $\ell=1,\ldots,L$, which are equal to zero.
In the study-specific $\mathcal D$ scenario, each $\hat U_{\text{cs}}^{(t)}(w)$ is combined into $\hat U_{\text{cs}}(w;\nu)$ according to a relative importance $\nu_t$. This relative importance can be interpreted as the total probability mass assigned to data in $D_t$ in the empirical distribution of $\mathcal S$: $$\hat \pi(x,y) = \sum_k\frac{\nu_k}{n_k}\sum_i \mathbb I\left((x,y) = (x_{i,k},y_{i,k})\right).$$
With this definition, in the general case, the relative importance of $D_t$ is $
\gamma_t = \sum_{k\in s_t} \nu_k n_{k,t}/n_k,
$ where $n_{k,t}$ is the number of samples from study $k$ that are present in $D_t$. As in the study-specific case, we use these $\gamma_t$ to combine the $\hat U_{\text{cs}}^{(t)}(w)$.
With a Taylor expansion of $U_g(w^{(t)})$ around ${\mathbf{0}}$, we get $$\mathbb E\left(\sum_t\gamma_t \hat U_{\text{cs}}^{(t)}(w)\right) = \left(\sum_t \gamma_t\right)\mathbb EU_g({\mathbf{0}}) + \mathbb E\left[\frac{\partial U_g}{\partial w^\intercal}({\mathbf{0}})\right]\sum_t\gamma_t w^{(t)} + o(\|w\|).$$ Let $\Gamma$ be the $KL\times KL$ diagonal matrix whose entry corresponding to $w_{\ell,t}$ equals $
\sum_{t'} \gamma_{t'}\mathbb I(s_{t'}\cap s_t = \emptyset)$. By the definition of $w^{(t)}$, $\sum_t \gamma_t w^{(t)} = \Gamma w$. Therefore we have $$\left(\sum_t\gamma_t\right)^{-1}\mathbb E\left(\sum_{t}\gamma_t \hat U_{\text{cs}}^{(t)}(w)\right) = \mathbb EU_g({\mathbf{0}}) + \mathbb E\left[\frac{\partial U_g}{\partial w^\intercal}({\mathbf{0}})\right] \frac{\Gamma w}{\sum_t \gamma_t} + o(\|w\|).$$
If the linear term dominates the higher-order terms in the above expansion, we have $$\left(\sum_t\gamma_t\right)^{-1}\mathbb E\left(\sum_{t}\gamma_t \hat U_{\text{cs}}^{(t)}(w)\right) = \mathbb E U_g\left(\frac{\Gamma w}{\sum_t \gamma_t}\right) + o(\|w\|),$$ and again, an approximately unbiased estimator of $U_g(w)$ is $$\left(\sum_t\gamma_t\right)^{-1} \left(\sum_{t}\gamma_t \hat U_{\text{cs}}^{(t)}\left(\sum_t \gamma_t\Gamma^{-1}w\right)\right).$$ Expanding the above expression, we get the estimated utility function for ${\text{CV}_{\text{cs}}}$ for a general $\mathcal D$: $$\hat U_{\text{cs}}(w;\nu) = \sum_t \tilde \gamma_t |D_t|^{-1}\sum_{(i,k)\in D_t} u\left(\sum_{\ell, t'}\frac{\mathbb I\left(s_{t'}\cap s_t=\emptyset\right)}{r_t}w_{\ell,t'}\hat Y_{t'}^{\ell}(x_{i,k}), y_{i,k}\right),
\label{eq:cvcs-U}$$ where $\tilde\gamma_t = \gamma_t/(\sum_{t'}\gamma_{t'})$ and $r_t = \sum_{t'} \tilde\gamma_{t'} \mathbb I\left(s_{t'}\cap s_t=\emptyset\right)$.
An implicit assumption built into (\[eq:cvcs-U\]) is that none of the $r_t$'s is zero. This is equivalent to requiring that, for each $D_t$, there exists at least one $D_{t'}$ that contains a completely different list of studies than $D_t$. This is not too stringent if we only allow each $D_t$ to contain samples from a subset of studies.
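The bookkeeping for $s_t$, $\gamma_t$, $\tilde\gamma_t$ and $r_t$ can be sketched as follows (a minimal illustration with our own data layout, not the authors' implementation):

```python
import numpy as np

def cs_importance(D, nu, n):
    # D[t]: list of (i, k) sample indices in set t; nu[k]: study weights; n[k]: study sizes.
    # Returns s_t, gamma_t, tilde gamma_t and r_t as defined in the text.
    T = len(D)
    s = [set(k for _, k in Dt) for Dt in D]                     # studies present in D_t
    gamma = np.array([sum(nu[k] * sum(1 for _, kk in D[t] if kk == k) / n[k]
                          for k in s[t]) for t in range(T)])
    tg = gamma / gamma.sum()                                    # tilde gamma_t
    r = np.array([sum(tg[tp] for tp in range(T) if not (s[tp] & s[t]))
                  for t in range(T)])
    return s, gamma, tg, r
```

For a study-specific $\mathcal D$ (each $D_t$ is all of study $t$), this reduces to $\gamma_t = \nu_t$ and $r_t = 1 - \nu_t$.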
We now apply $\hat U_{\text{cs}}(w;(1/2,1/2))$ to select $w$ for the example in Section 3.1. Note that in this example $\mathcal D$ is study-specific. It follows that $$\hat U_{\text{cs}}(w;(1/2,1/2)) = -\left(2w_1^2\bar y_1^2 + 2w_2^2\bar y_2^2 - 2\bar y_1\bar y_2 + \frac{\overline{y_1^2}+\overline{y_2^2}}{2}\right).$$ The maximizer of $\hat U_{\text{cs}}(w;(1/2,1/2))$ is $
\hat w_{\text{cs}} = \left(\bar y_2^2/(\bar y_1^2+\bar y_2^2), \bar y_1^2/(\bar y_1^2 + \bar y_2^2)\right)$. Like the oracle weights $w_g$, $\hat w_{\text{cs}}$ depends on $\bar y_1$ and $\bar y_2$. We can compare the cMSE of the PFs specified by $\hat w$ and $\hat w_{\text{cs}}$: $$\text{cMSE}(\hat Y_{w}^{\text{cs}}) - \text{cMSE}(\hat Y_{\hat w}) = \frac{-(\bar y_1^2 - \bar y_2^2)^2(\bar y_1 + \bar y_2)^2}{4(\bar y_1^2 + \bar y_2^2)^2}\leq 0.$$ The equality holds if and only if $\bar y_1^2 = \bar y_2^2$. This comparison shows that, in this example, ${\text{CV}_{\text{cs}}}$ outperforms stacking with data reuse for selecting generalist PFs.
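The closed form of $\hat w_{\text{cs}}$ can be checked numerically. Taking hypothetical sample means, we minimize the $w$-dependent part of $-\hat U_{\text{cs}}$ over the constraint $w_1 + w_2 = 1$:

```python
import numpy as np

ybar1, ybar2 = 1.3, 0.7                       # hypothetical sample means for the two studies
w1 = np.linspace(0.0, 1.0, 200001)
# w-dependent part of -U_cs(w; (1/2, 1/2)) after substituting w_2 = 1 - w_1
obj = 2 * w1**2 * ybar1**2 + 2 * (1 - w1)**2 * ybar2**2
w1_grid = w1[np.argmin(obj)]
w1_closed = ybar2**2 / (ybar1**2 + ybar2**2)  # first component of hat w_cs
assert abs(w1_grid - w1_closed) < 1e-4
```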
In light of Proposition \[prop:cv-eq\], which establishes the asymptotic equivalence of ${\text{CV}_{\text{ws}}}$ and stacking with DR as $n\to\infty$ with $K$ fixed, we will refer to ${\text{CV}_{\text{cs}}}$ as stacking with NDR, and will denote $\hat U_{\text{cs}}(w;\nu)$ by $\tilde U(w;\nu)$ and $\hat w_{\text{cs}}$ by $\tilde w$ in the remainder of this manuscript. The performance of ${\text{CV}_{\text{cs}}}$ relative to stacking with DR in a general setting will be discussed in Proposition \[prop:comp\], where we give a condition under which ${\text{CV}_{\text{cs}}}$ outperforms stacking with DR.
${\text{CV}_{\text{ws}}}$ and ${\text{CV}_{\text{cs}}}$ have their own strengths and limitations. Datasets used for selecting $w$ in ${\text{CV}_{\text{cs}}}$ are not used to generate $\hat{\mathcal Y}$ and are thus genuinely “external”. This is not the case for ${\text{CV}_{\text{ws}}}$: for example, when $\mathcal D$ is study-specific, $\hat Y_{t,m}^\ell$ trained on $D_{t,-m}$ will be used to predict samples in $D_{t,m}$ from the same study $t$, which may still lead to optimistic estimation of $U_g(w)$, as observed for CV in model selection [@zhang1993model]. On the other hand, ${\text{CV}_{\text{cs}}}$ at each iteration considers linear combinations of sets of SPFs with lower cardinality than $\hat{\mathcal Y}$, whose cardinality is $TL$. In addition, ${\text{CV}_{\text{cs}}}$ cannot handle specialist predictions for certain types of $\mathcal D$; for example, $\hat U_{\text{cs}}(w;e_k)$ is not well defined if $\mathcal D$ is study-specific.
Penalization in stacking
------------------------
Adding a penalty to the utility function $\hat U(w;\nu)$ is a common practice for selecting weights $w$ in stacking [@breiman1996stacked; @leblanc1996combining]. Flexible forms of penalties on $w$ can deal with a wide variety of relationships between SPFs in $\hat{\mathcal Y}$. For example, group LASSO can be used when SPFs can be organized into related groups. In this section, we leverage this flexibility for specialist predictions when $n_k$ is small.
When $n_k$ is small, the estimated prediction accuracy of a PF is highly variable. This disadvantage is further compounded by the fact that, under certain conditions, specialist PFs fail to incorporate information from other studies. For instance, stacked PFs for specialist predictions derived from stacking with DR, when OLS regression serves as the single learner, do not put any weight on SPFs derived from studies other than the target study (see Section 3.1). To overcome this disadvantage arising from small $n_k$, we introduce a penalized utility function that promotes shrinkage of specialist PFs towards generalist PFs.
The penalized specialist weights are defined as follows. $$\hat w_{p}^{(k)} = \operatorname*{arg\,max}_{w\in W} \hat U(w;e_k) - \lambda \|w - \hat w_g\|_2^2,
\label{eq:penalty}$$ where $\lambda>0$ is a tuning parameter, $\hat w_g = \operatorname*{arg\,max}_{w\in W} \hat U(w;\nu_g)$ and $\nu_g$ is a set of study weights used in the generalist utility. We use leave-one-out cross-validation to select the tuning parameter $\lambda$. For sample $i$ in study $k$, we generate $\hat{\mathcal Y}$ using the data with this sample excluded. We then calculate the prediction error of the resulting stacked PF with weights $\hat w_p^{(k)}$ on sample $i$. This procedure is repeated over a set of candidate values for $\lambda$. We specify candidate values that decrease as $n_k$ increases.
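If the specialist utility is written in a quadratic form $\hat U(w;e_k) = -(w^\intercal\hat\Sigma_k w - 2\hat b_k^\intercal w + c)$, as in the quadratic representations of Section 4, the unconstrained penalized maximizer has a ridge-like closed form. This is a sketch under that assumption, ignoring the constraint $w\in W$:

```python
import numpy as np

def penalized_specialist(Sigma_k, b_k, w_g, lam):
    # argmax_w -(w' Sigma_k w - 2 b_k' w) - lam * ||w - w_g||^2
    # First-order condition: (Sigma_k + lam I) w = b_k + lam * w_g
    A = Sigma_k + lam * np.eye(len(b_k))
    return np.linalg.solve(A, b_k + lam * w_g)

Sigma_k = np.diag([2.0, 1.0])      # illustrative values
b_k = np.array([1.0, 1.0])
w_g = np.array([0.5, 0.5])
assert np.allclose(penalized_specialist(Sigma_k, b_k, w_g, 0.0), [0.5, 1.0])
assert np.allclose(penalized_specialist(Sigma_k, b_k, w_g, 1e8), w_g, atol=1e-6)
```

As $\lambda\to\infty$ the specialist weights collapse onto the generalist weights $\hat w_g$, matching the shrinkage behavior described above.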
In Figure \[fig:small-spe-illu\], we illustrate the effect of penalization when training specialist PFs for a study with a small sample size ($n_1 = 10$). As $\lambda$ increases from $10^{-3}$, the expected MSE in study 1, defined as $
\int_{(x,y)} (y - \hat Y(x))^2dP_1(x,y),
$ of the penalized specialist PFs first decreases, indicating the benefit of shrinking the specialist weights towards the generalist weights. The expected MSE is minimized at $\lambda\approx 8$; as $\lambda$ increases beyond this value, the expected MSE starts to increase. The details of the distributional assumptions for this example are described in Section 5.1.
Properties of generalist prediction models
==========================================
We examine properties of the generalist PFs $\hat Y_{w}$ when the weights $w$ are obtained with ($\hat w$) and without ($\tilde w$) data reuse, where $W = \{w:\|w\|_1\leq 1\}$. Recall that under the exchangeability assumption, $\hat w = \operatorname*{arg\,max}_{w\in W}\hat U(w;K^{-1}{\mathbf{1}}_K)$ and $\tilde w = \operatorname*{arg\,max}_{w\in W}\tilde U(w;K^{-1}{\mathbf{1}}_K)$. For the remainder of this manuscript, we will assume $u(\hat y, y) = -(\hat y - y)^2$. We work under the assumption that the data-generating distribution underlying the multi-study collection is a hierarchical model, and that $\mathcal D$ is study-specific. In the last part of this manuscript, we explore and discuss the results obtained when this assumption is relaxed.
We present two properties of generalist predictors. First, the expected MSEs of the generalist PFs in future studies $k>K$, as determined by $\hat w$ and $\tilde w$, converge to the MSE of an oracle PF $Y_{w_g^0}$, and the discrepancy between the MSEs is bounded by a monotone function of $K$ and $\min_k n_k$. Second, we investigate under which circumstances stacking without data reuse achieves a better MSE than stacking with data reuse.
The joint hierarchical model underlying available and future datasets is: $$\begin{aligned}
&y_{i,k} = f_k(x_{i,k}) + \epsilon_{i,k},\\
&f_k \sim F,~x_{i,k}\buildrel iid \over\sim F_X,
\end{aligned}
\label{eq:hier-mod}$$ for $i=1,\ldots,n_k$ and $k=1,2,\ldots$. Here $f_k: \mathbb R^p\to\mathbb R$, $k\ge 1$, are $iid$ random functions with marginal distribution $F$. The mean of $F$ is indicated as $f_0 = \int f dF(f)$. Covariate vectors $x_{i,k}\in \mathcal X$ have the same distribution $F_X$ with finite second moment across all datasets, and the noise terms $\epsilon_{i,k}$ are independent with mean zero and variance $\sigma^2$.
Our propositions \[prop:bound\] and \[prop:comp\] will assume that:
1. There exists an $M_1<\infty$ such that for any $k>0$ and $\ell\leq L$, $$\sup_{x\in\mathcal X} |f_k(x)|\leq M_1,~a.e.\text{ and}~\sup_{x\in\mathcal X}|\hat Y_k^\ell(x)|\leq M_1,~ a.e.$$ The first a.e. is with respect to the joint distribution of $f_k$ whereas the second a.e. concerns the joint distribution of $\mathcal S$.
For example, if $\mathcal X$ is a compact set and the outcomes $Y$ are bounded, then SPFs trained with a linear regression model with an $L_1$ constraint on the regression coefficients (i.e. a LASSO regression model), or with tree-based regression models, satisfy this assumption.
2. There exist $M_2<\infty$, $p_{\ell}>0$ and functions $Y_{k}^\ell$ for $k=1,\ldots, K$, $\ell = 1,\ldots,L$, such that $\sup_{x\in\mathcal X}|Y_{k}^\ell(x)|\leq M_1 ~a.e.$ and $$\int_{x} n_k^{2p_{\ell}}\left(\hat Y_{k}^\ell(x) - Y_{k}^\ell(x)\right)^2dF_X(x)\leq M_2.$$ Here $Y_k^\ell$ is the limit of $\hat Y_k^\ell$ as $n_k$ goes to infinity. For example, if the learner is an OLS model, then $p_\ell < 1/2$.
Let $X_k = (x_{1,k},\ldots,x_{n_k,k})^\intercal$ and $Y_k = (y_{1,k},\ldots,y_{n_k,k})^\intercal$. The predicted outcomes for study $k$, based on an SPF $\hat Y_{k'}^\ell$, are denoted by $\hat Y_{k'}^\ell(X_k) = (\hat Y^\ell_{k'}(x_{i,k});i\leq n_k)^\intercal$. When $u(y,y') = -(y-y')^2$, we have $$\begin{gathered}
\hat U(w;K^{-1}{\mathbf{1}}_K) = -\left(w^\intercal \hat \Sigma w - 2\hat b^\intercal w + K^{-1}\sum_{k}n^{-1}_k\sum_iy_{i,k}^2\right),\\
\tilde U(w;K^{-1}{\mathbf{1}}_K) = -\left(w^\intercal \tilde \Sigma w - 2\tilde b^\intercal w + K^{-1}\sum_{k}n^{-1}_k\sum_iy_{i,k}^2\right),
\end{gathered}
\label{eq:quadratic-form}$$ where $\hat\Sigma, \tilde\Sigma, \hat b,$ and $\tilde b$ are defined as follows, $$\begin{gathered}
\hat\Sigma_{k,k';\ell,\ell'} = \sum_{i=1}^K \frac{\left(\hat Y_{k}^{\ell} (X_i)\right)^\intercal\hat Y_{k'}^{\ell'}(X_i)}{n_i K},~~\hat b_{k;\ell} = \sum_{i=1}^K \frac{\left(\hat Y_{k}^{\ell} (X_i)\right)^\intercal Y_{i}}{n_iK},\\
\tilde\Sigma_{k,k';\ell,\ell'} = \sum_{i\neq k,i\neq k'} \frac{K\left(\hat Y_{k}^{\ell} (X_i)\right)^\intercal\hat Y_{k'}^{\ell'}(X_i)}{n_i(K-1)^2},~~ \tilde b_{k;\ell} = \sum_{i\neq k} \frac{\left(\hat Y_{k}^{\ell} (X_i)\right)^\intercal Y_i}{n_i(K-1)}.\end{gathered}$$ Note that $\hat \Sigma$ and $\tilde \Sigma$ are $KL\times KL$ matrices, $w$, $\hat b$ and $\tilde b$ are $KL$-dimensional vectors. $\hat \Sigma_{k,k';\ell,\ell'}$ is the element corresponding to $w_{k,\ell}$ and $w_{k',\ell'}$ while $\hat b_{k;\ell}$ is the element corresponding to $w_{k,\ell}$.
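These matrices can be assembled directly from the SPF predictions. A minimal sketch for $L=1$, with our own data layout (`preds[k][i]` holding the vector $\hat Y_k(X_i)$), not the authors' implementation:

```python
import numpy as np

def stacking_matrices(preds, Ys, n):
    # preds[k][i]: predictions of the study-k SPF on study i; Ys[i]: outcomes of study i
    K = len(Ys)
    Sh, bh = np.zeros((K, K)), np.zeros(K)
    St, bt = np.zeros((K, K)), np.zeros(K)
    for k in range(K):
        for kp in range(K):
            for i in range(K):
                term = preds[k][i] @ preds[kp][i] / n[i]
                Sh[k, kp] += term / K                        # hat Sigma
                if i != k and i != kp:
                    St[k, kp] += K * term / (K - 1) ** 2     # tilde Sigma
        for i in range(K):
            term = preds[k][i] @ Ys[i] / n[i]
            bh[k] += term / K                                # hat b
            if i != k:
                bt[k] += term / (K - 1)                      # tilde b
    return Sh, bh, St, bt
```

Both $\hat\Sigma$ and $\tilde\Sigma$ are symmetric by construction, and the utilities follow as quadratic forms in $w$.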
We define the oracle generalist stacking weights $w_g^0$ based on the limits of $\hat Y_k^\ell$: $$w_g^0 = \operatorname*{arg\,max}_{w\in W} \int_{x,y} u(Y_w(x),y)dP_0(x,y),$$ where $Y_w = \sum_{\ell,t} w_{\ell,t} Y_t^\ell$ and $P_0$ is the average joint distribution of $(X,Y)$ across studies $k\ge1$. The cross-study MSE associated with a stacking weight $w$ is defined as $$\psi(w) = \int_{x,y} (y - Y_w(x))^2dP_0(x,y) = w^\intercal \Sigma w - 2 b^\intercal w + \int_{y}y^2 dP_0(y),$$ where $\Sigma_{k,k';\ell,\ell'} = \int_{x} Y_k^\ell(x)Y_{k'}^{\ell'}(x) dF_X(x)$ and $b_{k;\ell} = \int_{x,y} yY_k^\ell(x)dP_0(x,y)$.
Generalist models and oracle ensembles
--------------------------------------
In Proposition \[prop:bound\] we compare $\hat Y_{\hat w}$ and $\hat Y_{\tilde w}$ to oracle prediction, using the metrics $\mathbb E(\psi(\hat w) - \psi(w_g^0))$ and $\mathbb E(\psi(\tilde w) - \psi(w_g^0))$.
\[prop:bound\] Let $L\geq 2$ and $u(x,y) = -(x-y)^2$. Consider $K$ available datasets and future $k>K$ studies from model (\[eq:hier-mod\]). If (A1) and (A2) hold, then $$\mathbb E\left(\psi(\hat w) - \psi(w_g^0)\right) \leq C_0\sqrt{\log(KL)}K^{-1/2} + C_1(\min_kn_k)^{-\min_{\ell}p_{\ell}},$$ and, $$\mathbb E\left(\psi(\tilde w) - \psi(w_g^0)\right) \leq C_0'\sqrt{\log(KL)}K^{-1/2} + C_1'(\min_kn_k)^{-\min_{\ell}p_{\ell}},$$ where the expectations are taken over the joint distribution of the data $\mathcal S$. $C_0$, $C_0'$, $C_1$ and $C_1'$ are constants, independent of $K$ and $n_k$.
The above proposition shows that if we have enough studies and samples in each study, then the estimated generalist PFs $\hat Y_{\hat w}$ and $\hat Y_{\tilde w}$ have similar accuracy compared to $Y_{w_g^0}$.
Generalist predictions with and without data reuse
--------------------------------------------------
We compare the prediction accuracy, as measured by $\psi(\hat w)$ and $\psi(\tilde w)$, of generalist PFs trained with and without data reuse. We start with a specific example and then give a general result on the relative accuracy of the PFs.
Consider $u(y,y') = -(y-y')^2$ and $L=1$. Assume that $n_k=n$ and $f_k(x_{i,k}) = \beta_k^\intercal x_{i,k}$, where each component of $\beta_k$ is an independent $U(\beta_0-\tau,\beta_0+\tau)$ random variable for $k=1,\ldots,K$. Let each component of $x_{i,k}\in \mathbb R^p$ be a $U(-\sqrt{3},\sqrt{3})$ random variable and let the $\epsilon_{i,k}$ be $iid$ $U(-\sqrt{3},\sqrt{3})$ random variables for $i=1,\ldots,n$, $k=1,\ldots,K$. Let the learner be an OLS model, so that $\hat Y_k^\ell(x) = \hat \beta_k^\intercal x$. Denote ${\bm{\beta}}= (\beta_1,\ldots, \beta_K)$ and $\hat {\bm{\beta}}= (\hat \beta_1,\ldots, \hat \beta_K)$.
In this setting, we have $\Sigma_{k,k'} = \beta_k^\intercal \beta_{k'}$, $b_{k} = \beta_k^\intercal \beta_0$ and $$\begin{gathered}
\hat \Sigma_{k,k'} = (nK)^{-1}\sum_s \hat \beta_k^\intercal X_{s}^\intercal X_s \hat \beta_{k'},~\hat b_k = (nK)^{-1} \sum_s \hat \beta_k^\intercal X_s^\intercal Y_s,\\
\tilde \Sigma_{k,k'} = \frac{K}{n(K-1)^2} \sum_{s\neq k,k'} \hat\beta_k^\intercal X_{s}^\intercal X_s \hat \beta_{k'},~\tilde b_k = \left(n(K-1)\right)^{-1} \sum_{s\neq k} \hat \beta_k^\intercal X_s^\intercal Y_s.\end{gathered}$$
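The identity $\Sigma_{k,k'} = \beta_k^\intercal\beta_{k'}$ relies on the covariate components having unit variance ($U(-\sqrt 3,\sqrt 3)$ has variance 1) and can be checked by Monte Carlo, here with illustrative coefficient vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 4
beta_k, beta_kp = rng.normal(size=p), rng.normal(size=p)
x = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(1_000_000, p))  # Var(x_j) = (2*sqrt(3))^2 / 12 = 1
mc = np.mean((x @ beta_k) * (x @ beta_kp))                     # Monte Carlo estimate of Sigma_{k,k'}
assert abs(mc - beta_k @ beta_kp) < 0.1
```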
To understand the behavior of $\hat w$ and $\tilde w$, we first consider the bias of $(\hat \Sigma, \hat b)$ and $(\tilde \Sigma, \tilde b)$ with respect to $(\Sigma, b)$, as captured by the difference between their expectations over the joint distribution of the observed data $\mathcal S$: $$\begin{aligned}
&\mathbb E(\hat \Sigma(k,k')) - \mathbb E(\Sigma(k,k')) = -\frac{p(p+1)}{Kn(n-p-1)}(1-\delta_{k,k'}),\\
&\mathbb E(\tilde \Sigma(k,k')) - \mathbb E(\Sigma(k,k')) = \left\{
\begin{array}{ll}
(K-1)^{-1}(\beta_0^\intercal \beta_0 + p\tau^2+\frac{p}{n-p-1})&k=k',\\
-(K-1)^{-2}\beta_0^\intercal\beta_0&k\neq k',
\end{array}\right.\\
&\mathbb E(\hat b(k)) - b(k) = \frac{p\tau^2 + p/n}{K},~\mathbb E(\tilde b(k)) - b(k) = 0.\end{aligned}$$ The above equalities indicate that stacking with data reuse estimates the diagonal elements of $\Sigma$ without bias, while stacking without data reuse estimates $b$ without bias. However, the equalities do not provide a direct comparison of the relative performance of stacking with and without data reuse.
The next step is to derive approximations of $\hat w$ and $\tilde w$ in order to compare the stacking procedures through $\psi(w)$. One approximation considers the optimization of $\hat U(w;K^{-1}{\mathbf{1}}_K)$ and $\tilde U(w;K^{-1}{\mathbf{1}}_K)$ in the limit $n\to\infty$. In this case, if $W = \mathbb R^K$, $$\hat w\approx K^{-1}{\mathbf{1}}_K,~~\tilde w\approx \frac{K-2}{K-1}K^{-1}{\mathbf{1}}_K + \frac{1}{K-1}\left(\frac{1}{K}S{\bm{\beta}}^\intercal{\bm{\beta}}- \frac{K-1}{K}I\right){\mathbf{1}}_K,$$ where $S = \text{diag}\{\|\beta_1\|_2^{-2},\ldots,\|\beta_K\|_2^{-2}\}$. Note that each component of $(S{\bm{\beta}}^\intercal{\bm{\beta}}/K - (K-1)/K\,I){\mathbf{1}}_K$ decreases as $\tau$ increases. When $\tau = 0$, each component is equal to $K^{-1}$, whereas when $\tau\to\infty$ the limit is approximately $-(K-1)/K$. We find that $\tilde w$ is a version of $\hat w$ shrunk towards zero; for studies with larger $\|\beta_k\|$, the shrinkage of $w_k$ tends to be stronger. A Monte Carlo simulation based on the above approximations suggests that $\mathbb E\left(\psi(\hat w)\right) > \mathbb E\left(\psi (\tilde w)\right)$ when $\tau \gtrsim \sqrt{K}$. This bound is verified by a simulation study (see Figure \[fig:zero-out-comp\]).
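The $\tau = 0$ statement (each component of the correction term equals $K^{-1}$, so that $\tilde w$ reduces to $\hat w$) can be verified directly:

```python
import numpy as np

K, p = 5, 3
beta0 = np.ones(p)
B = np.tile(beta0, (K, 1)).T                   # tau = 0: all beta_k equal beta_0 (B is p x K)
S = np.diag(1.0 / np.sum(B**2, axis=0))        # diag{ ||beta_k||_2^{-2} }
corr = (S @ (B.T @ B) / K - (K - 1) / K * np.eye(K)) @ np.ones(K)
assert np.allclose(corr, np.ones(K) / K)       # each component equals 1/K

# and then tilde w reduces to hat w = K^{-1} 1_K:
w_tilde = (K - 2) / (K - 1) / K * np.ones(K) + corr / (K - 1)
assert np.allclose(w_tilde, np.ones(K) / K)
```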
![Comparison of stacking PFs constructed with and without data reuse when $K=2$ (left) and $K=9$ (right). We set $p=10$, $\beta_0 = 1_K$, $n = 200$, $\sigma=1$ and vary $\tau^2$. The difference in $\mathbb E\psi_K(w)$ is calculated with 1,000 replications. Oracle is the predictor $\hat Y_{w_g}$, “new study” means we train weights with a set of new studies that are not used in constructing $\hat{\mathcal Y}$ and “merge” means we merge all studies to train a single regression model to serve as the generalist PF.[]{data-label="fig:zero-out-comp"}](p_10_K_2.pdf "fig:") ![Comparison of stacking PFs constructed with and without data reuse when $K=2$ (left) and $K=9$ (right). We set $p=10$, $\beta_0 = 1_K$, $n = 200$, $\sigma=1$ and vary $\tau^2$. The difference in $\mathbb E\psi_K(w)$ is calculated with 1,000 replications. Oracle is the predictor $\hat Y_{w_g}$, “new study” means we train weights with a set of new studies that are not used in constructing $\hat{\mathcal Y}$ and “merge” means we merge all studies to train a single regression model to serve as the generalist PF.[]{data-label="fig:zero-out-comp"}](p_10_K_9.pdf "fig:")
Motivated by the simulation results, we investigate under what circumstances $\mathbb E\psi(\tilde w)$ is smaller than $\mathbb E\psi(\hat w)$. Proposition \[prop:comp\] characterizes the relative performance of $\hat Y_{\hat w}$ and $\hat Y_{\tilde w}$ in a general setting, when data are generated from model (\[eq:hier-mod\]).
\[prop:comp\] Assume the data are generated via $(\ref{eq:hier-mod})$ with $n_k = n$ and that assumptions A1-A2 hold. Denote $$\sigma_f^2 = \int_f \left(\int (f(x) - f_0(x))dF_X(x)\right)^2dF(f).$$ There exists $\kappa>0$ such that when $$8\sqrt{e}(2M_1^2 + M_1 \sigma_f)\sqrt{\log((K-1)L)}((K-1)L)^{-1/2} \leq \kappa M_1 \sigma_f \sqrt{\log(KL)}(KL)^{-1/2},
\label{eq:better-cond}$$ then $
\mathbb E(\psi(\hat w)) + C^*n^{-\min_\ell p_\ell} \geq \mathbb E(\psi(\tilde w)),
$ where the expectation is taken over $\mathcal S$.
The quantity $\sigma_f$ measures the heterogeneity across studies, since under our model assumptions the only difference between one study and another lies in $\mathbb E(y_{i,k}|x_{i,k}) = f_k(x_{i,k})$. Note that $(K-1)\log(KL)/(K\log((K-1)L))$ increases in $K$ when $K$ is small and decreases towards 1 when $K$ gets large. Therefore, if $$\frac{8\sqrt{e}(2M_1^2+M_1\sigma_f)}{\kappa M_1\sigma_f}\leq 1,$$ then $\mathbb E\psi(\tilde w)$ is always smaller than $\mathbb E\psi(\hat w)$ up to a term $C^*n^{-\min_\ell p_\ell}$. If the ratio is larger than 1, only a $K$ small enough to satisfy (\[eq:better-cond\]) can guarantee the superiority of $\tilde w$. We also note that $
8\sqrt{e}(2M_1^2+M_1\sigma_f)/(\kappa M_1\sigma_f)
$ is a decreasing function of $\sigma_f$: if $\sigma_f$ is large, the upper bound on $K$ such that $\mathbb E \psi(\tilde w)\leq \mathbb E\psi(\hat w)$ increases.
The proposition provides a rough guideline for selecting between stacking with and without data reuse. If the number of studies is relatively small, we would prefer stacking without data reuse, as it outperforms stacking with data reuse even when $\sigma_f$ is low. On the other hand, when $K$ is large, we might turn to stacking with data reuse more often, unless there is strong evidence that $\sigma_f$ is extremely high.
In Figure \[fig:l1\_res\], we examine the relative performance of the two stacking approaches across a range of $K$ and of cross-study heterogeneity with a simulation study. Stacking without data reuse outperforms stacking with data reuse only when $\tau$ is above a threshold that grows like $\sqrt{K}$. Only when the cross-study heterogeneity is small and $K$ is moderate does stacking with data reuse show a significant advantage over stacking without data reuse.
Making a more clear-cut recommendation based on Proposition \[prop:comp\] is challenging, since $M_1$ and $\kappa$ are unknown and $\sigma_f$ is not observed. However, one can adopt a non-parametric model to approximate $f_k$ within each study and estimate these quantities to refine the rough guideline above, which may only be appropriate if $n_k$ is large.
Simulation studies
==================
In this section, we first illustrate the effectiveness of the technique in Section 3, proposed for specialist predictions in small studies, through simulated datasets. We then examine the analytical results in Section 4 using numerical examples: we investigate empirically whether the error bound for the estimated stacking predictors in Proposition \[prop:bound\] is tight, and verify that the region where stacking with NDR is preferable to stacking with DR is aligned with our theoretical characterization. We conclude this section with an example illustrating how to extend generalist predictions to non-exchangeable studies.
Specialist predictions for small studies
----------------------------------------
We specify the following generative model for the simulated dataset, to examine the performance of the specialist predictor derived from the modified utility function (\[eq:penalty\]). For comparison, we also consider the generalist predictor derived from the utility function $\hat U(w;K^{-1}{\mathbf{1}}_K)$ and the specialist predictor without the small-sample penalization. $$\begin{gathered}
y_{i,k} = \beta_k^\intercal x_{i,k} + \epsilon_{i,k},\\
\beta_k\sim N(1_p,I_p),~\epsilon_{i,k}\sim N(0,25),\end{gathered}$$ where $p$ is the number of covariates, $1_p$ is a $p$-vector of ones and $I_p$ is the $p\times p$ identity matrix. We set $p=10$ and $K=5$, with $n_k = 100$ for $k = 2,\ldots,5$ and $n_1$ varying from $10$ to $50$. In Figure \[fig:small-n-spe\], we show the RMSEs of the three predictors under consideration when applied to predict new samples in study 1. We set $D_t$ to be all data from study $t$, $t=1,\ldots,K$, and ordinary least squares (OLS) regression is the single-set learner. We use the negative squared loss as $u(\cdot,\cdot)$ and set $\lambda(n) = 100/n$.
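A sketch of this generative model (the covariate distribution is not specified in the text; standard normal covariates are our assumption for illustration):

```python
import numpy as np

def simulate_studies(K=5, p=10, n_k=(10, 100, 100, 100, 100), seed=0):
    # y_{i,k} = beta_k' x_{i,k} + eps_{i,k},  beta_k ~ N(1_p, I_p),  eps_{i,k} ~ N(0, 25)
    rng = np.random.default_rng(seed)
    studies = []
    for k in range(K):
        beta_k = rng.normal(1.0, 1.0, size=p)
        X = rng.normal(size=(n_k[k], p))           # assumed covariate distribution
        y = X @ beta_k + rng.normal(0.0, 5.0, size=n_k[k])
        studies.append((X, y))
    return studies
```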
![*RMSE of the specialist predictor, the generalist model and the penalized specialist predictor on future samples in study 1.*[]{data-label="fig:small-n-spe"}](small_spec.pdf)
From the results we see that when $n_1$ is small ($<40$), the generalist predictor outperforms the unpenalized specialist predictor. This is expected, since all five studies are similar to each other. The penalized specialist predictor, on the other hand, is not sensitive to $n_1$ and has the lowest RMSE (except at $n_1=10$) among the three predictors.
Error bound of generalist stacking predictors
---------------------------------------------
We illustrate the difference in prediction error, $\mathbb E\left(\psi(\hat Y_{\hat w_g})\right) - \mathbb E\left(\psi(Y_{w_g^0})\right)$, considered in Proposition \[prop:bound\] with a numeric example, and compare the actual difference to the analytic upper bound as $n_k$ and $K$ change. We use a similar generative model specification as in Section 5.1, but specify that each component of $\beta_k$ follows $U[0,1]$ and each component of $x_{i,k}$ follows $U[-1,1]$. In addition, we assume that $\epsilon_{i,k}\sim U[-1,1]$. The reason for replacing the normal distributions with uniform distributions is to satisfy the boundedness assumptions on $f_k$ and $\hat Y_k^\ell$. We set $n_k = n$ for all $k$ and use Monte Carlo simulation to calculate the difference $\mathbb E\left(\psi(\hat Y_{\hat w_g})\right) - \mathbb E\left(\psi(Y_{w_g^0})\right)$ for $n = 100, 200, 400$ as $K$ increases from $5$ to $50$ in increments of $1$, and for $K=5, 15, 20$ as $n$ increases from $20$ to $100$ in increments of $5$. We use the same constraint on $w$, i.e. $\|w\|_1\leq 1$, as in the proposition. The results are derived from 1000 simulation replicates and shown in Figure \[fig:upper-bound\].
  \[fig:upper-bound\]
From the figures we find that, except for small $K$ and $n$, the difference in $\psi$ is approximately a linear function of $\sqrt{\log{K}/K}$ (for fixed $n$) and of $n^{-1/2}$ (for fixed $K$). This indicates that the actual difference in $\psi$ changes at the same order as the upper bound in Proposition \[prop:bound\]. Under this simulation scenario, these results imply that the upper bound is probably tight and that the convergence rate of $\hat Y_{\hat w_g}$ to $Y_{w_g^0}$ cannot be improved.
Comparison between stacking with and without data reuse
-------------------------------------------------------
We also perform a simulation analysis to check whether the transition bound provided in Proposition \[prop:comp\] correctly delimits the region where stacking without data reuse outperforms stacking with data reuse. We use the same simulation scenario as in Section 5.2, but vary the variance of $\beta_k$ by changing the range of the corresponding uniform distributions and the number of studies $K$. We then calculate the prediction accuracy on future studies of the generalist predictors derived from stacking with and without data reuse under the constraint $\|w\|_1\leq 1$.
![*Difference between the standard stacking predictor and the zero-out predictor in terms of out-of-study prediction error. Both stacking approaches are constrained with $\|w\|_1\leq 1$. The dashed line indicates the upper bound on $K$ below which stacking without data reuse has better prediction accuracy.*[]{data-label="fig:l1_res"}](l1_res.pdf)
Non-exchangeable studies
------------------------
To conclude the simulation study section, we present a numeric experiment illustrating the flexibility of the stacking approach in incorporating non-exchangeable studies. Specifically, we assume that study $k$ is collected at time point $t_k=k$ and that the study-specific regression coefficients $\beta_k$ follow an AR(1) model, $$\beta_{k} = \rho\beta_{k-1} + \sqrt{1-\rho^2}\epsilon_{k},$$ where $\rho$ is a constant between 0 and 1 that controls the dependence between studies collected in close temporal proximity, and the $\epsilon_{k}$ are independent normal noise vectors with mean zero and covariance matrix $I_p$. Once we simulate $\beta_k$, we use the same generative model for $x_{i,k}$ and $y_{i,k}$ as in Section 5.1.
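A minimal sketch of the AR(1) coefficient process; the helper name and seed are ours. With $\beta_1\sim N(0,I_p)$, the $\sqrt{1-\rho^2}$ scaling keeps the marginal variance constant across $k$.

```python
import numpy as np

def simulate_ar1_betas(K, p, rho, rng):
    """beta_k = rho * beta_{k-1} + sqrt(1 - rho^2) * eps_k, eps_k ~ N(0, I_p).
    Starting from beta_1 ~ N(0, I_p) keeps the marginal variance constant."""
    betas = np.empty((K, p))
    betas[0] = rng.standard_normal(p)
    for k in range(1, K):
        betas[k] = rho * betas[k - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal(p)
    return betas

betas = simulate_ar1_betas(50, 10, 0.8, np.random.default_rng(1))
```

At $\rho=1$ all studies share identical coefficients (exchangeability in the extreme), while $\rho=0$ gives fully independent studies.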
To account for the non-exchangeability between studies, we set the study-specific weight $\nu_k$ based on the distance between study $k$ and the future study, which is assumed to be collected at time $K+1$. Specifically, we set $\nu_k = 1/(K+1-k)$. This choice is somewhat arbitrary, but it encodes the fact that more recent studies should be emphasized when training the generalist predictor for study $K+1$. The performance of this particular choice of stacking weights on the simulated dataset is shown in Figure \[fig:my\_label\]. We consider three different values of $\rho$, corresponding to high, medium and no dependence between studies.
![Comparisons of methods when studies are generated using an AR(1) model. $\rho$ indicates the correlation between $\beta_k$ of two adjacent time points.[]{data-label="fig:my_label"}](AR1_sim.pdf)
Application
===========
We apply our generalist predictors to an environmental health dataset containing observed mortality rates (person-year) across 31,414 unique ZIP codes in the U.S. For each ZIP code, the mortality rate is available from 1999 to 2016. The exposure to air pollution agents, such as PM2.5, is calculated for each ZIP code as the average observed level from 1998 to 1999. In addition to the measurements of air pollution agents and the outcome, we also have access to ZIP code-level demographic covariates, all measured before 1999. These covariates consist of temperature, humidity, percentage of ever smokers, percentage of black population, median household income, median value of housing, percentage below the poverty level, percentage with less than high school education, percentage of owner-occupied housing units, and population density.
We define the generalist prediction task for this dataset as the prediction of ZIP code-level mortality for a state based on data from other states, and the specialist prediction task as the prediction for a specific state based on all available data. For generalist predictions, we randomly select 10 states to train an ensemble of state-specific prediction models and use our stacking approaches to combine these models to predict mortality rates for the remaining states. The metric we use to evaluate the performance of the stacked model is the average RMSE across all 40 testing states. We repeat this procedure 20 times. We consider two different approaches for the generalist problem: stacking with DR and stacking with NDR. The same analysis is then performed for the county-level dataset from California, for which the number of testing counties is 48. The results are shown in Figure \[fig:my\_label3\].
![*Comparisons of the performance of stacking-based methods to merging. Stacking-based methods are derived with $L1$ penalty with or without data reuse. Each boxplot illustrates the variability of the prediction accuracy, evaluated with RMSE, of a specific method. The variability for generalist predictions is estimated through 20 replicates of random partitioning of training and testing states. The variability for specialist predictions is estimated through 10-fold cross-validation.*[]{data-label="fig:my_label3"}](app_heterogeneity.pdf)
The figure shows that when the dataset contains all states, presumably with higher between-study heterogeneity, the performance of NDR is slightly better than that of DR, whereas when the dataset contains only county-level studies in California, which exhibits smaller cross-study heterogeneity than the nationwide dataset, the advantage of NDR disappears and DR attains a smaller RMSE than NDR. This result is consistent with Proposition \[prop:comp\], which indicates that, for fixed $K$, NDR only outperforms DR when the cross-study heterogeneity is large.
Acknowledgements {#acknowledgements .unnumbered}
================
Research supported by the U.S.A.’s National Science Foundation grant NSF-DMS1810829 (BR, LT and GP) and the U.S.A.’s National Institutes of Health grant NCI-5P30CA006516-54 (GP).
Proof of Proposition 1
======================
Partition each study evenly into $M$ pieces and denote the covariate matrix for the $m$-th piece of study $k$ by $X_{k,m}$ and the corresponding responses by $Y_{k,m}$. Let $X_{k,-m}$ and $Y_{k,-m}$ denote the covariate matrix and outcome vector for study $k$, excluding $X_{k,m}$ and $Y_{k,m}$. At iteration $m$, ${\text{CV}_{\text{cs}}}$ with study-specific $\mathcal D$ fits an OLS model to each study based on $(X_{k,-m}, Y_{k,-m})$. We denote the estimated regression coefficients by $$\hat\beta_{k,m} = (X_{k,-m}^\intercal X_{k,-m})^{-1}X_{k,-m}^\intercal Y_{k,-m}.$$
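The leave-piece-out fits $\hat\beta_{k,m}$ can be computed directly as below; this is an illustrative sketch (the helper `piecewise_ols` is ours) assuming the pieces are consecutive blocks of rows.

```python
import numpy as np

def piecewise_ols(X, Y, M):
    """Split a study into M consecutive pieces and return, for each piece m,
    the OLS fit on the remaining M-1 pieces (the hat beta_{k,m} above)."""
    n = X.shape[0]
    pieces = np.array_split(np.arange(n), M)
    coefs = []
    for piece in pieces:
        mask = np.ones(n, dtype=bool)
        mask[piece] = False                       # drop piece m
        Xm, Ym = X[mask], Y[mask]
        coefs.append(np.linalg.solve(Xm.T @ Xm, Xm.T @ Ym))
    return np.array(coefs)

rng = np.random.default_rng(3)
X = rng.standard_normal((60, 4))
beta_true = np.arange(1.0, 5.0)
coefs = piecewise_ols(X, X @ beta_true, M=3)      # noiseless: each fit recovers beta
```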
The utility function for ${\text{CV}_{\text{ws}}}$ can be written as $$\hat U_{\text{ws}}(w;K^{-1}{\mathbf{1}}_K) = (Kn)^{-1}\sum_{k=1}^K \sum_{m = 1}^M \left\|Y_{k,m} - \sum_{k'=1}^K w_{k'}X_{k,m}\hat\beta_{k',m}\right\|^2_2 = w^\intercal \Sigma_1 w - 2 b_1^\intercal w + \sum_k\|Y_k\|^2_2.$$ The utility function for data reuse stacking is $$\hat U(w;K^{-1}{\mathbf{1}}_K) = (Kn)^{-1}\sum_{k=1}^K \left\|Y_k - \sum_{k'=1}^K w_{k'}X_{k}\hat \beta_{k'}\right\|_2^2 = w^\intercal \Sigma_2 w - 2 b_2^\intercal w + \sum_k\|Y_k\|^2_2,$$ where $\hat\beta_k = (X_k^\intercal X_k)^{-1}X_k^\intercal Y_k$ is the OLS estimate of regression coefficients based on all data from study $k$.
$\Sigma_1$ and $\Sigma_2$ are both $K\times K$ matrices, whose $(i,i')$-th elements are $$\begin{gathered}
\Sigma_1(i,i') = (Kn)^{-1}\sum_{k=1}^K\sum_{m=1}^M \hat \beta_{i,m}^\intercal X_{k,m}^\intercal X_{k,m} \hat\beta_{i',m}\\
\Sigma_2(i,i') = (Kn)^{-1}\sum_{k=1}^K\hat \beta_{i}^\intercal X_{k}^\intercal X_{k} \hat\beta_{i'}.\end{gathered}$$ $b_1$ and $b_2$ are $K$-dimensional vectors with the $i$-th elements $$\begin{gathered}
b_1(i) = (Kn)^{-1}\sum_{k=1}^K\sum_{m=1}^M \hat \beta_{i,m}^\intercal X_{k,m}^\intercal Y_{k,m},~~
b_2(i) = (Kn)^{-1}\sum_{k=1}^K\hat \beta_i^\intercal X_{k}^\intercal Y_k.\end{gathered}$$
Note that we have the following relationship between $\hat \beta_{k,m}$ and $\hat \beta_k$: $$\label{eq:prop1-link}
\begin{aligned}
\hat\beta_{k,m} = \hat \beta_k + (X_k^\intercal X_k)^{-1}X_{k,m}^\intercal(I - P_{k,m})^{-1}(X_{k,m}\hat \beta_k - Y_{k,m}),
\end{aligned}$$ where $P_{k,m} = X_{k,m}(X_{k}^\intercal X_k)^{-1}X_{k,m}^\intercal$. With the assumptions about the distribution of data and central limit theorem, we have the following characterization: $$\label{eq:prop1-rudi}
\begin{gathered}
\frac{1}{n}X_k^\intercal X_k = I_p + O_p(1/\sqrt{n})\\
\frac{1}{n}X_{k,m}^\intercal X_{k,m} = \frac{1}{M}I_p + O_p(1/\sqrt{n})\\
\frac{1}{n}X_{k,m}^\intercal Y_{k,m} = \frac{1}{M}\beta_k + O_p(1/\sqrt{n})\\
\hat \beta_k = \beta_k + O_p(1/\sqrt{n})\\
\hat \beta_{k,m} = \hat \beta_k + O_p(1/\sqrt{n})\\
P_{k,m} = O_p(1/n).
\end{gathered}$$ Define $\delta \hat\beta_{k,m} = \hat\beta_{k,m} - \hat\beta_{k}$. We have $$\begin{aligned}
n^{-1}\hat\beta_{i,m}^\intercal X_{k,m}^\intercal X_{k,m} \hat\beta_{i', m} = &\hat\beta_i^\intercal(n^{-1}X_{k,m}^\intercal X_{k,m})\hat\beta_{i'} + \delta\hat\beta_{i,m}^\intercal(n^{-1}X_{k,m}^\intercal X_{k,m})\hat\beta_{i'} \\
+& \hat\beta_{i}^\intercal(n^{-1}X_{k,m}^\intercal X_{k,m})\delta\hat\beta_{i',m} + O_p(1/n).\end{aligned}$$ Note that by (\[eq:prop1-link\]) and (\[eq:prop1-rudi\]) $$\begin{aligned}
\hat\beta_i^\intercal (n^{-1}X_{k,m}^\intercal X_{k,m}) \delta\hat\beta_{i',m} =& \beta_i^\intercal (n^{-1}X_{k,m}^\intercal X_{k,m}) \delta\hat\beta_{i',m} + O_p(n^{-1})\\
= & \beta_i^\intercal (n^{-1}X_{k,m}^\intercal X_{k,m})(X_{i'}^\intercal X_{i'})^{-1}X_{i',m}^\intercal(I_{n/M} - P_{i',m})^{-1}(X_{i',m}\hat\beta_{i'} - Y_{i',m}) + O_p(n^{-1})\\
=& \beta_i^\intercal (I+O_p(1/\sqrt{n}))(X_{i'}^\intercal X_{i'})^{-1}X_{i',m}^\intercal(X_{i',m}\hat\beta_{i'}-Y_{i',m}) + O_p(1/n).\end{aligned}$$ Therefore $$\sum_m \hat\beta_i^\intercal (n^{-1}X_{k,m}^\intercal X_{k,m}) \delta\hat\beta_{i',m} = \beta_i^\intercal (I+O_p(1/\sqrt{n}))(X_{i'}^\intercal X_{i'})^{-1}\left((X_{i'}^\intercal X_{i'})\hat\beta_{i'} - X_{i'}^\intercal Y_{i'}\right) + O_p(1/n) = O_p(1/n),$$ by the definition of $\hat\beta_{i'}$. Similarly, we get $\sum_m \delta\hat\beta_{i,m}^\intercal(n^{-1}X_{k,m}^\intercal X_{k,m})\hat\beta_{i'} = O_p(1/n)$. And $|\Sigma_1(i,i') - \Sigma_2(i,i')| = O_p(1/n)$ for every $i,i'\leq K$. The same procedure can be applied to prove $|b_1(i) - b_2(i)| = O_p(1/n)$ by noting that $n^{-1}X_{k,m}^\intercal Y_{k,m} = 1/M\beta_k + O_p(1/\sqrt{n})$. Since $w$ is defined on a bounded set: $\|w\|\leq C$ and $K$ is fixed and finite, we immediately get that for all $w\in W$ $$|w^\intercal (\Sigma_1 - \Sigma_2) w - 2(b_1-b_2)^\intercal w| \leq CO_p(1/n).$$
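The downdating identity (\[eq:prop1-link\]) that drives this proof can be checked numerically; the snippet below is an illustrative verification on synthetic Gaussian data (all names and sizes are ours), not part of the argument.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, M = 120, 4, 3
X = rng.standard_normal((n, p))
Y = X @ rng.uniform(size=p) + rng.standard_normal(n)

G_inv = np.linalg.inv(X.T @ X)
beta_full = G_inv @ X.T @ Y                       # OLS on the whole study

for piece in np.array_split(np.arange(n), M):
    Xm, Ym = X[piece], Y[piece]
    mask = np.ones(n, dtype=bool)
    mask[piece] = False
    # direct leave-piece-out fit
    beta_direct = np.linalg.solve(X[mask].T @ X[mask], X[mask].T @ Y[mask])
    # downdating formula with P_{k,m} = X_{k,m} (X_k' X_k)^{-1} X_{k,m}'
    P = Xm @ G_inv @ Xm.T
    beta_update = beta_full + G_inv @ Xm.T @ np.linalg.solve(
        np.eye(len(piece)) - P, Xm @ beta_full - Ym)
    assert np.allclose(beta_direct, beta_update)
```

The identity follows from the Sherman-Morrison-Woodbury formula applied to $X_k^\intercal X_k - X_{k,m}^\intercal X_{k,m}$.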
Proof of Proposition 2
======================
Define $\hat\psi(w)$ as $$\hat \psi(w) = \hat U(w;K^{-1}{\mathbf{1}}_K) - K^{-1}\sum_k n_k^{-1} Y_k^\intercal Y_k + \int_{y} y^2dP_0(y).$$ Similarly, define $\tilde \psi(w) = \tilde U(w;K^{-1}{\mathbf{1}}_K) - K^{-1}\sum_kn_k^{-1}Y_k^\intercal Y_k + \int_{y} y^2dP_0(y)$.
We first note the following lemma, giving upper bounds for the two differences $|\psi(\hat w) - \psi(w_g^0)|$ and $|\psi(\tilde w) - \psi(w_g^0)|$.
$|\psi(\hat w) - \psi(w_g^0)|$ and $|\psi(\tilde w) - \psi(w_g^0)|$ can be bounded as follows. $$\begin{gathered}
|\psi(\hat w) - \psi(w_g^0)| \leq 2\sup_{w\in W}|\psi(w) - \hat\psi(w)|,\\
|\psi(\tilde w) - \psi(w_g^0)| \leq 2\sup_{w\in W}|\psi(w) - \tilde\psi(w)|.\end{gathered}$$ \[lm:bound\]
We prove the inequality for $\hat w$; similar steps verify the other inequality. Note that $$\psi(\hat w) - \psi(w_g^0) = \psi(\hat w) - \hat \psi(\hat w) + \hat \psi(\hat w) - \hat \psi(w_g^0) + \hat \psi(w_g^0) - \psi(w_g^0).$$ By definition $\psi(\hat w) - \psi(w^0_g) \geq 0$ and $\hat \psi (\hat w) - \hat \psi(w^0_g)\leq 0$; therefore $$|\psi(\hat w) - \psi(w^0_g)| \leq |\psi(\hat w) - \hat \psi(\hat w)| + |\hat \psi(w^0_g) - \psi(w^0_g)| \leq 2\sup_{w\in W}|\psi(w) - \hat \psi(w)|.$$
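This is the standard "basic inequality" for empirical risk minimizers. A toy numerical check, with an arbitrary quadratic $\psi$ and a perturbed surrogate $\hat\psi$ minimized over a grid standing in for $W$, is:

```python
import numpy as np

W = np.linspace(-1.0, 1.0, 2001)                  # the feasible set (a grid)
psi = (W - 0.3) ** 2                              # true risk, minimized at w0
psi_hat = (W - 0.5) ** 2 + 0.05 * np.sin(5 * W)   # perturbed empirical risk

i_hat = np.argmin(psi_hat)                        # empirical minimizer w_hat
i_0 = np.argmin(psi)                              # oracle minimizer w0
gap = psi[i_hat] - psi[i_0]                       # excess risk, always >= 0

# the excess risk never exceeds twice the sup-norm approximation error
assert 0 <= gap <= 2 * np.max(np.abs(psi - psi_hat))
```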
When $W = \{w:\|w\|_1\leq 1\}$, we have $$\sup_{w\in W}|\psi(w) - \hat \psi(w)| \leq \|\text{vec}(\Sigma-\hat \Sigma)\|_{\infty} + \|b-\hat b\|_{\infty},$$ where $\|\cdot\|_{\infty}$ is the $L^{\infty}$-norm of a vector and $\text{vec}(\cdot)$ is the vectorization of a matrix. With Lemma \[lm:bound\], it follows $$\mathbb E[\psi(\hat w) - \psi(w_g^0)] \leq 2\mathbb E\|\text{vec}(\Sigma-\hat \Sigma)\|_{\infty} + 2\mathbb E\|b-\hat b\|_{\infty}.$$
The following lemma provides upper bounds for $\mathbb E\|\text{vec}(\Sigma-\hat \Sigma)\|_{\infty}$ and $\mathbb E\|\text{vec}(\Sigma-\tilde \Sigma)\|_{\infty}$.
If assumptions A1 and A2 hold, we have the following bounds for $\mathbb E\|\text{vec}(\Sigma-\hat \Sigma)\|_{\infty}$ and $\mathbb E\|\text{vec}(\Sigma-\tilde \Sigma)\|_{\infty}$. $$\begin{gathered}
\mathbb E\|\text{vec}(\Sigma-\hat \Sigma)\|_{\infty} \leq 4\sqrt{2e}M_1^2\sqrt{\log(KL)/K} + 2M_1M_2(\min_k n_k)^{-\min_\ell p_\ell},\\
\mathbb E\|\text{vec}(\Sigma-\tilde \Sigma)\|_{\infty} \leq 8\sqrt{2e}M_1^2\sqrt{\log(KL)/K} + 4M_1M_2(\min_k n_k)^{-\min_\ell p_\ell}.
\end{gathered}$$ \[lm:sig-bound\]
First note that $$\hat A_{k,\ell;k',\ell'} - A_{k,\ell;k',\ell'} = K^{-1}\sum_{s=1}^K n_s^{-1}\sum_{i=1}^{n_s} \left(\hat Y_{k}^{\ell}(x_{s,i})\hat Y_{k'}^{\ell'}(x_{s,i}) - \int _x Y_{k}^{\ell}(x)Y_{k'}^{\ell'}(x)dF_X(x)\right).$$ Denoting $\int _x Y_{k}^{\ell}(x)Y_{k'}^{\ell'}(x)dF_X(x)$ by $\langle Y_{k}^{\ell}, Y_{k'}^{\ell'}\rangle$, we have $$\begin{aligned}
\|\hat \Sigma - \Sigma\|_{\infty} \leq & K^{-1}\sum_s n_s^{-1}\sum_i \left\|\left(\hat Y_k^\ell(x_{s,i})\left(\hat Y_{k'}^{\ell'}(x_{s,i}) - Y_{k'}^{\ell'}(x_{s,i})\right);k,k'\leq K, l,l'\leq L\right)\right\|_{\infty}\\
+&K^{-1}\sum_s n_s^{-1}\sum_i \left\|\left(Y_{k'}^{\ell'}(x_{s,i})\left(\hat Y_k^\ell(x_{s,i})- Y_k^\ell(x_{s,i})\right);k,k'\leq K, l,l'\leq L\right)\right\|_{\infty}\\
+&K^{-1}\left\|\left(\sum_s n_s^{-1}\sum_i \left(Y_k^\ell(x_{s,i})Y_{k'}^{\ell'}(x_{s,i}) - \langle Y_k^\ell, Y_{k'}^{\ell'}\rangle\right);k,k'\leq K, l,l'\leq L\right)\right\|_{\infty}
\end{aligned}
\label{eq:bound-inter1}$$ By assumption A1, we have $$\left|\hat Y_k^\ell(x_{s,i})\left(\hat Y_{k'}^{\ell'}(x_{s,i}) - Y_{k'}^{\ell'}(x_{s,i})\right)\right|\leq M_1|\hat Y_{k'}^{\ell'}(x_{s,i}) - Y_{k'}^{\ell'}(x_{s,i})|,~a.e.$$ Combined with assumption A2, we have $$\mathbb E\left\|\left(\hat Y_k^\ell(x_{s,i})\left(\hat Y_{k'}^{\ell'}(x_{s,i}) - Y_{k'}^{\ell'}(x_{s,i})\right);k,k'\leq K, l,l'\leq L\right)\right\|_{\infty} \leq M_1M_2 (\min_k n_k)^{-\min_\ell p_\ell}.$$ The same upper bound holds for the second term on the right-hand side of (\[eq:bound-inter1\]).
Define the vector $\alpha_{s,i} = \left( Y_{k}^{\ell}(x_{s,i})Y_{k'}^{\ell'}(x_{s,i}) - \langle Y_{k}^{\ell}, Y_{k'}^{\ell'}\rangle;k,k'\leq K, l,l'\leq L\right)$. Invoking Lemma 2.1 in [@juditsky2000functional], we have $$\begin{aligned}
W\left(\sum_{s=1}^K n_s^{-1}\sum_{i=1}^{n_s} \alpha_{s,i}\right) \leq &W\left(\sum_{s=1}^{K-1} n_s^{-1}\sum_{i=1}^{n_s} \alpha_{s,i}\right) + (n_{K}^{-1}\sum_{i=1}^{n_K} \alpha_{K,i})^\intercal\nabla W\left(\sum_{s=1}^{K-1}n_s^{-1}\sum_i\alpha_{s,i}\right) \\
+&c^*(M)\|n_K^{-1}\sum_i\alpha_{K,i}\|_{\infty}^2,\end{aligned}$$ where $M=K^2L^2$, $c^*(M) = 4e\log M$, $W(z) = 1/2\|z\|_q^2:\mathbb R^M\to\mathbb R$ and $q=2\log M$. It follows that $$\mathbb E\left[W\left(\sum_{s=1}^K n_s^{-1}\sum_{i=1}^{n_s} \alpha_{s,i}\right)\right] \leq \mathbb E\left[W\left(\sum_{s=1}^{K-1} n_s^{-1}\sum_{i=1}^{n_s} \alpha_{s,i}\right)\right] + c^*(M)\mathbb E \|n_K^{-1}\sum_i\alpha_{K,i}\|_{\infty}^2,
\label{eq:recur}$$ since $\alpha_{k,i}$ and $\alpha_{k',i}$ are independent when $k\neq k'$ and $\mathbb E(\alpha_{k,i})=0$. The inequality in (\[eq:recur\]) gives a recursive relationship; applying it repeatedly $K$ times, we get $$\mathbb E\left[W\left(\sum_{s=1}^K n_s^{-1}\sum_{i=1}^{n_s} \alpha_{s,i}\right)\right] \leq c^*(M)\sum_{s=1}^K \mathbb E\|n_s^{-1}\sum_{i}\alpha_{s,i}\|_{\infty}^2.$$
By assumptions A1 and A2 again, we have $
\left|Y_{k}^{\ell}(x_{s,i})Y_{k'}^{\ell'}(x_{s,i}) - \langle Y_{k}^{\ell},Y_{k'}^{\ell'} \rangle\right|\leq 2M_1^2,~a.e.
$ Therefore, $$\mathbb E\left[W\left(\sum_{s=1}^K n_s^{-1}\sum_{i=1}^{n_s} \alpha_{s,i}\right)\right] \leq c^*(M)4KM_1^4 = 32e\log(KL)KM_1^4.$$ Since $W(z)\geq 1/2\|z\|_{\infty}^2$, it follows $$K^{-1}\mathbb E\|\sum_s n_s^{-1}\sum_i \alpha_{s,i}\|_{\infty}\leq K^{-1}\sqrt{32e\log(KL)KM_1^4} = 4\sqrt{2e}M_1^2\sqrt{\log(KL)/K}.$$
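The final step uses $W(z)\geq \tfrac12\|z\|_\infty^2$ together with the fact that for $q = 2\log M$ the $\ell_q$-norm tracks the sup-norm within a factor $M^{1/q}=\sqrt e$. A quick numerical illustration of this norm sandwich (the dimension and seed are arbitrary):

```python
import numpy as np

M = 10**4
q = 2 * np.log(M)                    # q = 2 log M, so M**(1/q) = sqrt(e)
z = np.random.default_rng(3).standard_normal(M)

norm_q = np.sum(np.abs(z) ** q) ** (1.0 / q)
norm_inf = np.max(np.abs(z))

# ||z||_inf <= ||z||_q <= M^{1/q} ||z||_inf = sqrt(e) ||z||_inf,
# so W(z) = 0.5 ||z||_q^2 is a smooth surrogate dominating 0.5 ||z||_inf^2
assert norm_inf <= norm_q <= np.sqrt(np.e) * norm_inf
```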
The above steps also apply for the bound on $\mathbb E\|\tilde\Sigma - \Sigma\|_\infty$ by noting that $$\resizebox{1 \textwidth}{!}
{
$ \frac{K\left\|\sum\limits_{s\neq k, k'} n_s^{-1}\sum_i \left(\hat Y_k^\ell(x_{s,i}) \hat Y_{k'}^{\ell'}(x_{s,i}) - \langle Y_k^\ell, Y_{k'}^{\ell'}\rangle\right)\right\|_{\infty}}{(K-1)^2}
\leq 2\frac{\left\|\sum\limits_{s\neq k, k'} n_s^{-1}\sum_i \left(\hat Y_k^\ell(x_{s,i}) \hat Y_{k'}^{\ell'}(x_{s,i}) - \langle Y_k^\ell, Y_{k'}^{\ell'}\rangle\right)\right\|_{\infty}}{K-1-\mathbb I(k\neq k')}.$
}$$
We then prove similar bounds for $\mathbb E\|b - \hat b\|_{\infty}$ and $\mathbb E\|b - \tilde b\|_\infty$.
If assumptions A1 and A2 hold, we have the following bounds for $\mathbb E\|b - \hat b\|_{\infty}$ and $\mathbb E\|b - \tilde b\|_\infty$. $$\begin{gathered}
\mathbb E\|b - \hat b\|_{\infty} \leq (M_1+\sigma)M_2(\min_k n_k)^{-\min_\ell p_\ell} + 8\sqrt{e}(2M_1^2 + M_1\sigma)\sqrt{\log(KL)/K},\\
\mathbb E\|b - \tilde b\|_{\infty} \leq (M_1+\sigma)M_2(\min_k n_k)^{-\min_\ell p_\ell} + 8\sqrt{e}(2M_1^2 + M_1\sigma)\sqrt{\log((K-1)L)/(K-1)}.
\end{gathered}$$ \[lm:b-bound\]
Note that $$\begin{aligned}
\|\hat b_{k,\ell} - b_{k,\ell}\|_{\infty} \leq &K^{-1}\left\|\sum_s n_s^{-1}\sum_i \hat Y_{k}^{\ell}(x_{s,i})y_{s,i} - Y_{k}^{\ell}(x_{s,i})y_{s,i}\right\|_{\infty}\\
+ &K^{-1}\left\|\sum_s n_s^{-1}\sum_i Y_{k}^{\ell}(x_{s,i})y_{s,i} - Y_{k}^{\ell}(x_{s,i})f_0(x_{s,i})\right\|_{\infty}\\
+ &K^{-1}\left\|\sum_s n_s^{-1}\sum_iY_{k}^{\ell}(x_{s,i})f_0(x_{s,i}) - \langle Y_k^\ell, f_0 \rangle\right\|_\infty.\end{aligned}$$
Following the same steps as in the proof of Lemma 2, using assumptions A1 and A2 as well as Lemma 2.1 in [@juditsky2000functional], we have $$\begin{aligned}
\mathbb E\|\hat b - b\|_{\infty} \leq &(M_1+\sigma)M_2(\min_k n_k)^{-\min_\ell p_\ell} + (2M_1^2+M_1\sigma)\left(1+\sqrt{4e\log(KL)(K-1)}\right)K^{-1} \\
+&4M_1^2K^{-1/2}\sqrt{e\log(KL)}\\
\leq& (M_1+\sigma)M_2(\min_k n_k)^{-\min_\ell p_\ell} + 8\sqrt{e}(2M_1^2+M_1\sigma)\sqrt{\log(KL)}K^{-1/2}\end{aligned}$$
The proof is completed by noting that $$\begin{aligned}
&\frac{K}{K-1}\left(K^{-1} \sum_{s\neq k} n_s^{-1}\sum_i \hat Y_{k}^{\ell}(x_{s,i})y_{s,i} - Y_{k}^{\ell}(x_{s,i})y_{s,i} \right)\\
=&\frac{1}{K-1} \left(\sum_{s\neq k} n_s^{-1}\sum_i \hat Y_{k}^{\ell}(x_{s,i})y_{s,i} - Y_{k}^{\ell}(x_{s,i})y_{s,i} \right).\end{aligned}$$
Combining the results in Lemma \[lm:sig-bound\] and Lemma \[lm:b-bound\] we get the results in Proposition \[prop:bound\].
Proof of Proposition 3
======================
Let $\lim_{n\to\infty} \hat U(w;K^{-1}{\mathbf{1}}_K) = \hat U_0(w)$ and $\lim_{n\to\infty}\tilde U(w;K^{-1}{\mathbf{1}}_K) = \tilde U_0(w)$. The quadratic and linear coefficients for $\hat U_0$ and $\tilde U_0$ are $$\begin{gathered}
\hat\Sigma_0 = \lim_{n\to\infty} \hat \Sigma = \left[\langle Y_k^\ell, Y_{k'}^{\ell'}\rangle;k,k'\leq K,\ell,\ell'\leq L\right],\\
\tilde\Sigma_0 = \lim_{n\to\infty} \tilde \Sigma = \frac{K}{(K-1)^2}\left[(K-1-\mathbb I(k\neq k'))\langle Y_k^\ell, Y_{k'}^{\ell'}\rangle;k,k'\leq K, \ell,\ell'\leq L\right],\\
\hat b_0 = \lim_{n\to\infty} \hat b = (\langle Y_k^\ell, \bar Y^\ell\rangle;k\leq K, \ell \leq L),\\
\tilde b_0 = \lim_{n\to\infty} \tilde b = (\langle Y_k^\ell, \bar Y_{-k}^\ell\rangle;k\leq K, \ell \leq L),\end{gathered}$$ where $\bar Y^\ell = K^{-1}\sum_k Y^\ell_k$ and $\bar Y^\ell_{-k} = (K-1)^{-1}\sum_{k'\neq k} Y^\ell_{k'}$. By assumption A2 and the proof of Lemma 2, we know that $$\mathbb E\left|\hat U(\hat w;K^{-1}{\mathbf{1}}_K) - \hat U_0(\hat w)\right|\leq Cn^{-\min_\ell p_\ell}.$$ Since both $\hat U$ and $\tilde U$ are smooth in $w$, a Taylor expansion together with assumptions A1 and A2 gives $$|\psi(\hat w) - \psi(\hat w_0)|\leq C^*n^{-\min_\ell p_\ell},$$ where $\hat w_0 = \operatorname*{arg\,max}_{w\in W} \hat U_0(w)$. The same bound applies to $|\psi(\tilde w) - \psi(\tilde w_0)|$. Therefore, we can focus on studying the difference $\psi(\hat w_0) - \psi(\tilde w_0)$.
Using the results of Lemma \[lm:b-bound\], we can find an upper bound for $\psi(\tilde w_0) - \psi(w_g^0)$: $$\mathbb E\left(\psi(\tilde w_0) - \psi(w_g^0)\right) \leq 8\sqrt{e}(2M_1^2 + M_1 \sigma_f)\sqrt{\log((K-1)L)}(K-1)^{-1/2},$$ where $\sigma_f^2 = \int_{f} \left(\int_x f(x)dF_X(x) - \int_x f_0(x)dF_X(x)\right)^2dF(f)$.
We now find a lower bound for $\mathbb E(\psi(\hat w_0) - \psi(w^0_g))$. We invoke Theorem 3.1 in [@juditsky2000functional], noting that $\hat U_0$ is induced by a stacking problem with $KL$ observed samples, where the noise associated with each observation is the deviation of $f_k$ from $f_0$. For an appropriately chosen large constant $\kappa$, the theorem indicates that $$\mathbb E(\psi(\hat w_0)) - \mathbb E(\psi(w^0_g)) \geq \kappa M_1 \sigma_f \sqrt{\log(KL)}(KL)^{-1/2}.$$ Therefore, if $$8\sqrt{e}(2M_1^2 + M_1 \sigma_f)\sqrt{\log((K-1)L)}((K-1)L)^{-1/2} \leq \kappa M_1 \sigma_f \sqrt{\log(KL)}(KL)^{-1/2},$$ then $\mathbb E\left(\psi(\hat w_0) - \psi(\tilde w_0)\right) + C^{*}n^{-\min_\ell p_\ell} \geq 0$.
---
abstract: 'We present an extended analysis of the wave-vector dependent shear viscosity of monatomic and diatomic (liquid chlorine) fluids over a wide range of wave-vectors and for a variety of state points. The analysis is based on equilibrium molecular dynamics simulations, which involves the evaluation of transverse momentum density and shear stress autocorrelation functions. For liquid chlorine we present the results in both atomic and molecular formalisms. We find that the viscosity kernel of chlorine is statistically indistinguishable with respect to atomic and molecular formalisms. The results further suggest that the real space viscosity kernels of monatomic and diatomic fluids depends sensitively on the density, the potential energy function and the choice of fitting function in reciprocal space. It is also shown that the reciprocal space shear viscosity data can be fitted to two different simple functional forms over the entire density, temperature and wave-vector range: a function composed of *n*-Gaussian terms and a Lorentzian type function. Overall, the real space viscosity kernel has a width of 3 to 6 atomic diameters which means that the generalized hydrodynamic constitutive relation is required for fluids with strain rates that vary nonlinearly over distances of the order of atomic dimensions.'
author:
- 'R. M. Puscasu'
- 'B. D. Todd'
- 'P. J. Daivis'
- 'J. S. Hansen'
title: An extended analysis of the viscosity kernel for monatomic and diatomic fluids
---
Introduction \[sect:intro\]
===========================
Fluid dynamics at atomic and molecular scales still presents a challenge for theoreticians as well as experimentalists. Molecular dynamics (MD) is a computational tool that has contributed significantly to the fundamental understanding of these systems by providing information about processes not directly approachable by experimental studies. A central problem in the study of fluids at such small length and time scales is the computation of meaningful transport properties. Many equilibrium molecular dynamics (EMD) as well as nonequilibrium molecular dynamics (NEMD) simulations of nanofluids have been performed since the early 1980s [@bitsanis_1988; @bitsanis_1990; @oconnell_1995; @koplik_1995; @nie_2003]. In most of these simulations the stress was treated as dependent on the local strain rate rather than on the entire strain rate distribution in the system. Todd *et al.* have recently shown that in all but the simplest flows (e.g. planar Couette and Poiseuille flows), and for velocity fields with high gradients in the strain rate over the width of the real space viscosity kernel, non-locality can play a significant role [@todd_2008; @todd_2008_a]. In the case of a homogeneous fluid, a local viscosity defined by Newton’s viscosity law as $$P_{xy}(\mathbf{r},t)= -\eta_{0}\int\limits_{0}^{t} \int\limits_{-\infty}^{\infty} \delta(\mathbf{r}-\mathbf{r}',t-t')\dot{\gamma}(\mathbf{r}',t') d\mathbf{r}'dt' \label{eqn:homo_localceq}$$ even exhibits singularities at points where the strain rate is zero [@travis_1997; @travis_2000; @zhang_2004; @zhang_2005]. In Eq. (\[eqn:homo\_localceq\]) $P_{xy}(\mathbf{r},t)$ represents the $(x,y)$ off-diagonal component of the pressure tensor, $\dot {\gamma}(\mathbf{r},t)$ is the shear strain rate at position $\mathbf{r}$ and time $t$, and $\eta_{0}$ is the local shear viscosity.
In general, a nonlocal constitutive equation that allows for spatial and temporal non-locality can be expressed as [@alley_1983; @evans_1990] $$P_{xy}(\mathbf{r},t)= -\int\limits_{0}^{t} \int\limits_{-\infty}^{\infty}\eta(\mathbf{r}-\mathbf{r}',t-t')
\dot{\gamma}(\mathbf{r}',t') d\mathbf{r}'dt' , \label{eqn:homo_nonlocalceq}$$ for a homogeneous fluid. In the situation where the strain rate is constant in time and only varies with respect to the spatial coordinate $y$, Eq. (\[eqn:homo\_nonlocalceq\]) can be written as $$P_{xy}(y)= -\int\limits_{-\infty}^{\infty} \eta(y-y')\dot{\gamma}(y') dy' .
\label{eqn:srcttime_homo_nonlocalceq}$$ In reciprocal space Eq. (\[eqn:srcttime\_homo\_nonlocalceq\]), can be expressed as $$\tilde{P}_{xy}(k_{y})= - \tilde{\eta}(k_{y}) \tilde{\dot{\gamma}}(k_{y}) ,
\label{eqn:kspace_homo_nonlocalceq}$$ where $k_{y}$ is the $y$ component of the wave-vector as defined later in Section \[sect:III\]. Such a constitutive equation is expected to be necessary for the description of flows in highly confined systems, due to the large change in the strain rate with position in the vicinity of the wall [@travis_1997].
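The equivalence of the real-space convolution in Eq. (\[eqn:srcttime\_homo\_nonlocalceq\]) and the reciprocal-space product in Eq. (\[eqn:kspace\_homo\_nonlocalceq\]) can be illustrated on a discrete periodic grid; the Gaussian kernel and sinusoidal strain-rate profile below are illustrative choices, not simulation data.

```python
import numpy as np

Ny, L = 512, 40.0
dy = L / Ny
y = -L / 2 + dy * np.arange(Ny)

eta = np.exp(-y**2)                # illustrative Gaussian kernel eta(y)
gdot = np.sin(2 * np.pi * y / L)   # smooth periodic strain-rate profile

# direct real-space convolution using minimum-image (periodic) separations
d = (y[:, None] - y[None, :] + L / 2) % L - L / 2
P_real = -(np.exp(-d**2) @ gdot) * dy

# reciprocal-space route: product of transforms, then inverse FFT
# (ifftshift aligns the kernel's center y = 0 with index 0)
eta_k = np.fft.fft(np.fft.ifftshift(eta)) * dy
P_from_k = np.real(np.fft.ifft(-eta_k * np.fft.fft(gdot)))

assert np.allclose(P_real, P_from_k)
```

The two routes agree to machine precision, which is exactly the content of the convolution theorem underlying Eq. (\[eqn:kspace\_homo\_nonlocalceq\]).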
The best available theoretical predictions of the wave-vector dependent viscosity are based on mode-coupling theory and generalized Enskog theory [@leutheusser_1982_1; @leutheusser_1982_2; @yip_1982]. However, these theories do not quantitatively agree with data obtained via computer simulations [@alley_1983]. The theoretical predictions focus on the transverse momentum density autocorrelation function, which is found by an iterative numerical solution of a system of nonlinear equations. Consequently, the theories do not result in analytical expressions for the correlation functions or the wave-vector dependent transport coefficients, which are the focus of the present study. More recently, a modified collective mode approach has been successfully applied by Omelyan *et al.* [@omelyan_2005] to the TIP4P model of water. In contrast to other semi-phenomenological approaches, used for instance in TIP4P and SPC/E models of water by Bertolini *et al.* [@bertolini_1995] and Palmer [@palmer_1994], Omelyan *et al.* reproduced the reciprocal space kernel using a relatively small number of modes.
In this paper, we extend the work done by Hansen *et al.* [@hansen_2007_2] and focus on computing the spatially non-local viscosity kernel for monatomic and diatomic fluids over a wider range of wave-vectors, state points and potential energy functions. We are specifically interested in identifying functional forms that fit the reciprocal space kernel data. On the basis of these results, we are able to assess the length scale (i.e. the width of the real space kernel) over which the governing generalized constitutive relation Eq. (\[eqn:srcttime\_homo\_nonlocalceq\]) must be used. We expect non-local transport phenomena to be relevant in shock waves [@alley_1983; @holian_1980; @holian_1998; @reed_2006; @reed_2003], shear banding [@dhont_1999], flows of micellar solutions [@masselon_2008], suspensions of rigid fibers [@schiek_1995] and jammed or glassy systems [@bocquet_2008].
This paper is structured as follows: In Section \[sect:IIa\] we give an overview of the general formulation and the expressions for the complex wave-vector and frequency dependent viscosity are given. In Section \[sect:III\] we describe the simulation methodology and conditions. In Section \[sect:IV\] we present our molecular dynamics simulation data and compare the results of our monatomic and diatomic viscosity kernels, particularly the shape of the kernels. Finally, we summarize and conclude our analysis in Section 5.
Methodology \[sect:method\]
===========================
We shall briefly introduce the main conceptual background used in this work, namely, the wave-vector dependent momentum density, stress and viscosity in the atomic and molecular formulations.
Wave-vector dependent momentum density for atomic and molecular fluids \[sect:IIa\]
-----------------------------------------------------------------------------------
For a single component atomic fluid the real space microscopic momentum density is given by [@evans_1990] $$\mathbf{J}(\mathbf{r},t)= \rho_{a}(\mathbf{r},t)\mathbf{v}(\mathbf{r},t)= \sum_{i=1}^{N_{a}} m_{i} \mathbf{v}_{i}(t)\delta(\mathbf{r}-\mathbf{r}_{i}) \label{eqn:af_atomic_momentum_density}$$ where $\rho_{a}(\mathbf{r},t)=\sum_{i=1}^{N_{a}} m_{i} \delta(\mathbf{r}-\mathbf{r}_{i})$ is the mass density, $m_i$, $\mathbf{r}_{i}$ and $\mathbf{v}_{i}$ are the mass, position and velocity of atom $i$. The summation runs over the number of atoms $N_a$ in the system. The Fourier transform of the momentum density is $$\tilde{\mathbf{J}}(\mathbf{k},t)= \sum_{i=1}^{N_{a}} m_{i} \mathbf{v}_{i}(t) e^{i\mathbf{k}\cdot\mathbf{r}_{i}} \label{eqn:af_atomic_momentum_density_kspace}$$ while the Fourier transform of the mass density is $\tilde{\rho}_{a}(\mathbf{k},t)=\sum_{i=1}^{N_{a}} m_{i} e^{i\mathbf{k}\cdot\mathbf{r}_{i}}$. We define the Fourier transform of a function $f(r)$ as $\mathcal{F}[f(r)]=\tilde{f}(k)=\int_{-\infty}^{\infty}e^{ikr}f(r)dr$.
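Eq. (\[eqn:af\_atomic\_momentum\_density\_kspace\]) translates directly into code. A minimal sketch with arbitrary particle data follows (function name ours); at $\mathbf{k}=0$ the transform reduces to the total momentum, which makes a convenient sanity check.

```python
import numpy as np

def momentum_density_k(masses, velocities, positions, k):
    """J~(k,t) = sum_i m_i v_i exp(i k . r_i), for a single wave-vector k."""
    phase = np.exp(1j * positions @ k)                     # e^{i k.r_i}, shape (N,)
    return (masses[:, None] * velocities * phase[:, None]).sum(axis=0)

rng = np.random.default_rng(0)
N = 100
m = np.ones(N)                                             # unit masses
r = rng.uniform(0.0, 10.0, size=(N, 3))                    # positions in a box
v = rng.standard_normal((N, 3))                            # velocities

J0 = momentum_density_k(m, v, r, np.zeros(3))              # k = 0
assert np.allclose(J0, (m[:, None] * v).sum(axis=0))       # total momentum
```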
The atomic representation of the momentum density for a molecular fluid can be written in real space as [@todd_2007]: $$\mathbf{J}^{A}(\mathbf{r},t) = \rho_{a}(\mathbf{r},t)\mathbf{v}(\mathbf{r},t)=\sum_{i=1}^{N_{m}}\sum_{\alpha=1}^{N_{s}} m_{i\alpha}\mathbf{v}_{i\alpha}(t)\delta(\mathbf{r}-\mathbf{r}_{i\alpha}) \label{eqn:atomic_momentum_density}$$ where the mass density is defined as $\rho_{a}(\mathbf{r},t)=\sum_{i=1}^{N_{m}}\sum_{\alpha=1}^{N_{s}} m_{i\alpha} \delta(\mathbf{r}-\mathbf{r}_{i\alpha})$. The inner summation extends over the $N_{s}$ mass points in a molecule and the outer summation extends over the number of molecules $N_{m}$ in the system. In general, $N_{s}$ depends on the molecule index $i$ for a multicomponent system, but in our systems $N_{s}$ is the same for all molecules and the interaction sites are assumed to have identical mass, namely $m_{i\alpha}$.
The Fourier transform of the momentum density is $$\tilde{\mathbf{J}}^{A}(\mathbf{k},t)= \sum_{i=1}^{N_{m}}\sum_{\alpha=1}^{N_{s}} m_{i\alpha} \mathbf{v}_{i\alpha}(t)
e^{i\mathbf{k}\cdot\mathbf{r}_{i\alpha}}
\label{eqn:atomic_momentum_density_kspace}$$ where the transformed mass density is $\tilde{\rho}_{a}(\mathbf{k},t)= \sum_{i=1}^{N_{m}}\sum_{\alpha=1}^{N_{s}} m_{i\alpha} e^{i\mathbf{k}\cdot\mathbf{r}_{i\alpha}}$. For molecules composed of $N_{s}$ atoms we can define the mass of molecule $i$, $M_{i}=\sum_{\alpha=1}^{N_{s}} m_{i\alpha}$, position of the molecular center of mass as $\mathbf{r}_{i}=\sum_{\alpha=1}^{N_{s}} m_{i\alpha} \mathbf{r}_{i\alpha} / M_{i}$, position of site $\alpha$ of molecule $i$ relative to the center of mass of molecule $i$ as $\mathbf{R}_{i\alpha}=\mathbf{r}_{i\alpha}-\mathbf{r}_{i}$, and center of mass momentum of the molecule as $\mathbf{p}_{i}=\sum_{\alpha=1}^{N_{s}} \mathbf{p}_{i\alpha}$. This means that the atomic mass density can be written in **k**-space as $\tilde{\rho}_{a}(\mathbf{k},t)=\sum_{i=1}^{N_{m}}\sum_{\alpha=1}^{N_{s}} m_{i\alpha}e^{i\mathbf{k}\cdot(\mathbf{r}_{i}+\mathbf{R}_{i\alpha})}$. If we expand this relation further we can express the atomic mass density in terms of molecular mass density in which we define the mass density in the molecular representation as $\tilde{\rho}_{m}(\mathbf{k},t)=\sum_{i=1}^{N_{m}}M_{i}e^{i\mathbf{k}\cdot \mathbf{r}_{i}}$ in reciprocal space and as $\rho_{m}(\mathbf{r},t)=\sum_{i=1}^{N_{m}}M_{i}\delta(\mathbf{r}-\mathbf{r}_{i})$ in real space, respectively. In a similar way we can expand the atomic momentum density about the molecular center of mass: $\tilde{\mathbf{J}}^{A}(\mathbf{k},t)=\sum_{i=1}^{N_{m}}\sum_{\alpha=1}^{N_{s}} m_{i\alpha} \mathbf{v}_{i\alpha}(1+i\mathbf{k}\cdot\mathbf{R}_{i\alpha}+\dots)e^{i\mathbf{k}\cdot\mathbf{r}_{i\alpha}}$.
The Fourier transform of the momentum density in the molecular representation can then be defined as $$\tilde{\mathbf{J}}^{M}(\mathbf{k},t)= \sum_{i=1}^{N_{m}}M_{i} \mathbf{v}_{i}(t) e^{i\mathbf{k}\cdot\mathbf{r}_{i}} \label{eqn:atomic_momentum_density_kspace_2}$$ A complete procedure for expressing the mass and momentum densities in physical and reciprocal space for atomic and molecular fluids has been discussed in more detail by Todd and Daivis [@todd_2007].
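The atomic and molecular representations can be compared numerically. The sketch below, with arbitrary diatomic configurations (all names and sizes are ours), confirms that $\tilde{\mathbf{J}}^{A}$ and $\tilde{\mathbf{J}}^{M}$ coincide at $\mathbf{k}=0$, with deviations entering at $O(\mathbf{k}\cdot\mathbf{R}_{i\alpha})$ as in the expansion above.

```python
import numpy as np

rng = np.random.default_rng(1)
Nm, Ns = 50, 2                                     # 50 diatomic molecules
m = np.full((Nm, Ns), 1.0)                         # site masses m_{i alpha}
r = rng.uniform(0.0, 10.0, (Nm, Ns, 3))            # site positions r_{i alpha}
v = rng.standard_normal((Nm, Ns, 3))               # site velocities

M = m.sum(axis=1)                                  # molecular masses M_i
r_cm = (m[..., None] * r).sum(axis=1) / M[:, None] # centres of mass
v_cm = (m[..., None] * v).sum(axis=1) / M[:, None] # centre-of-mass velocities

def J_atomic(k):
    phase = np.exp(1j * (r.reshape(-1, 3) @ k))
    return (m.reshape(-1, 1) * v.reshape(-1, 3) * phase[:, None]).sum(axis=0)

def J_molecular(k):
    phase = np.exp(1j * (r_cm @ k))
    return (M[:, None] * v_cm * phase[:, None]).sum(axis=0)

# both representations give the total momentum at k = 0
assert np.allclose(J_atomic(np.zeros(3)), J_molecular(np.zeros(3)))
```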
Wave-vector dependent pressure tensor \[sect:IIb\]
--------------------------------------------------
For a monatomic system the wave-vector dependent pressure tensor is defined as [@todd_2007] $$\tilde{\mathbf{P}}(\mathbf{k},t)= \sum_{i=1}^{N} \frac{\mathbf{p}_{i} \mathbf{p}_{i}}{m_{i}}e^{i\mathbf{k} \cdot \mathbf{r}_{i}}-\frac{1}{2} \sum_{i=1}^{N}\sum_{j\ne i}^{N} \mathbf{r}_{ij} \mathbf{F}_{ij}g(\mathbf{k}) e^{i\mathbf{k} \cdot \mathbf{r}_{i}} \label{eqn:atomic_atomic_pressure_tensor}$$ where $\mathbf{F}_{ij}$ is the force on atom $i$ due to atom $j$ and $g(\mathbf{k})=(e^{i\mathbf{k} \cdot \mathbf{r}_{ij}}-1)/i\mathbf{k} \cdot \mathbf{r}_{ij}=\sum_{n=0}^{\infty}(i\mathbf{k}\cdot \mathbf{r}_{ij})^{n}/{(n+1)!}$ is the Fourier transform of the Irving-Kirkwood $O_{ij}$ operator [@irving_1950].
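The operator $g(\mathbf{k})$ can be evaluated either from its closed form or from the series expansion quoted above. The following sketch (our own illustration; `r` stands for a pair separation $\mathbf{r}_{ij}$) confirms that the two expressions agree and that $g(\mathbf{k})\to 1$ as $\mathbf{k}\to 0$, recovering the usual virial form of the pressure tensor in that limit.

```python
import numpy as np
from math import factorial

def g_closed(k, r):
    """g(k) = (exp(i k.r) - 1) / (i k.r); assumes k.r is non-zero."""
    x = 1j * np.dot(k, r)
    return (np.exp(x) - 1.0) / x

def g_series(k, r, n_terms=25):
    """Equivalent series: sum_{n>=0} (i k.r)^n / (n+1)!,
    which is well behaved at k = 0 (first term is 1)."""
    x = 1j * np.dot(k, r)
    return sum(x**n / factorial(n + 1) for n in range(n_terms))
```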
For a molecular system the molecular pressure tensor is the pressure calculated using the intermolecular forces and the molecular center of mass momenta. The atomic pressure tensor, on the other hand, includes all atomic momenta and all interatomic forces and constraint forces. Thus the wave-vector dependent pressure tensor for a constrained diatomic fluid can be written in the atomic representation as $$\begin{aligned}
\tilde{\mathbf{P}}^{A}(\mathbf{k},t)&=&\sum_{i=1}^{N_m}\sum_{\alpha=1}^{2} \frac{\mathbf{p}_{i\alpha} \mathbf{p}_{i\alpha}}{m_{i\alpha}}e^{i\mathbf{k} \cdot \mathbf{r}_{i\alpha}} \nonumber \\
&-&\frac{1}{2} \sum_{i=1}^{N_m} \sum_{\alpha=1}^{2} \sum_{j\neq i}^{N_m} \sum_{\beta=1}^{2} \mathbf{r}_{i\alpha j\beta} \mathbf{F}_{i\alpha j\beta}
g(\mathbf{k}) e^{i\mathbf{k} \cdot \mathbf{r}_{i\alpha}} \nonumber \\
&+&\sum_{i=1}^{N_m} \sum_{\alpha=1}^{2} \mathbf{r}_{i\alpha} \mathbf{F}_{i\alpha}^{C}
g(\mathbf{k}) e^{i\mathbf{k} \cdot \mathbf{r}_{i\alpha}}
\label{eqn:atomic_pressure_tensor}\end{aligned}$$ where $\mathbf{F}_{i\alpha j \beta}$ is the LJ force acting on site $\alpha$ of molecule $i$ due to site $\beta$ of molecule $j$, and $\mathbf{F}_{i\alpha}^{C}$ is the bond constraint force on site $\alpha$ of molecule $i$. $\mathbf{r}_{i\alpha j \beta} =\mathbf{r}_{j\beta}-\mathbf{r}_{i\alpha}$ is the minimum image separation of site $\alpha$ of molecule $i$ from site $\beta$ of molecule $j$.
The pressure tensor for a diatomic molecule in the molecular representation is defined as $$\begin{aligned}
\tilde{\mathbf{P}}^{M}(\mathbf{k},t)= \sum_{i=1}^{N_m} \frac{\mathbf{p}_{i} \mathbf{p}_{i}} {M_{i}}e^{i\mathbf{k} \cdot \mathbf{r}_{i}} -\frac{1}{2} \sum_{i=1}^{N_m} \sum_{j\ne i}^{N_m} \mathbf{r}_{ij} \mathbf{F}_{ij}^{N}g(\mathbf{k}) e^{i\mathbf{k} \cdot \mathbf{r}_{i}} \label{eqn:mol_pressure_tensor}\end{aligned}$$ where $\mathbf{F}_{ij}^{N}$ represents the total intermolecular force on molecule $i$ due to molecule $j$, and $\mathbf{r}_{ij}=\mathbf{r}_{j}-\mathbf{r}_{i}$ is the minimum image separation of the center of mass of molecule $i$ from the center of mass of molecule $j$. In cases where two sites on two different periodic images of the same molecule interact, the value of $\mathbf{r}_{ij}$ corresponding to the particular images of molecules $i$ and $j$ entering $\mathbf{F}_{i\alpha j\beta}$ must be used. Though this is unlikely to happen in diatomic fluids, it is particularly important in simulations of long molecules. The momenta appearing in these equations, $\mathbf{p}_{i\alpha}$ and $\mathbf{p}_{i}$, are those appearing in the respective atomic and molecular equations of motion, Eqs. (\[eqn:atomic\_hamilton\_eqm\_isokinetic\]) and (\[eqn:mol\_hamilton\_eqm\_isokinetic\]).
Wave-vector and frequency dependent viscosity \[sect:IIc\]
----------------------------------------------------------
The complex wave-vector and frequency dependent viscosity can be evaluated by using two different expressions in terms of the Fourier-Laplace transform of the transverse momentum density autocorrelation function (ACF), $C_{\perp}(\mathbf{k},t)$, and the Fourier-Laplace transform of the stress tensor autocorrelation function, $N(\mathbf{k},t)$ [@evans_1990]. We define the complex Laplace transform (one-sided Fourier transform) as $\mathcal{L} [f(t)]=\tilde{f}(\omega)=\int_{0}^{\infty}f(t)e^{-i\omega t}dt$. We also note that, for simplicity and consistency with the notation used in previous publications, in what follows we drop the tilde over the Fourier transformed correlation functions and keep the tilde only over the Fourier-Laplace transformed correlation functions. If we set $\mathbf{k}=(0,k_{y},0)$ and $J_{x}$ is the component of the momentum density in the *x* direction, the expression for the wave-vector and frequency dependent viscosity in terms of $\tilde{C}_{\perp}(k_{y},\omega)$ takes the form [@evans_1990]: $$\tilde{\eta}(k_{y},\omega)=\frac{\rho}{k_{y}^{2}}\frac{C_{\perp}(k_{y},t=0)-i\omega \tilde{C}_{\perp}(k_{y},\omega)}{\tilde{C}_{\perp}(k_{y},\omega)} \label{eqn:etakw}$$ where $\rho$ is the number density of the fluid and $\tilde{C}_{\perp}(k_{y},\omega)$ is the Laplace transform of the ensemble averaged transverse momentum density autocorrelation function $C_{\perp}(k_{y},t)$, which is defined as $$C_{\perp}(k_{y},t)=\frac{1}{V}\Big\langle J_{x}(k_{y},t) J_{x}(k_{y},t=0)\Big\rangle . \label{eqn:eqtcacf}$$ The zero time value of $C_{\perp}(k_{y},t=0)$ in the thermodynamic limit is $$C_{\perp}(k_{y},t=0) = \rho k_{B}T . \label{eqn:zteqtcacf}$$ $k_{B}$ is Boltzmann’s constant. 
Because the total peculiar kinetic energy and three components of the momentum are constants of the motion in our simulations, the value of $C_{\perp}(k_{y},t=0)$ obtained from simulation differs slightly from Eq. (\[eqn:zteqtcacf\]) and is instead given by $$C_{\perp}(k_{y},t=0) = \rho k_{B}T \frac{3N-4}{3N} . \label{eqn:ezteqtcacf}$$ Since Eq. (\[eqn:zteqtcacf\]) does not account for the number of degrees of freedom in the simulated system, we use the simulated value in our calculations to ensure numerical consistency of the computed properties.
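As a concrete illustration of Eq. (\[eqn:etakw\]), the sketch below evaluates $\tilde{\eta}(k_y,\omega)$ from a sampled ACF. It uses the trapezoidal rule for the one-sided Fourier transform, a simplified stand-in for the Filon's-rule transform used later in the paper, and the function names are our own. For a model exponential ACF, $C_\perp(t)=C_0 e^{-t/\tau}$, the formula reduces to $\eta=\rho/(k_y^{2}\tau)$ at every frequency, which provides a convenient consistency check.

```python
import numpy as np

def laplace_ft(f, t, omega):
    """One-sided Fourier (Laplace) transform int_0^inf f(t) e^{-i w t} dt,
    approximated here by the trapezoidal rule."""
    g = f * np.exp(-1j * omega * t)
    return np.sum((g[1:] + g[:-1]) * 0.5 * np.diff(t))

def eta_from_cperp(c_perp, t, k_y, omega, rho):
    """Eq. (etakw): eta(k,w) = (rho/k^2) [C(0) - i w C~(w)] / C~(w)."""
    c_tilde = laplace_ft(c_perp, t, omega)
    return (rho / k_y**2) * (c_perp[0] - 1j * omega * c_tilde) / c_tilde
```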
The expression for the wave-vector and frequency dependent viscosity in terms of the autocorrelation function of the shear stress $\tilde{N}(k_{y},\omega)$ takes the form: $$\tilde{\eta}(k_{y},\omega)=\frac{\tilde{N}(k_{y},\omega)}{C_{\perp}(k_{y},t=0)/{\rho}{k_B}T - k_{y}^{2} \tilde{N}(k_{y},\omega)/i\omega\rho}
\label{eqn:etakw_N}$$ where $$\tilde{N}(k_{y},\omega)=\frac{1}{Vk_{B}T}\mathcal{L} \Big[ \Big \langle P_{xy}(k_{y},t) P_{xy}(k_{y},0) \Big\rangle \Big] . \label{eqn:eqnacf}$$ In the zero wave-vector limit, a generalization of the Green-Kubo expression for the shear viscosity allows the transverse momentum flux to be in an arbitrary direction rather than along a coordinate axis and can be written in terms of the stress tensor as [@hansen_1986; @daivis_1994]: $$\eta=\frac{V}{10k_{B}T}\int_{0}^{\infty} \Big\langle \mathbf{P}^{os}(t):\mathbf{P}^{os}(0) \Big\rangle dt \label{eqn:eta_isotropic_1}$$ where the *os* superscript denotes the traceless symmetric part of the stress tensor, $\mathbf{P}^{os}(t)=\frac{1}{2}[\mathbf{P}(t)+\mathbf{P}^{T}(t)]-\frac{1}{3}\mathrm{tr}[\mathbf{P}(t)]\mathbf{1}$, and $V$ is the simulation volume. In an isotropic fluid, because the tensor $\mathbf{P}^{os}$ appearing in Eq. (\[eqn:eta\_isotropic\_1\]) is traceless and symmetric, only the traceless symmetric part of the viscosity tensor contributes. Consequently, the shear viscosity can be expressed in terms of the invariant $I$ of the viscosity tensor as $\eta=I/10$.
In the limit $\omega \to 0$, $k_{y} \to 0$, relation (\[eqn:etakw\_N\]) reduces to the Green-Kubo formula [@hansen_1986]. At non-zero wave-vector, the integrals in Eq. (\[eqn:eqnacf\]) converge to zero in the zero-frequency limit [@evans_1990], so the relation in Eq. (\[eqn:etakw\]) must be used when non-zero wave-vector viscosities are calculated. Computing these integrals verifies numerically that the zero-frequency limit of $\tilde{N}(\mathbf{k},\omega)$ vanishes, and demonstrates why neither substitution of $\omega=0$ into Eq. (\[eqn:etakw\_N\]) nor evaluation of Eq. (\[eqn:eqnacf\]) at non-zero wave-vector yields the zero-frequency wave-vector dependent viscosity.
Simulation details \[sect:III\]
===============================
We use the Edberg, Evans, and Morriss algorithm [@edberg_1986; @edberg_1987_2; @daivis_1992] with an improved cell neighbour list construction algorithm [@matin_2003] to perform equilibrium simulations at constant $N$, $V$, $T_{M}$, where $N$ is either the number of atoms or molecules and $T_{M}$ is the molecular temperature as defined by Eq. (\[eqn:molecular\_temp\]). For the atomic fluids studied in this work, the atoms interact via a Lennard-Jones (LJ) or Weeks-Chandler-Andersen (WCA) potential energy function [@weeks_1971]. The LJ interaction potential is truncated at $r_{c}=2.5\sigma$ and the WCA interaction potential is truncated at $r_{c}=2^{1/6}\sigma$ [@weeks_1971]. In general, the pair potential is $$\Phi_{ij}(r_{ij})=
\begin{cases}
4 \epsilon \Bigg[ { \bigg( \frac{\displaystyle \sigma}{\displaystyle r_{ij}} \bigg) }^{12} -
{\bigg( \frac{\displaystyle \sigma}{\displaystyle r_{ij}} \bigg) }^{6} \Bigg] - \Phi_{c}, & r_{ij} < r_{c} \\
0, & r_{ij} \ge r_{c}
\end{cases}
\label{eqn:WCA_pot}$$ where $r_{ij}$ is the interatomic separation, $\epsilon$ is the potential well depth, and $\sigma$ is the value of $r_{ij}$ at which the unshifted potential is zero. The shift $\Phi_{c}$ is the value of the unshifted potential at the cutoff $r_{ij}=r_{c}$, and is introduced to eliminate the discontinuity in the potential energy. At distances greater than the cutoff distance $r_c$, the potential is zero.
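Eq. (\[eqn:WCA\_pot\]) translates directly into code. The sketch below is an illustration in reduced LJ units with our own function name; setting $r_c=2^{1/6}\sigma$ reproduces the purely repulsive WCA special case, and the shift $\Phi_c$ makes the potential continuous at the cutoff.

```python
import numpy as np

def lj_shifted(r, eps=1.0, sigma=1.0, r_c=2.5):
    """Truncated-and-shifted LJ potential, Eq. (WCA_pot).
    r_c = 2**(1/6)*sigma gives the purely repulsive WCA form."""
    r = np.asarray(r, dtype=float)
    sr6 = (sigma / r) ** 6
    src6 = (sigma / r_c) ** 6
    phi = 4.0 * eps * (sr6**2 - sr6)
    phi_c = 4.0 * eps * (src6**2 - src6)  # shift removes the jump at r_c
    return np.where(r < r_c, phi - phi_c, 0.0)
```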
Our diatomic model of liquid chlorine is similar to the one used by Edberg *et al.* [@edberg_1987], Hounkonnou *et al.* [@hounkonnou_1992], Travis *et al.* [@travis_1995; @travis_1995_a] and more recently by Matin *et al.* [@matin_2000; @matin_2000_erratum], allowing a direct comparison of our results with previous work. This model represents chlorine as a diatomic LJ molecule with $r_{c}=2.5\sigma$ and a fixed bond length of $0.63\sigma$. For an adequate representation of the properties of chlorine, the LJ parameters are $\sigma=3.332$ Å and $\epsilon/k_{B}=178.3$ K. Liquid chlorine systems of 108 and 864 molecules are studied at a reduced site number density of $\rho_{a}=1.088$ and a reduced molecular temperature $T_{M}=0.970$. We summarize the most important simulation parameters in Table \[tab:sim\_details\_table\].
\[tab:sim\_details\_table\]
All our simulations were carried out in a cubic box with periodic boundary conditions. The fifth-order Gear predictor corrector algorithm [@gear_1966; @gear_1971] with time step $\delta t=0.001$ was employed to solve the equations of motion. The equations of motion can be written for a monatomic fluid in the isokinetic ensemble (at equilibrium) as [@evans_1990]: $$\mathbf{\dot{r}}_{i}=\frac{\mathbf{p}_{i}}{m_{i}}, \quad \mathbf{\dot{p}}_{i}=\mathbf{F}_{i} -\zeta_{A}\mathbf{p}_{i}
\label{eqn:atomic_hamilton_eqm_isokinetic}$$ where $i$ denotes atom $i$. $\mathbf{r}_{i}$ is the position, $\mathbf{p}_{i}$ is the momentum and $m_{i}$ is the mass of the designated atom. $\mathbf{F}_{i}$ is the force on atom $i$ due to other atoms and $\zeta_{A}$ is the atomic thermostat multiplier. The thermostat multiplier is chosen so as to fix the kinetic temperature. We use the value of $\zeta_{A}$ that results from the application of Gauss’ principle of least constraint to the imposition of constant kinetic temperature: $$\zeta_{A}=\frac{\sum_{i=1}^{N}\mathbf{F}_{i}\cdot \mathbf{p}_{i}}{\sum_{i=1}^{N}\mathbf{p}_{i}^{2}} .
\label{eqn:atomic_thermostat_multiplier}$$ The atomic temperature $T_{A}$ for a system of $N_{a}$ atoms with no internal degrees of freedom is defined as $$T_{A}= \frac{1}{(d N_{a}-{N_c})k_{B}}\bigg\langle \sum_{i=1}^{N_{a}} \frac{\mathbf{p}_{i}^{2}}{m_{i}} \bigg\rangle
\label{eqn:atomic_temp}$$ Here angled brackets denote an ensemble average, $d$ is the dimensionality of the atomic system, $N_c$ is the number of constraints on the system (including constraints for conserved quantities).
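The Gaussian isokinetic multiplier of Eq. (\[eqn:atomic\_thermostat\_multiplier\]) has a simple defining property: with $\mathbf{\dot{p}}_{i}=\mathbf{F}_{i}-\zeta_{A}\mathbf{p}_{i}$, this choice of $\zeta_{A}$ makes $\sum_{i}\mathbf{p}_{i}\cdot\mathbf{\dot{p}}_{i}$ vanish, so the kinetic temperature is a constant of the motion (for equal masses). A minimal sketch, our own illustration:

```python
import numpy as np

def zeta_atomic(forces, momenta):
    """Eq. (atomic_thermostat_multiplier):
    zeta_A = sum_i F_i . p_i / sum_i p_i^2.
    With pdot_i = F_i - zeta_A p_i this enforces sum_i p_i . pdot_i = 0."""
    return np.sum(forces * momenta) / np.sum(momenta**2)
```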
The equations of motion (EOM) for a molecular fluid can be formulated in either atomic or molecular versions. In fact the molecular versions of the homogeneous isothermal EOM with a molecular thermostat at equilibrium are similar to atomic EOM with a molecular thermostat, provided that all of the relevant forces are included [@todd_2007]. The thermostatted EOM for molecular systems are given by [@todd_2007] $$\mathbf{\dot{r}}_{i\alpha}=\frac{\mathbf{p}_{i\alpha}}{m_{i\alpha}}, \quad
\mathbf{\dot{p}}_{i\alpha}=\mathbf{F}_{i\alpha} + \mathbf{F}_{i\alpha}^{C}
- \zeta_{M}\frac{m_{i\alpha}}{M_{i}}\mathbf{p}_{i}
\label{eqn:mol_hamilton_eqm_isokinetic}$$ where $i\alpha$ denotes site $\alpha$ on molecule $i$. $\mathbf{r}_{i\alpha}$ is the position, $\mathbf{p}_{i\alpha}$ is the momentum and $m_{i\alpha}$ is the mass of the site. The force on a site is separated into two terms: $\mathbf{F}_{i\alpha}$ is the contribution due to the Lennard-Jones type interactions on site $\alpha$ of molecule $i$ and $\mathbf{F}_{i\alpha}^{C}$ is either the constraint force or the bonding force. $\zeta_{M}$ is the molecular thermostat multiplier, given by $$\zeta_{M}=\frac{\sum_{i=1}^{N_{m}}\mathbf{F}_{i}\cdot \mathbf{p}_{i}/M_{i}}{\sum_{i=1}^{N_{m}}\mathbf{p}_{i}^{2}/M_{i}}
\label{eqn:mol_thermostat_multiplier_2}$$ where $\mathbf{F}_{i}$ is the total force acting on molecule $i$ and $M_{i}$ is the mass of molecule $i$. $\zeta_{M}$ is derived from Gauss’ principle of least constraint and acts to keep the molecular center of mass kinetic temperature $T_M$ constant. Several algorithms are available for this purpose [@allen_1989]. The molecular temperature $T_M$ is defined by $$T_{M}= \frac{1}{(d N_{m}-N_{c})k_{B}}\bigg\langle \sum_{i=1}^{N_{m}} \frac{\mathbf{p}_{i}^{2}}{M_{i}} \bigg\rangle \label{eqn:molecular_temp}$$ where $d N_{m}$ is the number of translational center-of-mass degrees of freedom and $N_c$ is the number of constraints on them, which depends on the total number of sites and the number of constraints on the system. The details of the constraint algorithm used to calculate $\mathbf{F}_{i\alpha}^{C}$ have been discussed previously [@edberg_1986; @edberg_1987; @morriss_1991]. All the systems were equilibrated for at least $10^6$ time steps. The results from production runs were ensemble averaged over 14 runs, each of length $10^6$ steps (i.e. a total of $1.4 \times 10^7$ time steps). The transverse momentum density ACFs were computed over at least $20$ reduced time units and the stress ACFs were computed over at least $40$ reduced time units. Both the transverse momentum density and stress ACFs were computed at wave-vectors $k_{yn}=2\pi n/L_{y}$, where the mode number $n$ runs from 0 to 40 in increments of 2 and $L_{y}=[N_{a}/ \rho]^{1/3}$. For the remainder of this paper we drop the $n$ index in $k_{yn}$ for simplicity. The ACFs were Laplace transformed with respect to time using Filon’s rule [@allen_1989]. We do not report the frequency dependent viscosities in this work. The wave-vector dependent viscosities were calculated using Eqs. (\[eqn:etakw\]) and (\[eqn:etakw\_N\]): Eq. (\[eqn:etakw\]) was used to obtain the non-zero wave-vector viscosities and Eq. (\[eqn:etakw\_N\]) was used to obtain the zero wave-vector viscosity.
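The transverse momentum density entering $C_{\perp}(k_y,t)$ and the grid of wave-vectors commensurate with the periodic box can be sketched as follows (an illustration with our own function names, not the production code):

```python
import numpy as np

def allowed_wavevectors(L_y, n_max=40, step=2):
    """k_yn = 2 pi n / L_y: wave-vectors commensurate with the periodic box."""
    return 2.0 * np.pi * np.arange(0, n_max + 1, step) / L_y

def transverse_momentum_density(momenta, positions, k_y):
    """J_x(k_y) = sum_i p_{x,i} exp(i k_y y_i): the x-component of the
    momentum density at wave-vector (0, k_y, 0)."""
    return np.sum(momenta[:, 0] * np.exp(1j * k_y * positions[:, 1]))
```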
In this work, all quantities are expressed in reduced Lennard-Jones units. Our reduced units are: length $r^{*}=r/ \sigma$, number density $\rho^{*}=\rho \sigma^{3}$, temperature $T^{*}=k_{B} T/ \epsilon$, time $t^{*}=t/[\sigma (m/ \epsilon)^{1/2}]$, pressure $\mathbf{P}^{*}=\mathbf{P}(\sigma^{3}/\epsilon)$, energy $E^{*}=E/\epsilon$ and viscosity $\eta^{*}=\eta\sigma^{2}/ \sqrt{m\epsilon}$. For the remainder of this paper we apply these units and omit the asterisks. We will not distinguish between $T_{M}$ and $T_{A}$, but simply use $T$ to indicate the temperature.
Results and discussion\[sect:IV\]
=================================
The autocorrelation functions were evaluated for both the 108 and 864 molecule systems in order to determine whether the results were system size dependent. No differences were observed within the statistical errors for either the monatomic or the diatomic systems. We also note that, in order to limit the number of figures, we do not display the results for the transverse momentum density and stress autocorrelation functions. For the monatomic systems, however, both quantities were in good agreement with those previously observed for Lennard-Jones monatomic liquids, and their running integrals fully converged, which indicates that the correlation functions decayed to zero.
Viscosity kernels in reciprocal space \[sect:IVb\]
--------------------------------------------------
The reciprocal space kernels for monatomic and diatomic fluids are plotted in figures \[fig:etak\_am\]-\[fig:etak\_2\]. The error bars are smaller than the symbol sizes and are therefore omitted in figures \[fig:etak\_1\] and \[fig:etak\_2\]. Generally, the statistical reliability of the reciprocal space kernel data increases as $k_{y}$ increases.
\[tab:atomic\_param\]
Our zero wave-vector, zero frequency viscosities for monatomic fluids agree well with those available in the literature. For the WCA system at the state point ($\rho_{a}=0.375$, $T=0.765$) we found ${\eta}_{0}=0.27{\pm}0.01$, which agrees with the result of Hansen *et al.* ($0.273$) [@hansen_2007_2; @alley_1983], while at the state point ($\rho_{a}=0.840$, $T=1.0$) we found ${\eta}_{0}=2.29{\pm}0.07$, in agreement with the result of Matin *et al.* ($2.1\pm0.2$) [@matin_2000]. For chlorine we found ${\eta}_{0}=6.89{\pm}0.32$, which agrees with the limiting values ($6.7{\pm}0.4$) of the shear and elongational viscosities at zero strain rate [@matin_2000].
![\[fig:etak\_am\] $\tilde{\eta}(k_{y})$ versus $k_{y}$ for chlorine calculated using atomic and molecular formalisms ($\rho_{a}=1.088$, $T=0.97$, $N_{a}=1728$).](etak_am.eps)

The wave-vector dependent viscosity for diatomic systems, figure \[fig:etak\_am\], shows similar behaviour in the atomic and molecular formalisms within the statistical uncertainty.
It has been shown previously that numerous one parameter functions failed to capture the behaviour of the reciprocal space kernel data [@hansen_2007_2]. We therefore present the best fits with two or more fitting parameters. We have identified two functional forms that fit the data well: an $N_{G}$ term Gaussian function $$\tilde{\eta}_{G}(k_{y})={\eta}_{0} \sum_{j}^{N_{G}} A_{j} \exp(-k^{2}_{y}/2\sigma^{2}_{j}) \qquad A_{j}, \sigma_{j} \in {\mathbb{R}_{+}}
\label{eqn:fit_gauss}$$ and a Lorentzian type function $$\tilde{\eta}_{L}(k_{y})=\frac{{\eta}_{0}}{1+\alpha \left| k_{y} \right|^\beta } \qquad \alpha , \beta \in {\mathbb{R}_{+}} .
\label{eqn:fit_lorentz}$$ We present the best fits of the data to (i) a two-term Gaussian function with freely estimated amplitudes (i.e. unconstrained fitting), termed $\tilde{\eta}_{G_{2}}$, (ii) a two-term Gaussian function with interdependent amplitudes (i.e. constrained fitting with $\sum_{j}^{N_{G}} A_{j}=1$) given by Hansen *et al.* [@hansen_2007_2], termed $\tilde{\eta}_{G_{2H}}$, (iii) a four-term Gaussian function with freely estimated amplitudes, termed $\tilde{\eta}_{G_{4}}$, and (iv) the Lorentzian type function, Eq. (\[eqn:fit\_lorentz\]). To measure the magnitude of the residuals we use the residual standard deviation, defined as $s_{r}=\sqrt{\sum_{n=1}^{n_{s}}r_{n}^{2}/(n_{s}-n_{p})}$, where $n_s$ is the number of data points, $n_p$ is the number of fitting parameters, and $r_{n}$ is the $n$-th residual [@peck_2008]. After an iterative curve fitting procedure, the accurately estimated value of ${\eta}_0$ was held fixed while all other parameters in Eqs. (\[eqn:fit\_gauss\]) and (\[eqn:fit\_lorentz\]) were used as fitting parameters. In Table \[tab:atomic\_param\] we list the fitting parameters for the monatomic and diatomic molecular fluids and compare them with previous results where possible.
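A minimal sketch of the constrained fit of Eq. (\[eqn:fit\_lorentz\]) with $\eta_0$ held fixed, together with the residual standard deviation $s_r$. We use a brute-force grid search over $(\alpha,\beta)$ candidates purely for illustration; the paper's actual iterative curve-fitting procedure is not reproduced here.

```python
import numpy as np

def lorentzian(k, eta0, alpha, beta):
    """Eq. (fit_lorentz): eta(k) = eta0 / (1 + alpha |k|^beta)."""
    return eta0 / (1.0 + alpha * np.abs(k) ** beta)

def fit_lorentzian(k, data, eta0, alphas, betas):
    """Least-squares fit with eta0 held fixed, by grid search over
    candidate (alpha, beta) pairs; returns the best pair."""
    best = None
    for a in alphas:
        for b in betas:
            sse = np.sum((data - lorentzian(k, eta0, a, b)) ** 2)
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

def residual_std(y, y_fit, n_params):
    """s_r = sqrt( sum_n r_n^2 / (n_s - n_p) )."""
    return np.sqrt(np.sum((y - y_fit) ** 2) / (len(y) - n_params))
```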
A useful check of the fitting can be performed by calculating the total Gaussian amplitudes, which should sum to 1, Table \[tab:tot\_amp\].

\[tab:tot\_amp\]
The reciprocal space results presented in figure \[fig:etak\_1\] for the LJ and chlorine systems demonstrate that the four-term Gaussian function fits the data much better than the other two forms, with a difference between the data and the fit of less than 0.5$\%$, see figure \[fig:etak\_1\](c). The two-term Gaussian $\tilde{\eta}_{G_{2H}}$ fits the kernel data better than the Lorentzian-type function in the low-$k_{y}$ region, figure \[fig:etak\_1\](a), which suggests a more Gaussian-like behaviour in the low-$k_{y}$ region, a fact previously observed by Hansen *et al.* [@hansen_2007_2] for atomic fluids modeled with WCA potentials. Nevertheless, the difference between the two-term Gaussian fit and the data is less than 2$\%$, which still makes $\tilde{\eta}_{G_{2H}}$ a good analytical three parameter approximation of the reciprocal space viscosity kernel. The maximum difference between the Lorentzian-type fit and the Gaussian fits is around 4$\%$, while the maximum difference between the Gaussian fits is about 2$\%$, see figure \[fig:etak\_1\](d). Essentially, this suggests that the four-term Gaussian functional form is the one to be trusted when computing the real space kernels. Its eight parameters make it less convenient to use, but the Gaussian function can be inverse Fourier transformed analytically, while the inverse Fourier transform of the Lorentzian-type function can only be evaluated numerically for general values of $\beta$.
Figure \[fig:etak\_2\](a) shows the kernel data for a WCA fluid at two different densities along with two sets of data published previously by Hansen *et al.* [@hansen_2007_2]: EMD is the set obtained from an equilibrium MD simulation at the same state point ($\rho_{a}=0.375$, $T=0.765$) and NEMD is the set obtained from a nonequilibrium MD simulation based on the sinusoidal transverse force (STF) method. Excellent agreement between the two sets of data was found. Figure \[fig:etak\_2\](b) shows the normalized fit to Eq. (\[eqn:fit\_lorentz\]). The normalized kernels, figure \[fig:etak\_2\](b), show a similar behavior for $k_n \le 4$. Though the higher-density kernel is slightly lower for $k_n \ge 4$, both show a similar limiting behavior. This effect was not seen by Hansen *et al.* due to a lack of data at high wave-vectors. Figure \[fig:etak\_2\](d) indicates that, despite the difference between the interaction potentials, the results for the LJ and WCA fluids are very close. This confirms that transport is dominated by repulsive rather than attractive interactions. The sharper kernel for the diatomic system, figure \[fig:etak\_2\](d), suggests a more Lorentzian-type behavior in the low wave-vector region. It is also important to mention that even though the fitting parameters are significantly affected by temperature, the resulting kernels vary weakly over the range of temperatures chosen here. This was also observed by Hansen *et al.* for WCA monatomic fluids.
Viscosity kernels in physical space \[sect:IVc\]
------------------------------------------------

The viscosity kernel in reciprocal space is an even function of $k_{y}$, and the inverse Fourier transform of an even function is itself even, so the real space kernel is also symmetric about the origin. The viscosity kernel in physical space can therefore be found via an inverse Fourier cosine transform, $F_{c}^{-1}[\dots]$ (the special case of the continuous inverse Fourier transform that arises naturally when transforming an even function), of the viscosity kernel in reciprocal space. Writing the inverse transform over the interval $(-\infty,\infty)$ as a cosine integral and a sine integral, the sine integral vanishes by symmetry and the cosine integral simplifies to give: $$F_{c}^{-1}[\tilde{\eta}(k_{y})]=\eta (y)=
\frac{1}{\pi}\int\limits_{0}^{\infty}\tilde{\eta} (k_{y})\cos(k_{y}y)dk_y . \label{eqn:ifct}$$ The inverse Fourier cosine transform of the Gaussian function, Eq. (\[eqn:fit\_gauss\]), exists [@papoulis_1962] and can be obtained analytically. For an $N_{G}$ term Gaussian function the inverse Fourier cosine transform is $$\eta_{G}(y)=\frac{{\eta}_{0}}{\sqrt{2\pi}} \sum_{j}^{N_{G}} A_{j} \sigma_{j} \exp[-(\sigma_{j}y)^{2}/2] \qquad A_{j}, \sigma_{j} \in {\mathbb{R}_{+}} . \label{eqn:ft_fit_gauss}$$
Though the Lorentzian-type function given in Eq. (\[eqn:fit\_lorentz\]) fulfills the criteria for having an inverse Fourier transform $\eta_{L}(y)$ (the function is absolutely integrable and square integrable, and the function and its derivative are piecewise continuous), the integral in Eq. (\[eqn:ifct\]) is not readily obtained analytically in the general case. However, it can be evaluated numerically; in this work, composite Simpson's rule has been employed for this purpose.
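The numerical inversion can be checked against the analytic Gaussian result. The sketch below (our own illustration, with arbitrary parameter values) implements composite Simpson's rule for $\eta(y)=(1/\pi)\int_0^\infty \tilde{\eta}(k_y)\cos(k_y y)\,dk_y$, i.e. the $1/2\pi$ inverse-transform convention consistent with Eqs. (\[eqn:ft\_fit\_gauss\]) and (\[eqn:lev\]), and recovers Eq. (\[eqn:ft\_fit\_gauss\]) for a two-term Gaussian kernel.

```python
import numpy as np

def eta_k_gauss(k, eta0, A, sig):
    """Reciprocal-space kernel, Eq. (fit_gauss)."""
    return eta0 * sum(a * np.exp(-k**2 / (2.0 * s**2)) for a, s in zip(A, sig))

def eta_y_gauss(y, eta0, A, sig):
    """Analytic real-space kernel, Eq. (ft_fit_gauss)."""
    return eta0 / np.sqrt(2.0 * np.pi) * sum(
        a * s * np.exp(-(s * y) ** 2 / 2.0) for a, s in zip(A, sig))

def inv_cosine_simpson(eta_k, y, k_max=50.0, n=4001):
    """eta(y) = (1/pi) int_0^inf eta~(k) cos(k y) dk by composite
    Simpson's rule (n must be odd)."""
    k = np.linspace(0.0, k_max, n)
    f = eta_k(k) * np.cos(k * y)
    h = k[1] - k[0]
    w = np.ones(n)
    w[1:-1:2] = 4.0
    w[2:-1:2] = 2.0
    return (h / 3.0) * np.sum(w * f) / np.pi
```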
The real space kernels for atomic and diatomic fluids at zero frequency are presented in figures \[fig:etay\_1\], \[fig:etay\_3\] and \[fig:etay\_2\_1\]. Figure \[fig:etay\_1\](a) shows the resulting kernels for chlorine and figure \[fig:etay\_1\](b) shows the resulting kernels for a LJ fluid, extracted from the two-term and four-term Gaussian functions, Eq. (\[eqn:ft\_fit\_gauss\]), and from the numerical inverse Fourier transform of Eq. (\[eqn:fit\_lorentz\]). We find very little difference between the kernels obtained via the two- and four-term Gaussians for these systems. Figures \[fig:etay\_1\](c) and \[fig:etay\_1\](d) show the differences between all three fits. It can be seen that there exists a significant difference (almost 25$\%$) between the kernels extracted from the Gaussian and Lorentzian type functions for small $y$. The discrepancy decreases rapidly as $y$ increases and becomes approximately zero for $y \ge 1.5$. The width of the kernel for chlorine is roughly 4-6 atomic diameters, figure \[fig:etay\_1\](a), and 3-5 atomic diameters for the monatomic LJ and WCA fluids, figure \[fig:etay\_1\](b).
For monatomic systems at relatively low densities ($\rho_{a}=0.375-0.480$), the real space kernels are affected considerably by the functional form chosen to fit the reciprocal space kernel, figure \[fig:etay\_3\]. For instance, the equally weighted two-term Gaussian function, figure \[fig:etay\_3\](a), distorts the real space kernels and predicts a noticeably higher $\eta(y=0)$ value. As we increase the density, the discrepancy between Gaussian functions, as well as between Gaussian and Lorentzian-type functions, only partially reduces, figure \[fig:etay\_3\](b,d). The width of the kernel for WCA fluids at low density is roughly 2-4 atomic diameters.

In figures \[fig:etay\_2\_1\](a) and \[fig:etay\_2\_1\](b) we compare the unnormalized kernel data in $y$ space extracted from the four-term Gaussian and Lorentzian-type functional forms for all the simulated systems. Although the difference between the reciprocal kernels is less than 4$\%$, figure \[fig:etak\_1\](d) (e.g. chlorine, dashed line), the corresponding kernels in real space look noticeably different for all the systems, figures \[fig:etay\_2\_1\](a) and \[fig:etay\_2\_1\](b) (e.g. chlorine, dash-dotted line). However, the zero wave-vector viscosities obtained from both functional forms are very close, with less than 2$\%$ error.
![\[fig:g\_r\_ad\] Pair-distance correlation function $g(r)$ and normalization factors $\xi_{g}$, Eq. (\[eqn:norm\_g\_r\]): WCA \[(a) $\rho_{a}=0.375$, (b) $\rho_{a}=0.480$ both at $T=0.765$\], (c) WCA ($\rho_{a}=0.840$, $T=1.0$); LJ (d) ($\rho_{a}=0.840$, $T=1.0$); chlorine (e) ($\rho_{a}=1.088$, $T=0.97$). For clarity, the RDFs are shifted upwards by 3 units.[]{data-label="fig:g_r_ad"}](g_r_ad.eps)
We can determine the zero wave-vector viscosities $\eta_{0}=\eta(k=0,\omega =0)$ by integrating the real space kernel over $y$, and thus test our numerical analysis. The zero wave-vector viscosity $\eta_{0}$ obtained by a Gaussian function $\eta_{G}(y)$ is $$\eta_{0}=\int\limits_{-\infty}^{\infty} \eta_{G}(y)dy=\frac{{\eta}_{0}}{\sqrt{2\pi}}
\int\limits_{-\infty}^{\infty} \Big\{ \sum_{j} A_{j} \sigma_{j} \exp [-(\sigma_{j}y)^{2}/2] \Big\} dy . \label{eqn:lev}$$ Since a general analytical expression for $\eta_{L}(y)$ does not exist [@hansen_2007_2], we evaluate the integral numerically and present the results from all functional forms in Table \[tab:am\_leffv\]. A comparison of the viscosities in Table \[tab:am\_leffv\] with the simulated zero frequency, zero wave-vector shear viscosities given in Table \[tab:atomic\_param\] shows an integration error of less than 3$\%$. This confirms the accuracy of our numerical analysis techniques.
\[tab:am\_leffv\]
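The consistency check of Eq. (\[eqn:lev\]) is easy to reproduce: each Gaussian term integrates to $\eta_0 A_j$, so the kernel integrates to $\eta_0\sum_j A_j$, which equals $\eta_0$ when the amplitudes sum to 1. A sketch with arbitrary illustrative parameters (not values from the paper's tables):

```python
import numpy as np

def eta_y_gauss(y, eta0, A, sig):
    """Real-space kernel, Eq. (ft_fit_gauss)."""
    return eta0 / np.sqrt(2.0 * np.pi) * sum(
        a * s * np.exp(-(s * y) ** 2 / 2.0) for a, s in zip(A, sig))

def eta0_from_kernel(eta0, A, sig, y_max=20.0, n=20001):
    """Eq. (lev) by the trapezoidal rule; analytically each Gaussian term
    integrates to eta0*A_j, so the result is eta0*sum(A)."""
    y = np.linspace(-y_max, y_max, n)
    f = eta_y_gauss(y, eta0, A, sig)
    return float(np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(y)))
```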
It is of interest to discuss the real space viscosity kernels for monatomic and diatomic systems from a structural point of view. For this purpose we define a structural normalization factor $$\xi_{g}=\frac{\int\limits_{0}^{\infty} r [g(r)-1]^2 dr}{\int\limits_{0}^{\infty}[g(r)-1]^2 dr} \label{eqn:norm_g_r}$$ where $g(r)$ is the radial distribution function (RDF). Eq. (\[eqn:norm\_g\_r\]) is a measure of the range over which the correlation function decays to zero and can therefore be regarded as a correlation length of the radial distribution function. The RDF (or structure factor in reciprocal space) can be defined either in terms of the separation $r_{ij}$ between atoms $i$ and $j$ or between the centres of mass of molecules $i$ and $j$: $g(r)=\Big\langle \frac{\sum_{i=1}^{N}\sum_{j\ne i}^{N} \delta(|\mathbf{r}-\mathbf{r}_{ij}|)}{4\pi r^{2}N\rho} \Big\rangle$, where $N$ is the total number of atoms or molecules, and $\rho$ is the atomic or molecular number density.
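Eq. (\[eqn:norm\_g\_r\]) is the first moment of $[g(r)-1]^2$ and can be evaluated directly from tabulated RDF data. The sketch below (our own illustration) uses the trapezoidal rule; for a model correlation $g(r)=1+e^{-r/\lambda}$ the exact answer is $\xi_g=\lambda/2$, which provides a simple check.

```python
import numpy as np

def xi_g(r, g):
    """Eq. (norm_g_r): xi_g = int r [g-1]^2 dr / int [g-1]^2 dr,
    evaluated by the trapezoidal rule on tabulated (r, g(r)) data."""
    h2 = (g - 1.0) ** 2
    dr = np.diff(r)
    num = np.sum((r[1:] * h2[1:] + r[:-1] * h2[:-1]) * 0.5 * dr)
    den = np.sum((h2[1:] + h2[:-1]) * 0.5 * dr)
    return num / den
```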
The radial distribution functions and normalization factors are presented in figure \[fig:g\_r\_ad\]. The RDFs are typical monatomic and diatomic Lennard-Jones pair correlation functions. $\xi_{g}$ generally increases as we increase the density and temperature, from $0.605$ at the state point $\rho_{a}=0.375$, $T=0.765$ to $0.730$ at $\rho_{a}=0.480$, $T=1.0$, and only slightly increases as we increase the cutoff distance, i.e. switch from the WCA system to the LJ system at the same state point. $\xi_{g}$ for chlorine at the state point $\rho_{a}=1.088$, $T=0.97$ was found to be $0.585$.
The kernels normalized with respect to $\eta(y=0)$ and the normalization factor $\xi_{g}$ are shown in figures \[fig:etay\_2\_2\](a) and \[fig:etay\_2\_2\](b). While the width of the unnormalized kernels generally increases as we increase the density (figures \[fig:etay\_2\_1\](a) and \[fig:etay\_2\_1\](b)), the width of the normalized kernels of the WCA fluids decreases marginally as we increase the density from $0.375$ (continuous line) to $0.840$ (short-dashed line). The LJ system shows a slightly narrower kernel (dotted line) compared to the WCA system at the same state point (short-dashed line). Though the kernels obtained from both functional forms are quite close to each other (almost identical for values of $y$ up to about half an atomic or molecular diameter, i.e. $y=0.5\sigma$), figures \[fig:etay\_2\_2\](a) and \[fig:etay\_2\_2\](b) show that the structural normalization did not completely remove the discrepancy between the normalized kernels of the WCA system at different densities, or between the normalized kernels of the WCA, LJ and chlorine systems, for values higher than $y=\sigma$. Since figure \[fig:etay\_2\_2\](b) is based on a Lorentzian-type fit, a further question arises as to whether the kernel differences are due to the numerical analysis, i.e. the choice of the fitting function, or due to an inadequate structural normalization factor. The four-term Gaussian shows only slightly narrower kernels, which suggests the need for a more comprehensive structural normalization.
Conclusion\[sect:V\]
====================
The wave-vector dependent viscosity of monatomic Lennard-Jones (LJ), monatomic Weeks-Chandler-Andersen (WCA) and diatomic (liquid chlorine) fluids has been computed over a large wave-vector range and for a variety of state points. The equilibrium molecular dynamics calculations involved the evaluation of the transverse momentum density and shear stress autocorrelation functions, in both atomic and molecular hydrodynamic representations for the molecular fluid. The main results can be summarized as follows:
\(i) For monatomic fluids the shape of the normalized viscosity kernel in reciprocal space in the low wave-vector region is the same over the whole range of densities considered here. Although the normalized reciprocal-space kernels decrease only insignificantly with density, they show a similar limiting behaviour at high $k_{y}$ values. For the LJ potential, compared with the WCA potential, we find higher viscosities in the low wave-vector region, but the normalized shapes of the kernels are almost identical.
\(ii) For liquid chlorine, the wave-vector dependent viscosity shows a similar behaviour in both atomic and molecular formalisms within statistical uncertainty.
\(iii) While a relatively simple Lorentzian-type function fits the atomic and diatomic data well over the entire range of $k_{y}$ at all the state points, it cannot be inverse Fourier transformed analytically to the real-space domain. One may therefore consider a Gaussian expansion: a four-term Gaussian gives better accuracy in reciprocal space than the Lorentzian-type function. Our analysis of the high-$k_{y}$ regime reveals that the two-term equally weighted Gaussian functional form is inaccurate in predicting the real-space kernels, while the unequally weighted Gaussian only slightly improves the fit.
\(iv) The overall conclusion is that the real-space viscosity kernel has a width of roughly 4-6 atomic diameters for chlorine, while for monatomic systems the width is about 3-5 atomic diameters at high densities and 2-4 atomic diameters at low densities. This means that generalized hydrodynamics must be used to predict the flow properties of molecular fluids whenever the gradient in the strain rate varies significantly on these length scales. Consequently, a nonlocal constitutive equation should be invoked for a complete description of flows at atomic and molecular scales under such conditions.
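Point (iii) rests on the fact that a sum of Gaussians in reciprocal space transforms back analytically to a sum of Gaussians in real space, which the Lorentzian-type form used here does not allow. A minimal sketch of this term-by-term transform; the fit parameters $(A_i, c_i)$ below are illustrative placeholders, not fitted values from this work:

```python
import numpy as np

# Hypothetical two-term Gaussian fit, eta(k) = sum_i A_i exp(-k^2 / (2 c_i^2)).
params = [(0.7, 1.5), (0.3, 4.0)]

def eta_k(k):
    """Reciprocal-space kernel: a sum of Gaussians in k."""
    return sum(A * np.exp(-k**2 / (2.0 * c**2)) for A, c in params)

def eta_y_analytic(y):
    """Exact inverse Fourier transform, term by term: a k-space Gaussian of
    width c maps to the real-space Gaussian A*c/sqrt(2*pi)*exp(-c^2 y^2 / 2)."""
    return sum(A * c / np.sqrt(2.0 * np.pi) * np.exp(-c**2 * y**2 / 2.0)
               for A, c in params)

# Cross-check the analytic result against a brute-force numerical transform.
k = np.linspace(-40.0, 40.0, 20001)
y = np.linspace(0.0, 2.0, 21)
numeric = np.empty_like(y)
for idx, yi in enumerate(y):
    f = eta_k(k) * np.cos(k * yi)               # even integrand
    numeric[idx] = np.sum((f[1:] + f[:-1]) * np.diff(k)) / 2.0
numeric /= 2.0 * np.pi
analytic = eta_y_analytic(y)
```

The agreement of `numeric` and `analytic` is what makes the multi-term Gaussian form convenient: the real-space kernel comes for free once the reciprocal-space fit is done.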
Finally, our results for molecular fluids should provide a good reference point for more complex molecular systems, and the methodology can readily be applied, for instance, to chain-like molecules.
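The autocorrelation-function machinery underlying the calculations above can be sketched generically as follows; this is an FFT-based estimator of $\langle x(0)x(t)\rangle$ and its running time integral (the Green-Kubo factor that, multiplied by $V/k_{B}T$, gives a transport coefficient), not the production analysis code of this work.

```python
import numpy as np

def autocorrelation(x):
    """Unbiased estimate of C(t) = <x(0) x(t)> via FFT.

    Zero-padding to length 2n removes the circular wrap-around of the FFT
    convolution; dividing by the overlap count (n - t) unbiases each lag.
    """
    n = len(x)
    f = np.fft.rfft(x, 2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / (n - np.arange(n))

def running_green_kubo(acf, dt):
    """Trapezoid-rule running integral of the ACF over time."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (acf[1:] + acf[:-1]) * dt)))
```

Applied to the shear stress time series $P_{xy}(t)$ from an equilibrium run, the plateau of `running_green_kubo` yields the zero wave-vector viscosity; the $k$-dependent kernel follows analogously from the transverse momentum density components.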
---
abstract: 'Silicon (Si) is one of the most extensively studied materials owing to its significance to semiconductor science and technology. While efforts to find new three-dimensional (3D) Si crystals with unusual properties have made some progress, its two-dimensional (2D) phases remain far less explored. Here, based on a newly developed systematic *ab initio* materials searching strategy, we report a series of novel 2D Si crystals with unprecedented structural and electronic properties. The new structures exhibit perfectly planar outermost surface layers of a distorted hexagonal network, with their thicknesses varying with the atomic arrangement inside. Dramatic changes in electronic properties, ranging from semimetallic, to semiconducting with indirect energy gaps, and even to semiconducting with direct energy gaps, are realized by varying the thickness as well as by surface oxidation. Our predicted 2D Si crystals with flat surfaces and tunable electronic properties will shed light on the development of silicon-based 2D electronics technology.'
author:
- Kisung Chae
- Duck Young Kim
- 'Young-Woo Son'
title: 'Atomically flat two-dimensional silicon crystals with versatile electronic properties'
---
Introduction
============
Recently, various 2D materials with weak van der Waals (vdW) interlayer interactions have been extensively studied due to their unusual properties [@geim_van_2013; @novoselov_two-dimensional_2005; @lee_atomically_2014]. Examples include graphene, hexagonal boron nitride, black phosphorus, and transition metal dichalcogenides. Not only have they shown superior physical and chemical properties, but some models of theoretical physics, such as massless Dirac fermions, have also been realized in experiments on them, despite never having been observed in conventional materials [@geim_van_2013]. Nevertheless, many practical issues concerning large-scale synthesis, defect processing, and contaminant control still need to be resolved [@geim_van_2013; @lin_defect_2016], all of which are critical if these materials are to serve in next-generation electronic devices and energy applications.
Silicon, on the other hand, has served as a mainstay of semiconductor technology, and a vast amount of advanced processing techniques has been accumulated over decades. This is mainly due to its abundance in the Earth's crust as well as the existence of a single stable oxide form (SiO$_2$), which is advantageous for the mass production of single-element devices free from phase separation. These features undoubtedly make Si unique in current semiconductor technology. Therefore, despite very active research on the aforementioned 2D materials as a next-generation platform for various applications, the best candidate may still be Si itself. This leads us to believe that discovering a novel 2D phase of Si with desirable physical properties would be important. Compared with the numerous efforts toward new bulk phases of Si [@wentorf_two_1963; @kasper_clathrate_1965; @besson_electrical_1987; @von_schnering_lithium_1988; @gryko_low-density_2000; @malone_ab_2008; @kim_synthesis_2015; @rapp_experimental_2015; @botti_low-energy_2012; @wang_direct_2014; @lee_computational_2014; @lee_ab_2016; @guo_new_2015; @luo_si10:_2016; @liu_new_2017], however, the search for new 2D crystalline phases of Si has not yet met with much success, and only a few theoretical predictions [@morishita_formation_2008; @bai_graphene-like_2010; @morishita_first-principles_2010; @morishita_surface_2011; @spencer_reconstruction_2012; @guo_structural_2015; @sakai_structural_2015; @aierken_first-principles_2016] and experimental reports [@nakano_preparation_2005; @nakano_soft_2006; @kim_synthesis_2011; @lu_synthesis_2011; @kim_scalable_2014; @ohsuna_monolayer--bilayer_2016] exist in the literature.
Silicene [@takeda_theoretical_1994; @guzman-verri_electronic_2007; @cahangirov_two-_2009; @vogt_silicene:_2012; @tao_silicene_2015; @le_lay_2d_2015], a monolayer form of 2D Si analogous to graphene, cannot form a stable layered structure by itself since the silicene surface is chemically reactive, so that adjacent silicene layers form strong covalent bonds [@geim_van_2013; @vogt_silicene:_2012]. This is due to the strong preference of Si for sp$^3$ hybridization over sp$^2$, in contrast to carbon, which has the same number of valence electrons. Thus, silicene may not be a good candidate for a scalable 2D phase of Si [@geim_van_2013]. In addition, due to the strong covalent bonding character of Si, as-cleaved surfaces inevitably have unpaired electrons localized at surface dangling bonds, which is energetically unfavorable. This is evidenced by the severe surface reconstructions that reduce the number of unpaired electrons in most of the previously reported 2D Si crystals [@bai_graphene-like_2010; @morishita_first-principles_2010; @morishita_surface_2011; @spencer_reconstruction_2012; @guo_structural_2015; @sakai_structural_2015; @aierken_first-principles_2016; @ohsuna_monolayer--bilayer_2016]. In all these cases, however, some of the surface atoms remain under-coordinated even after the reconstruction, implying that those atoms are prone to form strong covalent bonds with one another, as pointed out for silicene [@geim_van_2013; @vogt_silicene:_2012].
In this work, we theoretically predict a series of novel 2D allotropes of Si constructed by a systematic *ab initio* materials search strategy. The predicted 2D crystals show the following characteristic structural features. Each crystal is composed of two parts: (1) atomically flat surface layers and (2) an inner layer connecting them through sp$^3$-like covalent bonds, as seen in FIG. \[fig1\]. The surface layer features a perfectly planar, stable hexagonal framework, unlike the other 2D Si crystals studied hitherto [@bai_graphene-like_2010; @morishita_first-principles_2010; @morishita_surface_2011; @spencer_reconstruction_2012; @guo_structural_2015; @sakai_structural_2015; @aierken_first-principles_2016; @nakano_preparation_2005; @nakano_soft_2006; @kim_synthesis_2011; @lu_synthesis_2011; @kim_scalable_2014; @ohsuna_monolayer--bilayer_2016], in which buckled surfaces are predominant. Moreover, the crystal is completely free of coordination number (CN) defects. The two-part structures are shown to be stable against strong perturbations, as will be discussed later.
Computational Details
=====================
We concisely describe a novel 2D crystal structure prediction method, named Search by *Ab initio* Novel Design *via* Wyckoff position Iterations in the Conformational Hypersurface (abbreviated as `SANDWICH`), which explores the conformational hyperspace to find various local minima systematically. The method is especially suited to predicting 2D phases of covalent materials because it designs 2D crystals free of CN defects. This is achieved by building surface and inner parts with different symmetries and joining the two parts in such a way that under-coordinated atoms at the interface compensate one another. By doing so, the crystal is stabilized through the elimination of dangling bonds.
Specifically, we chose surface layers with the space group P6/mmm (No. 191), while the bulk maintains an sp$^3$ bonding character. Among the special positions of this space group, we find that the Wyckoff sites $e$ (0, 0, $\pm$z), with point group 6mm, and $i$ \[(1/2, 0, $\pm$z), (0, 1/2, $\pm$z) and (1/2, 1/2, $\pm$z)\], with point group 2mm, are suitable for building CN defect-free crystals. We also considered two relative positions of the surface layers in this study, represented by the displacement vectors $\vec{d}$=$\mathbf{0}$ and $\vec{d}$ = 0.5 $\vec{a}_{1}$ + 0.5 $\vec{a}_{2}$, where $\vec{a}_1$ and $\vec{a}_2$ are lattice vectors. With these settings, we generated structures by varying the number of atoms in the inner layer ($n$) consecutively from 0 up to 9. Our method exhausts all possible combinations of atomic positions of the 2D crystals under the given constraints: surface symmetry, choice of Wyckoff positions, and thickness, as schematically shown in FIG. S1. The method is advantageous in that it explores essentially all structures within a highly probable subset of the entire search space for the given conditions. More detailed descriptions of our `SANDWICH` method can be found in the supplementary material.
We performed a series of first-principles calculations to obtain optimized structures using the Vienna *ab initio* simulation package (`VASP`) code [@kresse_efficient_1996; @kresse_efficiency_1996]. The conjugate gradient method was used to find the equilibrium structures with a force criterion of 1 meV/Å. For the Kohn-Sham orbitals, the core part was treated by the projector augmented wave method [@kresse_ultrasoft_1999], while the valence part was expanded in a plane-wave basis set with a kinetic energy cutoff of 450 eV. The self-consistent field of DFT was iterated until the differences in total energy and eigenvalues were less than 10$^{-7}$ eV. Numerical integration over the first Brillouin zone (BZ) was done on $\Gamma$-centered 12$\times$12$\times$1 grid meshes generated by the Monkhorst-Pack scheme. The Perdew-Burke-Ernzerhof exchange-correlation functional was used to build the Hamiltonian of the electron-ion system [@perdew_generalized_1996]. For a better description of electronic structures with a band gap, the Heyd-Scuseria-Ernzerhof hybrid functional [@krukau_influence_2006] as implemented in the `VASP` code was used. Dynamical stability was also checked for each relaxed structure. Phonon dispersion spectra were generated by the direct method [@parlinski_first-principles_1997] as implemented in the `phonopy` package [@togo_first_2015]. To obtain force constants, we used 4$\times$4$\times$1 and 5$\times$5$\times$1 supercells to generate displaced configurations; in these cases, the k-point sampling in the BZ was done on 24$\times$24$\times$1 and 25$\times$25$\times$1 Monkhorst-Pack grids.
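A $\Gamma$-centered mesh such as the 12$\times$12$\times$1 grid above is straightforward to generate; the sketch below (a generic illustration, independent of `VASP`) builds the fractional k-points and folds them into the first BZ so that the $\Gamma$ point is always included.

```python
import itertools
import numpy as np

def gamma_centered_grid(n1, n2, n3):
    """Fractional k-points of a Gamma-centered n1 x n2 x n3 mesh.

    Points i/n along each reciprocal axis, folded into [-1/2, 1/2); the
    i = 0 entry guarantees the Gamma point (0, 0, 0) is on the mesh.
    """
    pts = [np.array([i / n1, j / n2, k / n3])
           for i, j, k in itertools.product(range(n1), range(n2), range(n3))]
    return (np.array(pts) + 0.5) % 1.0 - 0.5
```

For a 2D slab geometry, a single division along the third axis (n3 = 1) suffices, as in the meshes quoted in the text.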
Results and Discussion
======================
![Atomic configuration of the 2D Si crystals. (a) A ball-and-stick model of the ground-state geometry of a 2D Si crystal. Si atoms in the top and bottom surface layers are shown as brown and blue balls, respectively, while those in the inner layer, sandwiched by the surface layers, are shown as gray balls. The displacement vector ($\vec{d}$) refers to the relative in-plane displacement of the two surface layers. (b) Schematic diagram of general 2D Si crystals. The unitcell is drawn in red with the lattice parameters $\vec{a}_{1}$ and $\vec{a}_{2}$ marked. The inner layer is represented as a distorted tetrahedron, in which filled and dashed wedges in the Cram representation indicate bonds going out of and into the paper, respectively. Top and perspective views of the tetrahedrons in the inner layer are shown alongside, with the atoms in different layers shaded differently using a gray gradient.[]{data-label="fig1"}](figure1.pdf){width="\columnwidth"}
The 2D Si crystals constructed by the `SANDWICH` method demonstrate unique structural features. As we intentionally put together two symmetrically distinct parts (surface and inner layers) so that CN defects on both components are fully compensated by each other, all the atoms in the crystal have a CN of 4 without any dangling bonds as shown in FIG. \[fig1\](a). This condition is particularly favored for Si atoms which show strong preference for sp$^3$ bonding. We find that the four-coordinated networks are maintained in the fully relaxed equilibrium structures. Moreover, the crystals exhibit atomically flat surface structures without buckling or reconstruction, which is uncommon for 2D Si crystals except for silicene bilayers [@bai_graphene-like_2010; @aierken_first-principles_2016]. Note that those bilayers with planar surfaces are nothing but the cases in our model with ($\vec{d}$=$\mathbf{0}$, $n$=0) [@bai_graphene-like_2010] and ($\vec{d}$ = 0.5 $\vec{a}_{1}$ + 0.5 $\vec{a}_{2}$, $n$=0) [@aierken_first-principles_2016].
![Classification of the 2D Si crystals. (a) Types of distortion of a tetrahedron building block. On the undistorted regular tetrahedron, skew lines along in-plane directions are marked as thick dashed lines. Bond stretching, bending, and twisting are shown alongside. The undistorted configurations are drawn in dotted lines for comparison, and the restoring forces due to the distortion are shown as blue arrows. (b) Classification of the crystals according to the various lattice distortions. Local and global stresses are shown as blue and black arrows, respectively. Group 1: local distortions in the surface layers, mainly due to bond bending, lead to global positive (negative) normal stress along the x (y) axis. Group 2: the unitcell is deformed by twist-like distortions in the inner layer, resulting in negative (positive) normal stress along the x (y) axis. Group 3: dihedral-angle distortions from different inner layers yield nonzero shear stress, so that the deformation of the unitcell becomes asymmetric ($|\vec{a}_{1}|\ne|\vec{a}_{2}|$).[]{data-label="fig2"}](figure2.pdf){width="\columnwidth"}
Looking into the structures in detail, we find that the surface and inner layers have different bonding characteristics, as expected from the fact that they initially had different symmetries. The surface layers show distorted hexagonal lattices with additional bonds toward the inner layer, while the atoms in the inner layer (hereafter called bridge atoms, as denoted in FIG. \[fig1\](b)) form distorted tetrahedral bonds with two opposite edges parallel to the surface plane (FIG. \[fig1\]). The atomic arrangement of the inner layer is similar to that of the {100} surfaces of the cubic diamond phase (dSi) distorted by an in-plane shear strain. We note the key role of the outermost bridge atoms, located at the bond centers of the surface atoms, in stabilizing the characteristic planar surface structure of the crystals, which would not have been realized otherwise. Owing to the discrepancy between the preferred local environments of the two parts in the common unitcell, some of the local structures must be distorted for compatibility. For instance, the angle between those bridge atoms and the surface atoms deviates strongly from the ideal value of $\sim$109.5 degrees ($\degree$), down to $\sim$60$\degree$. Also, the angles between the two skew lines (marked as thick dashed lines in FIG. \[fig2\](a)) deviate from the right angle (90$\degree$) for some of the tetrahedrons formed by bridge atoms, which creates torsional restoring forces, as explained in FIG. \[fig2\](a). Beyond the structural features shared by all the crystals in this study (namely, flat surfaces without CN defects), we find that a set of 2D crystals constructed with the same $\vec{d}$ and $n$ displays a variety of microstructures depending on the atomic arrangement in the inner layer. This comes from the various relative orientations of the distorted sp$^3$ bonds explained above.
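The two reference values quoted above follow directly from the geometry of a regular tetrahedron: the ideal sp$^3$ bond angle is $\arccos(-1/3)\approx 109.47\degree$, and the two opposite (skew) edges are mutually perpendicular. A quick check:

```python
import numpy as np

# Vertices of a regular tetrahedron centered at the origin; the origin plays
# the role of the central atom and the vertices its four bonded neighbors.
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)

def angle_deg(u, v):
    """Angle between two vectors in degrees."""
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(cosang))

bond_angle = angle_deg(verts[0], verts[1])                        # arccos(-1/3)
skew_angle = angle_deg(verts[1] - verts[0], verts[3] - verts[2])  # opposite edges
```

Any deviation of the measured skew-line angle from 90$\degree$ in the relaxed crystals therefore quantifies the twist-like distortion of FIG. \[fig2\](a).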
The global stress of the crystals is also critically affected by $\vec{d}$, $n$ and the atomic arrangement in the inner layer, distorting the unitcells in various ways (Table S1). For the sake of discussion, we therefore classify the crystals into three distinct groups. (1) The crystals in the first group (group 1) are characterized by lattice vectors of equal magnitude ($|\vec{a}_1|$=$|\vec{a}_2|$) and a reduced cell angle ($\gamma<60\degree$). In this case, the unitcell is distorted by the normal components ($\sigma_x$ and $\sigma_y$) only, as the shear component ($\tau_{xy}$) vanishes by symmetry, as illustrated in FIG. \[fig2\](b). All the crystals in this group have $\vec{d}$ = 0.5 $\vec{a}_1$ + 0.5 $\vec{a}_2$. These crystals fall into the C222 space group, except for the case of $n$=2, which is in the Cmme group (Table S1). The magnitude of the global stress appears to decrease for thicker crystals (i.e. with increasing $n$), as indicated by $\gamma$ approaching 60$\degree$. (2) The crystals in the second group (group 2) feature symmetric lattice vectors ($|\vec{a}_1|=|\vec{a}_2|$) with $\gamma>$60$\degree$. As in group 1, only $\sigma_x$ and $\sigma_y$ distort the unitcell, without shear strain, because of the symmetry; these crystals also fall into the space group C222. Crystals in groups 1 and 2 differ in the signs of the normal stresses, as indicated in FIG. \[fig2\](b). The crystals in group 2 have two subgroups, $\vec{d}$ = $\mathbf{0}$ and $\vec{d}$ = 0.5 $\vec{a}_1$ + 0.5 $\vec{a}_2$, which behave differently in terms of $\gamma$ with respect to $n$, as can be seen in Table S1. In the former case, the symmetry of the surface layers overtakes that of the inner layer, and $\gamma$ approaches 60$\degree$ as $n$ increases.
On the other hand, the crystals in the $\vec{d}$ = 0.5 $\vec{a}_1$ + 0.5 $\vec{a}_2$ subgroup are dominated by the symmetry of the inner layer, making $\gamma$ approach 90$\degree$ as $n$ increases. (3) Lastly, group 3 crystals are constructed with nonzero $\tau_{xy}$, resulting in asymmetric lattice vectors ($|\vec{a}_1|\ne|\vec{a}_2|$) and the lower-symmetry space group P2. In this case, the principal stresses ($\sigma_1$ and $\sigma_2$) do not coincide with $\sigma_x$ and $\sigma_y$ because of the nonzero $\tau_{xy}$.
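The relation between ($\sigma_x$, $\sigma_y$, $\tau_{xy}$) and the principal stresses used to distinguish group 3 is the standard 2D eigenvalue formula; a small sketch:

```python
import numpy as np

def principal_stresses(sx, sy, txy):
    """Principal stresses of a 2D stress state (sigma_x, sigma_y, tau_xy).

    These are the eigenvalues of [[sx, txy], [txy, sy]]:
        (sx + sy)/2 +/- sqrt(((sx - sy)/2)**2 + txy**2).
    For tau_xy = 0 (groups 1 and 2) they reduce to sx and sy themselves.
    """
    mean = (sx + sy) / 2.0
    r = np.hypot((sx - sy) / 2.0, txy)
    return mean + r, mean - r
```

The formula makes the classification explicit: only a nonzero $\tau_{xy}$ rotates the principal axes away from the x and y directions.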
![Stability of the crystals. (a) Cohesive energy (E$_{\mathrm{coh}}$) of the 2D crystals with respect to the number density. (b) Harmonic phonon dispersion spectra for a representative structure in each group.[]{data-label="fig3"}](figure3.pdf){width="\columnwidth"}
Stability of the crystals is confirmed in FIG. \[fig3\]. Cohesive energies (E$_{\mathrm{coh}}$) for each crystal are plotted with respect to the number density, defined as the number of Si atoms in the unitcell divided by the unitcell area ($|\vec{a}_1\times\vec{a}_2|$). In all the proposed cases, well-defined energy minima appear, as in FIG. \[fig3\](a). Compared with other Si structures, the crystals in this study show slightly higher E$_{\mathrm{coh}}$. For example, at a number density of $\sim$0.4 Si/Å$^2$, the crystal with a thickness of 0.5 nm has an E$_{\mathrm{coh}}$ higher by 0.02 eV/atom than that of 2$\times$1-dSi (100) with a thickness of 0.6 nm. We ascribe this to the significant bond-angle distortion, especially near the surface, mentioned above. We note, however, that the 2D crystals in this study may be more stable than other crystals in a chemical environment because their surface dangling bonds have been eliminated. This chemical stability is undoubtedly a critical factor for device realization [@geim_van_2013; @lin_defect_2016]. Moreover, a recently synthesized allotrope of the 3D Si crystals, namely Si$_{24}$, also shows distorted bond angles ranging from 93.73$\degree$ to 123.17$\degree$ and extended bond distances, with a total energy higher than that of the ground-state dSi by 0.09 eV/Si [@kim_synthesis_2015]. Thus, for realizing stable Si crystals, satisfying the ground-state CN of the individual Si atoms may be more critical than maintaining the exact bond angles and distances of the ideal sp$^3$ bonding found in dSi. We also provide harmonic phonon dispersion spectra in FIG. \[fig3\](b) for 2D crystals from the three groups. In all cases, we confirm that the crystals are dynamically stable.
Furthermore, we find that the new 2D Si crystals are quite stable against strong perturbations beyond the usual harmonic interatomic-force regime. To check such structural stability, we generated computationally “shaken” structures, which have proved useful for checking the stability of other crystal structures [@pickard_ab_2011] (see supplementary material). Starting from the fully relaxed group-1 crystal with $n$=2, we displaced every Si atom in a random direction by a fixed amount of 0.5 Å in a 2$\times$2 supercell. Note that this displacement is much larger than the 0.01 Å used to obtain the harmonic interatomic force constants (FIG. \[fig3\](b)). Each perturbed structure was then relaxed to its energy-minimum configuration using the conjugate gradient method. Among $\sim$5,500 randomly generated configurations, our proposed structure was recovered in $\sim$3,500 samples after relaxation, demonstrating that the structure is robust against severe thermal fluctuations, as shown in FIG. S2.
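The “shaken” configurations can be generated as sketched below (an illustration, not the production script): every atom is displaced by a fixed amplitude along an independent, uniformly random direction.

```python
import numpy as np

def shake(positions, amplitude=0.5, rng=None):
    """Displace every atom by a fixed `amplitude` (in Angstrom here) along an
    independent random 3D direction; normalized Gaussian vectors give a
    uniform (isotropic) distribution of directions."""
    rng = np.random.default_rng() if rng is None else rng
    d = rng.normal(size=positions.shape)
    d *= amplitude / np.linalg.norm(d, axis=1, keepdims=True)
    return positions + d
```

Relaxing each shaken supercell and counting how often the original minimum is recovered then gives the robustness statistic quoted above.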
The fundamental electronic structures of the crystals are closely related to their thickness ($n$) and the group classification defined above, displaying a wide variety of electronic properties ranging from metallic to semiconducting. For instance, all the crystals in group 1 are semiconductors, showing a finite bandgap ($\Delta>0$) whose size decreases with increasing $n$, as shown in FIG. \[fig4\](a). The maximum energy gap is $\sim$0.5 eV for the thinnest crystal ($n$=1). On the other hand, most of the crystals in groups 2 and 3 are metallic; in this case $\Delta$ is a measure of the overlap between the conduction and valence energy bands. For group 2, $\Delta$ behaves differently with respect to $n$ depending on the subgroup ($\vec{d}$), as discussed above.
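The sign convention for $\Delta$ (positive gap, negative band overlap) amounts to comparing the conduction band minimum and valence band maximum over the sampled k-points; a schematic sketch with toy band energies (not data from this work):

```python
import numpy as np

def band_gap(valence, conduction):
    """Delta = CBM - VBM over a sampled k-path.

    valence, conduction: band-edge energies (eV) at each k-point.
    Delta > 0 is a band gap (indirect if the extrema sit at different k);
    Delta < 0 measures the conduction/valence band overlap of a semimetal.
    """
    vbm_k = int(np.argmax(valence))
    cbm_k = int(np.argmin(conduction))
    return conduction[cbm_k] - valence[vbm_k], vbm_k != cbm_k
```

Applied to the bands of FIG. \[fig4\](b)-(d), this single number tracks the semiconductor-to-semimetal crossover between the groups.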
![Electronic structures of the 2D crystals. (a) Bandgap ($\Delta$) as a function of thickness ($n$) is shown for the 2D crystals in each group. In group 2 and 3, empty and filled symbols indicate the displacement vectors $\vec{d}$ = $\mathbf{0}$ and $\vec{d}$ = 0.5 $\vec{a}_1$ + 0.5 $\vec{a}_2$. Electronic band dispersion of (b) group 1 with $n$=2, (c) group 2 with $n$=3 and (d) group 3 with $n$=4. A schematic diagram of the 1st BZ with high symmetry points is shown above (d).[]{data-label="fig4"}](figure4.pdf){width="\columnwidth"}
![Oxidation of the 2D Si crystal. (a) Potential energy surface of the 2D crystal probed by an O atom at a height of 2 Å. Atoms are superimposed on the map: large blue and small white balls indicate surface silicon atoms and the bridge silicon atoms underneath, respectively. The energy of the minimum-energy adsorption site b$_1$ is set to zero in the color bar shown below. (b) Electronic band dispersion and (c) atomic structure of the fully oxidized crystal in its ground state. Adsorbed oxygen atoms are denoted by small red balls.[]{data-label="fig5"}](figure5.pdf){width="\columnwidth"}
To reveal the origin of the various electronic phases in the crystal families, we provide electronic band dispersions for each group in FIG. \[fig4\](b)-(d). The edges of the valence and conduction bands are located at different momenta ($\vec{k}$), indicating that the crystals are either indirect-gap semiconductors or semimetals. For the crystals with space group C222 (groups 1 and 2), we note that the relative energetic positions of the bands at $\vec{k}$ = 0.5 $\vec{b}_1$ ($\epsilon_{M_1}$) and $\vec{k}$ = 0.5 $\vec{b}_1$ + 0.5 $\vec{b}_2$ ($\epsilon_{M_2}$) are directly responsible for the electronic phase transition. Here, $\vec{b}_1$ and $\vec{b}_2$ denote the reciprocal lattice vectors of the BZ shown in FIG. \[fig4\]. That is, when the conduction band minimum (CBM) is located at M$_1$, the crystals exhibit finite energy gaps ($\Delta>0$), while semimetallic phases with $\Delta<0$ are realized for crystals with the CBM located at M$_2$, as seen in FIG. \[fig4\](b) and (c). The crystals in groups 1 and 2 belong to the former and the latter cases, respectively. Because the crystals are classified by the lattice parameters of the unitcell (FIG. \[fig2\]), we ascribe the electronic phase transition to changes in the local atomic structures driven by the lattice parameters, especially $\gamma$. We first confirm that the electronic wave functions of the CBM are highly localized near the surface layers (FIG. S3), indicating that the surface geometry is mainly responsible for the electronic phase transition. We further verify this by a pure shear deformation of the unitcell that changes only $\gamma$, leaving $|\vec{a}_1|$ and $|\vec{a}_2|$ fixed, and find that the crossover between $\epsilon_{M_1}$ and $\epsilon_{M_2}$ indeed occurs, as shown in FIG. S4.
Based on these facts, we can construct a concise but essential model of the whole crystal that explains the characteristic variation of the electronic structures from semiconducting to semimetallic as a function of thickness and group classification (see FIG. S5). In addition, these crystals offer good transport properties compared with dSi: the transverse effective masses for electrons near the CBM are reduced by half relative to dSi, while the other components are similar (Table S2). The structures with lower crystal symmetry (group 3) always show semimetallic electronic structures (FIG. \[fig4\](d)).
Interestingly, we find that surface oxidation can greatly widen the applicability of the new crystals. Upon oxygen (O) adsorption on the surface, the group-1 crystal with $n$=2 is significantly stabilized, by 1.98 eV/O$_2$, and upon subsequent dissociation of the adsorbed O$_2$ molecule on the surface, the system is stabilized further, by 5.89 eV/O$_2$ (see supplementary material for details). We confirm that adsorption and dissociation of O$_2$ molecules hardly affect the characteristic planar structure of the surface over a wide range of O coverage, from the isolated limit (FIG. S8) to full coverage (FIG. \[fig5\](c)). The ground state occurs when the adsorbed O atom sits at the middle of a surface Si-Si bond, directly on top of the underlying bridge Si atom, marked b$_1$ in FIG. \[fig5\](a). Moreover, the electronic properties change notably upon surface oxidation, from an indirect energy gap of $\sim$0.2 eV (FIG. 4(a)) to a direct bandgap of $\sim$1.2 eV (FIG. \[fig5\](b)), which significantly widens the versatility of the 2D Si crystals in this study. Furthermore, we confirm that the oxidized 2D crystals can form stable layered structures by themselves, suggesting this Si material as a feasible candidate component of vdW heterojunctions [@geim_van_2013].
Conclusions
===========
In this work, using a newly developed *ab initio* computational method, we propose a series of two-dimensional silicon crystals with versatile electronic properties. The surface layers of the new 2D Si crystals exhibit an atomically flat, distorted hexagonal structure without buckling, and the inner-layer silicon atoms fill the space between the flat surface layers. We classified the 2D Si structures into three groups, each possessing distinct electronic properties, from semiconducting to semimetallic, that originate from the structural variations. Moreover, their oxidized forms are shown to be direct-bandgap semiconductors. Therefore, we believe that our new 2D Si crystals, together with their oxides, satisfy the highly desirable characteristics of a next-generation electronics platform using only a single atomic element, very much like current 3D Si electronic devices.
The authors thank Dr. In-Ho Lee for discussions. D. Y. K. acknowledges financial support from the NSAF (U1530402). Y.-W. S. was supported by the NRF of Korea funded by the MSIP (QMMRC, No. R11-2008-053-01002-0 and vdWMRC, No. 2017R1A5A1014862). The computing resources were supported by the Center for Advanced Computation of KIAS.
**Supplementary Material for:\
Atomically flat two-dimensional silicon crystals with versatile electronic properties**\
Kisung Chae,$^1$ Duck Young Kim,$^2$ and Young-Woo Son$^1$\
$^1$ *Korea Institute for Advanced Study, Seoul 02455, South Korea*\
$^2$ *Center for High Pressure Science and Technology Advanced Research, Shanghai 201203, P. R. China*\
A new structure search method for 2D crystal prediction
=======================================================
Here, we describe in detail the structure searching method used to predict the 2D Si crystals in this study, named `SANDWICH` (Search by *Ab initio* Novel Design *via* Wyckoff positions Iteration in Conformational Hypersurface). The main idea of the method is to put together two symmetrically distinct parts so as to compensate unpaired electrons at the interface. The method is thus particularly suitable for predicting 2D crystals that favor electronic compensation over local distortion. In this way, a new series of 2D crystals can be searched for efficiently in a highly confined conformational space near the local minimum structures. The number of distinct sets of crystals that can be constructed is as large as the number of possible combinations of surfaces and inner parts, which depends on the molecular geometry of the ground-state structure (i.e., the tetrahedral building block for Si). The general procedure of the `SANDWICH` method is summarized as a flowchart in FIG. \[figs1\], and the structural parameters of the 2D crystals in this study are listed in Table \[tab:structure\].
![A flowchart of the `SANDWICH` method.[]{data-label="figs1"}](figures1.pdf){width="6cm"}
To be more specific, the first step of the `SANDWICH` method is the choice of the geometry of the surface layers, which determines the space group of the 2D crystal. Note that the two surface layers can be displaced relative to each other by a fractional lattice translation vector $\vec{d}$; a given $\vec{d}$ is allowed only when the set of Wyckoff positions in the space group remains invariant under the translation by $\vec{d}$. For instance, the two vectors $\vec{d}=\mathbf{0}$ and $\vec{d}=$ 0.5 $\vec{a}_{1}$ + 0.5 $\vec{a}_{2}$ were considered in this study because the set of Wyckoff positions $e$ (0, 0, $\pm$z) and $i$ \[(1/2, 0, $\pm$z), (0, 1/2, $\pm$z) and (1/2, 1/2, $\pm$z)\] under the space group P6/mmm (No. 191) is invariant with respect to both $\vec{d}$ vectors. For the same surface layers, if the Wyckoff positions $e$ and $h$ \[(1/3, 1/3, $\pm$z) and (2/3, 2/3, $\pm$z)\] are chosen to construct the initial structures, there are three allowed $\vec{d}$ vectors: $\mathbf{0}$ and $\pm$1/3 $\vec{a}_{1}$ $\pm$1/3 $\vec{a}_{2}$. The next step is to select some of the Wyckoff positions for the inner layer construction from the list given for the space group. Here, a rule of thumb is that all atoms in the constructed crystal must be free of coordination-number (CN) defects. At this point, for a given number of atomic layers n in the inner layer (i.e., a given thickness), a set of crystals can be constructed. Then, starting from these initial guess structures, the corresponding ground states are sought, followed by dynamic stability tests.
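The invariance condition on $\vec{d}$ described above can be sketched in a few lines of Python. This is an illustrative check (not the authors' implementation): the in-plane fractional coordinates of the Wyckoff positions are collected into a set, and a candidate $\vec{d}$ is accepted only if translating every position by $\vec{d}$ modulo the lattice reproduces the same set.

```python
from fractions import Fraction

def translate(pos, d):
    """Translate an in-plane fractional coordinate by d, modulo the lattice."""
    return tuple((p + t) % 1 for p, t in zip(pos, d))

def is_invariant(positions, d):
    """True if the set of in-plane Wyckoff coordinates maps onto itself
    under the fractional translation d (the ±z components are unaffected)."""
    return {translate(p, d) for p in positions} == set(positions)

half = Fraction(1, 2)
# In-plane parts of Wyckoff positions e (0,0,±z) and i [(1/2,0,±z),
# (0,1/2,±z), (1/2,1/2,±z)] of space group P6/mmm (No. 191), as in the text.
e_and_i = {(Fraction(0), Fraction(0)),
           (half, Fraction(0)), (Fraction(0), half), (half, half)}

print(is_invariant(e_and_i, (Fraction(0), Fraction(0))))     # d = 0 → True
print(is_invariant(e_and_i, (half, half)))                   # d = 0.5a1 + 0.5a2 → True
print(is_invariant(e_and_i, (Fraction(1, 4), Fraction(0))))  # not an allowed d → False
```

Exact rational arithmetic (`Fraction`) avoids the floating-point tolerance one would otherwise need when comparing coordinates modulo 1.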
index $\vec{d}$ n $\vec{a}_1$ $\vec{a}_2$ $\gamma$ ($\degree$) t (Å) space group (No.)
------- ----------- --- ------------- ------------- ---------------------- ------- -------------------
1 4.165 4.165 57.94 4.93 C222 (21)
2 4.157 4.157 56.72 5.34 Cmme (67)
4 4.086 4.086 58.04 7.89 C222 (21)
5 4.085 4.085 57.90 9.35 C222 (21)
7 4.041 4.041 58.77 12.03 C222 (21)
8 4.035 4.035 58.79 13.42 C222 (21)
3 3.974 3.974 64.80 6.70 C222 (21)
6 3.968 3.968 63.11 10.72 C222 (21)
9 3.967 3.967 61.95 14.78 C222 (21)
5 3.885 3.885 69.74 9.57 C222 (21)
9 3.793 3.793 80.28 15.24 C222 (21)
5 4.503 3.859 53.88 9.69 P2 (3)
6 3.976 4.126 58.75 10.70 P2 (3)
2 3.850 4.275 59.86 5.50 P2 (3)
3 3.910 3.893 67.65 6.88 P2 (3)
4 3.880 4.360 57.41 8.13 P2/c (13)
5 3.876 4.027 64.52 9.37 P2 (3)
6 3.879 4.561 53.18 11.00 P2/c (13)
: Structural parameters of 2D crystals in various groups.
\[tab:structure\]
We note that some 2D Si crystals already reported elsewhere can also be found by the method proposed here. For instance, the so-called bilayer silicene [@bai_graphene-like_2010s; @aierken_first-principles_2016s] in AA (on-top) and orthorhombic (displaced along the zigzag chain direction) stackings of two silicene monolayers corresponds to the structures with ($\vec{d}$=0, n=0) and ($\vec{d}$=0.5 $\vec{a}_{1}$ + 0.5 $\vec{a}_{2}$, n=0), respectively. Similarly, it has been shown theoretically that other group IV elements such as germanium (Ge) and tin (Sn), which have the same valence electron configuration as Si, can form similar bilayer structures [@acun_germanene:_2015s; @huang_quantum_2016s].
Structural stability test
=========================
![Histogram of total energies of the relaxed structures from stochastic displacements referenced to the lowest energy.[]{data-label="figs2"}](figures2.pdf){width="8cm"}
In addition to calculating the harmonic phonon dispersion to check dynamic stability, we also tested the stability beyond the harmonic regime. We started from one of our proposed 2D silicon crystals (group 1, n=2) and randomly moved each atom from its equilibrium position within a pre-determined spherical region of fixed radius. This method was used to predict low-energy models of SiNF [@le_page_low_2006s], and it is an important operation in the *ab initio* random structure searching [@pickard_ab_2011s] strategy, which has been successfully applied to many compounds under pressure. We moved every Si atom in a (2$\times$2) supercell by 0.5 Å and then allowed a full relaxation. Note that the magnitude of the displacement is significantly larger than the 0.01 Å used in the harmonic limit, meaning that the crystals were tested under very harsh conditions, i.e., at very high effective temperature. We examined $\sim$5,500 structures, and the results are shown in FIG. \[figs2\]. We find that the majority of the structures, $\sim$68 % of the total samples, relax back to the original structure after the random distortion and possess the lowest total energy. The rest of the distorted structures relax to local minimum structures with buckled surfaces and higher total energies, as shown in FIG. \[figs2\].
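The stochastic displacement step above can be sketched as follows. This is a minimal illustration, assuming (as stated in the text) that every atom is moved by a fixed 0.5 Å in a uniformly random direction; the function name and `numpy` usage are our own, not the authors' code.

```python
import numpy as np

def random_displacements(n_atoms, radius=0.5, rng=None):
    """Return an (n_atoms, 3) array of displacement vectors (Å), each of
    fixed length `radius` with direction drawn uniformly on the sphere."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.normal(size=(n_atoms, 3))   # isotropic Gaussian → uniform directions
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return radius * v

# Displace all 16 atoms of a (2x2) supercell by 0.5 Å, as in the text.
disp = random_displacements(n_atoms=16, radius=0.5, rng=np.random.default_rng(0))
print(np.allclose(np.linalg.norm(disp, axis=1), 0.5))  # every atom moved by 0.5 Å
```

Each displaced configuration would then be handed to the DFT code for full relaxation, and the relaxed total energies histogrammed as in FIG. \[figs2\].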
Origin of a variety of electronic properties
============================================
We demonstrate that the versatile electronic properties (FIG. 4) are primarily attributed to the surface layer structure. Simple shear deformation, together with the fact that the electronic wave functions near the Fermi energy are localized on the surface layers (FIG. \[figs3\]), shows that the variation of electronic properties is solely due to the surface geometry and rules out a role for the inner layer (FIG. \[figs4\]). Specifically, the shift of the band at M$_2$ ($\vec{k}_{M_2}$ = 0.5 $\vec{b}_{1}$ + 0.5 $\vec{b}_{2}$) with respect to the shear strain (or the cell angle $\gamma$) is clearly shown, altering the electronic structure from semimetallic to indirect-gap semiconducting.
![Charge densities for a 2D crystal in group 2 with n=9 at valence band maximum (VBM) and conduction band minimum (CBM) rendered in red and yellow, respectively. An isovalue of 0.005 electron per Å$^3$ was used. Unitcell is drawn as black dashed line. Charge density projected along the normal direction to the crystal is plotted for CBM and VBM.[]{data-label="figs3"}](figures3.pdf){width="4cm"}
To reveal the mechanism of the band shift at M$_2$, we consider a minimal surface model composed of the essential parts: a hexagonal Si framework with a bridge Si atom, passivated with hydrogen, as shown in FIG. \[figs5\](a). Based on the fact that the band shift is primarily due to the surface layer structure, we varied $\gamma$ from 50$\degree$ to 70$\degree$. The length of the surface bond to the bridge atom (denoted b in FIG. \[figs5\](a)) does not vary monotonically with $\gamma$, but shrinks as the unitcell is distorted, as seen in FIG. \[figs5\](b). In contrast, the bond angle at the surface ($\theta$) decreases monotonically with increasing $\gamma$, and so does the relative band shift between M$_1$ and M$_2$ ($\epsilon_{M_2}-\epsilon_{M_1}$), as seen in FIG. \[figs5\](c). Thus, $\theta$ appears mainly responsible for the band shift.
![Electronic band dispersion of a 2D Si crystal in group 1 with n=2 as a function of $\gamma$: (a) 54.45$\degree$, (b) 56.72$\degree$ and (c) 58.99$\degree$.[]{data-label="figs4"}](figures4.pdf){width="12cm"}
For a more rigorous discussion, we show the electronic band structure of the minimal model projected onto atomic orbitals in FIG. \[figs5\](d). The band near the valence band maximum is composed mainly of the s, p$_x$ and p$_z$ states of the surface atoms. In contrast, the orbital characters of the bands at M$_1$ and M$_2$ differ significantly. The band at M$_1$ contains a significant portion of atomic orbitals perpendicular to the surface layer, such as p$_z$, d$_{xz}$ and d$_{yz}$, whereas the band at M$_2$ is mainly composed of orbitals lying in the surface layer, such as s, p$_x$ and d$_{xy}$. We note that the d$_{xy}$ orbital contributes 22 % to the band at M$_2$. Thus, we find that the sensitive band shift at M$_2$ is partly due to the enhanced (diminished) contribution of the d$_{xy}$ orbital with decreased (increased) $\gamma$, as shown in FIG. \[figs5\](c). The bands at M$_1$ and M$_3$ do not vary as much as that at M$_2$ because the bridge bond is less sensitive to $\gamma$ than the surface bonds. Therefore, $\Delta$ remains almost constant even for $\gamma$ smaller than 60$\degree$.
![(a) Atomic configuration of the model. (b) Bond length and bond angle marked in (a) with respect to $\gamma$. (c) $\Delta$ and energy level difference at M$_1$ and M$_2$ due to $\gamma$ are shown. The d$_{xy}$ orbital overlaid on the surface layer with a corresponding angle is shown below. (d) Dispersion of the electronic band structure with atomic orbital projection. Orbitals for surface and bridge atoms are denoted as s and b, respectively, on the superscript. Size of the symbols represents the amount of contribution.[]{data-label="figs5"}](figures5.pdf){width="\textwidth"}
Effective masses
================
Effective masses for electrons and holes are calculated by fitting the energy band dispersion to a parabolic band around the band extrema for selected 2D Si crystals. In general, the 2D crystals in this study have elliptical Fermi surfaces, indicating anisotropic effective masses, as seen in FIG. \[figs6\]. The calculated effective masses are summarized in Table \[tab:effmass\]. Compared to the cubic diamond phase (dSi), with longitudinal ($m^{*}_{e,L}$) and transverse ($m^{*}_{e,T}$) electron effective masses of 0.92 $m_0$ and 0.19 $m_0$ ($m_0$: mass of a free electron), respectively [@kittel_introduction_2005s], our predicted crystals in groups 1 and 2 show a 50 % lighter $m^{*}_{e,L}$. The hole effective masses are comparable to those of dSi: 0.49 $m_0$ and 0.16 $m_0$ for heavy and light holes, respectively. We note that the crystal in group 1 with n=2 can be a good n-type semiconductor with high mobility due to its reduced effective mass.
![Eigenvalue maps in the reciprocal space. Top and bottom panes indicate conduction and valence bands, respectively, for (a, d) group 1, n=2, (b, e) group 2, n=3, and (c, f) group 3, n=4. Band extrema are marked as x, and the first BZ is drawn as black lines. Effective masses around the band extrema were calculated along longitudinal and transverse direction as marked by dissecting yellow lines.[]{data-label="figs6"}](figures6.pdf){width="8cm"}
group, n m$^*_{e, L}$ m$^*_{e, T}$ m$^*_{h, L}$ m$^*_{h, T}$
---------- -------------- -------------- -------------- --------------
1, 2 0.45 m$_0$ 0.16 m$_0$ 0.19 m$_0$ 0.51 m$_0$
2, 3 0.52 m$_0$ 0.18 m$_0$ 0.43 m$_0$ 0.22 m$_0$
3, 4 1.13 m$_0$ 0.16 m$_0$ 0.23 m$_0$ 0.20 m$_0$
: Electron (e) and hole (h) effective masses (m$^*$) of the 2D Si crystals with varying group and n. Longitudinal (L) and transverse (T) directions for each case are shown in FIG. \[figs6\].
\[tab:effmass\]
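The parabolic-fit procedure for the effective masses can be sketched numerically: near an extremum, $E(k) \approx \hbar^2 k^2 / (2 m^*)$, so a quadratic fit of the band gives $m^* = \hbar^2 / (2c_2)$, where $c_2$ is the fitted quadratic coefficient. The snippet below is an illustration on a synthetic parabolic band (the real calculation fits DFT eigenvalues along the directions marked in FIG. \[figs6\]); the helper name is ours.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
M0 = 9.1093837015e-31    # kg, free-electron mass
EV = 1.602176634e-19     # J per eV

def effective_mass(k, E):
    """Fit E(k) (E in eV, k in 1/m) near an extremum to a parabola and
    return the effective mass in units of m0."""
    c2 = np.polyfit(k, E, 2)[0]                 # quadratic coefficient, eV m^2
    return HBAR**2 / (2.0 * abs(c2) * EV) / M0

# Synthetic band with m* = 0.45 m0 (the group-1, n=2 electron mass in the table)
m_true = 0.45
k = np.linspace(-5e8, 5e8, 51)                  # small window around the extremum, 1/m
E = HBAR**2 * k**2 / (2 * m_true * M0) / EV     # eV
print(round(effective_mass(k, E), 3))           # → 0.45
```

In practice the fit window must be kept small enough that non-parabolic corrections to the band are negligible.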
Oxygen adsorption
=================
In general, Si surfaces are vulnerable to an ambient oxygen (O) environment due to the strong interaction between Si and O, which forms a stable oxide film. In addition to the mechanical and dynamical stability confirmed in FIG. 3, our crystals are expected to show better chemical stability than other Si crystals, because all the surface Si atoms are fully compensated. To validate this, we attached an isolated O$_2$ molecule to a (3$\times$2) supercell of an orthogonal unitcell, as seen in FIG. \[figs7\]. As O$_2$ is a diatomic molecule, three orientations along x, y and z were considered for adsorption on the following sites: ontop (o), bridge 1 (b$_1$) and bridge 2 (b$_2$) (FIG. \[figs7\]). Note that b$_1$ and b$_2$ are distinguished by whether or not the bridge bond possesses a bridge Si atom on the opposite side.
We observe negative adsorption energies ($E_{\mathrm{ads}}$), defined as $$E_{\mathrm{ads}}=E(\mathrm{Si}+\mathrm{O_2})-E(\mathrm{Si})-E(\mathrm{O_2}),$$ indicating that O$_2$ adsorption is a thermodynamically spontaneous process. Moreover, the internal energy of the system is further reduced by a great deal upon dissociation of the adsorbed O$_2$ molecule. Interestingly, the characteristic flat surface of the crystals is retained in most cases throughout the adsorption and subsequent dissociation processes (FIG. \[figs8\]). This indicates that an oxidation process would make the crystals even more stable without disturbing the original framework, and would protect the structure from degradation by further oxidation.
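The adsorption-energy definition above is straightforward to encode. The total energies below are hypothetical placeholders (real values come from DFT total-energy calculations), chosen only so the result matches one of the $-6.39$ eV entries of Table \[tab:Oads\].

```python
def adsorption_energy(e_total, e_slab, e_o2):
    """E_ads = E(Si+O2) - E(Si) - E(O2); a negative value means
    adsorption is thermodynamically spontaneous."""
    return e_total - e_slab - e_o2

# Hypothetical DFT total energies in eV (illustration only).
e_ads = adsorption_energy(e_total=-1050.0, e_slab=-1040.0, e_o2=-3.61)
print(round(e_ads, 2))  # → -6.39
```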
![A xy-projection of (3$\times$2) supercell (red) of the conventional Bravais unit cell shown in magenta lines. Only top surface layer (blue balls) and the subsurface bridge atoms (white) are represented. Three possible adsorption sites of ontop (o), bridge 1 (b$_1$) and bridge 2 (b$_2$) are marked on the corresponding sites.[]{data-label="figs7"}](figures7.pdf){width="3.5cm"}
![Initial and relaxed structures of the three low-energy configurations of O$_2$ (red balls) adsorption.[]{data-label="figs8"}](figures8.pdf){width="8cm"}
In some cases, we observe a significant deformation of the crystal when an O$_2$ molecule is adsorbed on the o site, as shown in FIG. \[figs8\](b). We found that one of the dissociated O atoms binds to the Si atom whose bond orientation is perpendicular to the surface. This changes the local bonding character of that Si to sp$^3$, similar to the dSi (111) surface, so that the flat surface is no longer preserved. Also, the bridge Si atom originally bound to that Si atom relaxes significantly, making a new bond with another bridge Si atom. Consequently, the surface Si atom on the other side protrudes, as seen in FIG. \[figs8\](b). This process appears irreversible because of the strong Si-O bonding, and the protruded Si atom is likely to be chemically reactive since one of its valence electrons is no longer fully compensated. However, this process is energetically less favorable by 1.35 eV per O$_2$ molecule (Table \[tab:Oads\]), and O atoms will sit on the most stable b$_1$ site without disturbing the framework under quasi-static oxidation conditions.
------------ --------------- -------------------- --------------------
adsorption initial O$_2$ E$_{\mathrm{ads}}$ O$_2$ dissociation
site orientation (eV/O$_2$)
x -6.39 yes
y -4.92 yes
z -2.67 no
x -4.62 yes
y -2.67 no
z -1.37 no
x -7.87 yes
y -7.87 yes
z -6.52 yes
------------ --------------- -------------------- --------------------
: Adsorption energy ($E_{\mathrm{ads}}$) of an isolated O$_2$ molecule on various adsorption sites.
\[tab:Oads\]
IPMU-18-0027\
**QCD axion dark matter from long-lived domain walls during matter domination**
**Keisuke Harigaya**$^{(a,b)}$ and **Masahiro Kawasaki**$^{(c,d)}$
*$^{(a)}$[Department of Physics, University of California, Berkeley, California 94720, USA]{}\
$^{(b)}$[Theoretical Physics Group, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA]{}\
$^{(c)}$[ICRR, University of Tokyo, Kashiwa, Chiba 277-8582, Japan]{}\
$^{(d)}$[Kavli IPMU (WPI), UTIAS, University of Tokyo, Kashiwa, Chiba 277-8583, Japan]{}*
Introduction
============
The standard model has a source of CP violation from the QCD dynamics [@'tHooft:1976up], which, however, has not been observed [@Baker:2006ts]. This is the so-called strong CP problem. One of the most attractive solutions is provided by the Peccei-Quinn (PQ) mechanism [@Peccei:1977hh; @Peccei:1977ur], in which an anomalous $U(1)_\text{PQ}$ symmetry is introduced. The mechanism predicts the existence of a light pseudo-scalar field called the axion [@Weinberg:1977ma; @Wilczek:1977pj], which is also a candidate for dark matter in the universe [@Preskill:1982cy; @Abbott:1982af; @Dine:1982ah].
The axion model is, however, not always cosmologically safe. The QCD dynamics explicitly breaks the $U(1)_\text{PQ}$ symmetry down to a discrete subgroup $Z_{N_\text{DW}}$. If the subgroup is non-trivial (i.e. $N_\text{DW}>1$), there exist stable domain walls which eventually dominate the energy density of the universe [@Zeldovich:1974uw; @Sikivie:1982qv]. The domain wall problem can be avoided if the PQ symmetry is already broken during inflation, but this scenario requires a small inflation scale to suppress the isocurvature perturbation [@Linde:1984ti; @Linde:1985yf; @Seckel:1985tj; @Lyth:1989pb; @Lyth:1991ub; @Turner:1990uz; @Linde:1991km], or a flat potential of the PQ symmetry breaking field so that the effective PQ symmetry breaking scale is large during inflation [@Linde:1991km].
The domain wall problem can be solved if the $Z_{N_\text{DW}}$ symmetry is also explicitly broken, so that domain walls are unstable [@Sikivie:1982qv]. In fact, the PQ "symmetry" is not a symmetry at all, since it is explicitly broken by the quantum anomaly. It would then be no surprise if the PQ symmetry were also explicitly broken by a small amount at the classical level, leaving no residual $Z_{N_\text{DW}}$ symmetry. For example, if the PQ symmetry is an accidental symmetry arising from other exact symmetries [@Lazarides:1985bj; @Casas:1987bw; @Randall:1992ut; @Barr:1992qq; @Holman:1992us; @Dine:1992vx; @Dias:2002gg; @Choi:2009jt; @Carpenter:2009zs; @Harigaya:2013vja; @Harigaya:2015soa; @Fukuda:2017ylt], we expect explicit PQ symmetry breaking by higher dimensional operators, which may also break the $Z_{N_\text{DW}}$ symmetry.[^1]
The unstable domain walls mainly decay into axions, which can explain the observed amount of dark matter [@Hiramatsu:2010yn; @Hiramatsu:2012sc; @Kawasaki:2014sqa] for a range of the decay constant $f_a$ much smaller than the one required for the misalignment mechanism [@Preskill:1982cy; @Abbott:1982af; @Dine:1982ah] ($f_a > 10^{11}~\text{GeV}$). Such a small decay constant is relevant for several future searches for solar axions [@Vogel:2013bta; @Armengaud:2014gea; @Anastassopoulos:2017kag] and halo axions [@Rybka:2014cya; @Hochberg:2016ajh; @TheMADMAXWorkingGroup:2016hpc; @Arvanitaki:2017nhi]. For a recent review of axion searches see e.g. [@Irastorza:2018dyq]. This scheme is, however, strongly constrained by the non-observation of effects of the strong CP phase. The explicit PQ symmetry breaking gives the axion a small mass in addition to the one generated by the QCD strong dynamics, and hence the cancellation of the strong CP phase becomes incomplete. An explicit PQ symmetry breaking large enough to obtain the dark matter abundance tends to yield too large a strong CP phase, unless $f_a$ is close to the astrophysical lower bound or the phase of the explicit symmetry breaking term is accidentally small. In the previous works [@Hiramatsu:2010yn; @Hiramatsu:2012sc; @Kawasaki:2014sqa] it was assumed that the universe is radiation-dominated after the QCD phase transition. In this letter we investigate the case where the universe is matter-dominated around temperatures of the MeV scale and domain walls decay during this matter-dominated epoch. Such a cosmological scenario is in fact expected if there is a very weakly coupled field such as a moduli field. The matter domination results in the dilution of the axions emitted from domain walls, and hence the required magnitude of the explicit PQ symmetry breaking becomes smaller. We show how the viable parameter space is extended.
Axions from long-lived domain walls
===================================
We assume that the PQ symmetry is restored at some time after inflation and is spontaneously broken later. This includes the cases where the Hubble scale during inflation is larger than the PQ symmetry breaking scale, and where thermal/non-thermal effects restore the PQ symmetry. Once the $U(1)_\text{PQ}$ symmetry is broken, cosmic strings are produced [@Kibble:1976sj]. The reconnection and decay of strings maintain roughly one string per horizon, which is the so-called scaling law [@Kibble:1984hp; @Bennett:1985qt].
As the temperature of the universe becomes as low as the QCD phase transition temperature, the explicit breaking of the PQ symmetry by the QCD dynamics becomes effective, and domain walls are formed between the strings, with the wall tension given by [@Hiramatsu:2012sc; @Huang:1985tt] $$\begin{aligned}
\sigma_\text{dw}\simeq 9 m_a f_a^2,\end{aligned}$$ where $m_a$ is the axion mass. In the parameter space of interest the domain walls decay after the QCD phase transition. Thus we consider the axion mass at the zero-temperature given by [@Weinberg:1977ma] $$\begin{aligned}
m_a \simeq 6~\text{meV} \frac{10^9~\text{GeV}}{f_a}.\end{aligned}$$ If there remains a discrete subgroup $Z_{N_\text{DW}} (N_\text{DW} > 1)$ of the PQ symmetry, the domain wall-string network is stable. This is for example the case with the DFSZ model [@Dine:1981rt; @Zhitnitsky:1980tq]. The network roughly follows the scaling-law [@Hiramatsu:2010yn; @Press:1989yh], with the energy density given by $$\begin{aligned}
\rho_\text{dw} (t) \simeq {\cal A} \frac{\sigma_\text{dw}}{t}.\end{aligned}$$ Here ${\cal A}$ is an $O(1)$ factor which can be determined by numerical simulations. The energy density of the network decreases only as the inverse square of the scale factor of the universe (during radiation domination), and eventually dominates the universe. This is inconsistent with the success of the standard cosmology, and is called the domain wall problem [@Sikivie:1982qv].
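The two formulas above fix the overall scales involved. A small numeric check (in natural units, with the zero-temperature axion mass $m_a \simeq 6~\text{meV} \times (10^9~\text{GeV}/f_a)$ and $\sigma_\text{dw} \simeq 9 m_a f_a^2$; the function names are ours):

```python
def axion_mass_gev(f_a_gev):
    """Zero-temperature axion mass: m_a ≈ 6 meV × (10^9 GeV / f_a), in GeV."""
    return 6e-12 * (1e9 / f_a_gev)   # 6 meV = 6e-12 GeV

def wall_tension_gev3(f_a_gev):
    """Domain-wall tension σ_dw ≈ 9 m_a f_a^2, in GeV^3."""
    return 9.0 * axion_mass_gev(f_a_gev) * f_a_gev**2

print(axion_mass_gev(1e9))      # 6e-12 GeV = 6 meV
print(wall_tension_gev3(1e9))   # 5.4e7 GeV^3
```

Note that since $m_a \propto 1/f_a$, the tension grows only linearly with the decay constant, $\sigma_\text{dw} \propto f_a$.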
The problem may be solved if the $Z_{N_\text{DW}}$ symmetry is explicitly broken so that the domain walls are unstable. We parametrize the effect of the explicit breaking by the following bias term $$\begin{aligned}
V= - 2\Xi v^4 \text{cos}\left( \frac{a}{v} + \delta \right),\end{aligned}$$ where $v = N_\text{DW} f_a$ is the PQ symmetry breaking scale, and $\delta$ and $\Xi$ are constants. We use the phase convention in which the QCD non-perturbative effect creates the minima at $a = 2\pi k f_a$ ($k= 0,1,\ldots,N_\text{DW}-1$). This bias term resolves the degeneracy between the $N_\text{DW}$ minima and puts pressure on the domain walls. The network collapses around the time $t_\text{dec}$ when the pressure beats the tension of the domain walls [@Kawasaki:2014sqa], $$\begin{aligned}
t_\text{dec} \simeq C_d \frac{{\cal A} \sigma_\text{dw}}{\Xi v^4 \left( 1-\text{cos}\left( 2\pi / N_\text{DW}\right) \right) },\end{aligned}$$ where $C_d$ is a numerical constant.
The energy of the domain walls is transferred into axions. The resultant number density of the axion is given by $$\begin{aligned}
\label{eq:na}
n_a (t_\text{dec}) \simeq \frac{\rho_\text{dw} (t_\text{dec})}{\tilde{\epsilon}_a m_a},\end{aligned}$$ where $\tilde{\epsilon}_a$ is the average energy of the radiated axions normalized by the axion mass. We find that Eq. (\[eq:na\]) reproduces the number density estimated by numerically solving the equations of motion for the domain wall energy density and the axion number density to within a few tens of percent, if we determine $t_\text{dec}$ (i.e. $C_d$) by the "10% criterion" of Ref. [@Kawasaki:2014sqa].
In order to suppress the axion abundance, the bias term must be sufficiently large so that the domain walls decay early enough. The bias term, which explicitly breaks the PQ symmetry, results in a non-zero strong CP phase. In the limit where the bias term is much smaller than the potential given by the QCD non-perturbative effect, the phase is given by $$\begin{aligned}
\theta \simeq - 2 N_\text{DW}^3 \Xi \text{sin}\delta \frac{f_a^2}{m_a^2}.\end{aligned}$$ The relation between the strong CP phase and the neutron electric dipole moment has been estimated by various methods and approximations [@Baluni:1978rf; @Crewther:1979pi; @Kanaya:1981se; @Cea:1984qv; @Schnitzer:1983pb; @Musakhanov:1984qy; @Hong:2007tf], and the estimates differ from each other by a factor of $O(10)$. This results in an uncertainty in the upper bound on $\theta$. We conservatively accept the bound $|\theta| < 10^{-10}$.
Axion abundance without entropy production
------------------------------------------
Now we evaluate the axion abundance and the required magnitude of $\Xi$, assuming that the universe is radiation-dominated below the QCD phase transition temperature. The axion abundance is estimated as [@Kawasaki:2014sqa] $$\begin{aligned}
\frac{\rho_a}{s} \simeq 7\times 10^{-10} \text{GeV} \left( \frac{10^9~\text{GeV}}{f_a} \right)^{1/2} \left( \frac{10^{-50}}{\Xi} \right)^{1/2} \frac{{\cal A}^{3/2} C_d^{1/2}}{\tilde{\epsilon}_a N_\text{DW}^2 \text{sin} (\pi / N_\text{DW})},\end{aligned}$$ where $s$ is the entropy density. The observed dark matter abundance, $\rho_\text{DM} /s \simeq 4 \times 10^{-10}$ GeV, is reproduced with the magnitude of the explicit PQ symmetry breaking given by $$\begin{aligned}
\label{eq:bias_RD}
\Xi \simeq 2\times 10^{-50} \frac{10^9~\text{GeV}}{f_a} \frac{{\cal A}^3 C_d}{\tilde{\epsilon}_a^2 N_\text{DW}^4 \text{sin}^2 (\pi / N_\text{DW})},\end{aligned}$$ which yields a non-zero strong CP phase, $$\begin{aligned}
\theta \simeq 9 \times 10^{-10} \text{sin}\delta \left( \frac{f_a}{10^9~\text{GeV}} \right)^{3} \frac{{\cal A}^3 C_d}{\tilde{\epsilon}_a^2 N_\text{DW} \text{sin}^2 (\pi / N_\text{DW})}.\end{aligned}$$ Using Eq. (\[eq:bias\_RD\]), the temperature at which the domain walls decay is given by $$\begin{aligned}
\label{eq:temp_DWdec}
T_\text{dec,DW} \simeq 30~\text{MeV} \frac{f_a}{10^9~\text{GeV}} \frac{{\cal A}}{\tilde{\epsilon}_a}.\end{aligned}$$ In Fig. \[fig:phase\], we show the prediction for the strong CP phase by a red solid line. We list the numerical values of ${\cal A}$, $C_d$ and $\tilde{\epsilon}_a$ taken from [@Kawasaki:2014sqa] in Table \[tab:constants\]. For $N_\text{DW}=2$, to satisfy the upper bound $|\theta|< 10^{-10}$, $f_a < 8 \times 10^8 $ GeV is required for ${\sin} \delta \sim 1$. Such a parameter region is disfavored by the cooling of white dwarfs [@Raffelt:2006cw], if there exists a significant axion-electron coupling, as is the case with the DFSZ model. An accidentally small $\delta$ allows for larger $f_a$. For example, if $\delta < 0.1$, $f_a < 2 \times 10^9 $ GeV is allowed.[^2] The constraint is stronger for $N_\text{DW}=6$. This is because of the larger ${\cal A}$, and hence larger domain wall energy, for larger $N_\text{DW}$, which can be understood from the fact that more domain walls are attached to a string. To satisfy the bound on the strong CP phase, $f_a < 3 \times 10^8 $ GeV is required for ${\sin} \delta \sim 1$.
${\cal A}$ $C_d$ $\tilde{\epsilon}_a$
----------------- ------------ ------- ----------------------
$N_\text{DW}=2$ 0.7 5.3 2.0
$N_\text{DW}=6$ 2.2 2.1 2.0
: Numerical values taken from [@Kawasaki:2014sqa].
\[tab:constants\]
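The $f_a$ limits quoted above follow directly from the formula for $\theta$ and the constants in Table \[tab:constants\]. A numeric check (illustrative helper, our own naming):

```python
import math

def theta_rd(f_a_gev, n_dw, A, C_d, eps, sin_delta=1.0):
    """Strong CP phase induced by the bias in the radiation-dominated case:
    θ ≈ 9e-10 sinδ (f_a/10^9 GeV)^3 · A^3 C_d / (ε^2 N_dw sin^2(π/N_dw))."""
    return (9e-10 * sin_delta * (f_a_gev / 1e9) ** 3
            * A**3 * C_d / (eps**2 * n_dw * math.sin(math.pi / n_dw) ** 2))

# N_dw = 2 with the tabulated values A = 0.7, C_d = 5.3, ε̃_a = 2.0
print(theta_rd(1e9, 2, 0.7, 5.3, 2.0))  # ≈ 2.0e-10, above the 1e-10 bound
print(theta_rd(8e8, 2, 0.7, 5.3, 2.0))  # ≈ 1.0e-10, reproducing the quoted f_a limit
```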
![*The prediction on the strong CP phase as a function of $f_a$. The red lines assume that the universe is radiation-dominated after the QCD phase transition. The blue (green) lines assume that the universe is matter-dominated when the domain wall decays, and the matter decays at the temperature of 4 (1) MeV.* []{data-label="fig:phase"}](finetuning_fa_2.pdf "fig:"){width="0.6\linewidth"} ![*The prediction on the strong CP phase as a function of $f_a$. The red lines assume that the universe is radiation-dominated after the QCD phase transition. The blue (green) lines assume that the universe is matter-dominated when the domain wall decays, and the matter decays at the temperature of 4 (1) MeV.* []{data-label="fig:phase"}](finetuning_fa_6.pdf "fig:"){width="0.6\linewidth"}
Axion abundance with entropy production
---------------------------------------
As shown in Eq. (\[eq:temp\_DWdec\]), for the parameters which reproduce the observed dark matter abundance, the domain walls decay above temperatures of the MeV scale. This era is before the onset of Big-Bang Nucleosynthesis, and the universe may be matter-dominated at that time. Then the axion abundance from the domain walls is smaller than in the simple radiation-dominated case, and the predicted strong CP phase is suppressed.
Indeed, if there exists a light, very weakly interacting particle, then after the inflaton decays and the universe becomes radiation-dominated, the particle eventually dominates the energy density of the universe. We call such a particle a moduli field, although it is not necessarily a scalar particle. For example, a particle with a mass $m$ which couples to standard model particles through a dimension-5 operator suppressed by a scale $M_*$ decays around the temperature $$\begin{aligned}
T_{\text{dec},m} \sim 10 \text{MeV} \left( \frac{m}{10^5~\text{GeV}} \right)^{3/2} \frac{10^{18}~\text{GeV}}{M_*}.\end{aligned}$$ A gravitationally coupled particle ($M_*\sim 10^{18}$ GeV) with a mass around 100 TeV gives $T_{\text{dec},m}\sim$ MeV. In the supersymmetric standard model with supersymmetry breaking mediated by gravitational interactions, the masses of moduli fields are as large as the masses of the scalar partners of the standard model fermions. Scalar masses of $O(100)$ TeV can explain the observed Higgs mass of 125 GeV [@Okada:1990vk; @Ellis:1990nz; @Haber:1990aw; @Hall:2011jd; @Ibe:2011aa]. If $M_* \sim f_a$, which is the case for the radial direction of the PQ symmetry breaking field, $m=O(0.1\text{--}1)$ GeV yields a MeV-scale $T_{\text{dec},m}$.
When the domain walls decay, the ratio of the number density of the axion to the energy density of the moduli, $\rho_m$, is given by $$\begin{aligned}
\frac{n_a}{\rho_m} = \frac{3 n_a(t_\text{dec}) t_\text{dec}^2}{ 4 M_\text{pl}^2 },\end{aligned}$$ which does not change under the expansion of the universe until the moduli decays. Converting $\rho_m$ to the entropy density when the moduli decays (see the appendix), we obtain $$\begin{aligned}
\label{eq:abundance_md}
\frac{\rho_a}{s} \simeq 1.2 T_{\text{dec},m} m_a\frac{n_a}{\rho_m} = 4\times 10^{-11} \text{GeV} \left( \frac{10^9~\text{GeV}}{f_a} \right)^{2} \frac{10^{-50}}{\Xi} \frac{ T_{\text{dec},m} }{1~\text{MeV}} \frac{{\cal A}^2 C_d}{\tilde{\epsilon}_a N_\text{DW}^4 \text{sin}^2 (\pi / N_\text{DW})},\end{aligned}$$ where $T_{\text{dec},m}$ is defined by the decay rate of the moduli $\Gamma$ as $$\begin{aligned}
\label{eq:Tdec_def}
\Gamma = 3 H(T_{\text{dec},m}),\end{aligned}$$ with the Hubble scale evaluated by that during radiation dominated era. The dark matter abundance is obtained for $$\begin{aligned}
\Xi \simeq 1\times 10^{-51} \left( \frac{10^9~\text{GeV}}{f_a} \right)^2 \frac{ T_{\text{dec},m} }{1~\text{MeV}} \frac{ {\cal A}^2 C_d }{ \tilde{\epsilon}_a N_\text{DW}^4 \text{sin}^2 (\pi / N_\text{DW})},\end{aligned}$$ which yields a non-zero strong CP phase, $$\begin{aligned}
\theta \simeq 3 \times 10^{-11} \text{sin}\delta \left( \frac{f_a}{10^9~\text{GeV}} \right)^{2} \frac{ T_{\text{dec},m} }{1~\text{MeV}} \frac{{\cal A}^2 C_d}{\tilde{\epsilon}_a N_\text{DW} \text{sin}^2 (\pi / N_\text{DW})}.\end{aligned}$$ The temperature $T_{\text{dec},m}$ is constrained to be $T_{\text{dec},m}> 0.7$ MeV, since a lower temperature leads to incomplete thermalization of neutrinos, which affects Big-Bang nucleosynthesis [@Kawasaki:2000en][^3]. Incomplete thermalization also reduces the energy density of the relativistic component of the universe, which changes the expansion history of the universe and affects the spectrum of the cosmic microwave background. The Planck satellite gives the constraint $N_\text{eff}> 2.8$ (95% C.L.) [@Ade:2015xua]. Comparing this with the results in [@Kawasaki:2000en], we obtain $T_{\text{dec},m}> 4$ MeV. This bound can be evaded if the modulus has a significant branching fraction into radiation such as neutrinos or axions. We consider two reference values, $T_{\text{dec},m}=1$ and $4$ MeV.
The prediction for the strong CP phase is shown in Fig. \[fig:phase\] by the solid blue and green lines, for $T_\text{dec,m} = 4$ and 1 MeV, respectively. For $N_\text{DW} =2$ with $T_{\text{dec},m}=1$ MeV, fine-tuning of $\delta$ is not necessary as long as $f_a< 2\times 10^9$ GeV. For an $f_a$ that large, a fine-tuning of about 10 % would be required without the matter domination. Allowing a 10 % tuning, $f_a$ may be as large as $7\times 10^{9}$ GeV.
Summary and discussion
======================
We comment on production mechanisms of axion dark matter with a small decay constant that have been discussed in the literature. If the misalignment angle is very close to the maximal one, the anharmonic effect delays the onset of the axion oscillation. This enhances the axion abundance and can reproduce the observed dark matter abundance even if $f_a \ll 10^{11}$ GeV [@Lyth:1991ub; @Turner:1985si; @Visinelli:2009zm]. This scenario requires fine-tuning of the misalignment angle, as well as a very small inflation scale to suppress the isocurvature perturbation. If the radial direction of the PQ symmetry breaking field takes a large value in the early universe, particle production by parametric resonance [@Kofman:1994rk; @Kofman:1997yn] can produce axion dark matter [@Co:2017mop]. This scenario requires that the potential of the PQ symmetry breaking field be flat, which would call for supersymmetry. Kinetic-energy domination around the QCD phase transition enhances the axion abundance produced by the misalignment mechanism and allows for a small decay constant [@Visinelli:2009kt].
In this paper we have investigated the production of axions from domain walls rendered unstable by explicit PQ symmetry breaking, assuming that the universe is matter-dominated around temperatures of the MeV scale and that the domain walls decay during this matter-dominated epoch. This scenario can explain the observed dark matter abundance with axions, without tuning the phase of the explicit breaking, as long as $f_a < O(10^{9})$ GeV. Allowing a tuning of $O(10)$%, $f_a$ can be as large as $O(10^{10})$ GeV.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported by the Director, Office of Science, Office of High Energy and Nuclear Physics, of the US Department of Energy under Contract DE-AC02- 05CH11231 (K.H.), the National Science Foundation under grants PHY-1316783 and PHY-1521446 (K.H), JSPS KAKENHI Grant Nos. 17H01131 (M.K.) and 17K05434 (M.K.), MEXT KAKENHI Grant No. 15H05889 (M.K.), and World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan (M.K.).
Derivation of Eq. (\[eq:abundance\_md\])
========================================
The equations governing the evolution of the energy densities of the modulus, $\rho_m$, and radiation, $\rho_r$, and of the scale factor, $a$, are given by $$\begin{aligned}
\dot{\rho}_m + 3 H \rho_m = - \Gamma \rho_m,\\
\dot{\rho}_r + 4 H \rho_r = + \Gamma \rho_m, \\
\frac{\dot{a}}{a} = H = \frac{1}{\sqrt{3} \mpl} \sqrt{\rho_m + \rho_r},\end{aligned}$$ with the initial conditions given at $t_i \ll 1/ \Gamma$, $$\begin{aligned}
\rho_m(t= t_i) \equiv \rho_{m,i}, \\
\rho_r(t= t_i) = 0, \\
a(t= t_i) \equiv a_i.\end{aligned}$$ After the change of variables $$\begin{aligned}
t = x / \Gamma,~\rho_m = a^{-3} m \Gamma^2 \mpl^2,~\rho_r = a^{-4} r \Gamma^2 \mpl^2,\end{aligned}$$ the equations are given by $$\begin{aligned}
\frac{\text{d} m}{\text{d} x} = - m,\\
\frac{\text{d} r}{\text{d} x} = a m, \\
\frac{\text{d} a}{\text{d} x} = \frac{a}{\sqrt{3}} \sqrt{a^{-3} m + a^{-4} r}.\end{aligned}$$ The solution for $m$ is $$\begin{aligned}
m = e^{-(x-x_i)} m_i,\end{aligned}$$ where $m_i \equiv m(x_i)$. The evolution equations of $r$ and $a$ are given by $$\begin{aligned}
\label{eq:drdx}
\frac{\text{d} r}{\text{d} x} = a e^{-(x-x_i)} m_i,\\
\label{eq:dadx}
\frac{\text{d} a}{\text{d} x} = \frac{a}{\sqrt{3}} \sqrt{a^{-3} e^{-(x-x_i)} m_i + a^{-4} r}.\end{aligned}$$ As $x \rightarrow \infty$, $r$ approaches a constant, which we denote by $r_f$.
The quantity we are interested in is $$\begin{aligned}
\frac{n_a}{s} (t \rightarrow \infty) = \frac{n_a}{\rho_r^{3/4}} (t \rightarrow \infty) \frac{\rho_r^{3/4}}{s} = \frac{n_a}{\rho_m} (t= t_i) \frac{m_i}{r_f^{3/4}} \sqrt{\Gamma \mpl} \frac{\rho_r^{3/4}}{s}.\end{aligned}$$ Using the definition of $T_\text{dec,m}$ in Eq. (\[eq:Tdec\_def\]), we obtain $$\begin{aligned}
\frac{n_a}{s} (t \rightarrow \infty) = \frac{n_a}{\rho_m} (t= t_i) T_\text{dec,m} \frac{m_i}{r_f^{3/4}} \frac{3^{5/4}}{4}.\end{aligned}$$ A numerical evaluation of Eqs. (\[eq:drdx\]) and (\[eq:dadx\]) gives $m_i/r_f^{3/4} \simeq 1.2$, and we obtain $$\begin{aligned}
\frac{n_a}{s} (t \rightarrow \infty) \simeq 1.2 \times T_\text{dec,m} \frac{n_a}{\rho_m} (t= t_i).\end{aligned}$$
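The coefficient $m_i/r_f^{3/4} \simeq 1.2$ can be reproduced by integrating Eqs. (\[eq:drdx\]) and (\[eq:dadx\]) directly. A minimal sketch (our own check, not from the original; the value $m_i = 10^6$, the integration range, and the step count are arbitrary choices placing the initial time deep in the matter-dominated era), using a fixed-step RK4 in $s = \ln x$:

```python
import math

# Dimensionless system of the appendix:
#   dm/dx = -m,  dr/dx = a m,  da/dx = (a/sqrt(3)) sqrt(a^-3 m + a^-4 r),
# integrated in s = ln(x) so the early matter-dominated era is well resolved.
def rhs(s, y):
    m, r, a = y
    x = math.exp(s)
    hubble = math.sqrt(max(m / a**3 + r / a**4, 0.0)) / math.sqrt(3.0)
    return (-x * m, x * a * m, x * a * hubble)

def rk4(y, s0, s1, n):
    h = (s1 - s0) / n
    s = s0
    for _ in range(n):
        k1 = rhs(s, y)
        k2 = rhs(s + h / 2, tuple(v + h / 2 * k for v, k in zip(y, k1)))
        k3 = rhs(s + h / 2, tuple(v + h / 2 * k for v, k in zip(y, k2)))
        k4 = rhs(s + h, tuple(v + h * k for v, k in zip(y, k3)))
        y = tuple(v + h / 6 * (c1 + 2 * c2 + 2 * c3 + c4)
                  for v, c1, c2, c3, c4 in zip(y, k1, k2, k3, k4))
        s += h
    return y

m_i = 1e6   # H_i / Gamma = sqrt(m_i / 3) >> 1: deep matter domination at x_i
m, r_f, a_f = rk4((m_i, 0.0, 1.0), math.log(1e-4), math.log(40.0), 20000)
print(m_i / r_f ** 0.75)   # close to the 1.2 quoted in the text
```

By $x = 40$ the modulus has fully decayed and $r$ has frozen to $r_f$, so the printed ratio approximates the asymptotic coefficient.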
[99]{}
G. ’t Hooft, Phys. Rev. Lett. **37**, 8 (1976). C. A. Baker *et al.*, Phys. Rev. Lett. **97**, 131801 (2006) \[hep-ex/0602020\]. R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. **38**, 1440 (1977). R. D. Peccei and H. R. Quinn, Phys. Rev. D **16**, 1791 (1977).
S. Weinberg, Phys. Rev. Lett. **40**, 223 (1978). F. Wilczek, Phys. Rev. Lett. **40**, 279 (1978). J. Preskill, M. B. Wise and F. Wilczek, Phys. Lett. B **120**, 127 (1983). L. F. Abbott and P. Sikivie, Phys. Lett. B **120**, 133 (1983). M. Dine and W. Fischler, Phys. Lett. B **120**, 137 (1983). Y. B. Zeldovich, I. Y. Kobzarev and L. B. Okun, Zh. Eksp. Teor. Fiz. **67**, 3 (1974) \[Sov. Phys. JETP **40**, 1 (1974)\]. P. Sikivie, Phys. Rev. Lett. **48**, 1156 (1982). A. D. Linde, JETP Lett. **40**, 1333 (1984) \[Pisma Zh. Eksp. Teor. Fiz. **40**, 496 (1984)\]; A. D. Linde, Phys. Lett. B **158**, 375 (1985). D. Seckel and M. S. Turner, Phys. Rev. D **32**, 3178 (1985). D. H. Lyth, Phys. Lett. B **236**, 408 (1990); D. H. Lyth, Phys. Rev. D **45**, 3394 (1992). M. S. Turner and F. Wilczek, Phys. Rev. Lett. **66**, 5 (1991). A. D. Linde, Phys. Lett. B **259**, 38 (1991). G. Lazarides, C. Panagiotakopoulos and Q. Shafi, Phys. Rev. Lett. **56**, 432 (1986). J. A. Casas and G. G. Ross, Phys. Lett. B **192**, 119 (1987). L. Randall, Phys. Lett. B **284**, 77 (1992). S. M. Barr and D. Seckel, Phys. Rev. D **46**, 539 (1992). R. Holman, S. D. H. Hsu, T. W. Kephart, E. W. Kolb, R. Watkins and L. M. Widrow, Phys. Lett. B **282**, 132 (1992) \[hep-ph/9203206\]. M. Dine, hep-th/9207045. A. G. Dias, V. Pleitez and M. D. Tonasse, Phys. Rev. D **67**, 095008 (2003) \[hep-ph/0211107\]. K. S. Choi, H. P. Nilles, S. Ramos-Sanchez and P. K. S. Vaudrevange, Phys. Lett. B **675**, 381 (2009) \[arXiv:0902.3070 \[hep-th\]\]. L. M. Carpenter, M. Dine and G. Festuccia, Phys. Rev. D **80**, 125017 (2009) \[arXiv:0906.1273 \[hep-th\]\]. K. Harigaya, M. Ibe, K. Schmitz and T. T. Yanagida, Phys. Rev. D **88**, no. 7, 075022 (2013) \[arXiv:1308.1227 \[hep-ph\]\]. K. Harigaya, M. Ibe, K. Schmitz and T. T. Yanagida, Phys. Rev. D **92**, no. 7, 075003 (2015) \[arXiv:1505.07388 \[hep-ph\]\]. H. Fukuda, M. Ibe, M. Suzuki and T. T. Yanagida, Phys. Lett. 
B **771**, 327 (2017) \[arXiv:1703.01112 \[hep-ph\]\]. J. K. Vogel *et al.*, arXiv:1302.3273 \[physics.ins-det\]. E. Armengaud *et al.*, JINST **9**, T05002 (2014) \[arXiv:1401.3233 \[physics.ins-det\]\]. V. Anastassopoulos *et al.* \[TASTE Collaboration\], arXiv:1706.09378 \[hep-ph\]. G. Rybka, A. Wagner, A. Brill, K. Ramos, R. Percival and K. Patel, Phys. Rev. D **91**, no. 1, 011701 (2015) \[arXiv:1403.3121 \[physics.ins-det\]\]. Y. Hochberg, T. Lin and K. M. Zurek, Phys. Rev. D **94**, no. 1, 015019 (2016) \[arXiv:1604.06800 \[hep-ph\]\]. A. Caldwell *et al.* \[MADMAX Working Group\], Phys. Rev. Lett. **118**, no. 9, 091801 (2017) \[arXiv:1611.05865 \[physics.ins-det\]\]. A. Arvanitaki, S. Dimopoulos and K. Van Tilburg, arXiv:1709.05354 \[hep-ph\].
I. G. Irastorza and J. Redondo, arXiv:1801.08127 \[hep-ph\]. A. Ringwald and K. Saikawa, Phys. Rev. D **93**, no. 8, 085031 (2016) Addendum: \[Phys. Rev. D **94**, no. 4, 049908 (2016)\] \[arXiv:1512.06436 \[hep-ph\]\]. T. Hiramatsu, M. Kawasaki and K. Saikawa, JCAP **1108**, 030 (2011) \[arXiv:1012.4558 \[astro-ph.CO\]\]. T. Hiramatsu, M. Kawasaki, K. Saikawa and T. Sekiguchi, JCAP **1301**, 001 (2013) \[arXiv:1207.3166 \[hep-ph\]\]. M. Kawasaki, K. Saikawa and T. Sekiguchi, Phys. Rev. D **91**, no. 6, 065014 (2015) \[arXiv:1412.0789 \[hep-ph\]\]. T. W. B. Kibble, J. Phys. A **9**, 1387 (1976). T. W. B. Kibble, Nucl. Phys. B **252**, 227 (1985) Erratum: \[Nucl. Phys. B **261**, 750 (1985)\]. D. P. Bennett, Phys. Rev. D **33**, 872 (1986) Erratum: \[Phys. Rev. D **34**, 3932 (1986)\]. M. Dine, W. Fischler and M. Srednicki, Phys. Lett. B **104**, 199 (1981). A. R. Zhitnitsky, Sov. J. Nucl. Phys. **31**, 260 (1980) \[Yad. Fiz. **31**, 497 (1980)\]. W. H. Press, B. S. Ryden and D. N. Spergel, Astrophys. J. **347**, 590 (1989). M. C. Huang and P. Sikivie, Phys. Rev. D **32**, 1560 (1985).
V. Baluni, Phys. Rev. D **19**, 2227 (1979). R. J. Crewther, P. Di Vecchia, G. Veneziano and E. Witten, Phys. Lett. **88B**, 123 (1979) Erratum: \[Phys. Lett. **91B**, 487 (1980)\]. K. Kanaya and M. Kobayashi, Prog. Theor. Phys. **66**, 2173 (1981). P. Cea and G. Nardulli, Phys. Lett. **144B**, 115 (1984). H. J. Schnitzer, Phys. Lett. **139B**, 217 (1984). M. M. Musakhanov and Z. Z. Israilov, Phys. Lett. **137B**, 419 (1984). D. K. Hong, H. C. Kim, S. Siwach and H. U. Yee, JHEP **0711**, 036 (2007) \[arXiv:0709.0314 \[hep-ph\]\]. G. G. Raffelt, Lect. Notes Phys. **741**, 51 (2008) \[hep-ph/0611350\]. Y. Okada, M. Yamaguchi, T. Yanagida, Prog. Theor. Phys. **85** (1991) 1-6. J. R. Ellis, G. Ridolfi, F. Zwirner, Phys. Lett. **B257** (1991) 83-91. H. E. Haber, R. Hempfling, Phys. Rev. Lett. **66** (1991) 1815-1818.
L. J. Hall and Y. Nomura, JHEP **1201**, 082 (2012) \[arXiv:1111.4519 \[hep-ph\]\];
M. Ibe and T. T. Yanagida, Phys. Lett. B **709**, 374 (2012) \[arXiv:1112.2462 \[hep-ph\]\];
M. Kawasaki, K. Kohri and N. Sugiyama, Phys. Rev. D **62**, 023506 (2000) \[astro-ph/0002127\]. P. A. R. Ade *et al.* \[Planck Collaboration\], Astron. Astrophys. **594**, A13 (2016) \[arXiv:1502.01589 \[astro-ph.CO\]\]. M. S. Turner, Phys. Rev. D **33**, 889 (1986).
L. Visinelli and P. Gondolo, Phys. Rev. D **80**, 035024 (2009) \[arXiv:0903.4377 \[astro-ph.CO\]\].
L. Kofman, A. D. Linde and A. A. Starobinsky, Phys. Rev. Lett. **73**, 3195 (1994) \[hep-th/9405187\]. L. Kofman, A. D. Linde and A. A. Starobinsky, Phys. Rev. D **56**, 3258 (1997) \[hep-ph/9704452\].
R. T. Co, L. J. Hall and K. Harigaya, arXiv:1711.10486 \[hep-ph\]. L. Visinelli and P. Gondolo, Phys. Rev. D [**81**]{}, 063508 (2010) \[arXiv:0912.0015 \[astro-ph.CO\]\].
[^1]: This seems difficult to achieve in models with one-step symmetry breaking, as the exact symmetry results in stable topological defects. Models with several stages of symmetry breaking, like the one in Ref. [@Barr:1992qq], can have the desired property. Ref. [@Ringwald:2015dsf] considers one-field models, but the symmetry imposed there is anomalous and requires further completion of the model.
[^2]: Ref. [@Kawasaki:2014sqa] assumes an aggressive bound $|\theta| < 7\times 10^{-12}$ and finds much more restrictive results.
[^3]: If the modulus decays into hadrons with a significant branching ratio, BBN gives a more stringent constraint, $T_{\text{dec},m} > 3\text{--}4$ MeV [@Kawasaki:2000en].
|
---
abstract: 'Dynamical control of biological systems is often restricted by the practical constraint of unidirectional parameter perturbations. We show that such a restriction introduces surprising complexity to the stability of one-dimensional map systems and can actually improve controllability. We present experimental cardiac control results that support these analyses. Finally, we develop new control algorithms that exploit the structure of the restricted-control stability zones to automatically adapt the control feedback parameter and thereby achieve improved robustness to noise and drifting system parameters.'
address:
- ' $^{1}$ Entelos, Inc., Menlo Park, CA 94025 '
- |
$^{2}$ Division of Cardiology, Department of Medicine,\
Weill Medical College of Cornell University, New York, NY 10021
author:
- 'Kevin Hall$^{1,}$[^1] and David J. Christini$^{2,}$[^2]'
title: 'Restricted feedback control of one-dimensional maps'
---
Introduction
============
Recent success controlling complex dynamics of nonlinear physical and chemical systems [@ditto:1990a; @hunt:1991a; @peng:1991a; @petrov:1992a; @roy:1992a; @carroll:1992a; @gills:1992a; @schwartz:1993b; @bielawski:1993a; @gauthier:1994a; @socolar:1994a; @petrov:1994a; @rulkov:1994a; @colet:1994a; @hubinger:1994a; @christini:1996b] has opened the door for the control of biological rhythms. Some researchers have speculated about the medical implications of controlling heart-rate dynamics or brain rhythms [@garfinkel:1992a; @schiff:1994b; @weiss:1994a; @glass:1996a; @hall:1997a; @christini:1999c]. However, biological systems typically have characteristics that require special consideration. For example, biological control studies to date [@garfinkel:1992a; @glass:1994a; @schiff:1994b; @christini:1996a; @watanabe:1996a; @brandt:1996a; @hall:1997a; @christini:1997b; @brandt:1997b; @sauer:1997a; @christini:2000a] have required that the control interventions be unidirectional — only allowing shortening of a parameter. Such a restriction is somewhat analogous to trying to balance a broomstick vertically on one’s palm using horizontal hand movements in only one direction. Intuitively, one might expect that such a restriction would limit controllability. However, as we will demonstrate in this paper, such a restriction introduces some surprising complexity to the stability properties of controlled one-dimensional map systems.
In fact, the unidirectional restriction can actually improve the controllability of some systems [@fn:gauthier_ind]. In this paper, we will show how restricted control can introduce stability zones that do not exist in the unrestricted case. Furthermore, we will show that some of these zones were present in recent cardiac control experiments [@hall:1997a]. Finally, we will exploit the structure of the stability zones in robust new control algorithms that automatically adapt the control feedback parameter.
Delayed feedback control of systems described by one-dimensional maps
=====================================================================
In this study we will consider the control of systems whose dynamics can be described by one-dimensional maps: $$X_{n+1} = f(X_n, \lambda),
\label{eqn:map4}$$ where $X_n$ is the variable to be controlled and $\lambda$ is an experimentally accessible system parameter. The goal is to stabilize the system state point $\xi_n=[X_{n},X_{n-1}]$ about an unstable period-1 fixed point $\xi^*=[X^*,X^*]$, where $X^* = f(X^*, \lambda)$, by perturbing $\lambda$ by an amount: $$\delta \lambda_n = \frac{\alpha}{2} (X_{n-1} - X_n),
\label{eqn:deltalambda}$$ where $\alpha$ is the feedback gain parameter.
The advantage of such a control scheme is that relatively little *a priori* system information is required to implement control and stabilize $\xi^*$. The only requirement is knowledge of the sign of $\frac{\partial f}{\partial
\lambda}$ so that perturbations can be applied in the correct direction. In fact, knowledge of the value of $\xi^*$ is unnecessary because the controlled system’s fixed point is identical to that of the uncontrolled system. Furthermore, if the fixed point drifts during the course of the control (as is common for biological systems), the controlled system will track the fixed point, provided that the system stays in the stable range of the feedback gain parameter $\alpha$.
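As a concrete illustration (our own sketch, not part of the original study), the controller of Eq. \[eqn:deltalambda\] can be applied to a chaotic logistic map; all parameter values below are hypothetical choices, with $\beta$ placed inside the stable range derived in the next subsection:

```python
# Delayed feedback control, delta_lambda_n = (alpha/2) (X_{n-1} - X_n),
# applied to the chaotic logistic map X_{n+1} = lam * X_n * (1 - X_n).
lam = 3.9                        # uncontrolled map is chaotic here
Xstar = 1.0 - 1.0 / lam          # unstable period-1 fixed point
A = 2.0 - lam                    # slope of the map at the fixed point (-1.9)
dfdlam = Xstar * (1.0 - Xstar)   # df/d(lambda) at the fixed point (positive)

beta = -0.7                      # inside the stable zone -1 < beta < (A+1)/2
alpha = 2.0 * beta / dfdlam      # feedback gain realizing this beta

X_prev = 0.74                    # start near the fixed point (X* ~ 0.7436)
X = lam * X_prev * (1.0 - X_prev)
for _ in range(300):
    dlam = 0.5 * alpha * (X_prev - X)        # control perturbation
    X_prev, X = X, (lam + dlam) * X * (1.0 - X)

print(abs(X - Xstar))            # ~0: the unstable fixed point is stabilized
```

Note that the stability result is only local, so the controller is switched on near the fixed point; starting far from $X^*$ may fail to converge. The controlled fixed point is identical to the uncontrolled one, as the perturbation vanishes there.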
The control algorithm of Eq. \[eqn:deltalambda\] is an example of delayed feedback control, a technique that has been used in a variety of modeling and experimental studies [@hunt:1991a; @peng:1991a; @petrov:1992a; @pyragas:1992a; @gauthier:1994a; @christini:1996a]. In section \[sec:AVNexperiments\] we will present an example of a biological system with constraints that restrict the control algorithm — only allowing unidirectional perturbations of $\lambda$. The purpose of this study is to examine the implications of such a restriction.
Linear stability analysis of unrestricted delayed feedback control
------------------------------------------------------------------
For unrestricted control, in which $\delta \lambda_n$ can be positive or negative, linearizing the controlled system about a fixed point at the origin gives: $$\begin{aligned}
X_{n+1} & = & A X_n + \beta (Y_n - X_n), \label{eqn:lin}\\
Y_{n+1} & = & X_n, \nonumber\end{aligned}$$ where $A \equiv \frac{\partial f}{\partial X}|_{\xi^*}$ and $\beta \equiv
\frac{\alpha}{2}\bigl( \frac{\partial f}{\partial \lambda}
\bigr)|_{\xi^*}$.
The eigenvalues of Eq. \[eqn:lin\] are $\bigl(A - \beta \pm \sqrt{(A -
\beta)^2 + 4 \beta} \bigr)/2$. The fixed point is stable provided that both eigenvalues fall inside the unit circle. This condition is met when: $$-1 < \beta < \frac{1}{2}(A + 1),
\label{eqn:unrestzone}$$ for $A < 1$ [@hall:1997a; @ushio:1996a]. Note that the stability zone shrinks to zero for $A \leq -3$, thereby limiting the applicability of the unrestricted control algorithm to maps with a sufficiently shallow slope (i.e., $-3 \leq A <1$) at $\xi^*$. Furthermore, for $A > 1$ there exists no real value for $\beta$ such that the eigenvalues fall within the unit circle. Thus, unstable positively-sloped fixed points cannot be stabilized.
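These eigenvalue conditions are straightforward to verify numerically. A small sketch (our own check; the test values of $A$ and $\beta$ are arbitrary):

```python
import cmath

# Eigenvalues of the linearized controlled system (Eq. [eqn:lin]):
#   (A - beta +/- sqrt((A - beta)^2 + 4 beta)) / 2
def max_modulus(A, beta):
    d = cmath.sqrt((A - beta) ** 2 + 4.0 * beta)
    return max(abs((A - beta + d) / 2.0), abs((A - beta - d) / 2.0))

A = -1.9                              # stable zone: -1 < beta < (A + 1)/2 = -0.45
print(max_modulus(A, -0.70) < 1.0)    # True  (inside the zone)
print(max_modulus(A, -0.30) < 1.0)    # False (beta above (A + 1)/2)
print(max_modulus(A, -1.10) < 1.0)    # False (beta below -1)

# For A > 1, no real beta brings both eigenvalues inside the unit circle:
print(min(max_modulus(1.5, -5.0 + 0.01 * i) for i in range(1001)) > 1.0)  # True
```

`cmath.sqrt` handles the case of a negative discriminant, where the eigenvalues form a complex-conjugate pair.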
Restricted delayed feedback control
===================================
Restricting the above control algorithm by only allowing shortening of $\lambda$ gives the following controller: $$\begin{aligned}
\delta \lambda_n = \Theta_n \frac{\alpha}{2} (X_{n-1} - X_n),
\label{eqn:restrictdeltalambda}\end{aligned}$$ where
$$\begin{aligned}
\Theta_n & = &
\left\{
\begin{array}{ll}
1 & \;\; \hbox{if } (X_n - Y_n) > 0, \\
0 & \;\; \hbox{otherwise}.
\end{array}
\right.
\label{eqn:cond}\end{aligned}$$
Thus, when $\Theta_n = 1$ the control is active (i.e., a perturbation is delivered), and when $\Theta_n = 0$ the control is inactive (i.e., no perturbation is given).
Linear stability analysis of restricted delayed feedback control
----------------------------------------------------------------
The restricted control algorithm of Eq. \[eqn:restrictdeltalambda\] gives the following linearized controlled system: $$\begin{aligned}
X_{n+1} & = & A X_n + \Theta_n \beta (Y_n - X_n), \\
\label{eqn:restlin}
Y_{n+1} & = & X_n. \nonumber\end{aligned}$$ Geometrically, the restriction of Eq. \[eqn:cond\] means that perturbations will only be applied if the state point $\xi_n$ lies above the return-map line of identity $X_{n+1} = X_n$. The dynamical effects of this restriction depend on the sign of the slope at $\xi^*$.
### Restricted control for $A < -1$ {#sec:control_a_below_minus1}
Typical dynamics of restricted control for negatively-sloped unstable directions ($A<-1$) are depicted in Fig. \[fig:x\_return\_a\_below\_minus1\]. This figure shows eight control trials of a linear map with $A = -4$ for different values of $\beta$; there are four examples of stable control and four examples of unstable control.
Figure \[fig:x\_return\_a\_below\_minus1\](a) shows a case in which the restricted control algorithm failed to stabilize the unstable fixed point $\xi^*$ \[which is located at the intersection of the uncontrolled system map (solid line) and the line of identity (dotted line), and is denoted by a solid triangle\] with $\beta=-2.80$. The dot-dash lines correspond to the system maps when $\lambda$ is perturbed. A series of arrows originate at the initial state point and connect consecutive state points (solid circles, numbered consecutively). In this case, the initial state point ($1$) is followed by a control intervention, which causes the next iterate ($2$) to fall below the line of identity. According to Eq. \[eqn:cond\], the next iterate ($3$) will be uncontrolled and therefore will fall on the solid line. Furthermore, because the first controlled iterate ($2$) was less than the fixed point, the next iterate ($3$) will be above the line of identity, leading to a control intervention at the fourth iterate ($4$). Thus, control is applied every other iterate so that the sequence of $\Theta_n$ is 0101.... In this case, the fixed point is not stabilized because control fails to direct the system closer to the fixed point (i.e., the arrows spiral away from $\xi^*$).
Figure \[fig:x\_return\_a\_below\_minus1\](b) shows control with the same value of $A$, but using a slightly more negative $\beta$ value, $\beta=-3.1$. For these parameter settings, it can be seen that control is also applied every other iterate so that the sequence of $\Theta_n$ is again 0101.... However in this case, the fixed point is stabilized successfully because the control interventions direct the system closer to the fixed point (i.e., the arrows spiral towards $\xi^*$). Note that in order to maximize the clarity of the control sequence diagram, the axes in panel (a) are not scaled the same as those for panel (b). Similarly, for all of Fig. \[fig:x\_return\_a\_below\_minus1\], axes from different panels are not necessarily scaled the same.
Figure \[fig:x\_return\_a\_below\_minus1\](c) shows a trial in which $\beta$ was decreased to $-3.23$. As in the previous example, the first controlled iterate ($3$) is below the line of identity (dictating that the next iterate ($4$) is uncontrolled). However, in this case the perturbation is larger than would occur with the parameter settings of Fig. \[fig:x\_return\_a\_below\_minus1\](b), such that the controlled iterate ($3$) is slightly larger than the fixed point. This dictates that the next iterate ($4$) is below the line of identity, which leads to a second consecutive uncontrolled iterate ($5$). Thus, control is applied in a 001001... sequence. This sequence is stable for $\beta=-3.23$ because the control perturbations direct the system closer to the fixed point. However, if $\beta$ is decreased to $-3.40$, it can be seen from Fig. \[fig:x\_return\_a\_below\_minus1\](d) that this generates a 001001... control sequence that is unstable because the system is directed away from the fixed point.
If $\beta$ is made more negative, then a new control sequence is achieved. Figure \[fig:x\_return\_a\_below\_minus1\](e) shows an unstable 011011... sequence for $\beta=-5.50$. In this case, the first perturbation is so large that the controlled iterate ($2$) is above the line of identity, dictating that the next iterate ($3$) is also controlled. The second controlled iterate ($3$) is below the line of identity and below the fixed point, thereby producing the 011011... sequence. In this case, the fixed point is not stabilized because the control perturbations moved the state point away from the fixed point. However, when $\beta$ is decreased to $-5.76$, as seen in Fig. \[fig:x\_return\_a\_below\_minus1\](f), a 011011... sequence stabilizes the fixed point.
Like the transition from the stable 0101... sequence to the stable 001001... sequence depicted in Figs. \[fig:x\_return\_a\_below\_minus1\](b) and (c), there is a transition from the stable 011011... sequence to the stable 00110011... sequence as $\beta$ is decreased further. Figure \[fig:x\_return\_a\_below\_minus1\](g) shows the stable 00110011... case for $\beta=-5.798$. The 00110011... sequence becomes unstable as $\beta$ is decreased still further; Figure \[fig:x\_return\_a\_below\_minus1\](h) depicts this case for $\beta=-5.95$.
The progression of unstable and stable periodic control sequences continues indefinitely as $\beta$ is decreased. In fact, the switching parameter $\Theta_n$ imposes the following progression of control sequences as $\beta$ is decreased from zero: unstable $01^1$, stable $01^1$, stable $001^1$, unstable $001^1$, unstable $01^2$, stable $01^2$, stable $001^2$, unstable $001^2$, ..., unstable $01^\infty$, stable $01^\infty$, stable $001^\infty$, unstable $001^\infty$, where $1^k$ denotes $k$ consecutive ones (control perturbations) before the sequence repeats itself.
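This progression can be confirmed by iterating the linearized restricted controller directly. A minimal sketch (our own check, not from the original paper), using the values of $A$ and $\beta$ quoted for Fig. \[fig:x\_return\_a\_below\_minus1\]:

```python
# Restricted delayed feedback control of the linearized map:
#   X_{n+1} = A X_n + Theta_n * beta * (X_{n-1} - X_n),
# with Theta_n = 1 only when X_n > X_{n-1} (state above the line of identity).
def final_amplitude(A, beta, n_iter=120, X0=1e-2):
    X_prev, X = 0.0, X0
    for _ in range(n_iter):
        theta = 1.0 if X > X_prev else 0.0
        X_prev, X = X, A * X + theta * beta * (X_prev - X)
    return abs(X)

A = -4.0   # outside the unrestricted stability zone, which requires A > -3
for beta in (-2.80, -3.10, -3.23, -3.40, -5.50, -5.76):
    print(beta, final_amplitude(A, beta) < 1e-6)
# beta = -3.10, -3.23 and -5.76 converge (stable 01, 001 and 011 sequences);
# beta = -2.80, -3.40 and -5.50 diverge, matching the panels of the figure.
```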
Because of the progression of the control sequences imposed by the switching term $\Theta_n$, $X_{k+1}$ can be expressed as: $$X_{k+1} = e_k X_0,
\label{eqn:x_k+1}$$ where $e_k$ is given by the following iterative expression: $$e_k = (A - \beta) e_{k-1} + \beta e_{k-2},
\label{eqn:e_k}$$ with $e_0 = A$ for all sequences and $e_1 = A^2 + \beta (1 - A)$ for the $01^k$ sequences or $e_1 = A^2$ for the $001^k$ sequences.
The boundaries of the stability zones can be computed by using the criterion that stable sequences move the system closer to the fixed point after one control sequence. Because $X_{k+1}$ is the last iterate of the first $01^k$ control sequence and $X_{k+2}$ is the last iterate of the first $001^k$ sequence, the stability conditions are $e_k < 1$ and $e_{k+1} < 1$ for the $01^k$ and $001^k$ sequences, respectively. Therefore, the boundaries are given by degree-$k$ polynomials in $\beta$. For example, the $k=1$ control sequences are stable for $1 + A + 1/A \leq \beta \leq 1+A$ for $A < -1$ [@bielawski:1993a]. Figure \[fig:beta\_vs\_a\_below\_minus1\] depicts the stability zones (shaded regions) for $k = 1$ and $k = 2$. The boundaries between the stable $01^k$ and $001^k$ sequences are defined by the condition $e_k = 0$ (dotted curves in Fig. \[fig:beta\_vs\_a\_below\_minus1\]). These curves mark the optimal parameter values for a given stability region, because the fixed point is reached after a single control sequence $01^k$.
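The recursion for $e_k$ and the $k=1$ boundaries can be checked with a few lines of code (our own sketch; the function and argument names are ours):

```python
# The multiplier e_k of Eq. [eqn:e_k]: e_k = (A - beta) e_{k-1} + beta e_{k-2},
# with e_0 = A, and e_1 = A^2 + beta (1 - A) for 01^k or e_1 = A^2 for 001^k.
def e(k, A, beta, seq="01"):
    e_prev, e_cur = A, A * A + (beta * (1.0 - A) if seq == "01" else 0.0)
    for _ in range(k - 1):
        e_prev, e_cur = e_cur, (A - beta) * e_cur + beta * e_prev
    return e_cur

A = -4.0   # k = 1 zone: 1 + A + 1/A <= beta <= 1 + A, i.e. -3.25 <= beta <= -3
print(e(1, A, -3.0))            # 1.0: upper boundary, where e_1 = 1
print(e(2, A, -3.25, "001"))    # 1.0: lower boundary, where e_2 = 1
print(e(1, A, -3.1), e(2, A, -3.23, "001"))   # both < 1: inside the zone
```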
The striking feature of this analysis is that the domain of control is extended by the restriction of Eq. \[eqn:cond\] [@fn:gauthier_ind]. In fact, Fig. \[fig:beta\_vs\_a\_below\_minus1\] shows that for all $A<-1$, stable control sequences exist for the restricted system. This is in contrast with the unrestricted system, for which stable control sequences exist only for $A>-3$, as shown by the dashed triangular region marking its stability zone (according to Eq. \[eqn:unrestzone\]).
While there are an infinite number of stable zones corresponding to an arbitrary number $k$ consecutive control perturbations, the stability zones are bounded by the curve $\beta = A - 2 - 2 \sqrt{1 - A}$. This boundary is computed by recognizing that as $k$ approaches infinity, control is always active and $X_{n+1} > X_n$ for every iterate. Thus, the algorithm behaves just like the unrestricted control of Eq. \[eqn:lin\] with real eigenvalues greater than one — a condition met only when $\beta$ is below the boundary.
### Restricted control for $A > 1$ {#sec:control_a_above_1}
Typical dynamics of restricted control for positively-sloped unstable directions ($A>1$) are depicted in Fig. \[fig:x\_return\_a\_above\_1\]. This figure shows four control trials of a linear map, with $A = 2.1$, for different values of $\beta$; there are three examples of unstable control and one example in which $\xi^*$ is controlled.
Figure \[fig:x\_return\_a\_above\_1\](a) shows an unstable $01^\infty$ sequence for $\beta = 1.5$. In this case, the perturbations are so small that all state points lie above the line of identity. The control slows the exponential growth (which would be marked by a rapid exit from $\xi^*$ along the solid line), but fails to force an approach to $\xi^*$.
If $\beta$ is increased to $\beta = 2.5$, then the perturbations are large enough so that the first controlled $X_n$ is smaller than the previous $X_{n-1}$. $\xi_n$ then falls below the line of identity, thereby generating the $01^1$ control sequence depicted in Fig. \[fig:x\_return\_a\_above\_1\](b). In this case, the sequence is unstable because a given controlled point is further from $\xi^*$ than the previous controlled point.
For $\beta = 3.5$, control is successful; the state point approaches $\xi^*$ in a $01^1$ sequence as shown in Fig. \[fig:x\_return\_a\_above\_1\](c). Such control is successful for a noise-free model system. However, for a real-world system, once the state point is sufficiently close to $\xi^*$, noise will eventually kick $\xi_n$ to the opposite side of $\xi^*$. Subsequent $\xi_n$ will fall below the line of identity, causing the control to be deactivated and leading to an exponential departure from $\xi^*$.
When $\beta$ is increased further, the first perturbation can be so large that the state point will be kicked to the other side of $\xi^*$ as shown in Fig. \[fig:x\_return\_a\_above\_1\](d) for $\beta = 4.5$. Again, control is subsequently deactivated and the system diverges from the fixed point.
The boundaries between the different control sequences are depicted in Fig. \[fig:beta\_vs\_a\_above\_1\]. The unstable $01^\infty$ sequence occurs for $\beta<A$. This boundary was computed by finding the value for $\beta$ such that the first controlled iterate falls on the line of identity; this occurs for $\beta = A$. For $\beta > A$, the first controlled iterate lies below the line of identity, thereby shutting the control off for the next iterate. Thus, $\beta = A$ marks the boundary between the $01^\infty$ and $01^1$ sequences. The boundary between unstable and stable $01^1$ sequences was computed by finding the value for $\beta$ such that the iterate subsequent to control is equal to the previous uncontrolled iterate; this occurs for $\beta =
1+A$. For slightly larger $\beta$ values, the iterate subsequent to control is closer to $\xi^*$ than the previous uncontrolled iterate. Thus, the unstable $01^1$ sequence is bounded by $A < \beta <
1 + A$. The final boundary marks the end of the converging $01^1$ sequence and can be found by determining the value of $\beta$ such that the first controlled iterate lands at $\xi^*$. This occurs for $\beta = A^2/(A-1)$. For $\beta > A^2/(A-1)$, the first controlled iterate, and all subsequent iterates, lie below the line of identity, thereby shutting off the control. Therefore, the converging $01^1$ sequence occurs in the shaded region $1 + A < \beta <
A^2/(A-1)$ [@bielawski:1993a; @fn:bielawski], and the unstable $010^\infty$ sequence occurs for $\beta > A^2/(A-1)$.
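These boundaries can again be checked by direct iteration. A minimal sketch (our own check, not from the original paper), using $A = 2.1$ and three of the $\beta$ values of Fig. \[fig:x\_return\_a\_above\_1\]:

```python
# Restricted control for a positively sloped fixed point (A > 1):
# the converging 01^1 sequence occupies 1 + A < beta < A^2/(A - 1).
def run(A, beta, n_iter=200, X0=1e-2):
    X_prev, X = X0 / 2.0, X0        # start above the line of identity
    for _ in range(n_iter):
        theta = 1.0 if X > X_prev else 0.0
        X_prev, X = X, A * X + theta * beta * (X_prev - X)
    return abs(X)

A = 2.1                             # window: 3.1 < beta < 4.41/1.1 ~ 4.01
print(run(A, 3.5) < 1e-10)          # True: noise-free convergence to xi*
print(run(A, 2.5) < 1e-10)          # False: unstable 01^1 sequence
print(run(A, 4.5) < 1e-10)          # False: overshoot deactivates control
```

In the $\beta = 4.5$ case the first perturbation kicks the state below $\xi^*$, after which $\Theta_n = 0$ on every iterate and the system departs exponentially, exactly as described for Fig. \[fig:x\_return\_a\_above\_1\](d).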
Thus, for $A > 1$ the best that the restricted control algorithm can offer is a temporary reversal of divergence from the fixed point. However, in section \[sec:auto\_adapt\] we will make a simple modification to the restricted control algorithm for $A>1$ that will keep the system in the vicinity of $\xi^*$ in the presence of noise.
Experimental observation of restricted control sequences {#sec:AVNexperiments}
========================================================
As described in Ref. [@hall:1997a], we have studied the control of a particular cardiac conduction interval, known as the atrioventricular (AV) nodal conduction time, in *in vitro* rabbit heart experiments. Because of the nonlinear excitation properties of AV-nodal tissue, the dynamics of AV-nodal conduction can bifurcate from a period-1 regime (where every impulse propagates through the AV-node at the same rate) to a period-2 regime (where propagation time alternates in a long, short, long, etc. pattern on a beat-to-beat basis) during rapid atrial excitation. It has been demonstrated [@sun:1995a; @amellal:1996a] that these dynamics can be described by a period-doubling bifurcation of a one-dimensional map of the form of Eq. \[eqn:map4\] where $X_n$ is the AV-nodal conduction time and $\lambda_n$ is the time between when the AV-node finishes conducting one impulse and when it starts conducting the next.
The goal in Ref. [@hall:1997a] was to eliminate the alternating rhythm by stabilizing the underlying period-1 fixed point $X^*$. This was achieved by delivering electrical stimuli to the atrial tissue in order to transiently shorten $\lambda_n$. Because there is no practical way to lengthen $\lambda_n$, the timing of the electrical stimuli was determined by the restricted controller of Eq. \[eqn:restrictdeltalambda\]. Although the *in vitro* rabbit cardiac system of Ref. [@hall:1997a] was not linear, application of the restricted control algorithm resulted in several of the control sequences predicted in the above linear system for $A < -1$. These control sequences were especially clear at the initiation of control when perturbations were largest.
For example, Fig. \[fig:hall\_97a\_fig\](a) shows the variable $X_n$ and the control parameter $\lambda_n$ during an unstable $01^1$ sequence for a feedback gain $\alpha = 3.3$ (corresponding to a negative $\beta$ because $\frac{\partial f}{\partial \lambda} < 0$ in the cardiac control experiments). The first controlled beat is indicated by the arrow and corresponds to a negative perturbation of $\lambda_0$ (all control perturbations are negative as imposed by the switching term $\Theta_n$). Because the system was nonlinear, oscillatory growth of $X_n$ was quenched and the original large amplitude alternation of $X_n$ was reduced in magnitude — but not eliminated.
When $\alpha$ was increased to $5.0$ (as shown in Fig. \[fig:hall\_97a\_fig\](b); corresponding to a later segment of the same control trial that is shown in Fig. \[fig:hall\_97a\_fig\](a)), the system shifted to a stable $001^1$ sequence that eliminated the alternation of $X_n$. After the fourth perturbation to $\lambda_n$ (beat 303), the system shifted to a stable $01^1$ sequence. This shift resulted from the close proximity of the $01^1$ and $001^1$ stable zones (Fig. \[fig:beta\_vs\_a\_below\_minus1\]); noise or drift in the system can cause such transitions.
Figure \[fig:hall\_97a\_fig\](c) shows a stable $001^2$ control sequence that eliminated the alternation of $X_n$ in a different rabbit heart using $\alpha = 2.5$. (Note that the $\alpha$ values from distinct trials are independent.) Similar to the sequence transitions in Fig. \[fig:hall\_97a\_fig\](b), the system switched to its adjacent stable $01^2$ control sequence shortly after the control was initiated, and later switched back to the stable $001^2$ control sequence.
Modifications to the restricted control algorithm
=================================================
Automatic adaptation of the feedback gain for $A < -1$ {#sec:auto_adapt}
------------------------------------------------------
Figure \[fig:hall\_97a\_fig\](a) (unsuccessful) and (b) (successful) demonstrate that successful control is dependent on the proper choice of $\alpha$. Such dependence is a critical limitation given that the information required to determine the correct value of $\alpha$ ($A$, $\xi^*$, and $\frac{\partial f}{\partial X}|_{\xi^*}$) cannot be easily determined prior to control. Furthermore, the nonstationarities typical of biological systems imply that the appropriate value of $\alpha$ may drift over time, thereby increasing the likelihood of control failure if $\alpha$ is fixed. To eliminate the limitations of a fixed $\alpha$ value chosen prior to control, we have developed a new technique that adaptively estimates $\alpha$ [@fn:previous_adaptive]. This adaptive approach is especially appropriate for applications (e.g., cardiac arrhythmia control) that cannot afford control failure of the type shown in the control attempt of Fig. \[fig:hall\_97a\_fig\](a).
This new technique exploits the structure of the stability zones to automatically adapt $\alpha$ such that $\xi^*$ is stabilized more robustly. Because multiple perturbations away from the fixed point are not desirable, the optimal stability zone is the $k = 1$ zone. Furthermore, because the stable $k = 1$ zone has the largest area, it will be the most robust to noise and drifting system parameters. To target this zone, $\alpha$ is adapted according to:
$$\begin{aligned}
\alpha_n & = &
\left\{
\begin{array}{ll}
\alpha_{n-1} + \delta \alpha & \;\; \hbox{if $\Theta_{n-4}$ ... $\Theta_{n-1}$ = 0101 or 1010},\\
\alpha_{n-1} - \delta \alpha & \;\; \hbox{otherwise},
\end{array}
\right.
\label{eqn:adapt_alpha}\end{aligned}$$
where $\delta \alpha$ is a small increment. When $\bigl(\frac{\partial f}{\partial \lambda}\bigr)|_{\xi^*}$ is negative (as in the cardiac experiments of Ref. [@hall:1997a]), $\alpha$ and $\delta \alpha$ are positive. Otherwise they are negative.
The algorithm of Eq. \[eqn:adapt\_alpha\] is motivated by examining the stability zones in Fig. \[fig:beta\_vs\_a\_below\_minus1\]. For $k=1$, optimal control occurs when $\beta$ is at the boundary between stable $01^1$ and stable $001^1$ (dotted curve of Fig. \[fig:beta\_vs\_a\_below\_minus1\]). If $\beta$ is too large, the control sequence will be $01^1$ (unstable if $\beta$ is so large that it is above the $k=1$ stability region or stable if $\beta$ is within the stability region but above the optimal control boundary). In such a case, the adaptation of Eq. \[eqn:adapt\_alpha\] will decrease $\beta$. In contrast, if $\beta$ is too small, the control sequence will be $001^1$ or some sequence with $k > 1$ (unstable if $\beta$ is so small that it is below the $k=1$ stability region or stable if $\beta$ is within the stability region but below the optimal control boundary). In such a case, the adaptation of Eq. \[eqn:adapt\_alpha\] will increase $\beta$. Thus, the adaptation will adjust the system so that it oscillates between the stable $01^1$ and stable $001^1$ sequences, provided that the increment $\delta \alpha$ is small enough so that the stepsize of $\beta$ is sufficiently less than the height of the $k=1$ stability zone. Specifically, the condition $|\delta \alpha| < |A \bigl(\frac{\partial f}{\partial \lambda}\bigr)|_{\xi^*}|^{-1}$ ensures that the stepsize is less than half the height of the zone.
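The gain-update rule of Eq. \[eqn:adapt\_alpha\] is simple enough to state in code. The following is a minimal sketch (ours, not the authors' implementation; the function name and the list representation of the $\Theta$ history are our choices):

```python
def adapt_gain(alpha_prev, theta_history, d_alpha):
    """One adaptation step of Eq. (adapt_alpha).

    theta_history holds the last four switching-term values
    (Theta_{n-4}, ..., Theta_{n-1}).  An alternating pattern means the
    system is being perturbed every other iterate, the signature of the
    stable 01^1 sequence, so the gain is stepped one way; any other
    pattern steps it back the other way.
    """
    if tuple(theta_history) in ((0, 1, 0, 1), (1, 0, 1, 0)):
        return alpha_prev + d_alpha
    return alpha_prev - d_alpha
```

As stated above, when $\bigl(\frac{\partial f}{\partial \lambda}\bigr)|_{\xi^*}$ is negative one takes $\alpha$ and $\delta \alpha$ positive, and negative otherwise.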
To illustrate the adaptive algorithm, we implemented the restricted controller of Eq. \[eqn:restrictdeltalambda\] with the feedback gain $\alpha$ replaced by $\alpha_n$ given by Eq. \[eqn:adapt\_alpha\]. $\alpha_0$ was randomly chosen between $-5$ and $-10$ and $\delta \alpha = - 0.1$. We applied this controller to the quadratic map: $$X_{n+1} = \lambda_n X_n (1 - X_n) + \zeta_n,
\label{eqn:quadmap}$$ where $\zeta_n$ is a normally-distributed random variable with a mean of zero and a variance of 0.001. The goal was to stabilize the fixed point $X^* = (\lambda_0 -1)/\lambda_0$. Note that for the quadratic map, $\bigl(\frac{\partial f}{\partial \lambda}\bigr)|_{\xi^*} =
\frac{\lambda - 1}{\lambda^2}$ is positive. Thus, the sign of the perturbations is opposite to that of the cardiac experiments of Ref. [@hall:1997a].
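The quantities quoted above can be checked directly from the map. A short sketch (ours; the noise term is omitted):

```python
def quad_map(x, lam):
    """Quadratic (logistic) map of Eq. (quadmap) with the noise term omitted."""
    return lam * x * (1.0 - x)

lam = 3.3
x_star = (lam - 1.0) / lam        # fixed point X* = (lambda_0 - 1)/lambda_0
A = lam * (1.0 - 2.0 * x_star)    # slope at X*; simplifies to 2 - lambda
dfdlam = (lam - 1.0) / lam**2     # (df/dlambda) at X*, positive as stated
```

For $\lambda_0 = 3.3$ this gives $A = -1.3 < -1$, consistent with an uncontrolled period-2 orbit.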
Figure \[fig:quadcon\] shows the results of an adaptive control trial of the quadratic map. For $1\leq n \leq 500$, $\lambda_0 =
3.30$, corresponding to an uncontrolled stable period-2 orbit. Control was initiated at iterate $n=125$ with an arbitrary initial value of $\alpha_n=-5.25$. The adaptive algorithm rapidly stabilized $\xi^*$ as the adaptations of $\alpha_n$ kept the system in the $k=1$ stability zone. The control was deactivated at $n = 375$ and the period-2 cycle returned.
At $n = 500$, $\lambda_0$ was switched to $3.52$, which corresponds to a period-4 rhythm. Control was reactivated at $n = 625$ with an arbitrary initial value of $\alpha_n = -8.85$. The adaptive control stabilized $\xi^*$ until the control was turned off at iterate 875. At $n = 1000$, $\lambda_0$ was switched to 3.65, which is in the chaotic regime. Control was reactivated at $n = 1125$ with an arbitrary initial value of $\alpha_n = -5.63$. $\xi^*$ was stabilized after approximately 160 iterates. When control was deactivated at $n =
1375$ the chaos resumed. The oscillations in $\alpha_n$, as governed by Eq. \[eqn:adapt\_alpha\], are apparent in each control phase of the trial.
To demonstrate the ability of the adaptive algorithm to track a drifting fixed point, we applied the control algorithm to a quadratic map with $\lambda_0=3.0$ and $\lambda_n$ increased by an increment of 0.001 every iterate ($\lambda_{n+1}=\lambda_n+0.001$). As seen in Fig. \[fig:quadrift\], the small increments to $\lambda_0$ introduced a slow drift in the system dynamics and fixed point. Control was initiated at iterate 250 while the system was in its period-2 regime. Control was maintained for 500 iterates. During this period, $X^*$ drifted from $X^*=0.692$ to $X^*=0.736$. Nevertheless, the control algorithm had no trouble tracking $X^*$. In fact, it can be seen that when control was deactivated at $n=750$, the system had passed into the chaotic regime, an occurrence that did not disrupt control. However, if the feedback gain was held fixed rather than adapted, then control could not have been maintained for the entire control period (not shown).
Control of fixed points for $A > 1$
-----------------------------------
In section \[sec:control\_a\_above\_1\], we demonstrated that the restricted control algorithm can induce a transient approach towards $\xi^*$ when $A > 1$. However, as mentioned, if the system is kicked to the other side of $\xi^*$, control is deactivated and the system diverges from $\xi^*$. If the algorithm could detect such an occurrence and reverse the sign of the perturbations, then $\xi^*$ could be approached from the opposite side of $\xi^*$ [@fn:unidirec_violation]. This idea motivates the following modification of the restricted control algorithm: $$\begin{aligned}
\delta \lambda_n = \widehat{\Theta}_n \frac{\alpha_n}{2} (X_{n-1} - X_n),
\label{eqn:poscon}\end{aligned}$$ where
$$\begin{aligned}
\widehat{\Theta}_n & = &
\left\{
\begin{array}{ll}
1 & \;\; \hbox{if $\Phi_n (X_n - Y_n) > 0$},\\
0 & \;\; \hbox{otherwise},
\end{array}
\right.
\label{eqn:Thetahat}\end{aligned}$$
and $$\begin{aligned}
\Phi_n & = &
\left\{
\begin{array}{ll}
-1 & \;\; \hbox{if $\Theta_{n-4}$ ... $\Theta_{n-1}$ = 1000},\\
1 & \;\; \hbox{otherwise}.
\end{array}
\right.
\label{eqn:Phi}\end{aligned}$$
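The switching logic of Eqs. \[eqn:Thetahat\] and \[eqn:Phi\] can be sketched as follows (our illustration; here `y_n` stands for the controller's reference value $Y_n$ of Eq. \[eqn:Thetahat\], whose definition is given earlier in the paper):

```python
def phi_factor(theta_history):
    """Sign factor Phi_n of Eq. (Phi).

    The pattern 1000 (a single perturbation followed by three
    uncontrolled iterates) signals that the system was kicked to the
    far side of the fixed point, so the perturbation direction is
    reversed.
    """
    return -1 if tuple(theta_history) == (1, 0, 0, 0) else 1

def theta_hat(phi_n, x_n, y_n):
    """Switching term Theta-hat_n of Eq. (Thetahat)."""
    return 1 if phi_n * (x_n - y_n) > 0 else 0
```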
For stable control, a fixed value of the feedback gain ($\alpha_n =
\alpha_0$) is chosen so that $1 + A < \beta < A^2/(A-1)$. In order to adaptively control fixed points with $A > 1$, the controller in Eq. \[eqn:poscon\] can be used with a modified adaptive feedback gain algorithm: $$\begin{aligned}
\alpha_n & = &
\left\{
\begin{array}{ll}
\alpha_{n-1} + \delta \alpha & \;\; \hbox{if $\widehat{\Theta}_{n-4}$ ... $\widehat{\Theta}_{n-1}$ = 0101 or 1010},\\
\alpha_{n-1} - \delta \alpha & \;\; \hbox{otherwise}.
\end{array}
\right.
\label{eqn:posadapt_alpha}\end{aligned}$$ Such a combination is feasible because the control-sequence boundaries for $A > 1$ dictate that Eq. \[eqn:posadapt\_alpha\] will direct the system towards the converging 0101... sequence (see Fig. \[fig:beta\_vs\_a\_above\_1\]). However, as in the case when $A<-1$, the increment in the feedback gain should be chosen such that $\alpha_n$ remains in the stable 0101... zone. Choosing $|\delta
\alpha| < |(A-1)\bigl(\frac{\partial f}{\partial
\lambda}\bigr)|_{\xi^*}|^{-1}$ ensures that the increment is less than half of the height of the zone.
In order to illustrate the control of an unstable fixed point with $A > 1$, we applied the modified control algorithm to the cubic map: $$X_{n+1} = -4 (m + 1)X_n^3 + 6 (m + 1)X_n^2 - (2m + 3)X_n + \lambda_n + \zeta_n,
\label{eqn:cubemap}$$ where $\lambda_n$ is perturbed according to Eq. \[eqn:poscon\] with $\lambda_0 = 1$, $m$ is the slope of the map at the fixed point $X^* = 0.5$, and $\zeta_n$ is a normally-distributed random variable with a mean of zero and a variance of 0.001.
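The stated properties of the cubic map, namely that $X^* = 0.5$ is a fixed point (for $\lambda_0 = 1$) and that $m$ is the slope there, are easy to verify (sketch, ours; the noise term is omitted):

```python
def cubic_map(x, m, lam=1.0):
    """Cubic map of Eq. (cubemap) with the noise term omitted."""
    return (-4.0 * (m + 1.0) * x**3 + 6.0 * (m + 1.0) * x**2
            - (2.0 * m + 3.0) * x + lam)

def cubic_slope(x, m):
    """Derivative of the cubic map with respect to x."""
    return -12.0 * (m + 1.0) * x**2 + 12.0 * (m + 1.0) * x - (2.0 * m + 3.0)
```

In particular, the values $m = 2.2$ and $m = 2.7$ used below both give $A = m > 1$, the regime the modified algorithm is designed for.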
Figure \[fig:cubecon\] illustrates control of the cubic map without adaptation of the feedback gain ($\alpha_n$ is fixed at $\alpha_0 = 8.0$). For $1\leq n \leq 500$, $m = 2.2$, corresponding to an uncontrolled stable period-2 orbit. After control was initiated at $n = 125$, $X^*$ was stabilized after about 80 iterates. Stabilization of $X^*$ was maintained until control was deactivated at $n = 375$, after which the system returned to the period-2 orbit. At $n = 500$, $m$ was set to $2.7$, which moved the system into the chaotic regime. Control was initiated at $n = 625$. After about 10 iterates, $X^*$ was controlled for about 20 iterates. The system subsequently escaped control for about 30 iterates before control was recaptured and maintained until the algorithm was deactivated at $n = 875$.
The adaptive feedback gain algorithm is illustrated in Fig. \[fig:cubedrift\], which shows control of a drifting cubic map with $m_{n+1} = m_n + 0.001$ and $m_0 = 2.0$. Control was activated for $250 < n < 750$. The fixed point was controlled successfully during this period. The fixed point location does not change for the drifting cubic map, but the system drifts into the chaotic regime by the end of the control period.
Conclusion
==========
Surprisingly, the typical biological restriction of unidirectional control perturbations enhances the controllability of fixed points with $A < -1$ in systems described by one-dimensional maps. Because biological systems typically drift over time, dynamic control algorithms must adapt to system nonstationarities. Although the restricted delayed feedback control technique allows for moderate tracking of the fixed point as long as the system remains within a stability zone, it is ill-suited for systems with significant drift. For such systems, automatic adaptation of the feedback gain parameter ensures that the drifting system is directed to, and remains within, the largest stability zone. Thus, with the dual benefits of the increased stability of unidirectional restricted control and the adaptability of on-the-fly gain estimation, such control techniques could be of significant value to the control of biological systems. Indeed, a recent set of clinical experiments [@christini:2000c] have shown that adaptive restricted control of this type can successfully eliminate the same alternating AV-nodal conduction rhythm that was controlled in the rabbit experiments of Ref. [@hall:1997a]. Furthermore, we have shown that simple modifications of the restricted control algorithm can control fixed points with $A > 1$ — an impossible task for the unrestricted feedback controller. Thus, this algorithm may also have applicability in physical systems that allow bidirectional perturbations.
ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============
This work was supported, in part, by a grant from the American Heart Association (0030028N) \[DJC\].
[10]{}
W. L. Ditto, S. N. Rauseo, and M. L. Spano, Physical Review Letters [**65**]{}, 3211 (1990).
E. R. Hunt, Physical Review Letters [**67**]{}, 1953 (1991).
B. Peng, V. Petrov, and K. Showalter, Journal of Physical Chemistry [**95**]{}, 4957 (1991).
V. Petrov, B. Peng, and K. Showalter, Journal of Chemical Physics [**96**]{}, 7506 (1992).
R. Roy, T. W. Murphy, Jr., T. D. Maier, Z. Gills, and E. R. Hunt, Physical Review Letters [**68**]{}, 1259 (1992).
T. L. Carroll, I. Triandaf, I. Schwartz, and L. Pecora, Physical Review A [ **46**]{}, 6189 (1992).
Z. Gills, C. Iwata, R. Roy, I. B. Schwartz, and I. Triandaf, Physical Review Letters [**69**]{}, 3169 (1992).
I. B. Schwartz and I. Triandaf, Physical Review E [**48**]{}, 718 (1993).
S. Bielawski, D. Derozier, and P. Glorieux, Physical Review A [**47**]{}, R2492 (1993).
D. J. Gauthier, D. W. Sukow, H. M. Concannon, and J. E. S. Socolar, Physical Review E [**50**]{}, 2343 (1994).
J. E. S. Socolar, D. W. Sukow, and D. J. Gauthier, Physical Review E [**50**]{}, 3245 (1994).
V. Petrov, M. J. Crowley, and K. Showalter, Physical Review Letters [**72**]{}, 2955 (1994).
N. F. Rulkov, L. S. Tsimring, and H. D. I. Abarbanel, Physical Review E [ **50**]{}, 314 (1994).
P. Colet, R. Roy, and K. Wiesenfeld, Physical Review E [**50**]{}, 3453 (1994).
B. H[ü]{}binger, R. Doerner, W. Martienssen, W. Herdering, R. Pitka, and U. Dressler, Physical Review E [**50**]{}, 932 (1994).
D. J. Christini, J. J. Collins, and P. S. Linsay, Physical Review E [**54**]{}, 4824 (1996).
A. Garfinkel, M. L. Spano, W. L. Ditto, and J. N. Weiss, Science [**257**]{}, 1230 (1992).
S. J. Schiff, K. Jerger, D. H. Duong, T. Chang, M. L. Spano, and W. L. Ditto, Nature [**370**]{}, 615 (1994).
J. N. Weiss, A. Garfinkel, M. L. Spano, and W. L. Ditto, Journal of Clinical Investigation [**93**]{}, 1355 (1994).
L. Glass, Physics Today [**49**]{}, 40 (1996).
K. Hall, D. J. Christini, M. Tremblay, J. J. Collins, L. Glass, and J. Billette, Physical Review Letters [**78**]{}, 4518 (1997).
D. J. Christini, K. M. Stein, S. M. Markowitz, S. Mittal, D. J. Slotwiner, and B. B. Lerman, Heart Disease [**1**]{}, 190 (1999).
L. Glass and W. Zeng, International Journal of Bifurcation and Chaos [**4**]{}, 1061 (1994).
D. J. Christini and J. J. Collins, Physical Review E [**53**]{}, R49 (1996).
M. Watanabe and R. F. Gilmour, Jr., Journal of Mathematical Biology [**35**]{}, 73 (1996).
M. E. Brandt and G. Chen, International Journal of Bifurcation and Chaos [ **6**]{}, 715 (1996).
D. J. Christini and J. J. Collins, [CHAOS]{} [**7**]{}, 544 (1997).
M. E. Brandt, H.-T. Shih, and G. Chen, Physical Review E [**56**]{}, R1334 (1997).
T. Sauer, Fields Institute Communication [**11**]{}, 63 (1997).
D. J. Christini and D. T. Kaplan, Physical Review E [**61**]{}, 5149 (2000).
Independently discovered by Gauthier and Socolar using different techniques: D. J. Gauthier and J. E. S. Socolar, Phys. Rev. Lett. [**79**]{}, 4938 (1997).
K. Pyragas, Physics Letters A [**170**]{}, 421 (1992).
T. Ushio, [IEEE]{} Transactions on Circuits and Systems–I [**43**]{}, 815 (1996).
Bielawski *et al.* used an imposed $01^1$ sequence in order to control unstable orbits of laser dynamics. Such a control scheme has the same $01^1$ stability boundaries as our control algorithm with the exception that for $A > 1$ their imposed $01^1$ control sequence is stable, rather than semi-stable as with our restricted control. The advantage of the approach proposed here is that the control sequence information can be used to automatically adapt the gain parameter.
J. Sun, F. Amellal, L. Glass, and J. Billette, Journal of Theoretical Biology [**173**]{}, 79 (1995).
F. Amellal, K. Hall, L. Glass, and J. Billette, Journal of Cardiovascular Electrophysiology [**7**]{}, 943 (1996).
Related techniques have been presented previously: G. W. Flake, G.-Z. Sun, Y.-C. Lee, and H.-H. Chen, in [*Advances in Neural Information Processing*]{}, edited by J. D. Cowan, G. Tesauro, and J. Alspector (Morgan Publishers, San Mateo, CA), 647 (1994); D. J. Christini and J. J. Collins, IEEE Transactions on Circuits and Systems, I [**44**]{}, 1027 (1997).
Note that this modification of the algorithm violates the biologically motivated restriction of unidirectional perturbations. Nevertheless, not all systems are subject to such restrictions.
D. J. Christini, K. M. Stein, S. M. Markowitz, S. Mittal, D. J. Slotwiner, M. A. Scheiner, S. Iwai, and B. B. Lerman, preprint (2000).
[^1]: email: hall@entelos.com
[^2]: email: dchristi@med.cornell.edu
---
abstract: 'Quasitoric manifolds are manifolds that admit a torus action which is locally modeled on the standard action of $T^n$ on ${{\mathbb{C}}}^n$. It is known that the quotients of such actions are nice manifolds with corners. We prove that such manifolds are equivariantly rigid, i.e., any manifold that is $T^n$-homotopy equivalent to a quasitoric manifold is $T^n$-homeomorphic to it.'
address: 'Department of Mathematics, University of the Aegean, Karlovassi, Samos, 83200 Greece'
author:
- Vassilis Metaftsis
- 'Stratos Prassidis$^*$'
title: Topological Rigidity of Quasitoric Manifolds
---
[^1]
Introduction
============
Toric varieties are studied extensively in algebraic geometry and combinatorics ([@fu], [@oda]). The main tool in their study is the simplicial complex that is determined by the fan of the toric variety. This simplicial complex is actually the quotient of the toric variety by the torus action. The combinatorial properties of the simplicial complex reflect the algebraic and geometric properties of the variety and vice versa. A topological analogue of toric varieties, called quasitoric manifolds, was introduced by Davis–Januszkiewicz ([@da-ja]). Quasitoric manifolds are manifolds that admit an action of the torus $T^n$ which is [*locally standard*]{} such that the quotient space is a simple polytope. Locally standard actions are those where, locally, $T^n$ acts by the standard coordinatewise multiplication on ${{\mathbb{C}}}^n$. As in the toric variety case, the combinatorial properties of the polytope provide information about the topological structure of the manifold. Furthermore, the manifold can be reconstructed from the polytope and an appropriate assignment of subgroups of $T^n$ to the faces of the polytope.
In this paper, we consider a further generalization: manifolds with locally standard $T^n$-actions, which we call pseudotoric manifolds. In this case, the quotient space is a nice manifold with corners. As before, we show that the combinatorial properties of the manifold with corners are reflected in the topology of the pseudotoric manifold. Also, the pseudotoric manifold can be reconstructed from an appropriate assignment of subgroups of $T^n$ to its faces. The main theorem of the paper is the following.
Let $M^{2n}$ be a quasitoric manifold and $N^{2n}$ a locally linear $T^n$-manifold. Let $f: N^{2n} \to M^{2n}$ be an equivariant homotopy equivalence. Then $f$ is equivariantly homotopic to an equivariant homeomorphism.
The idea of the proof is the same as the one used in the Coxeter group case ([@mo-pr], [@pr-sp], [@ro]). After all, the reconstruction of the quasitoric and pseudotoric manifolds from their quotient spaces is similar to the construction of the Coxeter complex of a Coxeter group, a similarity that was made precise in [@da-ja]. First it is proved that $N^{2n}$ is a quasitoric manifold. Let $X = M^{2n}/T^n$ and $Y = N^{2n}/T^n$. Then $X$ is a simple polytope and $f$ induces a map ${\phi}: Y \to X$ that is a face-preserving homotopy equivalence. As in the references for the Coxeter group case, we show inductively that there is a face-preserving homotopy from $\phi$ to a face-preserving homeomorphism $\chi$. The homeomorphism $\chi$ lifts to a $T^n$-homeomorphism between $N^{2n}$ and $M^{2n}$ that is homotopic to $f$.
The main theorem, loosely, can be considered as a version of an equivariant or stratified Borel Conjecture. Let ${\pi}: M \to X$ be the orbit map. Over the interior $\overset{\circ}{\sigma}$ of each face of $X$, the map $\pi$ is a fiber bundle with fiber $T/T_{\sigma}$, where $T_{\sigma}$ is the isotropy group of $\sigma$. So, $M^{2n}$ admits a stratification by open aspherical manifolds.
For non-singular toric varieties better rigidity theorems are known. Let $M$ and $N$ be two toric manifolds. In [@ma] and [@ma-suh] it was shown that if $H^*(ET{\times}_TM)$ and $H^*(ET{\times}_TN)$ are isomorphic as $H^*(BT)$-algebras, then $M$ and $N$ are algebraically isomorphic. Actually a slightly stronger result was proved in the above references.
In [@yo1], a generalization of locally standard actions is given, called local torus actions. Our methods do not directly generalize to this case. In [@yo], the generalization of the quotient map ${\pi}: M^{2n} \to X$ is given; it is called a locally standard torus fibration. Again, our methods cannot be applied directly to the stratified rigidity problem for such $M^{2n}$.
Preliminaries and Notation {#sec-not}
==========================
We consider $S^1$ as the standard subgroup of ${{\mathbb{C}}}^*$, the multiplicative group of non-zero complex numbers. Furthermore $T^n < ({{\mathbb{C}}}^*)^n$. We refer to the standard representation of $T^n$ by diagonal matrices in $U(n)$ as the standard action of $T^n$ on ${{\mathbb{C}}}^n$. The orbit space of the action is the positive cone: $${{\mathbb{R}}}_+^n = \{(x_1, x_2, \dots x_n):\; x_i \ge 0\}.$$ Most of the $T^n$-actions considered in this paper satisfy the following property.
Let $M^{2n}$ be a $2n$-dimensional manifold with an action of $T^n$. The action is called [*locally standard*]{} if for every $x\in M^{2n}$ there is a $T^n$ invariant neighbourhood $U$ of $x$ and a homeomorphism $f: U\rightarrow W$ where $W$ is an open set in ${{\mathbb{C}}}^n$ invariant under the standard action of $T^n$, and an automorphism $\phi: T^n\rightarrow T^n$ such that $f(ty)=\phi(t)f(y)$ for all $y\in U$.
An action $G\times X\rightarrow X$ is effective if there is no non-trivial element of $G$ that stabilizes $X$ pointwise. In other words the intersection of all isotropy subgroups is trivial.
1. If the action of $T^n$ is effective and it does not have any finite isotropy groups, then the action is locally standard by the slice theorem ([@yo]).
2. If $M^{2n}$ is smooth and $H^{\text{odd}}(M) = 0$, then the action is locally standard ([@ma-pa]).
The next definition formalizes the local properties of the quotient space of a locally standard $T^n$-action.
A space $X$ is an $n$-manifold with corners if it is a Hausdorff space equipped with an atlas of open sets each one homeomorphic to an open subset of ${{\mathbb{R}}}_+^n$ such that the overlap maps are local homeomorphisms that preserve the natural stratification of ${{\mathbb{R}}}^n_+$.
The quotient of a locally standard action is a [*manifold with corners*]{} ([@davis], [@ma-pa]).
For any $n$-manifold with corners $X$ we have the following.
1. For each $x\in X$ and a chart $\sigma$, define $c(x)$ to be the number of coordinates of ${\sigma}(x)$ that are $0$. The number $c(x)$ is independent of the choice of the chart and so $c$ defines a map $c:X\rightarrow {{\mathbb{N}}}$. For $0 \le k \le n$, a connected component of $c^{-1}(k)$ is a stratum of codimension $k$. The closure of a stratum is called a closed stratum.
2. Let $x\in X$. Define $$Y(x) = \{ C:\; C \; \text{closed codimension-one stratum that contains} \; x\}.$$ The manifold with corners $X$ is called [*nice*]{} if $|Y(x)| = 2$, whenever $c(x) = 2$.
3. The slice theorem implies that the quotient space of a locally standard $T^n$-action is a nice manifold with corners ([@ma-pa]).
4. A [*facet*]{} in an $n$-manifold with corners is the closure of a connected component of the codimension $1$ stratum. A non-empty intersection of $k$ facets is called a codimension-$k$ [*preface*]{} ($k = 1, \ldots, n$). In general, prefaces of codimension $> 1$ may be disconnected. A connected component of a preface is called a [*face*]{}. The manifold $X$ itself is considered to be a codimension-$0$ face. The $k$-[*skeleton*]{} of a manifold with corners $X$ is the set of all faces of codimension greater than or equal to $k$ and it is denoted by $X^{(k)}$.
5. An $n$-manifold with corners is called [*simple*]{} if the codimension $n$ faces are contained in exactly $n$ facets. This definition is in analogy with the definition of simple polyhedra.
For the remainder of the paper we assume that $M^{2n}$ is a $2n$-dimensional manifold with a locally standard action of $T^n$.
The following definition is a special case of torus actions. It generalizes the definition of quasitoric manifolds given in ([@bp], [@da-ja]).
\[pseudotoric\] Given a nice manifold with corners $X^n$, a manifold $M^{2n}$ with a locally standard $T^n$-action is called a [*pseudotoric manifold over $X^n$*]{} if there is a projection ${\pi}: M^{2n} \to X^n$ whose fibers are the orbits of the action.
1. Quasitoric manifolds are pseudotoric manifolds whose quotient space is not just a manifold with corners but a simple polytope.
2. Remember that a [*torus manifold*]{} $M^{2n}$ is a smooth $T^n$-manifold with an effective $T^n$-action and $M^{T^n}\not= \emptyset$ ([@ma-pa]).
3. Definition \[pseudotoric\] implies that, under the projection $\pi$, points with the same isotropy groups are mapped to the relative interior of a face of $X^n$. Thus the action of $T^n$ is free over the open stratum of $X^n$, and the vertices of $X^n$, i.e. the $0$-dimensional faces, correspond to the fixed points of the action.
Let ${\pi}: M^{2n} \to X^n$ be the projection defined above. A codimension-$1$ connected component of a fixed point set of a circle in $T^n$ is called a [*characteristic submanifold*]{} of $M$. The images of the characteristic submanifolds are the facets of $X$.
A manifold with corners $X$ is called [*face acyclic*]{} if all the faces, including $X$ itself, are acyclic, that is, its reduced integral homology $\widetilde{H}_*X$ is trivial. We call $X$ a [*homology polytope*]{} if all its prefaces are acyclic (in particular they are connected). Thus $X$ is a homology polytope if and only if it is face acyclic and every non-empty intersection of characteristic submanifolds is connected.
A simple convex polytope is an example of a manifold with corners that is a homology polytope. A quasitoric manifold can be defined as a locally standard manifold whose orbit space is a simple convex polytope with the standard face structure.
The canonical model
===================
We will show how to reconstruct the pseudotoric manifold from a manifold with corners $X$ and some linear data on the set of facets of $X$. We use the construction in [@ma-pa] that generalizes the construction of quasitoric manifolds in [@bp] and [@da-ja]. We write $T = T^n$.
First, we will see some of the properties of the characteristic submanifolds of a pseudotoric manifold. Let $M_i = {\pi}^{-1}(X_i)$ be the characteristic submanifolds, where $X_i$ are the facets of $X$ ($i = 1, \ldots, k$). Let $${\Lambda}: \{ X_1, \dots , X_k\} \to \text{Hom}(S^1, T) \cong {{\mathbb{Z}}}^n$$ such that ${\Lambda}(X_i)$ is a primitive vector that determines the circle subgroup of $T$ that fixes $M_i$. The main property of these data is that if $X_{i_1}{\cap}\dots {\cap} X_{i_m} \not= \emptyset $ then ${\Lambda}(X_{i_1}), \dots , {\Lambda}(X_{i_m})$ is part of a ${\mathbb{Z}}$-basis of the integral lattice $ \text{Hom}(S^1, T)$.
Now, we give the inverse of the construction. We start with a simple manifold with corners $X$ and a map $\Lambda$ that satisfies the above condition about the non-empty intersections of the facets. Given a point $x \in X$, the smallest face which contains $x$ is the intersection $X_{i_1}{\cap}\dots {\cap} X_{i_m}$ of all facets with $x\in X_{i_j}$. We define $T(x)$ to be the subtorus of $T$ generated by the circle subgroups corresponding to ${\Lambda}(X_{i_1})$, …, ${\Lambda}(X_{i_m})$. We define: $$M_X({\Lambda}) = T{\times}X/{\sim}, \;\; (t, x) \sim (t', x') \Longleftrightarrow x = x', \; \text{and}\; t^{-1}t'\in T(x).$$ The space $M_X({\Lambda})$ is a closed manifold and the torus $T$ acts on it by acting on the first coordinate. This follows as in the case of quasitoric manifolds ([@da-ja], Proposition 1.8, [@ma-pa], Proposition 4.5, and [@yo1] Lemma 5.2 and Theorem 5.5).
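As a concrete illustration (a well-known example from the quasitoric literature, not spelled out in this paper): take $X = {\Delta}^2$, the $2$-simplex, with facets $X_1, X_2, X_3$ and $${\Lambda}(X_1) = (1,0), \quad {\Lambda}(X_2) = (0,1), \quad {\Lambda}(X_3) = (1,1).$$ Any two of these vectors form a ${\mathbb{Z}}$-basis of $\text{Hom}(S^1, T^2) \cong {{\mathbb{Z}}}^2$, so the condition on intersections of facets holds, and $M_X({\Lambda})$ is equivariantly homeomorphic to ${{\mathbb{C}}}P^2$ with the standard $T^2$-action.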
\[lem-homeomorphism\] Let $M$ be a pseudotoric manifold with orbit space $X$, and the map $\Lambda$ defined as above. If $X$ is contractible then there is a $T$-equivariant homeomorphism $M_X({\Lambda}) \to M$ covering the identity on $X$.
The idea is to construct a continuous map $f: T{\times}X \to M$ so that $f(T{\times} \{ x\})= {\pi}^{-1}(x)$. This is done by successively “blowing up” the singular strata. The contractibility assumption guarantees that the resulting principal $T$-bundle over $X$ is trivial, i.e. there is an equivariant homeomorphism $\widehat{f}: T\times X \rightarrow \widehat{M}$ inducing the identity on $X$. Then the map $f$ descends to the required equivariant homeomorphism. The details are in [@da-ja] (also in [@ma-pa] and [@yo1]).
\[rem-homeomorphism\] In [@da-ja] and [@ma-pa] the result is proved under the condition that $M$ is a smooth manifold. In [@yo1] it is proved for topological manifolds that admit a local torus action, generalizing the concept of locally standard torus manifolds.
We call $(X,\Lambda)$ a [*characteristic pair*]{}. Hence, under the hypothesis that $X$ is contractible, a characteristic pair $(X,\Lambda)$ completely determines $M_X(\Lambda)$.
Now we investigate the naturality properties of the construction. Let ${\phi}: X \to Y$ be a map between manifolds with corners. It is called [*skeletal*]{} if it preserves skeleta, i.e. ${\phi}(X^{(k)}) \subset Y^{(k)}$. Similarly, a homotopy is called skeletal if the maps at each level are skeletal.
\[prop-natural\] Let $(X^n, {\Lambda})$ and $(Y^n, {\Lambda}')$ be two characteristic pairs and ${\sigma}: T \to T$ a continuous automorphism. Let ${\phi}: X \to Y$ be a skeletal map that satisfies ${\sigma}({\Lambda}(X_i)) < {\Lambda}'({\phi}(X_i))$ for each facet $X_i$ of $X$. Then ${\phi}$ induces a $\sigma$-equivariant map ${\phi}_*: M_{X}({{\Lambda}}) \to M_{Y}({{\Lambda}}')$.
Define the map ${\phi}_*$ in the obvious way: $${\phi}_*: M_{X}({{\Lambda}}) \to M_{Y}({{\Lambda}}'), \; {\phi}_*(t,x) = ({\sigma}(t), {\phi}(x)).$$ We need to show that the map is well-defined. Let $(t, x) = (t', x)$ in $M_X({{\Lambda}})$. Then $t^{-1}t' \in T(x)$, where $x$ belongs to the relative interior of the face determined by the intersection of the facets $X_{i_1}$, …, $X_{i_m}$, and $T(x)$ is the subgroup generated by ${{\Lambda}}(X_{i_1})$, …, ${{\Lambda}}(X_{i_m})$. Then $${\phi}(x)\in {\phi}(X_{i_1}) {\cap} \dots {\cap} {\phi}(X_{i_m}).$$ It should be noticed that the faces ${\phi}(X_{i_r})$ are not necessarily facets. Therefore there are facets $Y_{j_1}$, …, $Y_{j_s}$ of $Y$ such that ${\phi}(x)$ belongs to the relative interior of their intersection. Since the map $\phi$ is skeletal, $$Y_{j_1}{\cap} \dots {\cap}Y_{j_s} = {\phi}(X_{i_1}) {\cap} \dots {\cap} {\phi}(X_{i_m}).$$ By hypothesis, ${\sigma}({{\Lambda}}(X_{i_r})) < {{\Lambda}}'({\phi}(X_{i_r}))$, so the subgroup $\langle \sigma({\Lambda}(X_{i_r})),\ r=1,\ldots,m\rangle$ is contained in the subgroup generated by ${{\Lambda}}'({\phi}(X_{i_r}))$, $r = 1, \ldots, m$, which in turn is contained in the subgroup generated by ${{\Lambda}}'(Y_{j_p})$, $p = 1, \dots , s$. But $\langle {\Lambda}'(Y_{j_p}),\ p=1,\ldots,s\rangle$ is exactly the subgroup $T({\phi}(x))$. Thus ${\sigma}(t^{-1}t')$ belongs to $T({\phi}(x))$, which implies $({\sigma}(t), {\phi}(x)) = ({\sigma}(t'), {\phi}(x))$ in $M_Y({{\Lambda}}')$.
By construction, the map is obviously $\sigma$-equivariant.
\[cor-homotopy\] Let ${\phi}_s: X \to Y$, $s\in [0, 1]$, be a skeletal homotopy such that ${\sigma}({{\Lambda}}(F)) < {{\Lambda}}'({\phi}_s(F))$ for each $s$ and each facet $F$ of $X$. Then ${\phi}_{0,*} \simeq_{\sigma} {\phi}_{1,*}$.
We now investigate the reverse construction.
\[prop-reverse\] Let $f: M_X({{\Lambda}}) \to M_Y({{\Lambda}}')$ be a $\sigma$-equivariant map, with $\sigma$ as before. Let ${\phi}: X \to Y$ be the map induced on the quotients. Then
1. The map $\phi$ is skeletal.
2. ${\sigma}({{\Lambda}}(F)) < {{\Lambda}}'({\phi}(F))$, for each facet $F$ of $X$.
3. There is a $\sigma$-equivariant homotopy $f \simeq_{\sigma} {\phi}_*$.
The equivariance implies that $\phi$ is skeletal. Let $x\in X$ and let $g$ be an element of the circle subgroup determined by ${\Lambda}(X_i)$ for some facet $X_i$. Then one can easily show that $f(g\cdot p)=\sigma(g)\cdot f(p)$ for any point $p$ in the fiber over $x$, and so $\sigma({\Lambda}(X_i))\le {\Lambda}'(\phi(X_i))$, since $\phi$ is a skeletal map. Now, for each face $F$ of $X$ we denote by $T_F$ the isotropy subgroup of $F$ under the action of $T$. Let $C_F = \{t_{j, i_F}:\; i_F \in I_F\}$ be a complete set of coset representatives of $T_F$ in $T$, chosen so that if $F'\subseteq F$ then $T_F\le T_{F'}$ and $C_{F'}\subseteq C_F$. We also require that $1 \in C_F$ for each $F$. Now let $f(1, x) = (t_x, {\phi}(x))$, where $t_x$ lies in $C_F$ and $x$ belongs to the relative interior of $F$. Since $t_x \in T$, $$t_x = ({\exp}(2{\pi}{\lambda}_{1,x}i), \dots , {\exp}(2{\pi}{\lambda}_{n,x}i)), \;\; ({\lambda}_{1,x}, \dots , {\lambda}_{n,x})\in {{\mathbb{R}}}^n.$$ Define the line-segment path from $0$ to ${\lambda}_{i,x}$ in ${\mathbb{R}}$, $${\alpha}_{i,x}: [0, 1] \to {{\mathbb{R}}}, \; {\alpha}_{i,x}(s) = {\lambda}_{i,x}s,$$ and the path $${\alpha}_x: [0, 1] \to T, \;\; {\alpha}_x(s) = ({\exp}(2{\pi}{\alpha}_{1,x}(s)i),\ldots,{\exp}(2{\pi}{\alpha}_{n,x}(s)i)).$$ Define a homotopy $${\Phi}: M_X({\Lambda}){\times}[0, 1] \to M_Y({\Lambda}'), \;\; {\Phi}((t, x), s) = ({\sigma}(t){\alpha}_x(s) , {\phi}(x)).$$ It is straightforward to check that $\Phi$ is well-defined and $\sigma$-equivariant, and that it starts at ${\phi}_*$ and ends at $f$. We will show that $\Phi$ is continuous. Let $F$ be a face of $X$. Then the map $\{1\}{\times}F \to M_X({{\Lambda}})$ is an embedding. Thus $f|\{1\}{\times}F$ is continuous. So the map $$\{1\}{\times}F \to T{\times}Y, \ \ (1, x) \mapsto (t_x, {\phi}(x)), \;\text{where}\; f(1, x) = (t_x, {\phi}(x)),$$ is continuous. From the definition of ${\alpha}_x$, the map $$(\{1\}{\times}F){\times}[0, 1] \to T{\times}Y, \ \ ((1, x), s) \mapsto ({\alpha}_x(s), {\phi}(x))$$ is continuous. Composing with the quotient map, we see that ${\Phi}|(\{1\}{\times}F){\times}[0,1]$ is continuous. Also, the homotopy $\Phi$ agrees on the intersection of two faces $\{1\}{\times}F$ and $\{1\}{\times}F'$ because of the choices we made for the coset representatives. Thus the map $\Phi$ is continuous on the image of $\{1\}{\times}X{\times}[0, 1]$. By equivariance, $\Phi$ is continuous on $M_X({{\Lambda}}){\times}[0, 1]$.
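The connecting path $\alpha_x$ used in the proof above, a straight-line lift in ${\mathbb{R}}^n$ exponentiated coordinate-wise, can be sketched as follows (the complex-coordinate representation of $T^n$ and the function name are of our choosing):

```python
import cmath

def torus_path(lambdas):
    """Given a lift (l_1,...,l_n) in R^n of a point t in T^n = (S^1)^n,
    return the path s -> (exp(2*pi*i*l_1*s), ..., exp(2*pi*i*l_n*s))
    from the identity (s = 0) to t (s = 1)."""
    def alpha(s):
        return tuple(cmath.exp(2j * cmath.pi * lam * s) for lam in lambdas)
    return alpha

alpha = torus_path([0.25, 0.5])          # t = (i, -1) on T^2
start, end = alpha(0.0), alpha(1.0)
assert all(abs(z - 1) < 1e-12 for z in start)   # path starts at the identity
assert abs(end[0] - 1j) < 1e-12 and abs(end[1] + 1) < 1e-12   # and ends at t
```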
Rigidity
========
Let $(X^n, {{\Lambda}})$ be a characteristic pair with $X^n$ a simple nice manifold with corners, and let $M^{2n} = M_X({{\Lambda}})$ be the corresponding pseudotoric manifold. We assume that all the faces of $X^n$ (and $X^n$ itself) are homeomorphic to contractible manifolds with boundary; that is, $X$ is a homology polytope. This is the situation when $M$ is a quasitoric manifold. In that case the codimension-$0$ faces have trivial isotropy subgroups. Let $N^{2n}$ be a $2n$-dimensional $T = T^n$-manifold and $f: N^{2n} \to M^{2n}$ a $T$-equivariant homotopy equivalence.
\[lem-dimension\] The action of $T$ on $N^{2n}$ is effective.
Assume that this is not the case. Then there is a nontrivial $t\in T$ that fixes $N^{2n}$ pointwise. Let $G ={\langle}t{\rangle}$. Then $N^G = N^{2n} \simeq M^G$, since $f$ is an equivariant homotopy equivalence. But $M^G$ is a closed proper submanifold of $M^{2n}$, because the action on $M^{2n}$ is effective. Thus ${\dim}(N^G) = {\dim}(M^G) < {\dim}(M^{2n}) = {\dim}(N^{2n})$, a contradiction.
\[prop-locally-standard\] The action of $T$ on $N^{2n}$ is locally standard. In particular, the quotient $Y = N^{2n}/T$ is a manifold with corners.
Since $T$ acts effectively on $N^{2n}$, it is enough to show that the action has no finite isotropy subgroups. Suppose that $F$ is a nontrivial finite isotropy subgroup of the action on $N^{2n}$. Then $N^F \not= \emptyset$. But $N^F \simeq M^F = {\emptyset}$, a contradiction. Thus the action on $N^{2n}$ has no finite isotropy subgroups, and therefore it is locally standard ([@yo], Example 2.1).
\[cor-manifold-corners\] Let $N^{2n}$ and $Y$ be as above. Then there is a characteristic map ${{\Lambda}}'$ so that $N^{2n} \cong_{T} M_Y({{\Lambda}}')$.
For each facet $F$ of $Y$, define ${{\Lambda}}'(F) = T_F$, the isotropy subgroup of $F$. Define $M_Y({\Lambda}')$ to be the quotient space $T\times Y/\sim$, with $(t,x)\sim (t',x')$ if and only if $x=x'$ and $t^{-1}t'\in T(x)$. The $T$-homotopy equivalence $f$ induces a skeletal homotopy equivalence ${\phi}: Y \to X$. Thus $Y$ is contractible. The result follows from Lemma \[lem-homeomorphism\].
We start with some reductions of the rigidity problem. Let $f: N^{2n} \to M^{2n}$ be a $T$-homotopy equivalence as before. Let ${\phi}: Y \to X$ be the quotient map induced by $f$. By Proposition \[prop-reverse\], the map $f$ is $T$-homotopic to ${\phi}_*$.
We need a version of the Poincar[é]{} Conjecture. For an $n$-dimensional manifold with boundary $(M,\partial M)$ the relative structure set ${\mathcal{S}}(M,\partial M)$ is the set of equivalence classes of pairs $(N,f)$ with $N$ an $n$-dimensional manifold with boundary and $f:N\rightarrow M$ a homotopy equivalence such that $\partial f:
\partial N\rightarrow\partial M$ is a homeomorphism.
\[lem-poincare\] Let $(M, {\partial}M)$ be a compact contractible manifold with boundary. Then the relative structure set $\mathcal{S}(M, {\partial}M) = *$.
In [@ro], the calculation is reduced to the structure set of the sphere. For $n \ge 5$ the result is due to Smale ([@wa]), for $n = 4$ to Freedman ([@fr]), and for $n = 3$ to Perelman ([@pe1], [@pe2], [@pe3]). For $n=1,2$ it is a classical result.
With the above notation, the map ${\phi}_*$ is $T$-homotopic to a $T$-homeomorphism.
We will use the method of [@mo-pr], [@pr-sp], and [@ro]. We will show that $\phi$ is skeletal homotopic to a skeletal homeomorphism; the result then follows from Corollary \[cor-homotopy\]. We will construct a skeletal homotopy by induction on faces. Notice that, because $f$ is a $T$-equivariant homotopy equivalence, the induced skeletal map preserves (closed) faces. That means that if $F_1$ is a face of $Y$ of codimension $k$, then $\phi$ maps $F_1$ to a face $F_2$ of $X$ of codimension $k$, and we write ${\phi}|_{F_1}: F_1 \to F_2$. Also, each closed face is homeomorphic to a contractible manifold with boundary.
We start the induction. The zero-dimensional faces correspond to the $T$-fixed points; thus $Y$ and $X$ have the same number of zero-dimensional faces, and the restriction of $\phi$ to them is a homeomorphism. Now let $F_1$ be a face of $Y$ and ${\partial}F_1$ its boundary. We assume that there is a skeletal homeomorphism $h_{{\partial}F_1}$ that is skeletal homotopic to ${\phi}|{\partial}F_1$. Using the homotopy extension property, there is a map ${\phi}': F_1 \to F_2$ that is homotopic to ${\phi}|F_1$ and extends the map $h_{{\partial}F_1}$. Because all the maps and homotopies are skeletal at the boundary, they are skeletal on the closed face $F_1$. By Lemma \[lem-poincare\], ${\phi}'$ is homotopic to a homeomorphism relative to the boundary. As before, all homotopies are skeletal. Continuing this way, we get a skeletal homeomorphism $h: Y \to X$ that is skeletal homotopic to $\phi$. Also, ${{\Lambda}}'(F) < {{\Lambda}}({\phi}(F)) = {{\Lambda}}(h(F))$. Thus ${\phi}_* \simeq_T h_*$, and the latter is a $T$-homeomorphism.
Let $M$ be a pseudotoric manifold over a nice, simple manifold with corners $X$. We assume that all the faces of $X$ (and $X$ itself) are homeomorphic to contractible manifolds with corners. Let $N$ be a locally linear $T$-manifold and $f:N \to M$ a $T$-equivariant homotopy equivalence. Then $f$ is $T$-homotopic to a $T$-homeomorphism.
By Proposition \[prop-locally-standard\], the action of $T$ on $N$ is locally standard. By Corollary \[cor-manifold-corners\], $N\cong_{T} M_Y({{\Lambda}}')$ for some manifold with corners $Y$ and characteristic function ${{\Lambda}}'$. Then the map $f$ induces a skeletal map ${\phi}: Y \to X$, which, by the previous theorem, is skeletal homotopic to a skeletal homeomorphism $h$. Thus $$f \simeq_{T} {\phi}_* \simeq_{T} h_*,$$ and the last map is a $T$-homeomorphism.
The following is an immediate consequence of our Theorem.
Let $M$ be a quasitoric manifold, $N$ a locally linear $T^n$-manifold, and $f:N \to M$ a $T^n$-homotopy equivalence. Then $f$ is $T^n$-homotopic to a $T^n$-homeomorphism.
Also, a slightly more general result holds.
Let $M$ be a pseudotoric manifold over a nice, simple manifold with corners $X$. We assume that all the faces of $X$ (and $X$ itself) are homeomorphic to contractible manifolds with corners. Let ${\sigma}: T \to T$ be a continuous automorphism, $N$ a locally linear $T$-manifold, and $f:N \to M$ a $\sigma$-equivariant homotopy equivalence. Then $f$ is $\sigma$-homotopic to a $\sigma$-homeomorphism.
[10]{}
V. M. Buchstaber and T. E. Panov, *Torus actions, combinatorial topology and homological algebra*, arXiv:math/0010073.
M. W. Davis, *The Geometry and Topology of Coxeter Groups*, London Mathematical Society Monographs Series, 32. Princeton University Press, Princeton, NJ, 2008.
M. W. Davis and T. Januszkiewicz, *Convex polytopes, Coxeter orbifolds and torus actions*, Duke Math. J. **62** (1991), no. 2, 417-451.
M. Freedman, *The topology of four-dimensional manifolds*, J. Diff. Geom. **17** (1982), 357-453.
W. Fulton, *Introduction to Toric Varieties*, Ann. of Math. Studies 131, Princeton Univ. Press, Princeton, N.J., 1993.
M. Masuda, *Equivariant cohomology distinguishes toric manifolds*, Adv. Math. **218** (2008), no. 6, 2005-2012.

M. Masuda and T. Panov, *On the cohomology of torus manifolds*, Osaka J. Math. **43** (2006), no. 3, 711-746.

M. Masuda, D. Y. Suh, *Classification problems of toric manifolds via topology*, Toric topology, 273-286, Contemp. Math., 460, Amer. Math. Soc., Providence, RI, 2008.

G. Moussong and S. Prassidis, *Equivariant rigidity theorems*, New York J. Math. **10** (2004), 151-167.
T. Oda, *Convex Bodies and Algebraic Geometry*, Springer-Verlag, New York, 1988.
G. Perelman, *The entropy formula for the Ricci flow and its geometric applications*, arXiv:math.DG/0211159.
G. Perelman, *Ricci flow with surgery on three-manifolds*, arXiv:math.DG/0303109.
G. Perelman, *Finite extinction time for the solutions to the Ricci flow on certain three-manifolds*. arXiv:math.DG/0307.
S. Prassidis and B. Spieler, *Rigidity of Coxeter groups*, Trans. Amer. Math. Soc. **352** (2000), no. 6, 2619-2642.

E. Rosas, *Rigidity theorems for right angled reflection groups*, Trans. Amer. Math. Soc. **308** (1988), no. 2, 837-848.
C. T. C. Wall, *Surgery on compact manifolds*, Academic Press, New York, 1970.
T. Yoshida, *Locally standard torus fibrations*, Proceedings of 33rd Symposium on Transformation Groups, 107-118, Wing Co., Wakayama, 2007.
T. Yoshida, *Local torus actions modeled on the standard representation*, Adv. Math. **227** (2011), no. 5, 1914-1955.
[^1]: $^*$The research of the second author has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program “Education and Lifelong Learning” of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALIS
---
abstract: 'Resonance-mediated many body decays of heavy mesons are analyzed. We focus on some particular processes, in which the available phase space for the decay of the intermediate resonance is very narrow. It is shown that the mass selection criteria used in several experimental studies of $D$ and $B$ meson decays could lead to a significant underestimation of branching ratios.'
address:
- 'IFLP, Depto. de Física, Universidad Nacional de La Plata, C.C. 67, 1900 La Plata, Argentina'
- 'Instituto de Física, Facultad de Ingeniería, Univ. de la República, C.C. 30, CP 11000 Montevideo, Uruguay'
author:
- 'L.N. Epele, D. Gómez Dumm, A. Szynkman'
- 'R. Méndez–Galain'
title: Cuts in the invariant mass of resonances in many body decays of mesons
---
Three or more body decays of heavy mesons are usually dominated by intermediate resonances. The latter are short-lived states that cannot be directly observed: only the daughter particles produced in their decays reach the detectors[@PDG]. The detected final state results from the interference of all possible intermediate channels. Thus, one has to disentangle the quantum interference to understand the underlying physics. A powerful technique to split up the various resonant channels is the so-called Dalitz plot fit[@DP].
In general, if a given resonance proceeds within a kinematic region where no other resonances occur, interference effects are assumed to be negligible. In this case, the branching ratio of the heavy meson decay to that particular resonance can be measured in a simple way: one just counts the number of events in which the invariant mass of the decay products lies within a small window around the resonance central mass. The size of this window is chosen according to the width of the resonance. This is a natural and thus widely used measurement technique[@ventana].
In this Letter we discuss the validity of this method. In fact, the resonance is a virtual state, and its squared four-momentum can reach in principle any value within the allowed phase space. This is certainly not a new statement. The interesting fact is that, as we will see, for some particular decays one could find a very large number of naively unexpected events such that, even if they proceed through a particular resonance, the invariant mass of the detected particles lies very far from the resonance mass shell (by “far” we mean in comparison with the resonance width). These events would be missed if the counting method simply considers a narrow window around the resonance central mass. As shown below, this is the case for some processes in which the resonance decay rate is kinematically suppressed. We are aware that the physics involved in this discussion is well known. However, we believe that the magnitude of the effect has not been well appreciated so far, and the described counting method is not always safely applicable. Here we present simulations of actual decays and discuss the significance of the effect as well as the expected experimental difficulties.
Let us first describe those aspects of many body decays relevant to our discussion. Consider a heavy meson $P$ decaying into a final state given by three detected particles, $d_1, d_2$ and $d_3$, and assume that the decay proceeds through intermediate resonances $R_1, R_2$, etc.[^1]. The total amplitude of the decay is the sum of the amplitudes of all partial channels, each one mediated by a resonance $R_i$: $${\cal A}_{tot} = \Sigma_i {\cal A}_{R_i}\;.
\label{ampltot}$$
For simplicity, let us assume that the decay is dominated by a single resonance $R$, in such a way that ${\cal A}\simeq {\cal A}_R = {\cal A}(P
\to R d_3; R \to d_1 d_2)$. It is natural to describe the process by considering three independent stages: resonance production $P \to R d_3$, resonance propagation, and resonance decay $R \to d_1 d_2$. The amplitude factorizes then as $$\begin{aligned}
{\cal A}(P \to R d_3 \to d_1 d_2 d_3) & =
\nonumber \\
{\cal A}(P \to R d_3) \times
BW_{R,12} \times {\cal A}(R \to d_1 d_2) \;. &
\label{fact}\end{aligned}$$
The amplitudes ${\cal A}(P \to R d_3)$ and ${\cal A}(R \to d_1 d_2)$ have to take into account the conservation of angular momentum in the decays, as well as the energy dependence, usually parameterized through form factors (we will come back to these two ingredients later). As usual, we describe the resonance propagation by means of a relativistic Breit-Wigner function[@BW] $$BW_{R,12} (m_{12}^2) = \frac{1}{m_0^2 - m_{12}^2 - i m_0 \Gamma}\; ,
\label{BW}$$ where $m_0$ is the resonance mass, $\Gamma$ is the resonance width, and $m_{12}^2$ is the invariant mass of the outgoing particles $d_1$ and $d_2$, $m_{12}^2 =(p_1 + p_2)^2$.
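As an illustration, the propagator of Eq. (\[BW\]) is straightforward to code; the function name and the numerical values below are ours and purely illustrative:

```python
def breit_wigner(m12_sq, m0, gamma):
    """Relativistic Breit-Wigner propagator BW(m12^2) = 1/(m0^2 - m12^2 - i*m0*Gamma)."""
    return 1.0 / complex(m0 ** 2 - m12_sq, -m0 * gamma)

m0, gamma = 2.007, 0.001            # GeV: D*-like resonance, width 1 MeV
peak = abs(breit_wigner(m0 ** 2, m0, gamma))
off  = abs(breit_wigner((m0 + 10 * gamma) ** 2, m0, gamma))
assert peak > off                    # the modulus is maximal on the mass shell
assert abs(peak - 1.0 / (m0 * gamma)) < 1e-9   # |BW(m0^2)| = 1/(m0*Gamma)
```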
The function $|BW(m_{12}^2)|$ is peaked around the resonance mass, and decreases with a rate given by $\Gamma$. This behavior reflects the fact that $R$ is a virtual particle that —according to quantum mechanics— can have [*any*]{} invariant mass $m_{12}^2$, the relative probability of each “mass” being weighted by the factor $|BW(m_{12}^2)|^2$. It is then natural to measure $BR(P \to R d_3)$ by simply counting the number of detected particles $d_1$ and $d_2$ for which the value of $\sqrt{m_{12}^2}$ lies within a region of the order of $(m_0-\Gamma,m_0+\Gamma)$. This technique has in fact been used for many years[@ventana]. However, as we will show in the following, the usage of this approach is not always safe.
The decay of a resonance is usually driven by the strong interaction. Thus, resonances have relatively large widths —some tens or even some hundreds of MeV [@PDG]. However, if a particular decay is somehow suppressed, the resonance may have a longer life, and its width can be as small as some MeV or less. This happens in particular when the resonance central mass $m_0$ is very close to the threshold of its decay to $d_1 d_2$. In this case, the phase space available for the decay turns out to be very narrow, and the (virtual) resonance could be allowed to decay through other channels which were in principle expected to be strongly suppressed in comparison with the “natural” strong channel $R\to d_1 d_2$.
This is the case, for instance, of the $\phi(1020)$ vector meson. For this resonance, the natural decay channel is $K \bar K$, in the same way as the natural decay for $\rho$ is two pions. Nevertheless, due to the small phase space available —32 MeV and 24 MeV for the charged and neutral kaons, respectively— the corresponding $\phi$ branching ratios are “only” 49% and 34% for $K^+K^-$ and $K^0_L K^0_S$, respectively. Electromagnetic decays, that have branching fractions as small as $10^{-4}$ in the $\rho$ decay pattern, are of the order of 1% in the $\phi$ case. Accordingly, the $\phi$ width is about 35 times smaller than the $\rho$ one.
Other examples are low mass $D^\ast$ vector mesons. Their natural decay channel is $D \pi$, in the same way as the natural decay for $K^\ast$ is $K
\pi$. But here, the phase space is as small as 7 MeV for the ${D^\ast}^0(2007) \to D^0 \pi^0$, 6 MeV for both ${D^\ast}^+(2010)\to D^0
\pi^+$ and ${D^\ast}^+(2010)\to D^+ \pi^0$, and 8 MeV for ${D_s^\ast}^+\to
D_s^+ \pi^0$. As a consequence, measured branching ratios of these decays —which otherwise should reach almost 100%— are as “small” as 62%, 68%, 31% and 6%, respectively[^2]. In the ${D^\ast}^0(2007)$ decay pattern, the electromagnetic decay is as large as 38%, i.e., of the same order of magnitude of the strong one (in contrast, in the $K^\ast$ case, electromagnetic decays are of the order of $10^{-3}$). Accordingly, resonance widths are quite small: the width of ${D^\ast}^+(2010)$ has recently been reported to be as small as 0.1 MeV[@cleo], whereas for ${D^\ast}^0(2007)$ and ${D_s^\ast}^+$ only upper limits are known, presently of the order of 2 MeV.
Let us face the study of heavy meson decays mediated by these particular spin one resonances, focusing on cases in which the detected final state includes their natural, and yet highly suppressed, strong channels. We describe here a usual situation[@PDG], where both the initial heavy meson and the particles in the final state are scalars, $P =D, B$, and $d_i
= \pi, K, D$. In this case, Eq. (\[fact\]) can be conveniently written as $${\cal A}(P \to R d_3 \to d_1 d_2 d_3) =
F_{P,Rd_3}\; F_{R,d_1d_2} \; (-2 {\vec{p}_1}\cdot{\vec{p}_3}) \; BW_{R,12}\;,
\label{vector}$$ where $F_{P,Rd_3}$ and $F_{R,d_1d_2}$ are form factors, and the three-momenta ${\vec{p}_1}$ and ${\vec{p}_3}$ are evaluated in the resonance rest frame. The explicit momentum dependence in (\[vector\]) follows from Eq. (\[fact\]), just assuming Lorentz invariance and summing over all possible polarizations of the intermediate vector meson resonance. The differential decay width of this reaction can be written as $$d\Gamma = \frac{1}{(2\pi)^3}\frac{1}{32M^3}|{\cal A}|^2
dm^2_{12}\, dm^2_{13} \;,
\label{width}$$ where $m^2_{ij}=(p_i+p_j)^2$ and $M$ is the mass of the decaying meson $P$.
For the decays considered here, there is a strong suppression at the resonant peak, i.e. when $m_{12}^2\simeq m_0^2$. This suppression is due to purely kinematic effects. To see this, let us take ${\cal A}\simeq$ constant for a given value of $m_{12}^2$ —which means to neglect the dynamics of the decay— and integrate over the variable $m^2_{13}$ within the kinematic limits of the three body phase space. It is easy to see that $$\frac{d\Gamma}{dm^2_{12}} \propto |{\cal A}|^2\,|{\vec{p}_1}| \;,
\label{diffwidth}$$ where ${\vec{p}_1}$ is the three-momentum of $d_1$ in the resonance rest frame. Since we are assuming that the mass of the resonance is just above the threshold, $m_0\simeq m_1+m_2$, at the resonance peak both particles $d_1$ and $d_2$ will be produced almost at rest. Thus Eq. (\[diffwidth\]) implies a suppression in the partial width.
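The momentum $|\vec{p}_1|$ entering Eq. (\[diffwidth\]) is given by the standard two-body decay formula, $|\vec{p}_1| = \lambda^{1/2}(m_{12}^2, m_1^2, m_2^2)/(2 m_{12})$ with the Källén function $\lambda(a,b,c) = a^2+b^2+c^2-2(ab+bc+ca)$. A short sketch (function names are ours; the masses are approximate) shows how strongly it is suppressed just above threshold:

```python
import math

def kallen(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a * a + b * b + c * c - 2.0 * (a * b + b * c + c * a)

def p_star(m12, m1, m2):
    """Momentum |p1| of d1 in the d1 d2 rest frame, for invariant mass m12."""
    lam = kallen(m12 ** 2, m1 ** 2, m2 ** 2)
    return math.sqrt(max(lam, 0.0)) / (2.0 * m12)

mD, mpi = 1.8648, 0.1350   # GeV: approximate D0 and pi0 masses
m0 = 2.0069                # D*0-like resonance, just above the D0 pi0 threshold
# On the mass shell the decay momentum is tiny...
assert p_star(m0, mD, mpi) < 0.05
# ...but it grows quickly as m12 moves above the resonance:
assert p_star(m0 + 0.3, mD, mpi) > 5 * p_star(m0, mD, mpi)
```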
The effect is even stronger in the particular case of a vector resonance. In that case, as stated in Eq. (\[vector\]), the amplitude is proportional to ${\vec{p}_1} \cdot {\vec{p}_3}$. Assuming that the form factors are slow-varying functions of the phase space variables, the partial width is finally expected to be suppressed by a factor $(|\vec p_1|/\Lambda)^3$, where $\Lambda$ is some natural scale of the process, typically of order $M$.
Eqs. (\[vector\]) and (\[diffwidth\]), which are certainly very well known, show up the main point of this Letter. The decay width $\Gamma(P\to d_1 d_2 d_3)$ is driven by two [*competitive*]{} effects. On the one hand, the BW propagator strongly enhances the decay amplitude in the vicinity of the resonance mass. On the other hand, kinematic effects suppress the differential width at the resonance peak, hence the decay rate for other kinematic regions is comparatively enhanced. The usual suppression of the differential decay width for $P \to R d_3 \to
d_1 d_2 d_3$ outside the window allowed by the BW function is not obvious in this case.
To clarify our point, let us emphasize that we are not claiming that there is an enhancement of the resonant [*production*]{} probability $P \to R d_3$ outside the BW peak. On the contrary, the point is that due to the suppression of the resonance [*decay rate*]{} $R \to d_1 d_2$ at the resonance central mass, the [*combined*]{} production + decay probability within and outside the peak could be of comparable orders. In other words, the total width (i.e., integrated over the whole phase space) is indeed small; the important point is that, even if the decay proceeds through a BW-described resonance, the relative weight for the decay rate near or far from the resonance mass shell is not just driven by the BW function.
In order to show that this effect is not a simple academic thought, we present in the following a simulation of an actual process. We first present the pure theoretical estimate; afterwards, we will consider the experimental difficulties. Let us consider the decay $B^+ \to \bar D^{\ast 0} D^+_s ;
\bar D^{\ast 0}\to \bar D^0\pi^0$. Using Eq. (\[vector\]), it is possible to get an estimate for the differential decay rate $d\Gamma$ as a function of $m_{12}^2$. Since the form factors are usually smooth functions, we will assume as a first guess that they are constant (experimental analyses show that form factor shapes have no significant effect on the total systematic error of a given Dalitz plot fit[@exp]). In this way we can calculate the ratio $$r = \frac{\displaystyle
\int_{m^2_{12}=(m_0-n\,\Gamma)^2}^{m^2_{12}=(m_0+n\,\Gamma)^2}
|{\cal A}|^2 d\Phi }{\displaystyle \int |{\cal A}|^2 d\Phi }\;,
\label{r}$$ where $d\Phi$ is an element of the three body phase space, $m_0$ is the $\bar D^{\ast 0}$ resonance mass ($m_0 = 2007$ MeV), $\Gamma$ is the $\bar D^{\ast 0}$ width, and $n$ is a real number. $\Gamma$ is presently unknown, its upper limit being 2.1 MeV with a 90% confidence level[@PDG]. The integral in the denominator is calculated over the whole phase space, while that in the numerator is limited to a window in $m^2_{12}$. Thus, $r$ is a measure of the relative number of events that are expected to fall within the resonance peak.
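A rough Monte Carlo estimate of Eq. (\[r\]) can be sketched along the following lines, with constant form factors as assumed in the text. The masses are approximate, the sampling scheme and all function names are of our choosing, and the bounds asserted at the end are only qualitative:

```python
import math, random

def two_body_p(m, ma, mb):
    """Back-to-back momentum in the rest frame of a system of mass m -> a b."""
    s = m * m
    lam = (s - (ma + mb) ** 2) * (s - (ma - mb) ** 2)
    return math.sqrt(max(lam, 0.0)) / (2.0 * m)

def amp2(m12sq, m13sq, M, m1, m2, m3, m0, gamma):
    """|A|^2 of Eq. (vector) with constant form factors:
    |A|^2 = (2 p1.p3)^2 |BW(m12^2)|^2, 3-momenta in the resonance frame."""
    m12 = math.sqrt(m12sq)
    E1 = (m12sq + m1 * m1 - m2 * m2) / (2.0 * m12)
    E3 = (M * M - m12sq - m3 * m3) / (2.0 * m12)
    p1dotp3 = E1 * E3 - 0.5 * (m13sq - m1 * m1 - m3 * m3)
    bw2 = 1.0 / ((m0 * m0 - m12sq) ** 2 + (m0 * gamma) ** 2)
    return 4.0 * p1dotp3 ** 2 * bw2

def ratio_r(M, m1, m2, m3, m0, gamma, n, events=200000, seed=7):
    """MC estimate of r: |A|^2-weighted fraction of the Dalitz plot
    with m12 inside the window (m0 - n*gamma, m0 + n*gamma)."""
    rng = random.Random(seed)
    lo12, hi12 = (m1 + m2) ** 2, (M - m3) ** 2
    win_lo, win_hi = (m0 - n * gamma) ** 2, (m0 + n * gamma) ** 2
    inside = total = 0.0
    for _ in range(events):
        m12sq = rng.uniform(lo12, hi12)
        m12 = math.sqrt(m12sq)
        E1 = (m12sq + m1 * m1 - m2 * m2) / (2.0 * m12)
        E3 = (M * M - m12sq - m3 * m3) / (2.0 * m12)
        p1 = two_body_p(m12, m1, m2)
        p3 = math.sqrt(max(E3 * E3 - m3 * m3, 0.0))
        lo13 = (E1 + E3) ** 2 - (p1 + p3) ** 2   # kinematic band in m13^2
        hi13 = (E1 + E3) ** 2 - (p1 - p3) ** 2
        m13sq = rng.uniform(lo13, hi13)
        w = amp2(m12sq, m13sq, M, m1, m2, m3, m0, gamma) * (hi13 - lo13)
        total += w
        if win_lo < m12sq < win_hi:
            inside += w
    return inside / total

# B+ -> D*0bar Ds+, D*0bar -> D0bar pi0 (masses in GeV, approximate)
M, mD, mpi, mDs = 5.279, 1.865, 0.135, 1.968
r_near = ratio_r(M, mD, mpi, mDs, m0=2.007, gamma=0.001, n=3)
r_far  = ratio_r(M, mD, mpi, mDs, m0=2.600, gamma=0.001, n=3)
assert r_far > r_near   # the near-threshold suppression spreads events out of the window
```

The qualitative outcome of such a sketch is the one discussed in the text: for the fictitious high-mass resonance most of the weight falls inside the window, while for the physical near-threshold mass a substantial fraction does not.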
We quote in Table I the values of $r$ for some input values of $n$ and $\Gamma$. Our results show that the effect we are describing can be very strong if the resonance width is as large as 2 MeV, and it remains quite significant even if $\Gamma$ is of the order of 0.1 MeV. For comparison, we include in the last column the values of $r$ corresponding to a fictitious resonance having the same quantum numbers as $\bar D^{\ast 0}$ but a higher mass, $m_0 = 2.6$ GeV. This particle would not suffer the kinematic suppression in its decay to $\bar
D^0\pi^0$, consequently a small value of $n$ is enough to get $r$ above 90%. We have found that in this case the results for $r$ are independent of the specific value of $\Gamma$.
Figures 1a and 1b show the kinematic distribution of the events for the three body decay $B^+ \to \bar D^0 \pi^0 D^+_s$, assuming that the process is dominated by the $\bar D^{\ast 0}(2007)$ resonance channel[^3] and considering a $\bar D^{\ast 0}$ width $\Gamma = 1$ MeV. The plots correspond to a Monte Carlo simulation of 10000 events, performed using Eqs. (\[vector\]) and (\[width\]). Figure 1a is the Dalitz plot of the decay as a function of the invariant masses $m^2_{\bar D^0\pi^0}$ and $m^2_{D^+_s\pi^0}$, while Figure 1b shows a histogram of the number of events as a function of $m^2_{\bar D^0\pi^0}$.
It is seen that the Dalitz plot shape in Fig. 1a is quite different from that naively expected for a decay mediated by a $\Gamma = 1$ MeV vector resonance. This becomes evident by looking at Figure 1c, where we show a Monte Carlo simulation of the same process, now shifting the $\bar D^{\ast 0}$ mass to the fictitious value of 2.6 GeV considered in Table I. The striking difference between the event distributions in the two plots arises from the kinematic suppression discussed above. However, the situation displayed in Fig. 1a can be misleading if one just looks at the events for which $m^2_{\bar D^0\pi^0}$ is near the $\bar D^{\ast 0}$ mass. In fact, despite the spreading of events over the whole phase space, the event density is still much larger in the peak region than anywhere else. Therefore, Figure 1b could be mistaken for a $\Gamma = 1$ MeV Breit-Wigner function with some background in the right sideband[@prl]. It would then be natural to apply the usual method, that is, to assume that the events within a window of a few $\Gamma$ around the peak account for almost the total number of decay events. But, according to our simulation, this would be wrong by a factor as large as 3 to 4 (see Table I). Indeed, even if the amplitude is peaked around the resonance mass, the phase space area outside the peak is comparatively so large that the number of events falling in this region becomes very important.
Let us now discuss the experimental difficulties that could appear when data from real events are analyzed. First of all, the resolution in these experiments is usually larger than the widths of the intermediate resonances involved[@ventana]. As a consequence, it would be impossible to obtain a direct measurement of the ratio $r$, at least for small values of $n$. Accordingly, the experimental histogram of Fig. 1b would be wider, arising from the convolution of a narrow Breit-Wigner function (physical) with a wider Gaussian (resolution). Nevertheless, the bulk of our reasoning remains valid: a large amount of physical events may fall outside the peak. A second comment refers to the difficulty of developing a more careful analysis of the data, in order to include the contribution of these events. Indeed, according to Table I, the number of such events could be quite important, but at the same time they are spread over a very large region of the phase space. Then, only a relatively low number of events per bin would be found outside the peak, and they could hardly be disentangled from the background, no matter which function is used to perform the corresponding fit.
It is important to stress that our theoretical estimates rely on two basic assumptions. First, we have taken the form factors to be approximately constant along the phase space. Second, we have assumed that the Breit-Wigner shape remains valid far beyond a small region around the peak. Whereas the first hypothesis is quite natural and does not have a significant effect on our results, the second assumption can hardly be supported from the theoretical point of view. Our simulations are in this sense strongly model dependent, and this has to be kept in mind when looking at the numerical values presented in Table I.
In any case, our simulations can be taken as a severe warning, indicating that many reported results can be spoiled by the effect described in this Letter. For some particular processes, the magnitude of the corrections could be quite significant, and this should be taken into account —at least— in the corresponding systematic errors. This includes the analysis of the decays $B \to \bar D^{\ast 0} d_3$, $B\to
{D^\ast}^+ d_3$ and $B \to {D_s^\ast}^+ d_3$, with $d_3 = \pi^0, \eta,
K, D_s^+, D^0$, etc. These channels have been measured[@lista] using $\bar D^0 \pi^0 d_3$ —respectively $D^+ \pi^0 d_3$ and $D_s^+ \pi^0
d_3$— as final states, and imposing a cut in the invariant mass $m_{D^0 \pi^0}^2$ —respectively $m_{D^+ \pi^0}^2$ and $m_{D^+_s\pi^0}^2$. According to the simple analysis presented here, in all these cases the effect would be of the order predicted in Table I. In particular, in the case of the ${D^\ast}^+$ resonance, whose width has recently been reported to be 0.1 MeV, the effect is expected to be of the order of 30%.
Finally, let us mention that $\phi$ meson production and decay measurements could also be affected by the effect described above. Since the $\phi$ resonance can be produced in $D$ meson decays, a significant amount of data is presently available, and many Dalitz plot analyses have already been done. However, notice that the threshold of the decay $\phi \to KK$ is about 25 MeV away from the $\phi$ mass, so that the available phase space is not as narrow as in the $D^\ast$ case. Performing a Monte Carlo simulation for the decay $D\to \phi\pi$ similar to that described above for the process $B\to \bar D^\ast D_s$, we have found that the expected effect could be in this case as large as 30%.
We are grateful to D. Cinabro, C. Göbel and G. González-Sprinberg for interesting discussions and comments. D.G.D. acknowledges financial aid from Fundación Antorchas (Argentina). This work has been partially supported by CONICET and ANPCyT (Argentina).
Particle Data Group, D.E. Groom [*et al.*]{}, Eur. Phys. J. C [**15**]{} (2000) 1.
E. Byckling and K. Kajantie, [*Particle Kinematics*]{} (John Wiley & Sons, New York, 1973).
See for example ARGUS Collab., H. Albrecht [*et al.*]{}, Z. Phys. C [**54**]{} (1992) 1; CLEO Collab., D. Gibaut [*et al.*]{}, Phys. Rev. D [**53**]{} (1996) 4734; CLEO Collab., M. Artuso [*et al.*]{}, Phys. Lett. B [**378**]{} (1996) 364; WA82 Collab., M. Adamovich [*et al.*]{}, Phys. Lett. B [**305**]{} (1993) 177; MARK-III Collab., R. M. Baltrusaitis [*et al.*]{}, Phys. Rev. Lett. [**55**]{} (1985) 150. See also works quoted in [@dsdspi] and [@lista].
J.D. Jackson, Nuovo Cim. [**34**]{} (1964) 6692.
CLEO Collab., J. Gronberg [*et al.*]{}, Phys. Rev. Lett. (1995) 3232.
CLEO Collab., S. Ahmed [*et al.*]{}, Phys. Rev. Lett. (2001) 251801.
See for example E687 Collab., P.L. Frabetti [*et al.*]{}, Phys. Lett. B [**331**]{} (1994) 217.
See for example Figure 1 of Ref. [@dsdspi].
The list is extensive. For a complete review see [@PDG]. A recent measurement is that of Belle Collab., K. Abe [*et al.*]{}, hep-ex/0107048.
  $n$    $\Gamma = 0.1$ MeV   $\Gamma = 0.5$ MeV   $\Gamma = 1$ MeV   $\Gamma = 2$ MeV   fictitious
  ------ -------------------- -------------------- ------------------ ------------------ ------------
  1      61%                  30%                  18%                11%                70%
  3      69%                  36%                  24%                17%                90%
  10     73%                  41%                  30%                25%                97%
  30     75%                  46%                  37%                36%                99.5%
: The ratio $r$ as explained in the text, for different values of $n$ (1 to 30) and $\Gamma$ (0.1 to 2 MeV). The last column shows the values of $r$ for a fictitious resonance with mass 2.6 GeV —see text.[]{data-label="tab1"}
[^1]: The decay also proceeds through a non-resonant channel, which is not relevant to our discussion.
[^2]: $D_s^{+*} \to D_s^+ \pi^0$ is an isospin violating decay (see Ref. [@dsdspi]). It thus has another, phase space independent suppression.
[^3]: In fact, the decay can also proceed through an intermediate resonance ${D_s^\ast}^+$, as well as other higher resonant states. The inclusion of these contributions in our analysis does not affect significantly the results.
---
abstract: 'Stimulated by recent advances in isolating graphene, we have found that a quantum dot can be trapped in a Z-shaped graphene nanoribbon junction. The topological structure of the junction can confine electronic states completely. By varying the junction length, we can alter the spatial confinement and the number of discrete levels within the junction. In addition, the quantum dot can be realized regardless of substrate-induced static disorder or irregular edges of the junction. This platform can therefore be used to design zero-dimensional functional nanoscale electronic devices using graphene ribbons.'
address:
- Hefei National Laboratory for Physical Sciences at Microscale
- 'Electrical and Computer Engineering, University of Alberta, AB T6G 2V4, Canada'
- Hefei National Laboratory for Physical Sciences at Microscale
- 'Electrical and Computer Engineering, University of Alberta, AB T6G 2V4, Canada'
- 'National Institute of Nanotechnology, Edmonton AB T6G 2V4, Canada '
- Hefei National Laboratory for Physical Sciences at Microscale
- Hefei National Laboratory for Physical Sciences at Microscale
author:
- 'Z. F. Wang'
- Huaixiu Zheng
- 'Q. W. Shi'
- Jie Chen
- Qunxiang Li
- 'J. G. Hou'
title: 'Quantum Dot in Z-shaped Graphene Nanoribbon'
---
Due to their remarkable electrical properties, carbon nanotubes (CNTs), discovered by Iijima [@1] in 1991, have been considered a leading candidate for nanoscale electronic applications. Major experimental and theoretical breakthroughs have been achieved [@2; @3; @4] in realizing quantum dots in carbon-nanotube heterojunctions, formed by joining nanotubes of two distinct chiralities through topological point defects in the graphene hexagonal lattice. In conventional semiconductor quantum dots, the major sources of spin decoherence have been identified as the spin-orbit interaction, which couples the spin to lattice vibrations, and the hyperfine interaction of the electron spin with the surrounding nuclear spins. It is therefore desirable to form qubits in quantum dots based on carbon materials, where spin-orbit coupling and the hyperfine interaction are considerably weaker [@5]. However, large-scale integrated nanotube quantum dot devices are hard to make, because it is still difficult to assemble large numbers of CNTs together.
Very recently, the fabrication of a single layer of graphene and the measurement of its electric transport properties have been achieved [@6; @7]. Graphene is a flat monolayer of carbon atoms tightly packed into a two-dimensional (2D) honeycomb lattice. What makes graphene so attractive for nanoelectronics is that its energy spectrum closely resembles the Dirac spectrum of massless fermions: the band gap is zero and the dispersion law is linear at low energy. Quasiparticles in graphene therefore behave differently from those in conventional metals and semiconductors, where the energy spectrum can be approximated by a parabolic dispersion relation. Research interest in this material has grown exponentially [@6; @7; @8; @wzf].
As the building block of carbon nanotubes, graphene can be viewed as a sheet of an unrolled single-walled nanotube. Graphene has mechanical, thermal, and electrical properties similar to those of CNTs. Unlike CNTs, however, its flat structure can be easily etched using conventional lithography techniques. Interconnection wires become unnecessary, and integrated nanoelectronic devices can be made entirely from continuous graphene sheets. Armchair and zigzag graphene nanoribbons (GNRs) are the two basic ribbon structures. In the nearest-neighbour $\pi$-orbital tight-binding model, with the diagonal matrix elements fixed at the Fermi level $\epsilon_F=0~\mathrm{eV}$ and all nonzero off-diagonal matrix elements set to $\gamma= -2.66~\mathrm{eV}$, zigzag GNRs are always metallic, while armchair GNRs are either metallic or semiconducting depending on their width $W$ [@9]: when $W=3n-1$ they are metallic, otherwise they are semiconducting. Compared with recent local density approximation (LDA) results [@10; @11], apart from the band-gap opening in narrow ribbons caused by changes in the $\sigma$ bond length, the electronic structure of GNRs is still quantitatively described by this simple tight-binding model.
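The width condition quoted above can be verified with a small zone-folding check: in this tight-binding model an armchair ribbon is gapless exactly when a transverse mode $\theta_p = p\pi/(W+1)$ satisfies $1+2\cos\theta_p = 0$, which requires $W+1$ to be divisible by 3, i.e. $W=3n-1$. A minimal sketch (the helper name is ours):

```python
import numpy as np

def armchair_is_metallic(W):
    """Zone-folding check for an armchair GNR of width W (dimer lines):
    the ribbon is gapless iff a transverse mode theta_p = p*pi/(W+1)
    satisfies 1 + 2*cos(theta_p) = 0."""
    p = np.arange(1, W + 1)
    return bool(np.any(np.isclose(1 + 2 * np.cos(p * np.pi / (W + 1)), 0)))

# Metallic exactly at W = 3n - 1:
print([W for W in range(2, 13) if armchair_is_metallic(W)])  # [2, 5, 8, 11]
```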
In addition to studies of the 2D and 1D electronic properties of graphene, research attention has recently focused on designing quantum dots (0D devices) based on this novel material [@12; @13; @14; @dot]. In this Letter, we propose a GNR quantum dot device consisting of a Z-shaped zigzag GNR junction connected to two semi-infinite armchair GNRs, as shown in Fig. 1. Here, we suppose that the GNR edge bonds are saturated by hydrogen atoms and that no distortions exist in these GNRs. According to our numerical results, this Z-shaped junction device can completely confine electronic states through the topological structure of the junction. By varying the length of the junction, the spatial confinement and the number of discrete levels are modified accordingly. Surprisingly, these confined states persist even in the presence of considerable static disorder or irregular junction edges. This finding shows that the quantum dot device can be made without too many constraints, which indicates that it could be easily fabricated.
To study the electronic properties of the Z-shaped GNR junction, we separate the device shown in Fig. 1 into three regions: the left lead, the middle junction, and the right lead. In this study, we assume that the junction width is $W-1$, the left and right leads have equal width $W$, and the length of the junction is $L$, where $L$ and $W$ are integers. In this design, the leads are semi-infinite armchair GNRs, while the junction is a zigzag GNR. We performed calculations using the nearest-neighbour $\pi$-orbital tight-binding model; the density of states (DOS) of the Z junction is determined by direct diagonalization of $H=H_c+\Sigma_L^{r}+\Sigma_R^{r}$, where $H_c$ is the Hamiltonian of the Z junction. Including parts of the armchair GNRs in $H_c$ does not appreciably change our results, so in the following calculations only the zigzag junction is included in $H_c$. The contributions from the left and right leads are included in the self-energy terms $\Sigma_{L/R}^{r}$, which are calculated using Green's functions along with a transfer-matrix technique [@15; @16]. The DOS of this system can be expressed as $$\mathrm{DOS}(E)=\sum\limits_\alpha\frac{1}{2\pi}\frac{\gamma_\alpha}
{(E-\varepsilon_\alpha)^2+(\gamma_\alpha)^2},$$ where the summation covers all eigenvalues, $\varepsilon_\alpha$ is the real part of an eigenvalue and represents the position of the state, and $\gamma_\alpha$ is the imaginary part, which represents the broadening of the state. The local density of states (LDOS) is obtained directly from the Green's function.
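The DOS expression above is a sum of Lorentzians, one per complex eigenvalue. A small numerical sketch (with hypothetical eigenvalues standing in for the diagonalization of $H=H_c+\Sigma_L^{r}+\Sigma_R^{r}$):

```python
import numpy as np

def dos(E, eigvals):
    """Sum of Lorentzians, mirroring the formula in the text:
    eps_a = Re(eigval) is the level position, gamma_a = |Im(eigval)|
    its lead-induced broadening."""
    eps = eigvals.real[None, :]
    gam = np.abs(eigvals.imag)[None, :]
    E = np.asarray(E)[:, None]
    return np.sum(gam / (2 * np.pi) / ((E - eps) ** 2 + gam ** 2), axis=1)

# Two hypothetical confined states at +-0.3 eV with 1 meV broadening,
# giving sharp electron-hole-symmetric peaks as in Fig. 2(a):
E = np.linspace(-1.0, 1.0, 2001)
d = dos(E, np.array([-0.3 - 1e-3j, 0.3 - 1e-3j]))
print(round(abs(E[np.argmax(d)]), 3))  # peak sits at |E| = 0.3
```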
In Fig. 2(a), we show the DOS of a Z-shaped junction with $W=7$ and $L=4$. There are two sharp peaks within the band gap. The positions of the peaks are symmetric around $E=0~\mathrm{eV}$, because the topological structure of the junction does not break electron-hole symmetry. This is dramatically different from quantum dots in nanotube heterojunctions, where the pentagon-heptagon interface breaks this symmetry [@2]. In Fig. 2(b), we show the spatial dependence of the LDOS for the discrete states in a Z-shaped junction. In this case, the discrete energies are $E_1= -0.3~\mathrm{eV}$ and $E_2= 0.3~\mathrm{eV}$; by the energy symmetry, only the LDOS at $E_1= -0.3~\mathrm{eV}$ is plotted in Fig. 2(b). Since the LDOS corresponds to the squared amplitude of the wave function, Fig. 2(b) illustrates the spatial localization in the Z-shaped junction: the discrete states are localized in the junction region, especially along the edge of the junction. This clearly demonstrates that these discrete states arise from quantum confinement, so the Z-junction design can be used as a quantum dot device. Unlike previous designs [@12; @13; @14], quantum confinement in our study is due to the change in the network connectivity. The confined states are attributed to the surrounding barriers formed at the interconnection between the armchair GNR and the zigzag GNR, a mechanism that has been used to explain quantum dots in nanotube heterojunctions [@2; @3].
Perfect GNRs seldom exist in reality. Even if initially perfect, once physically adsorbed on a surface, GNRs will experience disorder due to the interaction with the substrate [@17]. This disorder can change their wave functions and affect their usage in nanoelectronic devices. According to the potential range of the disorder, the effects can be classified as long range or short range. The long-range effect (potential varying over distances longer than the lattice constant) is caused by substrate charges, while the short-range effect (potential varying rapidly on the scale of the lattice constant) is caused by residual interactions. The effect of these disorders can be simulated by adjusting the corresponding elements of the Hamiltonian matrix [@17]. For instance, the short-range disorder can be modeled by letting the diagonal and the nonzero off-diagonal matrix elements fluctuate independently around their initial values, with variances $\sigma_\epsilon$ for diagonal elements and $\sigma_\gamma$ for nonzero off-diagonal elements. The long-range disorder can be simulated by introducing a 2D Gaussian potential of the form $V(r)=V_0\exp(-r^2/2\sigma^2)$, centered around a carbon site, that shifts the diagonal elements. In the following calculations, the disorder is restricted to the junction region for simplicity.
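The two disorder models just described can be sketched as follows (a schematic stand-in for the actual implementation; `H` is any real symmetric tight-binding matrix and `pos` holds the site coordinates):

```python
import numpy as np

rng = np.random.default_rng(0)

def short_range(H, sigma_eps, sigma_gam):
    """Fluctuate on-site energies (strength sigma_eps) and existing
    hoppings (strength sigma_gam) independently, keeping H symmetric."""
    n = H.shape[0]
    Hd = H.astype(float).copy()
    Hd[np.diag_indices(n)] += rng.normal(0.0, sigma_eps, n)
    noise = rng.normal(0.0, sigma_gam, (n, n))
    noise = (noise + noise.T) / 2.0           # symmetric fluctuation
    mask = (H != 0) & ~np.eye(n, dtype=bool)  # only existing bonds
    Hd[mask] += noise[mask]
    return Hd

def long_range(H, pos, center, V0, sigma):
    """Gaussian potential V0*exp(-r^2/2 sigma^2) added to the diagonal."""
    r2 = np.sum((pos - center) ** 2, axis=1)
    return H + np.diag(V0 * np.exp(-r2 / (2.0 * sigma ** 2)))
```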
The effects of substrate-induced static disorder on the confined states in the Z-shaped junction are shown in Fig. 2(c), (d) and (e). To see the disorder effects more clearly, we consider only one type of disorder in each of Fig. 2(c)-(e). In Fig. 2(c), we suppose that only the short-range disorder with variance $\sigma_\gamma$ exists within the junction region. For a C-C bond of length $d$, the corresponding nonzero off-diagonal element $\gamma$ changes by $\delta\gamma=\alpha\,\delta d$ with $\alpha\simeq47~\mathrm{eV/nm}$ [@18]. Thus in Fig. 2(c), the values $\sigma_\gamma= 0.07, 0.35$ and $0.7~\mathrm{eV}$ correspond to C-C bond-length changes of $\pm1\%, \pm5\%$ and $\pm10\%$. Even when the bond length changes by $10\%$, the DOS does not change much apart from a slight shift of the discrete peaks; this off-diagonal short-range disorder has almost no impact on the confined states. In addition, the DOS remains symmetric around $E=0~\mathrm{eV}$, which indicates that this type of disorder cannot break the electron-hole symmetry of the junction device. Secondly, in Fig. 2(d), we consider only the short-range disorder $\sigma_\epsilon$. With disorder strengths as large as $\sigma_\epsilon= 0.5, 1.0$ and $2.0~\mathrm{eV}$, the confined states fortunately still exist. Unlike the former case, the positions of the confined states shift more dramatically under $\sigma_\epsilon$ disorder and are no longer symmetric around $E=0~\mathrm{eV}$. Lastly, for the long-range disorder, a Gaussian potential is introduced in the junction region. Shifting the potential center does not change our findings (we plot only one case in Fig. 2(e) for illustration). Compared to the $\sigma_\epsilon$ disorder, the DOS becomes more asymmetric around $E=0~\mathrm{eV}$; the two confined states, however, still exist within the gap region.
These phenomena are easily explained: the discrete states are confined by the two barriers formed by the change in the network connectivity. The substrate-induced disorder cannot change these topological barriers much, and thus they retain enough strength to confine the electron in the junction and form a quantum dot.
To extend this novel quantum dot design, we also investigate how the junction length affects the confinement. The length of the quantum dot along the junction is $L \times 2.13$ Å. Assuming no disorder in the junction region, Fig. 2(f) shows the energies of the confined states versus the length of the quantum dot. As expected, when $L$ increases, the number of confined states increases as well. Furthermore, as the device gets longer, the confined states around $E=0~\mathrm{eV}$ become almost degenerate. Within the range of $L$ studied, the energy spacing between the discrete levels is about $100~\mathrm{meV}$, except for the states around $E=0~\mathrm{eV}$. This value is larger than the thermal broadening at room temperature.
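The comparison with room temperature is simple arithmetic:

```python
kB = 8.617e-5          # Boltzmann constant, eV/K
kT = kB * 300 * 1000   # thermal energy at 300 K, in meV
print(round(kT, 1))    # ~25.9 meV, well below the ~100 meV level spacing
```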
Unlike CNT devices, GNR devices will most likely have irregular edges due to lithography. Finally, we consider the impact of edge disorder in the zigzag junction region. Here, we follow the method suggested by Areshkin [*et al.*]{} to simulate an eroded zigzag edge [@17]. This method ensures that all the C atoms remain 3-fold coordinated, with at most one H atom attached to an edge C atom, without steric problems. We restrict the erosion to the outermost layer of the edge. Fig. 3 shows how these defects affect the confined states. Similarly to the case of the $\sigma_\gamma$ disorder, a slight peak shift can be observed, but the confined states still exist even in the presence of irregular edges. This can be explained in a straightforward way: the edge disorder erodes some carbon and hydrogen atoms in the zigzag junction, and the effect of breaking bonds is equivalent to adding $-\gamma$ to the corresponding nonzero off-diagonal elements of the Hamiltonian matrix, similarly to the $\sigma_\gamma$ disorder. The effect of the eroded hydrogen can be neglected, because carbon atoms form strong covalent bonds with hydrogen atoms, and the $\sigma$ bands of these bonds lie far from the Fermi surface and need not be considered. This result is extremely important and provides more convenience and flexibility for implementing the structure in experiments. With increasing width of the GNR junction, the ribbon behaviour will gradually approach that of 2D graphene and the edge effects will become much weaker; a wider GNR junction should therefore tolerate an even greater degree of edge disorder. These imperfections do not change the behaviour of our quantum dot device, which is very important for future nanofabrication.
In summary, we propose a quantum dot device design using a Z-shaped GNR junction. This system can completely confine electronic states through the topological structure of the Z-junction. By varying the length of the junction, the spatial confinement and the number of discrete levels can be modified. In addition, substrate-induced static disorder and irregular GNR edges do not destroy these confined states. These findings suggest a convenient way to fabricate this structure experimentally. In our on-going study, we are investigating the effect of Coulomb interactions in this structure. Overall, our contribution is to provide a simple model for designing zero-dimensional functional nanoscale electronic devices using graphene ribbons.
This work is partially supported by the National Natural Science Foundation of China with grant numbers 10574119, 10674121 and 50121202. The research is also supported by National Key Basic Research Program under Grant No. 2006CB922000, Jie Chen would like to acknowledge the funding support from the Discovery program of Natural Sciences and Engineering Research Council of Canada under Grant No. 245680.
S. Iijima, Nature **354**, 56 (1991).
L. Chico, M. P. López Sancho and M. C. Muñoz, Phys. Rev. Lett. **81**, 1278 (1998).
C. G. Rocha, T. G. Dargam, and A. Latgé, Phys. Rev. B **65**, 165431 (2002).
Yagang Yao, Qingwen Li, Jin Zhang, Ran Liu, Liying Jiao, Yuntian T. Zhu, and Zhongfan Liu, Nature Materials **6**,293 (2007).
Hongki Min, J. E. Hill, N. A. Sinitsyn, B. R. Sahu, Leonard Kleinman, and A. H. MacDonald, Phys. Rev. B **74**, 165310 (2006).
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature **438**,197 (2005).
Claire Berger, Zhimin Song, Xuebin Li, Xiaosong Wu, Nate Brown, Cécile Naud, Didier Mayou, Tianbo Li, Joanna Hass, Alexei N. Marchenkov, Edward H. Conrad, Phillip N. First and Walt A. de Heer, Science **312**,1191 (2006).
Sasha Stankovich, Dmitriy A. Dikin, Geoffrey H. B. Dommett, Kevin M. Kohlhaas, Eric J. Zimney, Eric A. Stach, Richard D. Piner, SonBinh T. Nguyen and Rodney S. Ruoff, Nature **442**, 282 (2006); Jannik C. Meyer, A. K. Geim, M. I. Katsnelson, K. S. Novoselov, T. J. Booth and S. Roth, Nature **446**, 60 (2007); Hubert B. Heersche, Pablo Jarillo-Herrero, Jeroen B. Oostinga, Lieven M. K. Vandersypen and Alberto F. Morpurgo, Nature **446**, 56 (2007).
Z. F. Wang, Ruoxi Xiang, Q. W. Shi, Jinlong Yang, Xiaoping Wang, J. G. Hou, and Jie Chen, Phys. Rev. B **74**, 125417 (2006); Z. F. Wang, Qunxiang Li, Haibin Su, Xiaoping Wang, Q. W. Shi, Jie Chen, Jinlong Yang, and J. G. Hou, Phys. Rev. B **75**, 085424 (2007); Z. F. Wang, Qunxiang Li, Huaixiu Zheng, Hao Ren, Haibin Su, Q. W. Shi, and Jie Chen, Phys. Rev. B **75**, 113406 (2007).
Kyoko Nakada, Mitsutaka Fujita, Gene Dresselhaus and Mildred S. Dresselhaus, Phys. Rev. B **54**, 17954 (1996).
Y. Son, M. L. Cohen and S. G. Louie, Nature **444**,347 (2006).
Y. Son, M. L. Cohen and S. G. Louie, Phys. Rev. Lett. **97**, 216803 (2006).
Björn Trauzettel, Denis V. Bulaev, Daniel Loss and Guido Burkard, Nature Physics **3**,192 (2007).
J. Milton Pereira, Jr., P. Vasilopoulos, and F. M. Peeters, Nano Lett **0**,4 (2007).
A. Rycerz, J. Tworzydło and C. W. J. Beenakker, Nature Physics **3**,172 (2007).
P. G. Silvestrov and K. B. Efetov, Phys. Rev. Lett. **98**, 016802 (2007).
M. P. López Sancho, J. M. López Sancho and J. Rubio, J. Phys. F: Met. Phys. **15**, 851 (1985).
S. Datta, Electronic Transport in Mesoscopic Systems (Cambridge University Press, Cambridge, England, 1995).
Denis A. Areshkin, Daniel Gunlycke and Carter T. White, Nano Lett **7**,204 (2007).
C. T. White and T. N. Todorov, Nature **393**,240 (1998).
---
abstract: |
Let $D \subset {\mathbf{R}}^n$ be a bounded domain with a Lipschitz boundary, let $1< p< \frac{2n}{n-2}$, and let $\phi$ minimize the ratio $\|\nabla u\|_{L^2}/
\|u\|_{L^p}$. We prove a reverse-Hölder inequality, finding a lower bound for $\|\phi\|_{L^{p-1}}$ in terms of $\|\phi\|_{L^p}$, in which equality holds if and only if $D$ is a ball. This result generalizes an inequality due to Payne and Rayner [@PR; @PR2] regarding eigenfunctions of the Laplacian.
author:
- 'Tom Carroll [^1] and Jesse Ratzkin [^2]'
title: ' [**An isoperimetric inequality for extremal Sobolev functions**]{}'
---
Introduction and statement of results
=====================================
Let $D \subset {\mathbf{R}}^n$ be a bounded domain with Lipschitz boundary, and let $1 < p <\frac{2n}{n-2}$ (or, $p > 1$ if $n=2$). For this range of exponents, the Sobolev embedding $W^{1,2}_0(D) \hookrightarrow L^p(D)$ is compact, and so the infimum $$\label{sobolev-quotient}
{\mathcal{C}_p}(D) =\inf \left \{ \frac{\int_D |\nabla u |^2 d\mu}
{\left (\int_D |u|^p d\mu\right )^{2/p}}: u \in W^{1,2}_0 (D),
u \not \equiv 0\right \}$$ is finite and achieved by a nontrivial function $\phi = \phi_p$.
We take this opportunity to set notation for the remainder of the paper. We denote the volume element of the usual Lebesgue measure in ${\mathbf{R}}^n$ by $d\mu$; when necessary, we denote the induced area element on a hypersurface $\Sigma \subset {\mathbf{R}}^n$ by $d\sigma$. We write the appropriate-dimensional volume of a set $\Omega$ as $|\Omega|$, [*i.e.*]{} if $\Omega \subset {\mathbf{R}}^n$ is an open set then $|\Omega| = \mu(\Omega)$, and if $\Sigma\subset {\mathbf{R}}^n$ is a hypersurface then $|\Sigma| = \sigma(\Sigma)$. If ${\mathbf{B}}_1 \subset {\mathbf{R}}^n$ is the unit ball, we denote $|{\mathbf{B}}_1| = \omega_n$, so that $|{\mathbf{B}}_r| = \omega_n r^n$ and $|\partial {\mathbf{B}}_r| =
n \omega_n r^{n-1}$. The Sobolev space $W^{1,2}_0(D)$ is the closure of $\mathcal{C}^\infty_0(D)$ under the norm $\|u\|_{W^{1,2}} ^2 = \|u\|_{L^2}^2 + \|\nabla u \|_{L^2}^2$.
An extremal function $\phi$ for the variational problem will solve the boundary value problem in $D$: $$\label{sobolev-pde}
\Delta \phi + \lambda\, \phi^{p-1} = 0 , \quad \left.
\phi \right |_{\partial D} = 0.$$ Without loss of generality we can take $\phi>0$ inside $D$. General regularity results imply that $\phi \in C^\infty_0(D)$, and a short integration by parts argument reveals that $$\label{sobolev-scaling}
\lambda = {\mathcal{C}_p}(D) \left ( \int_D \phi^p d\mu \right )^{\frac{2-p}{p}}.$$ This sharp Sobolev constant ${\mathcal{C}_p}(D)$ and its associated extremal function $\phi_p$ are both the subject of a vast literature, and incorporate much information relating the function theory and the geometry of $D$. In particular, a long string of results (for example, [@PR; @Chiti; @Alvino; @CFMP]) have uncovered isoperimetric-type inequalities of varying sorts. Our main theorem generalizes the reverse-Hölder inequalities of [@PR; @PR2; @CRpPR1], and has the following form.
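For completeness, the integration by parts argument is the following: multiplying the equation by $\phi$ and integrating, with $\phi|_{\partial D} = 0$, gives $$0 = \int_D \phi \left( \Delta \phi + \lambda\, \phi^{p-1} \right) d\mu = -\int_D |\nabla \phi|^2\, d\mu + \lambda \int_D \phi^p\, d\mu,$$ and since $\phi$ attains the infimum defining ${\mathcal{C}_p}(D)$, $$\lambda = \frac{\int_D |\nabla \phi|^2\, d\mu}{\int_D \phi^p\, d\mu} = \frac{{\mathcal{C}_p}(D) \left( \int_D \phi^p\, d\mu \right)^{2/p}}{\int_D \phi^p\, d\mu} = {\mathcal{C}_p}(D) \left( \int_D \phi^p\, d\mu \right)^{\frac{2-p}{p}}.$$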
\[main-thm\] Let $D \subset {\mathbf{R}}^n$ be a bounded domain with Lipschitz boundary, let ${\mathcal{C}_p}(D)$ be the sharp Sobolev constant defined above, and let $\phi$ be its associated extremal function. Let $D^*$ be a ball with the same volume as $D$. Then $$\label {main-ineq}
\left ( \int_D \phi^{p-1}\, d\mu \right )^2 \geq |D|^{\frac{n-2}{n}}
\left ( \int_D\phi^p\, d\mu \right )^{\frac{2(p-1)}{p}}
\left [ \frac{2n^2 \omega_n^{2/n}}{p\,{\mathcal{C}_p}(D)} -
(n-2)\frac{ n\, \omega_n^{\frac{2}{n} + \frac{p^2-p+2}{p(p-1)}}}{C_p(D^*)}
\right ].$$ Equality holds if and only if $D$ is a ball.
- The Hölder inequality implies that for any $u \in W^{1,2}_0$ we have $$\int_D |u|^{p-1}\, d\mu \leq |D|^{1/p} \left ( \int_D |u|^p \,d\mu \right )^{\frac{p-1}{p}}.$$ For this reason, bounds in the opposite direction, such as the one in Theorem \[main-thm\], are called reverse-Hölder inequalities.
- Observe that we recover the main inequality of [@PR2] in the case $p=2$, and we recover the reverse-Hölder inequality of [@CRpPR1] in the case $n=2$.
[Acknowledgements:]{} T. C. is partially supported by the programme of the ESF Network Harmonic and Complex Analysis and Applications (HCAA). J. R. is partially supported by a Carnegie Research grant from the University of Cape Town.
Proof of the main theorem {#proof-main-thm}
=========================
We begin by briefly outlining our strategy for proving the main inequality, which we adapted from Payne and Rayner's proof in [@PR2]. Let $M = \sup_{x \in D} \phi(x)$, and for $0 \leq t \leq M$ define $$D_t = \{x \in D: \phi (x) > t\}, \qquad \Sigma_t = \{x \in D: \phi (x) = t\}.$$ By Sard's theorem, we have $\Sigma_t = \partial D_t$ for almost every value of $t$. To prove the main inequality we define the auxiliary function $$H(t) = \int_{D_t} \phi^{p-1} \,d\mu = \int_t^M \tau^{p-1} \int_{\Sigma_\tau}
\frac{d\sigma}{|\nabla \phi|}\, d\tau, \quad t\in [0,M].$$ In Section \[diff-ineq\] we derive lower bounds for the second derivative of $H$, and in Section \[int-ineq\] we integrate these to obtain several integral inequalities for $H$ and for powers of $\phi$. In Section \[radially-symm\] we examine the one-dimensional eigenvalue problem which arises in the radially symmetric case, and in Section \[main proof\] we complete the proof of the main theorem.
Differential inequalities {#diff-ineq}
-------------------------
We let $V(t) = |D_t|$. Then, by the co-area formula, $$V'(t) = -\int_{\Sigma_t}\frac{d\sigma}{|\nabla \phi|} < 0.$$ Thus $V$ is a monotone function of $t$, and we can invert it to obtain $t = t(V)$, with $$\frac{dt}{dV} = \frac{1}{V'(t)} = -\frac{1}{\int_{\Sigma_t} \frac{d\sigma}
{|\nabla \phi|}}.$$ This in turn implies that $$\label{change-var1}
\frac{dH}{dV} = \frac{dH}{dt}\frac{dt}{dV} = \left ( -t^{p-1} \int_{\Sigma_t} \frac{d\sigma}
{|\nabla \phi|}\right ) \cdot\left (- \frac{1}{\int_{\Sigma_t} \frac{d\sigma}{|\nabla \phi|}}
\right ) = t^{p-1},$$ a relation which will prove quite useful in our computations. Taking one more derivative shows that $$\frac{d^2H}{dV^2} = \frac{d}{dV} \big(t^{p-1}\big) = -\frac{(p-1)\,t^{p-2}}
{\int_{\Sigma_t} \frac{d\sigma}{|\nabla \phi|}}.$$
The function $H$ satisfies $$\label {comparison-ode1}
\frac{d^2 H}{dV^2} \geq -(p-1) (t(V))^{p-2} \frac{\lambda\, H(V)}
{n^2 \omega_n^{2/n} V^{\frac{2(n-1)}{n}}},
\quad V \in \big[ 0,\vert D \vert \big],$$ with the boundary conditions $H(0) = 0$ and $H'(|D|) = 0$. Moreover, equality in this estimate forces $D$ to be a ball and the function $\phi$ to be radially symmetric.
By the Cauchy-Schwarz inequality, $$|\Sigma_t|^2 \leq \left ( \int_{\Sigma_t} |\nabla \phi| \,d\sigma \right )
\left (\int_{\Sigma_t} \frac{d\sigma}{|\nabla \phi|} \right ),$$ which we can rearrange to read $$\label {cauchy-schwarz1}
\int_{\Sigma_t} \frac{d\sigma}{|\nabla \phi |} \ \geq\ \frac{|\Sigma_t|^2}
{\int_{\Sigma_t} |\nabla \phi| \,d\sigma}.$$ Since $\Sigma_t$ is a level-set of $\phi$, we may use the divergence theorem and to obtain $$\begin{aligned}
\int_{\Sigma_t} |\nabla \phi|\,d\sigma & = & -\int_{\Sigma_t} \frac{\partial \phi}
{\partial \eta} \,d\sigma = - \int_{D_t} \Delta \phi \,d\mu \\
& = & \lambda \int_{D_t} \phi^{p-1}\,d\mu = \lambda H(t).\end{aligned}$$ Combining this with the Cauchy-Schwarz bound above, we obtain $$\label{cauchy-schwarz2}
\int_{\Sigma_t} \frac{d\sigma}{|\nabla \phi |} \ \geq\ \frac{|\Sigma_t|^2}
{\lambda\,H(t)}.$$ By the classical isoperimetric inequality, $$\label{isop-ineq1}
|\Sigma_t|^2 \geq n^2 \omega_n^{2/n} |D_t|^{\frac{2(n-1)}{n}}
= n^2 \omega_n^{2/n} V^{\frac{2(n-1)}{n}}.$$ Together with the preceding estimate, this shows that $$\begin{aligned}
\frac{d^2 H}{dV^2} & = & - (p-1) \frac{t^{p-2}}{\int_{\Sigma_t}
\frac{d\sigma}{|\nabla \phi|}} \geq -(p-1)\, t^{p-2} \,\frac{\lambda H(V)}
{|\Sigma_t|^2} \\
& \geq & -(p-1)\,t^{p-2} \,\frac{\lambda H(V)}{n^2 \omega_n^{2/n}
V^\frac{2(n-1)}{n}}.
\end{aligned}$$ Notice that the boundary conditions for this differential inequality are $$\label{H-bc-V}
H(0) = 0, \qquad H' (|D|) =\left. t^{p-1} \right |_{t=0} = 0.$$ Moreover, we only have equality for each $V$ in $\big[ 0,\vert D \vert \big]$ if we have equality in the isoperimetric inequality for almost every $t$, which in turn implies that $\Sigma_t$ is a round sphere for almost every $t \in [0,M]$. This is possible only if $D$ is itself a ball. Also, equality forces equality in the Cauchy-Schwarz inequality, which implies $\nabla \phi$ must be constant on each sphere $\Sigma_t$, and so $\phi$ must be radial.
We change variables by letting $\rho = (V/\omega_n)^{1/n}$ be the volume radius of $D_t$, so that $V = |D_t| = \omega_n \rho^n$. We also define $\rho_M = (|D|/\omega_n)^{1/n}$. As a function of $\rho$, the function $H$ satisfies the boundary conditions $$\label{H-bc-rho}
H(0) = H'(0) = \cdots = H^{(n-1)}(0) = 0, \qquad H'(\rho_M) = 0.$$
$$\label{comparison-ode2}
\frac{d}{d\rho} \left [ \left( \rho^{1-n} \frac{dH}{d\rho} \right )^{\frac{1}{p-1}} \right ]
\geq -\frac{\lambda}{(n\omega_n)^{\frac{p-2}{p-1}}} \,\rho^{1-n} \,H(\rho),
\quad 0 < \rho < \rho_M.$$
Taking derivatives, we see that $$\label{change-var2}
\frac{dH}{d V} = \frac{\rho^{1-n}}{n\omega_n} \frac{d H}{d \rho},
\quad \frac{d^2 H}{dV^2} = \frac{\rho^{1-n}}{n\omega_n}
\frac{d}{d\rho} \left (\frac{\rho^{1-n}}{n\omega_n} \frac{dH}{d\rho}
\right ).$$ Substituting these expressions in gives $$\label{comparison-ode3}
\frac{d}{d\rho} \left ( \rho^{1-n} \frac{dH}{d\rho} \right )
\geq -(p-1)\,\lambda\, t^{p-2}\, \rho^{1-n}\, H(\rho).$$ However, $$t^{p-1} = \frac{dH}{dV} = \frac{\rho^{1-n}}{n\omega_n} \,\frac{dH}{d\rho},$$ so that becomes $$\frac{d}{d\rho} \left ( \rho^{1-n} \,\frac{dH}{d\rho} \right )\geq
-\frac{(p-1)\, \lambda}{(n\omega_n)^{\frac{p-2}{p-1}}} \left ( \rho^{1-n}
\,\frac{dH}{d\rho} \right )^{\frac{p-2}{p-1}} \rho^{1-n}\,H(\rho).$$ This we can rewrite as $$\frac{1}{p-1} \frac{\frac{d}{d\rho} \left ( \rho^{1-n} \frac{dH}{d\rho} \right )}
{\left ( \rho^{1-n} \frac{dH}{d\rho} \right )^{\frac{p-2}{p-1}} }
= \frac{d}{d\rho} \left [ \rho^{1-n} \left (\frac{dH}{d\rho} \right )^{\frac{1}{p-1}}
\right ] \geq -\frac{\lambda}{(n\omega_n)^{\frac{p-2}{p-1}}}\, \rho^{1-n}\, H(\rho).
\qedhere$$
Since is really the same as rewritten in different variables, equality holds in for $0 < \rho < \rho_M$ if and only if $D$ is a ball and $\phi$ is radial.
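The chain-rule bookkeeping behind this change of variables can be spot-checked numerically. The following sketch (our own illustration; the test profile $H(\rho)=\rho^4$ and the sample values are arbitrary) verifies both the relation $\frac{dH}{dV} = \frac{\rho^{1-n}}{n\omega_n}\frac{dH}{d\rho}$ for $V=\omega_n\rho^n$ and the rewriting $\frac{1}{p-1}\,u'/u^{\frac{p-2}{p-1}} = \frac{d}{d\rho}\,u^{\frac{1}{p-1}}$ used in the last display, with $u=\rho^{1-n}\,dH/d\rho$:

```python
import math

# Illustrative sanity check (not part of the paper): pick arbitrary values.
n, p = 3, 2.5
omega_n = 4.0 * math.pi / 3.0          # volume of the unit ball in R^3

H = lambda rho: rho**4                  # arbitrary smooth test profile
dH = lambda rho: 4.0 * rho**3

rho0 = 0.7
V0 = omega_n * rho0**n                  # V = omega_n * rho^n
h = 1e-6
rho_of_V = lambda V: (V / omega_n)**(1.0 / n)
# central difference of H viewed as a function of V
dH_dV = (H(rho_of_V(V0 + h)) - H(rho_of_V(V0 - h))) / (2.0 * h)
assert math.isclose(dH_dV, rho0**(1 - n) / (n * omega_n) * dH(rho0), rel_tol=1e-6)

# the rewriting: (1/(p-1)) * u' / u^((p-2)/(p-1)) == d/drho [u^(1/(p-1))]
u = lambda rho: rho**(1 - n) * dH(rho)  # u = rho^(1-n) H'(rho) > 0
lhs = (u(rho0 + h)**(1/(p - 1)) - u(rho0 - h)**(1/(p - 1))) / (2.0 * h)
du = (u(rho0 + h) - u(rho0 - h)) / (2.0 * h)
rhs = du / ((p - 1) * u(rho0)**((p - 2) / (p - 1)))
assert math.isclose(lhs, rhs, rel_tol=1e-5)
print("chain-rule identities verified")
```

Both identities are exact; the finite differences only introduce $O(h^2)$ error.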
Integral inequalities {#int-ineq}
---------------------
In this section we integrate and to obtain inequalities for the integral of $H$ and the integral of powers of $\phi$. As each of these inequalities is an integrated form of and , equality holds if and only if $D$ is a ball and $\phi$ is radial.
$$\begin{aligned}
\label {comparison-ode4}
\left ( \int_D \phi^{p-1} \,d\mu \right )^2 &\geq &\frac{2n^2 \omega_n^{2/n}
|D|^{\frac{n-2}{n}}}{p\, {\mathcal{C}_p}(D)} \left ( \int_D \phi^p\,d\mu \right)^{\frac{2(p-1)}{p} }\\ \nonumber
&&\qquad\qquad - \frac{n-2}{n} |D|^{\frac{n-2}{n}} \int_0^{|D|}
V^{\frac{2(1-n)}{n}} H^2(V) \,dV. \end{aligned}$$
We multiply the inequality by $\frac{p}{p-1} V \left ( \frac{dH}{dV} \right )^{1/(p-1)}$ and integrate from $0$ to $|D|$. Upon integration, the left hand side becomes $$\begin{aligned}
\int_0^{|D|} \frac{p}{p-1} V \left ( \frac{dH}{dV}
\right )^{1/(p-1)} \frac{d^2 H}{dV^2}\,dV & = & \int_0^{|D|}
V \frac{d}{dV} \left[\left (\frac{dH}{dV}\right ) ^{p/(p-1)}\right]\, dV \\
& = & V \left. \left ( \frac{dH}{dV} \right )^{p/(p-1)} \right|_0^{|D|}
- \int_0^{|D|} \left (\frac{dH}{dV} \right )^{p/(p-1)} dV \\
& = & - \int_0^{|D|} \big(t^{p-1} (V)\big)^{p/(p-1)} dV \\
& = & -\int_0^{|D|} t^p(V) dV = -\int_D\phi^p d\mu.\end{aligned}$$ The boundary terms in the integration by parts vanished since $H'(\vert D \vert)=0$, while (2.1) was used at the third step. On the other hand, using again, the right hand side becomes $$\begin{aligned}
-\frac{p\,\lambda}{n^2 \omega_n^{2/n}} \int_0^{|D|}
V \left (\frac{dH}{dV}\right ) ^{1/(p-1)} & t^{p-2} H(V) V^{\frac{2(1-n)}{n}}\, dV\\
& = -\frac{p\,\lambda}{n^2 \omega_n^{2/n}} \int_0^{|D|} t^{p-2}
\left ( \frac{dH}{dV} \right )^{\frac{2-p}{p-1}} V^{\frac{2-n}{n}} H(V)
\frac{dH}{dV}\, dV \\
& = -\frac{p\,\lambda}{n^2 \omega_n^{2/n}} \int_0^{|D|}t^{p-2}
(t^{p-1})^{\frac{2-p}{p-1}} V^{\frac{2-n}{n}} H(V) \frac{dH}{dV}\, dV \\
& = -\frac{p\,\lambda}{n^2 \omega_n^{2/n}}\int_0^{|D|} V^{\frac{2-n}{n}}
H(V) \frac{dH}{dV}\, dV\end{aligned}$$ We combine these last two equations and replace $\lambda$ by ${\mathcal{C}_p}(D) \left (\int_D \phi^p d\mu \right ) ^{(2-p)/p}$ to obtain $$\begin{aligned}
\left ( \int_D \phi^p \,d\mu \right )^{\frac{2(p-1)}{p}} & \leq &
\frac{p\,{\mathcal{C}_p}(D)}{n^2 \omega_n^{2/n}}
\int_0^{|D|} V^{\frac{2-n}{n}} H(V) \,\frac{dH}{dV}\, dV\\
& = & \frac{p\, {\mathcal{C}_p}(D)}{2n^2 \omega_n^{2/n}}\int_0^{|D|}
V^{\frac{2-n}{n}} \frac{d}{dV} \big(H^2(V)\big) \,dV \\
& = & \frac{p\,{\mathcal{C}_p}(D)}{2n^2 \omega_n^{2/n}} \left [
|D|^{\frac{2-n}{n}} \left ( \int_D \phi^{p-1} d\mu \right )^2 +
\frac{n-2}{n} \int_0^{|D|} H^2(V) \,V^{\frac{2(1-n)}{n}}\, dV \right ],\end{aligned}$$ which we can rearrange to give .
$$\label {comparison-ode5}
\int_0^{\rho_M} \rho^{\frac{1-n}{p-1}} \left ( \frac{dH}{d\rho}
\right )^{\frac{p}{p-1}} \leq \frac{\lambda}{(n\omega_n
)^{\frac{p-2}{p-1}}} \int_0^{\rho_M} \rho^{1-n} H^2(\rho) d\rho.$$
Equality holds if and only if $D$ is a ball.
We multiply by $H$ and integrate from $0$ to $\rho_M$. The boundary conditions imply that $\rho^{1-n}\,\frac{dH}{d\rho}$ is bounded at $0$. Hence the boundary terms vanish in the integration by parts below, and we obtain that $$\int_0^{\rho_M}\rho^{\frac{1-n}{p-1}} \left ( \frac{dH}{d\rho} \right )^{\frac{p}{p-1}}\,d\rho
=
\int_0^{\rho_M} \left [ \rho^{1-n}\,\frac{dH}{d\rho} \right]^{\frac{1}{p-1}} \frac{dH}{d\rho}\, d\rho
\leq \frac{\lambda}{(n\omega_n)^{\frac{p-2}{p-1}}} \int_0^{\rho_M} \rho^{1-n} H^2(\rho) d\rho.$$
With $\phi$, $H$, and $\rho$ defined as above, $$\label {change-var3}
\int_0^{\rho_M} \rho^{\frac{1-n}{p-1}} \left ( \frac{dH}{d\rho}
\right )^{\frac{p}{p-1}} \,d\rho= (n\omega_n)^{\frac{1}{p-1}}\int_D
\phi^p\, d\mu.$$
We use and to conclude $$\begin{aligned}
\int_0^{\rho_M} \rho^{\frac{1-n}{p-1}} \left ( \frac{dH}
{d\rho} \right )^{\frac{p}{p-1}} & = & \int_0^{\rho_M}
(\rho^{1-n})^{\frac{p}{p-1}} \left ( \frac{dH}{d\rho} \right
)^{\frac{p}{p-1}} \rho^{n-1} d\rho \\
& = & \int_0^{|D|}\left (\rho^{1-n} \frac{dH}{d\rho}
\right )^{\frac{p}{p-1}} \frac{dV}{n\omega_n} \\
&= &\int_0^{|D|} \left ( n\omega_n\frac{dH}{dV} \right
)^{\frac{p}{p-1}} \frac{dV}{n\omega_n} \\
& = & (n\omega_n)^{\frac{1}{p-1}} \int_0^{|D|}
\left (\frac{dH}{dV} \right )^{\frac{p}{p-1}}dV\\
& =& (n\omega_n)^{\frac{1}{p-1}}\int_0^{|D|}
(t^{p-1})^{\frac{p}{p-1}} dV \\
& = & (n\omega_n)^{\frac{1}{p-1}} \int_0^{|D|}t^p dV
= (n\omega_n)^{\frac{1}{p-1}} \int_D \phi^p d\mu.
\qedhere\end{aligned}$$
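This identity can also be confirmed numerically for a concrete radial profile. Below (our illustration, not from the paper) we take $n=2$, $p=3$, $D$ the unit disc, and $\phi(x)=1-|x|^2$, so that $t(\rho)=1-\rho^2$ and, by the chain rule, $dH/d\rho = t^{p-1}\, n\omega_n \rho^{n-1}$; both sides then equal $(2\pi)^{3/2}/8$:

```python
import numpy as np

# Illustrative check of the identity with a concrete radial profile.
n, p = 2, 3
omega_n = np.pi                             # area of the unit disc
rho_M = 1.0

N = 100000
rho = (np.arange(N) + 0.5) * (rho_M / N)    # midpoint grid on (0, rho_M)
t = 1.0 - rho**2                            # phi is radial: t(rho) = phi(rho)
dH_drho = t**(p - 1) * n * omega_n * rho**(n - 1)

lhs = np.sum(rho**((1 - n)/(p - 1)) * dH_drho**(p/(p - 1))) * (rho_M / N)
# int_D phi^p dmu for a radial phi: n * omega_n * int_0^1 phi(r)^p r^(n-1) dr
int_phi_p = n * omega_n * np.sum(t**p * rho**(n - 1)) * (rho_M / N)
rhs = (n * omega_n)**(1/(p - 1)) * int_phi_p

assert np.isclose(lhs, rhs, rtol=1e-6)
print(lhs, rhs)
```

Collapsing the exponents by hand, the left integrand simplifies to $(2\pi)^{3/2}\rho(1-\rho^2)^3$, which integrates to the same closed-form value.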
$$\label{change-var4}
n\omega_n \left ( \int_D \phi^p \,d\mu \right )^{\frac{2(p-1)}{p}}\leq
\ {\mathcal{C}_p}(D)\, \int_0^{\rho_M} \rho^{1-n} H^2(\rho) \,d\rho.$$
Moreover, we have equality if and only if $D$ is a ball and $\phi$ is radial.
Combine , , and .
$$\begin{aligned}
\label {comparison-ode6}
\left ( \int_D \phi^{p-1} \,d\mu \right )^2 & \geq &
\frac{2n^2 \omega_n^{2/n} |D|^{\frac{n-2}{n}}}
{p \, {\mathcal{C}_p}(D)} \left (\int_D \phi^p d\mu \right )^{\frac{2(p-1)}{p}}
\\ \nonumber
&&\qquad\qquad- (n-2)\,\omega_n^{\frac{2-n}{n}} |D|^{\frac{n-2}{n}}
\int_0^{\rho_M} \rho^{1-n} H^2(\rho) \,d\rho.\end{aligned}$$
Equality holds if and only if $D$ is a ball.
Since $\rho(V) = \big( V/\omega_n\big)^{1/n}$, we have $$n \omega_n^{1/n} \frac{d \rho}{dV} V^{\frac{n-1}{n}} = 1,$$ so that $$\begin{aligned}
\int_0^{|D|} V^{\frac{2(1-n)}{n}} H^2(V) \,dV& = &
\int_0^{|D|} H^2(\rho) \,\omega_n^{\frac{2(1-n)}{n}} \rho^{2(1-n)}
\,n \omega_n^{1/n}\, (\omega_n \rho^n)^{\frac{n-1}{n}} \frac{d\rho}{dV} \,dV \\
& = & n \omega_n^{\frac{2-n}{n}} \int_0^{\rho_M} \rho^{1-n} H^2(\rho) \,d\rho .\end{aligned}$$ Putting this into gives .
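As a quick numerical check of this substitution (our own illustration), take $n=2$ and the test profile $H(\rho)=\rho^3$; both integrals then equal $1/3$:

```python
import numpy as np

# Check: int_0^{|D|} V^{2(1-n)/n} H^2 dV = n * omega_n^{(2-n)/n} * int_0^{rho_M} rho^{1-n} H^2 drho
n = 2
omega_n = np.pi
rho_M = 1.0
V_M = omega_n * rho_M**n

H = lambda rho: rho**3                      # arbitrary test profile vanishing at 0

N = 100000
# left side: integrate in the volume variable V
V = (np.arange(N) + 0.5) * (V_M / N)
rho_of_V = (V / omega_n)**(1.0 / n)
lhs = np.sum(V**(2*(1 - n)/n) * H(rho_of_V)**2) * (V_M / N)
# right side: integrate in the volume radius rho
rho = (np.arange(N) + 0.5) * (rho_M / N)
rhs = n * omega_n**((2 - n)/n) * np.sum(rho**(1 - n) * H(rho)**2) * (rho_M / N)

assert np.isclose(lhs, rhs, rtol=1e-5)
```

With these choices the left integrand is $V^2/\pi^3$ and the right one is $2\rho^5$, so each side is $1/3$ in closed form.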
The radially symmetric case {#radially-symm}
---------------------------
Motivated by and , we define $\Lambda_*$ by $$\label{lambda-star}
\Lambda_* = \inf \left \{ \left. \left (\int_0^{\rho_M} \rho^{\frac{1-n}{p-1}}
\,f'(\rho)^{\frac{p}{p-1}} \,d\rho \right )^{\frac{2(p-1)}{p}} \right/
\int_0^{\rho_M} \rho^{1-n}\, f^2 (\rho)\, d\rho \right \}$$ where the infimum is over all functions on $[0,\rho_M]$ for which $$\label{lambda-star-bc}
f(0) = f'(0) = \cdots = f^{(n-1)}(0)=0 = f'(\rho_M),\quad f \not \equiv 0 .$$
Notice that we have rescaled the numerator to make the quotient scale-invariant. This does not, however, affect the Euler-Lagrange equation involved.
The Euler-Lagrange equation for the variational problem , with the boundary conditions , is $$\label{Euler-Lagrange}
f''(\rho) - \frac{n-1}{\rho}\,f'(\rho) + \Lambda \big[ \rho^{1-n}
f'(\rho) \big]^{\frac{p-2}{p-1}}\, f(\rho) = 0.$$
Since the ratio defining $\Lambda_*$ is scale-invariant, we may restrict our attention to either of the constrained critical point problems: $$\text{minimize }\int_0^{\rho_M} \rho^{\frac{1-n}{p-1}}
\,f'(\rho)^{\frac{p}{p-1}}\,d\rho\ \text{ subject to }
\int_0^{\rho_M} \rho^{1-n} f^2 d\rho\ = \text{ constant}$$ or $$\text{maximize } \int_0^{\rho_M}\rho^{1-n} f^2 d\rho\
\text{ subject to }\int_0^{\rho_M} \rho^{\frac{1-n}{p-1}}
\,f'(\rho)^{\frac{p}{p-1}}\,d\rho\ = \text{ constant.}$$ Regardless, the method of Lagrange multipliers implies that a constrained critical point $f$ satisfies $$\left. \frac{d}{d\epsilon} \right |_{\epsilon = 0} \int_0^{\rho_M}
\rho^{\frac{1-n}{p-1}} \left ( \frac{df}{d\rho} + \epsilon \frac{dg}{d\rho}
\right )^{\frac{p}{p-1}}d\rho = \Lambda \left. \frac{d}{d\epsilon}
\right |_{\epsilon = 0} \int_0^{\rho_M} \rho^{1-n}\,\big[f(\rho)+ \epsilon\, g(\rho)\big]^2
d\rho,$$ for any admissible $g$. Having evaluated these derivatives, we use the boundary conditions to see that $\rho^{1-n}\,f'(\rho)$ is bounded at $0$ and that consequently the boundary terms arising from integration by parts vanish, and find that $$\begin{aligned}
2\Lambda \int_0^{\rho_M} & \rho^{1-n}\, f(\rho)\,g(\rho)\, d\rho \\
&=
\frac{p}{p-1} \int_0^{\rho_M} \rho^{\frac{1-n}{p-1}}
\left ( \frac{df}{d\rho} \right )^{\frac{1}{p-1}} \frac{dg}{d\rho}\, d\rho \\
& = -\frac{p}{p-1} \int_0^{\rho_M} g(\rho) \frac{d}{d\rho} \left [
\rho^{\frac{1-n}{p-1}} \left (\frac{df}{d\rho} \right )^{\frac{1}{p-1}}
\right ] \, d \rho \\
& = -\frac{p}{p-1} \int_0^{\rho_M} g(\rho) \left [ \frac{1}{p-1}
\,\rho^{\frac{1-n}{p-1}} \left ( \frac{df}{d\rho} \right )^{\frac{2-p}{p-1}}
\frac{d^2 f}{d\rho^2} + \frac{1-n}{p-1}\, \rho^{\frac{2-p-n}{p-1}}
\left ( \frac{df}{d\rho} \right )^{\frac{1}{p-1}} \right ]\,d\rho.\end{aligned}$$ This must hold for all choices of $g$, hence (absorbing a factor of $2p/(p-1)^2$ into the Lagrange multiplier $\Lambda$) we must have $$\begin{aligned}
0 & = & \rho^{\frac{1-n}{p-1}} f'(\rho)^{\frac{2-p}{p-1}} f''(\rho) - (n-1)\,
\rho^{\frac{2-p-n}{p-1}}f'(\rho)^{\frac{1}{p-1}} + \Lambda \, \rho^{1-n}\,f(\rho)\\
& = & \rho^{\frac{1-n}{p-1}} f'(\rho)^{\frac{2-p}{p-1}}
\left [ f''(\rho) - (n-1) \rho^{-1} f'(\rho) + \Lambda \left [ \rho^{1-n}\, f'(\rho) \right ]^{\frac{p-2}{p-1}}
f(\rho) \right ],\end{aligned}$$ as claimed.
Let $D^*$ be the ball ${\mathbf{B}}_{\rho_M}$ of radius $\rho_M$. Then, $$\label {comparison-ode7}
\Lambda_* \leq (n \omega_n)^{\frac{2-p}{p}} {\mathcal{C}_p}(D^*).$$
We use the function $H(\rho)$ for the ball ${\mathbf{B}}_{\rho_M}$ as a test function for the quotient defining $\Lambda_*$ and use the inequalities , , and : $$\begin{aligned}
\Lambda_* & \leq \left.\left ( \int_0^{\rho_M} \rho^{\frac{1-n}{p-1}}
H'(\rho)^{\frac{p}{p-1}} \,d\rho \right )^{\frac{2(p-1)}{p}} \right/
\int_0^{\rho_M} \rho^{1-n}\, H^2(\rho) \,d\rho \\
& \leq \frac{\Lambda}{(n\omega_n)^{\frac{p-2}{p-1}}}
\left ( \int_0^{\rho_M} \rho^{\frac{1-n}{p-1}} H'(\rho)^{\frac{p}{p-1}}\, d\rho \right )^{\frac{p-2}{p}} \\
& = \frac{1}{(n\omega_n)^{\frac{p-2}{p-1}}}\, {\mathcal{C}_p}(D^*)
\left ( \int_{D^*} \phi^p\, d\mu \right )^{\frac{2-p}{p}}
\left [ \frac{1}{(n\omega_n)^{\frac{1}{p-1}}} \int_{D^*}
\phi^p \,d\mu \right ]^{\frac{p-2}{p}} \\
& = (n\omega_n)^{\frac{2-p}{p}}\, {\mathcal{C}_p}(D^*). \qedhere\end{aligned}$$
In order to obtain a lower bound for $\Lambda_*$ in terms of ${\mathcal{C}_p}(D^*)$, we first need to relate the particular $\Lambda$ occurring in the Euler-Lagrange equation to the eigenvalue $\Lambda_*$, just as relates the number $\lambda$ occurring in the Euler-Lagrange equation to the eigenvalue ${\mathcal{C}_p}(D)$.
\[star-lemma\] Let $f$ be a minimizer for $\Lambda_*$ given by with the boundary conditions , satisfying the Euler-Lagrange equation , written as $$\label{Euler-Lagrange-2}
\frac{d}{d\rho} \left[ \rho^{\frac{1-n}{p-1}} f'(\rho)^{\frac{1}{p-1}}\right] + \Lambda\, \rho^{1-n}\,f(\rho) = 0.$$ Then $$\label{star-scaling}
\Lambda = \Lambda_* \, \left (\int_0^{\rho_M} \rho^{\frac{1-n}{p-1}}
\,f'(\rho)^{\frac{p}{p-1}} \,d\rho \right )^{\frac{2-p}{p}}.$$
Multiply the Euler-Lagrange equation across by $f(\rho)$ and integrate from $0$ to $\rho_M$ to obtain $$\int_0^{\rho_M}f(\rho)\,\frac{d}{d\rho} \left[ \rho^{\frac{1-n}{p-1}} f'(\rho)^{\frac{1}{p-1}}\right]\,d\rho
+ \Lambda\, \int_0^{\rho_M}\rho^{1-n}\,f(\rho)^2\,d\rho = 0.$$ Integrating by parts in the first term and using the boundary conditions gives $$\int_0^{\rho_M}f(\rho)\,\frac{d}{d\rho} \left[ \rho^{\frac{1-n}{p-1}} f'(\rho)^{\frac{1}{p-1}}\right]\,d\rho
=
- \int_0^{\rho_M} \rho^{\frac{1-n}{p-1}} f'(\rho)^{\frac{p}{p-1}}\,d\rho,$$ from which it follows that $$\Lambda = \left. \int_0^{\rho_M} \rho^{\frac{1-n}{p-1}} f'(\rho)^{\frac{p}{p-1}}\,d\rho
\right/ \int_0^{\rho_M}\rho^{1-n}\,f(\rho)^2\,d\rho.$$ We can use to write $\int_0^{\rho_M}\rho^{1-n}\,f^2(\rho)\,d\rho$ in terms of $\Lambda_*$ since $f$ is a minimizer for this Rayleigh quotient, leading to $$\Lambda = \Lambda_*
\left( \int_0^{\rho_M} \rho^{\frac{1-n}{p-1}} f'(\rho)^{\frac{p}{p-1}}\,d\rho \right)^{1-\frac{2(p-1)}{p}},$$ which is .
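The exponent bookkeeping in the last step reduces to simple arithmetic: writing $I$ for $\int_0^{\rho_M} \rho^{\frac{1-n}{p-1}} f'^{\frac{p}{p-1}}\,d\rho$ and $J$ for $\int_0^{\rho_M}\rho^{1-n}f^2\,d\rho$, the proof gives $\Lambda = I/J$ while $\Lambda_* = I^{2(p-1)/p}/J$ for the minimizer, and eliminating $J$ recovers the stated scaling. A one-line numerical check (sample values of $p$, $I$, $J$ are arbitrary):

```python
import math

# I and J stand for the two integrals in the proof; values are arbitrary positives.
p, I, J = 2.7, 3.1, 0.9
Lam = I / J                          # Lambda = I / J
Lam_star = I**(2*(p - 1)/p) / J      # Lambda_* = I^(2(p-1)/p) / J
# eliminating J: Lambda = Lambda_* * I^((2-p)/p), since 1 - 2(p-1)/p = (2-p)/p
assert math.isclose(Lam, Lam_star * I**((2 - p)/p), rel_tol=1e-12)
```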
$$\label {lambda-star2}
{\mathcal{C}_p}(D^*) \leq (n \omega_n)^{\frac{p-2}{p}} \Lambda_*.$$
Let $f$ be a minimizer for the generalized quotient defining $\Lambda_*$. Set $$\psi(\rho) = \int_\rho^{\rho_M} r^{1-n}\, f(r)\,dr, \qquad 0 \leq \rho \leq \rho_M,$$ so that $\psi(\rho_M) = 0$. Then $\psi(\rho)$ (where $\rho = \vert x \vert$ for $x \in D^*$) is an admissible test function for the quotient defining ${\mathcal{C}_p}(D^*)$. Thus $$\label{t1}
{\mathcal{C}_p}(D^*) \leq (n \omega_n)^{\frac{p-2}{p}}
\left. \int_0^{\rho_M}\rho^{n-1}\,\psi'(\rho)^2\,d\rho \right/
\left(\int_0^{\rho_M} \rho^{n-1}\,\psi(\rho)^p\,d\rho\right)^{2/p}.$$ Now $$\label{t2}
\int_0^{\rho_M}\rho^{n-1}\,\psi'(\rho)^2\,d\rho
= \int_0^{\rho_M}\rho^{n-1}\,\left[ \rho^{1-n}\, f(\rho) \right]^2\,d\rho
= \int_0^{\rho_M} \rho^{1-n}\, f(\rho)^2\,d\rho.$$ Next, using the Euler-Lagrange equation , $$\begin{aligned}
\psi(\rho) & = \int_\rho^{\rho_M} r^{1-n}\, f(r)\,dr \\
& = -\frac{1}{\Lambda} \int_\rho^{\rho_M}
\frac{d}{dr} \left[ r^{\frac{1-n}{p-1}} f'(r)^{\frac{1}{p-1}}\right]\,dr \\
& = -\frac{1}{\Lambda}\,\left. r^{\frac{1-n}{p-1}}\, f'(r)^{\frac{1}{p-1}}\right\vert_{r=\rho}^{r=\rho_M}\\
& = \frac{1}{\Lambda}\, \rho^{\frac{1-n}{p-1}}\, f'(\rho)^{\frac{1}{p-1}},\end{aligned}$$ where we used $f'(\rho_M)=0$. From this we obtain that $$\begin{aligned}
\int_0^{\rho_M} \rho^{n-1}\,\psi(\rho)^p\,d\rho
& = \int_0^{\rho_M} \rho^{n-1}\, \frac{1}{\Lambda^p}\, \rho^{\frac{p(1-n)}{p-1}} f'(\rho)^{\frac{p}{p-1}}\,d\rho
\nonumber \\
& = \frac{1}{\Lambda^p}\, \int_0^{\rho_M} \rho^{\frac{1-n}{p-1}}\, f'(\rho)^{\frac{p}{p-1}}\,d\rho.
\label{t3}\end{aligned}$$ With the help of the identities and , we can write the numerator and the denominator of in terms of the minimizer $f$ for $\Lambda_*$. We find, using that $f$ minimizes the quotient for $\Lambda_*$ at the second step and using at the third step, that $$\begin{aligned}
{\mathcal{C}_p}(D^*) & \leq (n \omega_n)^{\frac{p-2}{p}}
\left. \int_0^{\rho_M} \rho^{1-n}\, f(\rho)^2\,d\rho\right/
\left(\frac{1}{\Lambda^p}\, \int_0^{\rho_M} \rho^{\frac{1-n}{p-1}}\, f'(\rho)^{\frac{p}{p-1}}\,d\rho\right)^{\frac{2}{p}}\\
& = (n \omega_n)^{\frac{p-2}{p}}\, \frac{\Lambda^2}{\Lambda_*}\,
\left (\int_0^{\rho_M} \rho^{\frac{1-n}{p-1}}
\,f'(\rho)^{\frac{p}{p-1}} \,d\rho \right )^{\frac{2(p-1)}{p}-\frac{2}{p}} \\
& = (n \omega_n)^{\frac{p-2}{p}}\, \frac{1}{\Lambda_*}\,
\left[ \Lambda\, \left (\int_0^{\rho_M} \rho^{\frac{1-n}{p-1}}
\,f'(\rho)^{\frac{p}{p-1}} \,d\rho \right )^{\frac{p-2}{p}}\right]^2\\
& = (n \omega_n)^{\frac{p-2}{p}}\, \frac{\Lambda_*^2}{\Lambda_*}
= (n \omega_n)^{\frac{p-2}{p}}\, \Lambda_*.
\qedhere\end{aligned}$$
Completion of the proof of Theorem \[main-thm\] {#main proof}
-----------------------------------------------
We can now finally complete the proof of Theorem \[main-thm\]. Indeed, we have $$\begin{aligned}
\int_0^{\rho_M} \rho^{1-n}\,H^2(\rho)\, d\rho & \leq & \frac{1}{\Lambda_*}
\left ( \int_0^{\rho_M} \rho^{\frac{1-n}{p-1}} \left ( \frac{dH}{d\rho}
\right )^{\frac{p}{p-1}}\, d\rho \right )^{\frac{2(p-1)}{p}} \\
& = & \frac{1}{\Lambda_*} \left ( (n\omega_n)^{\frac{1}{p-1}}
\int_D \phi^p d\mu \right )^{\frac{2(p-1)}{p}} \\
& = & \frac{1}{\Lambda_*} (n \omega_n)^{2/p} \left (
\int_D \phi^p d\mu \right )^{\frac{2(p-1)}{p}},\end{aligned}$$ with equality if and only if $D$ is a ball and $\phi$ is radial. Substituting this last inequality into , we have $$\begin{aligned}
\left ( \int_D \phi^{p-1} d\mu \right )^2 & \leq & \frac{2n^2 \omega_n^{2/n}}
{p {\mathcal{C}_p}(D)} |D|^{\frac{n-2}{n}} \left ( \int_D \phi^p d\mu \right )
^{\frac{2(p-1)}{p}} \\
&&\qquad- (n-2) \omega_n^{\frac{2-n}{n}} |D|^{\frac{n-2}{n}}\frac{1}{\Lambda_*}
(n\omega_n)^{2/p} \left ( \int_D \phi^p d\mu \right )^{\frac{2(p-1)}{p}}.\end{aligned}$$ Since $\Lambda_* = (n \omega_n)^{\frac{2-p}{p}} {\mathcal{C}_p}(D^*)$ by and , the main inequality follows with equality if and only if $D$ is a ball. $\square$
[^1]: School of Mathematical Sciences, University College Cork, [t.carroll@ucc.ie]{}
[^2]: Department of Mathematics and Applied Mathematics, University of Cape Town, [jesse.ratzkin@uct.ac.za]{}
---
abstract: 'The estimation of the extremal dependence structure is spoiled by the impact of the bias, which increases with the number of observations used for the estimation. Already known in the univariate setting, the bias correction procedure is studied in this paper under the multivariate framework. New families of estimators of the stable tail dependence function are obtained. They are asymptotically unbiased versions of the empirical estimator introduced by Huang \[Statistics of bivariate extremes (1992) Erasmus Univ.\]. Since the new estimators have a regular behavior with respect to the number of observations, it is possible to deduce aggregated versions so that the choice of the threshold is substantially simplified. An extensive simulation study is provided as well as an application on real data.'
address:
- |
A.-L. Fougères\
C. Mercadier\
Université de Lyon, CNRS, Université Lyon 1\
Institut Camille Jordan\
43 blvd du 11 novembre 1918\
F-69622 Villeurbanne-Cedex\
France\
\
- |
L. de Haan\
Department of Economics\
Erasmus University\
P.O. Box 1738\
3000 DR Rotterdam\
The Netherlands\
author:
- A.-L. Fougères
- L. de Haan
- C. Mercadier
title: Bias correction in multivariate extremes
---
Introduction
============
Estimating extreme risks in a multivariate framework is highly connected with the estimation of the extremal dependence structure. This structure can be described *via* the stable tail dependence function (s.t.d.f.) $L$, first introduced by @Huang1992. For an arbitrary dimension $d$, consider a multivariate vector $(X^{(1)},\ldots,X^{(d)})$ with continuous marginal cumulative distribution functions (c.d.f.) $F_1, \ldots, F_d$. The s.t.d.f. is defined for all positive reals $x_1,\ldots,x_d$ as $$\begin{aligned}
&& \lim_{t \to\infty} t \pr\bigl\{ 1-F_1\bigl(X^{(1)}
\bigr)\leq t^{-1}x_1\mbox{ or } \ldots\mbox{ or }
1-F_d\bigl(X^{(d)}\bigr) \leq t^{-1}x_d
\bigr\}
\nonumber\\[-8pt]\\[-8pt]\nonumber
&&\qquad = L(x_1,\ldots,x_d).\end{aligned}$$ Assuming that such a limit exists and is nondegenerate is equivalent to the classical assumption of existence of a multivariate domain of attraction for the componentwise maxima; see, for example, @dehaanferreira2006, Chapter 7. The previous limit can be rewritten as $$\label{eqL} \lim_{t \to\infty} t \bigl[ 1- F\bigl
\{F_1^{-1}\bigl(1-t^{-1}x_1\bigr),
\ldots, F_d^{-1}\bigl(1-t^{-1}x_d
\bigr) \bigr\}\bigr] = L(x_1,\ldots,x_d),$$ where $F$ denotes the multivariate c.d.f. of the vector $(X^{(1)},\ldots,X^{(d)})$, and for $j=1,\dots,d$, $F_j^{-1}(t) = \inf\{z \in{\mathbb R}\dvtx F_j(z)
\geq t \}$. Consider a sample of size $n$ drawn from $F$ and an intermediate sequence, that is to say a sequence $k=k(n)$ tending to infinity as $n
\to\infty$, with $k/n \to0$. Denote by ${\mathbf{x}}=(x_1,\ldots,x_d)$ a vector of the positive quadrant $\mathbb{R}^d_+=\{ (x_1,\ldots,x_d)\dvtx x_j \geq0, j=1,\dots, d\}$ and by $X^{(j)}_{k,n}$ the $k$th order statistic among $n$ realizations of the margins $X^{(j)}$. The empirical estimator of $L({\mathbf{x}})$ is obtained from (\[eqL\]), replacing $F$ by its empirical version, $t$ by $n/k$, and $F_j^{-1}(1-t^{-1}x_j)$ for $ j=1,\ldots,d$ by its empirical counterpart $X^{(j)}_{n-[nt^{-1}x_j],n}$, so that $$\label{eq1storderhat} \hat L_k({\mathbf x})= \frac{1}k \sum
_{i=1}^n \ind_{ \{X^{(1)}_i \geq
X^{(1)}_{n-[kx_1]+1,n}~\mathrm{or}~\ldots~\mathrm{or}~X^{(d)}_i \geq X^{(d)}_{n-[kx_d]+1,n} \} }.$$ See @Huang1992 for pioneering works on this estimator. Under suitable conditions, it can be shown (see Section \[secnotation\]) that the estimator $\hat L_k({\mathbf x})$ has the following asymptotic expansion: $$\label{eqasympt-expL} \hat L_k({\mathbf x}) - L({\mathbf x}) \approx
\frac{Z_L({\mathbf x})}{\sqrt{k}} + \alpha(n/k)M({\mathbf x}),$$ where $Z_L$ is a continuous centered Gaussian process, $\alpha$ is a function that tends to 0 at infinity and $M$ is a continuous function. In particular, $\sqrt{k} \{ \hat L_k({\mathbf x}) - L({\mathbf x}) \}$ can be approximated in distribution by $Z_L({\mathbf x})$, provided that $\sqrt{k} \alpha(n/k)$ tends to 0 as $n$ tends to infinity. This condition imposes a slow rate of convergence of the estimator $\hat L_k({\mathbf x})$, so one would be interested in relaxing this hypothesis. As a counterpart, as soon as $\sqrt{k} \alpha(n/k)$ tends to a nonnull constant $\lambda$, an asymptotic bias appears and is explicitly given by $\lambda M({\mathbf x})$. The aim of this paper is to provide a procedure that reduces the asymptotic bias. The latter will be estimated and then subtracted from the empirical estimator. This kind of approach has been considered in the univariate setting for the bias correction of the extreme value index with unknown sign by @caidehaanzhou2013. Refer also to @peng1998, @peng2010, @fragadehaanlin2003, @gomesdehaanrodrigues2008 and @caeirogomesrodrigues2009 for previous contributions on this problem. Note finally that the case of dependent sequences has been recently studied by @dehaanmercadierzhou2014.
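For concreteness, the empirical estimator $\hat L_k$ of (\[eq1storderhat\]) can be transcribed directly into code. The following numpy sketch is our own illustration (the function name and the bivariate test data are not from the paper); it assumes continuous margins, so that no ties occur within a column:

```python
import numpy as np

def stdf_empirical(X, k, x):
    """Empirical s.t.d.f. estimator hat L_k(x) for an (n, d) sample X.

    x is a length-d vector of nonnegative reals; continuous margins
    (no ties within a column) are assumed.
    """
    n, d = X.shape
    # R[i, j] = rank of X[i, j] within column j (1 = smallest)
    R = np.argsort(np.argsort(X, axis=0), axis=0) + 1
    # X_i^(j) >= X^(j)_{n - [k x_j] + 1, n}  <=>  n - R[i, j] + 1 <= [k x_j]
    exceed = (n - R + 1) <= np.floor(k * np.asarray(x, dtype=float))
    return exceed.any(axis=1).sum() / k

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))     # independent margins, for illustration
k = 50
# exactly k observations have marginal rank >= n - k + 1, so this equals 1
print(stdf_empirical(X, k, [1.0, 0.0]))
```

Any s.t.d.f. satisfies $\max_j x_j \le L({\mathbf x}) \le \sum_j x_j$, which also gives a cheap plausibility check on the estimator's output.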
The nonparametric estimation of the extremal dependence structure has been widely studied in the bivariate case; see, for instance, @Huang1992, @einmahldehaansinha1997, @caperaafougeres2000, @abdousghoudi2005, @guillotteperronsegers2011 and @bucherdettevolgushev2011. Bias correction problems in the bivariate context have received less attention than in the univariate setting. To the best of our knowledge, the literature seems to be limited to @beirlantdierckxguillou2011 and @goegebeurguillou2013, who consider the estimation of bivariate joint tails, which differs slightly from our task.
As for the multivariate framework, @dehaanresnick1993 introduces the empirical estimator. General approaches under parametric assumptions on the function $L$ have been developed, for example, by @colestawn1991, @joesmithweissman1992, @einmahlkrajinasegers2008 and @einmahlkrajinasegers2012. Apparently, no procedure correcting the bias can be found in the literature for dimension greater than two. The objective of this article is to fill this gap. Note that our method does not only consist of applying the univariate bias procedure at several points. Indeed, the bias is no longer a parametric function, so that the new feature is mainly the fact that we are able to estimate and then subtract a function with an unknown form. Two families of asymptotically unbiased estimators of the s.t.d.f. are proposed, and their theoretical behaviors are studied. A practical advantage of these new estimators is that they can be aggregated, thus reducing the variability.
The paper is organized as follows: Section \[secnotation\] contains hypotheses and first results. The bias reduction procedure is described in Section \[secprocedure\], and the main theoretical results are presented therein. Several theoretical models are exhibited in Section \[secexamples\] that satisfy the required assumptions. Section \[secsimu\] illustrates the performance of the new estimators on both simulated and real data. The estimation of side components is postponed to Section \[secrho\]. The proofs are relegated to Section \[secproofs\].
Notation, assumptions and first results {#secnotation}
=======================================
Let ${\mathbf X}_1=(X^{(1)}_1,\ldots,X^{(d)}_1),\break \ldots, {\mathbf
X}_n=(X^{(1)}_n, \ldots,X^{(d)}_n)$ be independent and identically distributed multivariate random vectors with c.d.f. $F$ and continuous marginal c.d.f.’s $F_j$ for $j=1,\ldots,d$. Suppose $F$ is in the domain of attraction of an extreme value distribution with c.d.f. $G$. We recall that it supposes the existence for $j=1,\ldots,d$ of sequences $a_n^{(j)}>0$, $b_n^{(j)}$ of real numbers and a c.d.f. $G$ with nondegenerate marginals such that $$\begin{aligned}
&& \lim_{n\to\infty} \mathbb{P}\bigl(\max\bigl\{X^{(1)}_1,
\ldots,X^{(1)}_n\bigr\} \leq a^{(1)}_n
x_1 + b^{(1)}_n, \ldots,
\\
&&\hspace*{54pt} \max\bigl
\{X^{(d)}_1,\ldots,X^{(d)}_n\bigr\}\leq a^{(d)}_n x_d + b^{(d)}_n \bigr)=G({\mathbf x})\end{aligned}$$ for all points ${\mathbf x}$ where $G$ is continuous. Denote by $G_j$ the $j$th marginal c.d.f. of $G$. It is possible to show that the domain of attraction condition can be expressed as condition (\[eqL\]) along with the convergence of the marginal distributions to the $G_j$’s, and that $$\label{eqLbis} L({\mathbf x})=-\log G \bigl(\{-\log G_1\}
^{-1}(x_1),\ldots,\{-\log G_d
\}^{-1}(x_d) \bigr).$$ Let $\mu$ be the measure defined by $$\label{eqmu} \mu\bigl\{A({\mathbf x})\bigr\}:=L({\mathbf x}),$$ where $A({\mathbf x}):=\{{\mathbf u}\in\mathbb{R}_+^d$: there exists $j$ such that $u_j>x_j \}$ for any vector ${\mathbf{x}}\in \mathbb{R}_+^d$.
Several conditions are now described. The first two have been introduced by @dehaanresnick1993:
- The *first-order condition* consists of assuming that the limit given in (\[eqL\]) exists, and that the convergence is uniform on any $[0,T]^d$, for $T>0$.
- The *second-order condition* consists of assuming the existence of a positive function $\alpha$, such that $\alpha(t) \to0$ as $t\to\infty$, and a nonnull function $M$ such that for all ${\mathbf x}$ with positive coordinates, $$\begin{aligned}
\label{eq2ndorder}
&& \lim_{t \to\infty} \frac{1}{\alpha(t)} \bigl\{ t \bigl[ 1-
F\bigl\{ F_1^{-1}\bigl(1-t^{-1}x_1
\bigr),\ldots, F_d^{-1}\bigl(1-t^{-1}x_d
\bigr) \bigr\}\bigr] - L({\mathbf x}) \bigr\}
\nonumber\\[-8pt]\\[-8pt]\nonumber
&&\qquad = M({\mathbf x}),\end{aligned}$$ uniformly on any $[0,T]^d$, for $T>0$.
- The *third-order condition* consists of assuming the existence of a positive function $\beta$, such that $\beta(t) \to0$ as $t\to\infty$, and a nonnull function $N$ such that for all ${\mathbf x}$ with positive coordinates, $$\begin{aligned}
\label{eq3rdorder}
&& \lim_{t \to\infty} \frac{1}{\beta(t)} \biggl\{\frac{t [ 1- F\{
F_1^{-1}(1-t^{-1}x_1),\ldots, F_d^{-1}(1-t^{-1}x_d) \}] - L({\mathbf
x})}{\alpha(t)} - M({\mathbf x}) \biggr\}
\nonumber\\[-8pt]\\[-8pt]\nonumber
&&\qquad = N({\mathbf x}),\hspace*{-10pt}\end{aligned}$$ uniformly on any $[0,T]^d$, for $T>0$. This implicitly requires that $N$ is not a multiple of the function $M$; see Remark \[rmMN\].
\[rkhomoL\] The function $L$ defined by (\[eqL\]) and that appears in (\[eq2ndorder\]) and (\[eq3rdorder\]) is homogeneous of order 1. We refer, for instance, to @dehaanferreira2006, pages 213 and 236. Most of the estimators constructed in this paper use the homogeneity property. Note that pointwise convergence in (\[eqL\]) entails uniform convergence on the square $[0,T]^d$. See, for instance, @dehaanferreira2006, page 237.
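For example, the logistic (Gumbel) family $L({\mathbf x}) = (\sum_{j} x_j^{1/\theta})^{\theta}$, $0<\theta\le1$, is a standard s.t.d.f.; the snippet below (with arbitrarily chosen values) illustrates order-1 homogeneity together with the classical bounds $\max_j x_j \le L({\mathbf x}) \le \sum_j x_j$:

```python
import math

def L_logistic(x, theta):
    # logistic s.t.d.f.; homogeneous of order 1 for any 0 < theta <= 1
    return sum(xj**(1.0/theta) for xj in x)**theta

x = (0.4, 1.1, 2.0)
theta = 0.6
r = 3.5
# homogeneity of order 1: L(r x) = r L(x)
assert math.isclose(L_logistic(tuple(r*xj for xj in x), theta),
                    r * L_logistic(x, theta), rel_tol=1e-12)
# classical bounds: max <= L <= sum
assert max(x) <= L_logistic(x, theta) <= sum(x)
```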
\[rmMN\] If $N=c\cdot M$ for some constant $c$, the relation can be reformulated as $$\begin{aligned}
&& \lim_{t \to\infty} \frac{1}{\beta(t)} \biggl\{ \frac{t [ 1- F\{
F_1^{-1}(1-t^{-1}x_1),\ldots, F_d^{-1}(1-t^{-1}x_d) \}] - L({\mathbf
x}) }{\alpha(t)(1+c\beta(t))} - M({
\mathbf x}) \biggr\}
\\
&&\qquad = 0,\end{aligned}$$ which we want to exclude. We refer to @dehaanferreira2006, page 385, to see that the same complication turns up in the one-dimensional case.
\[rkpropMN\] The functions $M$ and $N$ involved in the second and third-order conditions satisfy some usual properties; see, for example, @dehaanresnick1993. More specifically, one can show that there exist nonpositive reals $\rho$ and $\rho'$ such that $ \alpha$ (resp., $ \beta$) is a regularly varying function of order $\rho$ (resp., $ \rho'$), that is, $ \alpha(tz)/\alpha(t) \to
z^\rho$ when $t \to\infty$, for each positive $z$. Besides, $M$ is homogeneous of order $1-\rho$, that is to say $M(r{\mathbf
x})=r^{1-\rho} M({\mathbf x})$, for each positive $r$ and ${\mathbf x}$ with positive coordinates. Finally, the function $N$ is homogeneous of order $1-\rho- \rho'$.
\[rkAI\] An interesting situation to consider is when the c.d.f. $F$ is in the domain of attraction of an extreme value distribution $G$ with independent components, that is, $G = \prod_{j=1}^d G_j$. Such a c.d.f. is said to have the property of [asymptotic independence]{}. In this case, the function $M$ is the limit of the joint tail of the distribution, and in dimension 2, the coefficient of tail dependence $\eta$ introduced by @ledfordtawn1996 [@ledfordtawn1997] equals $1/(1-\rho)$, where $\rho$ is defined in Remark \[rkpropMN\].
In this paper, we will handle two sets of assumptions. First consider:
- the second-order condition is satisfied, so that (\[eq2ndorder\]) holds;
- the coefficient of regular variation $\rho$ of the function $\alpha$ defined in (\[eq2ndorder\]) is negative;
- the function $M$ defined in (\[eq2ndorder\]) is continuous.
These hypotheses allow us to get the asymptotic uniform behavior of $\hat L_k$, the empirical estimator of $L$ defined by (\[eq1storderhat\]), as detailed in the following proposition.
\[propcv-ps-L\] Let ${\mathbf X}_1, \ldots, {\mathbf X}_n$ be independent multivariate random vectors in $\mathbb{R}^d$ with common joint c.d.f. $F$ and continuous marginal c.d.f.’s $F_j$ for $j=1,\ldots,d$. Assume that the set of conditions holds. Suppose further that the first-order partial derivatives of $L$ (denoted by $ \partial_jL$ for $j=1,\ldots,d$) exist and that $\partial_j L$ is continuous on the set of points $\{{\mathbf x}=(x_1,\dots, x_d) \in\mathbb{R}^d_+\dvtx x_j >0\}$. Consider $\hat L_k$ the estimator of $L$ defined by (\[eq1storderhat\]) where $k$ is such that $\sqrt{k} \alpha(n/k) \to
\infty$. Then as $n$ tends to infinity, we get $$\sup_{0\leq x_1,\ldots, x_d \leq T} \biggl{\vert}\frac{1}{\alpha(n/k)} \bigl\{ \hat
L_k ({\mathbf x}) - L({\mathbf x}) \bigr\} -M({\mathbf x}) \biggr{\vert}\stackrel{\mathbb{P}} {\longrightarrow} 0.$$
Under stronger assumptions, and for some choice of the intermediate sequence, the asymptotic distribution of the previous stochastic process can be obtained after multiplication by the rate $\sqrt{k}
\alpha(n/k)$. For a positive $T$, let $D([0,T]^d)$ be the space of real valued functions that are right-continuous with left-limits. Now introduce the conditions:
- the third-order condition is satisfied, so that (\[eq2ndorder\]) and (\[eq3rdorder\]) hold;
- the coefficients of regular variation $\rho$ and $\rho'$ of the functions $\alpha$ and $\beta$ defined in (\[eq2ndorder\]) and (\[eq3rdorder\]) are negative;
- the function $M$ defined in (\[eq2ndorder\]) is differentiable and $N$ defined in (\[eq3rdorder\]) is continuous.
\[propdev-asympt-L\] Assume that the conditions of Proposition \[propcv-ps-L\] are fulfilled and that the set of conditions holds. Consider $\hat L_k$ the estimator of $L$ defined by (\[eq1storderhat\]) where $k$ is such that $\sqrt{k} \alpha(n/k) \to
\infty$ and $\sqrt{k} \alpha(n/k) \beta(n/k)\to0$. Then as $n$ tends to infinity, $$\label{eqasympt-dev-L} \sqrt{k} \biggl\{ \hat L_k({\mathbf x}) - L({\mathbf x}) -
\alpha\biggl(\frac
{n}{k}\biggr)M({\mathbf x}) \biggr\} \stackrel{d} {\to}
Z_L({\mathbf x}),$$ in $D([0,T]^d)$ for every $T>0$, where$$\label{eqZL}Z_L({\mathbf x}): = W_L({\mathbf x}) - \sum
_{j=1}^d W_L(x_j
{\mathbf e}_j)\partial_jL({\mathbf x}).$$ The process $W_L$ above is a continuous centered Gaussian process with covariance structure $ {\mathbb E}[W_L({\mathbf x})W_L({\mathbf y})]= \mu\{R({\mathbf x}) \cap R({\mathbf
y})\}$ given in terms of the measure $\mu$ defined by (\[eqmu\]) and of $
R({\mathbf x})=\{{\mathbf u} \in{\mathbb R}^d_+$: there exists $j$ such that $0 \leq u_j\leq x_j \}$.
\[rkasymptbias\] One difference between the previous result and Theorem 7.2.2 of @dehaanferreira2006 lies in the choice of the intermediate sequence, which is larger here. Indeed, we suppose ${\vert}\sqrt{k} \alpha(n/k) {\vert}\to\infty$, whereas they choose $k(n)=o (n^{-2\rho/(1-2\rho)} )$, which implies $\sqrt{k} \alpha(n/k) \to0$. Our choice requires the more informative second-order condition (\[eq2ndorder\]), and a nonnull asymptotic bias appears in our framework.
The conditions on $k$, $\alpha$ and $\beta$ required in Proposition \[propdev-asympt-L\] are not too restrictive: because of the regular variation of $\alpha$ and $\beta$, they are implied by the choice $k(n)=n^\kappa$, with $
\kappa\in (-\frac{2\rho}{1-2\rho}, -\frac{2(\rho+ \rho
')}{1-2(\rho+ \rho')} )$.
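To see that this interval is always nonempty when $\rho$ and $\rho'$ are negative, note that the map $x \mapsto -2x/(1-2x)$ is strictly decreasing on the negative half-line, and $\rho + \rho' < \rho < 0$. A quick numerical check of this fact (an illustrative sketch, not part of the paper's procedure):

```python
def kappa_bound(x):
    """The map x -> -2x / (1 - 2x) giving the endpoints of the range of kappa."""
    return -2.0 * x / (1.0 - 2.0 * x)

# For negative rho and rho', the interval
# (kappa_bound(rho), kappa_bound(rho + rho')) is a nonempty subinterval
# of (0, 1), so admissible choices k(n) = n^kappa always exist.
checks = [(rho, rho2) for rho in (-0.1, -0.5, -2.0) for rho2 in (-0.1, -1.0)]
```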
Bias reduction procedure {#secprocedure}
========================
As pointed out in Remark \[rkasymptbias\], a nonnull asymptotic bias $\alpha({n}/{k})M({\mathbf x})$ appears in Proposition \[propdev-asympt-L\]. The bias reduction procedure consists of subtracting an estimate of this asymptotic bias, obtained in Section \[subsecmethodA\]. The key ingredient is the homogeneity of the functions $L$ and $M$ mentioned in Remarks \[rkhomoL\] and \[rkpropMN\]. This homogeneity will also provide other constructions that get rid of the asymptotic bias.
Estimation of the asymptotic bias of $\hat L_k$ {#subsecmethodA}
------------------------------------------
Equation (\[eqasympt-dev-L\]) suggests a natural correction of $\hat L_k$ as soon as an estimator of $\alpha({n}/{k})M({\mathbf x})$ is available. In order to take advantage of the homogeneity of $L$, let us introduce a positive scale parameter $a$, which allows one to contract or dilate the observed points. We denote $$\label{eqLa-hat} \hat L_{k,a}({\mathbf x}):=a^{-1}\hat
L_k(a{\mathbf x})$$ and $$\label{eqDelta} \hat \Delta_{k,a}({\mathbf x}):=\hat L_{k,a}({\mathbf
x})-\hat L_{k}({\mathbf x}).$$
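In code, the building blocks (\[eqLa-hat\]) and (\[eqDelta\]) can be sketched as follows. The rank-based form of $\hat L_k$ below is a standard empirical estimator of the s.t.d.f. and is only an assumption standing in for the exact definition (\[eq1storderhat\]) given earlier in the paper:

```python
import numpy as np

def L_hat(data, k, x):
    """Rank-based empirical estimator of the s.t.d.f. L (a standard choice,
    assumed here as a stand-in for (eq1storderhat)).
    data: (n, d) sample array; x: length-d array of nonnegative coordinates."""
    n = data.shape[0]
    # ranks[i, j] = rank of data[i, j] within column j (1 = smallest)
    ranks = data.argsort(axis=0).argsort(axis=0) + 1
    # count observations that are extreme in at least one coordinate direction
    exceed = ranks > n - k * np.asarray(x, dtype=float)
    return exceed.any(axis=1).sum() / k

def L_hat_a(data, k, x, a):
    """Rescaled estimator hat L_{k,a}(x) = a^{-1} hat L_k(a x), eq. (eqLa-hat)."""
    return L_hat(data, k, a * np.asarray(x, dtype=float)) / a

def Delta_hat(data, k, x, a):
    """hat Delta_{k,a}(x) = hat L_{k,a}(x) - hat L_k(x), eq. (eqDelta)."""
    return L_hat_a(data, k, x, a) - L_hat(data, k, x)
```

With comonotone data (both coordinates equal), the true s.t.d.f. is $L(x,y)=\max(x,y)$ and the rank-based estimator recovers it exactly whenever $kx_j$ is an integer; by construction $\hat\Delta_{k,1}\equiv 0$.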
From (\[eqasympt-dev-L\]) one gets $$\label{eqdev-asympt-La-L} \sqrt{k} \biggl\{\hat L_{k,a}({\mathbf x}) -L({\mathbf x}) -
\alpha\biggl(\frac
{n}{k}\biggr) a^{-\rho} M({\mathbf x}) \biggr\}
\stackrel{d} {\to} a^{-1} Z_L(a{\mathbf x}),$$ in $D([0,T]^d)$ for every $T>0$. Equations (\[eqDelta\]) and Proposition \[propcv-ps-L\] yield as $n$ tends to infinity, $$\label{eqDeltaCV} \frac{ \hat\Delta_{k,a}({\mathbf x})}{\alpha(n/k)} {\stackrel}{{\mathbb P}} {\longrightarrow}
\bigl(a^{-\rho} -1\bigr)M({\mathbf x}).$$ Fixing $a$ such that $a^{-\rho} -1=1$, that is, $a=2^{-1/\rho}$, a natural estimator of the asymptotic bias of $\hat L_k({\mathbf x})$ is thus $ \hat\Delta_{k, 2^{-1/{ \hat\rho}}} ({\mathbf x})$, where $\hat\rho$ is an estimator of $\rho$. Recall that the unknown parameter $\rho$ is the regular variation index of the function $\alpha$ involved in the second-order condition. Let $k_\rho$ be an intermediate sequence that represents the number of order statistics used in the estimator $\hat\rho$. Assume that $k_\rho\gg k$, where $k=k(n)$ is the sequence used in Proposition \[propdev-asympt-L\]. A first asymptotically unbiased estimator of $L({\mathbf x})$ can be defined as $$\label{defestimA}
{\accentset{\circ}{L}}_{k, 1, k_\rho}({\mathbf x}):= \hat L_k({\mathbf
x}) - \hat\Delta _{k, 2^{-1/{\hat\rho}}} ({\mathbf x}).$$ The asymptotic behavior of this estimator is provided in Theorem \[thmbiasB\] and Remark \[rkCP\]. We refer the reader to Section \[secrho\] for more details concerning the estimation of $\rho$.
Estimation of the asymptotic bias of $\hat L_{k,a}$ {#subsecmethodB}
--------------------------------------------
The previous construction can be easily generalized by correcting the estimator $\hat L_{k,a}$ instead of $\hat L_{k}$. Indeed, from (\[eqdev-asympt-La-L\]) one can see that the asymptotic bias of $\hat
L_{k,a}({\mathbf x})$ is $ \alpha(\frac{n}{k}) a^{-\rho} M({\mathbf x})$. Recall that when $n$ tends to infinity, one has for any positive real $b$, $$\frac{ \hat\Delta_{k,b}({\mathbf x})}{\alpha(n/k)} {\stackrel}{{\mathbb P}} {\longrightarrow}
\bigl(b^{-\rho} -1\bigr)M({\mathbf x}). $$ Thus fixing $b$ such that $b^{-\rho} -1=a^{-\rho}$ cancels the asymptotic bias, and yields the following asymptotically unbiased estimator of $L$: $$\label{defestimB} {\accentset{\circ}{L}}_{k, a, k_\rho}({\mathbf x}):= \hat L_{k,a}({\mathbf
x}) - \hat \Delta_{k, (a^{-\hat\rho} +1)^{-1/{\hat\rho}}} ({\mathbf x}).$$
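The scale choices in (\[defestimA\]) and (\[defestimB\]) both follow from the same identity $b^{-\rho}-1=a^{-\rho}$; a quick numerical check of this algebra (an illustrative sketch, not the paper's code):

```python
def b_of(a, rho):
    """Scale b = (a^{-rho} + 1)^{-1/rho}, i.e. the solution of
    b^{-rho} - 1 = a^{-rho} used in (defestimB)."""
    return (a ** (-rho) + 1.0) ** (-1.0 / rho)

# grid of (a, rho) values over which the identity is verified in the test
grid = [(a, rho) for a in (0.2, 0.4, 1.0) for rho in (-0.25, -1.0, -2.0)]
```

For $a=1$ one has $a^{-\rho}=1$, so $b=2^{-1/\rho}$, recovering the scale used in (\[defestimA\]).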
\[thmbiasB\] Assume that the conditions of Proposition \[propdev-asympt-L\] are fulfilled, and consider the estimator of $L$ defined by (\[defestimB\]). Let $k_\rho$ be an intermediate sequence such that $\sqrt{k_\rho
}\alpha(n/k_\rho)(\hat\rho-\rho)$ converges in distribution. Suppose also that $k$ is such that $k=o(k_\rho)$, $\sqrt{k} \alpha
(n/k) \to\infty$ and $\sqrt{k} \alpha(n/k) \beta(n/k) \to 0$. Under these assumptions, as $n$ tends to infinity, $$\label{eqdev-asympt-B} \sqrt{k} \bigl\{ {\accentset{\circ}{L}}_{k, a, k_\rho}({\mathbf x}) -L({\mathbf x})
\bigr\} \stackrel{d} {\to} {\accentset{\circ}{Y}}_{a}({\mathbf x}),$$ in $D([0,T]^d)$ for every $T>0$, where $ {\accentset{\circ}{Y}}_{a}$ is a continuous centered Gaussian process defined by $${\accentset{\circ}{Y}}_{a} ({\mathbf x}):= Z_L({\mathbf x}) -
b^{-1} Z_L( b {\mathbf x} ) + a^{-1}
Z_L(a {\mathbf x}) $$ with covariance $
{\mathbb E}[ {\accentset{\circ}{Y}}_a({\mathbf x}) {\accentset{\circ}{Y}}_a({\mathbf y})] = {\mathbb
E}[Z_L({\mathbf x}) Z_L({\mathbf y})] (1 - b^{-1/2} + a^{-1/2} )^2$ and $b = (a^{-\rho} +1)^{-1/\rho}$.
\[rkhypo4\] The assumption that $\sqrt{k_\rho}\alpha(n/k_\rho)(\hat\rho-\rho
)$ converges in distribution will be reconsidered in Section \[secrho\].
\[rkCP\] Theorem \[thmbiasB\] remains true when $a=1$ and thus characterizes the asymptotic behavior of the estimator given in (\[defestimA\]). For this particular choice of $a$, the covariance reduces to $ {\mathbb
E}[Z_L({\mathbf x}) Z_L({\mathbf y})](2-2^{1/{2\rho}})^2$.
An alternative estimation of the asymptotic bias of $\hat L_{k,a}$ {#subsecmethodC}
-----------------------------------------------------------
The bias reduction procedure introduced in the previous section requires the estimation of the second-order parameter $\rho$. It is actually possible to avoid this estimation by making use of combinations of estimators of $L$. The asymptotic bias of $\hat L_{k,a}({\mathbf x})$ is $ \alpha
(\frac{n}{k}) a^{-\rho} M({\mathbf x})$, as already noted from (\[eqdev-asympt-La-L\]). Making use of (\[eqDeltaCV\]) and homogeneity of $M$, one gets as $n$ tends to infinity, $$\frac{ \hat\Delta_{k_\rho,a}(a {\mathbf x})} { \hat\Delta_{k_\rho,a}(a {\mathbf x}) -a \hat\Delta_{k_\rho,a}({\mathbf x})} \stackrel{ \mathbb{P} } {\longrightarrow} \frac{a^{-\rho}}{a^{-\rho}-1}, $$ for any intermediate sequence $k_\rho$ that satisfies $\sqrt{k_\rho
}\alpha(n/k_\rho)\to\infty$. The expression $$\hat\Delta_{k,a}({\mathbf x}) \frac{ \hat\Delta_{k_\rho,a}(a {\mathbf x})
}{ \hat\Delta_{k_\rho,a}(a {\mathbf x}) -a \hat\Delta_{k_\rho,a}({\mathbf
x}) } $$ can thus be used as an estimator of the asymptotic bias of $\hat
L_{k,a}({\mathbf x})$. After simplifications, this leads to a new family of asymptotically unbiased estimators of $L({\mathbf x})$ by subtracting the estimated bias from $\hat L_{k,a}({\mathbf
x})$, namely $$\label{defestimC} \tilde L_{k, a, k_\rho}({\mathbf x}) =\frac{\hat L_k({\mathbf x}) \hat\Delta
_{k_\rho,a}(a {\mathbf x})-\hat L_k(a {\mathbf x}) \hat\Delta_{k_\rho,a}({\mathbf x})}{\hat\Delta_{k_\rho,a}(a {\mathbf x}) -a \hat\Delta
_{k_\rho,a}({\mathbf x}) },$$ which is well defined for any real number $a$ such that $0<a< 1$.
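At the level of limits, the ratio identity underlying (\[defestimC\]) can be verified with any homogeneous stand-in for $M$. Below, $M(x,y)=c\,(x^2+y^2)^{(1-\rho)/2}$ is an arbitrary toy choice, and $a$, $\rho$, $\alpha$ and $c$ are likewise arbitrary values (a sketch, not the paper's code):

```python
import math

def ratio_limit(a, rho):
    """Limit a^{-rho} / (a^{-rho} - 1) of the Delta-ratio used in (defestimC)."""
    return a ** (-rho) / (a ** (-rho) - 1.0)

a, rho, alpha, c = 0.4, -0.8, 0.05, 0.7   # arbitrary toy parameters

def M(x, y):
    """Toy function homogeneous of order 1 - rho, standing in for the true M."""
    return c * math.hypot(x, y) ** (1.0 - rho)

def Delta_limit(x, y):
    """In-probability limit alpha * (a^{-rho} - 1) * M(x, y) of Delta_{k,a}."""
    return alpha * (a ** (-rho) - 1.0) * M(x, y)

x, y = 0.3, 0.9
lhs = Delta_limit(a * x, a * y) / (Delta_limit(a * x, a * y) - a * Delta_limit(x, y))
```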
\[thmbias-Lu\] Assume that the conditions of Proposition \[propdev-asympt-L\] are fulfilled, and consider the estimator of $L$ defined by (\[defestimC\]). Let $k_\rho$ be an intermediate sequence such that $\sqrt{k_\rho
}\alpha(n/k_\rho)(\hat\rho-\rho)$ converges in distribution. Suppose also that $k$ is such that $k=o(k_\rho)$, $\sqrt{k} \alpha
(n/k) \to\infty$, $\sqrt{k}=O(\sqrt{k_\rho}\alpha(n/k_\rho))$ and $\sqrt{k} \alpha(n/k) \beta(n/k) \to 0$. Assume moreover that the function $M$ vanishes only on the axes. Then, as $n$ tends to infinity, $$\label{eqdev-asympt-unbiased} \sqrt{k} \bigl\{ \tilde L_{k, a, k_\rho}({\mathbf x}) -L({\mathbf x})
\bigr\} \stackrel{d} {\to} \tilde Y_a({\mathbf x}), $$ in $D([\varepsilon,T]^d)$ for every $\varepsilon>0$ and $T>0$, where $
\tilde Y_{a}$ is a continuous centered Gaussian process with covariance $
{\mathbb E}[ \tilde Y_a({\mathbf x}) \tilde Y_a({\mathbf y})]$ given by $ {\mathbb E}[Z_L({\mathbf x}) Z_L({\mathbf y})]\times\break (a^{-\rho} -1)^{-2} (
a^{-\rho} - a^{-1/2} )^2$.
The covariance function specified above is decreasing with respect to the parameter $a$ for any fixed value of $\rho$. At first glance, this suggests choosing $a$ close to 1 in order to reduce the asymptotic variance of $ \tilde Y_{a}$, but doing so would give a degenerate form of (\[defestimC\]). See Section \[secsimu\] for practical considerations on the choice of $a$.
Theoretical examples {#secexamples}
====================
The aim of this section is to provide several multivariate distributions that satisfy the third-order condition (\[eq3rdorder\]). For the sake of simplicity, expressions are displayed in the bivariate setting. We start by focusing on heavy-tailed margins. In this case, a first possible step toward the pointwise convergence is to obtain, for well-chosen positive reals $p$ and $q$, an expansion (as $t$ tends to infinity) of the form $$\begin{aligned}
&& t {\mathbb P}\bigl(X>t^px \mbox{ or } Y>t^q y\bigr)
\\
&&\qquad =
T_1(x,y) + \alpha(t) T_2(x,y) + \alpha(t) \beta(t)
T_3(x,y) + o\bigl(\alpha(t) \beta(t) \bigr), $$ with $T_1(1,1)>0$. One can then identify each term involved in (\[eq3rdorder\]) as follows: $$\begin{aligned}
L(x,y) &=& T_1\bigl(a(x), b(y)\bigr),\qquad M(x,y) = T_2
\bigl(a(x), b(y)\bigr)\quad\mbox{and}\quad
\nonumber\\[-8pt]\\[-8pt]\nonumber
N(x,y) &=& T_3\bigl(a(x), b(y) \bigr),\end{aligned}$$ where $$a(x) = x^{-p} \bigl\{T_1(1,+\infty) \bigr\}^p, \qquad b(x) = x^{-q} \bigl\{T_1(+\infty,1) \bigr
\}^q. $$ Applying Corollary 5.18 of @resnick1986, one can check that in such a framework the bivariate extreme value distribution $G$ takes the form $$G(x,y)=\exp \biggl(-\frac{T_1(x,y)}{T_1(1,1)} \biggr).$$
Powered norm densities {#subsecResnick-style}
----------------------
Following the idea of @resnick1986 [pages 276 and 286], consider first a norm ${\Vert}\cdot{\Vert}$ and a cone $
\mathcal{D}$ of $\mathbb{R}^2$, that is to say, a set such that if $(x,y)\in
\mathcal{D}$, then $(tx,ty)\in\mathcal{D}$ for every positive $t$. Without loss of generality, suppose that $(1,1)\in\mathcal{D}$. Let $(X,Y)$ be a bivariate random vector with probability density function given by $$f(x,y):=\frac{c {\mathbf1}_{\mathcal{D}}(x,y)}{(1+{\Vert}(x,y)^T{\Vert}^\alpha
)^{\beta}},$$ where $c$ is a normalizing positive constant and where $\alpha$ and $\beta$ are some positive real numbers such that $\alpha\beta>2$. Set $A_\mathcal{D}(x,y):=\{(u,v)\in\mathcal{D}\dvtx u>x$ or $v>y\}$, and define $p:=(\alpha\beta-2)^{-1}$. One can check that for $j=1,2,3$, $$\begin{aligned}
T_j(x,y)&=&{\int\!\!\!\int}_{A_\mathcal{D}(x,y)}\frac{c c_j \,du \,dv}{{\Vert}(u,v)^T{\Vert}^{\alpha(\beta+j-1)}},\end{aligned}$$ where $c_1=1$, $c_2=-\beta$ and $c_3=\beta(\beta+1)/2$. The functions $M$ and $N$ are homogeneous, with orders determined by $\rho=\rho^\prime=-\alpha p$.
Let us discuss some particular choices of the norm:
- For the $L^1$-norm and $\alpha=1$, the model coincides with the bivariate Pareto of type II distribution, denoted by $\operatorname{BPII}(\beta)$ in this paper, and referred to as MP$^{(2)}(\mathit{II})(0,1, \beta-2)$ in @KBJ2000, page 604. In this case, $p=q=( \beta- 2)^{-1}$, and $
L(x,y)= x+y -(x^{-p} + y^{-p})^{-1/p}$. The latter s.t.d.f. is known as the negative logistic model, introduced by @joe1990; see also @beirlantgoegebeursegersteugels2004, page 307.
- When the Euclidean norm is chosen, one recovers the bivariate Cauchy distribution for $\alpha=2$, $\beta=3/2$ and $p=1$. On the positive quadrant, that is, for $\mathcal{D}=\mathbb{R}_+^2$, we have $c=2/\pi$, $T_1(u,v)=c(u^{-2}+v^{-2})^{1/2}$ and $a(x)=b(x)=c/x$. On the whole plane, that is, for $\mathcal{D}=\mathbb{R}^2$, we get $c=1/(2\pi)$, $T_1(u,v)=c \{u^{-1} + v^{-1}+
(u^{-2}+v^{-2})^{1/2} \}$ and $a(x)=b(x)=2c/x$. This can also be seen as a particular case of the following item.
- The Student distributions with Pearson correlation coefficient $\theta$ arise choosing the norm ${\Vert}(x,y)^T{\Vert}=\nu
^{-1/2}(x^2-2\theta xy+y^2)^{1/2}$, for a positive real number $\nu$, $\alpha=2$, $\beta=(\nu+2)/2$ and $p=\nu^{-1}$. In this case, the integral form of the function $T_1$ cannot be totally simplified, and one classically writes the s.t.d.f. as $$\begin{aligned}
L(x,y) &=& (x+y) \biggl[ \frac{y}{x+y}F_{\nu+1} \biggl\{
\frac{(y/x)^{1/\nu}-\theta}{\sqrt{1-\theta^2}} \sqrt{\nu+1} \biggr\}
\\
&&\hspace*{38pt}{} + \frac{x}{x+y}F_{\nu+1}
\biggl\{\frac{(x/y)^{1/\nu}-\theta}{\sqrt
{1-\theta^2}} \sqrt{\nu+1} \biggr\} \biggr], $$ where $F_{\nu+1}$ is the c.d.f. of the univariate Student distribution with $\nu+1$ degrees of freedom. This dependence structure is also obtained for some elliptical models; see, for example, @krajina2012 [page 1813], and the next subsection.
- Other choices for the norm would lead to other distributions. Note that one can also relax the symmetry condition, considering, for instance, the Mahalanobis pseudo-norm defined by ${\Vert}(x,y)^T{\Vert}^2=(x/\sigma)^2- 2\rho(x/\sigma)(y/\tau)+(y/\tau)^2$ for a real number $\rho$ such that ${\vert}\rho{\vert}<1$ and some positive real numbers $\sigma$ and $\tau$.
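The negative logistic s.t.d.f. appearing in the first item can be sanity-checked numerically; a small sketch verifying homogeneity of order 1 and the universal bounds $\max(x,y)\leq L(x,y)\leq x+y$ (an illustration, not the paper's code):

```python
def L_neglog(x, y, p):
    """Negative logistic s.t.d.f. L(x,y) = x + y - (x^{-p} + y^{-p})^{-1/p},
    arising from the BPII(beta) model with p = 1/(beta - 2)."""
    if x == 0.0 or y == 0.0:      # L(x, 0) = x and L(0, y) = y on the axes
        return x + y
    return x + y - (x ** (-p) + y ** (-p)) ** (-1.0 / p)

x, y, p, t = 0.3, 0.8, 1.0, 2.5   # arbitrary test values
```

For $p=1$ (the $\operatorname{BPII}(3)$ case) the formula reduces to $x+y-(1/x+1/y)^{-1}$.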
Elliptical distributions {#subsecelliptical}
------------------------
Consider the usual representation of the centered elliptical distribution $(X,Y)^T=R{\mathbf AU}$, in terms of a positive random variable $R$, a $2\times2$ matrix $\mathbf A$ such that ${\bolds\Sigma
}={\mathbf A A}^T$ is of full rank, and a bivariate random vector $\mathbf U$ independent of $R$, uniformly distributed on the unit circle of the plane. Assume that $R$ has a probability density function denoted by $g_R$. One can then express the probability density function of $(X,Y)$ as $$f(x,y):=\frac{1}{{\vert}\operatorname{det}\mathbf A{\vert}}g_R \bigl\{ (x, y) {\bolds\Sigma
}^{-1} (x,y)^T \bigr\}. $$ A sufficient condition to satisfy (\[eq3rdorder\]) is to assume that the distribution of $R$ belongs to the Hall and Welsh class \[@hallwelsh1985\], namely, $${\mathbb P}(R>r) = c r^{-1/\gamma} \bigl\{ 1 + D_1
r^{\rho/\gamma} + D_2 r^{(\rho+ \rho_1)/\gamma} + o\bigl(r^{(\rho+ \rho_1)/\gamma
}
\bigr) \bigr\}, $$ with a positive real $c$, nonzero reals $D_1$ and $D_2$, and negative reals $\rho$ and $\rho_1$.
One can check that, for $j=1,2,3$, $$\begin{aligned}
T_j(x,y)&=&\frac{c}{2 \pi\gamma{\vert}\operatorname{det}\mathbf A{\vert}} {\int\!\!\!\int}_{\{(u,v)\dvtx u
> x~\mathrm{or}~v >y\}}\frac{du \,dv}{\{(u,v) {\bolds\Sigma}^{-1}(u,v)^T\}
^{1+1/(2\gamma) + p_j}},\end{aligned}$$ where $p_1=0, p_2 = - \rho/(2\gamma)$ and $p_3 = - (\rho+ \rho
_1)/(2\gamma)$.
Assuming for simplicity that ${\bolds\Sigma} = {1\ \ \theta\choose \theta\ \ 1}$, the s.t.d.f. can be written as $$\begin{aligned}
L(x,y) &=& (x+y) \biggl[ \frac{y}{x+y}F_{1/\gamma+1} \biggl\{
\frac
{(y/x)^{\gamma}-\theta}{\sqrt{1-\theta^2}} \sqrt{1/\gamma+1} \biggr\}
\\
&&\hspace*{37pt}{} + \frac{x}{x+y}F_{1/\gamma+1}
\biggl\{\frac{(x/y)^{\gamma
}-\theta}{\sqrt{1-\theta^2}} \sqrt{1/\gamma+1} \biggr\} \biggr],\end{aligned}$$ which is the form already obtained for the Student distribution in Section \[subsecResnick-style\] for $\nu= 1/\gamma$. See @demartamcneil2005 for more details. Note finally that for a general matrix $\bolds\Sigma$ and the special case $g_R(r)= c(1+r^\alpha
)^{-\beta}$, one recovers the Mahalanobis pseudo-norm already mentioned in the previous subsection.
When dealing with margins that are *not* heavy tailed, the calculations are carried out directly from (\[eq2ndorder\]). The last two examples of bivariate distributions have short- and light-tailed margins, respectively.
Archimax distributions {#subsecArchimax}
----------------------
Consider the bivariate c.d.f. defined for each $0 \leq u,v \leq1$ by $$\label{eqArchimax} F(u,v) = \bigl\{1 + L\bigl(u^{-1}-1,v^{-1} -1
\bigr) \bigr\}^{-1},$$ given in terms of a s.t.d.f. $L$. This distribution has standard uniform univariate margins and corresponds to a particular case of Archimax bivariate copulas introduced in @caperaafougeresgenest2000, in which the function $\phi(t) = t^{-1} -1$ is the Clayton Archimedean generator with index 1. Expanding the left-hand side term of (\[eq2ndorder\]) leads to, as $t$ tends to infinity, $$t \bigl\{ 1- F \bigl(1-t^{-1} x, 1- t^{-1} y \bigr) \bigr\}
= L(x,y) +t^{-1} M(x,y) + t^{-2} N(x,y) + o
\bigl(t^{-2} \bigr), $$ where $$\begin{aligned}
M(x,y) &:=& x^2 \partial_1 L(x,y)+y^2
\partial_2 L(x,y) -L^2(x,y),
\\
N(x,y) &:=& x^4/2 \partial^2_{11}L(x,y)
+x^2y^2 \partial^2_{12}L(x,y)+y^4/2
\partial^2_{22}L(x,y)
\\
&&{}+ L^3(x,y) + \bigl(x^3- 2
x^2 L(x,y) \bigr) \partial_1 L(x,y)
\\
&&{} + \bigl(y^3 - 2 y^2 L(x,y) \bigr) \partial_2
L(x,y).\end{aligned}$$ This allows us to identify $\rho= \rho^\prime= -1$. Above, the notation $\partial_{ij}L$ stands for $\partial^2 L / (\partial x_i\, \partial x_j)$.
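The expansion above can be checked numerically for a concrete choice of $L$; a sketch with the logistic s.t.d.f. $L(x,y)=(x^2+y^2)^{1/2}$ (for which $\partial_1 L = x/L$ and $\partial_2 L = y/L$), comparing the first two terms at a large value of $t$ (an illustration, not the paper's code):

```python
import math

def L_log(x, y):
    """Logistic s.t.d.f. L(x,y) = (x^2 + y^2)^{1/2}, a concrete choice of L."""
    return math.hypot(x, y)

def F_archimax(u, v):
    """Archimax c.d.f. (eqArchimax) built from L_log."""
    return 1.0 / (1.0 + L_log(1.0 / u - 1.0, 1.0 / v - 1.0))

def M_archimax(x, y):
    """M(x,y) = x^2 d1L + y^2 d2L - L^2, specialized to L_log."""
    L = L_log(x, y)
    return x ** 2 * (x / L) + y ** 2 * (y / L) - L ** 2

x, y, t = 1.0, 1.0, 1e5
lhs = t * (1.0 - F_archimax(1.0 - x / t, 1.0 - y / t))
```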
Multivariate symmetric logistic distributions {#subsecasymlog}
---------------------------------------------
Consider the c.d.f. defined by $$\label{eqasymlog} F(x,y) = \exp \bigl\{ - \bigl(e^{-x/s} + e^{-y/s}
\bigr)^s \bigr\},$$ for each $x,y \in{\mathbb R}$, which corresponds to the bivariate extreme value distribution with Gumbel univariate margins $F_1(x)=F_2(x)=\exp\{ - e^{-x} \} $ and symmetric logistic s.t.d.f. $L(x,y)= (x^{1/s} + y^{1/s})^s$, where $0 < s \leq1$. This distribution was introduced in @tawn1988; see, for example, @beirlantgoegebeursegersteugels2004, page 304. Expanding $
t [ 1- F \{F_1^{-1}(1-t^{-1} x), F_2^{-1}(1- t^{-1} y)
\} ]$ leads to $$L(x,y) +t^{-1} M(x,y) + t^{-2} N(x,y) + o
\bigl(t^{-2} \bigr), $$ where $$\begin{aligned}
M(x,y) &:=& \tfrac{1}{2} \bigl(x x^{1/s}+y y^{1/s}\bigr)
\bigl\{L(x,y)\bigr\}^{1-1/s}- \tfrac
{1}{2}\bigl\{L(x,y)\bigr
\}^2,
\\
N(x,y)&:=&\frac{1}{3} \bigl(x^2 x^{1/s}+y^2
y^{1/s}\bigr)\bigl\{L(x,y)\bigr\}^{1-1/s}
\\
&&{} +\frac{1-s}{8s}
(xy)^{1/s}(x -y)^2 \bigl\{L(x,y)\bigr\}^{1-2/s}
\\
&&{} +\frac{1}{3! }\bigl\{L(x,y)\bigr\}^3 -
\frac{1}{2} \bigl(x x^{1/s}+y y^{1/s}\bigr)\bigl\{L(x,y)
\bigr\}^{2-1/s}.\end{aligned}$$ This allows us to identify $\rho= \rho^\prime= -1$. The identification of the second- and third-order terms was previously derived by @ledfordtawn1997.
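As in the Archimax case, the expansion can be checked numerically; a sketch for $s=1/2$, comparing the first two terms at a large value of $t$ (the quantile map is the Gumbel one, $F_1^{-1}(u)=-\log(-\log u)$; an illustration, not the paper's code):

```python
import math

def L_slog(x, y, s):
    """Symmetric logistic s.t.d.f. L(x,y) = (x^{1/s} + y^{1/s})^s."""
    return (x ** (1.0 / s) + y ** (1.0 / s)) ** s

def F_slog(x, y, s):
    """C.d.f. (eqasymlog), with Gumbel univariate margins."""
    return math.exp(-(math.exp(-x / s) + math.exp(-y / s)) ** s)

def M_slog(x, y, s):
    """Second-order term M from the expansion above."""
    L = L_slog(x, y, s)
    return 0.5 * (x * x ** (1.0 / s) + y * y ** (1.0 / s)) * L ** (1.0 - 1.0 / s) \
        - 0.5 * L ** 2

s, x, y, t = 0.5, 1.0, 1.0, 1e5
Finv = lambda u: -math.log(-math.log(u))      # Gumbel quantile function
lhs = t * (1.0 - F_slog(Finv(1.0 - x / t), Finv(1.0 - y / t), s))
```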
Finite sample performances {#secsimu}
==========================
The purpose of this section is to evaluate the performance of the estimators of $L$ introduced in Section \[secprocedure\]. For simplicity, we will focus on dimension 2, and simulate samples from the distributions presented in Section \[secexamples\]. Thanks to the homogeneity property, one can focus on the estimation of $t \mapsto L(1-t,t)$ for $0 \leq t \leq1$, which coincides with the Pickands dependence function $A$; see, for example, @beirlantgoegebeursegersteugels2004, page 267. Considering first the estimation at $t=1/2$ leads to the definition of aggregated versions of our estimators. These new estimators will then be compared in terms of $L^1$-errors for $L$ and associated level curves.
Estimators in practice {#subseccorrections}
----------------------
Let us start with the estimation of $L(1/2,1/2)$ for the bivariate Student distribution with 2 degrees of freedom. This model is a particular case of Sections \[subsecResnick-style\] and \[subsecelliptical\]. For one sample of size 1000, Figure \[sec-simu1-stu2\] gives, as functions of $k$, the estimates of $L$ at the point $(1/2,1/2)$ given by $\hat L_{k}$, ${\accentset{\circ}{L}}_{k}$ and $\tilde L_{k}$, respectively, defined by (\[eq1storderhat\]), (\[defestimB\]) and (\[defestimC\]). For the last two estimators, the parameters have been tuned as follows: $a=0.4$, $k_\rho= 990$ and $\rho$ estimated using (\[defrho-hat\]) with $a=r=0.4$. These values have been selected empirically, based on intensive simulation, and will be kept throughout the paper.
![Estimation of $L(1/2,1/2)$ for the bivariate $\operatorname{Student}(2)$ law based on a sample of size 1000.[]{data-label="sec-simu1-stu2"}](1305f01.eps)
One can check from Figure \[sec-simu1-stu2\] that the empirical estimator $\hat L_{k}$ behaves fairly well in terms of bias for small values of $k$. Besides, the bias is efficiently corrected by the two estimators ${\accentset{\circ}{L}}_{k}$ and $\tilde L_{k}$. Since the bias almost vanishes along the range of $k$, one can think of reducing the variance through an aggregation in $k$ (via mean or median) of ${\accentset{\circ}{L}}_{k}$ or $\tilde L_{k}$. This leads us to consider the following two estimators: $$\begin{aligned}
{\accentset{\circ}{L}}_{\mathrm{agg}} &:=& \operatorname{Median}({\accentset{\circ}{L}}_{k}, k=1,
\ldots, \kappa_n),
\\
\tilde L_{\mathrm{agg}}&:=&\operatorname{Median}(\tilde L_{k}, k=1, \ldots,
\kappa _n), $$ where $n$ is the sample size and $\kappa_n$ is an appropriate fraction of $n$. Their performance will be compared to that of the family $\{
\hat L_k, k=1,\ldots,n-1\}$. Simplified notation $\{\hat L_k, k\}$ will be used instead of $\{\hat L_k, k=1,\ldots,n-1\}$. Because any s.t.d.f. $L$ satisfies $\max(t,1-t) \leq L(1-t,t) \leq1$, the competitors have been corrected so that they satisfy the same inequalities.
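The aggregation and the bound correction described above can be sketched as follows (`estimates` stands for precomputed values of ${\accentset{\circ}{L}}_{k}$ or $\tilde L_{k}$ across $k$; the numbers below are toy values, not simulation output):

```python
import numpy as np

def aggregate(estimates, kappa_n):
    """Median-aggregation over k = 1, ..., kappa_n, as in L_agg."""
    return float(np.median(np.asarray(estimates)[:kappa_n]))

def clamp_stdf(value, t):
    """Enforce the universal bounds max(t, 1 - t) <= L(1 - t, t) <= 1."""
    return min(1.0, max(value, t, 1.0 - t))

est_k = [0.80, 0.78, 0.83, 0.79, 0.95]   # toy per-k estimates of L(1/2, 1/2)
```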
If $\kappa_n$ satisfies the conditions imposed on $k$ in Theorems \[thmbiasB\] and \[thmbias-Lu\], then the aggregated estimators ${\accentset{\circ}{L}}_{\mathrm{agg}}$ and $\tilde L_{\mathrm{agg}}$ inherit the asymptotic properties of ${\accentset{\circ}{L}}_{k}$ and $\tilde L_{k}$. Indeed, all the estimators converge jointly, since they are based on a single process.
\[remmixture\] In the following simulation study, $\kappa_n$ is arbitrarily fixed to $n-1$. Such a choice is open to criticism, since it does not satisfy the theoretical assumptions mentioned above. It is motivated here by the fact that the bias happened to be efficiently corrected even for very large values of $k$, as already illustrated in Figure \[sec-simu1-stu2\]. Note, however, that such a choice would not systematically be the right one. In the presence of more complex models such as mixtures, $\kappa_n$ should not exceed the size of the subpopulation with the heaviest tail. To illustrate this point, take, for example, the bivariate c.d.f. $F=pG +
(1-p)H$, where $G$ is the c.d.f. of the bivariate $\operatorname{BPII}(3)$ model, and $H$ is the uniform c.d.f. on $[0,1]^2$. Then the s.t.d.f. is $L(x,y)=x+y-(1/x + 1/y)^{-1}$, and only a fraction $p$ of the data belong to the targeted domain of attraction, so $\kappa_n$ should not exceed $pn$.
Classical criteria of quality of an estimator $\hat\theta$ of $\theta
$ are the absolute bias (ABias) and the mean square error (MSE) defined by $$\begin{aligned}
\operatorname{ABias}&=&\frac{1}{N}\sum_{i=1}^N
\bigl{\vert}\hat{\theta}^{(i)}-\theta\bigr{\vert},
\\
\operatorname{MSE}&=&\frac{1}{N}\sum_{i=1}^N
\bigl(\hat{\theta}^{(i)}-\theta\bigr)^2,\end{aligned}$$ where $N$ is the number of replicates of the experiment and $\hat
{\theta}^{(i)}$ is the estimate from the $i$th sample. Note that what we call *ABias* is also referred to as *MAE* (Mean Absolute Error) in the literature. Figure \[figabias-mse-st\] plots these criteria for the estimation of $L(1/2,1/2)$ in the bivariate $\operatorname{Student}(2)$ model when $n=1000$ and $N=200$.
![ ABias, MSE for the estimation of $L(1/2,1/2)$ in the bivariate $\operatorname{Student}(2)$ model when $n=1000$ as a function of $k$.[]{data-label="figabias-mse-st"}](1305f02.eps)
Figure \[figabias-mse-st\] exhibits the strong dependence of the behavior of $\hat{L}_k$ on $k$, as well as the efficiency of the bias correction procedures. The estimator ${\accentset{\circ}{L}}_k$ given by (\[defestimB\]) outperforms the estimator $\tilde L_k$ defined by (\[defestimC\]), no matter the value of $k$. Moreover, the ABias and MSE curves associated with ${\accentset{\circ}{L}}_k$ almost reach the minimum of those of $\hat{L}_k$. Finally, the aggregated version ${\accentset{\circ}{L}}_{\mathrm{agg}}$ performs surprisingly well in the estimation of the s.t.d.f. $L$. First, its performance is similar to the best reachable from the original estimator $\hat L_k$. Second, it gets rid of the delicate choice of a threshold $k$ (or at least simplifies this choice; see Remark \[remmixture\]). These comparisons have also been made for five other models obtained from Section \[secexamples\]. The results are very similar to the ones obtained for the bivariate $\operatorname{Student}(2)$ distribution and are therefore not presented.
Comparisons in terms of $L^1$-error for $L$ {#subsecL1}
-------------------------------------------
The comparisons are now carried out not only at a single point, but for the whole function, using an $L^1$-error defined as follows: $$\label{eqnorm1} \frac{1}{T+1}\sum_{t=1}^T
\biggl{\vert}\hat L \biggl(1-\frac{t}{T},\frac
{t}{T} \biggr)- L
\biggl(1-\frac{t}{T},\frac{t}{T} \biggr)\biggr{\vert},$$ where $T$ is the size of the subdivision of $[0,1]$. Figure \[figl1-norm-L\] gives the boxplots based on $N=100$ realizations of ${\accentset{\circ}{L}}_{\mathrm{agg}}, \tilde L_{\mathrm{agg}}$ and $\{\hat L_k, k\}$ for $T=30$ in the case of six bivariate models:
- First row: Cauchy and $\operatorname{Student}(2)$ models;
- Second row: $\operatorname{BPII}(3)$ model and Symmetric logistic model with $s=1/3$;
- Third row: Archimax model with logistic generator $L(x,y)=(x^2+y^2)^{1/2}$ and mixed generator $L(x,y)=(x^2+y^2+xy)/(x+y)$.
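The criterion (\[eqnorm1\]) is straightforward to compute; a sketch, using the mixed generator listed above as a toy target (an illustration, not the paper's code):

```python
def l1_error(L_hat_fn, L_fn, T=30):
    """Discretized L^1-error (eqnorm1) between an estimate and the true s.t.d.f."""
    return sum(abs(L_hat_fn(1 - t / T, t / T) - L_fn(1 - t / T, t / T))
               for t in range(1, T + 1)) / (T + 1)

# mixed generator L(x,y) = (x^2 + y^2 + xy)/(x + y), used as a toy target
L_mixed = lambda x, y: (x * x + y * y + x * y) / (x + y) if x + y > 0 else 0.0
# a fake "estimator" off by a constant 0.01, to exercise the criterion
approx = lambda x, y: L_mixed(x, y) + 0.01
```

A constant offset of $0.01$ yields an $L^1$-error of $\tfrac{T}{T+1}\cdot 0.01$ under this normalization.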
![Boxplot of the $L^1$-error of function $L$ for the estimators $\protect{\accentset{\circ}{L}}_{\mathrm{agg}}, \tilde L_{\mathrm{agg}}$ and $\{\hat L_k, k\}$. First row: bivariate Cauchy model (left) and bivariate $\operatorname{Student}(2)$ model (right). Second row: bivariate $\operatorname{BPII}(3)$ model (left) and bivariate Symmetric logistic model (right). Third row: bivariate Archimax model with logistic (left) and mixed generator (right).[]{data-label="figl1-norm-L"}](1305f03.eps)
As already observed in Figure \[figabias-mse-st\], the estimator ${\accentset{\circ}{L}}_{\mathrm{agg}}$ is again very competitive compared to the best element of $\{\hat L_k, k\}$, no matter the choice of model. Recall that the value of $k$ leading to the best $\hat L_k$ depends crucially on the model and is consequently unknown in practice, which invites practitioners to apply this new procedure.
The estimator $ \tilde L_{\mathrm{agg}}$ is clearly less competitive than ${\accentset{\circ}{L}}_{\mathrm{agg}}$. Given these results, we will not pursue the $ \tilde L_{\mathrm{agg}}$ estimator in the rest of this paper, and will focus our attention on the behavior of ${\accentset{\circ}{L}}_{\mathrm{agg}}$.
Comparisons between ${\accentset{\circ}{L}}_{\mathrm{agg}}$, a convex version of ${\accentset{\circ}{L}}_{\mathrm{agg}}$, and Peng’s estimator {#subseccompar}
----------------------------------------------------------------------------------------
A natural step is now to compare the performance of our best estimator ${\accentset{\circ}{L}}_{\mathrm{agg}}$ with an existing competitor, recently introduced by @peng2010. In his work, Peng provides a data-driven method that chooses the threshold when estimating a s.t.d.f. Another interesting task is to compare ${\accentset{\circ}{L}}_{\mathrm{agg}}$ with a convexified version of itself, since any s.t.d.f. is a convex function; see, for example, @beirlantgoegebeursegersteugels2004 [Section 8.2.2] or @dehaanferreira2006 [Section 6.1.5]. Note that a general convexification procedure has been proposed in dimension 2 by @filsguillousegers2008; see also some alternative suggestions in @bucherdettevolgushev2011.
In order to take maximal advantage of this simulation study, each of the three models implemented has been considered in two versions. The first model is the Gaussian one, simulated with Pearson correlation coefficient $\pm0.5$. The Gaussian model is a particular case of elliptical distributions (see Section \[subsecelliptical\]), which illustrates the asymptotically independent situation; cf. Remark \[rkAI\]. The second model is the bivariate Symmetric logistic one, introduced in Section \[subsecasymlog\], with two different strengths of dependence (close to independence on the left column, stronger dependence on the right column). The third model is the bivariate Student family, introduced in Sections \[subsecResnick-style\] and \[subsecelliptical\] as a particular case. Two strengths of dependence have also been chosen, close to asymptotic independence on the left column and stronger dependence on the right column.
Our results, summarized in Figure \[figl1-norm-L-rev1\], will thus exhibit in particular how the performance of the estimation of the s.t.d.f. depends on the distance to the asymptotic independence case. The $y$-axis scale has been fixed across all six cases, so that one can see that the estimation of the s.t.d.f. is a more ambitious problem under asymptotic independence. However, our estimator ${\accentset{\circ}{L}}_{\mathrm{agg}}$ still has nice properties when compared to the empirical estimator $\hat{L}_k$.
The convex version ${\accentset{\circ}{L}}_{\mathrm{aggc}}$ performs essentially as well as ${\accentset{\circ}{L}}_{\mathrm{agg}}$. A reason for this is that, by construction, our estimator is actually not far from a convex function. Balancing the cost of convexifying against the benefit in performance thus motivates the simple use of ${\accentset{\circ}{L}}_{\mathrm{agg}}$.
Finally, regarding Peng’s estimator $\hat{L}_P$, one observes that it is an interesting alternative to the original family $\{\hat{L}_k,k\}$; it never outperforms our proposal, however.
![Boxplot of the $L^1$-error of function $L$ for the estimators $\protect{\accentset{\circ}{L}}_{\mathrm{agg}}, \protect{\accentset{\circ}{L}}_{\mathrm{aggc}}, \hat L_P$ and $\{
\hat L_k, k\}$. First row: bivariate Normal model with correlation $\tau$: $\tau=0.5$ (left) and $\tau=-0.5$ (right). Second row: bivariate Symmetric logistic$(s)$ model: $s=1/1.2$ (left) and $s=1/3$ (right). Third row: bivariate $\operatorname{Student}(\nu)$ model: $\nu=20$ (left) and $\nu
=2$ (right).[]{data-label="figl1-norm-L-rev1"}](1305f04.eps)
Estimating a failure probability
--------------------------------
Let us illustrate in this subsection the estimation of an arbitrarily chosen failure probability $P(X>10^4$ or $Y>2
\cdot10^4)$, where $(X,Y)$ comes from the $\operatorname{BPII}(3)$ model, so that $P(X>10^4$ or $Y>2 \cdot10^4) = 0.00011665$. Samples of size $n=1000$ are considered. Empirical estimation is thus useless for evaluating the probability of exceeding such extreme values of $X$ or $Y$, and an extrapolation based on extreme value theory is needed.
First assume that it is known that the margins are standard Pareto. This probability can be approximated by $$P\bigl(X>10^4 \mbox{ or } Y>2\cdot10^4\bigr)\simeq
\bigl(10^{-4}+5 \cdot 10^{-5} \bigr) L(2/3,1/3),$$ which naturally comes from (\[eqL\]), the projection on the simplex and the homogeneity of $L$. Estimating the unknown parameter $L(2/3,1/3)$ with our candidate ${\accentset{\circ}{L}}_{\mathrm{agg}}$ and the original family $\{\hat L_k, k\}$ gives several boxplots (based on 500 replicates) that are presented in Figure \[figproba\]. The comparison of these estimates is again favorable to ${\accentset{\circ}{L}}_{\mathrm{agg}}$.
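This plug-in approximation is easy to reproduce. The sketch below (a generic stand-in, with the function name ours; any estimate of $L$ can be plugged in) applies the projection on the simplex together with the homogeneity of $L$:

```python
def failure_prob_approx(p1, p2, L):
    """Approximate P(X > u1 or Y > u2) for extreme thresholds u1, u2 with known
    marginal exceedance probabilities p1 = P(X > u1), p2 = P(Y > u2):
    P ~ L(p1, p2) = (p1 + p2) * L(p1/(p1+p2), p2/(p1+p2)) by homogeneity."""
    s = p1 + p2
    return s * L(p1 / s, p2 / s)
```

With $p_1=10^{-4}$ and $p_2=5\cdot10^{-5}$ this reduces to $1.5\cdot10^{-4}\,L(2/3,1/3)$; as sanity checks, the comonotone s.t.d.f. $L(x,y)=\max(x,y)$ recovers $p_1$ and the independence s.t.d.f. $L(x,y)=x+y$ recovers $p_1+p_2$.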

\[figproba\]
We also investigated the possible use of a second-order term in the approximation of the probability $P(X>10^4$ or $Y>2\cdot10^4)$, making use of the estimator $$\bigl(10^{-4}+5\cdot10^{-5}\bigr) {\accentset{\circ}{L}}_{\mathrm{agg}} \biggl(\frac{2}{3},\frac{1}{3}\biggr) + \biggl(\frac{k}{n}\biggr)^{\hat\rho} \bigl(10^{-4}+5\cdot10^{-5}\bigr)^{1-\hat\rho} \hat\Delta_{k,2^{-1/\hat\rho}} \biggl(\frac{2}{3},\frac{1}{3}\biggr).$$ The results were so similar to those of Figure \[figproba\] that we do not report them.
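The second-order corrected approximation can be sketched as follows (a sketch only: the aggregated estimate, $\hat\rho$ and the $\hat\Delta$ statistic are assumed precomputed, and the function and argument names are ours):

```python
def prob_second_order(p_sum, L_agg, rho_hat, delta_hat, k, n):
    """Second-order corrected approximation of the failure probability:
    p_sum * L_agg + (k/n)**rho_hat * p_sum**(1 - rho_hat) * delta_hat,
    where p_sum = 1e-4 + 5e-5 in the example of the text.
    With delta_hat = 0 it reduces to the first-order approximation."""
    return p_sum * L_agg + (k / n) ** rho_hat * p_sum ** (1 - rho_hat) * delta_hat
```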
Second, when the margins are not assumed to be known, the estimation of $p_1=1-F_1(10^4)$ and $p_2=1-F_2(2\cdot10^4)$ can be reached by the POT method \[see, e.g., @beirlantgoegebeursegersteugels2004, Section 7.4\] for several values of a threshold. After the study of mean residual life plots and quantile plots, the thresholds have been fixed to be $X_{n-k,n}$ and $Y_{n-k,n}$ for $k=200$. The POT estimates deduced with these thresholds are, respectively, denoted by $\hat{p}_1$ and $\hat{p}_2$. The probability $P(X>10^4$ or $Y>2 \cdot10^4)$ is then approximated by $$P\bigl(X>10^4\mbox{ or } Y>2\cdot10^4\bigr)\simeq (
\hat{p}_1+\hat {p}_2 ) L \biggl(\frac{\hat{p}_1}{\hat{p}_1+\hat{p}_2},
\frac
{\hat{p}_2}{\hat{p}_1+\hat{p}_2} \biggr).$$ Estimating on each repetition the unknown parameter $L (\hat{p}_1/(\hat{p}_1+\hat{p}_2),\hat{p}_2/(\hat{p}_1+\hat
{p}_2) )$ with our candidate ${\accentset{\circ}{L}}_{\mathrm{agg}}$ and the original family $\{
\hat L_k, k\}$ gives several boxplots (based on 500 replicates) presented in Figure \[figprobabis\].
![Boxplot (500 replicates) of the estimation of $P(X>10^4$ or $Y>2\cdot10^4)$ when $(X,Y)$ is drawn from the $\operatorname{BPII}(3)$ model with sample size $n=1000$ and estimating margins by POT method.[]{data-label="figprobabis"}](1305f06.eps)
It seems clear that the uncertainty on the margins $F_1$ and $F_2$ has much more influence than that of the s.t.d.f. $L$. Such findings corroborate previous studies; see, for example, @bruuntawn1998 and @dehaansinha1999.
$Q$-curves {#subsecQcurve}
----------
Another informative representation of a function of several variables is through its level sets. For the function $L$, this amounts to considering, for any positive real $c$, sets of the form $\{(x,y)\in
\mathbb{R}_+^2, L(x,y)\leq c\}$. By homogeneity, these sets are all characterized by $$Q:=\bigl\{(x,y)\in\mathbb{R}_+^2, L(x,y)\leq1\bigr\}.$$ Following @dehaanferreira2006 \[([-@dehaanferreira2006]), page 245\], the boundary of this set can be written as $$\partial Q= \bigl\{ \bigl(b(\theta)\cos\theta,b(\theta)\sin\theta \bigr):
b(\theta)=\bigl(L(\cos\theta,\sin\theta)\bigr)^{-1}, \theta
\in[0,\pi/2] \bigr\}.$$ The estimation of $\partial Q$ is naturally obtained by replacing $L$ by any estimator, and this is done here for the estimators ${\accentset{\circ}{L}}_{\mathrm{agg}}$ and $\{\hat L_k, k\}$. Figure \[figneptune\] (left) exhibits the bias phenomenon (as $k$ increases) induced by $\hat L_k$ in the estimation of the $Q$-curve. The bias effect on $\hat L_k$ is illustrated with $k=50$, $k=100$ and $k=800$. The correction of the bias with ${\accentset{\circ}{L}}_{\mathrm{agg}}$ is effective. As in the previous section, the comparison of the different estimators is provided in terms of a global criterion based on the $L^1$-norm, given by $$\frac{\pi}{2(T+1)}\sum_{t=0}^T \biggl{\vert}\hat b \biggl(\frac{\pi t}{2T} \biggr) - b \biggl(\frac{\pi t}{2T} \biggr) \biggr{\vert}\biggl\{ \cos \biggl(\frac{\pi t}{2T} \biggr)+\sin \biggl(\frac{\pi t}{2T} \biggr) \biggr\}.$$ Figure \[figl1-norm-Q\] displays the boxplots of this measure, based on $N=100$ realizations and for $T=30$, under the six bivariate models given in the previous section.
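For illustration, the polar parametrization of $\partial Q$ and the discretized $L^1$ criterion above can be sketched as follows, using the symmetric logistic s.t.d.f. as a known test case (function names are ours):

```python
import numpy as np

def L_logistic(x, y, s):
    # symmetric logistic s.t.d.f.: L(x, y) = (x**(1/s) + y**(1/s))**s, 0 < s <= 1
    return (x ** (1.0 / s) + y ** (1.0 / s)) ** s

def q_boundary(L, thetas):
    # polar parametrization of the boundary of Q = {L <= 1}:
    # b(theta) = 1 / L(cos(theta), sin(theta))
    return np.array([1.0 / L(np.cos(t), np.sin(t)) for t in thetas])

def l1_q_error(b_hat, b_true, T):
    # discretized L^1 criterion of the text, on the grid theta_t = pi*t/(2T)
    thetas = np.pi * np.arange(T + 1) / (2 * T)
    w = np.cos(thetas) + np.sin(thetas)
    return np.pi / (2 * (T + 1)) * np.sum(np.abs(b_hat - b_true) * w)
```

On the axes one always has $b(0)=b(\pi/2)=1$ since $L(1,0)=L(0,1)=1$, and the criterion vanishes when $\hat b = b$.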
![Left: Estimation of the $Q$-curve for the bivariate $\operatorname{Student}(2)$ law based on a sample of size 1000. Right: Estimated $Q$-curve for [the wave heights data]{} introduced in @dehaanferreira2006.[]{data-label="figneptune"}](1305f07.eps)
The estimation of the $Q$-curve based on the original estimator $\hat L_k$ is strongly sensitive to the choice of $k$: the bias (resp., the variability) is an increasing (resp., decreasing) function of $k$. The performance of ${\accentset{\circ}{L}}_{\mathrm{agg}}$ is similar to that of the best $\hat L_k$, which is unknown in practice. These features corroborate the conclusions drawn in Section \[subsecL1\].
To close this section, let us illustrate the $Q$-curve estimation on the wave heights data set of @dehaanferreira2006, page 207. As explained therein, wave height (HmO) and still water level (SWL) have been recorded during 828 storm events on the Dutch coast. The analogue of Figure 7.2 from @dehaanferreira2006 is reported in Figure \[figneptune\] (right). Although the two estimated curves are not very close, the conclusion remains the same: the estimated boundary is concave, which indicates that high values of the two variables are dependent.
![Boxplot of the $L^1$-error of $Q$-curve for the estimators $\protect{\accentset{\circ}{L}}_{\mathrm{agg}}$ and $\{\hat L_k, k\}$. First row: bivariate Cauchy model (left) and bivariate $\operatorname{Student}(2)$ model (right). Second row: bivariate $\operatorname{BPII}(3)$ model (left) and bivariate Symmetric logistic model (right). Third row: bivariate Archimax model with logistic (left) and mixed generator (right).[]{data-label="figl1-norm-Q"}](1305f08.eps)
Estimation of the second-order components $\rho$ and $M$ {#secrho}
=================================================
In this section, we focus on the estimation of the function $M$ coming from the second-order condition (\[eq2ndorder\]) and on the estimation of its homogeneity parameter $1-\rho$.
Second-order parameter $\rho$
--------------------------
A possible way to estimate $\rho$ is to apply to each margin one of the techniques developed in the univariate setting; see, for example, @gomesdehaanpeng2002 or @ciupercamercadier2010. Other methods make use of the multivariate structure of the data; see, for example, @peng2010 and also @goegebeurguillou2013 in a slightly different framework. The construction described here likewise takes advantage of the multivariate information in the sample. To this end, the following proposition shows that a quantity of interest is the ratio of two terms $\hat\Delta_{k,a}$, defined by (\[eqDelta\]).
\[proplim-quotient\] Assume that the conditions of Proposition \[propcv-ps-L\] are fulfilled and fix positive real numbers $r$ and $a\in(0,1)$. Assume moreover that the function $M$ never vanishes except on the axes. Then, as $n$ tends to infinity, for every $\varepsilon>0$ and $T>0$, $$\sup_{\varepsilon\leq x_1,\ldots,x_d \leq T} \biggl{\vert}\frac{ \hat
\Delta_{k,a}(r {\mathbf x})} { \hat\Delta_{k,a}({\mathbf x}) } -
r^{1-\rho} \biggr{\vert}\stackrel{\mathbb{P}} {\longrightarrow} 0.$$
If the requirement that the function $M$ be either positive or negative on the positive quadrant does not hold, one could consider the integral of $(\hat{\Delta}_{k,a}({\mathbf x}))^2$ over the set $\{{\mathbf x}=(x_1,\ldots,x_d) : x_1^2+\cdots+x_d^2=1\}$ and prove a result like Lemma \[lemdelta\] for this statistic. Then the integral of $M^2$ appears in the denominator in Proposition \[proplim-quotient\] instead of $M$ itself, and the sign of $M$ no longer matters. This will be part of future work.
A family of consistent estimators of the parameter $\rho$ can be derived from Proposition \[proplim-quotient\]. $$\label{defrho-hat} \hat\rho_{k,a,r}({\mathbf x}):= \biggl(1 -
\frac{1}{\log r} \log\biggl{\vert}\frac{ \hat\Delta_{k,a}(r {\mathbf x})} { \hat\Delta_{k,a}({\mathbf x})
} \biggr{\vert}\biggr)
\wedge0.$$ The following property can be obtained from the asymptotic expansion given in Proposition \[propdev-asympt-L\].
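The estimator (\[defrho-hat\]) is a one-liner once the $\hat\Delta_{k,a}$ statistics are available; a sketch (function and argument names are ours):

```python
from math import log

def rho_hat(delta_rx, delta_x, r):
    """Estimator of rho from the ratio of hat-Delta statistics at r*x and x:
    rho_hat = min(1 - log|delta_rx / delta_x| / log(r), 0).
    The cap at 0 is consistent with the assumption rho < 0."""
    return min(1.0 - log(abs(delta_rx / delta_x)) / log(r), 0.0)
```

By Proposition \[proplim-quotient\] the ratio tends to $r^{1-\rho}$, so feeding in the limiting value reproduces $\rho$: for instance, with $\rho=-1$ and $r=2$ the ratio is $4$ and the estimator returns $-1$.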
\[prop3estim-rho\] Assume that the conditions of Proposition \[propdev-asympt-L\] are fulfilled, and fix positive real numbers $r$ and $a\in(0,1)$. Consider the estimator of $\rho$ defined by (\[defrho-hat\]). Assume moreover that the function $M$ never vanishes except on the axes. Then, as $n$ tends to infinity, $$\sqrt{k} \alpha\biggl(\frac{n}{k}\biggr) \bigl\{ \hat
\rho_{k,a,r}({\mathbf x}) - \rho \bigr\} {\stackrel}{{d}} {\longrightarrow}\hat
Z_{ \rho, a, r}({\mathbf x}),$$ in $D([\varepsilon,T]^d)$ for every $\varepsilon>0$ and $T>0$, with $$\hat Z_{ \rho, a, r}({\mathbf x}): = \frac{a^{-1} Z_L (a {\mathbf
x} )-Z_L({\mathbf x})}{ (a^{-\rho} -1) M({\mathbf x}) \log r } -\frac
{a^{-1} Z_L (ra {\mathbf x} )-Z_L(r {\mathbf x})}{ (a^{-\rho} -1)
M({\mathbf x}) r^{1-\rho} \log r}.$$
Figure \[figbox-rho\] illustrates the finite sample behavior of this estimator of $\rho$ for a collection of bivariate models introduced in Section \[secexamples\], for which the true value of $\rho$ is equal to $-1$.
![Boxplot of 500 estimations of $\rho$ given by (\[defrho-hat\]) using samples of size 1000 drawn from six models: $\operatorname{Student}(2)$; $\operatorname{BPII}(3)$; Symmetric Logistic with $s=1/3$; Archimax model with logistic generator with $s=1/2$; Archimax model with mixed generator. Red line indicates the true value of $\rho=-1$.[]{data-label="figbox-rho"}](1305f09.eps)
These boxplots show that the estimator performs reasonably well in median, regardless of the model, although the variability is rather large. Fortunately, simulation studies suggest that this variability has only a minor influence on the estimation of $L$.
Second-order function $M$
-------------------------
Recall that from (\[eqdev-asympt-La-L\]) the asymptotic bias of $\hat L_{k,a}({\mathbf x})$ is given by $\alpha(\frac{n}{k}) a^{-\rho}
M({\mathbf x})$. In order to circumvent an estimation of the term $\alpha
(n/k)$, a renormalization is needed, focusing, for instance, on the estimation of $M({\mathbf x})/M({\mathbf1/2})$ where ${\mathbf1/2}=(1/2,\ldots,1/2)$. Thanks to (\[eqDeltaCV\]), this ratio can be consistently estimated by $$\frac{ \hat\Delta_{k,a}({\mathbf x})}{\hat\Delta_{k,a}({\mathbf1/2})}$$ as soon as $k$ is a well-chosen intermediate sequence. The asymptotic normality can also be derived from analogous arguments to those used in the proof of Proposition \[prop3estim-rho\]. Details are not presented here for the sake of simplicity.
Figure \[figl1-norm-M\] summarizes the behavior of the estimator of the curve $t\mapsto M(t,1-t)/M(1/2,1/2)$ through boxplots of the $L^1$-error, defined as in (\[eqnorm1\]). We observe from this figure that the best estimation is reached for large values of $k$. This feature depends neither on the degree of asymptotic dependence in the Symmetric logistic model, nor on the strength of the bias of the original estimator $\hat L_k$ detected in Figure \[figl1-norm-L\]. These graphs confirm that the asymptotic bias is remarkably well estimated for large values of $k$. This helps to understand why the bias subtraction is accurate for large or very large choices of $k$, as also commented in Section \[subseccorrections\].
![Boxplot of the $L^1$-error of $M(\cdot)/M(1/2,1/2)$-curve. First row: bivariate logistic model with $s=0.1$ (left) and with $s=0.5$ (right). Second row: bivariate logistic model with $s=0.9$ (left) and bivariate Archimax with mixed generator (right).[]{data-label="figl1-norm-M"}](1305f10.eps)
Concluding comments
===================
This paper deals with the estimation of the extremal dependence structure in a multivariate context. Focusing on the s.t.d.f., the empirical counterpart is the natural nonparametric reference. A common difficulty when modeling extreme events is the delicate choice of the number of observations used in the estimation, which spoils the good performance of the empirical estimator. The aim of this paper has been to correct the asymptotic bias of the empirical estimator, so that the choice of the threshold becomes less sensitive. Two asymptotically unbiased estimators have been proposed and studied, both theoretically and numerically. The estimator defined in Section \[subsecmethodB\] proves to outperform the original estimator, whatever the model considered. Its aggregated version defined in Section \[subseccorrections\] appears as a worthy candidate for estimating the s.t.d.f.
Proofs {#secproofs}
======
[Proof of Proposition \[propcv-ps-L\]]{} Denote by $U^{(j)}_i$ the uniform random variables $U^{(j)}_i=1-F_j(X^{(j)}_i)$ for $j=1,\ldots,d$. Introducing $$V_k({\mathbf x})= \frac{1}k \sum_{i=1}^n
\ind_{ \{U^{(1)}_i \leq
kx_1/n~\mathrm{or}~\ldots~\mathrm{or}~U^{(d)}_i \leq kx_d/n
\} }$$ allows us to rewrite $\hat L_k$ as follows: $$\hat L_k ({\mathbf x}) = V_k \biggl(\frac{n}{k}
U^{(1)}_{[kx_1],n}, \ldots, \frac{n}{k}U^{(d)}_{[kx_d],n}
\biggr).$$ Write $$\begin{aligned}
&& \hat L_k({\mathbf x}) - L({\mathbf x})
\\
&&\qquad = V_k
\biggl(\frac{n}{k} U^{(1)}_{[kx_1],n}, \ldots,
\frac{n}{k}U^{(d)}_{[kx_d],n} \biggr)
\\
&&\quad\qquad{} - \frac{n}{k}
\bigl[ 1- F\bigl\{F_1^{-1}\bigl(1-U^{(1)}_{[kx_1],n}
\bigr),\ldots,F_d^{-1}\bigl(1-U^{(d)}_{[kx_d],n}
\bigr) \bigr\}\bigr]
\\
&&\quad\qquad{}+ \frac{n}{k} \bigl[ 1- F\bigl\{F_1^{-1}
\bigl(1-U^{(1)}_{[kx_1],n}\bigr),\ldots,F_d^{-1}
\bigl(1-U^{(d)}_{[kx_d],n}\bigr) \bigr\}\bigr]
\\
&&\quad\qquad{} - L \biggl(
\frac{n}{k} U^{(1)}_{[kx_1],n}, \ldots, \frac{n}{k}U^{(d)}_{[kx_d],n}
\biggr)
\\
&&\quad\qquad{} + L \biggl(\frac{n}{k} U^{(1)}_{[kx_1],n}, \ldots,
\frac
{n}{k}U^{(d)}_{[kx_d],n} \biggr) - L({\mathbf x}),\end{aligned}$$ and denote $A_{1,k}({\mathbf x})$ \[resp., $A_{2,k}({\mathbf x})$ and $A_{3,k}({\mathbf x})$\] the first line (resp., second and third lines) of the right-hand side.
Applying @dehaanferreira2006 \[([-@dehaanferreira2006]), Proposition 7.2.3\] leads to $$\sqrt{k} A_{1,k}({\mathbf x}) \stackrel{d} {\to} W_L({\mathbf x}),$$ in $D([0,T]^d)$ for every $T>0$ and for any intermediate sequence, where $W_L$ is a continuous centered Gaussian process with covariance structure specified in Proposition \[propdev-asympt-L\]. Due to the Skorohod construction we can write $$\label{eqskorohodA1} \sup_{0\leq x_1,\ldots, x_d \leq T} \bigl{\vert}\sqrt{k}
A_{1,k}({\mathbf x}) - W_L({\mathbf x}) \bigr{\vert}\to0\qquad
\mbox{a.s.},$$ which implies, since $\sqrt{k}\alpha(n/k)\to\infty$, $$\sup_{0\leq x_1,\ldots, x_d \leq T} \biggl{\vert}\frac{A_{1,k}({\mathbf
x})}{\alpha (n/k)} \biggr{\vert}=O_{\mathbb{P}} \biggl(\frac{1}{\sqrt{k}\alpha (n/k)} \biggr).$$ Again for any intermediate sequence, the proof of @dehaanferreira2006 \[([-@dehaanferreira2006]), Theorem 7.2.2\] ensures the convergence for $j=1,\ldots,d$ $$\label{eqEKSmarg1} \sup_{x\in[0,T]} \biggl{\vert}\sqrt{k} \biggl(
\frac{n}{k}U^{(j)}_{[kx],n} -x \biggr) +
W_L(x{\mathbf e}_j)\biggr{\vert}\to0 \qquad\mbox{a.s.},$$ and finally $$\label{eqskorohodA3} \sup_{0\leq x_1,\ldots, x_d \leq T} \Biggl{\vert}\sqrt{k}
A_{3,k}({\mathbf x}) +\sum_{j=1}^dW_L(x_j{
\mathbf e}_j) \partial_j L({\mathbf x}) \Biggr{\vert}\to 0
\qquad\mbox{a.s.}$$ As previously, this yields $$\sup_{0\leq x_1,\ldots, x_d \leq T} \biggl{\vert}\frac{A_{3,k}({\mathbf
x})}{\alpha (n/k)} \biggr{\vert}=O
\biggl(\frac
{1}{\sqrt{k}\alpha (n/k)} \biggr).$$ Since the intermediate sequence satisfies $\sqrt{k}\alpha (\frac
{n}{k} ) \to\infty$, it thus remains to prove that $$\sup_{0\leq x_1,\ldots, x_d \leq T} \biggl{\vert}\frac{A_{2,k}({\mathbf
x})}{\alpha(n/k)} -M({\mathbf x}) \biggr
{\vert}\to0 \qquad\mbox {a.s.}$$ The second-order condition that holds uniformly on $[0,T]^d$ in (\[eq2ndorder\]) yields $$\sup_{0\leq x_1,\ldots, x_d \leq T} \biggl{\vert}\frac{A_{2,k}({\mathbf
x})}{\alpha(n/k)} -M \biggl(
\frac{n}{k} U^{(1)}_{[kx_1],n}, \ldots, \frac{n}{k}U^{(d)}_{[kx_d],n}
\biggr)\biggr{\vert}\to0 \qquad\mbox{a.s.} $$ Then the result follows from $$\sup_{0\leq x_1,\ldots, x_d \leq T} \biggl{\vert}M({\mathbf x}) -M \biggl(
\frac
{n}{k} U^{(1)}_{[kx_1],n}, \ldots, \frac{n}{k}U^{(d)}_{[kx_d],n}
\biggr)\biggr{\vert}\to0 \qquad\mbox{a.s.}, $$ which is obtained combining (\[eqEKSmarg1\]) and the continuity of the function $M$.
[Proof of Proposition \[propdev-asympt-L\]]{} We use the notation introduced in the proof of Proposition \[propcv-ps-L\]. Thanks to the Skorohod construction, we can start from (\[eqskorohodA1\]). Combined with (\[eqskorohodA3\]), it is sufficient to prove the convergence $$\sup_{0\leq x_1,\ldots, x_d \leq T} \biggl{\vert}\sqrt{k} \biggl\{ A_{2,k}({
\mathbf x}) -\alpha\biggl(\frac{n}{k}\biggr) M({\mathbf x}) \biggr\} \biggr{\vert}\to0 \qquad\mbox{a.s.}$$ Note that the third-order condition, the uniformity on $[0,T]^d$ of the convergence in (\[eq3rdorder\]) and the continuity of $N$ yield $$A_{2,k}({\mathbf x}) = \alpha\biggl(\frac{n}{k}\biggr)M \biggl(
\frac{n}{k} U^{(1)}_{[kx_1],n}, \ldots, \frac{n}{k}U^{(d)}_{[kx_d],n}
\biggr)+ O_{\mathbb P} \biggl(\alpha\biggl(\frac{n}{k}\biggr)\beta
\biggl(\frac{n}{k}\biggr) \biggr). $$ Thanks to (\[eqEKSmarg1\]) and to the existence of the first-order partial derivatives $\partial_j M$ $(j = 1, \dots, d)$ of the function $M$, we have that $$\begin{aligned}
\label{eqdev-M}
&& \sup_{0\leq x_1,\ldots, x_d \leq T} \Biggl{\vert}\sqrt{k} \biggl\{ M
\biggl(\frac{n}{k} U^{(1)}_{[kx_1],n}, \ldots,
\frac{n}{k}U^{(d)}_{[kx_d],n} \biggr) - M({\mathbf x}) \biggr\}
\\
&&\hspace*{121pt}\qquad{} +
\sum_{j=1}^dW_L(x_j{
\mathbf e}_j) \partial_j M({\mathbf x}) \Biggr{\vert}\end{aligned}$$ converges to 0 in probability, as $n$ tends to infinity. This implies that $$\sup_{0\leq x_1,\ldots, x_d \leq T} \biggl{\vert}\sqrt{k} \biggl\{ A_{2,k}({
\mathbf x}) -\alpha\biggl(\frac{n}{k}\biggr) M({\mathbf x}) \biggr\} \biggr{\vert}= O_{\mathbb P} \biggl(\biggl{\vert}\sqrt{k}\alpha\biggl(\frac{n}{k}
\biggr)\beta\biggl(\frac{n}{k}\biggr) + \alpha\biggl(\frac{n}{k}
\biggr)\biggr{\vert}\biggr),$$ which completes the proof, thanks to the choice of the intermediate sequence.
[Proof of Theorem \[thmbiasB\]]{} Recall that $b=(a^{-\rho} +1)^{-1/\rho}$, and denote $\hat
b=(a^{-\hat\rho} +1)^{-1/\hat\rho}$. Write $${\accentset{\circ}{L}}_{k, a, k_\rho}- L= \{ \hat L_{k,a} - L \} + \{ \hat
L_k - L\} - \{ \hat L_{k, \hat b} - L\},\label{eqdecomp1}$$ which equals, thanks to (\[eqdev-asympt-La-L\]) and under Skorohod’s construction, $$\begin{aligned}
&& \alpha \biggl(\frac{n}{k} \biggr) \bigl(a^{-\rho} + 1\bigr) M({
\mathbf x}) + \frac{1}{\sqrt k} \bigl(a^{-1} Z_L(a {\mathbf x}) +
Z_L({\mathbf x}) \bigr)
\\[-1pt]
&&\quad{} - \alpha \biggl(\frac{n}{k} \biggr) \hat
b^{-\rho} M({\mathbf x}) - \frac
{ b^{-1}}{\sqrt k} Z_L(b {\mathbf x})+ o
\biggl(\frac{1}{\sqrt{k}} \biggr)
\\[-1pt]
&&\qquad = \alpha \biggl(\frac{n}{k} \biggr) \bigl( \bigl(a^{-\rho} + 1
\bigr)-b^{-\rho} \bigr) M({\mathbf x})+ \frac{1}{\sqrt k}{\accentset{\circ}{Y}}_a({\mathbf x})
\\[-1pt]
&&\quad\qquad{} +\alpha \biggl(\frac{n}{k} \biggr)
\bigl(b^{-\rho}-\hat b^{-\rho
} \bigr)M({\mathbf x})+ o \biggl(
\frac{1}{\sqrt{k}} \biggr)
\\[-1pt]
&&\qquad = \alpha \biggl(\frac{n}{k} \biggr) \bigl( \bigl(a^{-\rho} + 1
\bigr)-b^{-\rho} \bigr) M({\mathbf x})+ \frac{1}{\sqrt k}{\accentset{\circ}{Y}}_a({\mathbf x})
\\[-1pt]
&&\quad\qquad{} +\alpha \biggl(\frac{n}{k} \biggr)O_\mathbb{P}
\biggl(\frac
{1}{\sqrt{k_\rho} \alpha(n/k_\rho)} \biggr)+ o \biggl(\frac
{1}{\sqrt{k}} \biggr).\end{aligned}$$ The first term is zero. Since $k = o(k_\rho)$ and $\alpha$ is regularly varying with negative index, the $O_\mathbb{P}$ term can be absorbed into the term $o (\frac{1}{\sqrt{k}} )$. Finally, the covariance function follows from the equality in law, as processes, of $Z_L(a {\mathbf x})$ and $\sqrt{a} Z_L({\mathbf x})$.
The proofs of Theorem \[thmbias-Lu\] and Proposition \[prop3estim-rho\] are based on the following auxiliary result.
\[lemdelta\] Assume that the conditions of Proposition \[propdev-asympt-L\] are fulfilled. Then for any positive real $r$, one has as $n$ tends to infinity, $$\begin{aligned}
&& \sqrt{k} \alpha\biggl(\frac{n}{k}\biggr) \biggl\{ \frac{\hat\Delta_{k,a}(r
{\mathbf x})}{\alpha(n/k)} -
\bigl(a^{-\rho} -1\bigr)r^{1-\rho} M({\mathbf x}) \biggr\}
\stackrel{d} {
\to} a^{-1} Z_L (ra {\mathbf x} ) - Z_L(r {\mathbf x}),\end{aligned}$$ in $D([0,T]^d)$ for every $T>0$.
[Proof of Lemma \[lemdelta\]]{} Making use of the homogeneity of the function $L$, write $$\hat\Delta_{k,a}(r {\mathbf x}) = \bigl\{ \hat L_{k,a}(r {\mathbf x})
- L(r {\mathbf x}) \bigr\} - \bigl\{ \hat L_k(r {\mathbf x}) -L(r {\mathbf x})
\bigr\}. $$ Using the Skorohod construction, it follows from equations (\[eqasympt-dev-L\]) and (\[eqdev-asympt-La-L\]) that $$\begin{aligned}
&& \sup_{0\leq x_1,\ldots, x_d \leq T/r} \biggl{\vert}\sqrt{k} \alpha\biggl(
\frac{n}{k}\biggr) \biggl\{\frac{\hat\Delta_{k,a}(r {\mathbf x})}{\alpha(
n/k)}-\bigl(a^{-\rho} -1 \bigr)r^{1-\rho} M({\mathbf x}) \biggr\}
\\
&&\hspace*{149pt}{} -a^{-1} Z_L (ra {
\mathbf x} ) + Z_L(r {\mathbf x}) \biggr{\vert}\end{aligned}$$ tends to 0 almost surely, as $n$ tends to infinity.
[Proof of Theorem \[thmbias-Lu\]]{} Note that $$\begin{aligned}
&& \hat L_k({\mathbf x}) \frac{\hat\Delta_{k_\rho,a}(a {\mathbf x})}{\alpha
(n/k_\rho)} -\hat L_k(a {
\mathbf x}) \frac{ \hat\Delta_{k_\rho,a}({\mathbf
x})}{\alpha(n/k_\rho)}
\\
&&\qquad = \hat L_k({\mathbf x}) \biggl(
\frac{\hat\Delta
_{k_\rho,a}(a {\mathbf x})}{\alpha(n/k_\rho)} - a \frac{ \hat\Delta
_{k_\rho,a}({\mathbf x})}{\alpha(n/k_\rho)} \biggr) -a\frac{\hat\Delta
_{k_\rho,a}({\mathbf x})\hat\Delta_{k,a}({\mathbf x})}{\alpha(n/k_\rho)}.\end{aligned}$$ Under a Skorohod construction, Lemma \[lemdelta\] allows us to write the expansions of the terms $\hat\Delta_{k,a}({\mathbf x})$, $\hat
\Delta_{k_\rho,a}({\mathbf x})$ and $\hat\Delta_{k_\rho,a}(a {\mathbf
x})$, which implies on the one hand $$\begin{aligned}
\label{eqdeltaterm1}
&& \frac{\hat\Delta_{k_\rho,a}(a {\mathbf x})}{\alpha(n/k_\rho)} - a \frac{ \hat\Delta_{k_\rho,a}({\mathbf x})}{\alpha(n/k_\rho)}\nonumber
\\
&&\qquad =a \bigl(a^{-\rho} -1 \bigr)^2 M({\mathbf x})
\nonumber\\[-8pt]\\[-8pt]\nonumber
&&\qquad\quad {} + \frac{1}{\sqrt{k_\rho} \alpha(n/k_\rho)} \bigl\{ a^{-1} Z_L
\bigl(a^2 {\mathbf x}\bigr) -2 Z_L(a {\mathbf x}) + a
Z_L({\mathbf x}) \bigr\}
\\
&&\quad\qquad{} +o \biggl( \frac{1}{\sqrt{k_\rho} \alpha(n/k_\rho)} \biggr),\nonumber\end{aligned}$$ and $$\begin{aligned}
\label{eqdeltaterm2}
\frac{\hat\Delta_{k_\rho,a}({\mathbf x})\hat\Delta_{k,a}({\mathbf
x})}{\alpha(n/k_\rho)} &=& \alpha(n/k) \bigl(a^{-\rho}-1\bigr)^2
M^2({\mathbf {x}})\nonumber
\\
&&{} +\bigl(a^{-\rho}-1\bigr)M({\mathbf x})
\frac{a^{-1}Z_L(a{\mathbf x})-Z_L({\mathbf
x})}{\sqrt{k}}
\\
&&{} + O_{\mathbb P} \biggl(\frac{\alpha(n/k)}{\sqrt{k_\rho} \alpha
(n/k_\rho)} + \frac{1}{\sqrt{k} \sqrt{k_\rho} \alpha(n/k_\rho
)} \biggr)+o \biggl(\frac{1}{\sqrt{k}} \biggr)\nonumber\end{aligned}$$ on the other hand, both uniformly for ${\mathbf x}\in[\varepsilon,T]^d$. Combining (\[eqdeltaterm1\]) and (\[eqdeltaterm2\]) with equation (\[eqasympt-dev-L\]), one gets $$\begin{aligned}
&& \hat L_k({\mathbf x}) \frac{\hat\Delta_{k_\rho,a}(a {\mathbf x})}{\alpha
(n/k_\rho)} -\hat L_k(a {
\mathbf x}) \frac{ \hat\Delta_{k_\rho,a}({\mathbf
x})}{\alpha(n/k_\rho)}\nonumber
\\
&& \qquad =a \bigl(a^{-\rho} -1\bigr)^2 M({\mathbf x}) L({\mathbf x}) +
\frac{1}{\sqrt{k}} M({\mathbf x}) \bigl(a^{-\rho} -1\bigr)
\bigl(a^{1-\rho} Z_L({\mathbf x}) - Z_L(a {\mathbf x})
\bigr)
\\
&&\quad\qquad {}+ \frac{1}{\sqrt{k_\rho} \alpha(n/k_\rho)} L({\mathbf x}) \bigl\{ a^{-1} Z_L
\bigl(a^2 {\mathbf x}\bigr) -2 Z_L(a {\mathbf x}) + a
Z_L({\mathbf x}) \bigr\}
\\
&&\quad\qquad{} +o \biggl(\frac{1}{\sqrt{k}} \biggr) +o \biggl(
\frac{1}{\sqrt{k_\rho} \alpha(n/k_\rho)} \biggr).\nonumber\end{aligned}$$
Since the last expression and equation (\[eqdeltaterm1\]) are, respectively, the numerator and denominator of $\tilde L_{k,k_\rho,
a}({\mathbf x})$, one obtains, after simplification, $$\sqrt{k} \bigl(\tilde L_{k,k_\rho, a}({\mathbf x}) - L({\mathbf x})\bigr) =
\frac
{a^{-\rho} Z_L({\mathbf x}) - a^{-1} Z_L(a {\mathbf x}) }{ a^{-\rho} -1} + o \biggl(\frac{\sqrt{k}}{ \sqrt{k_\rho} \alpha(n/k_\rho)} \biggr) +o(1),$$ since $M$ does not vanish by assumption. The choice of the sequences $k$ and $k_\rho$ allows us to conclude since $\sqrt{k} = O (\sqrt{k_\rho} \alpha(n/k_\rho) )$.
[Proof of Proposition \[proplim-quotient\]]{} Applying Lemma \[lemdelta\], we have $$\label{eqsupdelta} \sup_{\varepsilon\leq x_1,\ldots,x_d \leq T} \biggl{\vert}\frac{\hat\Delta
_{k,a}({\mathbf x})}{\alpha (n/k)}-
\bigl(a^{-\rho
}-1\bigr)M({\mathbf x})\biggr{\vert}\stackrel{\mathbb{P}} {
\longrightarrow} 0.$$ As a consequence, $$\begin{aligned}
&& \sup_{\varepsilon\leq x_1,\ldots,x_d \leq T} \biggl{\vert}\frac{ \hat
\Delta_{k,a}(r {\mathbf x})} { \hat\Delta_{k,a}({\mathbf x}) } -
r^{1-\rho} \biggr{\vert}\\
&&\qquad = \sup_{\varepsilon\leq x_1,\ldots,x_d \leq T} \biggl{\vert}\frac
{ \hat\Delta_{k,a}(r {\mathbf x})/\alpha(n/k)} { \hat\Delta_{k,a}({\mathbf
x})/\alpha(n/k) } - r^{1-\rho} \biggr{\vert}\\
&&\qquad = O_{\mathbb P} \biggl( \sup_{\varepsilon\leq x_1,\ldots,x_d \leq T} \biggl{\vert}\frac{\hat\Delta_{k,a}(r {\mathbf x})}{\alpha(n/k)} - r^{1-\rho
} \frac{ \hat\Delta_{k,a}({\mathbf x})}{\alpha(n/k)} \biggr{\vert}\biggr),\end{aligned}$$ since $(a^{-\rho} -1)M({\mathbf x}) \neq0$ by assumption. Writing $$\begin{aligned}
&& \biggl{\vert}\frac{\hat\Delta_{k,a}(r {\mathbf x})}{\alpha(n/k)} - r^{1-\rho
} \frac{ \hat\Delta_{k,a}({\mathbf x})}{\alpha(n/k)} \biggr
{\vert}\\
&&\qquad \leq \biggl{\vert}\frac{\hat\Delta_{k,a}(r {\mathbf x})}{\alpha(n/k)} - r^{1-\rho
}
\bigl(a^{-\rho}-1\bigr)M({\mathbf x}) \biggr{\vert}\\
&&\quad\qquad{} + \biggl{\vert}r^{1-\rho} \bigl(a^{-\rho}-1\bigr)M({\mathbf x}) -
r^{1-\rho} \frac{ \hat
\Delta_{k,a}({\mathbf x})}{\alpha(n/k)} \biggr{\vert},\end{aligned}$$ and using twice equation (\[eqsupdelta\]) leads to the conclusion.
[Proof of Proposition \[prop3estim-rho\]]{} Define $ Q_{k,a,r}({\mathbf x}):=\frac{ \hat\Delta_{k,a}(r
{\mathbf x})} { \hat\Delta_{k,a}({\mathbf x}) }$. Lemma \[lemdelta\] used twice yields $$\label{eqdev-asympt-Q} \sqrt{k} \alpha\biggl(\frac{n}{k}\biggr)
\bigl(Q_{k,a,r}({\mathbf x}) - r^{1-\rho}\bigr) \stackrel{d} {\to} -
r^{1-\rho} \log r \hat Z_{\rho,a, r}({\mathbf x}),$$ where $ \hat Z_{\rho,a, r}({\mathbf x})$ is defined in Proposition \[prop3estim-rho\]. Since $\hat\rho_{k,a,r}({\mathbf x})=1-\log(Q_{k,a,r}({\mathbf x}))/ \log r$, the result follows straightforwardly from (\[eqdev-asympt-Q\]) and the Delta method.
Acknowledgments {#acknowledgments .unnumbered}
===============
We wish to thank Armelle Guillou for pointing out a deficiency in the original version of the paper, as well as several misprints. We thank the referees for very helpful comments.
---
abstract: 'Multi-branch architectures have been extensively studied for learning rich feature representations for person re-identification (Re-ID). In this paper, we propose a branch-cooperative architecture over OSNet, termed BC-OSNet, for person Re-ID. By stacking four cooperative branches, namely, a global branch, a local branch, a relational branch and a contrastive branch, we obtain a powerful feature representation for person Re-ID. Extensive experiments show that the proposed BC-OSNet achieves state-of-the-art performance on three popular datasets, including Market-1501, DukeMTMC-reID and CUHK03. In particular, it achieves mAP of 84.0% and rank-1 accuracy of 87.1% on CUHK03\_labeled.'
author:
- 'Lei Zhang, Xiaofu Wu$^\dag$, Suofei Zhang and Zirui Yin[^1][^2] [^3]'
title: 'Branch-Cooperative OSNet for Person Re-Identification'
---
Person re-identification, feature representation, deep learning, multi-branch network architecture.
Introduction
============
Deep learning methods now account for a large proportion of work in computer vision, with applications such as image classification, object detection, semantic segmentation and person re-identification (Re-ID). A person Re-ID system typically involves three functions: person detection, person tracking and person retrieval. In general, the task of person Re-ID focuses on person retrieval, namely training a feature descriptor with person-discriminative capability from large-scale pedestrian images captured by multiple cameras. The main difficulty of person Re-ID comes from the fact that pedestrian images captured by different cameras vary widely in perspective, image resolution, illumination, unconstrained posture, occlusion and heterogeneous modalities [@Zheng2016]. Further improving the retrieval accuracy thus remains a big challenge.
The key step for a person Re-ID system is to find a rich but discriminative feature representation for pedestrian images. In the past decade, convolutional neural networks (CNNs) have been widely used in Re-ID tasks [@Krizhevsky2012] due to their attractive advantages. A plain CNN typically extracts only global features from pedestrian images, and the resulting retrieval performance is limited because intra-class variations cannot be well represented by global features alone. Many methods have been proposed to overcome this limitation. Pose-based Re-ID [@Su2017PDC] uses the key points of pose to divide the body and weights the different blocks to enhance the feature representation for recognition. Part-based methods (PCB, MGN) [@Sun2018PCB][@Wang2018MGN] learn local feature representations (head, body, etc.) without using pose estimation. To account for the connection between image parts and the contrast between background and pedestrian, a relation module and global contrastive pooling (GCP) [@Park2019GCP] were proposed. As a lightweight CNN architecture, OSNet [@Zhou2019OSNet] performs very well and is capable of learning omni-scale feature representations.
In this paper, we propose a branch-cooperative OSNet for person Re-ID. By combining various branch-oriented features, including part-level and global-level features as well as relational and contrastive features, BC-OSNet captures more feature details for retrieval. In brief, the main contributions of this paper are as follows:
1. Based on the baseline OSNet [@Xie2020PLR-OSNet], we propose a branch-cooperative network architecture for person Re-ID. The proposed BC-OSNet has four branches, including a global branch, a local branch with part-level features, a relational branch and a contrastive branch. We show that these branches cooperate to enrich the feature representation.
2. Extensive experiments show that the proposed BC-OSNet achieves state-of-the-art results on the popular Re-ID datasets, despite its small size. In particular, our results on CUHK03\_labeled may be the best reported, with mAP of 84.0% and rank-1 accuracy of 87.1%.
Related Work
============
Part-Level Features
-------------------
In general, global features are helpful for learning contour information, so that images can be retrieved from a broader perspective. Part-level features, however, may contain more fine-grained information. In [@Yi2014][@Li2014], the input pedestrian image was divided into three overlapping parts, so that three part-level features could be learned. Later, different ways of dividing the body appeared. Pose-driven Deep Convolutional (PDC) [@Su2017PDC] takes the influence of body posture on appearance into account and employs a pose estimation algorithm to predict the posture. Part-Aligned Representations (PAR) [@Zhao2017PAR] cuts the human body into several distinctive regions and concatenates the feature vectors from each region to obtain the final feature representation. The Part-based Convolution Baseline (PCB) [@Sun2018PCB] learns part-level features by dividing the feature map equally, and Refined Part Pooling (RPP) was proposed to improve the content consistency of the divided areas. The Multiple Granularity Network (MGN) [@Wang2018MGN] uniformly divides the image into multiple stripes to obtain local feature representations at multiple granularities.
Relational and Contrastive Features
-----------------------------------
The basic idea behind the relation network is to consider all entity pairs and integrate all these relations[@Santoro2017]. The concatenation of global-level and part-level features is beneficial for a rich person representation. To further improve feature richness, the relation between each part and the rest, as well as the contrastive information between the background and the retrieved object, are equally important. This helps to build links between various parts, since they often do not function independently. Global contrastive pooling (GCP)[@Park2019GCP] aggregates the most discriminative information, while the one-vs.-rest relation module[@Park2019GCP] utilizes the relation between each part and the rest to make the network more discriminative, while retaining a compact feature representation for person Re-ID.
Other Related Works
-------------------
With max-pooling as downsampling in CNNs, the relative position information of features becomes more important. In order to reduce the model size, global average pooling was proposed to replace the fully-connected layer, which also reduces overfitting. Recently, generalized-mean (GeM)[@Radenović2018GeM] pooling was proposed to narrow the gap between max-pooling and average-pooling. Following the success of dropout[@Hinton2012dropout], several variants, such as fast dropout[@Wang2013fastdropout] and DropConnect[@Wan2013dropconnect], were proposed. A continuous dropout algorithm[@Shen2017GCDropout] was proposed to achieve a good balance between the diversity and independence of subnetworks. In this paper, we employ Batch DropBlock[@Dai2019BDB] in our architecture. Unlike the general dropblock[@Ghiasi2018dropblock], Batch DropBlock is an attentive feature learning module for metric learning tasks, which randomly drops the same region of all the feature maps in a batch during training and reinforces attentive feature learning of the remaining parts.
BC-OSNet
========
The overall network architecture of BC-OSNet is shown in Figure \[fig:architecture\], where the input is of size $ H \times W \times C $ with $ H, W, C(=3)$ denoting height, width and the number of channels, respectively. The shared-net of BC-OSNet takes the first 5 layers of OSNet [@Zhou2019OSNet], including 3 convolutional layers and 2 transition layers. Then, four cooperative branches are employed for feature extraction: the local branch (v1), the global branch (v2), the global contrastive pooling (GCP) branch (v3) and the one-vs-rest relation branch (v4) [@Park2019GCP]. The use of four branches facilitates the learning of diverse but discriminative features.
Cooperative Branches
--------------------
### Local Branch
The first branch (v1) is a local branch. In this branch, the feature map is divided into 4 horizontal grids, and part-level features of size $ 1 \times 1 \times C $ are obtained by average pooling (AP). It should be noted that the 4 part-level features are concatenated into a single column vector producing a single ID-prediction loss, whereas in PCB [@Sun2018PCB] each part-level feature is driven by its own ID-prediction loss. Let $$\begin{aligned}
\mathbf{f} = [f_1^T,f_2^T,\cdots,f_4^T]^T\end{aligned}$$ denote the concatenated feature vector, where $ f_1, f_2, f_3, f_4 $ denote the 4 column vectors obtained by dividing the feature map horizontally. Let the labeled set be denoted by $ \left\lbrace (x_i, y_i), i = 1,2, \cdots, N_s \right\rbrace $. Then, the ID-prediction loss can be written as $$\begin{aligned}
\label{E2}
L=-\frac{1}{N_{s}} \sum_{i=1}^{N_{s}} \log \left(\frac{\exp \left(\left(\mathbf{W}^{y_{i}}\right)^{T} \mathbf{f}^{i}+b_{y_{i}}\right)}{\sum_{j} \exp \left(\left(\mathbf{W}^{j}\right)^{T} \mathbf{f}^{i}+b_{j}\right)}\right)\end{aligned}$$ where $ \mathbf{W}^{y_i} $ and $ \mathbf{W}^j $ are the $y_i$-th and $j$-th columns of the weight matrix $ \mathbf{W} $. Compared with PCB, this yields more effective and more discriminative information. Usually, GAP is used to obtain each part-level feature vector; both GAP and GMP have been employed in existing methods, and it is not well understood which pooling method is better. Here, we employ GeM [@Radenović2018GeM], defined as $$\begin{aligned}
\mathrm{GeM}(\mathbf{f}_k)=\left[\frac{1}{n}\sum_{i=1}^{n} f_i^{p_k}\right]^{\frac{1}{p_k}}\end{aligned}$$ where $\mathbf{f}_k=[f_1,\cdots,f_n]$ denotes a single (flattened) feature map. When $ p_k \to \infty $, GeM reduces to max pooling; when $ p_k = 1 $, GeM reduces to average pooling. We initialize the GeM parameter with $ p_k = 1 $ in the local branch.
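As a minimal illustration of the GeM operator defined above, the following NumPy sketch (with our own function naming, not the authors' implementation) applies generalized-mean pooling to a feature map:

```python
import numpy as np

def gem_pool(x, p=3.0, eps=1e-6):
    """Generalized-mean pooling over the spatial dims of a (C, H, W) map.

    p -> infinity approaches max pooling; p = 1 is exactly average pooling.
    Values are clamped at eps so fractional powers are well defined.
    """
    x = np.clip(x, eps, None)
    return np.power(np.mean(np.power(x, p), axis=(-2, -1)), 1.0 / p)
```

In the local branch the paper initializes $p_k=1$ (average pooling), while the global branch starts from $p_k=6.5$, closer to max pooling.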
### Global Branch
The second branch (v2) is the global branch. The difference from the local branch is that GeM pooling is performed directly after conv4 and conv5. Note that we set $ p_k = 6.5 $ for the initialization of GeM, and a 512-dimensional vector is obtained.
### GCP Branch
The third branch (v3) is a global contrastive pooling (GCP) branch. GCP combines GAP and GMP to obtain local contrastive features. After obtaining the feature map at the output of conv5, it is divided into 6 horizontal grids and a 256-dimensional feature vector is obtained by GCP. To better understand the internal structure of GCP, we denote by $\mathbf{f}_{avg}$ and $\mathbf{f}_{max}$ the feature maps obtained with average pooling and max pooling, respectively. Note that average pooling is applied to each part-level feature ($\mathbf{f}_{avg}=\sum_{i=1}^{n}AP(\mathbf{f}_i)$), while max pooling is performed over the feature map at the output of conv5. The contrastive feature $\mathbf{f}_{cont}$ is obtained by subtracting $\mathbf{f}_{max}$ from $\mathbf{f}_{avg}$ ($\mathbf{f}_{cont}=\frac{1}{n-1} (\mathbf{f}_{avg}-\mathbf{f}_{max})$), which captures the discrepancy between them. To reduce dimensionality, a bottleneck layer is applied to $\mathbf{f}_{cont}$ and $\mathbf{f}_{max}$, reducing the channel dimension from $C$ to $c$; the reduced features are denoted by $\mathbf{f}_{cont}'$ and $\mathbf{f}_{max}'$. Then, the global contrastive feature $\mathbf{q}_0$ can be written as $$\begin{aligned}
\mathbf{q}_0 = \mathbf{f}_{max}' + \mathcal{B}(\mathcal{C}(\mathbf{f}_{max}',\mathbf{f}_{cont}'))\end{aligned}$$ where $\mathcal{C}$ denotes the concatenation of $\mathbf{f}_{cont}'$ and $\mathbf{f}_{max}'$ into a column vector with channel dimension $2c$, and $\mathcal{B}$ represents the bottleneck layer, which reduces the channel dimension back to $c$ ($2c\rightarrow c$).
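The computation of $\mathbf{f}_{avg}$, $\mathbf{f}_{max}$ and $\mathbf{f}_{cont}$ described above can be sketched in NumPy as follows (our own naming and shape assumptions; the bottleneck/concatenation step producing $\mathbf{q}_0$ is omitted):

```python
import numpy as np

def gcp_features(feat_map, n_parts=6):
    """Contrastive-feature computation of the GCP branch (sketch).

    feat_map: array of shape (C, H, W) with H divisible by n_parts.
    Returns (f_cont, f_max), each of shape (C,).
    """
    parts = np.split(feat_map, n_parts, axis=1)          # horizontal grids
    f_avg = sum(p.mean(axis=(1, 2)) for p in parts)      # sum of part-level APs
    f_max = feat_map.max(axis=(1, 2))                    # GMP over the whole map
    f_cont = (f_avg - f_max) / (n_parts - 1)             # contrastive feature
    return f_cont, f_max
```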
### One-vs.-rest Relation Branch
The fourth branch (v4) is the one-vs-rest relation branch. In general, part-level features contain information about individual parts but do not reflect the relationship between them. The one-vs-rest relation branch associates each part-level feature with the rest of the parts. Similar to GCP, we first obtain 6 horizontal-grid features $ (\mathbf{f}_1, \cdots, \mathbf{f}_6) $, and a 1536-dimensional vector is computed through the one-vs-rest relation module as follows. First, AP is employed to get $$\begin{aligned}
\mathbf{r}_i = \frac{1}{5}\sum_{j \neq i}\mathbf{f}_j. \end{aligned}$$ Then, both $ \mathbf{f}_i $ and $ \mathbf{r}_i $ are processed by the bottleneck layer to reduce the number of channels from $C$ to $c$, producing $ \mathbf{f}_i' $ and $ \mathbf{r}_i' $. With the relation network, a local relational feature $\mathbf{q}_i$ can be computed as $$\begin{aligned}
\mathbf{q}_i = \mathbf{f}_i' + \mathcal{B}(\mathcal{C}(\mathbf{f}_i',\mathbf{r}_i')), \quad i=1,\cdots,6\end{aligned}$$ where each $ \mathbf{q}_i $ is a 256-dimensional vector.
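The one-vs-rest aggregation $\mathbf{r}_i$ above can be sketched as follows (a NumPy sketch with our own naming):

```python
import numpy as np

def one_vs_rest(parts):
    """Compute r_i = average of all part features except the i-th.

    parts: array of shape (n, c); returns an array of shape (n, c),
    whose i-th row is (sum_j f_j - f_i) / (n - 1).
    """
    parts = np.asarray(parts, dtype=float)
    n = parts.shape[0]
    total = parts.sum(axis=0)
    return (total - parts) / (n - 1)
```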
---------------------------------- ------- -------- ------- -------- ------- ------------------------------------------------------------------------------------------ ------- --------
                                   CUHK03-labeled    CUHK03-detected    Market1501       DukeMTMC-reID
                                   mAP     rank-1    mAP     rank-1     mAP     rank-1   mAP     rank-1
HA-CNN[@Li2018HACNN] 41.0 44.4 38.6 41.7 75.7 95.6 63.8 80.5
PCB[@Sun2018PCB] - - 54.2 61.3 77.3 92.4 65.3 81.9
AlignedReID[@Luo2019AlignedReID] - - 59.6 61.5 79.1 91.8 69.7 82.1
PCB+RPP[@Sun2018PCB] - - 57.5 63.7 81.0 93.1 68.5 82.9
HPM[@Yang2018HPM] - - 57.5 63.9 82.7 94.2 74.3 86.6
BagOfTricks[@Luo2019bagoftricks] - - - - 85.9 94.5 76.4 86.4
OSNet[@Zhou2019OSNet] - - 67.8 72.3 84.9 94.8 73.5 88.6
MGN[@Wang2018MGN] 67.4 68.0 66.0 66.8 86.9 95.7 78.4 88.7
ABD[@Chen2019ABD] - - - - 88.28 95.6 78.59 89.0
GCP[@Park2019GCP] 75.6 77.9 69.6 74.4 88.9 95.2 78.6 89.7
BDB[@Dai2019BDB] 76.7 79.4 73.5 76.4 86.7 95.3 76.0 89.0
SONA[@Xia2019SONA]                 79.23    81.85    76.35    79.10    88.67    **95.68**    78.05    89.25
Ours                               **84.0** **87.1** **80.5** **84.3** **89.5** 95.6         **81.2** **91.4**
---------------------------------- ------- -------- ------- -------- ------- ------------------------------------------------------------------------------------------ ------- --------
------------------ ------ -------- ------ -------- ------ -------- ------------------------------------------------------------------------------------------------------------- --------
                   CUHK03-labeled    CUHK03-detected    Market1501       DukeMTMC-reID
                   mAP     rank-1    mAP     rank-1     mAP     rank-1   mAP     rank-1
local-global 79.7 83.4 75.4 78.2 88.2 95.5 81.2 91.1
local-global-OvR       83.0     85.7     79.2     82.1     88.8     95.1     **81.3** 90.8
local-global-gcp       81.2     83.9     78.2     81.9     89.5     95.5     80.7     90.3
local-global-gcp-OvR   **84.0** **87.1** **80.5** **84.3** **89.5** **95.6** 81.2     **91.4**
------------------ ------ -------- ------ -------- ------ -------- ------------------------------------------------------------------------------------------------------------- --------
--------- ------ -------- ------ -------- ---------------------------------------------------------------------------------- -------- ----- --------
          CUHK03-labeled    CUHK03-detected    Market1501       DukeMTMC-reID
          mAP     rank-1    mAP     rank-1     mAP     rank-1   mAP     rank-1
w/o-GeM   83.3     86.5     80.1     83.6     **89.8** **95.7** **81.3** 91.2
w-GeM     **84.0** **87.1** **80.5** **84.3** 89.5     95.6     81.2     **91.4**
--------- ------ -------- ------ -------- ---------------------------------------------------------------------------------- -------- ----- --------
---------- ------ -------- ------ -------- ------ -------------------------------------------------------------------------------- ----- --------
           CUHK03-labeled    CUHK03-detected    Market1501       DukeMTMC-reID
           mAP     rank-1    mAP     rank-1     mAP     rank-1   mAP     rank-1
f6+f4+f2   82.6     84.8     79.0     82.7     89.5     **95.7** **81.8** **91.7**
f6         **84.0** **87.1** **80.5** **84.3** **89.5** 95.6     81.2     91.4
---------- ------ -------- ------ -------- ------ -------------------------------------------------------------------------------- ----- --------
---------- ------ -------- ------ -------- -------------------------------------------------------------------------------------- -------- ----- --------
                 CUHK03-labeled    CUHK03-detected    Market1501       DukeMTMC-reID
                 mAP     rank-1    mAP     rank-1     mAP     rank-1   mAP     rank-1
BC-OSNet         84.0     87.1     80.5     84.3     **89.5** **95.6** **81.2** **91.4**
+GCDropout+BDB   **84.2** **87.3** **81.6** **85.3** 89.2     95.3     80.7     91.0
---------- ------ -------- ------ -------- -------------------------------------------------------------------------------------- -------- ----- --------
Loss Functions
--------------
To train our model, the final total loss is the sum of the loss functions of each branch, including a single ID loss (softmax loss), a soft-margin triplet loss[@Hermans2017tripletloss] and a center loss[@Wen2016centerloss]: $$\begin{aligned}
L_{sum} = \lambda_1 L_{id} + \lambda_2 L_{triplet} + \lambda_3 L_{center}\end{aligned}$$ where $\lambda_1,\lambda_2,\lambda_3$ are weighting factors. The ID loss is given in equation \[E2\]. With each mini-batch formed by randomly sampling $P$ identities and $K$ instances per identity, the triplet loss is defined as $$\begin{aligned}
\nonumber
L_{triplet}=\sum_{i=1}^{P} \sum_{a=1}^{K}[\alpha+ \overbrace{\max _{p=1 \ldots K}\left\|x_{a}^{(i)}-x_{p}^{(i)}\right\|_{2}}^{\text{hardest positive}} \\
-\underbrace{\min _{n=1 \ldots K \atop j=1 \ldots P , j \neq i}\left\|x_{a}^{(i)}-x_{n}^{(i)}\right\|_{2}}_{ \text{ hardest negative }} ]_{+}\end{aligned}$$ where $ x_a^{(i)}, x_p^{(i)}, x_n^{(i)} $ are features extracted from the anchor, a positive sample and a negative sample, respectively, and $ \alpha $ is the margin hyperparameter. In order to improve the discriminative capability of features, the center loss is used: $$\begin{aligned}
\mathcal{L}_C = \frac{1}{2}\sum_{i=1}^{m} \left\| \mathbf{x}_i - \mathbf{c}_{y_i} \right\|_2^2\end{aligned}$$ where $ \mathbf{c}_{y_i} \in \mathbb{R}^d $ is the class center of the deep features of class $y_i $.
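A minimal NumPy sketch of the batch-hard triplet loss and the center loss defined above (our own implementation, assuming each identity occurs at least twice per batch):

```python
import numpy as np

def batch_hard_triplet(features, labels, alpha=0.3):
    """Batch-hard triplet loss with the hinge [.]_+, as in the equation above.

    features: (N, d) array; labels: (N,) identity labels.
    Assumes every identity occurs at least twice in the batch.
    """
    n = len(features)
    dist = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    idx = np.arange(n)
    loss = 0.0
    for a in range(n):
        hardest_pos = dist[a][same[a] & (idx != a)].max()   # farthest positive
        hardest_neg = dist[a][~same[a]].min()               # closest negative
        loss += max(alpha + hardest_pos - hardest_neg, 0.0)
    return loss

def center_loss(features, labels, centers):
    """Center loss L_C = 1/2 * sum_i ||x_i - c_{y_i}||^2."""
    return 0.5 * np.sum((features - centers[labels]) ** 2)
```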
Experiments
===========
Extensive experiments were conducted on three person Re-ID datasets: Market1501, DukeMTMC-reID and CUHK03.
Datasets
--------
The Market1501 dataset is composed of 32668 pedestrian images of 1501 identities taken by 6 cameras. The training set contains 12936 images of 751 identities, the query set contains 3368 images of 750 identities, and the gallery set contains 15013 images of 751 identities.
The DukeMTMC-reID dataset was captured by 8 cameras and includes 16522 training images of 702 identities, 2228 query images of 702 identities and 17661 gallery images of 1110 identities.
The CUHK03 dataset, captured by two cameras, is divided into CUHK03\_labeled and CUHK03\_detected according to the annotation method, containing 14096 and 14097 images, respectively. The training set contains 7368 images for CUHK03\_labeled and 7365 images for CUHK03\_detected; the query set contains 1400 images for both; and the gallery set contains 5328 images for CUHK03\_labeled and 5332 images for CUHK03\_detected.
Implementation Details
----------------------
The input image is of size $256\times 128$ for both training and testing. The data augmentation methods include random flipping and random erasing[@Zhong2017random-rease]. The optimizer is Adam[@Kingma2014adam] with momentum of 0.9 and weight decay of 5e-04. During training, the batch size is set to 64 and the number of epochs is 160. Each batch contains 16 identities and each identity has 4 images. A warm-up strategy is used during training: the initial learning rate is 3.5e-04; after the 60th epoch, the learning rate is changed to 3.5e-05; and after the 130th epoch, it is changed to 3e-06. All networks are trained end-to-end using PyTorch. Training our model takes about sixteen, eighteen and eight hours on a single NVIDIA Tesla P100 GPU for the Market1501, DukeMTMC-reID and CUHK03 datasets, respectively.
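The stepped schedule described above can be sketched as follows (the ramp-up portion of the warm-up strategy is not fully specified in the text, so only the step decays are reproduced; the function name is ours):

```python
def learning_rate(epoch):
    """Stepped learning-rate schedule described above (0-indexed epochs).

    Only the step decays are shown; the warm-up ramp is not specified.
    """
    if epoch < 60:       # up to and including the 60th epoch
        return 3.5e-4
    if epoch < 130:      # after the 60th epoch
        return 3.5e-5
    return 3e-6          # after the 130th epoch
```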
Comparison with State-of-the-art
--------------------------------
The comparison between BC-OSNet and state-of-the-art methods is shown in Table \[tab1\] for all three datasets. Note that none of the results below use re-ranking[@Zhong2017reranking] or multi-query fusion techniques. The recent methods compared include HA-CNN[@Li2018HACNN], AlignedReID[@Luo2019AlignedReID], PCB[@Sun2018PCB], HPM[@Yang2018HPM], MGN[@Wang2018MGN], GCP[@Park2019GCP], ABD[@Chen2019ABD], BDB[@Dai2019BDB], SONA[@Xia2019SONA] and OSNet[@Zhou2019OSNet]. Clearly, BC-OSNet performs very competitively.
Ablation Studies
----------------
We conducted a large number of comparative experiments on the Market-1501, DukeMTMC-reID and CUHK03 datasets to study the effectiveness of each branch, module and hyperparameter.
### Benefit of GCP and one-vs-rest relation Branches
The proposed BC-OSNet comprehensively takes all four branches into consideration. As shown in Table \[tab2\], competitive results are obtained on the three datasets, especially on CUHK03. It can be seen that the four branches complement each other and extract diverse features, which is manifested in the experiments.
### Benefit of GeM
GeM can be considered a generalized version of GAP and GMP. Table \[tab3\] shows that mAP and rank-1 are slightly increased on both CUHK03\_labeled and CUHK03\_detected. This suggests that the learnable parameter $p_k$ inherent in GeM can learn a better interpolation between GAP and GMP.
### $\mathbf{q}^{h6}$ vs. $\mathcal{C}(\mathbf{q}^{h6},\mathbf{q}^{h4},\mathbf{q}^{h2})$
Let $\mathbf{q}^{h6} \triangleq \mathcal{C}(\mathbf{q}_0,\cdots,\mathbf{q}_6)$. We consider using $\mathbf{q}^{h2}$ and $\mathbf{q}^{h4}$, which split the initial feature map into two and four horizontal regions, respectively. Accordingly, the concatenation of $\mathbf{q}^{h2}$, $\mathbf{q}^{h4}$ and $\mathbf{q}^{h6}$, namely $\mathcal{C}(\mathbf{q}^{h2},\mathbf{q}^{h4},\mathbf{q}^{h6})$, can be employed for the final feature representation. Note that $\mathbf{q}^{h2}$, $\mathbf{q}^{h4}$ and $\mathbf{q}^{h6}$ contain different local relational features, and thus have different global contrastive features. In [@Park2019GCP], it was shown that the use of $ \mathcal{C} (\mathbf{q}^{h6}, \mathbf{q}^{h4}, \mathbf{q}^{h2}) $ could be better than the use of $\mathbf{q}^{h6}$ alone. We, however, report rather different results, as shown in Table \[tab4\]. This may be due to the use of different backbone networks.
### Benefit of GCDropout and BDB
Ordinary dropout can effectively prevent overfitting. The dropout variables of Gaussian Continuous Dropout (GCDropout) follow a continuous (Gaussian) distribution rather than a discrete one. GCDropout can effectively prevent the co-adaptation of feature detectors in deep neural networks and achieve a good balance between the diversity and independence of subnetworks. Batch DropBlock (BDB) forces the network to learn detailed features in the remaining area, which complements the local branch. It can be seen from Table \[tab5\] that the performance on CUHK03 is further improved after applying these methods.
Conclusion
==========
In this paper, we propose a branch-cooperative network for person re-identification. Based on the OSNet baseline, we propose a four-branch architecture, with global, local, relational and contrastive features, for obtaining more diverse and discriminative features. In addition, various tricks have been incorporated into BC-OSNet, including GeM pooling and Gaussian Continuous Dropout. The ablation analysis clearly demonstrates the cooperation of the four branches in boosting the final performance.
Liang Zheng, Yi Yang, and Alexander G. Hauptmann. “Person re-identification: Past, present and future.” arXiv preprint arXiv:1610.02984 (2016).
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “Imagenet classification with deep convolutional neural networks.” Advances in neural information processing systems. 2012.
D. Yi, Z. Lei, S. Liao, S. Z. Li et al. “Deep metric learning for person re-identification.” 2014 22nd International Conference on Pattern Recognition. IEEE, 2014.
W. Li, R. Zhao, T. Xiao, and X. Wang. “Deepreid: Deep filter pairing neural network for person re-identification.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2014.
Shen, Xu, et al. “Continuous dropout.” IEEE transactions on neural networks and learning systems 29.9 (2017): 3926-3937.
G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. “Improving neural networks by preventing co-adaptation of feature detectors.” arXiv preprint arXiv:1207.0580 (2012).
Wang, Sida, and Christopher Manning. “Fast dropout training.” international conference on machine learning. 2013.
L. Wan, M. Zeiler, S. Zhang, Y. L. Cun, and R. Fergus. “Regularization of neural networks using dropconnect.” International conference on machine learning. 2013.
Ghiasi, Golnaz, Tsung-Yi Lin, and Quoc V. Le. “Dropblock: A regularization method for convolutional networks.” Advances in Neural Information Processing Systems. 2018.
C. Su, J. Li, S. Zhang, J. Xing, W. Gao, and Q. Tian. “Pose-driven deep convolutional model for person re-identification.” Proceedings of the IEEE international conference on computer vision. 2017.
L. Zhao, X. Li, J. Wang, and Y. Zhuang. “Deeply-learned part-aligned representations for person re-identification.” Proceedings of the IEEE international conference on computer vision. 2017.
Wang, Guanshuo and Yuan, Yufeng and Chen, Xiong and Li, Jiwei and Zhou, Xi. “Learning discriminative features with multiple granularities for person re-identification.” Proceedings of the 26th ACM international conference on Multimedia. 2018.
Luo, Hao, et al. “Bag of tricks and a strong baseline for deep person re-identification.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2019.
Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang. “Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline).” Proceedings of the European Conference on Computer Vision (ECCV). 2018.
Yang Fu, Yunchao Wei, Yuqian Zhou, Honghui Shi, Gao Huang, Xinchao Wang, Zhiqiang Yao and Thomas Huang. “Horizontal pyramid matching for person re-identification.” Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. 2019.
Ben Xie, Xiaofu Wu, Suofei Zhang, Shiliang Zhao and Ming Li. “Learning Diverse Features with Part-Level Resolution for Person Re-Identification.” arXiv preprint arXiv:2001.07442 (2020).
Zhou, Kaiyang, et al. “Omni-scale feature learning for person re-identification.” Proceedings of the IEEE International Conference on Computer Vision. 2019.
Santoro, A.; Raposo, D.; Barrett, D. G.; Malinowski, M.; Pascanu, R.; Battaglia, P.; and Lillicrap, T. “A simple neural network module for relational reasoning.” Advances in neural information processing systems. 2017.
Park, Hyunjong, and Bumsub Ham. “Relation Network for Person Re-identification.” arXiv preprint arXiv:1911.09318 (2019).
Radenović, Filip, Giorgos Tolias, and Ondřej Chum. “Fine-tuning CNN image retrieval with no human annotation.” IEEE transactions on pattern analysis and machine intelligence 41.7 (2018): 1655-1668.
Hermans, Alexander, Lucas Beyer, and Bastian Leibe. “In defense of the triplet loss for person re-identification.” arXiv preprint arXiv:1703.07737 (2017).
Y. Wen, K. Zhang, Z. Li, and Y. Qiao. “A discriminative feature learning approach for deep face recognition.” European conference on computer vision. Springer, Cham, 2016.
P. Dollár, Z. Tu, P. Perona, and S. Belongie. “Integral channel features.” BMVC, 2009.
Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; and Yang, Y. “Random erasing data augmentation.” arXiv preprint arXiv:1708.04896 (2017).
Kingma, Diederik P., and Jimmy Ba. “Adam: A method for stochastic optimization.” arXiv preprint arXiv:1412.6980 (2014).
Ioffe, Sergey, and Christian Szegedy. “Batch normalization: Accelerating deep network training by reducing internal covariate shift.” arXiv preprint arXiv:1502.03167 (2015).
Zhong, Zhun, et al. “Re-ranking person re-identification with k-reciprocal encoding.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.
Li, Wei, Xiatian Zhu, and Shaogang Gong. “Harmonious attention network for person re-identification.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.
Luo, H.; Jiang, W.; Zhang, X.; Fan, X.; Qian, J.; and Zhang, C. “AlignedReID++: Dynamically matching local information for person re-identification.” Pattern Recognition 94 (2019): 53-61.
T. Chen, S. Ding, J. Xie, Y. Yuan, W. Chen, Y. Yang, Z. Ren, and Z. Wang. “Abd-net: Attentive but diverse person re-identification.” Proceedings of the IEEE International Conference on Computer Vision. 2019.
Z. Dai, M. Chen, X. Gu, S. Zhu, and P. Tan. “Batch DropBlock network for person re-identification and beyond.” Proceedings of the IEEE International Conference on Computer Vision. 2019.
Xia, Bryan Ning, et al. “Second-Order Non-Local Attention Networks for Person Re-Identification.” Proceedings of the IEEE International Conference on Computer Vision. 2019.
[^1]: $^\dag$Corresponding author. This work was supported in part by the National Natural Science Foundation of China under Grants 61372123, 61671253 and by the Scientific Research Foundation of Nanjing University of Posts and Telecommunications under Grant NY213002.
[^2]: Lei Zhang, Xiaofu Wu and Zirui Yin are with the National Engineering Research Center of Communications and Networking, Nanjing University of Posts and Telecommunications, Nanjing 210003, China (E-mails: 1019010621@njupt.edu.cn; xfuwu@ieee.org; 1219012816@njupt.edu.cn).
[^3]: Suofei Zhang is with the School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210003, China (E-mail: zhangsuofei@njupt.edu.cn).
---
abstract: 'For a convex body $K\subset{{\mathbb R}}^n$ and $i\in\{1,\dots,n-1\}$, the function assigning to any $i$-dimensional subspace $L$ of ${{\mathbb R}}^n$, the $i$-dimensional volume of the orthogonal projection of $K$ to $L$, is called the $i$-th projection function of $K$. Let $K, K_0\subset {{\mathbb R}}^n$ be smooth convex bodies of class $C^2_+$, and let $K_0$ be centrally symmetric. Excluding two exceptional cases, we prove that $K$ and $K_0$ are homothetic if they have two proportional projection functions. The special case when $K_0$ is a Euclidean ball provides an extension of Nakajima’s classical three-dimensional characterization of spheres to higher dimensions.'
address:
- 'Department of Mathematics, University of South Carolina, Columbia, S.C. 29208, USA'
- 'Mathematisches Institut, Albert-Ludwigs-Universit[ä]{}t Freiburg, D-79104 Freiburg, Germany'
author:
- Ralph Howard
- Daniel Hug
title: Smooth convex bodies with proportional projection functions
---
Introduction and main results
=============================
A **convex body** in ${{\mathbb R}}^n$ is a compact convex set with nonempty interior. If $K$ is a convex body and $L$ a linear subspace of ${{\mathbb R}}^n$, then $K|L$ is the orthogonal projection of $K$ onto $L$. Let $\mathbb{G}(n,i)$ be the Grassmannian of all $i$-dimensional linear subspaces of ${{\mathbb R}}^n$. A central question in the geometric tomography of convex sets is to understand to what extent information about the projections $K|L$ with $L\in \mathbb{G}(n,i)$ determines a convex body. Possibly the most natural, but rather weak, information about $K|L$ is its $i$-dimensional volume $V_i(K|L)$. The function $L\mapsto V_i(K|L)$ on $\mathbb{G}(n,i)$ is the **$i$-th projection function** (or the **$i$-th brightness function**) of $K$. When $i=1$ this is the **width function** and when $i=n-1$ the **brightness function**. If this function is constant the body has **constant $i$-brightness**. For $n\geq 2$ and any $i\in \{1,\dots,n-1\}$, by classical results about the existence of sets with constant width and results of Blaschke [@Blaschke:Kreis pp. 151–154] and Firey [@Firey:constant], there are convex bodies which are not Euclidean balls that have constant $i$-brightness (cf. [@Gardner:book Thm 3.3.14, p. 111; Rmk 3.3.16, p. 114]). Thus it is not possible to determine if a convex body is a ball from just one projection function. For other results about determining convex bodies from a single projection function see Chapter 3 of Gardner’s book [@Gardner:book] and the survey paper [@GSW97] of Goodey, Schneider, and Weil.
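For orientation, note that a Euclidean ball has constant $i$-brightness for every $i$: if $B^n_r$ denotes the ball of radius $r$ centered at the origin, then $B^n_r|L$ is an $i$-dimensional ball of radius $r$ for every $L\in\mathbb{G}(n,i)$, so
$$V_i(B^n_r|L)=\kappa_i r^i \quad\text{for all } L\in\mathbb{G}(n,i),$$
where $\kappa_i$ is the volume of the $i$-dimensional unit ball. The results cited above show that the converse fails for a single projection function.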
Therefore, as pointed out by Goodey, Schneider, and Weil in [@GSW97] and [@GSW97+], it is natural to ask if a convex body with two constant projection functions must be a ball or, more generally, what can be said about a pair of convex bodies, one of which is centrally symmetric, that have two of their projection functions proportional. Examples in the smooth and the polytopal setting, due to Campi [@Campi], Gardner and Volčič [@GV93], and to Goodey, Schneider, and Weil [@GSW97+], show that the assumption of central symmetry on one of the bodies cannot be dropped. Recall that a convex body is of class $C^2_+$ iff its boundary, ${\partial}K$, is of class $C^2$ and has everywhere positive Gauss-Kronecker curvature. A convex body of class $C^2_+$ has a $C^2$ support function, and in fact the class of convex bodies with $C^2$ support functions is a slightly larger class than the class $C^2_+$. Since our proofs essentially work in this more general class, we will consider the corresponding setting. A classical result [@Nakajima:ball] of S. Nakajima (= A. Matsumura) in 1926 states that a convex body of class $C^2_+$ with constant width and constant brightness is a Euclidean ball. Our main result extends this to higher dimensions:
\[Theorem1.4\] Let $K, K_0\subset {\mathbb R}^{n}$ be convex bodies with $K_0$ of class $C^2_+$ and centrally symmetric and with $K$ having $C^2$ support function. Let $1\leq i<j\leq n-1$ be integers such that $i\notin\{1,n-2\}$ if $j=n-1$. Assume there are real positive constants $\alpha,\beta>0$ such that $$V_i(K\vert L)=\alpha V_i(K_0\vert L)\quad \text{and}\quad V_j(K\vert
U)=\beta V_j(K_0\vert U),$$ for all $L\in\mathbb{G}(n,i)$ and $U\in
\mathbb{G}(n,j)$. Then $K$ and $K_0$ are homothetic.
Other than Nakajima’s result the only previously known case is $i=1$ and $j=2$ proven by Chakerian [@Chakerian:rel-width] in 1967. Letting $K_0$ be a Euclidean ball in the theorem gives:
\[intro:cor\] Let $K\subset{{\mathbb R}}^n$ be a convex body with $C^2$ support function. Assume that $K$ has constant $i$-brightness and constant $j$-brightness, where $1\leq i<j\leq n-1$ and $i\notin\{1,n-2\}$ if $j=n-1$. Then $K$ is a Euclidean ball.
If ${\partial}K$ is of class $C^2$ and $K$ has constant width, then the Gauss-Kronecker curvature of $K$ is everywhere positive. Therefore for $K$ of class $C^2$ and of constant width the assumption of positive curvature can be omitted:
\[Corollary1.3\] Let $K \subset {\mathbb R}^{n}$ be a convex body of class $C^{2}$ with constant width and constant $k$-brightness for some $k \in
\{2,\dots,n-2\}$. Then $K$ is a Euclidean ball.
Unfortunately, this does not cover the case that $K$ has constant width and brightness, which we consider the most interesting open problem related to the subject of this paper. Under the strong additional assumption that $K$ and $K_0$ are smooth convex bodies of revolution with a common axis, we can also settle the two cases not covered by Theorem \[Theorem1.4\].
\[revolution\] Let $K, K_0\subset{{\mathbb R}}^n$ be convex bodies that have a common axis of revolution such that $K$ has $C^2$ support function and $K_0$ is centrally symmetric and of class $C^2_+$. Assume that $K$ and $K_{0}$ have proportional brightness and proportional $i$-th brightness function for an $i\in \{1,n-2\}$. Then $K$ is homothetic to $K_0$. In particular, if $K_0$ is a Euclidean ball, then $K$ also is a Euclidean ball.
From the point of view of convexity theory the restriction to convex bodies of class $C^2_+$ or with $C^2$ support functions is not natural and it would be of great interest to extend Theorem \[Theorem1.4\] and Corollaries \[intro:cor\] and \[Corollary1.3\] to general convex bodies. In the case of Corollary \[Corollary1.3\] when $n\geq3$, $i=1$ and $j=2$ this was done in [@Howard:brightness]. However, from the point of view of differential geometry, the class $C^2_+$ is quite natural and the convex bodies of constant $i$-brightness in $C^2_+$ have some interesting differential geometric properties. Recall that a point $x$ of ${\partial}K$ is an **umbilic point** iff all of the radii of curvature of ${\partial}K$ at $x$ are equal. The following is a special case of Proposition \[Proposition4.4\] below.
\[intro:umbilic\] Let $K$ be a convex body of class $C^2_+$ in ${{\mathbb R}}^n$ with $n\geq 5$, and let $2\leq k\leq n-3$. Assume that $K$ has constant $k$-brightness. Then ${\partial}K$ has a pair of umbilic points $x_1$ and $x_2$. Moreover the tangent planes of ${\partial}K$ at $x_1$ and $x_2$ are parallel and the radii of curvature of ${\partial}K$ at $x_1$ and $x_2$ are equal.
This is surprising, as for $n\geq 4$ the set of bodies $K$ in $C^2_+$ without umbilic points is open and dense in $C^2_+$ with respect to the $C^2$ topology.
Finally we comment on the relation of our results to those in the paper [@Haab:brightness] of Haab. All our main results are stated by Haab, but his proofs are either incomplete or have errors (see the review in Zentralblatt). In particular, the proof of his main result, stating that a convex body of class $C^2_+$ with constant width and constant $(n-1)$-brightness is a ball, is wrong (the proof is based on [@Haab:brightness Lemma 5.3] which is false even in the case of $n=1$) and this case is still open. We have included remarks at the appropriate places relating our results and proofs to those in [@Haab:brightness]. Despite the errors in [@Haab:brightness], the paper still has some important insights. In particular, while Haab’s proof of his Theorem 4.1 (our Proposition \[prop:wedge\]) is incomplete, see Remark \[Haab:incomplete\] below, the statement is correct and is the basis for the proofs of most of our results. Also it was Haab who realized that having constant brightness implies the existence of umbilic points. While his proof is incomplete and the details of the proof here differ a good deal from those of his proposed argument, the global structure of the proof here is still indebted to his paper.
Preliminaries {#sec:prelim}
=============
We will work in Euclidean space ${{\mathbb R}}^n$ with the usual inner product ${\langle}\cdot\,,\cdot{\rangle}$ and the induced norm $|\cdot|$. The support function of a convex body $K$ in ${{\mathbb R}}^n$ is the function $h_K{\colon}{{\mathbb R}}^n\to {{\mathbb R}}$ given by $h_K(x)=\max_{y\in K}{\langle}x,y{\rangle}$. The function $h_K$ is homogeneous of degree one. A convex body is uniquely determined by its support function. An important fact for us, first noted by Wintner [@Wintner:parallel Appendix], is that if $K$ is of class $C^2_+$, then its support function $h_K$ is of class $C^2$ on ${{\mathbb R}}^n{\smallsetminus}\{0\}$ and the principal radii of curvature (see below for a definition) of $K$ are everywhere positive (cf. [@Schneider:convex p. 106]). Conversely, if the support function of $K$ is of class $C^2$ on ${{\mathbb R}}^n{\smallsetminus}\{0\}$ and the principal radii of curvature of $K$ are everywhere positive, then $K$ is of class $C^2_+$ (cf. [@Schneider:convex p. 111]). In this paper, we say that a support function is of class $C^2$ if it is of class $C^2$ on ${{\mathbb R}}^n{\smallsetminus}\{0\}$. Let $L$ be a linear subspace of ${{\mathbb R}}^n$. Then the support function of the projection $K|L$ is the restriction $h_{K|L}=h_K\big|_{L}$. In particular, if $h_K$ is of class $C^2$, then $h_{K|L}$ is of class $C^2$ in $L$. As an easy consequence we obtain that if $K$ is of class $C^2_+$, then $K|L$ is of class $C^2_+$ in $L$.
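The two facts used repeatedly here, degree-one homogeneity of $h_K$ and the projection formula $h_{K|L}=h_K\big|_L$, can be illustrated numerically for a polytope. The following sketch is an illustration only (it is not part of the text's argument) and assumes `numpy`; the sample points and the coordinate plane $L=\operatorname{span}\{e_1,e_2\}$ are our choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# K = convex hull of finitely many points (a polytope); its support
# function is h_K(x) = max_y <x, y> over the generating points.
pts = rng.normal(size=(20, 4))        # 20 random points in R^4

def h(x, points):
    return max(float(np.dot(x, y)) for y in points)

x = rng.normal(size=4)

# h_K is homogeneous of degree one
assert abs(h(2.5 * x, pts) - 2.5 * h(x, pts)) < 1e-9

# the support function of the projection K|L is the restriction of h_K to L;
# here L = span{e_1, e_2}
P = np.zeros((4, 4)); P[0, 0] = P[1, 1] = 1.0
proj_pts = pts @ P.T                  # generating points of K|L
xL = np.array([0.7, -1.3, 0.0, 0.0])  # a vector lying in L
assert abs(h(xL, proj_pts) - h(xL, pts)) < 1e-9
```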
All of our proofs work for convex bodies $K\subset{{\mathbb R}}^n$ that have a $C^2$ support function, which leads to a somewhat larger class than the convex bodies of class $C^2_+$. As an example, let $K$ be of class $C^2_+$ and let $r_0$ be the minimum of all the radii of curvature on ${\partial}K$. Then by Blaschke’s rolling theorem (cf. [@Schneider:convex Thm 3.2.9 p. 149]) there is a convex set $K_1$ and a ball $B_{r_0}$ of radius $r_0$ such that $K$ is the Minkowski sum $K=K_1+B_{r_0}$ and no ball of radius greater than $r_0$ is a Minkowski summand of $K$. Thus no ball is a summand of $K_1$, for if $K_1=K_2+B_r$, $r>0$, then $K=K_1+B_{r_0}=K_2+B_{r+r_0}$, contradicting the maximality of $r_0$. As every convex body with $C^2$ boundary has a ball as a summand, it follows that $K_1$ does not have a $C^2$ boundary. But the support function of $K_1$ is $h_{K_1}=h_K-r_0$ and therefore $h_{K_1}$ is $C^2$. When $K_1$ has nonempty interior, for example when $K$ is an ellipsoid with all axes of different lengths, then $K_1$ is an example of a convex set with $C^2$ support function, but with ${\partial}K_1$ not of class $C^2$.
If the support function $h=h_K$ of a convex body $K\subset{{\mathbb R}}^n$ is of class $C^2$, then let ${\operatorname{grad}}h_K$ be the usual gradient of $h_K$. This is a $C^1$ vector field on ${{\mathbb R}}^n{\smallsetminus}\{0\}$. Let ${{\mathbb S}}^{n-1}$ be the unit sphere in ${{\mathbb R}}^n$. Then for $u\in {{\mathbb S}}^{n-1}$ the unique point on ${\partial}K$ with outward normal $u$ is ${\operatorname{grad}}h_K(u)$ (cf. [@Schneider:convex (2.5.8), p. 107]). In the case where $K$ is of class $C^2_+$, $u\mapsto {\operatorname{grad}}h_K(u)$ is the inverse of the [**Gauss map**]{} of ${\partial}K$. For this reason, $u\mapsto {\operatorname{grad}}h_K(u)$ is called the [**reverse spherical image map**]{} (cf. [@Schneider:convex p. 107]). Let $d^2h_K$ be the usual Hessian of $h_K$ viewed as a field of selfadjoint linear maps on ${{\mathbb R}}^n{\smallsetminus}\{0\}$. That is, for a vector $X$, $d^2h_KX=\nabla_X{\operatorname{grad}}h_K$ is the directional derivative of ${\operatorname{grad}}h_K$ in the direction $X$. As $h_K$ is homogeneous of degree one, it follows that $d^2h_K(u)u=0$ for any $u\in {{\mathbb S}}^{n-1}$. Moreover, since $d^2h_K$ is selfadjoint this implies that the orthogonal complement $u^\bot$ of $u$ is invariant under $d^2h_K(u)$. As $u^\bot=T_u{{\mathbb S}}^{n-1}$ we can then define a field of selfadjoint linear maps $L(h_K)$ on the tangent spaces to ${{\mathbb S}}^{n-1}$ by $$L(h_K)(u):=d^2h_K(u)\big|_{u^\bot}.$$ For given $u\in{{\mathbb S}}^{n-1}$, $L(h_K)(u)$ is called the [**reverse Weingarten map**]{} of $K$ at $u$. The eigenvalues of $L(h_K)(u)$ are the (principal) [**radii of curvature**]{} of $K$ in direction $u$ (cf. [@Schneider:convex p. 108]). Recall that if $K$ is of class $C^2_+$, then the derivative of the Gauss map is the [**Weingarten map**]{} of ${\partial}K$.
As $d^2h_K$ is the directional derivative of ${\operatorname{grad}}h_K\big|_{{{\mathbb S}}^{n-1}}$ and ${\operatorname{grad}}h_K\big|_{{{\mathbb S}}^{n-1}}$ is the inverse of the Gauss map, we have that $L(h_K)$ is the inverse of the Weingarten map. Provided that $K$ is of class $C^2_+$, the Weingarten map is positive definite and therefore the same is true of its inverse $L(h_K)$.
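As a numerical sanity check (not needed for any argument below), the reverse Weingarten map can be computed from the support function of a Euclidean ball $h(x)=r|x|$: the Hessian annihilates the radial direction, and the restriction to $u^\perp$ has all radii of curvature equal to $r$. The sketch below uses `numpy` and a finite-difference Hessian; the radius and the chosen unit vector are our sample values.

```python
import numpy as np

r = 2.0                                   # a Euclidean ball of radius r
h = lambda x: r * np.linalg.norm(x)       # its support function

def hessian(f, x, eps=1e-5):
    # central finite-difference Hessian of f at x
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.eye(n)[i] * eps; ej = np.eye(n)[j] * eps
            H[i, j] = (f(x+ei+ej) - f(x+ei-ej)
                       - f(x-ei+ej) + f(x-ei-ej)) / (4 * eps**2)
    return H

u = np.array([1.0, 2.0, 2.0]) / 3.0       # a unit vector in S^2
H = hessian(h, u)

# homogeneity of degree one forces d^2 h(u) u = 0
assert np.linalg.norm(H @ u) < 1e-4
# restrict to u^perp: the eigenvalues of L(h)(u) are the radii of
# curvature, all equal to r for a ball
B = np.linalg.svd(u.reshape(1, 3))[2][1:]  # orthonormal basis of u^perp
L = B @ H @ B.T
assert np.allclose(np.linalg.eigvalsh(L), [r, r], atol=1e-3)
```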
In the following, the notion of the area measure of a convex body will be useful. In the case of general convex bodies the definition is a bit involved, see [@Schneider:convex pp. 200–203] or [@Gardner:book pp. 351–353], but we will only need the case of bodies with support functions of class $C^2$ where an easier definition is possible. Let $K\subset{{\mathbb R}}^n$ be a convex body with support function $h_K$ of class $C^2$. Then the (top order) [**area measure**]{} is defined on Borel subsets $\omega$ of ${{\mathbb S}}^{n-1}$ by $$\label{top-area}
S_{n-1}(K,\omega):=\int_{\omega} \det(L(h_K)(u))\,du,$$ where $du$ denotes integration with respect to spherical Lebesgue measure. (See, for instance, [@Schneider:convex (4.2.20), p. 206; Chap. 5] or [@Gardner:book (A.7), p. 353].)
We need also a generalization of the operator $L(h_K)$. Let $K_0\subset{{\mathbb R}}^n$ be a convex body of class $C^2_+$, and let $h_0$ be the support function of $K_0$. As $K_0$ is of class $C^2_+$, the linear map $L(h_0)(u)$ is positive definite for all $u\in {{\mathbb S}}^{n-1}$. Therefore $L(h_0)(u)$ will have a unique positive definite square root which we denote by $L(h_0)^{1/2}(u)$. Then for any convex body $K\subset{{\mathbb R}}^n$ with support function $h_K$ of class $C^2$, we define $$\label{def:Lh0}
L_{h_0}(h_K)(u):=L(h_0)^{-1/2}(u)L(h_K)(u)L(h_0)^{-1/2}(u)$$ where $L(h_0)^{-1/2}(u)$ is the inverse of $L(h_0)^{1/2}(u)$. It is easily checked that if $K$ is of class $C^2_+$, then $L_{h_0}(h_K)(u)$ is positive definite for all $u$. Furthermore, we always have $$\det(L_{h_0}(h_K)(u))=\frac{\det(L(h_K)(u))}{\det(L(h_0)(u))}.$$ The linear map $L_{h_0}(h_K)(u)$ has the interpretation as the inverse Weingarten map in the relative geometry defined by $K_0$. This interpretation will not be used in the present paper, but it did motivate some of the calculations.
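The determinant identity for $L_{h_0}(h_K)$ is immediate from multiplicativity of the determinant; a quick numerical confirmation, with random positive definite matrices standing in for $L(h_0)(u)$ and $L(h_K)(u)$, is sketched below (an illustration only, assuming `numpy`).

```python
import numpy as np

rng = np.random.default_rng(1)

def spd(n):
    # a random symmetric positive definite matrix
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

def sqrt_inv(A):
    # inverse of the unique positive definite square root, via eigendecomposition
    w, V = np.linalg.eigh(A)
    return V @ np.diag(w ** -0.5) @ V.T

A0, A1 = spd(4), spd(4)               # stand-ins for L(h_0)(u) and L(h_K)(u)
M = sqrt_inv(A0) @ A1 @ sqrt_inv(A0)  # the analogue of L_{h_0}(h_K)(u)

# det(L_{h_0}(h_K)) = det(L(h_K)) / det(L(h_0))
assert np.isclose(np.linalg.det(M), np.linalg.det(A1) / np.linalg.det(A0))
# and the conjugated map is again positive definite
assert np.all(np.linalg.eigvalsh(M) > 0)
```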
Projections and support functions
=================================
Some multilinear algebra
------------------------
The geometric condition of proportional projection functions can be translated into a condition involving reverse Weingarten maps. In order to fully exploit this information, the following lemmas will be used. In fact, these lemmas fill a gap in [@Haab:brightness [§]{}4]. For basic results concerning the Grassmann algebra and alternating maps, which are used subsequently, we refer to [@MarcusI:73], [@MarcusII:75].
\[Lemma4.1\] Let $G,H,L{\colon}{\mathbb R}^{n} \to {\mathbb
R}^{n}$ be positive semidefinite linear maps. Let $k \in
\{1,\dots,n\}$, and assume that $$\label{eq4.1}
\left\langle \left(\land^{k}G+\land^kH\right)\xi,\xi \right\rangle =
\left\langle \left(\land^k L\right)\xi,\xi \right\rangle$$ for all decomposable $\xi \in \bigwedge^{k}{\mathbb R}^{n}$. Then $$\label{eq4.1b}
\land^{k}G+\land^k H = \land^{k}L.$$
It is sufficient to consider the cases $k \in \{2,\dots,n-1\}$. For $\xi,\zeta\in\bigwedge^k{{\mathbb R}}^n$, we define $$\omega_L(\xi,\zeta):=\left\langle \left(\land^k L\right)\xi,\zeta\right\rangle.$$ Then, for any $u_{1},\dots,u_{k+1},v_{1},\dots,v_{k-1} \in {\mathbb
R}^{n}$, the identity $$\label{eq4.2}
\sum_{j=1}^{k+1}(-1)^{j} \omega_L(u_{1}\land\dots\land\check{u}_{j}\land\dots\land u_{k+1};
u_{j}\land v_{1}\land \dots\land v_{k-1}) = 0$$ is satisfied, where $\check{u}_{j}$ means that $u_{j}$ is omitted. Thus, in the terminology of [@Kulkarni72], $\omega_L$ satisfies the first Bianchi identity. Once \[eq4.2\] has been verified, the proof of Lemma \[Lemma4.1\] can be completed as follows. Define $\omega_G$ and $\omega_H$ by replacing $L$ in the definition of $\omega_L$ by $G$ and $H$, respectively. Then $\omega_{G,H}:=
\omega_G+\omega_H$ also satisfies the first Bianchi identity. By assumption, $$\omega_{G,H}(\xi,\xi)=\omega_L(\xi,\xi)$$ for all decomposable $\xi \in \bigwedge^{k}{\mathbb
R}^{n}$. Proposition 2.1 in [@Kulkarni72] now implies that $$\omega_{G,H}(\xi,\zeta)=\omega_L(\xi,\zeta)$$ for all decomposable $\xi,\zeta \in \bigwedge^{k}{\mathbb R}^{n}$, which yields the assertion of the lemma.
For the proof of \[eq4.2\] we proceed as follows. Since $L$ is positive semidefinite, there is a positive semidefinite linear map $\varphi{\colon}{\mathbb R}^{n} \to {\mathbb R}^{n}$ such that $L =
\varphi \circ \varphi$. Hence $$\omega_L(u_{1}\land \dots \land u_{k}; v_{1} \land \dots \land v_{k}) = \langle \varphi
u_{1} \land \dots \land \varphi u_{k},\varphi v_{1} \land \dots
\land \varphi v_{k} \rangle$$ for all $u_{1},\dots,v_{k} \in {\mathbb R}^{n}$. For $a_{1},\dots,a_{k+1},b_{1},\dots,b_{k-1} \in {\mathbb R}^{n}$ we define $$\begin{gathered}
\Phi(a_{1},\dots,a_{k+1}; b_{1},\dots,b_{k-1}) \\
:= \sum_{j=1}^{k+1}(-1)^{j} \langle a_{1} \land \dots \land
\check{a}_{j} \land \dots \land a_{k+1}; a_{j} \land b_{1} \land
\dots \land b_{k-1} \rangle.\end{gathered}$$ We will show that $\Phi = 0$. Then, substituting $a_{i} = \varphi(u_{i})$ and $b_{j} = \varphi(v_{j})$, we obtain the required assertion \[eq4.2\].
For the proof of $\Phi = 0$, it is sufficient to show that $\Phi$ vanishes on the vectors of an orthonormal basis $e_{1},\dots,e_{n}$ of ${{\mathbb R}}^n$, since $\Phi$ is a multilinear map. So let $a_{1},\dots,a_{k+1} \in \{e_{1},\dots,e_{n}\}$, whereas $b_1,\dots,b_{k-1}$ are arbitrary.
If $a_{1},\dots,a_{k+1} $ are mutually different, then all summands of $\Phi$ vanish, since $\langle a_{i},a_{j} \rangle = 0$ for $i \not=
j$. Here we use that $$\langle u_1\land\dots\land u_k,v_1\land\dots\land v_k\rangle=
\det\left(\langle u_i,v_j\rangle_{i,j=1}^k\right)$$ for $u_1,\dots,u_k,v_1,\dots,v_k\in{{\mathbb R}}^n$.
Otherwise, $a_{i} = a_{j}$ for some $i \not= j$. In this case, we argue as follows. Assume that $i <
j$ (say). Then, repeatedly using that $a_{i} = a_{j}$, we get $$\begin{aligned}
& \Phi(a_{1},\dots,a_{k+1}; b_{1},\dots,b_{k-1}) \\
& = (-1)^{i} \langle a_{1} \land \dots \land \check{a}_{i}
\land \dots \land a_{j} \land \dots \land a_{k+1}; a_{i} \land
b_{1} \land \dots \land b_{k-1} \rangle \\
&\quad + (-1)^{j} \langle a_{1} \land \dots \land a_{i} \land
\dots \land \check{a}_{j} \land \dots \land a_{k+1}; a_{j} \land
b_{1} \land \dots \land b_{k-1} \rangle \\
&= (-1)^{i}(-1)^{j-i-1} \langle a_{1} \land \dots \land a_{j}
\land \dots \land \check{a}_{j} \land \dots \land a_{k+1}; a_{i}
\land b_{1} \land \dots \land b_{k-1}\rangle \\
&\quad + (-1)^{j} \langle a_{1} \land \dots \land a_{i} \land \dots
\land \check{a}_{j} \land \dots \land a_{k+1}; a_{j} \land b_{1}
\land \dots \land b_{k-1} \rangle \\
&= 0,\end{aligned}$$ which completes the proof.
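Nothing in the proofs depends on it, but readers who wish to experiment with such identities can realize $\land^k G$ concretely as the $k$-th compound matrix, whose entries are the $k\times k$ minors of $G$ in the basis of decomposables $e_{i_1}\land\dots\land e_{i_k}$. The sketch below (our illustration, assuming `numpy`) checks functoriality, which underlies computations like $\omega_L(\xi,\zeta)=\langle\varphi u_1\land\dots\land\varphi u_k,\varphi v_1\land\dots\land\varphi v_k\rangle$.

```python
import numpy as np
from itertools import combinations

def compound(A, k):
    """k-th compound matrix of A: represents wedge^k A in the basis
    e_I = e_{i_1} ^ ... ^ e_{i_k}, I = {i_1 < ... < i_k}; its entries
    are the k x k minors of A."""
    n = A.shape[0]
    idx = list(combinations(range(n), k))
    C = np.zeros((len(idx), len(idx)))
    for a, I in enumerate(idx):
        for b, J in enumerate(idx):
            C[a, b] = np.linalg.det(A[np.ix_(I, J)])
    return C

rng = np.random.default_rng(2)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

# wedge^k id = id on wedge^k R^n
assert np.allclose(compound(np.eye(4), 2), np.eye(6))
# functoriality (a Cauchy-Binet identity): wedge^k(AB) = (wedge^k A)(wedge^k B)
assert np.allclose(compound(A @ B, 2), compound(A, 2) @ compound(B, 2))
```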
\[Haab:incomplete\] In the proof of Theorem 4.1 in [@Haab:brightness], Haab uses a special case of Lemma \[Lemma4.1\], but his proof is incomplete. To describe the situation more carefully, let $T{\colon}\bigwedge^k{{\mathbb R}}^n\to\bigwedge^k{{\mathbb R}}^n$ denote a symmetric linear map satisfying $\langle T\xi,\xi\rangle =1$ for all decomposable unit vectors $\xi\in \bigwedge^k{{\mathbb R}}^n$. From this hypothesis Haab apparently concludes that $T$ is the identity map (cf. [@Haab:brightness p. 126, l.15-20]). While Lemma \[Lemma4.1\] implies that a corresponding fact is indeed true for maps $T$ of a special form, a counterexample for the general assertion is provided in [@MarcusII:75 p.124-5]. For a different counterexample, let $k$ be even and let $Q$ be the symmetric bilinear form defined on $\bigwedge^k({{\mathbb R}}^{2k})$ by $Q(w,w)=w \land w$. This is a symmetric bilinear form, as $k$ is even, and $w\land w\in \bigwedge^{2k}{{\mathbb R}}^{2k}$, which is one-dimensional and can thus be identified with the real numbers. In this example, $Q(\xi,\xi)=0$ for all decomposable $k$-vectors $\xi$, but $Q$ is not the zero bilinear form.
Haab states a version of the next lemma, [@Haab:brightness Cor 4.2, p. 126], without proof.
\[Lemma4.2\] Let $G, H{\colon}{\mathbb R}^{n} \to {\mathbb R}^{n}$ be selfadjoint linear maps and assume that $$\land^{k}G+\land^kH=\beta \land^k {\operatorname{id}}$$ for some constant $\beta\in{{\mathbb R}}$ with $\beta\neq0$ and some $k\in
\{1,\dots,n-1\}$. Then $G$ and $H$ have a common orthonormal basis of eigenvectors. If $k\geq 2$, then either $G$ or $H$ is an isomorphism.
If $k=1$, this is elementary, so we assume that $2\leq k\leq
n-1$. We first show that at least one of $G$ or $H$ is nonsingular. Assume that this is not the case. Then both the kernels $\ker G$ and $\ker H$ have positive dimension. Choose $k$ linearly independent vectors $v_1,\dots,v_k$ as follows: If $\ker G\cap \ker H\neq \{0\}$, then let $0\neq v_1\in \ker G\cap \ker H$ and choose any vectors $v_2,\dots,v_k$ so that $v_1,v_2,\dots,v_k$ are linearly independent. If $\ker G\cap \ker H= \{0\}$, then there are nonzero $v_1\in \ker G$ and $v_2\in \ker H$. Then $\ker G\cap \ker H= \{0\}$ implies that $v_1$ and $v_2$ are linearly independent. So in this case choose $v_3,\dots,v_k$ so that $v_1,\dots,v_k$ are linearly independent. In either case $$\begin{aligned}
(\land^{k}G+\land^kH)v_1\land v_2\land \dots \land v_k&
=Gv_1\land Gv_2\land \dots\land G v_k+Hv_1\land Hv_2\land \dots
\land Hv_k \\
&=0\end{aligned}$$ which contradicts that $\land^{k}G+\land^kH=\beta \land^k {\operatorname{id}}$ and $\beta\neq 0$.
Without loss of generality we assume that $H$ is nonsingular. Since $G$ is selfadjoint, there exists an orthonormal basis $e_1,\dots,e_n$ of eigenvectors of $G$ with corresponding eigenvalues $\alpha_1,\dots,\alpha_n\in{{\mathbb R}}$. For a decomposable vector $\xi=v_1\land\dots\land v_k\in\land^k{{\mathbb R}}^n{\smallsetminus}\{0\}$, we define $$\begin{aligned}
\mbox{[}\xi\mbox{]}:=&{\operatorname{span}}\{v\in{{\mathbb R}}^n:v\land\xi=0\}\\
=&{\operatorname{span}}\{v_1,\dots,v_k\}\in\mathbb{G}(n,k).\end{aligned}$$ Then, for any $1\leq i_1<\dots<i_k\leq n$, we get $$\begin{aligned}
H({\operatorname{span}}\{e_{i_1},\dots,e_{i_k}\})&={\operatorname{span}}\{H(e_{i_1}),\dots, H(e_{i_k})\}\\
&=[H(e_{i_1})\land\dots\land H(e_{i_k})]\\
&=[(\land^k H)e_{i_1}\land\dots\land e_{i_k}]\\
&=[\left(\beta\land^k {\operatorname{id}}-\land^k G\right)e_{i_1}\land\dots\land e_{i_k}]\\
&=[(\beta-\alpha_{i_1}\cdots\alpha_{i_k})e_{i_1}\land\dots\land e_{i_k}]\\
&={\operatorname{span}}\{e_{i_1},\dots,e_{i_k}\},\end{aligned}$$ where we used that $H$ is an isomorphism to obtain the second and the last equality. Since $k\leq n-1$, we can conclude that $$\begin{aligned}
H({\operatorname{span}}\{e_1\})&=H\left(\bigcap_{j=2}^{k+1}{\operatorname{span}}\{e_{1},\dots,\check{e}_j,\dots,e_{k+1}\}\right)\\
&=\bigcap_{j=2}^{k+1}H\left({\operatorname{span}}\{e_{1},\dots,\check{e}_j,\dots,e_{k+1}\}\right)\\
&=\bigcap_{j=2}^{k+1}{\operatorname{span}}\{e_{1},\dots,\check{e}_j,\dots,e_{k+1}\}\\
&={\operatorname{span}}\{e_1\}.\end{aligned}$$ By symmetry, we obtain that $e_i$ is an eigenvector of $H$ for $i=1,\dots,n$.
One proportional projection function
------------------------------------
Subsequently, if $K,K_0\subset{{\mathbb R}}^n$ are convex bodies with support functions of class $C^2$, we put $h:=h_K$ and $h_0:=h_{K_0}$ to simplify our notation. The following proposition is basic for the proofs of our main results.
\[prop:wedge\] Let $K,K_0\subset {{\mathbb R}}^n$ be convex bodies having support functions of class $C^2$, let $K_0$ be centrally symmetric, and let $k\in\{1,\dots,n-1\}$. Assume that $\beta>0$ is a positive constant such that $$\label{KP}
V_k(K \vert U)=\beta V_k(K_0 \vert U)$$ for all $U\in\mathbb{G}(n,k)$. Then, for all $u\in
{{\mathbb S}}^{n-1}$, $$\label{wedge}
\land^k L(h)(u)+\land^k L(h)(-u)
=2\beta\land^k L(h_0)(u) .$$
Let $u\in {{\mathbb S}}^{n-1}$ and a decomposable unit vector $\xi \in
\bigwedge^kT_u{{\mathbb S}}^{n-1}$ be fixed. Then there exist orthonormal vectors $e_1,\dots, e_k\in u^\perp$ such that $\xi=e_1\land\dots\land e_k$. Put $E:={\operatorname{span}}\{e_1{,\dots,}e_k,u\}\in\mathbb{G}(n,k+1)$ and $E_0:={\operatorname{span}}\{e_1,\dots,e_k\}
\in\mathbb{G}(n,k)$. For any $v\in E\cap {{\mathbb S}}^{n-1}$, $$V_k\left((K\vert E)\vert (v^\perp\cap E)\right)=\beta V_k\left((K_0\vert E)\vert
(v^\perp\cap E)\right),$$ and therefore a special case of Theorem 2.1 in [@GSW] (see also Theorem 3.3.2 in [@Gardner:book]) yields that $$S^E_k(K\vert E,\cdot)+S_k^E((K\vert E)^*,\cdot)=2\beta S_k^E(K_0\vert
E,\cdot),$$ where $S^E_k(M,\cdot)$ denotes the (top order) surface area measure of a convex body $M$ in $E$, and $(K\vert E)^*$ is the reflection of $K\vert E$ through the origin. Since $h_{K\vert E}=h_K\big|_E$ is of class $C^2$ in $E$, equation \[top-area\] applied in $E$ implies that $$\label{substi}
\det\left(d^2h_{K\vert E}(u)\big|_{E_0}\right)+
\det\left(d^2h_{K\vert E}(-u)\big|_{E_0}\right)
=2\beta\det\left(d^2h_{K_0\vert E}(u)
\big|_{E_0}\right).$$ Since $e_1,\dots,e_k,u$ is an orthonormal basis of $E$, we further deduce that $$\begin{aligned}
\det\left(d^2h_{K\vert E}(u)\big|_{E_0}\right)&=\det\left(d^2h_K(u)(e_i,e_j)_{i,j=1}^k\right)\\
&=\det\left(
\langle L(h)(u)e_i,e_j\rangle_{i,j=1}^k\right)\\
&=\left\langle\land^kL(h)(u)\xi,\xi\right\rangle,\end{aligned}$$ and similarly for the other determinants. Substituting these expressions into \[substi\] yields that $$\left\langle\left( \land^k L(h)(u)+\land^k L(h)(-u)\right)\xi,\xi\right\rangle=
\left\langle 2\beta \land^k L(h_0)(u)\xi,\xi\right\rangle$$ for all decomposable (unit) vectors $\xi\in\bigwedge^k{{\mathbb R}}^n$. Hence the required assertion follows from Lemma \[Lemma4.1\].
It is useful to rewrite Proposition \[prop:wedge\] in the notation of \[def:Lh0\]. The following corollary is implied by Proposition \[prop:wedge\] and Lemma \[Lemma4.2\].
\[cor:L0\] Let $K,K_0\subset {{\mathbb R}}^n$ be convex bodies with $K_0$ being centrally symmetric and of class $C^2_+$ and $K$ having $C^2$ support function. Let $k\in\{1,\dots,n-1\}$. Assume that $\beta>0$ is a positive constant such that $$V_k(K \vert U)=\beta V_k(K_0 \vert U)$$ for all $U\in\mathbb{G}(n,k)$. Then, for all $u\in {{\mathbb S}}^{n-1}$, $$\label{wedge0}
\land^k L_{h_0}(h)(u)+\land^k L_{h_0}(h)(-u)
=2\beta \land^k{\operatorname{id}}_{T_u\mathbb{S}^{n-1}}.$$ Moreover, for $k\in\{1,\dots,n-2\}$ the linear maps $L_{h_0}(h)(u)$ and $L_{h_0}(h)(-u)$ have a common orthonormal basis of eigenvectors.
The cases $1\leq i<j\leq n-2$
=============================
Polynomial relations
--------------------
In the sequel, it will be convenient to use the following notation. If $x_{1},\dots,x_{n}$ are nonnegative real numbers and $I \subset
\{1,\dots,n\}$, then we put $$x_{I} := \prod_{\iota \in I} x_{\iota}.$$ If $I = {\varnothing}$, the empty product is interpreted as $x_{{\varnothing}} := 1$. The cardinality of the set $I$ is denoted by $|I|$.
\[Lemmapra\] Let $a, b > 0$ and $2 \leq k < m \leq n-1$ with $a^{m} \not= b^{k}$. Let $x_{1},\dots,x_{n}$ and $
y_{1},\dots,y_{n}$ be positive real numbers such that $$x_{I} + y_{I} = 2a \quad \mbox{ and } \quad x_{J} + y_{J} = 2b$$ whenever $I, J\subset\{1,\dots,n\}$, $|I| = k$ and $|J| =
m$. Then there is a constant $c>0$ such that $x_\iota/y_\iota=c$ for $\iota=1,\dots,n$.
It is easy to see that this can be reduced to the case where $m=n-1$. Thus we assume that $m=n-1$. By assumption, $$x_\iota x_I+y_\iota y_I=2a\quad\text{and}\quad x_\iota x_{I'}+y_\iota y_{I'}=2a$$ whenever $\iota\in\{1,\dots,n\}$, $I,I'\subset\{1,\dots,n\}{\smallsetminus}\{\iota\}$, $|I|=|I'|=k-1$. Subtracting these two equations, we get $$\label{6n}
x_\iota(x_I-x_{I'})=y_\iota(y_{I'}-y_I).$$
By symmetry, it is sufficient to prove that $x_1/y_1=x_2/y_2$. We distinguish several cases.
[**Case 1.**]{} There exist $I,I'\subset\{3,\dots,n\}$, $|I|=|I'|=k-1$ with $x_I\neq x_{I'}$. Then \[6n\] implies that $$\frac{x_1}{y_1}=\frac{y_{I'}-y_I}{x_I-x_{I'}}=\frac{x_2}{y_2}.$$
[**Case 2.**]{} For all $I,I'\subset\{3,\dots,n\}$ with $|I|=|I'|=k-1$, we have $x_I=x_{I'}$.
Since $1\leq k-1\leq n-3$, we obtain $x:=x_3=\dots=x_n$. From \[6n\] we get that also $y_I=y_{I'}$ for all $I,I'\subset\{3,\dots,n\}$ with $|I|=|I'|=k-1$. Hence, $y:=y_3=\dots=y_n$.
[**Case 2.1.**]{} $x_1=x_2$. Since $$x_1x^{k-1}+y_1y^{k-1}=2a,\quad x_2x^{k-1}+y_2y^{k-1}=2a$$ and $x_1=x_2$, it follows that $y_1=y_2$. In particular, we have $x_1/y_1=x_2/y_2$.
[**Case 2.2.**]{} $x_1\neq x_2$.
[**Case 2.2.1.**]{} $x_1,x_2,x_3$ are mutually distinct. Choose $$I:=\{2\}\cup\{5,6,\dots,k+2\},\quad I':=\{4\}\cup\{5,6,\dots,k+2\}.$$ Here note that $k+2\leq n$ and $\{5,6,\dots,k+2\}$ is the empty set for $k=2$. Then $x_I\neq x_{I'}$ as $x_2\neq x_4=x_3$. Hence \[6n\] yields that $$\label{7n}
\frac{x_1}{y_1}=\frac{y_{I'}-y_I}{x_I-x_{I'}}=\frac{x_3}{y_3}.$$ Next choose $$I:=\{1\}\cup\{5,6,\dots,k+2\},\quad I':=\{4\}\cup\{5,6,\dots,k+2\}.$$ Then $x_I\neq x_{I'}$ as $x_1\neq x_4=x_3$, and hence \[6n\] yields that $$\label{8n}
\frac{x_2}{y_2}=\frac{y_{I'}-y_I}{x_I-x_{I'}}=\frac{x_3}{y_3}.$$ From \[7n\] and \[8n\], we get $x_1/y_1=x_2/y_2$.
[**Case 2.2.2.**]{} $x_1\neq x_2=x_3$ or $x_1=x_3\neq x_2$. By symmetry, it is sufficient to consider the first case. Since $k-1\leq n-3$ and using $$x_2x^{k-1}+y_2y^{k-1}=2a\quad\text{and}\quad x_3x^{k-1}+y_3y^{k-1}=2a,$$ we get $y_2=y_3$. By the assumption of the lemma, the equations $$\begin{aligned}
x_2^k+y_2^k&=2a,\label{9n}\\
x_1x_2^{k-1}+y_1y_2^{k-1}&=2a,\label{10n}\\
x_2^{n-1}+y_2^{n-1}&=2b,\label{11n}\\
x_1x_2^{n-2}+y_1y_2^{n-2}&=2b. \label{12n}\end{aligned}$$ are satisfied. From \[9n\] and \[10n\], we get $$x_2^{k-1}(x_2-x_1)+y_2^{k-1}(y_2-y_1)=0.$$ Moreover, \[11n\] and \[12n\] imply that $$x_2^{n-2}(x_2-x_1)+y_2^{n-2}(y_2-y_1)=0.$$ Since $x_1\neq x_2$, we thus obtain $$\frac{y_1-y_2}{x_2-x_1}=\frac{x_2^{k-1}}{y_2^{k-1}}=\frac{x_2^{n-2}}{y_2^{n-2}},$$ and therefore $y_2/x_2=1$. But now \[9n\], \[11n\] and $x_2=y_2$ give $x_2^k=a$ and $x_2^{n-1}=b$, hence $a^{n-1}=b^{k}$, a contradiction. Thus Case 2.2.2 cannot occur.
\[Lemmaalge\] Let $a, b > 0$ and $1 \leq k < m \leq n-1$ with $a^{m} \not= b^{k}$. Then there exists a finite set $\mathcal{F} =
\mathcal{F}_{a,b,k,m}$, only depending on $a, b, k, m$, such that the following is true: if $x_{1},\dots,x_{n}$ are nonnegative and $y_{1},\dots,y_{n}$ are positive real numbers such that $$x_{I} + y_{I} = 2a \quad \mbox{ and } \quad x_{J} + y_{J} = 2b$$ whenever $I, J\subset\{1,\dots,n\}$, $|I| = k$ and $|J| =
m$, then $y_{1},\dots,y_{n} \in \mathcal{F}$.
\[rmk:infinite\] The condition $a^{m} \not= b^{k}$ is necessary in this lemma. For example, if $a=b=1$, let $x_1=x_2=\dots =x_{n-1}=y_1=y_2=\dots =y_{n-1}=1$, $x_n=1+t$ and $y_n=1-t$, where $t\in(0,1)$. Then $x_I+y_I=2$ for any nonempty subset $I$ of $\{1,\dots,n\}$.
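The one-parameter family in the remark can be checked by brute force: with $x_n=1+t$ and $y_n=1-t$, a product pair $x_I+y_I$ equals $1+1=2$ when $I$ misses the index $n$ and $(1+t)+(1-t)=2$ when it contains it. A small script (our illustration; the values of $n$ and $t$ are sample choices):

```python
from itertools import combinations
from math import prod

n, t = 6, 0.3
x = [1.0] * (n - 1) + [1.0 + t]   # x_1 = ... = x_{n-1} = 1, x_n = 1 + t
y = [1.0] * (n - 1) + [1.0 - t]   # y_1 = ... = y_{n-1} = 1, y_n = 1 - t

# every product pair x_I + y_I over a nonempty index set I sums to 2
for k in range(1, n + 1):
    for I in combinations(range(n), k):
        xI = prod(x[i] for i in I)
        yI = prod(y[i] for i in I)
        assert abs(xI + yI - 2.0) < 1e-12
```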
It is easy to see that it is sufficient to consider the case $m=n-1$.
First, we consider the case $k=1$. Moreover, we assume that $x_1,\ldots,x_n$ are positive. Then by assumption $$\label{1b}
x_\iota+y_\iota=2a\quad\text{and}\quad x_J+y_J=2b$$ for $\iota=1,\dots,n$ and $J\subset\{1,\dots,n\}$, $|J|=n-1$. We put $X:=x_{\{1,\dots,n\}}$ and $Y:=y_{\{1,\dots,n\}}$. Then \[1b\] implies $$\frac{X}{x_\ell}+\frac{Y}{y_\ell}=2b,\quad \ell=1,\dots,n.$$ Using $y_\ell=2a-x_\ell$, this results in $$2bx_\ell^2+(-X+Y-4ab)x_\ell+2aX=0.$$ The quadratic equation $$2bz^2+(-X+Y-4ab)z+2aX=0$$ has at most two real solutions $z_1,z_2$, hence $x_1,\dots,x_n\in\{z_1,z_2\}$.
[**Case 1.**]{} $x_1=\dots=x_n=:x$. Then by \[1b\] also $y_1=\dots=y_n=:y$. It follows that $$\label{2b}
x^{n-1}+(2a-x)^{n-1}-2b=0.$$ The coefficient of highest degree of this polynomial equation is $2$ if $n$ is odd, and $(n-1)2a$ if $n$ is even. Hence the left-hand side of \[2b\] is not the zero polynomial. This shows that \[2b\] has only finitely many solutions, which depend on $a,b,m$ only.
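The parity statement about the leading coefficient can be confirmed by expanding $(2a-x)^{n-1}$ with the binomial theorem; the following sketch (our illustration, with sample values of $a,b$) is not part of the proof.

```python
from math import comb

def poly_coeffs(n, a, b):
    """Coefficients, by ascending degree, of x^(n-1) + (2a - x)^(n-1) - 2b."""
    c = [0.0] * n
    c[n - 1] += 1.0                       # the x^(n-1) term
    for i in range(n):                    # binomial expansion of (2a - x)^(n-1)
        c[i] += comb(n - 1, i) * (2 * a) ** (n - 1 - i) * (-1) ** i
    c[0] -= 2 * b
    return c

a, b = 1.5, 0.7
# n odd: the leading coefficient is 2
assert poly_coeffs(5, a, b)[4] == 2.0
# n even: the x^(n-1) terms cancel; the next coefficient is (n-1)(2a)
c = poly_coeffs(6, a, b)
assert c[5] == 0.0 and c[4] == 5 * (2 * a)
```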
[**Case 2.**]{} If not all of the numbers $x_1,\dots,x_n$ are equal, and hence $z_1\neq z_2$, we put $$l:=|\{\iota\in\{1,\dots,n\}:x_\iota=z_1\}|.$$ Then $1\leq l\leq n-1$ and $n-l=|\{\iota\in\{1,\dots,n\}:x_\iota=z_2\}|$. Then \[1b\] yields that $$\begin{aligned}
z_1^{l-1}z_2^{n-l}+(2a-z_1)^{l-1}(2a-z_2)^{n-l}&=2b,\label{3b}\\
z_1^{l}z_2^{n-l-1}+(2a-z_1)^{l}(2a-z_2)^{n-l-1}&=2b.\label{4b}\end{aligned}$$ If $l=1$, then \[3b\] gives $$\label{5b}
z_2^{n-1}+(2a-z_2)^{n-1}=2b.$$ Since this is not the zero polynomial, there exist only finitely many possible solutions $z_2$. Furthermore, \[4b\] gives $$z_1\left[z_2^{n-2}-(2a-z_2)^{n-2}\right]=2b-2a(2a-z_2)^{n-2}.$$ If $z_2 \neq a$, then $z_1$ is determined by this equation. The case $z_2=a$ cannot occur, since \[5b\] with $z_2=a$ implies that $a^{n-1}=b$, which is excluded by assumption.
If $l=n-1$, we can argue similarly.
So let $2\leq l\leq n-2$. Note that $0<z_1,z_2<2a$ since $x_\iota,y_\iota>0$ and $x_\iota+y_\iota=2a$. Equating \[3b\] and \[4b\], we obtain $$\label{6b}
\left(\frac{2a-z_1}{z_1}\right)^{l-1}=\left(\frac{z_2}{2a-z_2}\right)^{n-l-1}.$$ The positive points on the curve $Z_1^{l-1}=Z_2^{n-l-1}$, where $Z_1,Z_2>0$, are parameterized by $Z_1=t^{n-l-1}$ and $Z_2=t^{l-1}$, $t>0$. Therefore setting $$t^{n-l-1}=\frac{2a-z_1}{z_1},\qquad t^{l-1}=\frac{z_2}{2a-z_2},$$ that is $$\label{7b}
z_1=\frac{2a}{1+t^{n-l-1}},\qquad z_2=\frac{2at^{l-1}}{1+t^{l-1}},$$ we obtain a parameterization of the solutions $z_1,z_2$ of \[6b\]. Now we substitute \[7b\] in \[3b\] and thus get $$(2a)^{n-1}\frac{t^{(l-1)(n-l)}}{(1+t^{n-l-1})^{l-1}(1+t^{l-1})^{n-l}}
+(2a)^{n-1}\frac{t^{(l-1)(n-l-1)}}{(1+t^{n-l-1})^{l-1}(1+t^{l-1})^{n-l}}=2b.$$ Multiplication by $(1+t^{n-l-1})^{l-1}(1+t^{l-1})^{n-l}$ yields a polynomial equation where the monomial of largest degree is $$2b t^{(n-l-1)(l-1)}t^{(l-1)(n-l)},$$ and therefore the equation is of degree $(l-1)(2(n-l)-1)$. This equation will have at most $(l-1)(2(n-l)-1)$ positive solutions. Plugging these values of $t$ into \[7b\] gives a finite set of possible solutions of \[3b\] and \[4b\], depending only on $a,b,m$. This clearly results in a finite set of solutions of \[1b\].
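That the stated parameterization really solves the curve equation can be seen directly, since both sides reduce to $t^{(l-1)(n-l-1)}$; a one-line numerical check (the values of $n,l,a,t$ are our sample choices, not taken from the text):

```python
# sample values (for illustration): any l with 2 <= l <= n-2 works
n, l, a, t = 7, 3, 1.2, 0.8
z1 = 2 * a / (1 + t ** (n - l - 1))
z2 = 2 * a * t ** (l - 1) / (1 + t ** (l - 1))
# the parameterization satisfies ((2a - z1)/z1)^(l-1) = (z2/(2a - z2))^(n-l-1),
# since both sides equal t^((l-1)(n-l-1))
lhs = ((2 * a - z1) / z1) ** (l - 1)
rhs = (z2 / (2 * a - z2)) ** (n - l - 1)
assert abs(lhs - rhs) < 1e-12
```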
We turn to the case $2\leq k\leq n-2$. We still assume that $x_1,\ldots,x_n$ are positive. By assumption and using Lemma \[Lemmapra\], we get $$(1+c^k)y_I=2a\quad\text{and}\quad (1+c^{n-1})y_J=2b$$ for $I,J\subset\{1,\dots,n\}$, $|I|=k$, $|J|=n-1$, where $c>0$ is a constant such that $x_\iota/y_\iota=c$ for $\iota=1,\dots,n$. We conclude that $$y_{\tilde{I}}=\frac{b}{a}\frac{1+c^k}{1+c^{n-1}}$$ whenever $\tilde{I}\subset\{1,\dots,n\}$, $|\tilde{I}|=n-1-k$. Since $1\leq n-1-k\leq n-2$, we obtain $y_1=\dots=y_n=:y$. But then also $x_1=\dots=x_n=:x$. Thus we arrive at $$\label{15n}
x^k+y^k=2a\quad\text{and}\quad x^{n-1}+y^{n-1}=2b.$$ The set of positive real numbers $x,y$ satisfying \[15n\] is finite. In fact, \[15n\] implies that $$(2a-x^k)^{n-1}=y^{k(n-1)}=(2b-x^{n-1})^k,$$ and thus $$\begin{gathered}
\label{16n}
\sum_{\iota=0}^{n-1}\binom{n-1}{\iota}(2a)^\iota(-1)^{n-1-\iota}x^{k(n-1-\iota)}\\-
\sum_{\ell=0}^k\binom{k}{\ell}(2b)^\ell(-1)^{k-\ell}x^{(n-1)(k-\ell)}=0.\end{gathered}$$ The coefficient of the monomial of highest degree is $(-1)^{n-1}+(-1)^{k-1}$, if this number is nonzero, and otherwise it is equal to $(n-1)(2a)(-1)^{n-2}$, since $k(n-2)>(n-1)(k-1)$. In any case, the left side of \[16n\] is not the zero polynomial, and therefore \[16n\] has only a finite number of solutions, which merely depend on $a,b,k,m$.
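The claim about the top coefficients of $(2a-x^k)^{n-1}-(2b-x^{n-1})^k$ can again be confirmed by expanding both binomials; the sketch below (our illustration, with sample values of $n,k,a,b$) collects the coefficients by exponent.

```python
from math import comb
from collections import defaultdict

def coeffs_16n(n, k, a, b):
    """Exponent -> coefficient of (2a - x^k)^(n-1) - (2b - x^(n-1))^k."""
    c = defaultdict(float)
    for i in range(n):
        c[k * (n - 1 - i)] += comb(n - 1, i) * (2 * a) ** i * (-1) ** (n - 1 - i)
    for l in range(k + 1):
        c[(n - 1) * (k - l)] -= comb(k, l) * (2 * b) ** l * (-1) ** (k - l)
    return c

a, b = 1.1, 0.9
# n = 6, k = 3: (-1)^(n-1) + (-1)^(k-1) = 0, so the monomial of degree
# k(n-1) = 15 drops out and the one of degree k(n-2) = 12 leads,
# with coefficient (n-1)(2a)(-1)^(n-2)
c = coeffs_16n(6, 3, a, b)
assert c[15] == 0.0
assert abs(c[12] - 5 * (2 * a)) < 1e-12
# n = 6, k = 2: the top coefficient (-1)^5 + (-1)^1 = -2 is nonzero
assert coeffs_16n(6, 2, a, b)[10] == -2.0
```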
Finally, we turn to the case where some of the numbers $x_1,\ldots,x_n$ are zero. For instance, let $x_1=0$. Then we obtain that $$y_1y_{I'}=2a,\qquad y_1y_{J'}=2b$$ whenever $I',J'\subset\{2,\ldots,n\}$, $|I'|=k-1$ and $|J'|=n-2$, and thus $y_{J'}/y_{I'}=b/a$. Therefore $y_{\tilde I}=b/a$ for all $\tilde I\subset \{2,\ldots,n\}$ with $|\tilde I|=n-1-k$. Using that $k\ge 1$, we find that $y:=y_2=\ldots=y_n=(b/a)^{\frac{1}{n-1-k}}$. Since $y_1y^{k-1}=2a$, we again get that $y_1,\ldots,y_n$ can assume only finitely many values, depending only on $a,b,k,m=n-1$.
Proof of Theorem \[Theorem1.4\] for $1\leq i<j\leq n-2$
-------------------------------------------------------
An application of Corollary \[cor:L0\] shows that, for $u\in {{\mathbb S}}^{n-1}$, $$\label{re1}
\land^i L_{h_0}(h)(u)+
\land^i L_{h_0}(h)(-u)=2\alpha \land^i \text{id}_{u^\perp},$$ $$\label{re2}
\land^j L_{h_0}(h)(u)+\land^j L_{h_0}(h)(-u)=2\beta\land^j
\text{id}_{u^\perp}.$$ Since $i<j\leq n-2$, Corollary \[cor:L0\] also implies that, for any fixed $u\in {{\mathbb S}}^{n-1}$, $L_{h_0}(h)(u)$ and $L_{h_0}(h)(-u)$ have a common orthonormal basis of eigenvectors with corresponding nonnegative eigenvalues $x_1,\dots,x_{n-1}$ and $y_1,\dots,y_{n-1}$, respectively.
[**Case 1.**]{} $\alpha^j\neq\beta^i$. We will show that there is a finite set, $\mathcal{F}^*_{\alpha,\beta,i,j}$, independent of $u$, such that $$\label{re3}
\det{\left(}L_{h_0}(h)(u){\right)}=
\frac{\det L(h)(u)}{\det
L(h_0)(u)}\in\mathcal{F}^*_{\alpha,\beta,i,j},\quad \text{for
all $u\in{{\mathbb S}}^{n-1}$}.$$ Assume this is the case. Then, since $h,h_0$ are of class $C^2$, the function on the left-hand side of \[re3\] is continuous on the connected set ${{\mathbb S}}^{n-1}$ and hence must be equal to a constant $\lambda\geq 0$. If $\lambda=0$, then $\det L(h)\equiv0$ and, as $\det
L(h)$ is the density of the surface area measure $S_{n-1}(K,\cdot)$ with respect to spherical Lebesgue measure, this implies that the surface area measure $S_{n-1}(K,\cdot)\equiv 0$. But this cannot be true, since $K$ is a convex body (with nonempty interior). Therefore $\lambda>0$. Again using that $\det L(h)(u)$ is the density of the surface area measure $S_{n-1}(K,\cdot)$, and similarly for $h_0$ and $K_0$, we obtain $S_{n-1}(K,\cdot)= S_{n-1}(\lambda^{1/(n-1)}K_0,\cdot)$. But then Minkowski’s inequality implies that $K$ and $K_0$ are homothetic (see [@Schneider:convex Thm 7.2.1]).
To construct the set $\mathcal{F}^*_{\alpha,\beta,i,j}$, we first put $0$ in the set. Then we only have to consider the points $u\in
{{\mathbb S}}^{n-1}$ where $\det L_{h_0}(h)(u)\neq0$. At these points, \[re1\] and \[re2\] show that the assumptions of Lemma \[Lemmaalge\] are satisfied (with $n$ replaced by $n-1$). Hence there is a finite set $\mathcal{F}_{\alpha,\beta,i,j}$, such that for any $u\in {{\mathbb S}}^{n-1}$ with $\det L_{h_0}(h)(u)\neq0$, if $x_1,\dots,x_{n-1}$ are the eigenvalues of $L_{h_0}(h)(-u)$ and $y_1,\dots,y_{n-1}$ are the eigenvalues of $L_{h_0}(h)(u)$, then $y_1,\dots,y_{n-1}\in \mathcal{F}_{\alpha,\beta,i,j}$. Let $\mathcal{F}^*_{\alpha,\beta,i,j}$ be the union of $\{0\}$ with the set of all products of $n-1$ numbers each from the set $\mathcal{F}_{\alpha,\beta,i,j}$.
[**Case 2.**]{} If $\alpha^j=\beta^i$, then the assumptions can be rewritten in the form $$\label{e4.8}
\left(\frac{V_j(K_0\vert U)}{V_j(K\vert U)}\right)^{\frac{1}{j}}=
\left(\frac{V_i(K_0\vert L)}{V_i(K\vert L)}\right)^{\frac{1}{i}}$$ for all $U\in\mathbb{G}(n,j)$ and all $L\in\mathbb{G}(n,i)$. Let $U\in\mathbb{G}(n,j)$ be fixed. By homogeneity we can replace $K_0$ by $\mu K_0$ on both sides of \[e4.8\], where $\mu>0$ is chosen such that $V_j(\mu K_0\vert U)=V_j(K\vert U)$. We put $M_0:=\mu K_0\vert U$ and $M:=K\vert U$. Then, for any $L\in
\mathbb{G}(n,i)$ with $L\subset U$, we have $$V_j(M)=V_j(M_0)\quad\text{and}\quad V_i(M\vert L)=V_i(M_0\vert L).$$ By the discussion in [@GSW97 [§]{} 4] or the main theorem in [@ChLu], we infer that $M$ is a translate of $M_0$, and therefore $K| U$ and $K_0| U$ are homothetic. Since $j\geq 2$, Theorem 3.1.3 in [@Gardner:book] shows that $K$ and $K_0$ are homothetic.
The cases $2\leq i<j\leq n-1$ with $i\neq n-2$
==============================================
Existence of relative umbilics
------------------------------
We need another lemma concerning polynomial relations.
Let $n\geq 5$, $k\in\{2,\dots,n-3\}$, $\gamma>0$, and let positive real numbers $0<x_1\leq x_2\leq \dots\leq x_{n-1}$ be given. Assume that $$\label{sternl}
x_I+x_{I^*}=2\gamma$$ for all $I\subset\{1,\dots,n-1\}$ with $|I|=k$, where $x_I:=\prod_{i\in I}x_i$ and $I^*:=\{n-i:i\in I\}$. Then $x_1=\dots=x_{n-1}$.
Choosing $I=\{1,2,\dots,k\}$ in \[sternl\], we get $$\label{eql1}
x_1x_2\cdots x_k+x_{n-k}\cdots x_{n-2}x_{n-1}=2\gamma.$$ Choosing $I=\{1,n-k,\dots,n-2\}$ in \[sternl\], we obtain $$\label{eql2}
x_1x_{n-k}\cdots x_{n-2}+x_2\cdots x_kx_{n-1}=2\gamma.$$ Subtracting \[eql2\] from \[eql1\], we arrive at $$\label{eql3}
x_{n-k}\cdots x_{n-2}(x_{n-1}-x_1)+x_2\cdots x_k(x_1-x_{n-1})=0.$$
Assume that $x_1\neq x_{n-1}$. Then \[eql3\] implies that $$\label{eql4}
x_2\cdots x_k=x_{n-k}\cdots x_{n-2}.$$ We assert that $x_2=x_{n-2}$. To verify this, we first observe that $2\leq k\leq n-3$ and $x_2\leq \dots \leq x_{n-2}$. After cancellation of factors with the same index on both sides of \[eql4\], we have $$\label{eql5}
x_2\cdots x_l=x_{n-l}\cdots x_{n-2},$$ where $2\leq l<n-l$ (here we use $k\leq n-3$). Since $$x_l\leq x_{n-l},\quad x_{l-1}\leq x_{n-l+1},\quad\dots\quad x_2\leq x_{n-2},$$ equation \[eql5\] yields that $x_2=\dots=x_{n-2}$.
Now \[eql1\] turns into $$\label{eql6}
x_1x_2^{k-1}+x_2^{k-1}x_{n-1}=2\gamma.$$ From \[sternl\] with $I=\{2,\dots,k+1\}$ and using that $k\leq n-3$, we obtain $$\label{eql7}
x_2^k+x_2^k=2\gamma.$$ Hence \[eql6\] and \[eql7\] show that $$\label{eql8}
x_1+x_{n-1}=2x_2.$$ Applying \[sternl\] with $I=\{1,\dots,k-1,n-1\}$ and using \[eql7\], we get $$2x_1x_2^{k-2}x_{n-1}=2\gamma=2x_2^k,$$ hence $$\label{eql9}
x_1x_{n-1}=x_2^2.$$ But \[eql8\] and \[eql9\] give $x_1=x_{n-1}$, a contradiction.
This shows that $x_1=x_{n-1}$, which implies the assertion of the lemma.
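As an independent sanity check on the lemma (not part of the proof), one can verify by brute force for small $n$ and $k$ that a constant sequence satisfies every subset relation with a common value $2\gamma$, while a non-constant monotone sequence violates at least one of them. The helper name `subset_products_consistent` in the following Python sketch is ours, introduced only for this illustration:

```python
from itertools import combinations

def subset_products_consistent(x, k, tol=1e-12):
    """Check whether prod_{i in I} x_i + prod_{i in I*} x_i takes the same
    value for every k-element subset I of {1,...,n-1}, with I* = {n-i : i in I}.
    Here x = (x_1, ..., x_{n-1})."""
    n = len(x) + 1
    def prod(idx):
        p = 1.0
        for i in idx:
            p *= x[i - 1]
        return p
    values = [prod(I) + prod(tuple(n - i for i in I))
              for I in combinations(range(1, n), k)]
    return max(values) - min(values) < tol

# a constant sequence satisfies every relation (with 2*gamma = 2*c**k) ...
assert subset_products_consistent([2.0] * 5, k=2)   # n = 6, k = 2
# ... while a non-constant monotone sequence cannot
assert not subset_products_consistent([1.0, 1.0, 1.0, 1.0, 2.0], k=2)
```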
\[Proposition4.4\] Let $K,K_0 \subset {{\mathbb R}}^n$ be convex bodies with $K_0$ centrally symmetric and of class $C^2_+$ and $K$ having a $C^2$ support function. Let $n\geq 5$ and $k\in\{2,\dots,n-3\}$. Assume that there is a constant $\beta>0$ such that $$V_k(K\vert U)=\beta V_k(K_0\vert U)$$ for all $U\in\mathbb{G}(n,k)$. Then there exist $u_0\in {{\mathbb S}}^{n-1}$ and $r_0>0$ such that $$L_{h_0}(h)(u_0)=L_{h_0}(h)(-u_0)=r_0{\operatorname{id}}_{T_{u_0}{{\mathbb S}}^{n-1}}.$$
For $u\in {{\mathbb S}}^{n-1}$, let $r_1(u),\dots,r_{n-1}(u)$ denote the eigenvalues of the selfadjoint linear map $L_{h_0}(h)(u){\colon}T_u{{\mathbb S}}^{n-1}\to T_u{{\mathbb S}}^{n-1}$, which are ordered such that $$r_1(u)\leq\dots\leq r_{n-1}(u).$$ Then we define a continuous map $R{\colon}{{\mathbb S}}^{n-1}\to {{\mathbb R}}^{n-1}$ by $$R(u):=(r_1(u),\dots,r_{n-1}(u)).$$ By the Borsuk-Ulam theorem (cf. [@Guillemin-Pollack p. 93] or [@Matousek:BU]), there is some $u_0\in {{\mathbb S}}^{n-1}$ such that $$\label{4.0a}
R(u_0)=R(-u_0).$$ Corollary \[cor:L0\] shows that $L_{h_0}(h)(u_0)$ and $L_{h_0}(h)(-u_0)$ have a common orthonormal basis $e_1,\dots,e_{n-1}\in u_0^\perp$ of eigenvectors and that at least one of $L_{h_0}(h)(u_0)$ or $L_{h_0}(h)(-u_0)$ is nonsingular. But $R(u_0)=R(-u_0)$ implies that $L_{h_0}(h)(u_0)$ and $L_{h_0}(h)(-u_0)$ have the same eigenvalues and thus they are both nonsingular. Therefore the eigenvalues of both $L_{h_0}(h)(u_0)$ and $L_{h_0}(h)(-u_0)$ are positive.
We can assume that, for $\iota=1,\dots,n-1$, $e_\iota$ is an eigenvector of $L_{h_0}(h)(u_0)$ corresponding to the eigenvalue $r_\iota:=r_\iota(u_0)$. Next we show that $e_\iota$ is an eigenvector of $L_{h_0}(h)(-u_0)$ corresponding to the eigenvalue $r_{n-\iota}(-u_0)$. Let $\tilde{r}_\iota$ denote the eigenvalue of $L_{h_0}(h)(-u_0)$ corresponding to the eigenvector $e_\iota$, $\iota=1,\dots,n-1$. Since $\tilde{r}_1,
\dots,\tilde{r}_{n-1}$ is a permutation of $r_1(-u_0),\dots,r_{n-1}(-u_0)$, it is sufficient to show that $\tilde{r}_1\geq \dots\geq \tilde{r}_{n-1}$. By Corollary \[cor:L0\], for any $1\leq i_1<\dots< i_k\leq n-1$ we have $$\left(\land^kL_{h_0}(h)(u_0)+\land^k L_{h_0}(h)(-u_0)\right)e_{i_1}\land\dots\land e_{i_k}=2\beta
e_{i_1}\land\dots\land e_{i_k} ,$$ and therefore $$\label{4.1a}
r_{i_1}\cdots r_{i_k}+\tilde{r}_{i_1}\cdots \tilde{r}_{i_k}=2\beta .$$ For $\iota\in\{1,\dots,n-2\}$, we can choose a subset $I\subset\{1,\dots,n-1\}$ with $|I|=k-1$ and $\iota,\iota+1\notin I$, since $k+1\leq n-1$. Then \[4.1a\] yields $$r_I r_\iota+\tilde{r}_I \tilde{r}_\iota=
r_I r_{\iota+1}+\tilde{r}_I\tilde{r}_{\iota+1}
\geq r_I r_{\iota}+ \tilde{r}_I\tilde{r}_{\iota+1},$$ which implies that $\tilde{r}_\iota\geq\tilde{r}_{\iota+1}$.
Let $1\leq i_1<\dots<i_k\leq n-1$ and $I:=\{i_1,\dots,i_k\}$. Applying the linear map $\land^k L_{h_0}(h)(u_0)+
\land^k L_{h_0}(h)(-u_0)$ to $e_{i_1}\land\dots\land e_{i_k}$, we get $$\label{eqsp1}
\prod_{\iota\in I}r_\iota(u_0)+ \prod_{\iota\in I}r_{n-\iota}(-u_0)=2\beta.$$ From \[4.0a\] and \[eqsp1\] we conclude that the sequence $0<r_1(u_0)\leq\dots\leq r_{n-1}(u_0)$ satisfies the hypothesis of Lemma \[sternl\]. Hence, $r_1(u_0)=\dots=r_{n-1}(u_0)=:r_0$. But $R(-u_0)=R(u_0)$ implies that also $r_1(-u_0)=\dots=r_{n-1}(-u_0)=r_0$, which yields the assertion of the proposition.
Proof of Theorem \[Theorem1.4\]: remaining cases
------------------------------------------------
It remains to consider the cases where $j=n-1$. Hence, we have $2\leq i\leq n-3$. Proposition \[Proposition4.4\] implies that there is some $u_0\in {{\mathbb S}}^{n-1}$ such that the eigenvalues of $L_{h_0}(h)(u_0)$ and $L_{h_0}(h)(-u_0)$ are all equal to $r_0>0$. But then Corollary \[cor:L0\] shows that $$r_0^i+r_0^i=2\alpha=2\frac{V_i(K\vert L)}{V_i(K_0\vert L)},$$ for all $L\in\mathbb{G}(n,i)$, and $$r_0^j+r_0^j=2\beta=2\frac{V_j(K\vert U)}{V_j(K_0\vert U)},$$ for all $U\in\mathbb{G}(n,j)$. Hence, we get $$\left(\frac{V_j(K_0\vert U)}{V_j(K\vert U)}\right)^{\frac{1}{j}}=
\left(\frac{V_i(K_0\vert L)}{V_i(K\vert L)}\right)^{\frac{1}{i}}$$ for all $U\in\mathbb{G}(n,j)$ and all $L\in\mathbb{G}(n,i)$. Thus equation \[e4.8\] is again available and the proof can be completed as before.
Proof of Corollary \[Corollary1.3\]
-----------------------------------
Let $K$ have constant width $w$. Then, by [@Bonnesen-Fenchel [§]{}64], the diameter of $K$ is also $w$ and any point $x\in {\partial}K$ is the endpoint of a diameter of $K$. That is, there is $y\in \partial K$ such that $|x-y|=w$. Then $K$ is contained in the closed ball $B(y,w)$ of radius $w$ centered at $y$, and $x\in {\partial}B(y,w)\cap K$. Thus, if ${\partial}K$ is $C^2$, then ${\partial}K$ is internally tangent to the sphere ${\partial}B(y,w)$ at $x$. Therefore all the principal curvatures of ${\partial}K$ at $x$ are greater than or equal to the principal curvatures of ${\partial}B(y,w)$ at $x$, and thus all the principal curvatures of ${\partial}K$ at $x$ are at least $1/w$. Whence the Gauss-Kronecker curvature of ${\partial}K$ at $x$ is at least $1/w^{n-1}$. As $x$ was an arbitrary point of ${\partial}K$, this shows that if ${\partial}K$ is a $C^2$ submanifold of ${{\mathbb R}}^n$ and $K$ has constant width, then ${\partial}K$ is of class $C^2_+$. Corollary \[Corollary1.3\] now follows directly from Corollary \[intro:cor\].
Bodies of revolution
====================
We now give a proof of Proposition \[revolution\]. By assumption, there are constants $\alpha,\beta>0$ such that $$V_i(K | L)=\alpha V_i(K_0 | L)\quad \text{and}\quad V_{n-1}(K |
U)=\beta V_{n-1}(K_0 | U),$$ for all $L\in\mathbb{G}(n,i)$ and $U\in \mathbb{G}(n,n-1)$, where $i\in\{1,n-2\}$. We can assume that the axis of revolution contains the origin and has direction $e\in{{\mathbb S}}^{n-1}$. Let $u\in{{\mathbb S}}^{n-1}{\smallsetminus}\{\pm e\}$. Then there are $\varphi\in
\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$ and $v_0\in{{\mathbb S}}^{n-1}\cap
e^\perp$ such that $u=\cos\varphi\, v_0+\sin\varphi\, e$. For the sake of completeness we include a proof of the following lemma.
\[eigenvalues\] The map $L(h_K)(u)$ is a multiple of the identity map on $e^\perp\cap
v_0^\perp$ and has $-\sin\varphi\, v_0+\cos \varphi\, e$ as an eigenvector.
By rotational invariance, there is some $r(\varphi)>0$ such that $$\label{diff}
h_K(\cos\varphi\,v+\sin\varphi\, |v|e)=r(\varphi)|v|,$$ for all $v\in e^\perp$. Differentiating \[diff\] twice with respect to $v\in e^\perp$ yields that, for any $v,w\in e^\perp\cap
v_0^\perp$, $$\cos^2\varphi \, d^2h_K(\cos\varphi\, v_0+\sin\varphi\, e)(v,w)
=r(\varphi) \langle v,w\rangle.$$ Moreover, differentiating \[diff\] once with respect to $v$, we obtain, for any $v\in e^\perp\cap v_0^\perp$, $$\label{diffdiff}
dh_K(\cos\varphi\, v_0+\sin\varphi\, e)(v)=0.$$ Differentiating \[diffdiff\] with respect to $\varphi$, we obtain $$d^2h_K(\cos\varphi\, v_0+\sin\varphi\, e)(v,-\sin\varphi\, v_0+\cos\varphi\, e)=0.$$ Thus, if $v_1,\dots,v_{n-2}$ is an orthonormal basis of $e^\perp\cap
v_0^\perp$, then $-\sin\varphi\, v_0+\cos\varphi\, e,v_1,\dots,v_{n-2}$ is an orthonormal basis of eigenvectors of $L(h_K)(u)$ with corresponding eigenvalues $x_1$ and $x_2=\dots=x_{n-1}=:x$.
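The lemma can be checked numerically on a concrete body of revolution. The sketch below is our own illustration, not part of the proof: it takes the ellipsoid of revolution in ${{\mathbb R}}^4$ with semi-axes $(b,b,b,a)$ and support function $h(x)=\sqrt{b^2|x|^2+(a^2-b^2)\langle x,e\rangle^2}$, and computes a finite-difference Hessian of the $1$-homogeneously extended support function at an equatorial normal $u\perp e$ (so $\varphi=0$ and $v_0=u$), whose restriction to $u^\perp$ represents $L(h)(u)$:

```python
import math

a, b = 2.0, 1.0  # semi-axes of the ellipsoid of revolution (b, b, b, a); axis e = e_4

def h(x):
    # support function, extended 1-homogeneously to R^4
    return math.sqrt(b * b * sum(t * t for t in x) + (a * a - b * b) * x[3] * x[3])

def hessian_entry(u, i, j, eps=1e-4):
    # central finite-difference approximation of the (i, j) second derivative of h at u
    def shifted(s, t):
        v = list(u)
        v[i] += s * eps
        v[j] += t * eps
        return v
    return (h(shifted(1, 1)) - h(shifted(1, -1))
            - h(shifted(-1, 1)) + h(shifted(-1, -1))) / (4 * eps * eps)

u = [1.0, 0.0, 0.0, 0.0]  # equatorial normal direction: u orthogonal to e = e_4
# on e^perp ∩ v_0^perp = span{e_2, e_3} the map is b times the identity ...
assert abs(hessian_entry(u, 1, 1) - b) < 1e-5
assert abs(hessian_entry(u, 2, 2) - b) < 1e-5
assert abs(hessian_entry(u, 1, 2)) < 1e-5
# ... and -sin(phi) v_0 + cos(phi) e = e_4 is an eigenvector, with eigenvalue a^2/b
assert abs(hessian_entry(u, 3, 3) - a * a / b) < 1e-5
```

The recovered eigenvalues $b$ (with multiplicity $n-2$) and $a^2/b$ are the principal radii of curvature of the spheroid at an equatorial boundary point, as expected.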
Let $K$ and $K_0$ be as in Proposition \[revolution\] and let $e$ be a unit vector in the direction of the common axis of rotation of $K$ and $K_0$. Let $h$ be the support function of $K$ and $h_0$ the support function of $K_0$. Let $u\in {{\mathbb S}}^{n-1}\cap e^\bot$ be a point in the equator of ${{\mathbb S}}^{n-1}$ defined by $e$. As $e$ is orthogonal to $u$, the vector $e$ is in the tangent space to ${{\mathbb S}}^{n-1}$ at $u$. Let $e_2{,\dots,}e_{n-1}$ be an orthonormal basis for $\{u,e\}^\bot$. Then $e,e_2{,\dots,}e_{n-1}$ is an orthonormal basis for both $T_u{{\mathbb S}}^{n-1}$ and $T_{-u}{{\mathbb S}}^{n-1}$. By Lemma \[eigenvalues\] there are eigenvalues $x_1$, and $x_2=x_3=\dots =x_{n-1}=:x$ such that $L(h)(u)e=x_1 e$ and $L(h)(u)e_j=x e_j$ for $j=2{,\dots,}n-1$. By rotational symmetry we also have $L(h)(-u)e=x_1 e$ and $L(h)(-u)e_j=x e_j$ for $j=2{,\dots,}n-1$. Likewise if $y_1$, and $y_2=y_3=\dots =y_{n-1}=:y$ are the eigenvalues of $L(h_0)(u)$, then they are also the eigenvalues of $L(h_0)(-u)$ and $L(h_0)(\pm u)e=y_1 e$ and $L(h_0)(\pm u)e_j=y e_j$ for $j=2{,\dots,}n-1$. Proposition \[prop:wedge\] implies the polynomial relations $$\begin{aligned}
x_1x^{i-1}+x_1x^{i-1}&=2\alpha y_1y^{i-1},\\
x^{i}+x^{i}&=2\alpha y^i,\\
x_1x^{n-2}+x_1x^{n-2}&=2\beta y_1y^{n-2}.\end{aligned}$$ The first two of these imply that $x/y=x_1/y_1$ and therefore $$\alpha^{n-1}={\left(}\frac{x}{y}{\right)}^{i(n-1)}=\beta^i.$$ As in Case 2 of the proof of Theorem \[Theorem1.4\], this implies that equation \[e4.8\] holds, which in turn implies that $K$ and $K_0$ are homothetic.
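The elimination leading from the three displayed relations to $\alpha^{n-1}=\beta^i$ is easy to confirm numerically. The following sketch uses arbitrarily chosen positive values (an illustration of ours, not data from the paper): it fixes $\alpha$ from the second relation, solves the first for $y_1$, defines $\beta$ from the third, and checks the claimed identity:

```python
# arbitrarily chosen positive eigenvalue data (illustrative only)
n, i = 7, 3
x, y, x1 = 0.9, 1.4, 2.0

alpha = (x / y)**i                           # from x^i + x^i = 2*alpha*y^i
y1 = x1 * x**(i - 1) / (alpha * y**(i - 1))  # solve x1*x^(i-1) = alpha*y1*y^(i-1) for y1
assert abs(x1 / y1 - x / y) < 1e-12          # the first two relations force x1/y1 = x/y
beta = x1 * x**(n - 2) / (y1 * y**(n - 2))   # from x1*x^(n-2) + x1*x^(n-2) = 2*beta*y1*y^(n-2)
assert abs(alpha**(n - 1) - beta**i) < 1e-9 * max(1.0, alpha**(n - 1))
```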
[10]{}
W. Blaschke, *Kreis und [K]{}ugel*, Chelsea Publishing Co., New York, 1949. [MR ]{}[17,887b]{}
T. Bonnesen and W. Fenchel, *Theorie der konvexen [K]{}[ö]{}rper*, Chelsea Publishing Co., Bronx, N.Y., 1971, Reissue of the 1948 reprint of the 1934 original. [MR ]{}[51 \#8954]{}
S. Campi, *Reconstructing a convex surface from certain measurements of its projections*, Boll. Un. Mat. Ital. B (6) **5** (1986), 945–959. [MR ]{}[88f:52004]{}
G. D. Chakerian, *Sets of constant relative width and constant relative brightness*, Trans. Amer. Math. Soc. **129** (1967), 26–37. [MR ]{}[35 \#3545]{}
G. D. Chakerian and E. Lutwak, *Bodies with similar projections*, Trans. Amer. Math. Soc. **349** (1997), no. 5, 1811–1820. [MR ]{}[98a:52011]{}
W. J. Firey, *Convex bodies of constant outer [$p$]{}-measure*, Mathematika **17** (1970), 21–27. [MR ]{}[42 \#2367]{}
R. J. Gardner, *Geometric tomography*, Encyclopedia of Mathematics and its Applications, vol. 58, Cambridge University Press, Cambridge, 1995. [MR ]{}[96j:52006]{}
R. J. Gardner and A. Volčič, *Tomography of convex and star bodies*, Adv. Math. **108** (1994), 367–399. [MR ]{}[95j:52013]{}
P. Goodey, R. Schneider, and W. Weil, *Projection functions on higher rank [G]{}rassmannians*, Geometric aspects of functional analysis (Israel, 1992–1994), Oper. Theory Adv. Appl., vol. 77, Birkh[ä]{}user, Basel, 1995, pp. 75–90. [MR ]{}[96h:52007]{}
[to3em]{}, *Projection functions of convex bodies*, Intuitive geometry (Budapest, 1995), Bolyai Soc. Math. Stud., vol. 6, J[á]{}nos Bolyai Math. Soc., Budapest, 1997, pp. 23–53. [MR ]{}[98k:52020]{}
[to3em]{}, *On the determination of convex bodies by projection functions*, Bull. London Math. Soc. **29** (1997), 82–88. [MR ]{}[97g:52017]{}
V. Guillemin and A. Pollack, *Differential topology*, Prentice-Hall Inc., Englewood Cliffs, N.J., 1974. [MR ]{}[50 \#1276]{}
F. Haab, *Convex bodies of constant brightness and a new characterisation of spheres*, J. Differential Geom. **52** (1999), no. 1, 117–144. [MR ]{}[2001k:52008]{}
R. Howard, *[Convex Bodies of Constant Width and Constant Brightness]{}*, Preprint. Available at: arXiv:math.MG/0306437.
R. S. Kulkarni, *On the [B]{}ianchi [I]{}dentities*, Math. Ann. **199** (1972), 175–204. [MR ]{}[49 \#3767]{}
M. Marcus, Finite dimensional multilinear algebra. Part I. Pure and Applied Mathematics, Vol. 23. Marcel Dekker, Inc., New York, 1975. [MR ]{}[50 \#4599]{}
M. Marcus, Finite dimensional multilinear algebra. Part II. Pure and Applied Mathematics, Vol. 23. Marcel Dekker, Inc., New York, 1975. [MR ]{}[53 \#5623]{}
J. Matou[š]{}ek, *Using the [B]{}orsuk-[U]{}lam theorem*. Lectures on topological methods in combinatorics and geometry. Written in cooperation with Anders Bj[ö]{}rner and G[ü]{}nter M. Ziegler. Universitext, Springer-Verlag, Berlin, 2003. [MR ]{}[2004i:55001]{}
S. Nakajima, *Eine charakteristische [E]{}igenschaft der [K]{}ugel*, Jber. Deutsche Math.-Verein **35** (1926), 298–300.
R. Schneider, *Convex bodies: The [B]{}runn-[M]{}inkowski theory*, Encyclopedia of Mathematics and its Applications, vol. 44, Cambridge University Press, 1993. [MR ]{}[94d:52007]{}
A. Wintner, *On parallel surfaces*, Amer. J. Math. **74** (1952), 365–376. [MR ]{}[14,203c]{}
---
abstract: 'We numerically and analytically analyze the startup continuous shear rheology of heavily entangled rigid rod polymer fluids based on our self-consistent, force-level theory of anharmonic tube confinement. The approach is simplified by neglecting stress-assisted transverse barrier hopping, and irreversible relaxation proceeds solely via deformation-perturbed reptation. This process is self-consistently coupled to tube dilation and macroscopic rheological response. We predict that with increasing strain the tube strongly dilates, entanglements are lost, and reptation speeds up. As a consequence, a stress overshoot emerges not due to affine over-orientation, but rather due to strong weakening of the entanglement network. Just beyond the stress overshoot the longest relaxation time is predicted to scale as the inverse shear rate, corresponding to the emergence of a generic form of convective constraint release (CCR). Its origin is a stress-induced local force that self-consistently couples the mesoscopic tube-scale physics with macroscopic mechanical response. Tube weakening at the overshoot occurs via the same qualitative mechanism that leads to microscopic absolute yielding in the nonlinear elastic scenario analyzed in the preceding paper. The stress overshoot and emergent CCR thus correspond to an elastic-viscous crossover due to deformation-induced disentanglement, which at long times and high shear rates results in a quantitatively sensible flow stress plateau and shear thinning behavior. The predicted behavior for needles is suggested to be relevant to flexible chain melts at low Rouse Weissenberg numbers if contour length equilibration is fast. Quantitative predictions for tube dilation in the steady state flow of chain melts are favorably compared to a recent simulation. The analytic analysis presented in this paper provides new insights concerning the physical origin of our non-classical rheological predictions.'
author:
- 'Kenneth S. Schweizer'
- 'Daniel M. Sussman'
title: 'A Force-Level Theory of the Rheology of Entangled Rod and Chain Polymer Liquids. II. Perturbed Reptation, Stress Overshoot, Emergent Convective Constraint Release and Steady State Flow'
---
Introduction
============
In this companion article to the preceding paper I \[1\], we employ our force-level statistical mechanical approach to study the nonequilibrium and nonlinear *dynamical* effects relevant to the rheology of rigid rod liquids. We focus primarily on finite-rate startup continuous shear deformations in the heavily entangled limit. Of special interest for rods under all flow conditions, and for chains in slow flows where the contour length is equilibrated, is to fundamentally understand the nature of (and possible connections between) the physics of tube dilation, the stress overshoot, emergent convective constraint release (CCR), the steady state flow curve and shear thinning.
As a relevant preliminary, in section II we analyze the deformation-modified terminal relaxation process and tube survival function that quantifies disentanglement in the “nonlinear elastic” limit studied in paper I. In section III we implement for rods the full numerical theory for startup continuous shear rheology in a new simplified manner where stress-assisted transverse barrier hopping is not allowed. This simplifies the physical picture underlying our approach, and also allows analytic results (valid for the heavily entangled limit of rod liquids) to be extracted from the coupled constitutive and structural evolution equations. Thus, dissipative relaxation dynamics are *entirely* controlled by a self-consistent treatment of the coupling between reptation, the deformation-modified tube confinement field, and the macroscopic stress-strain response \[2\]. Key numerical predictions of the theory are established, and physical behavior akin to that proposed in phenomenological CCR models emerges naturally in this treatment. Sections IV and V present analytic derivations of the central numerical results obtained in section III. These allow a greatly enhanced intuitive physical understanding of the origin of our numerical results, the (sometimes surprising) interconnections of different predictions, and the conceptual similarities and differences with the Doi-Edwards (DE)\[3\] and GLaMM (Graham-Likhtman-McLeish-Milner)\[4\] models. We propose a partial synthesis of some concepts treated as distinct in phenomenological models. In section VI we briefly discuss how our rheological theory for rods may be relevant to chain polymer liquids if stretch rapidly equilibrates. A numerical application to tube swelling in the steady state, as probed in a recent primitive path (PP) analysis of chain melt simulations \[5\], is presented in Section VII. The paper concludes in Section VIII with a summary and a look towards the future.
While much of this article stands independently from its companion paper, it has been written assuming the reader is familiar with the preceding paper I \[1\], and equations from that work are cited as Eq(I.xx). As in paper I, statistical mechanical derivations previously documented in the literature are not repeated.
Perturbed Reptation and Disentanglement in the Nonlinear Elastic Regime
=======================================================================
Here we bring together our results for entangled rods, and PP chains on timescales both long and short relative to contour-length relaxation, as obtained in paper I to analyze how deformation and orientation modify the reptation-controlled terminal relaxation time and irreversible disentanglement process within the nonlinear elastic scenario \[1\]. These results are relevant to the early stages of startup shear rheology, and provide context for the remainder of this paper, which goes beyond the nonlinear elastic limit.
Rods and Contour-Length Relaxed PP Chains
-----------------------------------------
As stress or strain grows, tube confinement weakens, entanglements are lost, and thus reptation speeds up, thereby accelerating disentanglement. This change in the disentanglement time scale was not present in the nonlinear elastic limit studied in section III of paper I. Here, we present an “adiabatic” analysis of these dynamical questions; in subsequent sections we will show that this analysis is relevant to the full dynamical treatment before the emergence of flow-induced CCR.
In the quiescent state of a heavily entangled rod fluid the terminal rotational time is inversely proportional to the transverse diffusion constant \[2,6-8\] ${{D}_{\bot }}/{{D}_{\bot ,0}}\propto {{D}_{rot}}/{{D}_{rot,0}}$, and thus: $${{\tau }_{rot}}=\frac{{{L}^{2}}}{6{{D}_{\bot }}}=\frac{{{\tau }_{0}}}{36}\cdot \frac{{{D}_{\bot ,0}}}{{{D}_{\bot }}},$$ where ${{\tau }_{0}}={{L}^{2}}/{{D}_{||,0}}$ is proportional to the fast (dilute-solution-like for rods) CM longitudinal reptation time. Since here we ignore stress-assisted transverse barrier hopping as an alternate channel for relaxation, one can exploit the same simple connections between ${{\tau }_{rot}}$, ${{D}_{\bot }}$, and ${{d}_{T}}$ that apply in equilibrium and write ${{D}_{rot}}\propto {{D}_{\bot }}\propto \tau _{rot}^{-1}\propto {{({{d}_{T}}/L)}^{2}}$. Using Eqs. (I.7) and (I.20), an approximate representation of the reduction of the relaxation time is thus \[2,8\]: $$\begin{aligned}
\frac{{{\tau }_{rot}}(\rho ,S,\sigma )}{{{\tau }_{rot}}(\rho ,0,0)}&=&{{\left( \frac{{{d}_{\bot }}(\rho ,0,0)}{{{d}_{\bot }}(\rho ,S,\sigma )} \right)}^{2}} \nonumber \\
&\approx& {{\left( \sqrt{1-S}-[(\beta \sigma {{L}^{3}})/3(\rho /{{\rho }_{c}})] \right)}^{2}},\end{aligned}$$ Rod orientation and the direct force effect in the dynamic free energy (transverse tube confinement field) dilate the tube diameter, speeding up relaxation and disentanglement. The approximate equality in Eq. (2) requires that the orientational order parameter ($S$) is not close to unity and the stress ($\sigma$) is not too close to its microscopic absolute yield value \[2,8,9\], a condition where the neglect of stress-assisted barrier hopping is accurate.
The above results allow one to analytically derive how the tube survival function changes if the tube diameter slowly changes for any reason: $$\Psi (\gamma )=\exp \left( -\frac{1}{Wi}\int\limits_{0}^{\gamma }{d\gamma '{{\left( \frac{{{d}_{T}}(S(\gamma ');\sigma (\gamma '))}{{{d}_{T}}} \right)}^{2}}} \right)$$ where ${\textrm{Wi}}$ is the Weissenberg number. It is the integration through strain history of the reptation form of the relaxation time that we refer to as the “adiabatic” scenario. Recall that in equilibrium, or under the DE assumption that there is no change of the relaxation spectrum under deformation, Eq.(3) becomes a simple exponential function $${{\Psi }_{DE}}(\gamma )=\exp \left( -\gamma /Wi \right)$$ Taking into account only tube weakening due to deformation-induced orientational order (i.e., only the first contribution on the right hand side of Eq.(2), not the term stemming from the “direct force” effect), and using Eq.(I.23), we obtain: $$\begin{aligned}
\Psi (\gamma )&=&\exp \left( -\frac{1}{Wi}\int\limits_{0}^{\gamma }{d\gamma '\left( 1+\left( \gamma {{'}^{2}}/4 \right) \right)} \right) \nonumber \\
&=&\exp \left( -\frac{\gamma }{Wi}\left( 1+\frac{{{\gamma }^{2}}}{12} \right) \right)\end{aligned}$$ At large strains, disentanglement occurs faster than the exponential form above.
Based on a detailed analysis that includes orientation-induced tube softening and the direct force effect, and using an elastic stress-strain relation $\sigma ={{G}_{e}}\gamma h(\gamma )$, one can derive an approximate result for tube swelling in the nonlinear elastic limit \[7\]: $$\frac{{{d}_{T}}}{{{d}_{T}}(\gamma )}\approx 1-\frac{\gamma }{{{\gamma }_{y}}}$$ which immediately implies: $$\frac{{{\tau }_{rot}}(\gamma )}{{{\tau }_{rot,0}}}={{\left( \frac{{{d}_{T,0}}}{{{d}_{T}}(\gamma )} \right)}^{2}}\approx {{\left( 1-\frac{\gamma }{{{\gamma }_{y}}} \right)}^{2}}$$ Using Eqs. (3) and (7) one obtains: $$\Psi (\gamma )\approx \exp \left( -\frac{{{\gamma }_{y}}}{Wi}\cdot \frac{\gamma }{{{\gamma }_{y}}-\gamma } \right)$$ The inverse tube diameter decreases linearly, and the reptation-driven terminal relaxation time decreases quadratically, with strain. As a consequence, an essential singularity enters the tube survival function, corresponding to the abrupt destruction of the transverse confinement field as strain approaches its microscopic absolute yield value.
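Equation (8) follows from inserting Eq. (7) into Eq. (3): since $\tau_{rot,0}/\tau_{rot}(\gamma')=(1-\gamma'/\gamma_y)^{-2}$, the strain integral can be done in closed form. As a quick cross-check (our illustration, with arbitrarily chosen values of $\gamma_y$ and Wi), one can compare the closed form against direct quadrature:

```python
import math

def psi_numeric(gamma, gamma_y, Wi, steps=100000):
    # trapezoidal quadrature of the exponent in Eq. (3), using
    # tau_rot,0 / tau_rot(g) = (1 - g/gamma_y)**-2 from Eq. (7)
    h = gamma / steps
    f = lambda g: (1.0 - g / gamma_y)**-2
    s = 0.5 * (f(0.0) + f(gamma)) + sum(f(k * h) for k in range(1, steps))
    return math.exp(-s * h / Wi)

def psi_closed(gamma, gamma_y, Wi):
    # Eq. (8)
    return math.exp(-(gamma_y / Wi) * gamma / (gamma_y - gamma))

gy, Wi = 1.5, 10.0
for g in (0.3, 0.8, 1.2):
    assert abs(psi_numeric(g, gy, Wi) - psi_closed(g, gy, Wi)) < 1e-6
```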
We expect that these results for rods are relevant for contour-length-relaxed PP chains when reptation is the dominant mechanism of dynamic disentanglement \[7,10\].
Stretched PP Chains
-------------------
Implications of unrelaxed chain stretching (i.e., slow contour length retraction after a deformation) and the attendant tube compression for reptation-driven transport were derived in paper I. But what are the strain dependences of the longest relaxation time? Under the assumption of no stretch relaxation, recall the deGennes scaling argument in equilibrium \[11\]: $$\begin{aligned}
{{\tau}_{rep}}&\approx &{{\tau }_{0}} \frac{L_{e,contour}^{2}}{{{D}_{Rouse}}} \propto {{\left( \frac{N\sigma }{\sqrt{{{N}_{e}}}} \right)}^{2}}N \nonumber \\
&\propto& \frac{{{N}^{3}}}{{{N}_{e}}}{{\tau }_{0}}\propto {{\tau }_{0}}{{\left( \frac{{{L}_{c}}}{{{d}_{T}}} \right)}^{2}}D_{Rouse}^{-1}\end{aligned}$$ where the full PP contour length enters in the first term on the right hand side, but the fixed chemical contour length, $L_c$, enters in the final expression. Equivalently, $${{\tau }_{rep}}\propto {{\tau }_{Rouse}}{{\left( \frac{{{R}_{g}}}{{{d}_{T}}} \right)}^{2}}\propto \frac{{{N}^{3}}}{{{N}_{e}}}{{\tau }_{0}}$$ How to generalize this to a deformed anisotropic melt at short times is not clear. To proceed, we are guided by the fact that the physical contour length of a real polymer is fixed; what dominates is how much of it must crawl along a coarse-grained tube due to transverse localization. This perspective suggests: $$\begin{aligned}
{{\tau }_{rep}}(\gamma ) &\propto& {{\tau }_{0}}{{\left( \frac{{{L}_{c}}}{{{d}_{T}}} \right)}^{2}}D_{Rouse}^{-1}\propto \frac{{{N}^{2}}{{b}^{2}}}{d_{T}^{2}(\gamma )}D_{Rouse}^{-1}\nonumber \\
&\propto &{{\tau }_{rep,0}}{{\left( \frac{{{d}_{T}}}{{{d}_{T}}(\gamma )} \right)}^{2}}\end{aligned}$$ The final result is of the same form as for rods in Eq. (7): the reptation time changes only via the tube diameter, which if compressed slows down dynamic disentanglement.
Of course, it is unlikely that stretched chains do not relax their contour length on the reptation time scale. The immediate implications, though, are for the modification of the tube survival function. Using Eqs. (3) and (11) and Eq.(I.47) we obtain: $$\begin{aligned}
& \Psi (\gamma )=\exp \left( -\frac{1}{Wi}\int\limits_{0}^{\gamma }{dx\frac{{{(1+{{x}^{2}}/4)}^{{}}}}{{{(1+{{x}^{2}}/2)}^{2}}}} \right) \nonumber \\
& \quad \quad \ =\exp \left( -\frac{1}{4Wi}\left( 3\sqrt{2}\arctan (\gamma /\sqrt{2})+\frac{\gamma }{1+{{\gamma }^{2}}/2} \right) \right)\end{aligned}$$
At very low strains, Eq.(12) reduces to the exponential form of Eq.(4), as it must. At high strains, the disentanglement rate slows compared to the DE result, and in the very large strain limit dynamical tube destruction is absent. Again, the latter limit is unphysical since Eq. (12) is valid only on time scales short relative to chain stretch relaxation.
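The antiderivative quoted in Eq. (12) can be verified by differentiation, or numerically. The sketch below (our own illustrative check, not part of the theory) compares the closed-form exponent against direct quadrature of the integrand $(1+x^2/4)/(1+x^2/2)^2$:

```python
import math

def integrand(x):
    return (1 + x**2 / 4) / (1 + x**2 / 2)**2

def exponent_closed(gamma):
    # the bracketed expression in Eq. (12)
    return 0.25 * (3 * math.sqrt(2) * math.atan(gamma / math.sqrt(2))
                   + gamma / (1 + gamma**2 / 2))

def exponent_numeric(gamma, steps=100000):
    # trapezoidal quadrature of the integral in the first line of Eq. (12)
    h = gamma / steps
    s = 0.5 * (integrand(0.0) + integrand(gamma)) \
        + sum(integrand(k * h) for k in range(1, steps))
    return s * h

for gamma in (0.5, 2.0, 10.0):
    assert abs(exponent_numeric(gamma) - exponent_closed(gamma)) < 1e-6
```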
Startup Shear Rheology of Entangled Rod Fluids: Dynamical Theory and Numerical Predictions
==========================================================================================
Formulation
-----------
To fully treat startup continuous shear rheology of rod fluids within our microscopic approach requires taking into account time-dependent tube softening due to both the rod orientation and direct force effects, both of which speed up reptation and viscous disentanglement. A flow-induced disentanglement process emerges from the self-consistent connection between tube scale physics, single-rod motion, and the macroscopic stress-strain response \[2\]. All these factors are coupled via the spatially-resolved nonequilibrium dynamic free energy (i.e., the tube confinement field) which evolves with strain or time. Given that stress both dilates the tube and lowers the transverse barrier, two channels of polymer motion exist \[2,8\]: deformation-accelerated reptation and activated transverse barrier hopping. In the heavily entangled limit, we have numerically found that activated hopping is of secondary importance \[2\], and we ignore it in this work.
The mathematical realization of the above ideas involves coupled evolution equations for stress and rod orientation, and an “effective strain” concept. We adopt a constitutive equation that is formally of the DE, pure-orientational-stress form \[2\]: $$\sigma (t)={{G}_{e}}\int\limits_{-\infty }^{t}{dt'Q(E(t,t'))\frac{d}{dt'}}\Psi (t-t')$$ $$\tag{13b}
Q(x)\equiv xh(x)\approx 5x/(5+{{x}^{2}})$$ $$\tag{13c}
\Psi (t-t')=\exp \left( -\int\limits_{t'}^{t}{dt''\frac{1}{{{\tau }_{rot}}(\rho ,S(t''),\sigma (t''))}} \right)$$ where the terminal (rotational) relaxation time is given by the first equality in Eq.(2).
Closure of the above equations requires an evolution equation for the orientational order parameter, S(t). We proposed a competition between relaxation-driven orientational randomization and a mechanically-driven, rate-dependent, orientational driving force \[2\]: $$\frac{dS(t)}{dt}=\frac{-S(t)}{{{\tau }_{rot}}(\rho ,S(t),\sigma (t))}+\left( {{\left. \frac{d{{S}_{a}}}{d\gamma } \right|}_{\gamma ={{\gamma }_{eff}}}} \right)\dot{\gamma }$$ The second term introduces the idea of an effective strain, ${{\gamma }_{eff}}$. Under affine conditions, one has the classic Lodge-Meissner relation: $S(\gamma )=2\gamma {{\left( 3\sqrt{4+{{\gamma }^{2}}}-\gamma \right)}^{-1}}$ for $S>0$. We postulate that this functional form still applies beyond the affine regime, which provides the self-consistent relation: $$\gamma_{eff}(t) = \frac{3S(t)}{\sqrt{1+S(t)-2S^2(t)}},$$ $${{S}_{a}}({{\gamma }_{eff}})=S(t).$$ where ${{S}_{a}}(\gamma )$ is the rod orientation that results from an affine (step) shear strain deformation of amplitude $\gamma $. This effective strain corresponds to the affine strain needed to generate the current amount of orientational order in the system. The underlying physical idea is that as rod orientational relaxation proceeds, the amount by which further deformation orients rods depends on the current orientational order, not a hypothetical state that exists in the absence of relaxation (e.g., an affine deformed state). In the limit of very high Wi one has: $$\tau _{rot}^{-1}\ll \dot{\gamma},\quad {{\gamma }_{eff}}\to \gamma =\dot{\gamma }t$$
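The self-consistency of Eq. (15) can be checked directly: the affine (Lodge-Meissner) orientation $S_a(\gamma)=2\gamma(3\sqrt{4+\gamma^2}-\gamma)^{-1}$ and the effective strain $\gamma_{eff}(S)=3S/\sqrt{1+S-2S^2}$ are exact inverses of one another, since $1+S-2S^2=(1-S)(1+2S)$. A short numerical confirmation (illustrative only):

```python
import math

def S_affine(gamma):
    # Lodge-Meissner orientation after an affine step strain, as quoted in the text
    return 2 * gamma / (3 * math.sqrt(4 + gamma**2) - gamma)

def gamma_eff(S):
    # Eq. (15); note 1 + S - 2 S^2 = (1 - S)(1 + 2 S)
    return 3 * S / math.sqrt(1 + S - 2 * S**2)

# inverting the affine orientation recovers the strain at any amplitude
for g in (0.1, 1.0, 3.0, 10.0):
    assert abs(gamma_eff(S_affine(g)) - g) < 1e-8
```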
Equations (13)-(16), along with the required input from paper I, define the full rheological theory which is numerically solved via integration through deformation history. The tube diameter, relaxation time, orientation parameter, and orientational stress are all coupled via their connection to the nonequilibrium tube confinement field.
Numerical Predictions
---------------------
As in paper I, we consider a heavily entangled needle fluid with $\rho /{{\rho }_{c}}=1000$, where ${{\rho }_{c}}{{L}^{3}}=3\sqrt{2}$. Figures 1-4 show representative numerical predictions, computed as in our prior work \[2\] but here neglecting the transverse barrier hopping process.
The main frame of Fig.1 shows the dimensionless stress versus strain at various Wi values. A weak overshoot peak occurs at a strain of order unity, which is very close to its “absolute yield strain” analog per section III of paper I, and below the DE affine over-orientation value of ${{\gamma }_{m}}\approx \sqrt{5}$. The overshoot amplitude and strain weakly increase with Wi, and the inset shows the corresponding stress and orientational parameter at the overshoot peak. All of these quantities grow monotonically with Wi, and then saturate.
Figure 2 shows results for the tube diameter normalized by its quiescent value versus strain. Massive dilation is predicted, which is largest at the stress overshoot. The magnitude of the dilation, and the tendency for it to exhibit an overshoot, are enhanced as Wi grows. For ${\textrm{Wi}}=1000$ and the heavily entangled system studied here, the tube diameter is $\sim 10$ times larger than its equilibrium value. This is still far smaller than the needle length, and the system remains entangled. For long time transport and relaxation, this implies dynamics $\sim 100$ times faster than in equilibrium.
Beyond the stress overshoot, the tube diameter attains its steady state value quickly but not instantaneously. The inset shows the ratio of the equilibrium to steady state tube diameter, which decreases strongly as Wi grows due to enhanced tube dilation. Strict power law scaling is not predicted, but over a moderate range of Weissenberg numbers we find $d_{T,0}/d_{T,\infty} \sim {\textrm{Wi}}^{-x}$, $x\approx 0.4-0.5$.
Figure 3 shows the effective terminal relaxation time in units of its equilibrium value as a function of strain at various Wi. The dynamics are greatly enhanced due to tube dilation beginning at strains *well below* the stress overshoot, which occurs at a strain of $\sim 1-1.5$. The functional form agrees well with the parabolic law of Eq.(7) derived under the “adiabatic” simplification. Plateau values are attained just beyond the stress overshoot. This signals an emergent CCR regime which is precisely defined here by a relaxation time scaling as the inverse of the shear rate. The inset quantifies this CCR effect as a function of strain in terms of an effective Weissenberg number, ${\textrm{Wi}}_{eff} \equiv \dot{\gamma }{{\tau }_{rot}}(\gamma \to \infty )$. At high Wi, we find ${\textrm{Wi}}_{eff}\rightarrow 1-2$.
Figure 4 presents results for the steady state effective Weissenberg number, orientational order parameter, and flow stress as a function of the externally imposed Wi. These quantities follow almost identical dependences. A stable, monotonic flow curve with a near plateau, and hence sensible shear thinning behavior, is predicted. At high Wi values, the steady state flow stress is $\sim 10\%$ smaller, and $S$ is $\sim 25\%$ larger, than at the overshoot.
The overall picture that emerges from the numerical results is that tube dilation, relaxation time acceleration, emergent CCR, and the overshoot feature are *all* intimately related phenomena. Their common physical origin is the self-consistent coupling between macroscopic stress, orientation, and their local force consequences at the anharmonic tube field level. At zeroth order, our new calculations agree well with our prior numerical results \[2\] for heavily entangled rods that included stress-assisted transverse barrier hopping, thereby establishing the practical unimportance of the latter relaxation channel for heavily entangled systems.
None of the trends discussed above agree with DE theory \[3\], for which the stress overshoot is due to affine over-orientation, the non-monotonic flow curve and excessive shear thinning are unphysical, there is no tube dilation, and the reptation time is invariant to deformation. As a cautionary comment, we emphasize that our suggestion that the stress overshoot is due to emergent CCR, and not affine over-orientation, is sensitive to numerical prefactors of order unity that enter in our approach to nonlinear rheology. We are unaware of attempts to phenomenologically include CCR for entangled rod fluids. Our generic “emergent CCR” mechanism – a process whereby bulk fluid stress and flow act on the tube scale and modify single polymer dynamics – does seem roughly in the qualitative spirit of the phenomenological CCR formulations used for chain rheology. However, our theory qualitatively disagrees with the classic way of incorporating CCR as an additional distinct process which restores a sensible flow curve but is *not* the origin of the stress overshoot. Though quantitative factors matter in our approach, the present results suggest the stress overshoot may be a signature of an elastic-viscous dynamic disentanglement crossover. Even though “microscopic absolute yielding” (tube destruction) does not occur here, it is the underlying key concept as manifested by massive tube dilation.
Analytic Analysis of Transient Elastic-Viscous Crossover Regime
===============================================================
We first analytically analyze the “pre-flow” and “onset of flow” regimes, which we define as strains up to the stress overshoot. In this regime, tube dilation, growing orientation, faster reptation, and the signatures of emergent CCR all occur.
To make analytic progress, simplifications are invoked. (i) Transverse activated barrier hopping is neglected. (ii) We focus on the heavily entangled regime. (iii) The approximate, but qualitatively reliable, analytic results derived previously and summarized in paper I and above (consistent with point (ii)) are adopted. (iv) The DE-like early time picture, in which continuous startup shear is replaced by a step strain form of the constitutive equation \[2,3\], is adopted. In the end, the derived results are found to be a faithful representation – in some cases remarkably so – of the full numerical calculations for heavily entangled needle fluids presented in section III.
Simplification (iv) is achieved via an integration by parts of Eq.(13). Upon keeping only the strain- (time-) local first term, one obtains: $$\sigma (\gamma )\approx {{G}_{e}}\frac{\gamma }{1+{{\gamma }^{2}}/5}\Psi (\gamma ;S,\sigma )$$ Differentiating to find the maximum yields: $$\begin{aligned}
\frac{d}{d\gamma }\sigma (\gamma ){{|}_{{{\gamma }_{m}}}}&=&0=\left\{ \frac{1}{1+\gamma _{m}^{2}/5}-\frac{2\gamma _{m}^{2}/5}{{{\left( 1+\gamma _{m}^{2}/5 \right)}^{2}}} \right\} \nonumber \\
& -& \frac{{{\gamma }_{m}}}{1+\gamma _{m}^{2}/5}\cdot \frac{1}{Wi}{{\left( \frac{{{d}_{T}}({{\gamma }_{m}})}{{{d}_{T}}} \right)}^{2}}\end{aligned}$$ Dropping the last term recovers the classic DE overshoot strain of ${{\gamma }_{m}}=\sqrt{5}$ due to the affine over-orientation effect. Naively, this seems correct if Wi diverges, and it would be correct *if* the tube diameter did not evolve with strain. However, Eq. (18) is a self-consistent relation for the overshoot strain, and in our approach the tube diameter does swell with increasing deformation. To analyze the full Eq. (18) we first rewrite it as: $$\frac{{{d}_{T}}({{\gamma }_{m}})}{{{d}_{T}}}=\sqrt{Wi}\sqrt{\frac{5-\gamma _{m}^{2}}{{{\gamma }_{m}}\left( 5+\gamma _{m}^{2} \right)}}$$ Since we expect the overshoot strain to be of order unity, if ${\textrm{Wi}}\gg 1$ then Eq. (19) implies a large amount of tube swelling *must* occur, although the tube need not be literally destroyed to achieve flow. As we derive below, the underlying physics controlling the overshoot is deeply connected with emergent CCR.
We now use the result of the second equality in Eq. (2) in Eq. (19) to obtain: $$\sqrt{1-S({{\gamma }_{m}})}-\frac{1}{3}\frac{{{\rho }_{e}}}{\rho }\tilde{\sigma }({{\gamma }_{m}})=\frac{1}{\sqrt{Wi}}\sqrt{\frac{{{\gamma }_{m}}(5+\gamma _{m}^{2})}{\left( 5-\gamma _{m}^{2} \right)}}$$ where $\tilde{\sigma} \equiv \beta L^3\sigma$. If ${\textrm{Wi}}\gg 1$ then the right hand side is nearly zero, corresponding to massive tube dilation. As Wi grows, the microscopic absolute yield point is more closely approached, though not reached in this heavily entangled limit. Using the expressions for the orientational order parameter and stress from section III yields a nonlinear equation for $\gamma_m$, and thus $S$. This can be numerically solved, and to leading order in ${\textrm{Wi}}^{-1}$ we find: $$\gamma_m\approx 1,\quad S(\gamma_m)\approx 1/3,\quad \textrm{for}\ {\textrm{Wi}}\rightarrow\infty$$ These results are consistent with the numerical calculations in Fig. 1, including their saturation at very high Wi. The leading order shift of the overshoot strain with inverse Wi can be straightforwardly computed. We find that this correction is negative and decreases as ${\textrm{Wi}}^{-1/2}$. Thus, as ${\textrm{Wi}}\gg 1$, the overshoot strain and stress approach their asymptotic limits from below, as found numerically. Eqs. (19)-(21) imply that at the stress overshoot the tube diameter grows as a power law: $${{d}_{T}}({{\gamma }_{m}})\ \approx {{d}_{T}}\sqrt{2Wi/3}\propto {{d}_{T}}\sqrt{Wi}$$ The tube is massively swollen, with a form given by the analytic analysis.
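The leading-order overshoot results can be checked directly from the relations quoted above. The script below is purely illustrative; its functions transcribe Eq. (19) and the Lodge-Meissner form of section II.

```python
import math

def S_affine(gamma):
    # Lodge-Meissner orientation after an affine strain gamma (section II)
    return 2.0 * gamma / (3.0 * math.sqrt(4.0 + gamma**2) - gamma)

def dilation_factor_sq(gamma_m):
    # (d_T(gamma_m)/d_T)^2 divided by Wi, from Eq. (19)
    return (5.0 - gamma_m**2) / (gamma_m * (5.0 + gamma_m**2))

# Leading order, Eq. (21): gamma_m ~ 1 gives S(gamma_m) ~ 1/3 ...
assert abs(S_affine(1.0) - 1.0 / 3.0) < 0.02
# ... and (d_T(gamma_m)/d_T)^2 = (2/3) Wi, i.e. d_T(gamma_m) ~ d_T sqrt(2 Wi / 3), Eq. (22)
assert abs(dilation_factor_sq(1.0) - 2.0 / 3.0) < 1e-12
# The required dilation diverges as gamma_m -> sqrt(5): the DE value is unreachable at finite Wi
assert dilation_factor_sq(2.23) < 1e-2
```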
The numerical study found CCR-like behavior emerging just beyond the stress overshoot, the origin of which is now clear from the above analysis since $$\frac{{{\tau }_{rot}}(\sigma ,S)}{{{\tau }_{rot,0}}}={{\left( \frac{{{d}_{T,0}}}{{{d}_{T}}(\sigma ,S)} \right)}^{2}}\approx \frac{3}{2}{{\left( \frac{1}{\sqrt{Wi}} \right)}^{2}}\propto W{{i}^{-1}}$$ $${{\tau }_{rot}}({{\gamma }_{m}})\approx \frac{3}{2Wi}{{\tau }_{rot,0}}=\frac{3}{2}{{\dot{\gamma }}^{-1}}$$ Thus, the overshoot and emergence of CCR occur at essentially the same stress or strain, and are intimately related via the massive tube dilation driven by its coupling to the macroscopic stress via the direct force in the dynamic free energy of Eq. (I.13). Equation (23) shows that CCR effectively emerges smoothly from the reptation time as the tube diameter becomes flow rate dependent. These features conflict with the DE scenario \[3\]. They are also unlike the GLaMM model \[4\] for chain melts which independently accounts for the overshoot via the affine over-orientation effect and the plateau flow stress via empirical “fine tuning” \[15\] of the parameter which quantifies the CCR mechanism: $c_\nu$, the number of retraction events required to result in a tube hop of order the tube diameter \[7\].
In the pre-overshoot regime, one can derive the dependence of the reptation time on accumulated strain. Combining the above results, we find to a good approximation that the reptation time decreases as: $$\frac{{{\tau }_{rot}}(\gamma )}{{{\tau }_{rot,0}}}\approx {{\left( 1-\frac{\gamma }{{{\gamma }_{m}}} \right)}^{2}}, \quad\gamma <{{\gamma }_{m}}$$ This “parabolic law” agrees well with our numerical results and the “adiabatic” analysis in section II that led to Eq.(7). An empirical interpolation of Eqs. (24) and (25) is: $$\frac{{{\tau }_{rot}}(\gamma )}{{{\tau }_{rot,0}}}\approx {{\left( 1-\frac{\gamma }{{{\gamma }_{m}}} \right)}^{2}}+\frac{3}{2Wi}$$ This implies an effective terminal relaxation time of: $${{\tau }_{rot}}\approx {{\tau }_{rot,0}}{{\left( 1-\frac{\gamma }{{{\gamma }_{m}}} \right)}^{2}}\ +\ \frac{3}{2\dot{\gamma }}$$ The apparent additive form mirrors Marrucci’s simple CCR formula \[13\]. Here, however, the flow rate term is *not* added as an independent convective process that “sweeps away” tube constraints. Rather, reptative motion continuously accelerates due to tube softening. In essence, CCR emerges continuously as the self-consistent “end point” of tube softening due to the nonlinear feedback between the anharmonic confinement field, polymer motion, and macroscopic mechanical response.
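The parabolic law already implies strong dynamical acceleration well before the overshoot; a two-line check (with $\gamma_m \approx 1$ per Eq. (21)):

```python
def tau_ratio(gamma, gamma_m=1.0):
    """Pre-overshoot 'parabolic law', Eq. (25): tau_rot(gamma)/tau_rot,0."""
    return (1.0 - gamma / gamma_m) ** 2

# Relaxation accelerates far below the overshoot strain:
assert tau_ratio(0.5) == 0.25   # 4x faster at half the overshoot strain
assert tau_ratio(0.9) < 0.011   # ~100x faster just below it
```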
The above analysis also establishes that the simplest form of the CCR-controlled tube survival function, $\Psi (\gamma )\approx \exp \left( -2\gamma /3 \right)$, applies in the vicinity of the stress overshoot, but *not* below it. This has two important implications: (i) well below the stress overshoot is a *nonlinear-elastic-like* regime since CCR is not fully operable, and (ii) the stress overshoot is an *elastic-viscous crossover*. These points seem in qualitative contrast with the DE \[3\] and GLaMM \[4\] approaches for chain melts where CCR is “turned on” at early times and plays no role in determining the stress overshoot at low ${\textrm{Wi}}_R$.
Finally, our full dynamical analysis predicts an overshoot strain very close to the simple absolute yield estimates in section III. This is not an accident, since the dynamical calculation predicts that the overshoot (onset of stress drop) is correlated with emergent CCR and a near destruction of the tube (i.e., massive tube dilation and entanglement reduction). This is an important point – even in dynamic startup shear, the concept of a “microscopic absolute yielding” of a nonlinear elastic origin is relevant.
Analytic Analysis of the Long Time Steady State Flow Regime
===========================================================
We now analytically consider the inter-relationships between the flow curve and the steady state values of the rod orientation, tube diameter, and relaxation time. From Eq. (13), and the existence of a time-independent steady state, one has the self-consistent nonlinear relation: $${{\sigma }_{\infty }}={{G}_{e}}Q({{\gamma }_{eff,\infty }})\dot{\gamma }{{\tau }_{eff,\infty }}={{G}_{e}}Q({{\gamma }_{eff,\infty }})\ {\textrm{Wi}}_{eff,\infty }$$ where the effective steady state Weissenberg number depends on the steady state stress and orientational order parameter. The effect of orientation on this stress enters via the damping function evaluated using our effective strain concept \[2\]. To proceed, we take the steady state limit of the evolution Eq. (14): $$\frac{dS}{dt}=-\frac{1}{{{\tau }_{eff}}}S+\dot{\gamma }{{\left( \frac{\partial {{S}_{a}}}{\partial \gamma } \right)}_{\gamma \equiv {{\gamma }_{eff}}}}=0$$ which implies $${\textrm{Wi}}_{eff,\infty }={{S}_{\infty }}{{\left( \frac{\partial {{S}_{\infty }}}{\partial {{\gamma }_{eff}}} \right)}^{-1}}$$ $${{S}_{\infty }}=S({{\gamma }_{eff,\infty }})=\frac{2{{\gamma }_{eff,\infty }}}{3\sqrt{4+\gamma _{eff,\infty }^{2}}-{{\gamma }_{eff,\infty }}}$$ Using Eq. (2) and the above relations in Eq. (28) one obtains: $$\frac{{{\sigma }_{\infty }}}{{{G}_{e}}}=Q({{S}_{\infty }})\ {\textrm{Wi}}\ {{\left( \sqrt{1-{{S}_{\infty }}}-\frac{3\sqrt{2}}{5}\frac{{{\sigma }_{\infty }}}{{{G}_{e}}} \right)}^{2}}$$ This nonlinear equation for the dimensionless flow stress includes all the tube softening effects. Equation (32) is not closed since it requires the steady state polymer orientation, which depends on the steady state stress. But one can first formally solve the quadratic equation for the dimensionless flow stress to obtain: $$\begin{aligned}
\frac{{{\sigma }_{\infty }}}{{{G}_{e}}}=\frac{25}{36}\Bigg{[}&&\frac{1}{{{Q}_{\infty }}Wi}+\frac{6\sqrt{2}}{5}\sqrt{1-{{S}_{\infty }}} \\
&\pm &\left. \sqrt{{{\left( \frac{1}{{{Q}_{\infty }}Wi}+\frac{6\sqrt{2}}{5}\sqrt{1-{{S}_{\infty }}} \right)}^{2}}-\frac{72}{25}(1-{{S}_{\infty }})} \right] \nonumber\end{aligned}$$ Taking the negative root, and simplifying in the ${\textrm{Wi}} \gg 1$ limit, gives: $$\frac{{{\sigma }_{\infty }}}{{{G}_{e}}}=\frac{5\sqrt{2}}{6}\sqrt{1-{{S}_{\infty }}}-\frac{25}{36}\sqrt{\frac{12\sqrt{2(1-{{S}_{\infty }})}}{5{{Q}_{\infty }}{\textrm{Wi}}}}+\ldots$$ One sees there is an intrinsic limit as Wi diverges (flow stress plateau, perfect shear thinning) which is approached from below, as we found numerically. Substituting Eq.(34) into Eq.(29), the leading order contributions cancel and one obtains: $$\begin{aligned}
{\textrm{Wi}}_{eff}&=&{\textrm{Wi}}\ {{\left( \sqrt{1-{{S}_{\infty }}}-\frac{3\sqrt{2}}{5}\frac{{{\sigma }_{\infty }}}{{{G}_{e}}} \right)}^{2}} \\
&=&Wi{{\left( \frac{25}{36}\sqrt{\frac{12\sqrt{2(1-{{S}_{\infty }})}}{5{{Q}_{\infty }}Wi}} \right)}^{2}}=\frac{125\sqrt{2}}{108{{Q}_{\infty }}}\sqrt{1-{{S}_{\infty }}}\nonumber\end{aligned}$$ Thus, we deduce that the effective steady state Wi number is a rate-independent number. This implies the emergent CCR-like prediction: $${{\tau }_{eff}}={{\dot{\gamma }}^{-1}}\left( \frac{125\sqrt{2}}{108{{Q}_{\infty }}}\sqrt{1-{{S}_{\infty }}} \right)$$
We note that Eqs. (35) and (36) express the steady state stress and CCR relaxation time in terms of steady state orientation. An implication is that the prefactor of the emergent CCR rate can be understood explicitly as: $$\begin{aligned}
{{\tau }_{eff}}&=&{{\dot{\gamma }}^{-1}}\left( \frac{125\sqrt{2}}{108{{Q}_{\infty }}}\cdot \frac{6}{5\sqrt{2}}\cdot \frac{{{\sigma }_{\infty }}}{{{G}_{e}}} \right) \\
&=&{{\dot{\gamma }}^{-1}}\left( \frac{25}{18}\cdot \frac{{{\sigma }_{\infty }}}{{{G}_{e}}{{Q}_{\infty }}} \right)=\frac{25}{18}{{\dot{\gamma }}^{-1}}\left( \frac{{{\sigma }_{\infty }}}{{{G}_{e}}{{\gamma }_{eff,\infty }}h({{\gamma }_{eff,\infty }})} \right) \nonumber\end{aligned}$$ Thus, besides the numerical prefactor, the amplitude of the CCR time is related to what can be interpreted as the ratio of the flow stress to an effective elastic stress including the damping function, evaluated at the “effective steady state strain.” It appears that the quantitative aspects of emergent CCR are a rather intricate issue, functionally coupled to the flow stress, the bare elastic modulus, and effective strain/over-orientation. These insights may be germane to the uncertainty concerning how to precisely quantify the CCR effect in phenomenological models \[12,15\].
The prefactor in Eq. (36) and the flow stress can be explicitly determined from the steady state orientational order parameter, or equivalently from the effective strain. This calculation is done using the expression for ${\textrm{Wi}}_{eff}$ in Eq. (35), taking the derivative of $S$ with respect to the effective strain, and then simplifying. The required algebraic manipulations employ the following readily-derived relations: $${\textrm{Wi}}_{eff} =\frac{125\sqrt{2}}{108}\cdot \frac{\sqrt{1-{{S}_{\infty }}}}{{{Q}_{\infty }}}\quad \equiv \quad {{S}_{\infty }}\left( \frac{\partial {{\gamma }_{eff,\infty }}}{\partial {{S}_{\infty }}} \right)$$ $$\begin{aligned}
{{\gamma }_{eff,\infty }}&=&\frac{3{{S}_{\infty }}}{\sqrt{1+{{S}_{\infty }}-2S_{\infty }^{2}}} \\
& \Rightarrow & \left( \frac{\partial {{\gamma }_{eff}}}{\partial {{S}_{\infty }}} \right)=\frac{3\left( 1+\frac{{{S}_{\infty }}}{2} \right)}{{{\left( 1+{{S}_{\infty }}-2S_{\infty }^{2} \right)}^{3/2}}} \nonumber\end{aligned}$$ $${{Q}_{\infty }}=\frac{{{\gamma }_{eff,\infty }}}{1+\frac{1}{5}{{({{\gamma }_{eff,\infty }})}^{2}}}=\frac{3{{S}_{\infty }}\sqrt{1+{{S}_{\infty }}-2S_{\infty }^{2}}}{1+{{S}_{\infty }}-\frac{1}{5}S_{\infty }^{2}}$$ Combining the above results and simplifying, one obtains a closed equation for the steady state orientational order parameter: $$\begin{aligned}
\frac{125\sqrt{2}}{108}\sqrt{1-{{S}_{\infty }}}&&\left( 1+{{S}_{\infty }}-\frac{1}{5}S_{\infty }^{2} \right) \\
&=&9S_{\infty }^{2}\left( 1+\frac{{{S}_{\infty }}}{2} \right){{\left[ 1+{{S}_{\infty }}-2S_{\infty }^{2} \right]}^{-1}} \nonumber\end{aligned}$$ Solving this numerically yields: $${{S}_{\infty }}\approx 0.41\quad >\ \quad S({{\gamma }_{m}})\approx 0.33$$ The deduced value is sensible compared to our numerical calculations, including a value of $S$ modestly larger ($\sim 25\%$) in steady state compared to at the overshoot.
Using the result in Eq. (42) in the above analytic relations yields quantitative predictions for steady state properties in the large Wi limit. The results are: $$\begin{aligned}
& {{\gamma }_{\infty }}\approx 1.20\quad ,\quad {{Q}_{\infty }}\equiv {{\gamma }_{\infty }}h({{\gamma }_{\infty }})\approx 0.78 \\
& \frac{{{\sigma }_{\infty }}}{{{G}_{e}}}\approx 0.9\ \quad ,\quad \frac{{{d}_{T,\infty }}}{{{d}_{T}}}\approx 0.94\sqrt{Wi}\nonumber \\
& W{{i}_{eff,\infty }}\approx 1.6\quad ,\quad {{\tau }_{eff,\infty }}\approx \frac{1.6}{{\dot{\gamma }}} \nonumber\end{aligned}$$ The effective strain and $Q$-function are modestly larger than their values at the stress overshoot. The flow stress in units of the equilibrium shear modulus, and the effective Wi number, are sensibly of order unity, and modestly less than at the stress overshoot. The effective relaxation time scales as the inverse of the flow rate. The tube diameter scales as ${\textrm{Wi}}^{1/2}$, qualitatively identical to its behavior at the stress overshoot.
The above analytically derived results all agree well with the numerical calculations in section III, and thus provide a mathematically and physically clear picture of their origin. One difference might appear to be that the analytically obtained power law scaling of the steady state tube diameter does not fully agree with the numerical results in the inset of Fig.2. This is a consequence of the slow approach to the asymptotic large Wi limit, which can be understood by including the leading order correction to the tube diameter scaling in Eq. (43). Using the second equality in Eq. (2) and Eqs. (32)-(34), one obtains: $$\frac{{{d}_{T}}}{{{d}_{T,\infty }}}=\frac{1.07}{\sqrt{{\textrm{Wi}}}}-\frac{0.75}{{\textrm{Wi}}}+\ldots$$ This result captures the pre-asymptotic behavior as shown in the inset of Fig.2.
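The slow approach to the asymptote can be made explicit by extracting an apparent power law exponent from the pre-asymptotic form of Eq. (44) over a moderate Wi window, which recovers the $x\approx 0.4-0.5$ range quoted for the inset of Fig.2. This is an illustrative least-squares fit, not part of the theory.

```python
import math

def ratio(Wi):
    """Equilibrium-to-steady-state tube diameter ratio, pre-asymptotic Eq. (44)."""
    return 1.07 / math.sqrt(Wi) - 0.75 / Wi

# Least-squares slope in log-log coordinates over a moderate Wi window
wis = [30.0 * (1000.0 / 30.0) ** (i / 20.0) for i in range(21)]
xs = [math.log(w) for w in wis]
ys = [math.log(ratio(w)) for w in wis]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# Apparent exponent x in d_{T,0}/d_{T,inf} ~ Wi^{-x}: below the asymptotic 1/2
assert 0.40 < -slope < 0.50
```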
Finally, we consider the tube survival function in the steady state. Equations (43) and (26) yield: $$\Psi (\gamma )\approx \exp \left( -b\cdot \gamma \right)\ \quad ,\ \gamma \gg {{\gamma }_{m}}$$ where $b\sim 1$. This form is employed in the simplest versions of CCR in phenomenological models \[4,13\]. Again the steady state behavior differs little from what is predicted just beyond the overshoot since the overshoot is weak and CCR sharply emerges there.
Conceptually, we re-emphasize that in our approach, CCR emerges in a predictive manner beyond the characteristic time (or strain) scale associated with the stress overshoot as a consequence of our self-consistent single-polymer dynamic theory for the anharmonic tube confinement field and a rather simple rheological constitutive formulation. The intuitive heart of our calculation rests on the idea that as polymers move transversely the tube constraints nonlinearly soften, and vice-versa \[2\]. What we call the “direct force” contribution to the dynamic free energy that controls transverse polymer displacements is related to the macroscopic stress. Thus, the tube confinement field controls how polymers move, but the motion of polymers determines how the tube confinement field evolves. This appears to be a conceptually different physical picture than that embedded in the GLaMM \[4\] and other phenomenological models. Furthermore, our emergent CCR mechanism cannot be related to chain retraction since the rods are of fixed length. Instead, we have formulated a theory in which deformation, shear rate, and flow modify the dynamic confinement field implemented in a single particle microrheological spirit. This idea is general in that it transcends material type (molecules, atoms, colloids, polymers) and the physical origin of the dynamical confinement (caging in glasses, physical bonds in gels, entanglements, etc.) \[2,8,9,16-18\]. Of course, for flexible coil liquids, chain retraction may indeed trigger CCR, and the quantitative strength of tube-softening-based CCR versus chain-retraction-based CCR for these systems is an open issue in our approach.
Startup Continuous Shear Rheology of Chain Melts
=================================================
What is the connection between our analysis of rod shear rheology and that of liquids composed of contour length relaxed d-PP chains? Qualitatively, we expect all results to carry over within the IAA simplification \[3\] assuming (as in paper I) a rubber-like network picture in which the entanglement elastic modulus is not perturbed by strain \[1\]. If this is true, then the microscopic absolute yield results of paper I also carry over, and we obtain numerically sensible predictions for the nearly Wi-independent location of the stress overshoot which are close to its microscopic absolute yield estimate. Similarly, emergent CCR, and its implications for the flow curve and shear thinning, carry over. Importantly, the rod-based results for these quantities are numerically sensible for chain melts in slow flows \[12,15\], e.g., ${{\sigma }_{\infty }}\approx 0.9{{G}_{e}}$. Of course, at a fully quantitative level, differences between rods and the d-PP chain model must be present, the analysis of which is beyond the scope of this paper. For the general case where contour length equilibration in chain liquids is not assumed, the problem is more complicated since an evolution equation for chain stretch is required and both stretch and orientational degrees of freedom contribute to the stress. This is a direction for future work, but to the extent that chains recover their equilibrium contour length quickly in slow flows, one might expect some or many of the results presented in this article for rods to be qualitatively applicable.
Tube Dilation in Steady State Flowing Entangled Chain Melts
===========================================================
The above considerations of the relation of the nonlinear rheology of rods and flexible chains motivate an attempt to quantitatively compare our theory with the recent simulations \[5\] for the steady state behavior of atomistic models of polyethylene melts of moderate degree of entanglement $Z\sim 14$. Baig et al. separately determined how both the effective entanglement density and the degree of orientational order varied over a wide range of Wi in the flow state \[5\]. As noted above, the effect of stress is likely more subtle in chain systems compared to rods, and so here we explore the simplest idea that in the steady state the relation between tube diameter and $S$ that we derived in equilibrium still applies. We neglect, however, the direct stress effect on the tube diameter. For our rigid rod theory this is a radical simplification, and here it reflects the fact that our model does not uniquely specify how to translate macroscopic stresses to the PP scale for flexible chains. Clarifying this issue is a future goal.
In Eq. (I.29) we noted that the tube diameter in the d-PP model, considering only orientational ordering of the primitive paths and neglecting any contributions from stress, can be written as $$1=\frac{{{\rho }_{PP}}L_{e}^{3}}{16{{\pi }^{2}}\sqrt{2}}\,F\left( \frac{{{L}_{e}}}{{{r}_{l}}} \right)\,G$$ where the functions $G$ and $F(x)$ are discussed in paper I. Thus, given the joint probability distribution for the orientation of the PP segments (which determines $G$), one can explicitly compute the orientation-induced softening of the tube diameter.
Since the simulation study \[5\] only reported a scalar measure of the orientational distribution of PP steps – the order parameter $S$ – and not the full distribution, we employ two simple estimates of the orientational distribution. As a first estimate, we assume that the polymer orientational distribution, $\alpha ({{\vec{u}}_{j}})$, is that of a nematic fluid with a given value of $S$ \[9\]. As a second approach, we assume orientation is induced by an affine step shear deformation of amplitude $\gamma $, and take the associated degree of order to be $S=2\gamma /(3\sqrt{{{\gamma }^{2}}+4}-\gamma )$ \[9\]. The second choice reflects the intuition that orientational order in steady state shear is likely better described by a distribution with the symmetry of the deformation.
![Effective degree of entanglement as a function of orientational order parameter in the steady state. Points are the simulation results of Baig et al.\[5\]. Curves are the theoretical calculations described in the text, where the solid blue curve corresponds to the step-shear case and the dashed red curve to the Onsager-like nematic order parameter case. ](P2F5){width="0.9\linewidth"}
We plot the results of these calculations in Fig. 5 in terms of the relative reduction of entanglement density, $Z/{{Z}_{bulk}}={{({{d}_{T}}(0)/{{d}_{T}}(S))}^{2}}$, together with the simulation data. We find that both theoretical curves (with no adjustable parameters) are in reasonably good agreement with simulation. We also find that the curves are only modestly sensitive to the details of the orientational distribution used to determine the value of $S$. Since the correct steady state orientational distribution is almost certainly different from both of our simple models, we take this as an encouraging sign that our theory would also produce reasonable results if the exact orientational distribution were used. We note that a recent modification of the CCR idea within the phenomenological tube model framework \[14\], different than our work, also appears to be consistent with these simulation results \[5\].
Summary and Discussion
=======================
Our primary focus in this article has been the numerical and analytic analysis of startup continuous shear rheology based on our force-level, self-consistent, anharmonic tube theory of entangled rod polymers. Although the entropic barrier to transverse motion, and hence the tube confinement field, can be destroyed at a critical stress in our approach (“absolute microscopic yield”), this effect was neglected in order to focus on our theoretical predictions based on longitudinal reptation (perturbed by, and coupled to, macroscopic deformation and orientation) as the sole origin of terminal relaxation and dissipation. The neglect of stress-assisted transverse barrier hopping, which makes the derivation of analytic results possible, is appropriate for heavily entangled needle fluids in regimes where the fast flows do not generate enough stress to strongly decrease the dynamic free energy barrier to transverse motion.
For rigid needles, the theory predicts that with increasing strain the tube dilates, entanglements are lost, and reptation speeds up. We further predict that the stress overshoot has its origin in the massive weakening of the entanglement network, not in the affine over-orientation effect as in traditional tube model approaches. We note that this non-classical conclusion is sensitive to numerical prefactors of order unity that enter our approach. Beyond the overshoot, a form of CCR robustly emerges, corresponding to a scaling of the terminal relaxation time as the inverse shear rate. Its physical origin is mechanically-accelerated reptation via a degree of tube dilation that is close to its steady state value. The overshoot location is only weakly rate-dependent, and although the tube is not destroyed for the heavily entangled systems analyzed here, it is massively weakened via the same mechanism underlying microscopic absolute yielding in the nonlinear elastic scenario of paper I.
Thus, we are led to a qualitatively new view of the possible origin and meaning of the stress overshoot compared to existing phenomenological tube models: it arises due to strong tube dilation and is an elastic-viscous crossover due to deformation-induced disentanglement. Precisely this same mechanism leads to our prediction of a stable flow curve, including a quantitatively sensible value of the flow stress plateau and degree of shear thinning. While we have contrasted our approach with the physics of existing phenomenological CCR models, convective-constraint-release ideas have not been developed for rod systems as they lack the chain retraction mechanism underlying CCR in flexible coil melts. We expect that our mechanism of generating emergent CCR-like physics – as a consequence of external stress inducing a local force on the tube which couples the mesoscopic scale tube physics with the macroscopic rheological response – is a generic one that is neither limited to rigid rod systems nor necessarily tied to chain retraction nor other specific features of chain melts. Indeed, the same basic mechanism accounts well for a (near) stress plateau and shear thinning in colloidal glasses \[16\] and gels \[18\], and polymer glasses \[17\], within the nonlinear Langevin equation theory framework as applied to systems where relaxation and flow are controlled by stress-assisted thermally activated dynamics.
The predicted behavior for needles is argued to qualitatively apply to flexible chain melts in the low Rouse Wi regime where a PP description built on rapid contour length equilibration is plausible. In this slow flow regime, our theoretical results seem qualitatively consistent with the non-classical ideas of Wang and coworkers of deformation-induced disentanglement \[19-21\]. Our work also provides a theoretical basis for their speculative arguments \[19-21\] concerning the finite “cohesion” of the tube, here in the precise and restricted sense of *transverse* entanglement localization.
Our predictions for the connections between tube dilation and the rod orientational order parameter under affine deformation conditions (nonlinear elastic scenario \[1\]) were shown to be in reasonable accord with steady state simulations of an entangled chain polymer melt. This connection is in the spirit of our finding that a full dynamical treatment of the stress-strain response exhibits features below and near the stress overshoot which are well anticipated by the nonlinear elastic limit analysis of paper I based on global affine deformation and a deformed anharmonic tube field.
From a broad perspective, we believe that our force-level, self-consistent approach addresses the following three questions (in the context of rods and contour-length-relaxed PP chains) posed by McLeish \[12\] regarding key foundational issues for entangled polymer linear and nonlinear rheology. (i) What is the nature of the tube confinement field? (ii) Can one relax the assumption that the tube confining field is unchanged in nonlinear flow? (iii) What is the correct nonlinear physics of constraint release in strong flows?
Future experimental tests of our theory for rod rheology in startup shear should focus on heavily entangled stiff polymer systems such as F-actin, or on even more rigid systems such as microtubules or other well-dispersed model rigid rod fluids. Novel active microrheology studies of entangled F-actin solutions \[22,23\] have recently appeared, reporting many fascinating observations under interrupted startup shear conditions that contain aspects of both nonlinear step strain and continuous shear bulk rheology measurements. The experimental data have been interpreted as showing multiple non-classical tube model features, which are suggested to be qualitatively consistent with our theory. A partial list includes: entanglement network yielding at strains of order unity, tube tightening at early times followed by dilation, unusual power-law kinetics of entanglement network healing after yielding, and shear thinning. Highly non-classical behavior was also reported by the same group for entangled DNA solutions \[24\]. However, we caution that F-actin is not a rigid rod, and the mechanism for storing stress and transverse tube localization involves physical aspects of polymer semiflexibility that differ from both rigid rods and flexible chains \[25,26\]. The extension of our approach to explicitly treat such semiflexible polymers, both in the bulk and in probe microrheology situations, remains an open problem. Thus, standard bulk rheology measurements on rigid rod entangled synthetic or biopolymer systems are urgently needed to definitively test our non-classical predictions. We also encourage new simulations of needle fluids, spanning the pre-overshoot to steady-flow regimes, to test our interconnected predictions of how orientation, tube diameter, and other quantities evolve with strain and Wi.
The theoretical frontier within the context of our force-level approach is now entangled flexible chain liquids at Rouse Wi numbers exceeding unity, i.e., flow rates at which polymers strongly stretch. Here, Wang and coworkers \[19-21\] have proposed conceptually new ideas associated with an intermolecular “grip force” required for chains to stretch under deformation, and a force imbalance condition (“elastic yielding”) required before stretched chains can retract, corresponding to an effective entropic barrier to restoring contour length (near) equilibrium. A fundamental theoretical basis for these ideas does not presently exist, nor is it obvious how (or whether) they can be reconciled \[15\] with the phenomenological DE and GLaMM models. Progress on these fundamental issues is urgently needed. In the slow flow regime, recent Brownian dynamics simulations have found non-classical behavior \[27\] (although, importantly, another simulation did not \[28\]) challenging the DE idea of rapid Rouse-like retraction. In the fast flow, high Rouse Wi regime, a remarkable fractional power-law scaling of the overshoot stress and strain has been observed \[20,27\], in strong disagreement with existing phenomenological tube models. As will be reported in paper III of this series \[30\], the grip force concept has now been microscopically formulated, and a model for non-Rouse chain retraction constructed, both of which have been quantitatively confronted with experiments and simulations. Finally, by building on the advances in papers I, II, and III, we aim to construct a microscopic, force-level theory for entangled chain liquids applicable at all values of Wi that includes a self-consistent treatment of the transverse and longitudinal aspects of entanglement physics. Work is in progress in this direction and, upon completion, will be the subject of paper IV of this series.
\[1\] K. S. Schweizer and D. M. Sussman, preceding paper I.
\[2\] D. M. Sussman and K. S. Schweizer, Macromolecules 46, 5684 (2013).
\[3\] M. Doi and S. F. Edwards, The Theory of Polymer Dynamics (Oxford University Press, Oxford, 1986); M. Doi and S. F. Edwards, J. Chem. Soc. Faraday II 74, 1789, 1802, 1818 (1978); 75, 38 (1979).
\[4\] R. S. Graham, A. E. Likhtman, T. C. B. McLeish, and S. T. Milner, J. Rheol. 47, 1171 (2003).
\[5\] C. Baig, V. G. Mavrantzas, and M. Kroger, Macromolecules 43, 6886 (2010).
\[6\] D. M. Sussman and K. S. Schweizer, Phys. Rev. Lett. 107, 078102 (2011).
\[7\] D. M. Sussman and K. S. Schweizer, J. Chem. Phys. 139, 234904 (2013).
\[8\] D. M. Sussman and K. S. Schweizer, Macromolecules 45, 3270 (2012).
\[9\] D. M. Sussman and K. S. Schweizer, J. Chem. Phys. 135, 131104 (2011).
\[10\] D. M. Sussman and K. S. Schweizer, Phys. Rev. Lett. 109, 168306 (2012).
\[11\] P. G. de Gennes, Scaling Concepts in Polymer Physics (Cornell University Press, Ithaca, NY, 1979); P. G. de Gennes, J. Chem. Phys. 55, 572 (1971).
\[12\] T. C. B. McLeish, Adv. Phys. 51, 1379 (2002).
\[13\] G. Marrucci, J. Non-Newtonian Fluid Mech. 62, 279 (1996).
\[14\] G. Ianniruberto and G. Marrucci, J. Rheol. 58, 89 (2014).
\[15\] F. Snijkers, R. Pasquino, P. D. Olmsted, and D. Vlassopoulos, J. Phys. Condens. Matter 27, 473002 (2015).
\[16\] V. Kobelev and K. S. Schweizer, Phys. Rev. E 71, 021401 (2005).
\[17\] K. Chen, E. J. Saltzman, and K. S. Schweizer, Annu. Rev. Condens. Matter Phys. 1, 277 (2010).
\[18\] V. Kobelev and K. S. Schweizer, J. Chem. Phys. 123, 164903 (2005).
\[19\] S. Q. Wang, Soft Matter 11, 1454 (2015); J. Polym. Sci. Polym. Phys. 46, 2660 (2008).
\[20\] S. Q. Wang, S. Ravindranath, Y. Wang, and P. Boukany, J. Chem. Phys. 127, 064903 (2007); Y. Wang and S. Q. Wang, J. Rheol. 53, 1389 (2009).
\[21\] S. Q. Wang, Y. Wang, S. Cheng, X. Li, X. Zhu, and H. Sun, Macromolecules 46, 3147 (2013).
\[22\] T. T. Falzone and R. M. Robertson-Anderson, ACS Macro Lett. 4, 1194 (2015).
\[23\] T. T. Falzone, S. Blair, and R. M. Robertson-Anderson, Soft Matter 11, 4418 (2015).
\[24\] C. D. Chapman and R. M. Robertson-Anderson, Phys. Rev. Lett. 113, 098303 (2014).
\[25\] S. Ramanathan and D. C. Morse, Phys. Rev. E 76, 010501(R) (2007).
\[26\] C. P. Broedersz and F. C. MacKintosh, Rev. Mod. Phys. 86, 995 (2014).
\[27\] S. Ravindranath and S. Q. Wang, J. Rheol. 52, 681 (2008); P. E. Boukany, S. Q. Wang, and X. Wang, J. Rheol. 53, 617 (2009); F. Snijkers and D. Vlassopoulos, J. Rheol. 55, 1167 (2011).
\[28\] Y. Lu, L. An, S. Q. Wang, and Z.-G. Wang, Macromolecules 48, 4164 (2015); ACS Macro Lett. 2, 561 (2013); 3, 569 (2014).
\[29\] J. Cao and A. E. Likhtman, ACS Macro Lett. 4, 1376 (2015).
\[30\] K. S. Schweizer, in preparation.
---
abstract: 'We show that skyrmions on the surface of a magnetic topological insulator may experience an attractive interaction that leads to the formation of a skyrmion-skyrmion bound state. This is in contrast to the case of skyrmions in a conventional chiral ferromagnet, for which the intrinsic interaction is repulsive. The origin of skyrmion binding in our model is the molecular hybridization of topologically protected electronic orbitals associated with each skyrmion. Attraction between the skyrmions can therefore be controlled by tuning a chemical potential that populates/depopulates the lowest-energy molecular orbital. We find that the skyrmion-skyrmion bound state can be made stable, unstable, or metastable depending on the chemical potential, magnetic field, and easy-axis anisotropy of the underlying ferromagnet, resulting in a rich phase diagram. Finally, we discuss the possibility to realize this effect in a recently synthesized Cr-doped ${\qty(\mathrm{Bi}_{1-y}\mathrm{Sb}_{y})}_{2}\mathrm{Te}_3$ heterostructure.'
author:
- 'Kunal L. Tiwari, J. Lavoie, T. Pereg-Barnea, W. A. Coish'
bibliography:
- 'bib.bib'
title: 'Tunable skyrmion-skyrmion binding on the surface of a topological insulator'
---
Introduction
============
The surface of a strong three-dimensional topological insulator hosts a set of two-dimensional surface states protected by topology, provided that the surface does not break the protecting symmetries.[@2010_Hasan; @2016_Bansil; @2016_Chiu] Within the bulk gap, these states are characterized by a chiral Dirac cone dispersion. When time-reversal symmetry is broken, these surface states are no longer protected and may be gapped. In a magnetic topological insulator (MTI), magnetic moments result in broken time-reversal symmetry. The magnetic moments may couple to each other through direct or indirect \[Ruderman-Kittel-Kasuya-Yosida (RKKY)\] exchange and form an ordered state.[@2009_Liu; @2011_Zhu] These moments may also couple to the electronic subsystem through a Zeeman-like term proportional to the local magnetization. In the case of uniform out-of-plane ferromagnetic order, the magnetization gives the surface Dirac electrons a finite mass, resulting in a gap in the surface-state spectrum.
The gapped surface states can be effectively described by a massive Dirac model. Therefore, any sign change of the Dirac mass leads to localized Jackiw-Rebbi modes [@2019_Tokura; @1976_Jackiw; @1984_Jackiw]. Consequently, when the massive Dirac electrons are coupled to ferromagnetic moments, there will be a set of protected one-dimensional edge states associated with the boundary between magnetic domains with opposite magnetization. Such edge states are responsible for the quantum anomalous Hall effect,[@2013_Chang] and are thought to give rise to butterfly hysteresis in the magnetotransport properties of magnetic topological insulators.[@2015_Nakajima; @2017_Tiwari] Magnetic skyrmions, topological defects in the magnetization, give rise to similar topologically protected electronic states at the skyrmion perimeter.[@2015_Hurst]
![(a) The magnetic texture, $\m = {\qty(m_x, m_y, m_z)}^{T}$, of an isolated skyrmion, and (b), the probability density, $\abs{\psi\qty(\r)}^2$, of an electronic state bound to the skyrmion. In (a), the color scale represents $\mz$, ranging from $\mz=-1$ (blue) to $\mz =1 $ (red). Arrows indicate the in-plane component of the magnetization, ${\qty(m_x, m_y)}^{T}$. The upper inset shows $\mz$ along the dashed line. The skyrmion radius $R$, defined through Eq. , and healing length $\xi$, defined through Eq. , are the radius and width of the white ring, respectively. The magnetization plotted in (a) was determined numerically by solving Eq. using the procedure described in Appendix \[ap\_2sknumerics\] with boundary conditions corresponding to a single skyrmion. In (b) we plot the probability density of the lowest-energy in-gap electronic orbital for $\lO=R$ \[with $\lO$ defined in Eq. \]. The wavefunction, $\psi$, was determined using the procedure described in Appendix \[ap\_electronic\] with the magnetization plotted in (a). Here, $\psi$ corresponds to the $j=\flathalf$ state of the continuum model \[see Eqs. , \]. We find numerically that this state has energy $\flatfrac{E}{\Delta}=-0.614$. The probability density, $\abs{\psi}^2$, along the slice indicated by the dashed line is plotted in the top panel.\[fig\_1sk\_cmaps\]](fig1.pdf)
Magnetic skyrmions are localized topologically stable configurations of the magnetization for chiral ferromagnets.[@2013_Nagaosa; @2006_Rossler] We consider skyrmions in a planar magnetic system having an easy-axis anisotropy and in the presence of a finite applied magnetic field perpendicular to the surface (along $\zh$), both of which tend to stabilize the skyrmion phase.[@2006_Rossler] Far from a skyrmion, the magnetization, $\m$, is uniformly aligned with the applied magnetic field, $\m \parallel\zh$. At the center of a skyrmion, the magnetization is anti-aligned with the applied magnetic field, $\m
\parallel-\zh$. Across the perimeter of a skyrmion, moving radially outward, $m_z$ smoothly interpolates between these boundary conditions as the magnetization vector tilts in-plane, with the in-plane component of magnetization perpendicular to the radial direction $\hat{r}$ (for Bloch-type skyrmions) \[Fig. \[fig\_1sk\_cmaps\](a)\]. The magnetization vector $\m\qty(\r)$ at position $\r$ wraps the unit sphere as $\r$ traverses the entire plane. This wrapping is a manifestation of the skyrmion’s topological nature; the magnetization associated with a skyrmion cannot be smoothly deformed to a uniform ferromagnetic state. This fact, coupled with a ferromagnetic exchange interaction that prohibits sharp changes in the magnetization, makes skyrmions exceptionally stable. This stability is present even if the skyrmion is higher in energy than the uniform ferromagnetic phase. In Ref. \[\], Hurst *et al.* consider a topological insulator surface coupled to a planar magnet that hosts a skyrmion. They find a discrete set of protected orbitals bound to the skyrmion \[see, e.g., Fig. \[fig\_1sk\_cmaps\](b)\]. Similar electronic states have also been investigated in related systems.[@2013_Ferreira; @2015_Uchoa]
In this paper, we consider the interaction between a pair of skyrmions on the surface of a MTI. In the absence of the Dirac surface states, a pair of skyrmions in a chiral ferromagnet experiences a mutually repulsive interaction at all distances.[@2013_Lin] This repulsive interaction decays exponentially with the inter-skyrmion separation over the magnetic healing length, $\xi$. The healing length is controlled by the applied magnetic field and easy-axis anisotropy. Once the Dirac electronic system is coupled to the magnetic system, skyrmion-bound electronic states form. Their wave functions overlap and hybridize when two skyrmions are close. If an electron occupies the lowest-lying hybridized electronic state, there will be an attractive contribution to the skyrmion-skyrmion interaction from the associated molecular binding energy. This attractive interaction also decays exponentially with separation and is governed independently by the skyrmion-bound orbital decay length. Occupation of the lowest-energy electronic orbital, and hence the attractive interaction, can be controlled by coupling the electronic system to a reservoir and tuning its chemical potential. Frustrated magnetic systems, with competing antiferromagnetic and ferromagnetic couplings, may show an oscillating repulsive/attractive interaction between skyrmions as a function of distance, even in the absence of an electronic system.[@2015_Leonov; @2017_Kharkov] In this work, in contrast, our focus is on chiral ferromagnetic systems, where the magnetic interaction is purely repulsive.
A central result of this work is a zero-temperature skyrmion-skyrmion binding phase diagram, displaying the stability of bound skyrmion pairs as a function of the electrons’ chemical potential and magnetic healing length. The healing length may be controlled through the applied magnetic field and the easy-axis anisotropy. We also extend these results to low, but finite, temperature and establish concrete conditions to realize this effect experimentally. Understanding and controlling magnetic skyrmion binding through the electronic system may be important for both classical skyrmionic devices[@2017_Fert] and for possible qubit implementations.[@2013_Ferreira]
The remainder of this paper is structured as follows. In Section \[sec\_msystem\], we review the theory of individual magnetic skyrmions. Evaluating the magnetic free energy of a system containing two skyrmions as a function of skyrmion-skyrmion separation reveals a short-range repulsive interaction. In Section \[sec\_esystem\], we consider the electronic subsystem. Topologically protected electron orbitals bound to single skyrmions hybridize to form molecular orbitals in a two-skyrmion system. We determine the grand potential of this system in contact with an electronic reservoir as a function of skyrmion-skyrmion separation and conclude that the electronic system gives rise to an attractive interaction. In Section \[sec\_phasediagram\], we analyze the total skyrmion-skyrmion interaction as a function of tunable parameters, finding a phase diagram for skyrmion-skyrmion binding at zero temperature, and then addressing the case of low, but finite, temperature. Finally, in Section \[sec\_considerations\], we discuss the possibility to realize this effect in $\textrm{Cr}_{x}{(\textrm{Bi}_{1-y}\textrm{Sb}_{y})}_{2-x}\textrm{Te}_{3} /
{(\textrm{Bi}_{1-y}\textrm{Sb}_{y})}_{2}\textrm{Te}_{3}$ and justify the approximations we have made in the context of this material.
Short-range magnetic repulsion {#sec_msystem}
==============================
In this section, we consider the magnetic subsystem in isolation to determine the range and character of the intrinsic repulsive skyrmion-skyrmion interaction. Starting from a standard free energy for a planar chiral magnet, with the addition of an easy-axis anisotropy, we determine the single-skyrmion magnetization. The skyrmion is then characterized by two length scales: the radius, $R$, and the healing length, $\xi$ \[see Fig. \[fig\_1sk\_cmaps\](a)\]. Next, we numerically determine the magnetic free energy of a system of two skyrmions as a function of inter-skyrmion separation. This analysis reveals that the magnetic repulsive interaction decays exponentially with inter-skyrmion separation over the healing length, $\xi$. If $\xi$ is sufficiently short, the repulsion will be overcome by a longer-range attractive interaction due to the electronic subsystem.
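The interplay just sketched, a short-range Bessel-tail repulsion competing with a longer-range exponential attraction from an occupied molecular orbital (Sec. \[sec\_esystem\]), can be illustrated with a toy interaction. The amplitudes, decay lengths, and occupation below are illustrative assumptions, not values computed in this paper; only the functional forms of the two tails are taken from the text.

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

def total_interaction(x, F0=1.0, xi=1.0, eps0=0.5, lam=2.0):
    """Toy skyrmion-skyrmion interaction (illustrative units):
    magnetic repulsion ~ F0*K1(x/xi) plus an electronic attraction
    ~ -eps0*exp(-x/lam) from an occupied molecular orbital."""
    return F0 * kv(1, x / xi) - eps0 * np.exp(-x / lam)

x = np.linspace(0.5, 20.0, 2000)
U = total_interaction(x)
i = np.argmin(U)
# A bound pair corresponds to a local minimum at finite separation with
# U below the large-separation limit (U -> 0 as x -> infinity).
has_bound_state = 0 < i < len(x) - 1 and U[i] < 0
print(f"minimum at x = {x[i]:.2f}, U_min = {U[i]:.3f}, bound: {has_bound_state}")
```

Because the attraction here decays over `lam > xi`, it always wins at large separation, producing a minimum at finite $x$; shrinking `lam` below `xi` removes the bound state.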
The standard free energy for a two-dimensional chiral magnet stabilizing skyrmions includes the ferromagnetic exchange interaction, the Dzyaloshinskii-Moriya interaction (DMI), and a perpendicular applied magnetic field.[@2013_Nagaosa] We consider this magnetic free energy with an additional term accounting for an easy-axis anisotropy, which may stabilize skyrmions in the absence of an applied magnetic field. The free energy density, $f$, and total magnetic free energy, $F_M$, are $$\begin{aligned}
f &= \frac{J}{2}{\qty(\grad\m)}^{2} + D\m\cdot\qty(\grad\times\m) -B \mz -
K\mz^{2},\label{eq_magf_bare}\\
\FM &= \int \dd[2]{\r}f\label{eq_FMtotal}.\end{aligned}$$ Here, $J>0$ is proportional to the exchange constant, $D$ arises from the DMI,[@1958_Dzyaloshinsky; @1960_Moriya] $B$ is proportional to the out-of-plane applied magnetic field, and $K$ is the easy-axis anisotropy. The magnetic system has a natural inverse length scale $\kappa$, and a natural energy scale $B_s$: $$\begin{aligned}
\kappa &= \frac{D}{J}, \\
\Bs &= \frac{D^2}{J}.\end{aligned}$$
We neglect fluctuations in $\abs{\m}$, which may be penalized by terms quadratic and quartic in $\abs{\m}$ (not explicitly included here). Without loss of generality, we set $\abs{\m}=1$. For $D=0$ and $B>0$, the ground state of the magnetic system is uniform, with $\m = \zh$ throughout the plane. For finite $D$ and sufficiently large $B$ and/or $K$, skyrmions may exist as locally stable features in the magnetization.[@2006_Rossler] The presence of a skyrmion in the ferromagnetic background may either increase or decrease the magnetic free energy, depending on $B$ and $K$. A discussion of the phase diagram of chiral magnets governed by Eq. is given in Refs. \[\].
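For concreteness, the free energy of Eqs. can be evaluated on a grid with finite differences. This is a minimal sketch, not the authors' numerical scheme: the grid size and parameter values are illustrative, and `np.gradient` falls back to one-sided differences at the grid edges.

```python
import numpy as np

def magnetic_free_energy(m, dx, J=1.0, D=1.0, B=0.1, K=0.9):
    """F_M = sum dx^2 [ J/2 (grad m)^2 + D m.(curl m) - B m_z - K m_z^2 ]
    for a unit-magnetization field m of shape (3, Nx, Ny)."""
    mx, my, mz = m
    ddx = lambda a: np.gradient(a, dx, axis=0)  # d/dx, central differences
    ddy = lambda a: np.gradient(a, dx, axis=1)  # d/dy (one-sided at edges)
    exchange = 0.5 * J * sum(ddx(c)**2 + ddy(c)**2 for c in (mx, my, mz))
    # m.(curl m) for a planar film with no z dependence:
    dmi = D * (mx * ddy(mz) - my * ddx(mz) + mz * (ddx(my) - ddy(mx)))
    return float(np.sum(exchange + dmi - B * mz - K * mz**2)) * dx**2

# Sanity check: for the uniform state m = z-hat, all gradients vanish and
# F_M reduces to -(B + K) * area.
N, dx = 64, 0.25
m_uniform = np.zeros((3, N, N)); m_uniform[2] = 1.0
F_fm = magnetic_free_energy(m_uniform, dx)
print(F_fm)  # -(0.1 + 0.9) * (64 * 0.25)**2 = -256.0
```

A relaxation scheme for skyrmion textures would minimize this functional over unit vectors $\m$; the uniform-state check above only verifies the discretized energy density.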
![Skyrmion stability phase diagram in the $K,B$ plane. (a) For small $K$ and $B$, the magnetic system is unstable toward a spin-spiral phase. Above this phase boundary (black line), isolated skyrmions minimize the magnetic free energy. Colored dots indicate the values of $B,K$ used to perform numerical calculations in the stable phase for Figs. \[fig\_FMofX\],\[fig\_hybridization\] below (see Table \[table\_nparams\]). The phase boundary was determined using the procedure described in Appendix \[ap\_stability\]. In (b) and (c), we plot the healing length, $\xi$, (blue), the skyrmion radius, $R$, (red), and the ratio $\xi/R$ (black) as a function of applied magnetic field (b) and anisotropy (c) for the slices indicated in (a) with a dashed and dotted line, respectively. The ratio $\xi/R$, which must be small for a skyrmion bound state to form, is minimized for small applied field and anisotropy approaching the phase boundary.[]{data-label="fig_magnetic_parameters"}](fig2.pdf)
For sufficiently large $B$ and/or $K$, a magnetic skyrmion minimizes the magnetic free energy, i.e., it solves $$\begin{aligned}
\fdv{\FM}{\m}=&0\label{eq_FMfdv},\end{aligned}$$ with the boundary conditions $\m\qty(r=0) = -\zh$ and $\m{\qty(r\rightarrow\infty)} = \zh$ for positive $B$. These boundary conditions ensure that the skyrmion is centered at $r=0$, and that the magnetization far from the skyrmion approaches the value it would have in the uniform ferromagnetic phase. The magnetization profile of the skyrmion may be parameterized as $$\begin{aligned}
\m\qty(\r)=
\mqty(
\sin\qty[\Theta\qty(r)]\cos\qty(W\phi+\phi_0)\\
\sin\qty[\Theta\qty(r)]\sin\qty(W\phi+\phi_0)\\
\cos\qty[\Theta\qty(r)]),\end{aligned}$$ where $W$ is the winding number of the skyrmion, and $\phi_0$ determines its chirality. For $D>0$, the specific form of the DMI considered in Eq. stabilizes spiral-like (Bloch) skyrmions, with a definite chirality, $\phi_0=+\flatfrac{\pi}{2}$,[^1] and a definite winding number, $W=+1$ (see, e.g., Ref. ).[^2] An individual skyrmion satisfies the equation $$\begin{aligned}
\fdv{\FM}{\Theta} = 0, \label{eq_1skODE}\end{aligned}$$ with the boundary conditions $\Theta\qty(r=0)=\pi$ and $\Theta\qty(r\rightarrow\infty)=0$. Away from the skyrmion core, for $\Theta \ll 1$ and $\kappa r \gg 1$, Eq. is solved asymptotically by $$\begin{aligned}
\Theta\qty(r) \propto \BesselK{1}{\frac{r}{\xi}}\label{eq_thetaasymptotic},\end{aligned}$$ where $\mathcal{K}_n$ is the modified Bessel function of the second kind. The healing length, $$\begin{aligned}
\xi = \frac{1}{\kappa}\sqrt{\frac{\Bs}{B+2K}}\label{eq_xidef},\end{aligned}$$ sets the scale for decay of the in-plane magnetization far from the skyrmion. The asymptotic solution given in Eq. can be further simplified to $\Theta\qty(r)\propto\sqrt{\flatfrac{\xi}{r}}\exp(-\flatfrac{r}{\xi})$ in the limit $r\gg \xi$. The skyrmion radius, $R$, may be found numerically by inverting $$\begin{aligned}
\Theta\qty(R) = \frac{\pi}{2}\label{eq_Rdef}\end{aligned}$$ for $\Theta\qty(r)$ satisfying Eq. . At the skyrmion radius, $r=R$, the magnetization is entirely in-plane. The magnetic texture of an isolated skyrmion is plotted in Fig. \[fig\_1sk\_cmaps\](a).
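The healing length of Eq. and the large-$r$ simplification of the $\mathcal{K}_1$ tail can be checked directly; a sketch (in units where $\kappa=1$, with $B$ and $K$ in units of $B_s$, matching the first row of Table \[table\_nparams\]):

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

def healing_length(B, K, J=1.0, D=1.0):
    """xi = (1/kappa) * sqrt(Bs / (B + 2K)), kappa = D/J, Bs = D**2/J."""
    kappa, Bs = D / J, D**2 / J
    return np.sqrt(Bs / (B + 2 * K)) / kappa

xi = healing_length(B=0.1, K=0.9)  # B, K in units of Bs
print(round(xi, 4))                # 1/sqrt(1.9) = 0.7255, cf. Table entry 0.73

# Far tail: K1(r/xi) ~ sqrt(pi*xi/(2r)) * exp(-r/xi) for r >> xi,
# i.e. Theta(r) decays as sqrt(xi/r) exp(-r/xi) up to a constant.
r = 10 * xi
exact = kv(1, r / xi)
approx = np.sqrt(np.pi * xi / (2 * r)) * np.exp(-r / xi)
assert abs(exact / approx - 1) < 0.05  # agree to ~5% already at r = 10 xi
```

The computed $\kappa\xi = 0.7255$ is consistent with the value $0.73$ quoted for $B/B_s=0.1$, $K/B_s=0.9$ in Table \[table\_nparams\].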
Although Eq. has a skyrmion solution for any finite $B$ or $K$, this solution does not describe a minimum in the free energy for sufficiently small $B$ and $K$. Below a critical line in the $B,K$ plane, skyrmions are unstable to a transition towards a spin-spiral phase.[@2017_Han] We numerically determine this phase boundary, plotted in Fig. \[fig\_magnetic\_parameters\](a) (see also Table \[table\_nparams\]) through the procedure described in Appendices \[ap\_2sknumerics\] and \[ap\_stability\]. Above this phase boundary, localized skyrmions exist as stable minima of the magnetic free energy. A more extensive review of the theory of magnetic skyrmions is given in Refs. \[\].
![The dependence of the magnetic free energy, $\FM$, on skyrmion-skyrmion separation, $x$, for a pair of skyrmions. The magnetic free energy was determined numerically by pinning the position of the skyrmions and allowing the remaining magnetic degrees of freedom to relax (see Appendix \[ap\_2sknumerics\] for details). We compare $\FM(x)$ for several values of the applied field, $B$, and anisotropy, $K$ (colored dots). The single-skyrmion length scales for each $\qty(B,K)$ as well as the color scheme are presented in Table \[table\_nparams\]. The magnetic free energy relative to its large-separation limit, $\FM\qty(x)-\FM(\infty)$, is fit with the function $F_0\,\BesselK{1}{\flatfrac{x}{\xi}}$ (dashed line), and the proportionality factor $F_0$ is determined independently for each data set. The healing length, $\xi$, is set by Eq. . Inset: the magnetization profile for $B/B_s=0.1$, $K/B_s=0.9$ with $x/\xi=9.12$.[]{data-label="fig_FMofX"}](fig3.pdf)
We now consider a system of two skyrmions separated by a distance $x$. Our goal is to establish that the intrinsic inter-skyrmion interaction is repulsive and short-range, with length scale $\xi$. This form of the intrinsic interaction is expected based on the following reasoning. Far from the skyrmion core, the magnetization is mostly aligned with the applied field save for a small in-plane component. Two skyrmions repel each other because they favor opposing in-plane magnetization in the region between them.[@2013_Lin] The magnitude of this in-plane component is determined by the length scale $\xi$. We therefore expect a short-range repulsive interaction between skyrmions with length scale $\xi$.
To verify this picture, we treat the problem numerically on a lattice, in a manner similar to Ref. \[\]. We prepare an initial magnetic configuration with two skyrmions separated by a specified distance. We then freeze a magnetic moment within each skyrmion core and minimize the magnetic free energy with respect to variations in the remaining moments. Given this relaxed magnetization, we calculate the magnetic free energy and establish the final inter-skyrmion separation, $x$, for the relaxed configuration. By repeating this calculation with different initial inter-skyrmion separations, we determine the magnetic free energy as a function of $x$. Further details are given in Appendix \[ap\_2sknumerics\]. The results of this calculation are given in Fig. \[fig\_FMofX\]. We find that the intrinsic magnetic inter-skyrmion interaction is well fit by a modified Bessel function (of first order and second kind) with the separation, $x$, scaled by $\xi$, consistent with Eq. . This agreement holds over several orders of magnitude of interaction strength. For sufficiently large inter-skyrmion separations, errors in the numerical calculation of $\FM(x)$ become comparable to the interaction energy \[$\FM(x)-\FM(\infty)$\], and we can no longer make numerical predictions. These results are entirely consistent with the analogous calculation presented in Ref. \[\], where it was shown that the magnetic healing length determines the range of the repulsive skyrmion-skyrmion interaction for $K=0$.
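The one-parameter fit just described can be sketched as follows. Since the relaxed lattice data are not reproduced here, the sketch fits synthetic data generated from the same $\mathcal{K}_1$ form with added noise; in practice, $x$ and $\FM(x)-\FM(\infty)$ come from the relaxation procedure above, and the noise level is an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import kv

xi = 0.7255  # healing length for B/Bs = 0.1, K/Bs = 0.9 (kappa = 1 units)

def repulsion(x, F0):
    # One-parameter model: F_M(x) - F_M(inf) = F0 * K1(x / xi)
    return F0 * kv(1, x / xi)

# Synthetic stand-in for the relaxed two-skyrmion free energies, with 2%
# multiplicative noise; true amplitude F0 = 3.0 is arbitrary.
rng = np.random.default_rng(0)
x = np.linspace(4 * xi, 12 * xi, 20)
dF = repulsion(x, 3.0) * (1 + 0.02 * rng.standard_normal(x.size))

(F0_fit,), cov = curve_fit(repulsion, x, dF, p0=[1.0])
assert abs(F0_fit - 3.0) / 3.0 < 0.1  # recovers F0 to within ~10%
```

Because the model is linear in $F_0$, the fit is a simple least-squares amplitude estimate; the healing length itself is fixed by Eq. rather than fitted.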
In the analysis given above, we have neglected the long-range magnetic dipolar interaction. This can be justified for skyrmions with a sufficiently small radius $R$ (reducing the total moment associated with each skyrmion) and for a moderate separation $x$. In Sec. \[sec\_considerations\], we provide a more detailed justification of this approximation in the context of a candidate material, $\textrm{Cr}_{x}{(\textrm{Bi}_{1-y}\textrm{Sb}_{y})}_{2-x}\textrm{Te}_{3} /
{(\textrm{Bi}_{1-y}\textrm{Sb}_{y})}_{2}\textrm{Te}_{3}$.
------- ------------- ------------ --------- ---------
Color $\kappa\xi$ $\kappa R$ $B/\Bs$ $K/\Bs$
0.73 2.85 0.1 0.9
0.69 2.35 0.1 1.0
0.66 1.94 0.1 1.1
0.71 1.84 0.2 0.9
0.81 2.14 0.25 0.64
0.89 2.14 0.35 0.45
------- ------------- ------------ --------- ---------
: Magnetic-free-energy parameters and single-skyrmion length scales for the numerical results used throughout this work. The single-skyrmion length scales, $\xi$ and $R$, are determined from $B$ and $K$ through Eqs. and . We choose $B$ and $K$ within the stable region of the skyrmion phase diagram \[see Fig. \[fig\_magnetic\_parameters\](a)\] to tune the ratio $\xi/R$. We set $\lambda_0=R$ when calculating the electronic states to ensure that the skyrmions host discrete sets of orbitals.[]{data-label="table_nparams"}
Attractive interaction {#sec_esystem}
======================
This section addresses the attractive interaction effected by the electronic system. First, we consider the electronic structure of the MTI surface resulting from the hybridization of single-skyrmion orbitals. Next, we integrate out the electronic subsystem, accounting for an electronic reservoir at fixed chemical potential, to determine the grand potential as a function of skyrmion-skyrmion separation. This analysis leads us to conclude that the MTI surface states can give rise to an attractive skyrmion-skyrmion interaction.
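The logic of integrating out the electrons at fixed chemical potential can be illustrated with a two-level toy model. Only the single-orbital energy $E_0/\Delta\approx-0.614$ is taken from the single-skyrmion numerics quoted in Fig. \[fig\_1sk\_cmaps\]; the splitting amplitude, orbital decay length, chemical potential, and temperature below are illustrative assumptions.

```python
import numpy as np

kBT = 0.02   # temperature in units of Delta (illustrative)
mu = -0.70   # chemical potential in the gap, below the bonding level

def molecular_levels(x, E0=-0.614, t0=0.5, lam=1.0):
    """Bonding/antibonding pair from hybridizing two skyrmion-bound
    orbitals; the tunnel splitting decays over the orbital decay length."""
    t = t0 * np.exp(-x / lam)
    return E0 - t, E0 + t

def grand_potential(x):
    """Omega = -kT * sum_n ln(1 + exp(-(E_n - mu)/kT)) over the two
    in-gap molecular orbitals (continuum states omitted for brevity)."""
    Eb, Ea = molecular_levels(x)
    return sum(-kBT * np.log1p(np.exp(-(E - mu) / kBT)) for E in (Eb, Ea))

x = np.linspace(0.5, 10, 500)
Omega = grand_potential(x)
# Omega rises with separation: pulling the skyrmions apart collapses the
# splitting and pushes the occupied bonding level above mu, so the
# electrons mediate an attraction.
assert Omega[-1] > Omega[0]
```

With $\mu$ between the bonding and antibonding levels at short range, the bonding orbital is occupied only when the skyrmions are close, which is the origin of the attractive contribution; raising $\mu$ above both levels, or lowering it below both, switches the attraction off.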
In the absence of a magnetic subsystem, a three-dimensional topological insulator surface is characterized by an odd number of spin-orbit-coupled Dirac cones. We consider a single Dirac cone centered at $\k = 0$, which is coupled to the magnetic system through a Zeeman-like term proportional to the local magnetization: $$\begin{gathered}
H_\mathrm{MTI}= \hbar v\sum_{\bf k}c_{\bf k}^\dagger {\qty(\k\times\bsigma)}_{z} c_{\bf k} \\
-\Delta\int d^2 {\bf r} \psi_{\bf r}^\dagger \m\qty(\r)\cdot\bsigma \psi_{\bf r},
\label{eq_MTIHamiltonian}\end{gathered}$$ where $c_{\bf k}$ is a spinor of electronic annihilation operators in momentum space and $\psi_{\bf r} = {1\over \sqrt A} \sum_{\bf k} \exp(i{\bf k\cdot r }) c_{\bf k}$ is its real-space counterpart, defined with the system’s area $A$. Furthermore, $v$ is the Fermi velocity, and $\Delta$ is proportional to the exchange interaction between the surface electrons and the magnetic system. These parameters imply a natural length scale: $$\begin{aligned}
\lO = \frac{\hbar v}{\Delta},\label{eq_l0defn}\end{aligned}$$ and a natural energy scale, $\Delta$.
The MTI surface Hamiltonian, Eq. , neglects the magnetic vector potential. This is justified if the $U(1)$ phase acquired by electrons in the skyrmion-bound orbitals is small, i.e., the flux through the skyrmion area is smaller than the flux quantum. This condition can be written as $R^2\tilde{B} \ll
\Phi_0$ where $\tilde{B}\propto B$ is the dimensionful applied magnetic field. For a skyrmion of radius $R=50$ nm, this condition is satisfied for $\tilde{B} \ll 1$ T. Equation also neglects the direct Zeeman coupling of the surface electrons to the applied magnetic field. This is justified if the Zeeman-like exchange coupling to the magnetization is large compared to the Zeeman energy. In Sec. \[sec\_considerations\], we show that this limit is experimentally relevant.
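The quoted flux condition is simple arithmetic and can be checked directly with $\Phi_0=h/e$; including the geometric factor $\pi$ for a disk changes the number by about a factor of three but not the conclusion.

```python
import math

h = 6.62607015e-34   # Planck constant (J s)
e = 1.602176634e-19  # elementary charge (C)
Phi0 = h / e         # flux quantum h/e, about 4.14e-15 Wb

R = 50e-9            # skyrmion radius (m)
B = 1.0              # applied field (T)
ratio = R**2 * B / Phi0  # the condition in the text: R^2 * B << Phi0
print(ratio)         # ~0.6: of order unity at 1 T, so B must be << 1 T
```

At $\tilde{B}=1$ T the ratio is already of order one, confirming that the vector-potential approximation requires fields well below a tesla for $R=50$ nm skyrmions.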
For a uniform magnetization and finite $\Delta$, $\mz$ gives the surface Dirac electrons a finite mass, while the in-plane components simply shift the Dirac point from $\k=0$. The resulting massive Dirac cone is characterized by a finite Berry curvature, with sign determined by the sign of $m_z$. Thus, gapless states will be found where $m_z$ changes sign.[@1976_Jackiw; @1984_Jackiw] These states are given below as the eigenstates of Eq. with a magnetization given by the solution of Eq. . Since the skyrmion magnetization has rotational symmetry, the energy eigenstates are a product of angular and radial functions: $$\begin{aligned}
\braket{\r}{\psi_{j}} = \mqty(e^{i\qty(j-\half)\phi} & 0 \\
0& e^{i\qty(j+\half)\phi})\boldsymbol{\chi}_j\qty(r)\label{eq_totalwavefn},
\end{aligned}$$ where $j$ is the half-integer total-angular-momentum quantum number, and $\boldsymbol{\chi}_j\qty(r)= \qty(\chi_j^\uparrow\qty(r),\chi_j^\downarrow\qty(r))^{T}$ is the radial wave function. If the in-plane component of the magnetization is divergenceless, as in the case of Bloch skyrmions, we may work in a gauge where it does not enter the radial wave equation:[@2015_Hurst] $$\begin{aligned}
\mqty(-\mz\qty(r)& -\lO\dv{r}-\lO\frac{j+\half}{r} \\
\lO\dv{r}-\lO\frac{j-\half}{r} & \mz\qty(r))
\boldsymbol{\chi}_j\qty(r)
=\frac{E_{j}}{\Delta}
\boldsymbol{\chi}_j\qty(r).
\label{eq_radial_wave_eqn}\end{aligned}$$ For $r-R\gg \xi$, the magnetization approaches $\zh$ and the asymptotic behavior of the bound-state wavefunction is $$\begin{aligned}
\boldsymbol{\chi}_j\qty(r)
\sim
\mqty(\sqrt{1-\frac{E_j}{\Delta}}\BesselK{j-\half}{\frac{r}{\lambda_j}}\\
\sqrt{1+\frac{E_j}{\Delta}}\BesselK{j+\half}{\frac{r }{\lambda_j}}),\label{eq_1skasymptotic}\end{aligned}$$ where $$\begin{aligned}
\lambda_j = \frac{\lO}{\sqrt{1-{\qty(\frac{E_j}{\Delta})}^2}}.\label{eq_lambdadefn}\end{aligned}$$
For $R \gtrsim \lO$ there is a discrete set of states within the Dirac mass gap with $\braket{\r}{\psi_j}$ localized to the skyrmion. Away from the skyrmion, these wavefunctions decay with length scale $\lambda_j$. Hurst [*et al.*]{} (Ref. ) provide an explicit solution for the wavefunctions in the limit $\xi=0$ as well as a transcendental equation for the electronic spectrum as a function of $R$ (similar solutions are also given for related models in Refs. ). We determine the bound states exactly on a lattice, using the approach described in Appendix \[ap\_electronic\], for the full single-skyrmion magnetization $\m^\mathrm{sk}\qty(\r)$ obtained from the solution of Eq. . In Fig. \[fig\_1sk\_spectrum\], we plot the spectrum from Ref. \[\] as a function of $R$ (black lines) together with the numerically determined spectrum for a skyrmion with $\xi=0.256\,R$ (blue dots).
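As a concrete illustration, the decay length defined above can be evaluated numerically. The sketch below (Python with `scipy`; the parameter values $\Delta=10$ meV and $\hbar v=500$ meV$\cdot$nm are the estimates quoted in Sec. \[sec\_considerations\], and the function names are ours) computes $\lambda_j$ and the asymptotic two-component envelope:

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind, K_nu

# Assumed parameters (the estimates of Sec. [sec_considerations]):
Delta = 10.0            # exchange gap, meV
l0 = 500.0 / Delta      # l0 = hbar*v / Delta = 50 nm for hbar*v = 500 meV*nm

def decay_length(E_j):
    """Decay length lambda_j (nm) of an in-gap bound state with energy E_j (meV)."""
    return l0 / np.sqrt(1.0 - (E_j / Delta) ** 2)

def chi_asymptotic(r, E_j, j):
    """Unnormalized two-component radial envelope, valid for r - R >> xi."""
    lam = decay_length(E_j)
    up = np.sqrt(1.0 - E_j / Delta) * kv(j - 0.5, r / lam)
    dn = np.sqrt(1.0 + E_j / Delta) * kv(j + 0.5, r / lam)
    return up, dn

# A mid-gap state decays on the scale l0; lambda_j diverges at the gap edge.
lam_mid = decay_length(0.0)    # 50 nm
lam_06 = decay_length(-6.0)    # 62.5 nm for E_j/Delta = -0.6
```

For $E_j/\Delta=-0.6$, the value found for the lowest orbital at $R=\lO$, this gives $\lambda=62.5$ nm, consistent with the $\l\approx60$ nm used in Sec. \[sec\_considerations\].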
![Dependence of the in-gap spectrum on skyrmion radius. The spectrum given in Ref. \[\] for $\xi=0$ (black lines) is plotted with the numerically determined spectrum for $B/\Bs=0.1$ and $K/\Bs=0.9$ (blue dots). For $R/\lO\sim1$, the skyrmion hosts a discrete set of bound orbitals, while for $R\gg\lO$ the skyrmion hosts a linearly dispersing continuum of chiral edge states.[]{data-label="fig_1sk_spectrum"}](fig4.pdf)
In a system hosting two skyrmions, the single-skyrmion orbitals will overlap and hybridize to form molecular orbitals. We focus on the lowest-energy electronic state with $j=1/2$ in each skyrmion and construct a two-level Hamiltonian to describe the orbital hybridization: $$\begin{aligned}
H^\textrm{Mol} = \sum_{i=1,2}E_0 \cid\ci + t\qty( c_1^\dagger c_2 + c_2^\dagger
c_1)\label{eq_hopping},\end{aligned}$$ where $E_0$ is the energy of the lowest-energy single-skyrmion orbital and $c_i$ annihilates an electron in the lowest-energy single-skyrmion orbital of skyrmion $i$. The tunneling amplitude is given by $$\begin{aligned}
t = \mel{\psi^1}{M_1}{\psi^2} = \mel{\psi^2}{M_1}{\psi^1}\label{eq_tdefn},\end{aligned}$$ where $\ket{\psi^i}$ are the lowest-energy ($j=\flatfrac{1}{2}$ for $R=\lO$ and $v>0$) single-skyrmion orbitals centered at $\r_i$, defined through Eq. , and $$\begin{aligned}
M_{i} = -\Delta \qty[\m^\mathrm{sk}\qty(\r-\r_{i})-\zh]\cdot\bsigma\label{eq_1skM}\end{aligned}$$ is the ‘atomic potential’ of skyrmion $i$ relative to the ferromagnetic background. The tunneling amplitude, $t\qty(x)$, decays exponentially for $x-2R\gg\l$, where $\l \equiv \l_j$ for $j$ corresponding to the lowest-energy single-skyrmion orbital. This parametric dependence comes from the asymptotic behavior of the single-skyrmion orbitals defined in Eq. .
![Molecular binding energy relative to the energy of the single-skyrmion $j=\flathalf$ state as a function of skyrmion-skyrmion separation. The single-skyrmion orbitals decay into the magnetic bulk with length scale $\lambda$ \[Eq. with $j=\flathalf$\], resulting in a similar decay for $t(x)$. The phenomenological dependence considered in Sec. \[sec\_phasediagram\], $E_{0}-E_- = t_0 \BesselK{0}{\frac{x}{\lambda}}$, is plotted with the dashed line. Here, $t_0$ is determined for each data set independently through a fit to the numerical results (colored circles, corresponding to the parameters given in Table \[table\_nparams\], which are determined using the approach described in Appendix \[ap\_electronic\]). The single-skyrmion orbital decay length, $\l$, is calculated using Eq. with the numerically determined energy of the single-skyrmion orbitals.[]{data-label="fig_hybridization"}](fig5.pdf)
Equation is diagonalized by molecular orbitals with energies $$\begin{aligned}
E_{\pm} = E_0\pm\abs{t\qty(x)}.\label{eq_molecular_energies}\end{aligned}$$ We neglect the doubly-occupied state for a skyrmion pair at a sufficiently small separation $x$ and for a weakly-screened long-range Coulomb interaction. In Sec. \[sec\_considerations\], we justify this approximation in the context of recently synthesized Cr doped ${\qty(\mathrm{Bi}_{2-y}\mathrm{Sb}_{y})}_{2}\mathrm{Te}_3$ heterostructures. Figure \[fig\_hybridization\] presents the binding energy as a function of $x$ for skyrmions with radius $R = \lambda_0$. The binding energy is the difference between the lowest-lying molecular orbital, $E_-\qty(x)$, and the single-skyrmion orbital energy, $E_0 = E_-\qty(\infty)$. The binding energy indeed decays with length scale $\lambda$, and is well fit by $t_0\BesselK{0}{\frac{x}{\lambda}}$, as expected from Eq. , and the asymptotic single-skyrmion wavefunctions, given in Eq. .
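In the basis $\qty{\ket{\psi^1},\ket{\psi^2}}$, the molecular Hamiltonian above is a $2\times2$ matrix, and the energies $E_\pm$ follow from its diagonalization. A minimal sketch (illustrative numbers in units of $\Delta$; not parameters from the text):

```python
import numpy as np

def molecular_energies(E0, t):
    """Eigenvalues of the two-level Hamiltonian H = [[E0, t], [t, E0]],
    returned in ascending order: E0 - |t|, E0 + |t|."""
    H = np.array([[E0, t],
                  [t, E0]])
    return np.linalg.eigvalsh(H)

# A bound orbital at E0 = -0.6 with tunnel splitting |t| = 0.08 splits into
# bonding (E_-) and antibonding (E_+) molecular orbitals.
E_minus, E_plus = molecular_energies(E0=-0.6, t=-0.08)
```

The bonding combination lies at $E_-=E_0-\abs{t}$ regardless of the sign of $t$.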
![Two-skyrmion electronic spectrum ($E_+$ and $E_-$, black) and zero-temperature grand potential ($\Phi$, blue) at (a) $\mu > E_0$ and (b) $\mu < E_0$ for the tunneling amplitude given in Eq. . The chemical potential, $\mu$, is plotted in gray. For $\mu>E_-$, the lowest-energy molecular orbital is occupied and the grand potential is given by $\Phi=E_--\mu$. When $\mu<E_-$, the lowest-energy molecular orbital is unoccupied and the grand potential is simply given by the energy of the unoccupied state ($\Phi=0$). The tunneling amplitude, $t=\flatfrac{\qty(E_+- E_-)}{2}$, decays exponentially with separation, over the single-skyrmion orbital decay length, $\lambda$.[]{data-label="fig_grandpotential"}](fig6.pdf)
The effective interaction between a skyrmion pair is determined by integrating out the electronic degrees of freedom. The electronic state is determined through the grand potential for a fixed separation $x$: $$\begin{aligned}
\Phi\qty(x) =& -T\ln\qty(\Z),\label{eq_GPdefn}\\
\Z =& 1 + e^{-\frac{E_- - \mu}{T}}+ e^{-\frac{E_+ - \mu}{T}},\label{eq_Zdefn}
\end{aligned}$$ where we set Boltzmann’s constant to 1 and assume that the electronic subsystem is in contact with a reservoir at temperature $T$ and chemical potential $\mu$. The molecule is restricted to support $N=0,1$ electrons, consistent with the discussion following Eq. .
The $T=0$ behavior of $\Phi$ is sketched in Fig. \[fig\_grandpotential\]. For $\mu > E_0$, the lowest-energy molecular orbital is occupied at all $x$ and the grand potential is $\Phi = E_-\qty(x)-\mu$. For $\mu < E_0$, the lowest-energy molecular orbital crosses the chemical potential at a separation $\xmu$, above which it becomes unoccupied. $x_\mu$ is defined by $$\begin{aligned}
E_-\qty(\xmu) \equiv \mu\label{eq_xmudefn}.\end{aligned}$$ For a monotonically increasing energy $E_-\qty(x)$ at $T=0$, the grand potential is therefore: $$\begin{aligned}
\Phi =
\begin{cases}
E_-\qty(x)-\mu & x < \xmu\\
0 & x > \xmu
\end{cases}. \label{eq_Phipiecewise}\end{aligned}$$ Since $-\Phi^\prime(x)< 0$, at $T=0$, the electronic system effects an attractive force for $x<\xmu$ or when $\mu > E_0$. If this attractive interaction overcomes the short-range repulsive magnetic interaction discussed in Sec. \[sec\_msystem\], a stable bound skyrmion molecule will form.
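The piecewise form above is the $T\to0$ limit of the grand potential. A short numerical sketch (energies in arbitrary units; `logaddexp` keeps the low-$T$ evaluation from overflowing):

```python
import numpy as np

def grand_potential(E_minus, E_plus, mu, T):
    """Phi = -T ln(1 + e^{-(E_- - mu)/T} + e^{-(E_+ - mu)/T}),
    evaluated via log-sum-exp so that small T does not overflow."""
    logZ = np.logaddexp(0.0, -(E_minus - mu) / T)
    logZ = np.logaddexp(logZ, -(E_plus - mu) / T)
    return -T * logZ

# As T -> 0, Phi approaches min(E_- - mu, 0):
occupied = grand_potential(-0.68, -0.52, mu=-0.6, T=1e-4)   # mu > E_-
empty = grand_potential(-0.68, -0.52, mu=-0.8, T=1e-4)      # mu < E_-
```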
Pair-binding phase diagram {#sec_phasediagram}
==========================
The total free energy is given by the sum of the contributions from the electronic system and the magnetic system, $$F(x) = F_M(x)+\Phi(x).\label{eq_FofX}$$ Based on the possible functional forms of $F(x)$ we identify four regimes whose nature and boundaries will be discussed below. The phase diagram is depicted in Fig. \[fig\_phasediagram\](a) and a representative free-energy plot for each phase is shown in Figs. \[fig\_phasediagram\](b)-(e). We denote the regimes of our model as:\
(i) the ‘stable’ regime, where the free energy has a unique global minimum at finite skyrmion-skyrmion separation. We denote this separation $x_B$ and refer to it as the bond length. The stable regime is colored green in Fig. \[fig\_phasediagram\].\
(ii) the ‘stable-hysteretic’ regime, where, in addition to the global minimum at $x_B$, there is a local minimum as $x\to\infty$. This regime is colored beige in Fig. \[fig\_phasediagram\].\
(iii) the ‘metastable-hysteretic’ regime, where there is a *local* minimum at $x_B$ and a *global* minimum at infinite separation. This phase is blue in Fig. \[fig\_phasediagram\].\
(iv) the ‘unstable’ regime, where the free energy is a monotonically decreasing function of separation, and there is no bound state. This phase is colored white in Fig. \[fig\_phasediagram\].
Zero temperature
----------------
At zero temperature, the electronic free energy is defined piecewise through $\Phi=\min\left[E_-(x)-\mu, 0\right]$, as shown in Eq. and Fig. \[fig\_grandpotential\](b). For any pair $(\mu, \xi)$, the phase at zero temperature is determined from the balance of the repulsive (magnetic) and attractive (electronic) forces at a finite separation. We recall that the magnetic contribution to the free energy, $F_M$, is always repulsive and decays quickly over a length scale $\xi$, while the electronic contribution is attractive whenever the lower orbital state is populated and is governed by the length scale $\lambda$. Moreover, we note that, for a bound state to form, the repulsion should be shorter-range than the attraction, $\xi<\lambda$. We therefore scan $\xi<\lambda$ and find the phase boundaries for each $\xi$ as a function of the chemical potential, $\mu$.
*(i) The stable phase*—if $\mu$ is above the lowest molecular orbital energy, $E_-$, for all separations $x$, i.e., $\mu>E_0$, then the orbital is always occupied and $\Phi(x) = E_-(x)-\mu$ for all $x$. We therefore have both repulsion and attraction everywhere and a balance of forces occurs at $x_B$ satisfying $$\label{eq_xb}
F_M'(x_B) + E_-'(x_B) = 0.$$ This stable phase is characterized by a free energy with a unique minimum at $x_B$. In Fig. \[fig\_phasediagram\], the regime is bounded from below by $$\mu>E_0.$$
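With the phenomenological Bessel-function forms used later in this section, $F_M(x)=F_0\BesselK{1}{x/\xi}$ and $E_-(x)=E_0-t_0\BesselK{0}{x/\lambda}$, the force-balance condition can be solved by standard root finding. A sketch with assumed, illustrative parameter values:

```python
import numpy as np
from scipy.special import kv, kvp
from scipy.optimize import brentq

# Illustrative parameters in units where lambda = 1 (all assumed):
xi, lam = 0.5, 1.0      # repulsion range shorter than attraction range
F0, t0 = 5.0, 1.0       # repulsion and tunneling strengths

def force_balance(x):
    """F_M'(x) + E_-'(x) with F_M = F0*K1(x/xi), E_- = E0 - t0*K0(x/lam).
    Uses K0' = -K1 and kvp for the derivative K1'."""
    dFM = (F0 / xi) * kvp(1, x / xi)     # < 0: the magnetic force is repulsive
    dEm = (t0 / lam) * kv(1, x / lam)    # > 0: the electronic force is attractive
    return dFM + dEm

x_B = brentq(force_balance, 0.1, 20.0)   # bond length where the forces balance
```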
*(ii) The stable-hysteretic regime*—in addition to the stable regime, there are two other regimes in which a bound state occurs. As discussed above, when the chemical potential crosses $E_-(x)$ at some separation $x_\mu$, there is an attractive force for $x<x_\mu$ but no attraction for $x>x_\mu$. If the forces balance at a separation where the lower-energy orbital is populated, i.e., $x_B<x_\mu$, there will be a skyrmion bound state. The stable-hysteretic regime and the stable regime are distinguished by the behavior of the free energy at large separations. Because there is no attractive interaction for $x>\xmu$ in the stable-hysteretic regime, there is a free-energy minimum as $x\to \infty$. Between the two minima there is a cusp-like energy barrier at $x_\mu$, as depicted in Fig. \[fig\_phasediagram\](c). In the stable-hysteretic phase the minimum at $x_B$ arises at a lower free energy than the minimum at $x\to\infty$, $$F(x_B)<F(x\to\infty),$$ so the bound state is stable. However, two skyrmions prepared at large separation experience a repulsive interaction and so remain unbound. This is the hysteretic behavior alluded to in the name of the phase.
*(iii) The metastable-hysteretic regime*—the metastable-hysteretic regime is distinguished from the stable-hysteretic regime by the relative energies of the bound and unbound configurations. In this regime the unbound configuration globally minimizes the free energy, $$F(x_B)>F(x\to\infty),$$ but the molecular orbital is occupied at the bond length, $x_B<x_\mu$. The free-energy minimum at $x_B$ is local and the bound state is therefore metastable. The lower bound of this phase is found when the molecular orbital is depopulated at $x_B$, $x_B=x_\mu$, and the attraction can no longer overcome the repulsion at any separation.
*(iv) The unstable regime*—in this regime, the repulsive interaction dominates the free energy for all separations. In this case the only free-energy minimum occurs as $x\to \infty$. This occurs when the molecular orbital is unoccupied at $x_B$, as defined in Eq. , so $x_B$ is not a minimum of the free energy. The unstable phase is defined by the inequality: $$x_\mu < x_B,$$ and the phase boundary is found by comparing the two lengths.
![(a) Zero-temperature phase diagram describing regions of stable (green), stable-hysteretic (beige), metastable-hysteretic (blue), and unstable (white) skyrmion pair binding as a function of the chemical potential, $\mu$, and magnetic healing length, $\xi$, for $\xi<\lambda$. (b) For large chemical potential ($\mu >E_{0}$) the lowest-energy molecular orbital is occupied and the total skyrmion-skyrmion interaction, $F = \Phi+F_M$, has a unique minimum, $F\qty(x_B)$, at a finite skyrmion-skyrmion separation, $x_B$. Thus, the skyrmions form a bound state. (c,d) For intermediate chemical potential, the lowest-energy molecular orbital is occupied for $x<x_\mu$, but is unoccupied for $x>x_\mu$. The bound configuration remains locally stable, but for $x>x_\mu$ the skyrmion-skyrmion interaction is repulsive. Thus, both the bound and unbound configurations will be long-lived. Which of these configurations is the ground state is determined by comparing $F\qty(x_B)$ and $F\qty(x\to\infty)$. (e) For sufficiently low chemical potential, we have $x_\mu < x_B$, and the skyrmion-skyrmion interaction is strictly repulsive. The unbound configuration is then favored. Note that we set the unknown $F_0$ such that $F(2R)=t_0(2R)$ for each $\xi$. \[fig\_phasediagram\]](fig7.pdf)
In Fig. \[fig\_phasediagram\](a), we estimate the phase boundaries as described above using the asymptotic behavior of $F_M(x)$ and $\Phi(x)$. In Fig. \[fig\_FMofX\], we show that for $x\gg\xi,2R$, the magnetic free energy, $F_M(x)$, has the asymptotic form $$F_M(x) \sim F_0 \BesselK{1}{\frac{x}{\xi}},
\label{eq_phenomFMofx}$$ where $F_0$ characterizes the strength of the repulsive interaction. Similarly, in Fig. \[fig\_hybridization\], we show that the tunneling amplitude, $t(x)$, at $x\gg\l$ has the asymptotic form $$t(x) \sim t_0 \BesselK{0}{\frac{x}{\l}}\label{eq_phenomt},
$$ where $t_0$ characterizes the strength of the tunnel splitting.
We use the above asymptotic forms to find the bond length $x_B$, the scale $x_\mu$, and the free energy at the minimum $F(x_B)$ and use these quantities to draw the phase diagram in Fig. \[fig\_phasediagram\]. Moreover, it is possible to approximate the functional form of the free energy further by taking the asymptotic limit of the above Bessel functions: $$\begin{aligned}
F_M(x) &\sim\sqrt{\pi\over 2} F_0 \left({x\over \xi}\right)^{-{1\over 2}} e^{-x/\xi} && x\gg\xi,
\label{eq_approxFM}\\
E_-(x) &\sim E_0-\sqrt{\pi\over 2} t_0 \left({x\over \lambda}\right)^{-{1\over 2}} e^{-x/\lambda} && x\gg\lambda.
\label{eq_approxE-}\end{aligned}$$ Balancing the derivatives of the two expressions above and neglecting subleading terms in $\lambda/x$ and $\xi/x$, we obtain: $$\begin{aligned}
x_B \sim \frac{\lambda\xi}{\lambda-\xi} \ln\left(\sqrt{\frac{\lambda}{\xi}}\frac{F_0}{t_0}\right).\label{eq_xBasym}\end{aligned}$$ This limit is consistent with our assumption of $x \gg \lambda >\xi$ when $$\begin{aligned}
{F_0 \over t_0} \gg \sqrt{\xi\over \lambda}.\end{aligned}$$ Equation shows that the bond length becomes shorter as the ratio $\xi/\lambda$ is decreased. The minimum of the free energy can be found by substituting Eq. into the approximated form of $F(x)$ using Eqs. , or Eqs. , . For $x\gg \xi,\lambda$, $E_-(x)$ and $F_M(x)$ are approximately exponential, giving $$\begin{aligned}
F_M'(x) \sim -{1\over \xi} F_M(x) && E_-'(x)\sim {1\over\lambda}t(x).
\end{aligned}$$ This helps simplify the expression for $F(x_B)$ as the forces \[$-F_M'(x)$, $-E_-'(x)$\] are equal in magnitude and opposite in sign at the minimum $x_B$. We may therefore write $$\begin{aligned}
F(x_B)& = E_-(x_B)+F_M(x_B)-\mu,\\
&\sim E_-(x_B)\left(1-\frac{\xi}{\lambda}\right) -\mu,
\end{aligned}$$ which demonstrates that, as expected, the bound-configuration minimum is deeper for smaller ratios of $\xi/\lambda$.
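The asymptotic bond-length formula can be checked against a direct numerical solution of the force balance built from the exponential approximations above; because the $x^{-1/2}$ prefactors cancel once the subleading derivative terms are dropped, the two agree to solver precision. A sketch (parameters assumed, chosen so that $F_0/t_0$ is large):

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative parameters (assumed), with F0/t0 >> sqrt(xi/lam):
xi, lam, F0, t0 = 0.2, 1.0, 1000.0, 1.0

def FM(x):
    """Asymptotic magnetic repulsion F_M ~ sqrt(pi/2)*F0*(x/xi)^(-1/2)*exp(-x/xi)."""
    return np.sqrt(np.pi / 2) * F0 * (x / xi) ** -0.5 * np.exp(-x / xi)

def t(x):
    """Asymptotic tunneling amplitude t ~ sqrt(pi/2)*t0*(x/lam)^(-1/2)*exp(-x/lam)."""
    return np.sqrt(np.pi / 2) * t0 * (x / lam) ** -0.5 * np.exp(-x / lam)

# Balance F_M' + E_-' = 0 with F_M' ~ -F_M/xi and E_-' ~ t/lam:
x_num = brentq(lambda x: -FM(x) / xi + t(x) / lam, 0.5, 20.0)

# Closed form for the bond length:
x_asym = lam * xi / (lam - xi) * np.log(np.sqrt(lam / xi) * F0 / t0)
```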
One should note that these estimates are based on the asymptotic behavior of $t(x)$ and $F_M(x)$ and hence are accurate only in the limits mentioned above. However, we stress that a bound state may exist well beyond these limits. For example, in the limit where $\xi \ll \lambda$ we may view the magnetic repulsion as a hard-shell repulsion. Therefore, the bound state would form at $x=2R$ where the repulsion drops to zero while the attractive force is given by $-E_-'(2R)\ne 0$.
Finite temperature
------------------
At finite temperature the free energy changes due to thermal occupation of the molecular orbital states as well as thermal fluctuations in the skyrmion separation $x$ and possible changes in the magnetic free energy. In this section we include the temperature only through its effect on the population of the molecular orbitals. Beginning in the stable regime at zero temperature, we determine the conditions for which there may still be a bound state when the temperature is raised. This may be addressed qualitatively by plotting $F(x)$ as defined in Eq. , with the finite-temperature $\Phi$ given by Eq. , see, e.g., Fig. \[fig\_finite\_T\].
We assess the effect of temperature on the bound state in the following way. We assume a large chemical potential such that one of the molecular orbitals is always occupied, $\mu-E_\pm\gg T$ (but we still neglect double occupancy). At finite temperature, the average occupations of the states with energies $E_-$ and $E_+$ shift away from the zero-temperature limits of 1 and 0. The electronic free energy, $\Phi(x)=\bar{E}-TS$, has contributions from both the entropy $S$ and the average energy $\bar{E}$. When $T\gg E_+-E_-$, the populations of the two states are comparable, giving $S\to \ln 2$, and the average energy also becomes independent of $x$: $\bar{E}\to \left(E_+(x)+E_-(x)\right)/2=E_0$. Since $\Phi$ becomes $x$-independent, the attractive force is suppressed as the temperature is raised. We therefore define the temperature scale $T^*$, controlled by the typical molecular energy scale at the free-energy minimum: $$T^* = t(x_B).$$ For $T\ll T^*$, the population of the $E_-$ state is significantly larger than that of $E_+$, the zero-temperature analysis applies, and the bound state persists. Above this temperature, the electronic contribution to the free energy is suppressed. For $T>T^*$ the suppression of the attractive force is linear in $t(x)/T$ and therefore one may find a free-energy minimum even above $T^*$. Since one of the molecular states is always occupied under the conditions described above, we can write the partition function $$\begin{aligned}
\Z = e^{-\qty(E_0-\mu)/T}\left(e^{-t(x)/T}+e^{t(x)/T}\right),\end{aligned}$$ and consequently find the attractive force, $$\begin{aligned}
f(x) = {T\over \Z}{d \Z \over dx} = \tanh\left({t(x)\over T}\right)t'(x). \end{aligned}$$ At high temperature, the attractive force $f(x)$ is suppressed, and decays exponentially with half the decay length ($\lambda/2$) of the low-temperature attractive force ($\lambda$): $$f(x)\simeq \frac{t(x)}{T}t'(x)\propto e^{-2x/\lambda};\quad T\gg t(x),\,x\gg\lambda.$$ Although it is suppressed by temperature, the above force may still overcome the magnetic repulsion at large separations provided $\xi<\lambda/2$. For $\xi \ll \lambda/2$, we expect to see a free-energy minimum at $T>T^*$ which becomes shallower as the temperature is increased. For $\lambda>\xi\gtrsim\lambda/2$, we expect the minimum to vanish above $T^*$. The two types of behavior can be seen in Fig. \[fig\_finite\_T\].
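The halving of the decay length at high temperature can be verified numerically. In the sketch below (with $t_0=\lambda=1$ assumed and $t'$ taken by central difference), the force ratio between two well-separated points is compared at low and high temperature:

```python
import numpy as np

def t_of_x(x, t0=1.0, lam=1.0):
    """Asymptotic tunneling amplitude t(x) ~ t0*sqrt(pi*lam/(2x))*exp(-x/lam)."""
    return t0 * np.sqrt(np.pi * lam / (2.0 * x)) * np.exp(-x / lam)

def attractive_force(x, T, dx=1e-6):
    """f(x) = tanh(t(x)/T) * t'(x); f < 0 means attraction."""
    tprime = (t_of_x(x + dx) - t_of_x(x - dx)) / (2.0 * dx)
    return np.tanh(t_of_x(x) / T) * tprime

# For T >> t(x), f ~ (t/T)*t' decays roughly as exp(-2x/lam), i.e. twice as
# fast as the low-temperature force, for which tanh -> 1 and f ~ t'.
r_hot = attractive_force(8.0, 100.0) / attractive_force(6.0, 100.0)
r_cold = attractive_force(8.0, 1e-9) / attractive_force(6.0, 1e-9)
```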
![Total free energy as a function of separation at $T=0$ (black), $T=\flatfrac{T^*}{5}$ (blue), $T=T^*$ (grey), and $T=5T^*$ (red). (a) For $\xi=\lambda/3$, there is a well-defined free-energy minimum for all $T$, but this minimum is suppressed for $T> T^*$. (b) For $\xi = 2\lambda/3$, there is a free-energy minimum only for $T\ll T^*$.\[fig\_finite\_T\]](fig8.pdf){width="\columnwidth"}
Potential realization {#sec_considerations}
=====================
A promising candidate material to host skyrmions that may realize this effect is the topological insulator heterostructure $\textrm{Cr}_{x}{(\textrm{Bi}_{1-y}\textrm{Sb}_{y})}_{2-x}\textrm{Te}_{3} /
{(\textrm{Bi}_{1-y}\textrm{Sb}_{y})}_{2}\textrm{Te}_{3}$. In Ref. \[\], Yasuda *et al.* argue that this heterostructure hosts a skyrmion ground state for $\tilde{B}\lesssim
0.1$ T at $T\lesssim10$ K. Band structure calculations suggest that this material has an electronic gap of $\Delta\approx10$ meV and a Fermi velocity of $\hbar
v\approx500$ meV$\cdot$nm.[@2016_Yasuda] This suggests that skyrmions of radius $R\gtrsim 50\,\mathrm{nm}$ would host skyrmion-bound electronic states in this material. Accurate characterization of the actual skyrmion size, e.g., by magnetic atomic force microscopy, would be necessary to ensure that this condition is fulfilled. For a tunnel coupling $t\qty(2R)=1\,\mathrm{meV}$ and a skyrmion-skyrmion binding energy $\simeq 0.1\,\mathrm{meV}$, skyrmion-skyrmion bound states should form when $T\lesssim 1$ K.
Throughout this paper, we have neglected the repulsive dipolar interaction between skyrmions as well as the magnetic vector potential. We have also assumed that double-occupancy of the molecular orbitals is suppressed by a weakly-screened Coulomb interaction. Here we argue that these approximations are justified for the heterostructure considered in Ref. \[\] for skyrmions of radius $R=\lO=50$ nm and with $t\qty(2R)=1$ meV. Such skyrmions will have $E_j/\Delta\approx-0.6$ for their lowest-energy single-skyrmion orbitals (see Fig. \[fig\_1sk\_spectrum\]), and will therefore have an orbital decay length, defined through Eq. , $\l\approx 60$ nm.
The repulsive dipolar interaction between skyrmions will dominate both the short-range repulsive interaction considered in Sec. \[sec\_msystem\] and the short-range attractive interaction examined in Sec. \[sec\_esystem\] at large separations. We approximate the dipolar interaction energy as $E_D \approx \frac{\mu_0\abs{\d}^2}{4\pi x^3}$ where $\d\approx\frac{\pi \alpha gs R^2 l}{V} \mu_B \zh$ is the dipole moment due to the core of a skyrmion. We estimate $\alpha\approx0.6$ as the number of dopants per unit cell,[@2016_Yasuda] $V\approx0.59$ nm$^3$ as the unit cell volume,[@1972_Jenkins] $l\approx2$ nm as the depth of the magnetic layer,[@2016_Yasuda] and $g s\approx 1$ as the magnitude of the moment of the dopant atoms. The force due to the dipolar interaction balances the force due to the attractive interaction at $x_D$ satisfying $\Phi'\qty(x_D) = E_D'\qty(x_D)$, with $\Phi\qty(x)$ defined in Eq. . Under the above assumptions, this equality is satisfied for $x_D\approx450$ nm at $T=20$ mK. Below this separation, the repulsive dipolar force may be safely neglected.
In Eq. , we have neglected the contribution of the magnetic vector potential by arguing that the $U(1)$ phase acquired by an electron in a skyrmion-bound orbital of radius $R=50$ nm is small for $\tilde{B}\ll 1$ T. This is easily satisfied for $\tilde{B}\lesssim 0.1$ T, the field strength at which the skyrmion crystal reported in Ref. \[\] is stable.
Finally, we assumed that the molecular orbitals support only $N=0$ or $1$ electrons because of a large charging energy. The doubly-occupied state will be irrelevant if the chemical potential is well below the sum of the excited molecular orbital energy and the charging energy, i.e., if $E_+\qty(x)+U\qty(x)-\mu\gg T$, where $U\qty(x)$ is the charging energy. This is naturally satisfied at all separations if $E_0 - \mu \gg T$. Thus, the majority of the phase diagram, Fig. \[fig\_phasediagram\](a), is valid even for a vanishing Coulomb interaction, provided that the temperature is small compared to the binding energy. For a weakly-screened Coulomb interaction, $U\qty(x) = \frac{e^2}{4\pi\epsilon}\frac{\exp(-\flatfrac{x}{\lD})}{ x}$, where $\lD$ is the Debye length and $\epsilon$ is the permittivity, the doubly-occupied state may be neglected for separations smaller than $x_C$ satisfying $E_+\qty(x_C)+U\qty(x_C)-\mu=T$. Assuming $\mu =E_0 $, $\lD=500$ nm, $\epsilon=\epsilon_0$ (i.e., $\frac{e^2}{4\pi \epsilon} = 1\,\mathrm{eV}\cdot\mathrm{nm}$), and $T=20$ mK, we find $x_C\approx2.5$ $\mu$m.
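Under the stated assumptions ($\mu=E_0$, so $E_+-\mu=t\to0$ at large separation), the condition for $x_C$ reduces to $U\qty(x_C)=T$, which is easily solved numerically. A sketch with the numbers quoted above:

```python
import numpy as np
from scipy.optimize import brentq

# Values from the estimate in the text (x in nm, energies in meV):
lD = 500.0                  # Debye length
e2_over_4pi_eps = 1.0e3     # e^2/(4*pi*eps) ~ 1 eV*nm = 1000 meV*nm
kT = 20e-3 * 8.617e-2       # 20 mK in meV, with k_B = 8.617e-2 meV/K

def U(x):
    """Weakly screened Coulomb charging energy (meV)."""
    return e2_over_4pi_eps * np.exp(-x / lD) / x

# Beyond x_C, double occupancy can no longer be neglected:
x_C = brentq(lambda x: U(x) - kT, 100.0, 1.0e5)
```

This yields $x_C$ of a few microns, consistent with the estimate above (the small residual difference comes from the neglected $t\qty(x_C)$ and the rounding of $e^2/4\pi\epsilon_0$).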
Based on the length scales estimated above, our analysis will be applicable provided that the skyrmions are confined to a sample of size $L<\mathrm{min}\left(x_C, x_D\right)\approx 450$ nm. Larger samples may still host stable bound states, but the neglected effects will dominate the skyrmion-skyrmion interaction at large separations.
In addition to the promising MTI candidate material studied in Ref. , there may also be other systems where it is possible to realize this effect, e.g. a TI device functionalized with a magnetic top layer,[@2017_Gong; @2017_Huang] or even in a 2-dimensional topological insulator (2DTI) setting. This can be realized in a 2DTI where there are two massive Dirac cones of unequal gap sizes (such as the Haldane model[@Haldane] with sublattice asymmetry[@Semenoff]). If the coupling to the magnetic system changes the mass of the two valleys in the same way, there would be a regime in which a sign change in the magnetization amounts to a sign change in the Dirac mass of only one valley. In the presence of skyrmions this would lead to electronic bound states of the kind discussed here.
The DMI assumed here is essential to stabilize skyrmions. This interaction is allowed in the bulk only for non-centrosymmetric materials. However, an interface may also break inversion symmetry leading to a robust DMI (see, e.g., Ref. for a review). A magnetic thin film on top of a topological insulator may then naturally lead to the required DMI at the interface,[@2017_Zarzuela; @2018_Zarzuela] without the need to specialize to non-centrosymmetric materials.
Summary and conclusion
======================
In summary, we have shown that a pair of skyrmions on the surface of a magnetic topological insulator may experience a mutually attractive interaction. In the absence of the topological insulator’s Dirac surface electrons, the skyrmions will experience a repulsive interaction. The contribution of the electronic system may be switched on and off by tuning the chemical potential. For large chemical potential, the skyrmions form a bound state, while for low chemical potential they remain unbound. For intermediate chemical potential, both the bound and unbound states are locally stable. A chemical-potential sweep from low to high and back should therefore lead to hysteretic binding and unbinding of the skyrmion pair. This hysteretic binding may find use in skyrmionic devices,[@2017_Fert] either as means of information storage, or as means to couple electronic and skyrmionic degrees of freedom. It may also be possible to generate GHz spin waves without microwave-frequency magnetic fields via an AC gate voltage. Such a magnon source could allow direct coupling of conventional and spin-wave circuits. The conditions laid out here for skyrmion binding could furthermore be used to realize stable qubits from the resulting molecular states.[@2013_Ferreira] Beyond the predictions made here, the hybridization of skyrmion-bound orbitals may lead to novel magnetoelectric effects within the skyrmion crystal phase, and the contribution of the electronic free energy may modify the skyrmion crystal phase boundary.
There is significant experimental effort in growing and characterizing magnetic topological insulators capable of supporting quantized anomalous Hall conductance and related domain-wall-bound-state phenomena. Current state-of-the-art Cr doped ${\qty(\mathrm{Bi}_{2-y}\mathrm{Sb}_{y})}_{2}\mathrm{Te}_3$ heterostructures show signs of a skyrmion crystal phase,[@2016_Yasuda; @2017_Liu; @2019_Jiang] while the recent discovery of easy-axis ferromagnetic van der Waals materials may lead to a new class of magnetic topological insulators.[@2017_Gong; @2017_Huang] Magnetic topological insulators proximity-coupled to a ferromagnetic thin film have already been shown to host skyrmions.[@2018_Zhang] Observing the effect considered in this work may be a natural next step toward demonstrating that skyrmion physics on the surface of a topological insulator leads to new and exciting effects.
This work was enabled in part by support provided by NSERC, FRQNT, INTRIQ, CIFAR, Nordea Fonden, the FRQNT doctoral scholarship, and Compute Canada.
Numerical solution of Eq. {#ap_2sknumerics}
==========================
In the main text, we introduce a phenomenological form of $\FM\qty(x)$, the magnetic free energy for two skyrmions separated by distance $x$. Since neither $\FM(x)$ nor, more generally, Eq. with arbitrary boundary conditions admits an exact analytical solution, we must determine $\m\qty(\r)$ and $\FM\qty(x)$ numerically to justify our phenomenological model. We outline our numerical approach to minimizing the magnetic free energy here.
To minimize $\FM$, we evolve $\m$ with the partial differential equation $$\begin{aligned}
\pdv{\m}{\tau} = \vb{H}-\qty(\m\cdot\vb{H})\m\label{eq_relax}\end{aligned}$$ where $\tau$ is the simulation time and $$\begin{aligned}
\vb{H} = -\fdv{F_M}{\m}.\end{aligned}$$ Equation is simply the component of the Landau-Lifshitz-Gilbert equation[@2004_Gilbert] that leads to relaxation. In the long-time limit, when $\pdv{\m}{\tau}=0$, the magnetization has been evolved to a configuration that minimizes $\FM\qty[\m]$ under the constraint $\abs{\m}=1$. We implement boundary conditions by setting $\vb{H}\qty(\r)=0$ for $\r$ at boundaries and choosing the correct initial condition.
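A minimal sketch of one relaxation step for a single spin illustrates the scheme: the right-hand side is the component of $\vb{H}$ transverse to $\m$, so the constraint $\abs{\m}=1$ is preserved to first order, and an explicit renormalization removes the residual drift. The easy-axis effective field here is a hypothetical stand-in for $-\fdv{F_M}{\m}$:

```python
import numpy as np

def relax_step(m, H, dtau):
    """One explicit-Euler step of dm/dtau = H - (m.H)m, then renormalize.
    m, H: arrays of shape (..., 3)."""
    mdotH = np.sum(m * H, axis=-1, keepdims=True)
    m_new = m + dtau * (H - mdotH * m)
    return m_new / np.linalg.norm(m_new, axis=-1, keepdims=True)

# Single spin in an easy-axis field H = -dF/dm for F = -K*mz^2/2 with K = 1
# (a toy stand-in for the full functional derivative of F_M):
m = np.array([0.6, 0.0, 0.8])
for _ in range(200):
    H = np.array([0.0, 0.0, m[2]])
    m = relax_step(m, H, dtau=0.1)
# Dynamics cease once m aligns with the easy axis, m -> (0, 0, 1).
```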
To find $\FM\qty(x)$, we initialize the magnetization in an approximate configuration in the two-skyrmion topological sector that should be close to the ground state. We separate the magnetization into a left and a right region and use the numerical solution of Eq. to prepare a single skyrmion centered at $-\flatfrac{x'}{2}$ in the left region. Similarly, we prepare a skyrmion at $\flatfrac{x'}{2}$ in the right region. Then we fix $\vb{H}\qty(\pm
\flatfrac{x'\xh}{2})=0$ to pin the skyrmion cores and evolve the magnetization under Eq. until dynamics cease. From the resulting magnetization we can numerically evaluate Eq. to find $\FM$. However, the pinned skyrmion-skyrmion separation, $x'$, is not the distance between the skyrmion centers. Since the skyrmions repel, they will move away from each other under Eq. until the pinned magnetic moments reach the skyrmion perimeters. We determine the location of the skyrmion centers, $\r_\pm$, by fitting the final $m_z$ with the ansatz $$\begin{aligned}
\Theta\qty(\r) =& 2\sum_{i\in\pm}\arctan\qty(\frac{\sinh{\flatfrac{R}{\xi
}}}{\sinh{\flatfrac{\rho_i}{\xi}}})\\
{\rho}_{\pm} =& \abs{\r-\r_\pm},\end{aligned}$$ where $m_z = \cos\Theta$. This fitting procedure is justified in the limit $\flatfrac{\qty(x-2R)}{\xi} \gg 1$.
Numerical determination of single-skyrmion instability phase boundary {#ap_stability}
=====================================================================
In Sec. \[sec\_msystem\], we argue that the magnetic free energy is stationary for a skyrmion texture, i.e. Eq. is solved by Eq. , with $\Theta$ given by the solution of Eq. . While this is true, for sufficiently low $\qty(B,K)$ the single-skyrmion texture does not minimize $\FM$; it is instead a saddle-point solution. In this region of parameter space, skyrmions are unstable to transitions towards a spin-spiral phase. This phase boundary is well known in the literature,[@2014_Banerjee] and is identified by our numerical simulations. For $\qty(B,K)$ below the instability phase boundary, we find that Eq. , which minimizes $F_M$ under the constraint $\abs{\m}=1$, takes an initial single-skyrmion configuration to a final spiral configuration. To determine the phase boundary, we determine the lifetime of the skyrmion configuration, prepared using the numerical solution of Eq. , under evolution with Eq. . We define the lifetime to be the time elapsed before the magnetization deviates from cylindrical symmetry. Specifically, we calculate the time $\taus$ at which the $z$ component of the magnetization deviates from its azimuthal average by a small threshold: $$\begin{aligned}
\int \dd{\r}{\qty(m_z\qty(\r,\taus) - \ev{m_z\qty(\r,\taus)}_\phi)}^{2} =
\delta\end{aligned}$$ where $$\begin{aligned}
\ev{m_z}_\phi = \frac{1}{2\pi}\int \dd{\phi} m_z\qty(r,\phi).\end{aligned}$$ The skyrmion lifetime diverges at the instability phase boundary. We fit $\taus\qty(B,K)$ for $B,K$ within the unstable phase using $$\begin{aligned}
\taus\qty(B,K) = A{\qty(K-\Ks\qty(B))}^\alpha\end{aligned}$$ to determine $\Ks\qty(B)$, the critical anisotropy for a given applied field. The phase boundary plotted in Fig. \[fig\_phasediagram\] is determined by these $\Ks\qty(B)$ for varying $B$. This phase boundary is consistent with the phase diagram presented in Ref. \[\].
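The symmetry-breaking measure that defines $\taus$ can be sketched on a polar grid; the discretization below is our own assumption (the text does not specify one):

```python
import numpy as np

def azimuthal_deviation(mz, r, dr, dphi):
    """Discretized integral of (m_z - <m_z>_phi)^2 over the plane.
    mz has shape (Nr, Nphi); r holds the radius of each ring."""
    mz_avg = mz.mean(axis=1, keepdims=True)   # azimuthal average on each ring
    return np.sum((mz - mz_avg) ** 2 * r[:, None]) * dr * dphi  # area element r dr dphi

Nr, Nphi = 64, 128
r = np.linspace(0.1, 10.0, Nr)
phi = np.linspace(0.0, 2 * np.pi, Nphi, endpoint=False)
dr, dphi = r[1] - r[0], 2 * np.pi / Nphi

mz_sym = np.cos(r)[:, None] * np.ones((1, Nphi))   # cylindrically symmetric texture
mz_broken = mz_sym + 0.1 * np.cos(phi)[None, :]    # azimuthal perturbation
```

A cylindrically symmetric texture gives exactly zero deviation, while any azimuthal modulation gives a finite value, so crossing the threshold $\delta$ signals the loss of cylindrical symmetry.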
Numerical calculation of in-gap electronic states {#ap_electronic}
=================================================
To justify our tight-binding model and verify the analytical single-skyrmion orbitals, we determine the electronic surface states numerically on a lattice. We can construct a lattice model whose low-energy excitations are governed by Eq. . Following Ref. \[\], we consider an electronic system governed by $$\begin{aligned}
H = \sum_{\k}\ckd
h_{\k}\ck + \Delta\sum_i \psi_i^\dagger\bsigma\cdot\m_i\psi_i, \end{aligned}$$ where $$\begin{aligned}
h_{\k} =& v\qty[\sin(k_x)\sigma_y - \sin(k_y)\sigma_x]
\nonumber\\&+
\eta\qty[2-\cos(k_x)-\cos(k_y)]\sigma_z.\label{eq_numerical_bloch}\end{aligned}$$ The term proportional to $\eta$ in Eq. gaps the lattice model’s Dirac cones at the edge of the Brillouin zone, leaving low-energy excitations only around $\k = 0$. Thus for $\eta >\Delta$, the low-energy, long-wavelength, physics of this model should be given by Eq. .
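As a consistency check (our own sketch, not the Kwant calculation itself), one can verify directly that $h_{\k}$ is gapless only at $\k=0$, while the $\eta$ term gaps the would-be Dirac cone at the zone corner:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_bloch(kx, ky, v=1.0, eta=1.0):
    """h_k = v[sin(kx) sy - sin(ky) sx] + eta[2 - cos(kx) - cos(ky)] sz."""
    return (v * (np.sin(kx) * sy - np.sin(ky) * sx)
            + eta * (2.0 - np.cos(kx) - np.cos(ky)) * sz)

E0 = np.linalg.eigvalsh(h_bloch(0.0, 0.0))       # k = 0: degenerate (gapless) Dirac point
Epi = np.linalg.eigvalsh(h_bloch(np.pi, np.pi))  # zone corner: sin terms vanish, splitting 8*eta
```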
We used the python package Kwant[@2014_Groth] to determine the electronic structure associated with the magnetization generated from the simulations described in Appendix \[ap\_2sknumerics\].
[^1]: More generally, the DMI for a two-dimensional thin film can be written as $f_{DM}=D_1\mathbf{m}\cdot\left(\nabla\times\mathbf{m}\right)+D_2\left[\left(\mathbf{m}\cdot\nabla\right)m_z-m_z\left(\nabla\cdot\mathbf{m}\right)\right]$. For concrete calculations in this manuscript, we have chosen $D_1=D>0$, $D_2=0$, leading to stable Bloch-type skyrmions with $\phi_0=+\pi/2$. In the more general case ($D_1\ne 0,\,D_2\ne 0$), some other (fixed) value of $\phi_0$ will be stabilized. For any value of $\phi_0$, the qualitative arguments leading to magnetic repulsion hold \[see the discussion following Eq. , below\]. Because the electronic states are topological in origin, we expect their presence to be robust to changes in the in-plane magnetization, so the electronic interaction is also robust to changes in the type of skyrmion (value of $\phi_0$), although the wavefunctions may be subject to a more complicated wave equation, Eq. . We therefore expect our qualitative analysis of skyrmion binding to hold for skyrmions of Néel-type ($\phi_0=0$), Bloch-type ($\phi_0=\pi/2$), and intermediate type \[$\phi_0 \in (0,\pi/2)$\].
[^2]: This is in contrast with the case of a centrosymmetric frustrated magnetic system, not considered here, where skyrmions of either winding number, $W=\pm 1$, may be stable.[@2016_Lin]
|
---
author:
- |
Zhuo Pan$^{\ast}$, Yu Yang$^{\ast}$, Xianyue Li, Shou-Jun Xu$^{\dagger}$\
[School of Mathematics and Statistics, Gansu Key Laboratory of Applied Mathematics ]{}\
[and Complex Systems, Lanzhou University, Lanzhou, Gansu 730000, China]{}
title: '**The complexity of total edge domination and some related results on trees**'
---
For a graph $G = (V, E)$ with vertex set $V$ and edge set $E$, a subset $F$ of $E$ is called an $\emph{edge dominating set}$ (resp. a $\emph{total edge dominating set}$) if every edge in $E\backslash F$ (resp. in $E$) is adjacent to at least one edge in $F$. The minimum cardinality of an edge dominating set (resp. a total edge dominating set) of $G$ is the [*edge domination number*]{} (resp. [*total edge domination number*]{}) of $G$, denoted by $\gamma^{'}(G)$ (resp. $\gamma_t^{'}(G)$). In the present paper, we prove that the total edge domination problem is NP-complete for bipartite graphs with maximum degree 3. We also design a linear-time algorithm for solving this problem on trees. Finally, for a graph $G$, we give the inequality $\gamma^{'}(G)\leqslant \gamma^{'}_{t}(G)\leqslant 2\gamma^{'}(G)$ and characterize the trees $T$ which attain the upper or lower bound in the inequality.
Introduction
============
Domination problems have been the subject of many studies in graph theory, and have many applications in operations research, e.g., in resource allocation and network routing, as well as in coding theory. Among the many variants of domination, we mainly focus on total edge domination, a variant of edge domination. Edge domination was introduced by Mitchell and Hedetniemi $\cite{mh77}$ and is related to telephone switching networks $\cite{l68}$. Edge domination is also related to the approximation of the vertex cover problem, since an independent edge dominating set is a matching $\cite{k72}$.
In this paper we generally follow $\cite{hhs98}$ for notation and graph theory terminology. All graphs considered here are finite, undirected, and connected, and have no loops or multiple edges. Let $G=(V, E)$ be a graph with vertex set $V$ and edge set $E$. A subset $F$ of $E$ is called an $\emph{edge dominating set}$ (abbreviated [*ED-set*]{}) of $G$ if every edge not in $F$ is adjacent to at least one edge in $F$. The edge domination number, denoted by $\gamma ^{'}(G)$, is the minimum cardinality of an ED-set of $G$. An ED-set of $G$ with cardinality $\gamma^{'}(G)$ is called a [*$\gamma^{'}(G)$-set*]{}. The edge domination problem has been studied by several authors; see, for example, $\cite{hk93, k98, x06, yg80}$. Yannakakis and Gavril $\cite{yg80}$ showed that the edge domination problem is NP-complete even for planar or bipartite graphs of maximum degree 3, but solvable in polynomial time for trees and claw-free chordal graphs.
The concept of total edge domination, a variant of edge domination, was introduced by Kulli and Patwari $\cite{kp91}$. A subset $F_t$ of $E$ is called a $\emph{total edge dominating set}$ (abbreviated [*TED-set*]{}) of $G$ if every edge is adjacent to at least one edge in $F_t$. The [*total edge domination number*]{}, denoted by $\gamma^{'}_{t}(G)$, is the minimum cardinality of a TED-set of $G$. A TED-set of $G$ with cardinality $\gamma^{'}_{t}(G)$ is called a [*$\gamma^{'}_{t}(G)$-set*]{}. Zhao et al. [@zlm14] proved that the total edge domination problem is NP-complete for planar graphs with maximum degree three and for undirected path graphs, and also constructed a linear-time algorithm for the total edge domination problem in trees using a labeling method. For more studies on total edge domination, see for example references $\cite{ms13, pc16, v14}$.
As far as we know, there is no discussion of the complexity of the total edge domination problem for bipartite graphs. For this reason, we prove that the total edge domination problem is NP-complete for bipartite graphs with maximum degree 3. We also design another linear-time algorithm for computing $\gamma_t^{'}(T)$ of a tree $T$ by the dynamic programming method, different from the algorithm in $\cite{zlm14}$. Kulli et al. $\cite {kp91}$ gave the lower bound $\gamma^{'}(G)\leqslant \gamma^{'}_{t}(G)$ for the total edge domination number of a graph $G$; it is also obvious that $\gamma^{'}_{t}(G)\leqslant 2\gamma^{'}(G)$. So, for any graph $G$, $\gamma^{'}(G)\leqslant \gamma^{'}_{t}(G)\leqslant 2\gamma^{'}(G)$. In this paper, we show that the bounds are sharp and characterize the trees achieving the lower or upper bound.
**Notation.** Let $G=(V, E)$ be a graph. For $v\in V$, denote by $N_{G}(v)$ the [*open neighborhood*]{} of $v$ in $G$, i.e., $N_{G}(v)=\{u\in V|~uv \in E\}$, by $deg_G(v)$ the size of $N_{G}(v)$, called the [*degree*]{} of $v$, and by $E_{G}(v)$ the set of all the edges of $G$ incident with $v$, i.e., $E_{G}(v)=\{e\in E |$ $v$ is incident with $e\}$. Similarly, for $e\in E$, denote by $N_{G}(e)$ the [*open neighbourhood*]{} of $e$ in $G$, i.e., $N_{G}(e)=\{e'\in E|$ $e'$ is adjacent to $e \}$, and by $N_{G}[e]=N_{G}(e)\cup \{e\}$ the [*closed neighbourhood*]{} of $e$. For two vertices $u, v\in V$, the [*distance*]{} $d_{G}(u,v)$ is defined as the length of a shortest path between $u$ and $v$ in $G$. We define the [*distance*]{} $d_G(w, e)$ between a vertex $w$ and an edge $e$ as the smaller of the distances from $w$ to the two endpoints of $e$. The maximum distance among all pairs of vertices is called the $diameter$ of $G$, denoted by $diam(G)$. If there is no ambiguity in the sequel, the subscript in the notation is omitted.
A $leaf$ of a graph $G$ is a vertex of degree one and a [*support vertex*]{} (resp. [*strong support vertex*]{}) of $G$ is a vertex adjacent to a leaf (resp. adjacent to at least two leaves). A [*leaf edge*]{} (or [*pendant edge*]{}) of $G$ is an edge with one leaf as an endpoint. Consider one vertex of a tree as special, called the [*root*]{} of this tree. A tree with the fixed root is a [*rooted tree*]{}. For a vertex $v$ of a rooted tree $T$ with root $r$, a neighbour of $v$ away from $r$ is called a [*child*]{}. For a positive integer $k$, a [*star*]{} $S_{1, k}$ is a tree that contains exactly one non-leaf vertex called a [*center vertex*]{} and $k$ leaves. A [*double star*]{} is a tree that contains exactly two non-leaf vertices called [*center vertices*]{}.
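To make these definitions concrete, here is a brute-force sketch (the helper names are ours) that checks ED- and TED-sets via edge adjacency and computes $\gamma'$ and $\gamma'_t$ on small graphs; it also illustrates the inequality $\gamma'\leqslant\gamma'_t\leqslant 2\gamma'$ recalled in the introduction:

```python
from itertools import combinations

def adjacent(e, f):
    """Two distinct edges are adjacent iff they share an endpoint."""
    return e != f and bool(set(e) & set(f))

def is_ed_set(E, F):
    """ED-set: every edge NOT in F is adjacent to some edge of F."""
    return all(any(adjacent(e, f) for f in F) for e in E if e not in F)

def is_ted_set(E, F):
    """TED-set: EVERY edge of E (including those in F) is adjacent to F."""
    return all(any(adjacent(e, f) for f in F) for e in E)

def min_size(E, pred):
    """Smallest k such that some k-subset of E satisfies pred (brute force)."""
    for k in range(len(E) + 1):
        if any(pred(E, set(F)) for F in combinations(E, k)):
            return k

P5 = [(0, 1), (1, 2), (2, 3), (3, 4)]   # path on 5 vertices
star = [(0, 1), (0, 2), (0, 3)]         # star S_{1,3}
```

The star shows why $\gamma'_t$ can exceed $\gamma'$: an edge is not adjacent to itself, so every member of a TED-set needs a neighbour inside the set.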
The result on NP-completeness
==============================
In this section, we are going to prove that the total edge domination problem is NP-complete for bipartite graphs with maximum degree 3. To prove that a problem $P$ is NP-complete, it is enough to prove that $P\in\mathcal{NP}$ and to show that a known NP-complete problem is reducible to the problem $P$ in polynomial time. The known NP-complete problem used in our reduction is the SAT-3 restricted problem as follows:
[**SAT-3 restricted problem**]{} [@y81].
[**Instance:**]{} A set of clauses $C_{1}, C_{2},\ldots, C_{p}$ containing only variables, with at most three literals per clause, such that every variable occurs twice and its negation occurs once.
[**Question:**]{} Is there a truth assignment of zeros and ones to the variables satisfying all the clauses?
The decision total edge domination problem is stated as follows:
**Instance:** A graph $G = (V, E)$ and a positive integer $k\leqslant |E|$.
**Question:** Does $G$ have a total edge dominating set of size at most $k$?
Now we can state our main result in this section.
\[th:NPC\] The total edge domination problem for bipartite graphs with maximum degree 3 is NP-complete.
The reduction is from the SAT-3 restricted problem. Consider a set of clauses $\{C_{1},\ldots, C_{p}\}$ with variables $x_{1}, x_{2},\ldots, x_{n}$ as input for the SAT-3 restricted problem. Now we construct a graph $G=(V, E)$. For any $1\leqslant l\leqslant p$, there are two adjacent vertices, say $d_{l}$ and $d'_{l}$, corresponding to the clause $C_l$; this subgraph is denoted by $G_l$. For any $1\leqslant i\leqslant n$, there is a subgraph of $G$, which is a disjoint union of three paths $a_ia_{i, 0}a_{i, 1}a_{i, 2}$, $b_ib_{i, 0}b_{i, 1}b_{i, 2}$, $c_ic_{i, 0}c_{i, 1}c_{i, 2}$, together with two edges $a_ic_i$ and $c_ib_i$, corresponding to the variable $x_i$; it is denoted by $G_{x_i}$ (see Fig. \[Fig:NPCConG\]). For any clause $C_{l}$, if $x_i\in C_{l}$, then we connect $d_{l}$ to one of the vertices $a_{i, 0}$ and $b_{i, 0}$ so that $d(a_{i, 0})=3$ and $d(b_{i, 0})=3$ (this is possible by the conditions of the SAT-3 restricted problem); if $\overline{x_i}\in C_{l}$, then we connect $d'_{l}$ to $c_{i, 0}$ (for an example, see Fig. 1). It is obvious that $G$ is bipartite, coloring vertices with white and black as shown in Fig. \[Fig:NPCConG\]. We will show that there is a truth assignment of zeros and ones to the variables satisfying all clauses $\{C_{1}, C_{2},\ldots, C_{p}\}$ if and only if $G$ has a total edge dominating set of size $6n$.

\[Fig:NPCConG\]
Necessity: Given a satisfying assignment of the clauses, define a set $F$ of edges as follows (assume that $x_i$ is in two clauses $C_{l_1}$ and $C_{l_3}$, and $\overline{x_i}$ is in clause $C_{l_2}$): $$\begin{aligned}
F=&\{a_{i, 0}d_{l_{1}}, a_{i, 0}a_{i, 1}, b_{i, 0}d_{l_{3}}, b_{i, 0}b_{i, 1}, c_ic_{i, 0}, c_{i, 0}c_{i, 1}\mid x_{i}=1\}\\
& \cup \{a_ia_{i, 0}, a_{i, 0}a_{i, 1}, b_ib_{i, 0}, b_{i, 0}b_{i, 1}, c_{i, 0}d'_{l_2}, c_{i, 0}c_{i, 1}\mid x_{i}=0\},\end{aligned}$$ (see Fig. \[Fig:NPCConTEDS\]). It is obvious that $F$ is a TED-set of size $6n$.
\[Fig:NPCConTEDS\]
Conversely, we assume that $G$ has a TED-set $F$ of size $6n$. For any $1\leqslant i\leqslant n$, in view of the leaf edges $a_{i, 1}a_{i, 2}, b_{i, 1}b_{i, 2}, c_{i, 1}c_{i, 2}$, $F$ must contain the three edges $a_{i, 0}a_{i, 1}, b_{i, 0}b_{i, 1}, c_{i, 0}c_{i, 1}$ together with their respective adjacent edges. Thus the subgraph $G_{x_i}$ contains exactly 6 edges of $F$. For the convenience of the proof, we assume that $x_i$ is contained in clauses $C_{l_1}$ and $C_{l_2}$, and $\overline{x_i}$ is contained in clause $C_{l_3}$.
[**Case 1.**]{} $c_ic_{i, 0}\not\in F$.
In this case, $F$ must contain $a_ia_{i, 0}, b_ib_{i, 0}$ and we may assume that the edge adjacent to $c_{i, 0}c_{i, 1}$ in $F$ is $c_{i, 0}d'_{l_3}$, otherwise we can add $c_{i, 0}d'_{l_3}$ into $F$ by deleting $c_{i, 1}c_{i, 2}$ from $F$.
[**Case 2.**]{} $c_ic_{i, 0}\in F$.
Similar to Case 1, we can assume that $a_{i, 0}d_{l_1}, b_{i, 0}d_{l_2}\in F$.
Therefore, regardless of whether $F$ contains $c_ic_{i, 0}$, we can always obtain a special total edge dominating set $F$ of size $6n$. We define a truth assignment $\tau$ by setting $x_i=1$ if $c_ic_{i, 0}\in F$, and $x_i=0$ otherwise. Since $F$ is a TED-set constructed as above, at least one edge in $F$ is adjacent to $d_{l}d'_{l}$ for every $l$ (note that $d_{l}d'_{l}\notin F$). Consequently $\tau$ satisfies all clauses.
The degree of every vertex of the graph $G$ constructed above, except for $d_l$ and $d'_l$, is at most 3; but if $C_l=x_{i_1}x_{i_2}x_{i_3}$ (resp., $\overline{x_{i_1}}~\overline{x_{i_2}}~\overline{x_{i_3}}$), then $d_{G}(d_l)=4$ (resp., $d_{G}(d'_l)=4$). We therefore use the following modification: (1) replace the edge $d_ld'_l$ with the gadget $H$ shown in Fig. \[Fig:NPCDeg4a\], and (2) replace the three edges connecting the variable vertices $a, b, c$ and $d_l$ (resp. $d'_l$) with the three edges $ax, by, cz$ connecting $a, b, c$ to the vertices $x, y, z$ of $H$, respectively.
It is easy to show by a straightforward case analysis that: for a TED-set $F$ of $G$,\
(1). if none of the three edges $\{ax, by, cz\}$ belongs to $F$, then $F$ contains at least nine edges from $H$, see Fig. \[Fig:NPCDeg4a\].\
(2). if one of the three edges $\{ax, by, cz\}$ is in $F$, say $ax$, then $F$ contains at least eight edges from $H$, see Fig. \[Fig:NPCDeg4b\].
In particular, let $s$ be the number of 3-literal clauses whose literals are all positive or all negative. Then we can similarly show that there is a truth assignment of zeros and ones to the variables satisfying all clauses $\{C_{1}, C_{2},\ldots, C_{p}\}$ if and only if $G$ has a total edge dominating set of size $6n+8s$.
From the proof of Theorem \[th:NPC\], the constructed graph has girth at least 10.
The total edge domination problem for bipartite graphs of girth at least 10 with maximum degree 3 is NP-complete.
The notations are as in the proof of Theorem \[th:NPC\]. By the construction of $G$, there are no edges among the $G_{l}$’s (or $H$) and among the $G_{x_i}$’s. So a cycle $C$ is either contained in $H$ (note that there are no cycles in $G_l$ or $G_{x_i}$) or formed by going through $G_{l_1}$, $G_{x_{i_1}}$, $G_{l_2}$, $G_{x_{i_2}}$, $\ldots$, $G_{l_k}$, $G_{x_{i_k}}$, $G_{l_1}$ $(k\geqslant 2)$; in the latter case the intersection of $C$ with each $G_{x_{i_j}}$ contains at least three edges, and so the length of $C$ is at least $5k\geqslant 10$. Note that the girth of $H$ is more than 12.
A linear-time algorithm for trees
=================================
In this section, we work on a linear-time algorithm for finding the total edge domination number of a tree by using the dynamic programming method.
First, we define some sets and some parameters. Let $T$ be a tree with an edge $e$. We define: $$\begin{aligned}
\mathcal{F}_{1}(T, e) := \{& F|~F \text{ is a TED-set of } T \text{ with } e \in F\};\\
\mathcal{F}_{0}(T, e) := \{ & F|~F \text{ is a TED-set of } T \text{ with } e \notin F\} ;\\
\mathcal{F}_{\overline{1}}(T, e) := \{& F|~F \text{ is an ED-set of } T \text{ with a unique isolated edge $e$ in $F$}\};\\
\mathcal{F}_{\overline{0}}(T, e) := \{& F|~F \text{ is a TED-set of } T-e, \text{ but $e$ is not dominated by $F$}\}.\end{aligned}$$
It is easy to obtain the following.
\[Le:foursetsnonEmp\] Let $e$ be a leaf edge of tree $T$. Then\
$\mathcal{F}_1(T, e)\neq \emptyset$ if and only if $T\neq K_2$;\
$\mathcal{F}_0(T, e)\neq \emptyset$ if and only if $T$ has at least 3 edges;\
$\mathcal{F}_{\overline{1}}(T, e)\neq \emptyset$ (resp. $\mathcal{F}_{\overline{0}}(T, e)\neq \emptyset$) if and only if $T\setminus N[e]$ has no $K_2$ component.
We denote $$\begin{aligned}
\gamma'_1(T, e): =& \mbox{min} \{~{|F|}~\big|~ F \in \mathcal{F}^{}_{1}(T,e)\};\\
\gamma'_0(T, e):=& \mbox{min} \{{~|F|}~\big|~ F \in \mathcal{F}_{0}(T,e)\};\\
\gamma'_{\overline{1}}(T, e): =& \mbox{min} \{~{|F|}~\big|~ F\in \mathcal{F}_{\overline{1}}(T,e)\};\\
\gamma'_{\overline{0}}(T,e):=& \mbox{min} \{~{|F|}~\big|~F\in\mathcal{F}_{\overline{0}}(T,e)\}.\end{aligned}$$
By convention, if a set is empty, then we set the corresponding value to infinity. For example, if $\mathcal{F}_{\overline{0}}(T,e)= \emptyset$, then we set $\gamma'_{\overline{0}}(T,e)=\infty$. We define a set $F\in \mathcal{F}_{1}(T, e)$ (resp. $\mathcal{F}_{0}(T, e)$, $\mathcal{F}_{\overline{1}}(T, e)$, $\mathcal{F}_{\overline{0}}(T, e)$) of minimum cardinality to be a $\gamma'_{1}(T, e)$ (resp. $\gamma'_{0}(T, e)$, $\gamma'_{\overline{1}}(T, e)$, $\gamma'_{\overline{0}}(T, e)$)[*-set*]{} of $T$. We now give some inequality relationships among the four values defined above.
\[ineq:ParaRela\] Let $T$ be a tree with an edge $e$. If $\mathcal{F}_{1}(T, e), \mathcal{F}_{0}(T, e), \mathcal{F}_{\overline{1}}(T, e)$ and $\mathcal{F}_{\overline{0}}(T,e)$ are non-empty sets, then\
(1) $\gamma'_{1}(T,e) \leqslant \gamma'_{0}(T,e)+1$;\
(2) $\gamma'_{1}(T,e) \leqslant \gamma'_{\overline{1}}(T,e)+1$;\
(3) $\gamma'_{1}(T,e)\leqslant \gamma'_{\overline{0}}(T,e)+2$;\
(4) $\gamma'_{\overline{1}}(T,e) \leqslant \gamma'_{\overline{0}}(T,e)+1$.
Let $e'$ be any edge in $N(e)$.
\(1) Let $F\in \mathcal{F}_{0}(T,e)$. Then there exists an edge $e''\in F$ adjacent to $e$, and further $F+e$ is a TED-set of $T$ containing $e$. Therefore $\gamma'_{1}(T,e) \leqslant \gamma'_{0}(T,e)+1$.
\(2) Let $F_{0}\in \mathcal{F}_{\overline{1}}(T,e)$. Then $e\in F_{0}$ and $N(e)\cap F_{0}=\emptyset$ by the definition of $\mathcal{F}_{\overline{1}}(T,e)$. $F_{0}+e'$ is a TED-set of $T$ containing $e$. Therefore $\gamma'_{1}(T,e)\leqslant \gamma'_{\overline{1}}(T,e)+1$.
\(3) Let $F_{1}\in \mathcal{F}_{\overline{0}}(T,e)$. Then $N[e]\cap F_{1}= \emptyset$ by the definition of $\mathcal{F}_{\overline{0}}(T,e)$. $F_{1}+e +e'$ is a TED-set of $T$ containing $e$. Thus $\gamma'_{1}(T,e)\leqslant \gamma'_{\overline{0}}(T,e)+2$.
\(4) Let $F_{2}\in \mathcal{F}_{\overline{0}}(T,e)$. Then $F_2+ e$ is an ED-set of $T$ with a unique isolated edge $e$, i.e., $F_2+e\in \mathcal{F}_{\overline{1}}(T,e)$. Thus $\gamma'_{\overline{1}}(T,e)\leqslant \gamma'_{\overline{0}}(T,e)+1$.
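The four families and values can be enumerated by brute force on a small tree; the sketch below (helper names are ours) follows the definitions literally, and its outputs are consistent with the inequalities of Lemma \[ineq:ParaRela\]:

```python
from itertools import combinations

def adjacent(e, f):
    return e != f and bool(set(e) & set(f))

def dominated(x, F):
    return any(adjacent(x, f) for f in F)

def in_F1(E, e, F):       # TED-set of T with e in F
    return e in F and all(dominated(x, F) for x in E)

def in_F0(E, e, F):       # TED-set of T with e not in F
    return e not in F and all(dominated(x, F) for x in E)

def in_F1bar(E, e, F):    # ED-set of T whose unique isolated edge in F is e
    return (e in F and not dominated(e, F)
            and all(dominated(f, F) for f in F if f != e)
            and all(dominated(x, F) for x in E if x not in F))

def in_F0bar(E, e, F):    # TED-set of T - e that leaves e undominated
    return (e not in F and not dominated(e, F)
            and all(dominated(x, F) for x in E if x != e))

def gamma(E, e, pred):
    """Minimum size of an edge set in the given family (brute force)."""
    sizes = [k for k in range(len(E) + 1)
             for F in combinations(E, k) if pred(E, e, set(F))]
    return min(sizes) if sizes else float("inf")

# The path P5 with e the leaf edge (0,1).
P5 = [(0, 1), (1, 2), (2, 3), (3, 4)]
e = (0, 1)
```

For this example one finds $\gamma'_{1}=3$, $\gamma'_{0}=2$, $\gamma'_{\overline{1}}=3$, $\gamma'_{\overline{0}}=2$, satisfying all four inequalities of the lemma.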
Before giving the dynamic programming algorithm, we design an edge data structure as follows.
Root the tree $T$ at any leaf, say $r$. The [*height*]{}, denoted by $h$, of $T$ is the maximum distance between $r$ and all other vertices of $T$. The [*level*]{} $i$ $(0\leqslant i\leqslant h)$ is the set of vertices of $T$ with a distance $i$ from $r$.
For such a rooted tree $T$ of order $n+1$, let us label the edges of $T$ as $1, 2, \ldots, n$. We go through every level from $h$ down to 1. For each $i$, $1\leqslant i\leqslant h$, we traverse the edges connecting the vertices on levels $i$ and $i-1$, in any order from left to right. We record the father of every edge of $T$ (the edge numbered $n$ has no father, so we write father$[n]=0$); thus we can use a data structure called an [*edge parent array*]{} to represent $T$. Let $e^0$ be a non-leaf edge of the rooted tree $T$, and $u$ the endpoint of $e^0$ away from the root. Denote by $N_c(e^0)$ the set of neighbors of $e^0$ with endpoint $u$, called the [*children neighbors*]{} of $e^0$, say $\{e^1, e^2, \ldots, e^q\}$ for some integer $q$. For $0\leqslant j\leqslant q$, let $T^j$ be the component containing $e^j$ of $T\setminus (\{e^0, e^1, \ldots, e^q\}\setminus \{e^j\})$.
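A minimal sketch of building such an edge parent array follows; the BFS and the vertex-indexing convention are our own assumptions, and only the bottom-up numbering with father$[n]=0$ is taken from the text:

```python
from collections import deque

def edge_parent_array(n_vertices, edges, root):
    """Number the edges bottom-up (deepest level first) and return the map
    father[j] = label of the edge one step closer to the root (0 for the
    edge incident with the root). Vertices are assumed to be 0..n_vertices-1."""
    adj = {v: [] for v in range(n_vertices)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, level = {root: None}, {root: 0}
    q = deque([root])
    while q:                         # BFS to find levels and parent vertices
        u = q.popleft()
        for w in adj[u]:
            if w not in level:
                level[w], parent[w] = level[u] + 1, u
                q.append(w)
    # Each non-root vertex v owns the edge (parent[v], v); label these
    # edges 1..n by decreasing level of v.
    verts = sorted((v for v in level if v != root), key=lambda v: -level[v])
    label = {v: j + 1 for j, v in enumerate(verts)}
    return {label[v]: (label[parent[v]] if parent[v] != root else 0)
            for v in verts}

# Path 0-1-2-3-4 rooted at the leaf 0: the deepest edge (3,4) gets label 1,
# and the root edge (0,1) gets label n = 4 with father 0.
father = edge_parent_array(5, [(0, 1), (1, 2), (2, 3), (3, 4)], root=0)
```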
Let $T$ be a rooted tree with a non-leaf edge $e^0$ and $N_c(e^0)=\{e^1, e^2, \ldots, e^q\}$ for some integer $q \geqslant 1$. For $0\leqslant j\leqslant q$, the trees $T^j$ are defined as above, and denote $$\begin{aligned}
\theta_{j}:=&\min\{\gamma'_{1}(T^{j}, e^{j}),\gamma'_{0}(T^{j}, e^{j}), \gamma'_{\overline{1}}(T^{j},e^{j}),\gamma'_{\overline{0}}(T^{j}, e^{j})\};\\
A_1:=&\{j\in\{1,2,\ldots,q\}|\theta_{j}=\gamma'_{1}(T^{j}, e^{j}) \};\\
A_2:=&\{j\in\{1,2,\ldots,q\}|\theta_{j}=\gamma'_{0}(T^{j}, e^{j}) \};\\
A_3:=&\{j\in\{1,2,\ldots,q\}|\theta_{j}=\gamma'_{\overline{1}}(T^{j}, e^{j}) \};\\
A_4:=&\{j\in\{1,2,\ldots,q\}|\theta_{j}=\gamma'_{\overline{0}}(T^{j}, e^{j}) \}.\\\end{aligned}$$ Then
$$\begin{aligned}
(1). &~\gamma'_{1}(T, e^0)=
\begin{cases}
\min\{\gamma'_{1}(T^0, e^0),\gamma'_{\overline{1}}(T^0, e^0)\}+\sum\limits^{q}_{j=1}\theta_{j}, &\substack{\text{if}~A_1\cup A_3\neq\emptyset};\\
\min\{\gamma'_{1}(T^0, e^0), \gamma'_{\overline{1}}(T^0, e^0)+1\}+
\sum\limits^{q}_{\substack{j=1}}\theta_{j}, &\substack{\text{if} ~A_1\cup A_3= \emptyset.}
\end{cases}
\\
(2). & ~\gamma'_{0}(T, e^0)=
\begin{cases}
\min\{\gamma'_{0}(T^0, e^0), \gamma'_{\overline{0}}(T^0,e^0)\}+\sum\limits^{q}_{j=1}\theta_{j}, &\substack{\text{if}~A_1\neq\emptyset~or ~|A_3|\geqslant 2;}\\
\min\{\gamma'_{0}(T^0, e^0), \gamma'_{\overline{0}}(T^0, e^0)\}+
\sum\limits^{q}_{\scriptsize{j=1}}\theta_{j}+1,& \substack{\text{if}~ A_1=\emptyset~and~|A_3|=1 ~or~\\ A_1=A_3=\emptyset, ~A_2\neq \emptyset, ~A_4\neq \emptyset;}\\
\min\{\gamma'_{0}(T^0, e^0),\gamma'_{\overline{0}}(T^0, e^0)+1\}+\sum\limits^{q}_{\substack{j=1}}\theta_{j}, &\substack{\text{if } A_1=A_3=A_4=\emptyset;}\\
\min\{\gamma'_{0}(T^0, e^0),\gamma'_{\overline{0}}(T^0, e^0)\}+\sum\limits^{q}_{j=1}\theta_{j} +1,&\substack{\text{if}~ A_1=A_2=A_3=\emptyset,~ and~there~is~ j\in A_4\\~such~that~ \gamma'_{1}(T^{j},e^{j})-\gamma'_{\overline{0}}(T^{j},e^{j})=1; }\\
\min\{\gamma'_{0}(T^0, e^0),\gamma'_{\overline{0}}(T^0, e^0)\}+\sum\limits^{q}_{j=1}\theta_{j} +2,
& \substack{\text{if}~ A_1=A_2=A_3=\emptyset~and~ any~j\in A_4,\\ \gamma'_{1}(T^{j},e^{j})-\gamma'_{\overline{0}}(T^{j},e^{j})=2. }\\
\end{cases}
\\
(3). &~\gamma'_{\overline{1}}(T, e^0)= \gamma'_{\overline{1}}(T^0, e^0)+\sum\limits^{q}_{j=1} \min \{\gamma'_{0}(T^{j}, e^{j}), \gamma'_{\overline{0}}(T^{j}, e^{j})\};
\\
(4). &~\gamma'_{\overline{0}}(T, e^0)= \gamma'_{\overline{0}}(T^0,e^0)+\sum\limits^{q}_{j=1} \gamma'_{0}(T^{j}, e^{j}).\end{aligned}$$
For convenience, for $0\leqslant j\leqslant q$, we define $F_{T^j}=F_T\cap E(T^j)$ for an edge subset $F_T$ of $T$, and thus $|F_T|=\sum_{j=0}^q |F_{T^j}|$. In particular, for $0\leqslant j\leqslant q$, if $F_T$ is a TED-set of $T$, then $F_{T^j}\in \mathcal{F}_{1}(T^{j},e^{j})\cup \mathcal{F}_{0}(T^{j},e^{j})\cup
\mathcal{F}_{\overline{1}}(T^{j},e^{j})\cup
\mathcal{F}_{\overline{0}}(T^{j},e^{j})$ by the definition. Denote $\overline{N_c}(e^0)=N(e^0)\setminus N_c(e^0)$.
(1). Let $F_{T}$ be a $\gamma'_{ 1}(T, e^0)$-set.
[**Case 1.1.**]{} $\overline{N_c}(e^0)\cap F_{T}\neq \emptyset$.
In this case, the restriction $F_{T^0}$ of $F_T$ on $T^0$ is a TED-set of $T^0$, further a $\gamma'_{ 1}(T^0,e^0)$-set. For any $j$ ($1\leqslant j\leqslant q$), $F_{T^j}$ is a set of size $\theta_j$ in $\mathcal{F}_{1}(T^{j},e^{j})\cup \mathcal{F}_{0}(T^{j},e^{j})\cup
\mathcal{F}_{\overline{1}}(T^{j},e^{j})\cup
\mathcal{F}_{\overline{0}}(T^{j},e^{j})$ by the definition of $F_{T^j}$. So $$\begin{aligned}
\gamma'_{1}(T, e^0)= \gamma'_{1}(T^0, e^0)+ \sum\limits^{q}_{j=1}\theta_{j}.\end{aligned}$$
[**Case 1.2.**]{} $\overline{N_c}(e^0)\cap F_{T}= \emptyset$.
In this case, $F_{T^0}\in\mathcal{F}_{\overline{1}}(T^0, e^0)$. Thus $$\label{ineq:case12}
\gamma'_{1}(T, e^0)\geqslant \gamma'_{ \overline{1}}(T^0, e^0)+\sum_{j=1}^q \theta_j.$$ In order to connect $e^0$ in $F_T$, there exists some $1\leqslant j\leqslant q$ such that $e^j\in F_{T^j}$.
[**Subcase 1.2.1.**]{} $A_1\cup A_3\neq \emptyset$, say, $j_1\in A_1$.
We take any $\gamma'_{ \overline{1}}(T^0, e^0)$-set $B^0$ and $\gamma'_{1}(T^{j_1}, e^{j_1})$-set $B^{j_1}$. For any $j\neq j_1$ $(1\leqslant j\leqslant q)$, we choose an edge set $B^j$ of size $\theta_j$ in $\mathcal{F}_{1}(T^{j},e^{j})\cup \mathcal{F}_{0}(T^{j},e^{j})\cup
\mathcal{F}_{\overline{1}}(T^{j},e^{j})\cup
\mathcal{F}_{\overline{0}}(T^{j},e^{j})$. Then $\cup_{j=0}^q B^j$ is a TED-set of $T$ of size $\gamma'_{ \overline{1}}(T^0, e^0)+\sum_{j=1}^q \theta_j$ satisfying $|N
(e_0)\cap (\cup_{j=0}^q B^j)|\geqslant1$. Combined with , we have
$$\begin{aligned}
\gamma'_{1}(T, e^0)= \gamma'_{\overline{1}}(T^0, e^0)+
\sum\limits^{q}_{j=1}\theta_{j}.\end{aligned}$$
[**Subcase 1.2.2.**]{} $A_1\cup A_3=\emptyset$, i.e., $A_1=\emptyset$ and $A_3=\emptyset$.
In this subcase, equality does not hold in Eq. . If $A_2\neq \emptyset$, then, combining Lemma \[Le:foursetsnonEmp\], Lemma \[ineq:ParaRela\] (1) and $A_1=\emptyset$, for any $j\in A_2$ we have $\gamma'_{1}(T^j, e^j)= \gamma'_{0}(T^j, e^j)+1=\theta_j+1$. Otherwise, if $A_2=\emptyset$, then $A_4=\{1, 2, \ldots, q\}(\neq \emptyset).$ Combining Lemma \[ineq:ParaRela\] (4) and $A_3=\emptyset$, for any $j\in A_4$ we have $\gamma'_{\overline{1}}(T^{j}, e^{j})= \gamma'_{\overline{0}}(T^{j}, e^{j})+1=\theta_j+1$. Similarly to Subcase 1.2.1, in either case we can construct a TED-set of $T$ of size $\gamma'_{\overline{1}}(T^0, e^0)+\sum_{j=1}^q \theta_j+1$ satisfying $|N(e_0)\cap (\cup_{j=0}^q B^j)|\geqslant1$. So $$\begin{aligned}
\gamma'_{1}(T, e^0)=
\gamma'_{\overline{1}}(T^0, e^0)+
\sum\limits^{q}_{\substack{j=1}}\theta_{j}+1.
\end{aligned}$$
(2). Let $F_{T}$ be a $\gamma'_{ 0}(T, e^0)$-set.\
If $\overline{N_c}(e^0)\cap F_{T}\neq \emptyset$, then the restriction $F_{T^0}$ of $F_T$ on $T^0$ is a TED-set of $T^0$, further a $\gamma'_{0}(T^0, e^0)$-set. So $$\label{ineq:case01}
\gamma'_{0}(T, e^0)\geqslant \gamma'_{0}(T^0, e^0)+\sum_{j=1}^q \theta_j.$$
If $\overline{N_c}(e^0)\cap F_{T}= \emptyset$, then the restriction $F_{T^0}$ of $F_T$ on $T^0$ belongs to $\mathcal{F}_{\overline{0}}(T^0, e^0)$, further a $\gamma'_{\overline{0}}(T^0, e^0)$-set. So $$\label{ineq:case02}
\gamma'_{0}(T, e^0)\geqslant \gamma'_{\overline{0}}(T^0, e^0)+\sum_{j=1}^q \theta_j.$$
[**Case 2.1.**]{} $A_1\neq\emptyset$, say $j_{1}\in A_1$.
We take $B^0$ to be any $\gamma'_{0}(T^0, e^0)$-set in the case of $\overline{N_c}(e^0)\cap F_{T}\neq \emptyset$, and any $\gamma'_{\overline{0}}(T^0, e^0)$-set in the case of $\overline{N_c}(e^0)\cap F_{T}= \emptyset$; we take any $\gamma'_{1}(T^{j_1}, e^{j_1})$-set $B^{j_1}$; and, for any $j\neq j_1$ $(1\leqslant j\leqslant q)$, an edge set $B^j$ of size $\theta_j$ in $\mathcal{F}_{1}(T^{j},e^{j})\cup \mathcal{F}_{0}(T^{j},e^{j})\cup
\mathcal{F}_{\overline{1}}(T^{j},e^{j})\cup
\mathcal{F}_{\overline{0}}(T^{j},e^{j})$. Thus $\cup_{j=0}^q B^j$ is a TED-set of $T$ of size $\gamma'_{0}(T^0, e^0)+\sum_{j=1}^q \theta_j$ in the case of $\overline{N_c}(e^0)\cap F_{T}\neq \emptyset$, or $\gamma'_{\overline{0}}(T^0, e^0)+\sum_{j=1}^q \theta_j$ in the case of $\overline{N_c}(e^0)\cap F_{T}= \emptyset$, satisfying $|N(e_0)\cap (\cup_{j=0}^q B^j)|\neq 0$. Combined with and , we have $$\begin{aligned}
\gamma'_{0}(T, e^0)=\mbox{min}\{\gamma'_{0}(T^0, e^0), \gamma'_{\overline{0}}(T^0, e^0)\}+ \sum\limits^{q}_{j=1}\theta_{j}.\end{aligned}$$
[**Case 2.2.**]{} $A_1=\emptyset$ and $A_3\neq \emptyset$.
If $|A_3|\geqslant 2$, then, for $j_1, j_2\in A_3$, we can take a $\gamma'_{\overline{1}}(T^{j_1}, e^{j_1})$-set $B^{j_1}$ and a $\gamma'_{\overline{1}}(T^{j_2}, e^{j_2})$-set $B^{j_2}$. The other sets $B^0$ and $B^j$, for $1\leqslant j\leqslant q$ and $j\neq j_1, j_2$, are taken as in Subcase 2.1. Similarly, we can obtain $$\begin{aligned}
\gamma'_{0}(T, e^0)=\mbox{min}\{\gamma'_{0}(T^0, e^0), \gamma'_{\overline{0}}(T^0, e^0)\}+ \sum\limits^{q}_{j=1}\theta_{j}.\end{aligned}$$
If $|A_3|=1$, say $A_3=\{j_{3}\}$, then neither Eq. nor Eq. attains equality in this case. According to Lemma \[ineq:ParaRela\] (2) and $A_1=\emptyset$, $\gamma'_{1}(T^{j_3}, e^{j_3})=\gamma'_{\overline{1}}(T^{j_3}, e^{j_3})+1= \theta_{j_3} + 1$. We take a $\gamma'_{1}(T^{j_3}, e^{j_3})$-set $B^{j_3}$. The other sets $B^0$ and $B^j$, for $1\leqslant j\leqslant q$ and $j\neq j_3$, are taken as in Subcase 2.1. Thus $\cup_{j=0}^q B^j$ is a TED-set of $T$ of size $\gamma'_{0}(T^0, e^0)+\sum_{j=1}^q \theta_j+1$ in the case of $\overline{N_c}(e^0)\cap F_{T}\neq \emptyset$, or $\gamma'_{\overline{0}}(T^0, e^0)+\sum_{j=1}^q \theta_j+1$ in the case of $\overline{N_c}(e^0)\cap F_{T}= \emptyset$, satisfying $|N(e_0)\cap (\cup_{j=0}^q B^j)|\neq 0$. So $$\begin{aligned}
\gamma'_{0}(T, e^0)=\mbox{min}\{\gamma'_{0}(T^0, e^0), \gamma'_{\overline{0}}(T^0, e^0)\}+ \sum\limits^{q}_{j=1}\theta_{j}+1.\end{aligned}$$
[**Case 2.3.**]{} $A_1=A_3=\emptyset$ and $A_2\neq \emptyset$.
If $A_4=\emptyset$, i.e., $A_2=\{1, 2, \ldots, q\}$, and $\overline{N_c}(e^0)\cap F_{T}\neq \emptyset$, then we take any $\gamma'_{0}(T^0, e^0)$-set $B^0$. For $1\leqslant j \leqslant q$, we take any $\gamma'_{0}(T^{j}, e^j)$-set $B^j$. Thus $\cup_{j=0}^q B^j$ is a TED-set of $T$ of size $\gamma'_{0}(T^0, e^0)+\sum_{j=1}^q \theta_j$.
If $A_4=\emptyset$ and $\overline{N_c}(e^0)\cap F_{T}= \emptyset$, then equality does not hold in Eq. . By Lemma \[ineq:ParaRela\] (1) and $A_1=\emptyset$, for any $ j\in A_2, \gamma'_{1}(T^j, e^j)= \gamma'_{0}(T^j, e^j)+1=\theta_j+1$. We take a $\gamma'_{1}(T^{j_1}, e^{j_1})$-set $B^{j_1}$ for some $1\leqslant j_1 \leqslant q$ and others $B^j$ for any $0\leqslant j\leqslant q$ and $j\neq {j_{1}}$ are taken as in Subcase 2.1. Thus $\cup_{j=0}^q B^j$ is a TED-set of $T$ of size $\gamma'_{\overline{0}}(T^0, e^0)+\sum_{j=1}^q \theta_j+1$.
If $A_4\neq \emptyset$, then equality does not hold in Eqs. and . By Lemma \[ineq:ParaRela\] (1) and $A_1=\emptyset$, for any $ j\in A_2, \gamma'_{1}(T^j, e^j)= \gamma'_{0}(T^j, e^j)+1=\theta_j+1$. We take any $\gamma'_1(T^
{j_{1}}, e^{j_{1}})$-set $B^{j_{1}}$ for some $1\leqslant {j_{1}}\leqslant q$ and the others $B^j$ for any $0\leqslant j\leqslant q$ and $j\neq {j_{1}}$ are taken as in Subcase 2.1. Thus $\cup_{j=0}^q B^j$ is a TED-set of $T$ of size $\gamma'_{0}(T^0, e^0)+\sum_{j=1}^q \theta_j+1$ in the case of $\overline{N_c}(e^0)\cap F_{T}\neq \emptyset$ or $\gamma'_{\overline{0}}(T^0, e^0)+\sum_{j=1}^q \theta_j+1$ in the case of $\overline{N_c}(e^0)\cap F_{T}= \emptyset$.
So $$\begin{aligned}
\gamma'_{0}(T, e^0)=
\begin{cases}
\min\{\gamma'_{0}(T^0, e^0), \gamma'_{\overline{0}}(T^0, e^0)+1\} +\sum\limits^{q}_{j=1}\theta_{j},&\text{ if } A_4=\emptyset;\\
\min\{\gamma'_{0}(T^0, e^0), \gamma'_{\overline{0}}(T^0, e^0)\}+\sum\limits^{q}_{j=1}\theta_{j}+1,&\text{ if } A_4\neq\emptyset.
\end{cases}
\end{aligned}$$
[**Case 2.4.**]{} $A_1=A_2=A_3=\emptyset$, i.e., $A_4=\{1, 2, \ldots, q\}$.
In this case, to obtain a $\gamma'_0(T, e^0)$-set, we need one $\gamma'_1(T^{j'}, e^{j'})$-set or at least two $\gamma'_{\overline{1}}(T^{j''}, e^{j''})$-sets for some $1\leqslant j', j''\leqslant q$. So equality does not hold in Eqs. and . By Lemma \[ineq:ParaRela\] (4), for each $j\in A_4$, we have $\gamma'_{\overline{0}}(T^{j}, e^{j})+1\leqslant\gamma'_{1}(T^{j}, e^{j})\leqslant \gamma'_{\overline{0}}(T^{j}, e^{j})+2$. If there exists $j_{4}\in A_4$ such that $\gamma'_{1}(T^{j_{4}},e^{j_{4}})-\gamma'_{\overline{0}}(T^{j_{4}},e^{j_{4}})=1$, then we can take a $\gamma'_{1}(T^{j_{4}}, e^{j_{4}})$-set $B^{j_4}$, and the other sets $B^j$ for $0\leqslant j\leqslant q$ and $j\neq j_4$ are taken as in Subcase 2.1. Thus $\cup_{j=0}^q B^j$ is a TED-set of $T$ of size $\gamma'_{0}(T^0, e^0)+\sum_{j=1}^q \theta_j+1$ in the case of $\overline{N_c}(e^0)\cap F_{T}\neq \emptyset$ or $\gamma'_{\overline{0}}(T^0, e^0)+\sum_{j=1}^q \theta_j+1$ in the case of $\overline{N_c}(e^0)\cap F_{T}= \emptyset$. Otherwise, for all $j$, $\gamma'_{1}(T^j, e^{j})-\gamma'_{\overline{0}}(T^j, e^j)=2$. Thus the left-hand sides in both Eqs. and are at least two more than the right-hand sides. We can take a $\gamma'_{1}(T^{j_{4}}, e^{j_{4}})$-set $B^{j_4}$ for some $j_4\in A_4$, and the other sets $B^j$ for $0\leqslant j\leqslant q$ and $j\neq j_4$ are taken as in Subcase 2.1. Thus $\cup_{j=0}^q B^j$ is a TED-set of $T$ of size $\gamma'_{0}(T^0, e^0)+\sum_{j=1}^q \theta_j+2$ in the case of $\overline{N_c}(e^0)\cap F_{T}\neq \emptyset$ or $\gamma'_{\overline{0}}(T^0, e^0)+\sum_{j=1}^q \theta_j+2$ in the case of $\overline{N_c}(e^0)\cap F_{T}= \emptyset$. Therefore
$$\begin{aligned}
\gamma'_{0}(T, e^0)=
\begin{cases}
\min\{\gamma'_{0}(T^0, e^0), \gamma'_{\overline{0}}(T^0, e^0)\}+\sum\limits^{q}_{j=1}\theta_{j}+1,&\text{ if there is } j \text{ such that } \gamma'_{1}(T^{j}, e^{j})-\gamma'_{\overline{0}}(T^{j},e^j)=1;\\
\min\{\gamma'_{0}(T^0, e^0), \gamma'_{\overline{0}}(T^0, e^0)\}+\sum\limits^{q}_{j=1}\theta_{j}+2,&\text{ if for all } j,\ \gamma'_{1}(T^{j}, e^{j})-\gamma'_{\overline{0}}(T^{j},e^j)=2.
\end{cases}
\end{aligned}$$
(3). Let $F_{T}$ be a $\gamma'_{\overline{1}}(T, e)$-set.
The restriction $F_{T^0}$ of $F_T$ on $T^0$ belongs to $\mathcal{F}_{\overline{1}}(T^0, e)$, and, for $1\leqslant j\leqslant q$, the restriction $F_{T^j}$ of $F_T$ on $T^j$ belongs to $\mathcal{F}_0(T^j, e^j)$ or $\mathcal{F}_{\overline{0}}(T^j, e^j)$; the converse also holds. Therefore $$\begin{aligned}
\gamma'_{\overline{1}}(T, e)= \gamma'_{\overline{1}}(T^0, e)+\sum\limits^{q}_{j=1} \min \{\gamma'_{0}(T^{j}, e^{j}), \gamma'_{\overline{0}}(T^{j}, e^{j})\}.\end{aligned}$$
(4). Let $F_{T}$ be a $\gamma'_{\overline{0}}(T, e)$-set.\
The restriction $F_{T^0}$ of $F_T$ on $T^0$ belongs to $\mathcal{F}_{\overline{0}}(T^0, e)$, and, for $1\leqslant j\leqslant q$, the restriction $F_{T^j}$ of $F_T$ on $T^j$ belongs to $\mathcal{F}_0(T^j, e^j)$; the converse also holds. Therefore $$\begin{aligned}
\gamma'_{\overline{0}}(T, e)= \gamma'_{\overline{0}}(T^0, e)+\sum\limits^{q}_{j=1} \gamma'_{0}(T^{j}, e^{j}).\end{aligned}$$
Based on Theorem 3.1, we present the following algorithms.
**Algorithm 1** (computing $\gamma'_{1}(T, i')$).\
*Input:* an edge $i$ of a rooted tree $T$ represented by its edge parent array $[1,2,3,\ldots,n]$. *Output:* $\gamma'_{1}(T,i')$.\
Set $i^{'}\leftarrow father(i)$; $N_{c}(i')\leftarrow children(i')$; $T^{0}\leftarrow$ the component of $T-N_{c}(i')$ containing $i'$; for each $j\in N_c(i')$, $T^{j}\leftarrow$ the component of $T-(N_{c}(i')+i'-j)$ containing $j$ and $\theta_{j}\leftarrow\min\{\gamma'_{1}(T^{j}, j),\gamma'_{0}(T^{j}, j),\gamma'_{\overline{1}}(T^{j},j),\gamma'_{\overline{0}}(T^{j}, j)\}$; $A_1\leftarrow\{j\in N_{c}(i')\mid\theta_{j}=\gamma'_{1}(T^{j}, j) \}$; $A_3\leftarrow\{j\in N_{c}(i')\mid\theta_{j}=\gamma'_{\overline{1}}(T^{j}, j) \}$. According to the corresponding case of Theorem 3.1 (1), set $\gamma'_{1}(T, i')\leftarrow\min\{\gamma'_{1}(T^0, i'),\gamma'_{\overline{1}}(T^0, i')\}+\sum_{j\in N_{c}(i')}\theta_{j}$ or $\gamma'_{1}(T, i')\leftarrow\min\{\gamma'_{1}(T^0, i'), \gamma'_{\overline{1}}(T^0, i')+1\}+ \sum_{j\in N_{c}(i')}\theta_{j}$.
**Algorithm 2** (computing $\gamma'_{\overline{1}}(T, i')$).\
*Input:* an edge $i$ of a rooted tree $T$ represented by its edge parent array $[1,2,3,\ldots,n]$. *Output:* $\gamma'_{\overline{1}}(T,i')$.\
Set $i^{'}\leftarrow father(i)$; $N_{c}(i')\leftarrow children(i')$; $T^{0}\leftarrow$ the component of $T-N_{c}(i')$ containing $i'$; for each $j\in N_c(i')$, $T^{j}\leftarrow$ the component of $T-(N_{c}(i')+i'-j)$ containing $j$. Then $\gamma'_{\overline{1}}(T, i')\leftarrow \gamma'_{\overline{1}}(T^0, i')+\sum_{j\in N_{c}(i')} \min \{\gamma'_{0}(T^{j}, j), \gamma'_{\overline{0}}(T^{j}, j)\}$.
**Algorithm 3** (computing $\gamma'_{0}(T, i')$).\
*Input:* an edge $i$ of a rooted tree $T$ represented by its edge parent array $[1,2,3,\ldots,n]$. *Output:* $\gamma'_{0}(T,i')$.\
Set $i'\leftarrow father(i)$; $N_{c}(i')\leftarrow children(i')$; $T^{0} \leftarrow$ the component of $T-N_{c}(i')$ containing $i'$; for each $j\in N_c(i')$, $T^{j}\leftarrow$ the component of $T-(N_{c}(i')+i'-j)$ containing $j$ and $\theta_{j}\leftarrow\min\{\gamma'_{1}(T^{j}, j),\gamma'_{0}(T^{j}, j), \gamma'_{\overline{1}}(T^{j},j),\gamma'_{\overline{0}}(T^{j}, j)\}$; $A_1 \leftarrow\{j\in N_{c}(i')\mid\theta_{j}=\gamma'_{1}(T^{j}, j) \}$; $A_2 \leftarrow\{j\in N_{c}(i')\mid\theta_{j}=\gamma'_{0}(T^{j}, j) \}$; $A_3 \leftarrow\{j\in N_{c}(i')\mid\theta_{j}=\gamma'_{\overline{1}}(T^{j}, j) \}$; $A_4 \leftarrow \{j\in N_{c}(i')\mid\theta_{j}=\gamma'_{\overline{0}}(T^{j}, j) \}$. According to the corresponding case of Theorem 3.1 (2), set $\gamma'_{0}(T, i')$ to one of
$\min\{\gamma'_{0}(T^0, i'), \gamma'_{\overline{0}}(T^0, i')\}+\sum_{j\in N_{c}(i')}\theta_{j}$;
$\min\{\gamma'_{0}(T^0, i'), \gamma'_{\overline{0}}(T^0, i')\}+\sum_{j\in N_{c}(i')}\theta_{j}+1$;
$\min\{\gamma'_{0}(T^0, i'),\gamma'_{\overline{0}}(T^0, i')+1\}+\sum_{j\in N_{c}(i')}\theta_{j}$; or
$\min\{\gamma'_{0}(T^0, i'),\gamma'_{\overline{0}}(T^0, i')\}+\sum_{j\in N_{c}(i')}\theta_{j} +2$.
**Algorithm 4** (computing $\gamma'_{\overline{0}}(T, i')$).\
*Input:* an edge $i$ of a rooted tree $T$ represented by its edge parent array $[1,2,3,\ldots,n]$. *Output:* $\gamma'_{\overline{0}}(T,i')$.\
Set $i'\leftarrow father(i)$; $N_{c}(i')\leftarrow children(i')$; $T^{0}\leftarrow$ the component of $T-N_{c}(i')$ containing $i'$; for each $j\in N_c(i')$, $T^{j}\leftarrow$ the component of $T-(N_{c}(i')+i'-j)$ containing $j$. Then $\gamma'_{\overline{0}}(T, i')\leftarrow \gamma'_{\overline{0}}(T^0, i')+\sum_{j\in N_{c}(i')} \gamma'_{0}(T^{j}, j)$.
**Algorithm 5** (computing the total edge domination number of $T$).\
*Input:* a rooted tree $T$ represented by its edge parent array $[1,2,3,\ldots,n]$. *Output:* the total edge domination number of $T$.\
Initialize $\gamma'_{1}(T,1)\leftarrow \infty$; $\gamma'_{0}(T,1) \leftarrow \infty$; $\gamma'_{\overline{1}}(T,1)\leftarrow1$; $\gamma'_{\overline{0}}(T,1)\leftarrow0$. For each edge $i$, processed bottom-up, set $i^{'}\leftarrow father(i)$; $N_{c}(i')\leftarrow children(i')$; $T^{0}\leftarrow$ the component of $T-N_{c}(i')$ containing $i'$; for each $j\in N_c(i')$, $T^{j}\leftarrow$ the component of $T-(N_{c}(i')+i'-j)$ containing $j$, $\theta_{j}:=\min\{\gamma'_{1}(T^{j}, j),\gamma'_{0}(T^{j}, j), \gamma'_{\overline{1}}(T^{j},j),\gamma'_{\overline{0}}(T^{j}, j)\}$, and the sets $A_1, A_2, A_3, A_4$ as in Algorithm 3; then determine $\gamma'_{1}(T, i')$ by Algorithm 1, $\gamma'_{0}(T, i')$ by Algorithm 3, $\gamma'_{\overline{1}}(T, i')$ by Algorithm 2, and $\gamma'_{\overline{0}}(T, i')$ by Algorithm 4. Finally, return $$\gamma'_{t}(T) = \min\{ \gamma'_{1}(T,n), \gamma'_{0}(T,n), \gamma'_{\bar{1}}(T,n), \gamma'_{\bar{0}}(T,n) \}.$$
Algorithm 5 produces the total edge domination number of a tree in linear time.
The running times of Algorithms 1, 2, 3 and 4 are clearly constant. Algorithm 5 visits each father edge $e$ of $T$ once, and all of the statements within each visit can be executed in constant time, so with an adequate data structure the algorithm runs in linear time.
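To make the quantities concrete, the following brute-force routine recomputes $\gamma'(T)$ and $\gamma'_t(T)$ directly from the definitions. It is exponential and intended only as a small-case sanity check for the linear-time algorithm above; all function names are ours, not part of the paper.

```python
from itertools import combinations

def adjacent(e, f):
    """Two distinct edges are adjacent iff they share an endpoint."""
    return e != f and bool(set(e) & set(f))

def gamma_prime(edges, total=False):
    """Brute-force (total) edge domination number of a small graph.

    total=False: every edge is in D or adjacent to an edge of D.
    total=True:  every edge (including those in D) is adjacent to an edge of D.
    """
    for k in range(1, len(edges) + 1):
        for D in combinations(edges, k):
            if all(any(adjacent(e, f) for f in D) or (not total and e in D)
                   for e in edges):
                return k

# Path P_6 on vertices 1..6.
p6 = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]
print(gamma_prime(p6), gamma_prime(p6, total=True))  # prints: 2 3
```

Checking such small trees against the values returned by Algorithm 5 is a cheap way to validate an implementation of the recursion.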
Characterizing $(\gamma'_{t}=2\gamma')$-trees and $(\gamma'_{t}=\gamma')$-trees
===============================================================================
In this section we provide a constructive characterization of trees satisfying $ \gamma'_{t}(T) = 2\gamma'(T)$ and $\gamma'_{t}(T) = \gamma'(T)$, denoted by $(\gamma'_{t}=2\gamma')$-trees and $(\gamma'_{t}=\gamma')$-trees, respectively.
First, we begin with some properties of specific graphs used in this section.
\[Ex:StaDou\] Let $T$ be a star or a double star. Then $\gamma^{'}(T)=1$ and $\gamma^{'}_{t}(T)=2$.
\[Ex:Paths\] If $T$ is a path with five vertices, then $\gamma^{'}(T)=\gamma^{'}_{t}(T)=2$. If $T$ is a path with six vertices, then $\gamma^{'}(T)=2$ and $\gamma^{'}_{t}(T)=3$.
\[Th:NoLeafEdge\] Let $G$ be a connected graph of diameter $\geqslant 4$. Then there exists a minimum edge dominating set (resp. a minimum total edge dominating set) $D$ of $G$ such that $D$ contains no leaf edges of $G$.
Suppose to the contrary that each minimum edge dominating set contains some leaf edges, and let $D$ be a minimum edge dominating set containing the fewest leaf edges. Then for each leaf edge $e\in D$, $N(e)\cap D=\emptyset$; otherwise $D-e$ is a smaller edge dominating set, a contradiction. Choose one non-leaf edge $e'$ of $N(e)$; then $D'=D-e+e'$ is a minimum edge dominating set containing fewer leaf edges than $D$, a contradiction. Similarly, we can prove the total version.
\[Co:diam4\] Let $T$ be a tree with diameter 4. Then $\gamma^{'}_{t}(T)=\gamma^{'}(T)$.
The subgraph induced by the non-leaf edges of $T$ is a star $S_{1,k}$. In order to dominate all leaf edges, by Theorem \[Th:NoLeafEdge\], $E(S_{1,k})$ is a minimum edge dominating set, and it is also a TED-set of $T$, so $\gamma^{'}_{t}(T)\leqslant\gamma^{'}(T)$. Combined with $\gamma^{'}(T)\leqslant\gamma^{'}_{t}(T)$, we get $\gamma^{'}_{t}(T)=\gamma^{'}(T)$.
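The corollary can be checked exhaustively on a concrete diameter-4 tree; the brute-force helper below recomputes both parameters from the definitions (an illustrative check of ours, not part of the proof).

```python
from itertools import combinations

def gamma_prime(edges, total=False):
    # Smallest D: every edge adjacent to D (total), or in D or adjacent (ordinary).
    adj = lambda e, f: e != f and bool(set(e) & set(f))
    for k in range(1, len(edges) + 1):
        for D in combinations(edges, k):
            if all(any(adj(e, f) for f in D) or (not total and e in D)
                   for e in edges):
                return k

# Spider of diameter 4: center 0, support vertices 1, 2, 3 with leaves
# 4, 5, 6, plus a leaf 7 attached to the center.
spider = [(0, 1), (0, 2), (0, 3), (1, 4), (2, 5), (3, 6), (0, 7)]
print(gamma_prime(spider), gamma_prime(spider, total=True))  # prints: 3 3
```

Here the three non-leaf edges $(0,1),(0,2),(0,3)$ form both a minimum edge dominating set and a minimum TED-set, as the proof predicts.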
\[diam=5\] Let $T$ be a tree with diameter 5. Then $\gamma^{'}_{t}(T)=\gamma^{'}(T)$ or $\gamma^{'}_{t}(T)=\gamma^{'}(T)+1$.
By the diameter condition, the subgraph induced by the non-leaf edges of $T$ is exactly a double star, say $H$, with two adjacent center vertices $r, t$. By Theorem \[Th:NoLeafEdge\], let $D$ be a minimum edge dominating set of $T$ containing no leaf edges, and let $e$ be a leaf edge of $H$, say $e=vr$ or $vt$. Since, in $T$, $v$ is incident with at least one leaf edge, $D$ contains $e$. Thus $(E(H)-rt)\subseteq D$. Since $D+rt$ induces a connected subgraph, namely $H$, $D+rt$ is a total edge dominating set of $T$, so $\gamma^{'}_{t}(T)=\gamma^{'}(T)$ or $\gamma^{'}_{t}(T)=\gamma^{'}(T)+1$.
$(\gamma'_{t}=2\gamma')$-trees
------------------------------
In this subsection we provide a constructive characterization of the trees $T$ satisfying $\gamma'_{t}(T)=2\gamma'(T)$. Note that every star or double star satisfies this condition. In what follows we consider the trees satisfying the condition other than stars.
Our aim is to describe an inductive procedure for building a tree $T$ with $\gamma'_{t}(T)=2\gamma'(T)$ by labelling. For the initial step, for any vertex $v$ of $P_4$, we give a label $C$ or $L$ to $v$, denoted by $l(v)$, defined as $l(v)=L$ if $v$ is a leaf of $P_4$, and $l(v)=C$ otherwise. For convenience, we call an edge with both endpoints labelled $C$ a $C-C$ edge.
Let $\mathcal{T}$ be the family of labelled trees $T$ containing the labelled $P_{4}$ as the initial labelled tree, constructed inductively by the two operations $\mathcal{O}_{1}$, $\mathcal{O}_{2}$ listed below (i.e., constructing a bigger labelled tree $T'$ from a smaller labelled tree $T$ in $\mathcal{T}$).\
**Operation** $\mathcal{O}_{1}$: Let $T \in \mathcal{T}$ and $v$ a vertex of $T$ with $l(v) = L$ such that: (1) each vertex labelled $C$ at distance 2 from $v$ is adjacent to a leaf vertex; (2) for any $C-C$ edge $wu$ at distance 1 from $v$, say with $u$ adjacent to $v$, either $u$ has a leaf neighbor other than $v$ or all vertices in $N(w)-u$ are leaves. Construct a bigger tree ${T'}$ in $\mathcal{T}$ from $T$ and a labelled $P_{4}$ by identifying $v$ and a leaf vertex of $P_{4}$, labelling the identified vertex as $L$ and keeping the labels of the other vertices unchanged, see Fig. \[Fig:1=2a\].\
**Operation** $\mathcal{O}_{2}$: Let $T \in \mathcal{T}$ and $v$ a vertex of $T$ with $l(v) = C$. Construct a bigger tree ${T'}$ in $\mathcal{T}$ from $T$ by adding a new vertex $u$ adjacent to $v$, labelling $u$ as $L$, keeping the labels of the other vertices unchanged, see Fig. \[Fig:1=2b\].
\[fig:1=2 operations\]
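A quick experiment (ours, with the brute-force check inlined) illustrates Operation $\mathcal{O}_2$: starting from the labelled $P_4$ and attaching a leaf at a $C$-vertex preserves $\gamma'_t = 2\gamma'$.

```python
from itertools import combinations

def gamma_prime(edges, total=False):
    # Brute-force (total) edge domination number from the definitions.
    adj = lambda e, f: e != f and bool(set(e) & set(f))
    for k in range(1, len(edges) + 1):
        for D in combinations(edges, k):
            if all(any(adj(e, f) for f in D) or (not total and e in D)
                   for e in edges):
                return k

# Labelled P_4 on vertices 0-1-2-3 with labels L, C, C, L.
labels = {0: 'L', 1: 'C', 2: 'C', 3: 'L'}
edges = [(0, 1), (1, 2), (2, 3)]
assert gamma_prime(edges, total=True) == 2 * gamma_prime(edges)

# Operation O_2: add a new vertex 4 adjacent to the C-vertex 1, labelled L.
labels[4] = 'L'
edges.append((1, 4))
assert gamma_prime(edges, total=True) == 2 * gamma_prime(edges)
```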
From the two operations above, we can get the following simple observations.
\[obs:1\] Let $T\in\mathcal{T}$. Then
1. Each leaf vertex is labelled $L$ and each support vertex is labelled $C$.
2. Exactly one neighbor of each vertex labelled $C$ is labelled $C$, and the remaining neighbors are labelled $L$.
3. No two vertices labelled L are adjacent.
4. If one endpoint of a $C-C$ edge has a non-leaf neighbor labelled $L$, then the other endpoint has one leaf neighbor.
\[Le:CCEDS\] Let $T\in \mathcal{T}$ and $U$ the set of edges whose endpoints are labelled $C$ in $T$. Then $U$ is a $\gamma'(T)$-set.
By Observation \[obs:1\] (2) and (3), we know that $U$ is an edge dominating set of $T$ and, further, each component of the induced subgraph $T[U]$ is $K_2$. By Observation \[obs:1\] (4) and Theorem \[Th:NoLeafEdge\], the size of any edge dominating set is at least $|U|$. Thus, $U$ is a $\gamma'(T)$-set of $T$.
\[Le:TIs1=2\] Let $T\in\mathcal{T}$. Then $T$ is a $(\gamma'_{t}=2\gamma')$-tree.
We proceed by induction on the size $m$ of the edge set of a tree $T\in \mathcal{T}$. For the initial step, it is obvious that $\gamma'_{t}(P_4)=2\gamma'(P_4)$. For the inductive hypothesis, we assume that $\gamma'_{t}( \overline{T})=2\gamma'(\overline{T})$ for every $\overline{T}\in \mathcal{T}$ of edge size less than $m$. Let $T\in \mathcal{T}$ with edge size $m$, and suppose $T$ is obtained from a tree $\overline{T}\in \mathcal{T}$ by one of the two operations. We need to prove that $\gamma'_{t}( T)=2\gamma'(T)$. Next, we distinguish two cases according to which operation is used to construct $T$ from $\overline{T}$.
**Case 1.** $T$ is obtained from $\overline{T}$ and a labelled $P_4=u_{1}u_{2}u_3u_4$ by Operation 1, i.e., identifying $u_1$ and $v\ (\in V(\overline{T}))$; denote by $v$ the identified vertex in $T$.
By Lemma \[Le:CCEDS\], we have $\gamma'(T)=\gamma'(\overline{T})+1$. Next, we just need to show $\gamma'_{t}(T)=\gamma'_{t}(\overline{T})+2$.
On the one hand, the union of a $\gamma'_{t}(\overline{T})$-set of $\overline{T}$ and $\{vu_{2}, u_{2}u_{3}\}$ is a TED-set of $T$, further $\gamma'_{t}(T)\leqslant\gamma'_{t}(\overline{T})+2$. On the other hand, it is sufficient to show that $\gamma'_t(\overline{T})+2\leq \gamma'_t(T)$. Without loss of generality, let $N_{\overline{T}}(v)=\{v_{1},\ldots, v_{r}\}$ for some positive integer $r$. For $1\leqslant i\leqslant r$, from the definition of Operation 1 and Observation \[obs:1\] (3), $l_{\overline{T}}(v)=L$ and $l_{\overline{T}}(v_{i})=C$; by Observation \[obs:1\] (2), we denote by $w_{i}$ $(1\leqslant i\leqslant r)$ the unique vertex labelled $C$ adjacent to $v_i$ in $\overline{T}$; and by the choice of $v$ in the definition of Operation 1, $w_{i}$ has one leaf neighbor in $\overline{T}$.
By Theorem \[Th:NoLeafEdge\], let $F_t$ be a $\gamma'_{t}(T)$-set containing no leaf edges. If the restriction $F_t|_{\overline{T}}$ of $F_t$ on $\overline{T}$ is a TED-set of $\overline{T}$, then $\gamma'_t(\overline{T})+2\leqslant \gamma'_t(T)$. In what follows we assume that $F_t|_{\overline{T}}$ is not a TED-set of $\overline{T}$; then $|E_{\overline{T}}(v)\cap F_t|\leqslant 1$.
If $E_{\overline{T}}(v)\cap F_t=\emptyset$, then $F_t|_{\overline{T}}$ does not dominate some edge incident with $v$ in $\overline{T}$, say $vv_i$ for some integer $i$; further, there is no leaf edge $e$ incident with $v_i$ in $T$, for otherwise $F_t$ would not dominate $e$ in $T$. By the choice of $v$ in Operation 1, all neighbors of $w_i$ other than $v_i$ are leaves, contradicting the choice of $F_t$. If $E_{\overline{T}}(v)\cap F_t$ has a unique edge, say $vv_{i}$ for some $i$, then $w_iv_i\notin F_t$. Since $w_i$ has a leaf neighbor by the choice of $v$ in Operation 1, there is one edge in $F_t$ incident with $w_i$. Therefore the restriction of $F_t-vv_i+v_iw_i$ on $\overline{T}$ is a TED-set of $\overline{T}$, and further $\gamma'_t(\overline{T})+2\leqslant \gamma'_t(T)$.
**Case 2.** $T$ is obtained from $\overline{T}$ by adding a new vertex $u$ adjacent to $v$ labelled $C$ (i.e., Operation 2).
By Lemma \[Le:CCEDS\], we can easily get $\gamma'(T)=\gamma'(\overline{T})$. Then $\gamma'_{t}(T)\leqslant 2\gamma'(T)= 2\gamma'(\overline{T})=\gamma'_{t}(\overline{T})\leqslant\gamma'_{t}(T)$, and so $\gamma'_{t}(T)=2\gamma'(T)$.
Combining the two cases above, we have $\gamma'_{t}(T)=2\gamma'(T)$ for $T\in \mathcal{T}$.
\[Le:1=2CloNeiIntEmp\] Let $T$ be a tree with $\gamma'_{t}(T)=2\gamma'(T)$, $F$ a $\gamma'(T)$-set. Then $N[e]\cap N[e']=\emptyset$ for any distinct edges $e, e'\in F$.
By contradiction. Assume that there exist two edges $e, e'$ in $F$ such that $N[e]\cap N[e']\neq\emptyset$, say $e''\in N[e]\cap N[e']$. Now we construct a TED-set $S$ of $T$ from $F+ e''$: for each edge $f\in F-e- e'$, add to $F+ e''$ an edge adjacent to $f$. Then $|S|\leqslant 2|F|-1=2\gamma'(T)-1$, a contradiction.
\[cor 1=2\] Let $T$ be a tree with $\gamma'_{t}(T)=2\gamma'(T)$, $vu$ and $uw$ two adjacent edges in $T$. Then $v, w$ and $u$ can’t all be support vertices.
This follows directly from Lemma \[Le:1=2CloNeiIntEmp\].
\[Le:1=2IsT\] Let $T$ be a non-star tree with $\gamma'_{t}(T)=2\gamma'(T)$. Then $T\in\mathcal {T}$.
We proceed by induction on the edge size of a non-star tree $T$ with $\gamma'_{t}(T)=2\gamma'(T)$. For the initial step, if $T$ is a tree with $diam(T) =3$, then $T$ is a double star with $\gamma'(T)=1$ and $\gamma'_{t}(T)=2$, so we can obtain $T$ from a labelled $P_4$ by applying Operation $\mathcal{O}_2$ repeatedly. By Corollaries \[Co:diam4\] and \[diam=5\], if $T$ is a tree with $diam(T)=4$ or $5$, then $T$ does not satisfy $\gamma'_{t}(T)=2\gamma'(T)$. In what follows let $T$ be a tree of edge size $m$ and diameter at least 6 with $\gamma'_{t}(T)=2\gamma'(T)$. For the inductive hypothesis, we assume that every tree $\overline{T}$ of edge size less than $m$ with $\gamma'_{t}(\overline{T})=2\gamma'(\overline{T})$ is in $\mathcal {T}$.
If a support vertex $v$ has two leaf neighbors in $T$ with $\gamma'_{t}(T)=2\gamma'(T)$ and $w$ is one of the leaf neighbors of $v$, then $v$ is still a support vertex in $\overline{T}=T-w$. Combined with Theorem \[Th:NoLeafEdge\], a minimum edge dominating set (resp. a minimum total edge dominating set) of $\overline{T}$ containing no leaf edges is exactly a minimum edge dominating set (resp. a minimum total edge dominating set) of $T$ containing no leaf edges. So $\gamma'(\overline{T})= \gamma'(T)$ and $\gamma'_{t}(\overline{T})= \gamma'_{t}(T)$. Therefore, $2\gamma'(\overline{T})= 2\gamma'(T)=\gamma'_{t}(T)=\gamma'_{t}(\overline{T})$. By the inductive hypothesis, $\overline{T}\in \mathcal{T}$ with a labelling. By Observation \[obs:1\] (1), the support vertex $v$ is labelled $C$ in $\overline{T}$. Thus we can obtain the tree $T$ by applying Operation $\mathcal{O}_{2}$ to $\overline{T}$.
Let $P$ be a longest path in $T$, say $P=v_{0}v_{1}\ldots v_{t}$ for some $t$ ($t \geqslant 6$), and denote $e_{i}=v_{i}v_{i+1}$. If $v_2$ has a leaf neighbor, say $v'_1$, let $\overline{T}=T-v'_1$. By Theorem \[Th:NoLeafEdge\], $\overline{T}$ has a $\gamma'(\overline{T})$-set (resp. a $\gamma'_t(\overline{T})$-set) containing $e_1$, which is still a $\gamma'(T)$-set (resp. a $\gamma'_t(T)$-set), so $\gamma'_t(\overline{T})=\gamma'_t(T)=2\gamma'(T)=2\gamma'(\overline{T}).$ By the inductive hypothesis, $\overline{T}\in \mathcal{T}$ with a labelling. By Observation \[obs:1\] (1) and (2), the vertices $v_1$ and $v_2$ are labelled $C$ in $\overline{T}$. Thus we can obtain the tree $T$ by applying Operation $\mathcal{O}_{2}$ to $\overline{T}$.
In what follows we assume that each support vertex of $T$ has exactly one leaf neighbor and $v_2$ is not a support vertex. By Theorem \[Th:NoLeafEdge\], let $F$ be a $\gamma'(T)$-set of $T$ containing no leaf edges; thus $e_1\in F$. For convenience, we root $T$ at the vertex $v_{t}$.
\[Cl:1=2IsTv3Chi\] For every child $v$ of $v_3$, the subtree of $T-v_3$ containing $v$ is exactly $P_3$.
By contradiction. If $v$ is a leaf, then there exists an edge incident with $v_3$ in $F$, say $e$. Note that $e_1\in F$. But $N[e]\cap N[e_1]\neq \emptyset$, a contradiction with Lemma \[Le:1=2CloNeiIntEmp\]. If $v$ has only leaf children, then $vv_3\in F$. Similarly, we can obtain a contradiction because $N[vv_3]\cap N[e_1]\neq \emptyset$. If $v$ has at least two support children, then $|E(v)\cap F|\geqslant 2$, a contradiction. So $v$ has exactly one support child; since $v$ plays the same role as $v_2$ in the choice of $P$, combined with the assumption, we obtain the claim.
\[Cl:1=2IsTv4Chi\] For a child $v'_3$ of $v_4$, the length of a longest path starting at $v'_3$ in the subtree $T-v_4$ containing $v'_3$ is not 2.
Assume to the contrary that there exists one child $v'_3$ of $v_4$ such that the length of a longest path $P'$ starting at $v'_3$ in the subtree of $T-v_4$ containing $v'_3$ is 2, say $P'=v'_3v'_2v'_1$. Obviously, $v'_3\neq v_3$ and $v'_3v'_2\in F$. Combined with Lemma \[Le:1=2CloNeiIntEmp\] and $e_1\in F$, $E(v_4)\cap F=\emptyset$; then $e_3$ is not dominated by $F$, a contradiction.
\[Cl:1=2IsTv5Chi\] If there exists a child $v'_3$ of $v_4$ such that the subtree of $T-v_4$ containing $v'_3$ is $P_2$, then $v_5$ has no leaf child.
Similar to the analysis of Claims \[Cl:1=2IsTv3Chi\] and \[Cl:1=2IsTv4Chi\], we can show it by contradiction.
\[Cl:1=2IsTv6Chi\] If $diam(T)>6$ and there exist no children $v'_3$ of $v_4$ such that the subtree of $T-v_4$ containing $v'_3$ is $P_2$, then $v_4$ and $v_5$ are both support vertices.
Suppose to the contrary that $v_4$ or $v_5$ is not a support vertex. By assumption, no subtree of $T-v_4$ containing a child $v'_3$ of $v_4$ is isomorphic to $P_2$.
\[Fig:1=2IsTv\_6Chi\]
Let $\{v_3^1, v_3^2, \ldots, v_3^s\}$ be the set of non-leaf children of $v_4$ for some positive integer $s$. For any $1\leqslant i\leqslant s$, combining the assumption and Claim \[Cl:1=2IsTv4Chi\], the length of a longest path starting at $v_3^i$ in the subtree $T-v_4$ containing $v_3^i$ is 3. By the symmetry of $v_3$ and $v_3^i$ and Claim \[Cl:1=2IsTv3Chi\], the subtree of $T-v_4$ containing $v_3^i$ is exactly $P_4$, say $v_3^iv_2^iv_1^iv_0^i$. By the choice of $F$ and Lemma \[Le:1=2CloNeiIntEmp\], for any $1\leqslant i\leqslant s$, $v_2^iv_1^i\in F$ and $E(v_3^i)\cap F=\emptyset$. So $\{e_{4}\}\subseteq F$.
If $v_4$ is not a support vertex, let $e'_7$ be the unique edge in $(E_{T}(v_7)- e_6)\cap F$, which is nonempty by $\{e_{4}\}\subseteq F$ and Lemma \[Le:1=2CloNeiIntEmp\]. Now we construct a TED-set $F_t$ of $T$ from ${F}_{0}=F-e_{4}+e_5$ by first adding the common neighbor edge $e_6$ of $e_5$ and $e'_7$ in $F_0$; second, for any $1\leqslant i\leqslant s$, adding $v_2^iv_3^i$ into $F_0$; and adding a neighbor edge of each edge in $F_0- \{e_5, e'_7\}+\{v_1^1v_2^1, v_1^2v_2^2, \ldots, v_1^sv_2^s \}$ (see Fig. \[Fig:1=2IsTv\_6Chi\](a)). It is obvious that $F_t$ is a TED-set of $T$ and $|F_t|\leqslant 2|F|-1$, a contradiction.
If $v_5$ is not a support vertex, let $A$ be the set of vertices at distance 2 from $v_5$ in the subtree of $T-e_4$ containing $v_5$. For $v\in A$, $|F\cap E(v)|=1$, say $e_v$, by $e_4\in F$ and Lemma \[Le:1=2CloNeiIntEmp\]; denote by $e'_v$ the unique edge of $E(v)$ at distance 1 from $v_5$. Note that $e'_v\neq e_v$ because $v_5$ is at distance 2 from $e_v$. Now we can construct a TED-set $F'_t$ of $T$ from ${F}'_{0}=F-e_{4}+e_3$ by first adding $e_2$ and the set $\{e'_v|v\in A\}$ into $F'_0$, and second adding a neighbor edge of each edge in $F'_0-\{e_v| v\in A\}- \{e_1, e_3\}$ (see Fig. \[Fig:1=2IsTv\_6Chi\](b)). Note that $\{e_1, e_2, e_3\}\subseteq F'_t$. It is obvious that $F'_t$ is a TED-set of $T$ and $|F'_t|\leqslant 2|F|-1$, a contradiction. This proves Claim \[Cl:1=2IsTv6Chi\].
By Claim \[Cl:1=2IsTv3Chi\] and the assumption that each support vertex of $T$ has exactly one leaf neighbor and $v_2$ is not a support vertex, $d(v_1)=d(v_2)=2$, thus the subgraph induced by $\{v_0, v_1, v_2, v_3\}$ is $P_4$. Let $\overline{T}=T-\{v_{0},v_{1},v_{2}\}$.
\[Cl:1=2IsTCut1=2\] $\gamma'_{t}(\overline{T}) = 2\gamma'(\overline{T})$.
Combined with Lemma \[Le:1=2CloNeiIntEmp\] and $e_1\in F$, we have $E(v_3)\cap F= \emptyset$, thus the restriction of $F$ on $\overline{T}$ is an ED-set of $\overline{T}$, further $\gamma'(\overline{T})\leqslant\gamma'(T)-1$. Combined with the obvious inequality: $\gamma'_{t}(T)\leqslant\gamma'_{t}(\overline{T})+2$, we have $2\gamma'(\overline{T})\leqslant2(\gamma'(T)-1) = 2\gamma'(T)-2 = \gamma'_{t}(T)-2\leqslant\gamma'_{t}(\overline{T})\leqslant2\gamma'(\overline{T})$. Consequently we must have equality throughout this inequality chain. Particularly, we have $\gamma'_{t}(\overline{T})=2\gamma'(\overline{T})$.
By Claim \[Cl:1=2IsTCut1=2\] and the inductive hypothesis, $\overline{T}\in \mathcal{T}$ with a labelling. In what follows we show that $T$ is obtained from $\overline{T}$ by Operation $\mathcal{O}_1$ (the identifying vertex is $v_3$, playing the role of $v$). By Claim \[Cl:1=2IsTv3Chi\] and Observation \[obs:1\] (1), (2), (3), we have $l(v_3)=L$ and $l(v_4)=C$. In the case $l(v_5)=C$, if $diam(T)=6$, then all neighbors of $v_5$ other than $v_4$ are leaves; if $diam(T)>6$, then each child of $v_4$ is labelled $L$ and, by Claim \[Cl:1=2IsTv6Chi\], $v_4$ and $v_5$ each have one leaf neighbor. In the other case $l(v_5)=L$, there is one child $v'_3$ of $v_4$ labelled $C$. Combined with Observation \[obs:1\] and Claim \[Cl:1=2IsTv4Chi\], $v'_{3}$ has only leaf children. For the other $C-C$ edges $wu$ at distance 1 from $v_3$, say with $u$ adjacent to $v_3$ (i.e., $u$ is a child of $v_3$), by Claim \[Cl:1=2IsTv3Chi\], all vertices in $N(w)-u$ are leaves. Combining all the cases above, the $C-C$ edge incident with $v_4$ at distance 1 from $v_3$ satisfies the condition in Operation $\mathcal{O}_1$. Therefore we can apply Operation $\mathcal{O}_1$ to $\overline{T}$ to obtain the tree $T$; further, ${T}\in \mathcal{T}$.
As an immediate consequence of Lemmas \[Le:TIs1=2\] and \[Le:1=2IsT\], we have
\[Th:1=2NecSuf\] A non-star tree is a $(\gamma'_{t}=2\gamma')$-tree if and only if $T\in \mathcal{T}$.
$(\gamma'_{t}=\gamma')$-trees
-----------------------------
In this subsection we provide a constructive characterization of $(\gamma'_{t}=\gamma')$-trees $T$, i.e., trees satisfying $\gamma'_{t}(T)=\gamma'(T)$. We use an edge labelling to describe a procedure for constructing $T$ recursively, in contrast to the vertex labelling of the previous subsection. By Example \[Ex:StaDou\] and Corollary \[Co:diam4\], for the initial step, let $T$ be a tree with $diam(T)=4$, in which each edge is either a leaf edge or a support edge. We label the support edges of $T$ with $S$, the leaf edges adjacent to at least two non-leaf edges with $L_{2}$, and the other leaf edges with $L_{1}$.
Let $\mathcal{T}_{t}$ be the family of edge-labelled trees $T$ that contains the edge-labelled trees with diameter 4 and is closed under the five operations $\mathcal{O}_{1}$, $\mathcal{O}_{2}$, $\mathcal{O}_{3}$, $\mathcal{O}_{4}$, $\mathcal{O}_{5}$ listed below (each constructing a bigger tree from a smaller tree in $\mathcal{T}_{t}$). For convenience, we call an edge labelled $S$ (resp. $L_1$, $L_2$) in $T\in \mathcal{T}_t$ an $S$ (resp. $L_1$, $L_2$)-edge, and denote by $D(T)$ the set of $S$-edges. First, according to the labels of the edges incident with a vertex $v$ in an edge-labelled tree $T\in \mathcal{T}_t$, we partition the vertex set of $T$ into the following four subsets $A_{1}, A_{2}, B$ and $C$: $$\begin{aligned}
A_{1}:=&\{v\mid \text{ exactly one } S\text{-edge in } E(v) \};\\
A_{2}:=&\{v\mid \text{ at least two } S\text{-edges in } E(v)\};\\
B:=&\{v\mid\text{ all edges in } E(v) \text{ are } L_{2}\text{-edges}\};\\
C :=& V-A_{1}-A_{2}-B.\end{aligned}$$
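The partition can be computed mechanically from the edge labels; the short routine below (names ours) does so for the initial diameter-4 labelled tree.

```python
def partition(vertices, labels):
    """labels: dict mapping each edge (a frozenset of endpoints) to 'S', 'L1' or 'L2'."""
    A1, A2, B, C = set(), set(), set(), set()
    for v in vertices:
        Ev = [e for e in labels if v in e]              # E(v)
        s_count = sum(labels[e] == 'S' for e in Ev)
        if s_count == 1:
            A1.add(v)                                   # exactly one S-edge
        elif s_count >= 2:
            A2.add(v)                                   # at least two S-edges
        elif Ev and all(labels[e] == 'L2' for e in Ev):
            B.add(v)                                    # only L2-edges
        else:
            C.add(v)                                    # everything else
    return A1, A2, B, C

# Diameter-4 tree: center 0 with supports 1, 2 (S-edges), their leaves 3, 4
# (L1-edges), and a center leaf 5 whose edge meets two S-edges (L2-edge).
labels = {frozenset({0, 1}): 'S', frozenset({0, 2}): 'S',
          frozenset({1, 3}): 'L1', frozenset({2, 4}): 'L1',
          frozenset({0, 5}): 'L2'}
print(partition(range(6), labels))
```

On this tree the routine places the supports in $A_1$, the center in $A_2$, the center leaf in $B$, and the remaining leaves in $C$.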
\[Fig:1=1VerLab\]
Now, we list the five operations $\mathcal{O}_{1}$, $\mathcal{O}_{2}$, $\mathcal{O}_{3}$, $\mathcal{O}_{4}$, $\mathcal{O}_{5}$:
**Operation** **$\mathcal{O}_{1}$**: Let $T\in \mathcal{T}_{t}$, $v$ a vertex of $T$ belonging to $A_{1}\cup A_{2}$. Construct a bigger tree $T'$ in $\mathcal{T}_{t}$ from $T$ by adding a new vertex $u$ adjacent to $v$. If $v\in A_{1}$, then label $vu$ as $L_1$ (by definition, $u\in C$ and $A_1, A_2, B$ are unchanged); if $v\in A_{2}$, then label $vu$ as $L_2$ (note that $u\in B$ and $A_1, A_2, C$ are unchanged), see Fig. \[fig:1=1a\].\
**Operation** **$\mathcal{O}_{2}$**: Let $T \in \mathcal{T}_{t}$, $v$ a vertex of $T$ belonging to $A_{2}$. Construct a bigger tree $T'$ in $\mathcal{T}_{t}$ from $T$ by adding two new adjacent vertices $u_{1}, u_{2}$, connecting $v$ and $u_1$ and labelling $vu_{1}$ as $S$ and $u_{1}u_{2}$ as $L_{1}$ (obviously, $u_1\in A_1$ and $u_2\in C$), see Fig. \[fig:1=1b\].\
**Operation** **$\mathcal{O}_{3}$**: Let $T \in \mathcal{T}_{t}$, $v\notin A_1$ a vertex of $T$ satisfying, in the case $v\in C$, that each $L_1$-edge in $E(v)$ is either adjacent to one leaf edge or contained in a $P_4=vwxy$, whose edges are labelled as $L_1, L_1, L_2$ consecutively and all edges in $E(x)$ are $L_2$-edges except $wx$. Construct a bigger tree $T'$ in $\mathcal{T}_{t}$ from $T$ by adding a new path $u_{1}u_{2}u_{3}u_{4}u_{5}$ to join $v$ and $u_2$, and labelling $u_{2}u_{3}$, $u_{3}u_{4}$ as $S$, $vu_{2}$, $u_{1}u_{2}$, $u_{4}u_{5}$ as $L_{1}$, see Fig. \[fig:1=1c\]. (From the definition, $u_2, u_4 \in A_1$, $u_3\in A_2$, $u_1, u_5\in C$ and if $v\in B$, then $v$ is moved from $B$ to $C$.)\
**Operation** **$\mathcal{O}_{4}$**: Let $T \in \mathcal{T}_{t}$, $v\in B$ a vertex of $T$. Construct a bigger tree $T'$ in $\mathcal{T}_{t}$ from $T$ by adding a new path $u_{1}u_{2}u_{3}u_{4}$ to join $v$ and $u_1$, and labelling $vu_{1}, u_{3}u_{4}$ as $L_{1}$, $u_{1}u_{2}, u_{2}u_{3}$ as $S$, see Fig. \[fig:1=1d\]. (Similarly, $u_1, u_3\in A_1$, $u_2\in A_2$, $u_4\in C$, and $v$ is moved from $B$ to $C$.)\
**Operation** **$\mathcal{O}_{5}$**: Let $T \in \mathcal{T}_{t}$, $v$ a vertex of $T$. Construct a bigger tree $T'$ in $\mathcal{T}_{t}$ from $T$ by adding a new path $u_{1}u_{2}u_{3}u_{4}u_{5}$ to join $v$ and $u_{3}$, and labelling $vu_3$ as $L_{2}$, $u_{1}u_{2}, u_{4}u_{5}$ as $L_{1}$, $u_{2}u_{3}, u_{3}u_{4}$ as $S$, see Fig. \[fig:1=1e\]. (From the definition, $u_2, u_4\in A_1$, $u_1, u_5\in C$, $u_3\in A_2$ and if $v\in B$, then $v$ is moved from $B$ to $C$.)\
\[fig:1=1\]
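For instance (our check, with the brute-force computation inlined), applying Operation $\mathcal{O}_1$ at a vertex of $A_2$ in the initial labelled $P_5$ adds an $L_2$-edge and leaves $\gamma'_t = \gamma'$ intact.

```python
from itertools import combinations

def gamma_prime(edges, total=False):
    # Brute-force (total) edge domination number from the definitions.
    adj = lambda e, f: e != f and bool(set(e) & set(f))
    for k in range(1, len(edges) + 1):
        for D in combinations(edges, k):
            if all(any(adj(e, f) for f in D) or (not total and e in D)
                   for e in edges):
                return k

# Initial tree: P_5 = 0-1-2-3-4 of diameter 4.  The support edges 12, 23
# are labelled S, the leaf edges 01, 34 are labelled L1, and the center 2
# lies in A_2 (two incident S-edges).
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
assert gamma_prime(edges) == gamma_prime(edges, total=True) == 2

# Operation O_1 at v = 2 in A_2: add a new leaf 5, labelling edge 25 as L_2.
edges.append((2, 5))
assert gamma_prime(edges) == gamma_prime(edges, total=True) == 2
```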
From the five operations above, we can get the simple observations as follows.
\[ob2\]
Let $T\in\mathcal{T}_{t}$.
1. \[Ob:EndpOfL1\]One endpoint of an $L_1$-edge is incident with exactly one $S$-edge; the other endpoint is incident with either no $S$-edges or at least two $S$-edges.
2. \[Ob:L2NbSedge\] An $L_2$-edge is adjacent to at least two $S$-edges.
3. \[Ob:LEisL1orL2\] A leaf edge is labelled $L_1$ or $L_2$. Furthermore, a leaf edge adjacent to exactly one non-leaf edge $e$ is labelled $L_1$ and $e$ is labelled as $S$.
4. \[Ob:SedgeisTED\] Each edge in $T$ is adjacent to at least one $S$-edge, and each component of the induced subgraph $T[D(T)]$ is a nontrivial star. Further, $D(T)$ is a total edge dominating set of $T$.
\[Le:TIs1=1\] Let $T\in\mathcal{T}_{t}$. Then $D(T)$ is a $\gamma'_{t}(T)$-set and $T$ is a $(\gamma'_{t}=\gamma')$-tree.
Let $T\in \mathcal{T}_t$. We first prove that $D(T)$ (simply, $D$) is a $\gamma'_{t}(T)$-set. By Observation \[ob2\] (\[Ob:SedgeisTED\]), $D$ is a TED-set of $T$. It therefore suffices to find a set $L$ of $L_1$-edges of size $|D|$ such that each edge in $L$ has exactly one neighboring $S$-edge. To prove that every tree $T$ in $\mathcal{T}_{t}$ admits such an edge set, we proceed by induction on the size $m$ of the edge set of $T$. For the initial step, when $T$ has diameter 4, the leaf edges adjacent to exactly one non-leaf edge form the required set $L$. For the inductive step, we assume that each tree $\overline{T}$ of size less than $m$ in $\mathcal{T}_t$ has a set $\overline {L}$ of $L_1$-edges such that each edge in $\overline{L}$ has exactly one neighboring $S$-edge. We now distinguish five cases:
**Case 1.** $T$ is obtained by applying Operation $\mathcal{O}_{1}$ from $\overline{T}$ and a vertex $u$.
In this case, $D(T)=D(\overline{T})$, so $L=\overline{L}$ is the desired set for $T$.
**Case 2.** $T$ is obtained by applying Operation $\mathcal{O}_{2}$ from $\overline{T}$ and an edge $u_1u_2$ in which a vertex $v$ in $\overline{T}$ is adjacent to $u_1$.
In this case, $D(T)$ contains one more $S$-edge than $D(\overline{T})$. By Observation \[ob2\] (\[Ob:EndpOfL1\]), there are no $L_1$-edges in $\overline{L}$ incident with $v$. So $\overline {L}\cup \{u_1u_2 \}$ is a desired set for $T$.
**Case 3.** $T$ is obtained by applying Operation $\mathcal{O}_{3}$ from $\overline{T}$ and a path $u_1u_2u_3u_4u_5$.
In this case, $D(T)$ contains two more edges than $D(\overline{T})$. If $v\in A_2\cup B$, then by the definitions of $A_2$ and $B$ and Observation \[ob2\] (\[Ob:EndpOfL1\]), there are no $L_1$-edges incident with $v$ in $\overline{T}$, so $\overline {L}\cup \{u_1u_2, u_4u_5\}$ is a desired set for $T$.
When $v\in C$, if there is no $L_1$-edge incident with $v$ in $\overline{L}$, then $\overline {L}\cup \{u_1u_2, u_4u_5\}$ is a desired set for $T$. Otherwise, let $e'=vw$ be the $L_1$-edge in $\overline {L}$. From the definition of Operation $\mathcal{O}_3$, either there is a leaf edge $e''$ incident with $w$, or there exists a $P_4=vwxy$ in $\overline{T}$ whose edges are labelled $L_1, L_1, L_2$ consecutively and all edges in $E(x)$ are $L_2$-edges except $wx$; then $(\overline {L}-e')\cup \{u_1u_2, u_4u_5, e''\}$ or $(\overline {L}-e')\cup \{u_1u_2, u_4u_5, wx\}$, respectively, is a desired set for $T$.
Therefore, we can always find a desired set for $T$ in this case.
**Case 4.** $T$ is obtained by applying Operation $\mathcal{O}_{4}$ from $\overline{T}$ and a path $u_1u_2u_3u_4$.
In this case, $D(T)$ contains two more edges than $D(\overline{T})$. If there is no $L_1$-edge in $\overline {L}$ adjacent to some edge in $E(v)$, then $\overline {L}\cup \{vu_1, u_3u_4\}$ is a desired set for $T$. Otherwise, let $wx$ be the $L_1$-edge in $\overline {L}$ adjacent to some edge in $E(v)$. By Observation \[ob2\] (\[Ob:EndpOfL1\]), (\[Ob:L2NbSedge\]), without loss of generality assume $x\in A_1$; then there is an $L_1$-edge $xy$ in $E(x)$ such that $y$ is either a leaf vertex or incident only with $L_2$-edges except $xy$. So $(\overline {L}-wx)\cup \{vu_1, u_3u_4, xy\}$ is a desired set for $T$.
Hence, we can always find a desired set for $T$ in this case.
**Case 5.** $T$ is obtained by applying Operation $\mathcal{O}_{5}$ from $\overline{T}$ and a path $u_1u_2u_3u_4u_5$.
In this case, $D(T)$ contains two more edges than $D(\overline{T})$, and $\overline {L}\cup \{u_1u_2, u_4u_5\}$ is a desired edge set for $T$.
Combining the five cases above, for each $T \in \mathcal{T}_{t}$ we can always find an edge set $L$ of $L_1$-edges such that each edge in $L$ has exactly one neighboring $S$-edge. Since at least $|L|$ edges are needed to dominate the edges in $L$, $\gamma'(T)\geqslant |L|=|D|$. Hence $D$ is a $\gamma'_{t}(T)$-set and $T$ is a $(\gamma'_{t}=\gamma')$-tree.
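For small trees, the two invariants in this lemma can be checked directly by exhaustive search. The following Python sketch is our own illustration (the function names and the example spider tree are not from the paper); it enumerates edge subsets to compute $\gamma'(T)$ and $\gamma'_{t}(T)$:

```python
from itertools import combinations

def adjacent(e, f):
    """Two distinct edges are adjacent when they share an endpoint."""
    return e != f and bool(set(e) & set(f))

def is_ed(edges, F):
    """Edge dominating set: every edge outside F is adjacent to an edge of F."""
    return all(e in F or any(adjacent(e, f) for f in F) for e in edges)

def is_ted(edges, F):
    """Total edge dominating set: every edge of T (including those in F)
    is adjacent to an edge of F."""
    return all(any(adjacent(e, f) for f in F) for e in edges)

def gamma(edges, check):
    """Smallest k for which some k-subset F of edges satisfies check."""
    for k in range(1, len(edges) + 1):
        for F in combinations(edges, k):
            if check(edges, set(F)):
                return k

# A diameter-4 "spider": two legs of length 2 hanging from the centre c.
spider = [("c", "a1"), ("a1", "l1"), ("c", "a2"), ("a2", "l2")]
print(gamma(spider, is_ed), gamma(spider, is_ted))  # prints: 2 2
```

For this spider the two centre edges form a $\gamma'_t$-set that is simultaneously a minimum edge dominating set, matching the construction of $D(T)$ from the $S$-edges; by contrast, a path $P_4$ has $\gamma'=1<\gamma'_t=2$.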
\[Le:1=1NonTriSta\] Let $T$ be a $(\gamma'_{t}=\gamma')$-tree, $F_{t}$ a $\gamma'_{t}(T)$-set. Then every component of the induced subgraph $T[F_{t}]$ is a nontrivial star.
By contradiction. If there is a path $P=v_1v_2v_3v_4$ in $T[F_{t}]$, then the edges dominated by $v_2v_3$ are also dominated by $v_1v_2$ or $v_3v_4$. So $F_{t}-v_2v_3$ is an edge dominating set of cardinality $|F_{t}|-1$, a contradiction. Hence every component of $T[F_{t}]$ is a nontrivial star.
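The star condition of this lemma is also easy to check mechanically. A minimal Python sketch (our own helper names, not from the paper): a component of $T[F_t]$ is a nontrivial star exactly when it has at least two edges and all of its edges pass through one common centre vertex.

```python
def edge_components(F):
    """Connected components of the subgraph induced by the edge set F."""
    comps, pool = [], set(F)
    while pool:
        comp = {pool.pop()}
        grew = True
        while grew:  # absorb every pooled edge touching the component
            grew = False
            for e in list(pool):
                if any(set(e) & set(f) for f in comp):
                    comp.add(e)
                    pool.remove(e)
                    grew = True
        comps.append(comp)
    return comps

def is_nontrivial_star(comp):
    """At least two edges, all sharing one common centre vertex."""
    centre = set.intersection(*(set(e) for e in comp))
    return len(comp) >= 2 and len(centre) >= 1

# F_t for the diameter-4 spider: the two centre edges form one star K_{1,2}.
F = {("c", "a1"), ("c", "a2")}
print([is_nontrivial_star(c) for c in edge_components(F)])  # prints: [True]
```

A path of three edges fails the test, since its edges have no common vertex, which is exactly the configuration excluded in the proof above.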
\[Le:1=1IsT\] Let $T$ be a $(\gamma'_{t}=\gamma')$-tree. Then $T\in\mathcal{T}_{t}$.
We proceed by induction on the edge size of a nontrivial tree $T$ satisfying $\gamma'_{t}(T)=\gamma'(T)$. For the initial step, by Corollary \[Co:diam4\], a tree $T$ with diameter 4 satisfies $\gamma'_{t}(T)=\gamma'(T)$ and is in $\mathcal{T}_t$. For the inductive step, we assume that for every tree $\overline{T}$ of edge size less than $m$ with $\gamma'_{t}(\overline{T})=\gamma'(\overline{T})$, there exists an edge labelling such that $\overline{T}\in\mathcal {T}_t$, and we consider a tree $T$ of edge size $m$ with $\gamma'_{t}(T)=\gamma'(T)$ and $diam(T)\geqslant 5$.
If a support vertex $v$ of $T$ has at least two leaf neighbors, say $u$ and $w$, then $v$ is still a support vertex in $\overline{T}=T-w$. By Theorem \[Th:NoLeafEdge\], any minimum edge dominating set of $\overline{T}$ containing no leaf edges is still an edge dominating set of $T$. So $\gamma'(T)=\gamma'_{t}(T)=\gamma'(\overline{T})\leqslant \gamma'_{t}(\overline{T})\leqslant \gamma'_{t}(T) $ and $\gamma'_{t}(\overline{T})=\gamma'(\overline{T})$. Hence, by the inductive hypothesis, $\overline{T}\in\mathcal{T}_{t}$. By Observation \[ob2\] (\[Ob:EndpOfL1\]), (\[Ob:L2NbSedge\]), (\[Ob:LEisL1orL2\]), $uv$ is an $L_1$- or $L_2$-edge and $v\in A_2\cup A_1$ in $\overline{T}$. We can obtain the tree $T$ by applying Operation $\mathcal{O}_{1}$ from $\overline{T}$ and a new vertex $w$, so $T\in\mathcal{T}_{t}$. We may therefore assume that each support vertex of the $(\gamma'_t=\gamma')$-tree $T$ of edge size $m$ has exactly one leaf neighbor (Assumption 1).
If a support vertex $v$ of $T$, with leaf neighbor $w$, has a support neighbor $u$ of degree 2, then let $\overline{T}=T -w$. Similarly to the discussion above, $\gamma'_{t}(\overline{T})=\gamma'(\overline{T})$, and by the inductive hypothesis $\overline{T}\in \mathcal{T}_t$. By Observation \[ob2\] (\[Ob:LEisL1orL2\]), $uv$ is an $S$-edge in $\overline{T}$; combined with Observation \[ob2\] (\[Ob:SedgeisTED\]), $v\in A_2$. We can obtain the tree $T$ by applying Operation $\mathcal{O}_{1}$ from $\overline{T}$ and a new vertex $w$, so $T\in\mathcal{T}_{t}$. We may therefore assume that no support vertex has a support neighbor of degree 2 (Assumption 2).
If a vertex $v$ has at least three support neighbors of degree 2 in $T$, say $u_1, u_2, \ldots, u_l$ with $l\geqslant 3$, set $\overline{T}$ to be the tree obtained from $T$ by deleting $u_3, u_4, \ldots, u_l$ and their respective children. Similarly to the discussion above, $\overline{T}\in \mathcal{T}_t$, and $T$ is obtained from $\overline{T}$ by applying Operation $\mathcal{O}_2$ repeatedly, so $T\in\mathcal{T}_{t}$. Hence we may assume that every vertex has at most two support neighbors of degree 2 (Assumption 3).
Let $F_{t}$ be a $\gamma'_{t}(T)$-set containing no leaf edges, $P=v_{0}v_{1}\ldots v_{t}$ a longest path of $T$, and write $e_{i}=v_{i}v_{i+1}$. We root $T$ at the vertex $v_t$. Obviously, $v_{1}$ is a support vertex of degree 2 and each child of $v_{2}$ is a support vertex of degree 2.
Since $e_0$ is a leaf edge, $F_t$ must contain $e_1$. Combined with Lemma \[Le:1=1NonTriSta\] and the choice of $F_t$, $F_t$ cannot contain both $e_2$ and $e_3$; that is, either $e_2\in F_t$ and $e_3\notin F_t$, or $e_2\notin F_t$ and $e_3\in F_t$, or $e_2\notin F_t$ and $e_3\notin F_t$.
Combined with Assumptions 2 and 3, $d(v_2)=2$ or $3$. We next distinguish two cases according to the degree of $v_2$.
**Case 1.** $d(v_2)=3$.
By Assumptions 1 and 2, $v_2$ has another support child $v'_1$ of degree 2; let $v'_0$ be the child of $v'_1$.
**Subcase 1.1.** $e_2\in F_t$.
In this subcase, let $\overline{T}=T-\{v_{0},v_{1}\}$. Combined with Lemma \[Le:1=1NonTriSta\] and the choice of $F_t$, the restriction of $F_t$ to $\overline{T}$ is a TED-set of $\overline{T}$, so $\gamma'_{t}(\overline{T})\leqslant \gamma'_{t}(T)-1$. Combined with the obvious inequality $\gamma'(T)\leqslant \gamma'(\overline{T})+1$, we have $\gamma'(\overline{T})+1 \leqslant \gamma'_{t}(\overline{T})+1 \leqslant \gamma'_{t}(T)=\gamma'(T)\leqslant \gamma'(\overline{T})+1$, and so $\gamma'_{t}(\overline{T})=\gamma'(\overline{T})$. By the inductive hypothesis, there is an edge labelling of $\overline{T}$ such that $\overline{T}\in \mathcal{T}_{t}$. By Observation \[ob2\] (\[Ob:LEisL1orL2\]), $v'_{1}v_2$ is an $S$-edge in $\overline{T}$; combined with Observation \[ob2\] (\[Ob:SedgeisTED\]), there are at least two $S$-edges incident with $v_2$ in $\overline{T}$, so $v_2\in A_2$. We can obtain the tree $T$ by applying Operation $\mathcal{O}_{2}$ from $\overline{T}$ and a new edge $v_{0}v_{1}$, so ${T}\in \mathcal{T}_{t}$.
**Subcase 1.2.** $e_2\notin F_t$.
Let $\overline{T}=T-\{v_{0},v_{1},v'_{1},v'_{0}, v_2\}$. Since $v'_{1}v'_{0}$ and $e_0$ are leaf edges, combined with Lemma \[Le:1=1NonTriSta\] and the choice of $F_t$, the restriction of $F_t$ to $\overline{T}$ is a TED-set of $\overline{T}$, so $\gamma'_{t}(\overline{T})\leqslant \gamma'_{t}(T)-2$.

Combined with the obvious inequality $\gamma'(T) \leqslant \gamma'(\overline{T})+2$, we have $\gamma'(\overline{T}) +2\leqslant \gamma'_{t}(\overline{T})+2\leqslant \gamma'_{t}(T)=\gamma'(T)\leqslant \gamma'(\overline{T})+2$, and so $\gamma'_{t}(\overline{T})= \gamma'(\overline{T})$. By the inductive hypothesis, there is an edge labelling of $\overline{T}$ such that $\overline{T}\in \mathcal{T}_{t}$. We can obtain the tree $T$ by applying Operation $\mathcal{O}_{5}$ from $\overline{T}$ and the path $v_0v_1v_2v'_1v'_{0}$, so ${T}\in \mathcal{T}_{t}$.
**Case 2.** $d(v_{2})=2$.
Since $d(v_2)=2$, we have $\{e_1,e_2 \}\subseteq F_t$ by the choice of $F_t$. So $(E(v_3)- e_2)\cap F_t=\emptyset$ by Lemma \[Le:1=1NonTriSta\].
\[1=1v3child\] Let $v'_2$ be a child of $v_3$ other than $v_2$. Then $v'_2$ is a leaf vertex.
By contradiction, suppose $v'_2$ is not a leaf. By symmetry and Assumption 3, $v'_2$ has at most one support child. Then $v'_2v_3$ belongs to $F_t$ by the choice of $F_t$, contradicting Lemma \[Le:1=1NonTriSta\].
\[Cl:1=1v’4child\] Let $v'_4$ be any non-leaf child of $v_5$. Then each subtree $T_{4'}$ of $T- v'_4$ not containing $v_5$ is isomorphic to one of the graphs in the following figure.
![The subgraphs following $v'_4$.[]{data-label="fig:1=1v4Chi"}](v4child.pdf)
Let $v'_3$ be the child of $v'_4$ in $T_{4'}$. If the length of a longest path starting at $v'_3$ in $T_{4'}$ is 3, then by symmetry, Claim \[1=1v3child\] and Assumptions 1, 2, 3, $T_{4'}$ is isomorphic to (a) or (b). If the length of a longest path starting at $v'_3$ in $T_{4'}$ is 2, then by Assumptions 1, 2, 3, $T_{4'}$ is isomorphic to (e) or (f). If the length of a longest path starting at $v'_3$ in $T_{4'}$ is 1, then by Assumption 1, $T_{4'}$ is isomorphic to (c). If the length of a longest path starting at $v'_3$ in $T_{4'}$ is 0, then $T_{4'}$ is isomorphic to (d). Therefore, $T_{4'}$ is isomorphic to one of the graphs in Fig. \[fig:1=1v4Chi\].
If $v_4$ has a non-leaf child, say $v''_3$, such that the subtree $T_4$ of $T-v_4$ containing $v''_3$ is isomorphic to (e) in Fig. \[fig:1=1v4Chi\], then $T_4$ is a $P_5$. Let $\overline{T}=T-T_4$. If $v_4v''_3 \in F_t$, then $F_t-v_4v''_3+e_3-e_2$ is still an ED-set of $T$ of size $|F_t|-1$, a contradiction. Hence $v_4v''_3\notin F_t$ and the restriction of $F_t$ to $\overline{T}$ is a TED-set of $\overline{T}$, so $\gamma'_t(\overline{T})\leqslant \gamma'_t(T)-2$. Combined with the obvious inequality $\gamma'(T)\leqslant \gamma'(\overline{T})+2$, we have $\gamma'(\overline{T})+2\leqslant \gamma'_t(\overline{T})+2 \leqslant \gamma'_t(T)=\gamma'(T)\leqslant \gamma'(\overline{T})+2$, so $\gamma'(\overline{T})=\gamma'_t(\overline{T})$. By the inductive hypothesis, $\overline{T} \in \mathcal{T}_t$ with an edge labelling. Thus $T$ is obtained from $\overline{T}$ by applying Operation $\mathcal {O}_5$, so $T \in \mathcal{T}_t$. In what follows we assume that no subtree of $T-v_4$ not containing $v_5$ is isomorphic to (e) in Fig. \[fig:1=1v4Chi\] (Assumption 4).
\[Cl:1=1v5\] If $|E(v_5)\cap F_t|\geqslant 1$ and there is a child, say $v''_4$, of $v_5$ such that there is a subtree $T_{4''}$ of $T-v''_4$ not containing $v_5$ isomorphic to (e) in Fig. \[fig:1=1v4Chi\], then $T$ is obtained from $\overline{T}=T- T_{4''}$ by applying Operation $\mathcal {O}_5$.
Let $v'''_3$ be the child of $v''_4$ in $T_{4''}$; obviously $T_{4''}$ is a $P_5$. First suppose $|E(v_5)\cap F_t|\geqslant 2$. If $v''_4v'''_3\in F_t$, then $F'_t=F_t-v''_4v'''_3+v''_4v_5$ is still a TED-set of $T$. The restriction of $F'_t$ to $\overline{T}$ is a TED-set of $\overline{T}$, so $\gamma'_t(\overline{T})\leqslant \gamma'_t(T)-2$. Similarly to the discussion above, $\overline{T} \in \mathcal{T}_t$, and thus $T$ is obtained from $\overline{T}$ by applying Operation $\mathcal {O}_5$. If $v''_4v'''_3\notin F_t$, then the restriction of $F_t$ to $\overline{T}$ is a TED-set of $\overline{T}$, and similarly $T$ is obtained from $\overline{T}$ by applying Operation $\mathcal {O}_5$.
Now suppose $|E(v_5)\cap F_t|=1$, say $E(v_5)\cap F_t=\{e'_5\}$. Let $x$ be any non-leaf neighbor of $v_5$ in $T$. We claim that $|E(x)\cap F_t|\neq 1$. Indeed, if this is not the case, say $E(x)\cap F_t=\{e_x\}$, then $F_t-e'_5-e_x+xv_5$ is an ED-set of $T$, a contradiction. So $v''_4v'''_3\notin F_t$. Combined with Lemma \[Le:1=1NonTriSta\] and the discussion above, $T$ is obtained from $\overline{T}$ by applying Operation $\mathcal {O}_5$.
In what follows we assume that, whenever $|E(v_5)\cap F_t|\geqslant 1$ and $v'_4$ is a child of $v_5$, no subtree of $T-v'_4$ not containing $v_5$ is isomorphic to (e) in Fig. \[fig:1=1v4Chi\] (Assumption 5).
By Claim \[Cl:1=1v’4child\], let $\{v_3^1, v_3^2, \ldots, v_3^w\}$ be the set of children of $v_4$ such that the subtree of $T-v_4$ containing $v_3^i$ is isomorphic to (a) for $1\leqslant i\leqslant w$, and $\{u_{3}^{1},u_{3}^{2},\ldots,u_{3}^{z}\}$ the set of children of $v_4$ such that the subtree of $T-v_4$ containing $u_{3}^{j}$ is isomorphic to (b) in Fig. \[fig:1=1v4Chi\] for $1\leqslant j\leqslant z$. Combined with the structure of (a) and (b) and Lemma \[Le:1=1NonTriSta\], we have $|(E(v_3^i)-v_3^iv_4)\cap F_t|=1$ and $|(E(u_3^j)-u_3^jv_4)\cap F_t|=1$; write $e_v^i=(E(v_3^i)-v_3^iv_4)\cap F_t$ and $e_u^j=(E(u_3^j)-u_3^jv_4)\cap F_t$ for each $i$ and $j$. Then,
\[Cl:1=1d(v4)=2\] $w\leqslant 1$. Further, if $w= 1$, then $d(v_4)=2$.
By contradiction. If $w\geqslant 2$, then $F_t-e_v^1+e_3-e_v^2$ is an ED-set of $T$ of size $|F_t|-1$, a contradiction. So $w\leqslant 1$.
Assume that $d(v_4)\geqslant 3$ when $w=1$. If $z\neq 0$, then $F_t-e_u^1+v_4u_3^1-e_v^1$ is an ED-set of $T$ of size $|F_t|-1$, a contradiction. If there is a subtree of $T-v_4$ not containing $v_5$ isomorphic to one of (c), (d) and (f) in Fig. \[fig:1=1v4Chi\], then $E(v_4)\cap F_t\neq \emptyset$ by the choice of $F_t$ and Lemma \[Le:1=1NonTriSta\], thus $F_t-e_v^1$ is an ED-set of $T$ of size $|F_t|-1$, a contradiction. Therefore, if $w=1$, then $d(v_4)=2$.
By Claim \[Cl:1=1d(v4)=2\], we have the following two claims.
\[1=1v4hasnoChid(g)\] There is no subtree of $T-v_4 $ not containing $v_5$ isomorphic to (f) in Fig. \[fig:1=1v4Chi\].
By Claim \[Cl:1=1d(v4)=2\], we need only consider the case $w=0$. Suppose to the contrary that there is a subtree of $T-v_4$ containing a child $v'_3$ of $v_4$ isomorphic to (f); then $|E(v'_3)\cap F_t|\geqslant 2$ and $v_4v'_3\in F_t$. Thus $F_t-v_4v'_3-e_2+e_3$ is an ED-set of $T$ of size $|F_t|-1$ by Lemma \[Le:1=1NonTriSta\], a contradiction.
Combined with Assumption 4 and Claim \[1=1v4hasnoChid(g)\], there is no subtree of $T-v_4$ not containing $v_5$ isomorphic to (e) or (f).
\[Cl:1=1E(v4)0or2\] If $w=1$, then $E(v_4)\cap F_t=\emptyset$. Otherwise, $|E(v_4)\cap F_t|\neq 1$.
By contradiction. Assume $E(v_4)\cap F_t\neq\emptyset$ when $w=1$, then $F_t-e_2$ is an ED-set of $T$, a contradiction. Assume $|E(v_4)\cap F_t|=1$ when $w=0$, then $e_4\in F_t$ by Claims \[Cl:1=1v’4child\] and \[1=1v4hasnoChid(g)\]. Thus $F_t -e_4+e_3-e_2$ is an ED-set of $T$, a contradiction. Therefore, if $w=1$, then $E(v_4)\cap F_t=\emptyset$. Otherwise, $|E(v_4)\cap F_t|\neq 1$.
Combined with Claims \[Cl:1=1v’4child\], \[1=1v4hasnoChid(g)\] and \[Cl:1=1E(v4)0or2\], if $|E(v_4)\cap F_t|\geqslant 2$, then there is a subtree of $T-v_4$ not containing $v_5$ isomorphic to graph (c) in Fig. \[fig:1=1v4Chi\], i.e., a $P_2=uv$, where $u$ is a child of $v_4$. By Claim \[Cl:1=1d(v4)=2\], the subtree $T^a$ of $T-v_4$ containing $v_3$ is isomorphic to (b). Let $\overline{T}=T-T^a$. By $|E(v_4)\cap F_t|\geqslant 2$, the restriction of $F_t$ to $\overline{T}$ is a TED-set of $\overline{T}$, so $\gamma'_t(\overline{T})\leqslant \gamma'_t(T)-2$. Combined with the obvious inequality $\gamma'(T)\leqslant \gamma'(\overline{T})+2$, we have $\gamma'(\overline{T})+2\leqslant \gamma'_t(\overline{T})+2 \leqslant \gamma'_t(T)=\gamma'(T)\leqslant \gamma'(\overline{T})+2$, and so $\gamma'(\overline{T})=\gamma'_t(\overline{T})$. By the inductive hypothesis, $\overline{T} \in \mathcal{T}_t$ with an edge labelling. In $\overline{T}$, by Observation \[ob2\] (\[Ob:LEisL1orL2\]), the leaf edge $uv$ is an $L_1$-edge and $v_4u$ is an $S$-edge. Combined with Observation \[ob2\] (\[Ob:SedgeisTED\]), $v_4\in A_2$. Therefore $T$ is obtained from $\overline{T}$ by applying Operation $\mathcal{O}_3$, so $T \in \mathcal{T}_t$.
If $E(v_4)\cap F_t=\emptyset$, then there is no subtree of $T-v_4$ not containing $v_5$ isomorphic to (c) or (d). Combined with Claims \[Cl:1=1v’4child\], \[Cl:1=1d(v4)=2\], Assumption 4 and the above analysis, we may assume that each subtree of $T-v_4$ not containing $v_5$ is isomorphic to (a) or (b) (Assumption 6).
By Assumption 6, we distinguish two subcases according to whether the subtree $T_4$ of $T-v_4$ containing $v_3$ is isomorphic to (a) or (b).
**Subcase 2.1.** $T_4$ is isomorphic to (a).
By Claims \[Cl:1=1d(v4)=2\] and \[Cl:1=1E(v4)0or2\], $d(v_3)=d(v_4)=2$ and $E(v_4)\cap F_t=\emptyset$. Obviously, $e_4\notin F_t$ and $|E(v_5)\cap F_t|\geqslant 1$.
\[Cl:1=1E(v5)isBig2\] $|E(v_5)\cap F_t|\geqslant 2$.
If $|E(v_5)\cap F_t|=1$, say $E(v_5)\cap F_t=\{e'_5\}$, then $F_t-e'_5 + e_4- e_2$ is an ED-set of $T$ of size $|F_t|-1$, a contradiction.
Combined with $|E(v_5)\cap F_t|\geqslant 2$ and $E(v_4)\cap F_t=\emptyset$, there is a child of $v_5$ other than $v_4$, say $v'_4$, such that $v_5v'_4\in F_t$. Obviously, $v'_4$ is not a leaf. Then
\[Cl:1=1v5hasSupChid\] $v'_4$ is a support vertex of degree 2.
We first show that $v'_4$ is a support vertex. Assume to the contrary that $v'_4$ has no leaf children. Combined with Claim \[Cl:1=1v’4child\], Assumption 5 and Lemma \[Le:1=1NonTriSta\], every subtree of $T-v'_4$ not containing $v_5$ is isomorphic to (a) or (b) in Fig. \[fig:1=1v4Chi\]. Then $F_t-v_5v'_4$ is still an ED-set of $T$ of size $|F_t|-1$, a contradiction. Therefore, $v'_4$ is a support vertex.
If $d(v'_4)\geqslant 3$, let $T'$ be a subtree of $T-v'_4$ containing a non-leaf child of $v'_4$. Combined with Assumption 5 and Claim \[Cl:1=1v’4child\], $T'$ is not isomorphic to (a) or (e) in Fig. \[fig:1=1v4Chi\]. By Lemma \[Le:1=1NonTriSta\], $T'$ is not isomorphic to (c) or (f). Hence $T'$ is isomorphic to (b); let $v'_3$ be the non-leaf child of $v'_4$ and $v'_2$ the non-leaf child of $v'_3$. Then $F_t-v_5v'_4-v'_2v'_3+v'_3v'_4$ is an ED-set of $T$ of size $|F_t|-1$, a contradiction. So $d(v'_4)=2$.
Since $d(v_2)=d(v_3)=d(v_4)=2$, the subgraph induced by $\{v_0, v_1, v_2, v_3 \}$ is a $P_4$. Let $\overline{T}= T-\{v_0,v_1,v_2,v_3\}$. By $E(v_4)\cap F_t=\emptyset$, the restriction of $F_t$ to $\overline{T}$ is a TED-set of $\overline{T}$, so $\gamma'_t(\overline{T})\leqslant \gamma'_t(T)-2$. Combined with the obvious inequality $\gamma'(T)\leqslant \gamma'(\overline{T})+2$, we have $\gamma'(\overline{T})+2\leqslant \gamma'_t(\overline{T})+2 \leqslant \gamma'_t(T)=\gamma'(T)\leqslant \gamma'(\overline{T})+2$, and so $\gamma'(\overline{T})=\gamma'_t(\overline{T})$. By the inductive hypothesis, $\overline{T} \in \mathcal{T}_t$ with an edge labelling. By Observation \[ob2\] (\[Ob:LEisL1orL2\]), $e_4$ is an $L_2$- or $L_1$-edge. Combined with Claim \[Cl:1=1v5hasSupChid\] and Observation \[ob2\] (\[Ob:EndpOfL1\]), (\[Ob:SedgeisTED\]), $e_4$ is an $L_2$-edge. Since $d_{\overline{T}}(v_4)=1$, $v_4\in B$. Therefore $T$ is obtained from $\overline{T}$ by applying Operation $\mathcal{O}_4$, so $T\in \mathcal{T}_t$.
**Subcase 2.2.** $T_4$ is isomorphic to (b).
In this subcase, let $v'_4$ be any non-leaf child of $v_5$. By symmetry and Assumption 6, there is no subtree of $T-v'_4$ not containing $v_5$ isomorphic to (a) in Fig. \[fig:1=1v4Chi\], and each subtree of $T-v_4$ not containing $v_5$ is isomorphic to (b), i.e., a $P_5$. Let $\overline{T}=T-T_4$. Since $e_3\notin F_t$ by Lemma \[Le:1=1NonTriSta\], the restriction of $F_t$ to $\overline{T}$ is a TED-set of $\overline{T}$, so $\gamma'_t(\overline{T})\leqslant \gamma'_t(T)-2$. Combined with the obvious inequality $\gamma'(T)\leqslant \gamma'(\overline{T})+2$, we have $\gamma'(\overline{T})+2\leqslant \gamma'_t(\overline{T})+2 \leqslant \gamma'_t(T)=\gamma'(T)\leqslant \gamma'(\overline{T})+2$, and so $\gamma'(\overline{T})=\gamma'_t(\overline{T})$. By the inductive hypothesis, $\overline{T} \in \mathcal{T}_t$ with an edge labelling. Combined with the structure of (b) and Observation \[ob2\] (\[Ob:L2NbSedge\]), in $\overline{T}$ all edges connecting $v_4$ and its children are $L_1$-edges, so $v_4\in C$ by Observation \[ob2\] (\[Ob:EndpOfL1\]). If, in $\overline{T}$, $e_4$ is an $L_2$-edge, or $e_4$ is an $L_1$-edge adjacent to a leaf edge, then $T$ is obtained from $\overline{T}$ by applying Operation $\mathcal{O}_3$. In what follows we assume that, in $\overline{T}$, $e_4$ is an $L_1$-edge adjacent only to non-leaf edges (Assumption 7). Note that in $\overline{T}$ there is only one $S$-edge in $E_{\overline{T}}(v_5)$, say $e'_5$, and $v_5\in A_1$ in $\overline{T}$.
By Lemma \[Le:TIs1=1\], the $S$-edges of $\overline{T}$ form a minimum total edge dominating set $D(\overline{T})$. Then $F'_t=D(\overline{T})+e_1+e_2$ is a minimum total edge dominating set of $T$, since $\gamma'_t(\overline{T})+2=\gamma'_t(T)$. Further, since $|E(v_5)\cap F'_t|=1$, Claim \[Cl:1=1v5\] still holds. Then we have the following claim:
\[1=1e5L1v6L2\] $e_5$ is an $L_1$-edge in $E_{\overline{T}}(v_5)$, and all edges in $(E_{\overline{T}}(v_6)- e_5)$ are $L_2$-edges.
Let $v'_4$ be any non-leaf child of $v_5$ other than $v_4$, and $v'_3$ any child of $v'_4$ in $\overline{T}$. By Assumption 5 and Claim \[Cl:1=1v5\], no subtree of $\overline{T}- v'_4$ containing $v'_3$ is isomorphic to (e). We claim that the length of a longest path starting at $v'_3$ in the subtree $T'$ of $T-v'_4$ containing $v'_3$ is 3 or 1. Indeed, if that length were 2 or 0, then by Lemma \[Le:1=1NonTriSta\] and Observation \[ob2\] (\[Ob:LEisL1orL2\]), (\[Ob:SedgeisTED\]), $v'_4\in A_1$ in $\overline{T}$, a contradiction by Observation \[ob2\] (\[Ob:EndpOfL1\]) and Lemma \[Le:1=1NonTriSta\]. By symmetry, Claim \[Cl:1=1v’4child\] and Assumptions 1, 2, 3, we know that $T'$ is isomorphic to (b) or (c). Let $\{z^1_4,\ldots, z^h_4 \}$ be the set of children of $v_5$ such that some subtree $T^r_4$ of $T- z^r_4$ not containing $v_5$ is isomorphic to (b) in $\overline{T}$ for $1\leqslant r\leqslant h$, and let $z^r_3$ be the child of $z^r_4$ in $T^r_4$. By the structure of (b), $|E(z^r_3)\cap F'_t|=1$; write $e^r=(E(z^r_3)- z^r_3z^r_4)\cap F'_t$ for each $r$. If $E(v_6)$ had an $S$-edge other than $e_5$, then $F'_t-\{e^1,\ldots,e^h \} +\{z^1_3z^1_4,\ldots,z^h_3z^h_4 \}-e'_5$ would be an ED-set of $T$ of size $|F'_t|-1$, a contradiction. So all edges in $(E(v_6)- e_5)$ are $L_1$- or $L_2$-edges, and by Observation \[ob2\] (\[Ob:L2NbSedge\]), (\[Ob:SedgeisTED\]), $e_5$ is an $L_1$-edge.
By contradiction. If some edge $e'_6=v_6v'_7$ in $(E_{\overline{T}}(v_6)- e_5)$ were an $L_1$-edge, then there would be exactly one $S$-edge $e''_6$ incident with $v'_7$ by Observation \[ob2\] (\[Ob:EndpOfL1\]). Thus $F'_t-e''_6+e'_6-\{e^1,\ldots,e^h \} +\{z^1_3z^1_4,\ldots,z^h_3z^h_4 \}-e'_5$ would be an ED-set of $T$ of size $|F'_t|-1$, a contradiction. Therefore, all edges in $(E_{\overline{T}}(v_6)- e_5)$ are $L_2$-edges.
Combined with the structure of (b) and Claim \[1=1e5L1v6L2\], all $L_1$-edges in $(E_{\overline{T}}(v_4)-e_4)$ are adjacent to a leaf edge, there exists a $P_4$ starting at $v_4$ whose edges are labelled $L_1$, $L_1$, $L_2$ consecutively, and all edges in $(E_{\overline{T}}(v_6)-e_5)$ are $L_2$-edges. Hence we can obtain $T$ from $\overline{T}$ by applying Operation $\mathcal{O}_3$.
As an immediate consequence of Lemmas \[Le:TIs1=1\] and \[Le:1=1IsT\], we have the following characterization of $(\gamma'_{t}=\gamma')$-trees.
A tree $T$ is a $(\gamma'_{t}=\gamma')$-tree if and only if $T\in \mathcal{T}_{t}$.
Acknowledgements
================
This work was funded in part by the National Natural Science Foundation of China (Grants Nos. 11571155 and 11201205).
T.W. Haynes, S.T. Hedetniemi, P.J. Slater, Fundamentals of Domination in Graphs, Marcel Dekker, New York, 1998.
J.D. Horton, K. Kilakos, Minimum edge dominating sets, SIAM J. Discrete Math. 6(3) (1993) 375-387.
R. Karp, Reducibility among combinatorial problems, Complexity of Computer Computations, R.E. Miller and J.W. Thatcher, eds., Plenum Press, New York, 1972, pp. 85-104.
K. Kilakos, On the complexity of edge domination, Master’s Thesis, University of New Brunswick, New Brunswick, Canada, 1998.
V.R. Kulli, D.K. Patwari, On the edge domination number of a graph, in: Proceedings of the Symposium on Graph Theory and Combinatorics, Cochin, 1991, in: Publication, vol. 21, Centre Math. Sci. Trivandrum, 1991, pp. 75-81.
C.L. Liu, Introduction to Combinatorial Mathematics, McGraw-Hill, New York, 1968.
S. Mitchell, S.T. Hedetniemi, Edge domination in trees, Congr. Numer. 19 (1977) 489-509.
M.H. Muddebihal, A.R. Sedamkar, Characterization of trees with equal edge domination and end edge domination numbers, Mathematical Theory and Modeling, 5 (2013) 33–42.
M.N.S. Paspasan, S.R. Canoy, Edge domination and total edge domination in the join of graphs, Appl. Math. Sci. 10 (2016) 1077-1086.
S. Velammal, Equality of connected edge domination and total edge domination in graphs, International Journal of Enhanced Research in Science Technology and Engineering 5 (2014) 198-201.
B. Xu, Two classes of edge domination in graphs, Discrete Appl. Math. 154 (2006) 1541-1546.
M. Yannakakis, Edge-deletion problems, SIAM J. Comput. 10 (1981) 297-309.
M. Yannakakis, F. Gavril, Edge dominating sets in graphs, SIAM J. Appl. Math. 38 (1980) 364-372.
Y.C. Zhao, Z.H. Liao, L.Y. Miao, On the algorithmic complexity of edge total domination, Theoret. Comput. Sci. 6 (2014) 28-33.
---
abstract: 'HD 105 is a nearby, pre-main sequence G0 star hosting a moderately bright debris disc ($L_{\rm dust}/L_{\star} \sim 2.6\times10^{-4}$). HD 105 and its surroundings might therefore be considered an analogue of the young Solar System. We refine the stellar parameters based on an improved Gaia parallax distance, and identify it as a pre-main sequence star [with an age of 50 $\pm$ 16 Myr]{}. The circumstellar disc was marginally resolved by *Herschel*/PACS imaging at far-infrared wavelengths. Here we present an archival ALMA observation at 1.3 mm, revealing the extent and orientation of the disc. We also present *HST*/NICMOS and VLT/SPHERE near-infrared images, where we recover the disc in scattered light at the $\geq$ 5-$\sigma$ level. This was achieved by employing a novel annular averaging technique, the first time such an approach has been used to recover a disc in scattered light. Simultaneous modelling of the available photometry, disc architecture, and detection in scattered light allows better determination of the disc’s architecture, and dust grain minimum size, composition, and albedo. We measure the dust albedo to lie between 0.19 and 0.06, the lower value being consistent with Edgeworth-Kuiper belt objects.'
author:
- 'J. P. Marshall'
- 'J. Milli'
- 'É. Choquet'
- 'C. del Burgo'
- 'G. M. Kennedy'
- 'L. Matrà'
- 'S. Ertel'
- 'A. Boccaletti'
bibliography:
- 'refs.bib'
title: 'Comprehensive analysis of HD 105, a young Solar System analog'
---
Introduction {#sec:intro}
============
Through the results of the *Kepler* transit-based exoplanet survey, we are now aware that exoplanets are near-ubiquitous [@2016Coughlin; @2016ForemanMackey]. Circumstellar debris, most commonly identified through excess emission at infrared wavelengths [@2014Matthews], is less frequently found; however, this census is more severely biased and limited by current instrumental sensitivity. A detection rate of 20 % has been recorded for cool discs around nearby FGK-type stars [@2013Eiroa; @2016Montesinos], with a slightly higher detection rate around A-type stars [@2014Thureau]. A tentative trend for a higher incidence of debris discs is seen around stars with sub-Solar metallicities hosting low mass planets [@2012Wyatt; @2014aMarshall]. For high mass planets, a recent study of disc-host systems has identified a tentative correlation between the presence of $> 5~M_{\rm Jup}$ planets and debris discs [@2017Meshkat]. A study of exoplanet host stars and debris discs reveals a trend for these components of planetary systems to be seen together [@2014Matthews]. However, no evidence of such correlations has been seen in larger stellar samples; whilst this is potentially attributable to the paucity of information on both faint debris discs and low mass planets around nearby stars, the presence of real correlations cannot be ruled out due to sample construction [@2015MoroMartin].
A critical component in developing an understanding of the diversity of architectures exhibited by known planetary systems is to obtain multi-wavelength resolved images of their dusty debris discs [e.g. @2014Ertel; @2014bMarshall; @2016Marshall; @2017Hengst]. Using a sample of 34 resolved debris discs a relationship between stellar luminosity and dust properties has been identified, showing that the dust grains around higher luminosity stars are closer to emitting like blackbodies [@2014Pawellek]. Fitting that observed relationship with a range of dust material compositions further identified that a mixture of astronomical silicate and water ice provided the best fit to the distribution of spatially resolved discs [@2015PawKri]. The presence of icy material in cool debris discs has also been inferred through modelling of continuum emission of several spatially resolved debris discs, supporting the adoption of icy materials in the analysis of debris dust [@2012Lebreton; @2016Morales]. More recently, a relationship between stellar luminosity and disc radius has been measured for a similar sized sample of debris discs imaged at sub-millimetre wavelengths [@2018bMatra]. However, no evidence was found for the trend of disc radius relative to blackbody radius identified at far-infrared wavelengths. This may be attributed to the low spatial resolution and bias toward higher luminosity stars of the far-infrared sample making it less representative overall.
Analysis of the infrared continuum emission from debris dust can only take our understanding so far. To obtain insight into the dust grain composition and structure (porosity) we require measurement of the silicate features at mid-infrared wavelengths [e.g. @2006Beichman; @2006Chen; @2009Lawler; @2012Johnson; @2015Mittal], and/or [measurement of scattered light from the disc (either total intensity or polarised light). A review of recent advancements in the interpretation of scattered light imaging of debris discs is presented in [@2018Hughes]]{}. In the case of silicate features, a minority of debris disc host stars exhibit features in their mid-infrared spectra [@2006Beichman; @2014Chen], suggesting the constituent dust grains are large ($>~10~\mu$m) and/or cold. In the latter case, the scattered light brightness is poorly correlated with expectations from the disc brightness in continuum emission [e.g. @2014Schneider] but, for those discs that have been imaged in scattered light, determination of the dust optical and scattering properties has been possible [e.g. @2007Graham; @2010Krist; @2012Rodigas; @2014Rodigas; @2014Soummer; @2016Choquet; @2016Schneider]. Complementary to the properties of solid material in debris discs, the recent detections of a gaseous component to some of these systems [@2016Greaves] provides insight into the volatile content of the dust parent bodies [e.g. @2017Matra], and its origins [@2017Kral].
For Sun-like stars, from which we may draw parallels with the evolution of the Solar System, there exist only a few cases with comprehensive multi-wavelength data sets to support such detailed analyses, e.g. HD 15115 [@2008Debes; @2015Macgregor], HD 61005 [@2007Hines; @2009Maness; @2010Buenzli; @2016Olofsson], HD 107146 [@2011Ertel; @2015Ricci; @2018Marino], HD 207129 [@2010Krist; @2011Marshall; @2012Loehne], HIP 17439 [@2014Ertel; @2014Schuppler], and HD 10647 [@2010Liseau; @2016Schuppler]. The addition of more targets to this list, particularly within the narrow range of stellar spectral types representative of Solar analogues (i.e. early G-type main sequence stars, cf. the broad range of F6 to K0 spectral types), is therefore important to understand the diversity of potential outcomes for planet formation processes. The relative novelty of the Solar System may thereby be determined by comparison to the range of properties exhibited by analogous debris disc systems.
HD 105 is a young, Sun-like star at a distance of 40 pc that hosts a moderately bright debris disc ($L_{\rm dust}/L_{\star} \sim 2.6\times10^{-4}$). HD 105 is a member of the Tucana-Horologium association and has a well established age of 28 $\pm$ 4 Myr [@2000Torres; @2000ZucWeb]. Its debris disc was reported as being marginally resolved in far-infrared *Herschel*/PACS imaging observations and found to have a radius of $\sim$ 50 au [@2012Donaldson]. With a spectral type of G0, HD 105 might therefore be considered an analogue of the young Solar System.
Here we model the continuum emission and structure of HD 105’s debris disc using a combination of far-infrared and sub-millimetre photometry obtained from a new reduction of archival *Herschel*/PACS and SPIRE images, and a spatially resolved archival image of the disc at millimetre wavelengths from ALMA. We complement this approach with analysis of near-infrared images from *HST*/NICMOS and VLT/SPHERE in order to constrain the dust optical properties through scattered light.
In Sect. \[sec:obs\], we present the imaging and photometric data compiled to model both the stellar and disc components of this system. In Sect. \[sec:meth\_res\] we lay out our approach to analysing the combined data set and state our findings. In Sect. \[sec:dis\] the results are put in context, and their impact on our understanding of this system is discussed. Finally, in Sect. \[sec:con\], we summarise our findings and detail the conclusions of this work.
Observations {#sec:obs}
============
We have compiled broad band photometric data from optical to millimetre wavelengths to construct the SED of HD 105. These previous observations, combined with the ALMA, VLT/SPHERE, and *HST*/NICMOS imaging data presented here, facilitate comprehensive modelling of the HD 105 system. A summary of the photometry used in the modelling process is presented in Table \[table:disc\_phot\].
At optical wavelengths we use the Strömgren data from [@2015Paunzen], Johnson $BV$ and Cousins $I$ data from [@1997Mermilliod], near-infrared 2MASS $JHK_{s}$ [@2003Cutri], and *WISE* W1 and W2 fluxes [@2010Wright] to scale the model stellar photosphere and gauge its contribution to the total emission.
In the mid-infrared, the *WISE* W3 (12 $\mu$m) and W4 (22 $\mu$m) fluxes are complemented by an *AKARI*/IRC 9 $\mu$m data point [@2010Ishihara], a *Spitzer*/MIPS measurement at 24 $\mu$m, and *Spitzer*/IRS photometry at 13 $\mu$m and 31 $\mu$m. The *Spitzer*/IRS spectrum was obtained from CASSIS[^1] [@2011Lebouteiller]. As the target exhibits no evidence of mid-infrared excess, the spectrum was scaled to the photospheric model using a least-squares fit weighted by the measurement uncertainties at wavelengths $< 15~\mu$m. Target fluxes were then extracted in two 4-$\mu$m-wide windows centred at 13 and 31 $\mu$m, using the error-weighted average of values within each bin.
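The window extraction described above amounts to an inverse-variance weighted mean; a minimal sketch (the function name and argument layout are our own, not those of the CASSIS pipeline):

```python
import numpy as np

def window_flux(wave, flux, err, centre, width=4.0):
    """Inverse-variance weighted mean flux in a spectral window.

    wave, flux, err : arrays of wavelength (micron), flux, and 1-sigma
    uncertainty per spectral bin; centre, width : window in micron.
    """
    sel = np.abs(wave - centre) <= width / 2.0
    w = 1.0 / err[sel] ** 2                  # inverse-variance weights
    mean = np.sum(w * flux[sel]) / np.sum(w)
    mean_err = 1.0 / np.sqrt(np.sum(w))      # uncertainty of the weighted mean
    return mean, mean_err
```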
At far-infrared wavelengths, we combine *ISO*/ISOPHOT measurements at 60 and 90 $\mu$m [@2001Spangler] and a *Spitzer*/MIPS measurement at 70 $\mu$m [@2009Carpenter] with *Herschel*/PACS measurements at 70, 100, and 160 $\mu$m. The spatial resolution of *ISO* ($1.5\arcmin$ beam FWHM) is much poorer than that of *Spitzer* or *Herschel* ($6\arcsec$–$18\arcsec$ beam FWHM); background contamination and errors in the zero-level calibration may have elevated the *ISO*-measured fluxes [@2003Heraudeau; @2003delBurgo]. However, the target is well isolated in the *Herschel* maps and there is good agreement between measurements by all three facilities at similar wavelengths, so we do not consider this to have a significant impact on the shape of the SED.
The *Herschel*[^2] observations were first presented in [@2012Donaldson]. Here we re-reduced these data to obtain revised values for the source fluxes based on updates to the instrument calibration made available in the intervening time. The data reduction and analysis were carried out in the same manner as described in [@2013Eiroa]. The PACS data were reduced in the *Herschel* Interactive Processing Environment[^3] [[hipe]{}, @2010Ott] using the standard pipeline processing scripts ([hipe]{} version 15, PACS calibration 78). Target fluxes were measured in each mosaic using an aperture of $15\arcsec$ radius and a sky annulus of $30\arcsec$ to $40\arcsec$. Appropriate aperture and colour corrections (determined by a blackbody fit to the aperture corrected measurements), based on values tabulated in [@2014Balog], were applied to the measurements before modelling.
We also present sub-millimetre measurements taken from *Herschel*/SPIRE (Programme ot2\_aroberge\_3, PI: A. Roberge) and an APEX/LABOCA measurement at 880 $\mu$m [@2010Nilsson]. The SPIRE observations were again reduced in [hipe]{} version 15, using SPIRE calibration 14\_3. The target fluxes in the three SPIRE maps were measured using the *sourceExtractorSussextractor* routine. The millimetre SED is further constrained by ALMA photometry at 1300 $\mu$m (Programme 2012.1.00437.S, PI: D. Rodriguez) and an ATCA measurement at 9 mm taken from [@2017Marshall].
[lrlc]{} Wavelength ($\mu$m) & Flux (mJy) & Band & Ref.\
0.349 & 891 $\pm$ 2 & Strömgren $u$ & 1\
0.411 & 2071 $\pm$ 6 & Strömgren $b$ & 1\
0.440 & 1897 $\pm$ 14 & Johnson $B$ & 2\
0.467 & 2989 $\pm$ 6 & Strömgren $v$ & 1\
0.546 & 3669 $\pm$ 2 & Strömgren $y$ & 1\
0.550 & 3499 $\pm$ 26 & Johnson $V$ & 2\
0.790 & 4468 $\pm$ 41 & Cousins $I$ & 3\
1.235 & 4139 $\pm$ 73 & 2MASS $J$ & 4\
1.662 & 3425 $\pm$ 73 & 2MASS $H$ & 4\
2.159 & 2383 $\pm$ 42 & 2MASS $K_{s}$ & 4\
3.40 & 1152 $\pm$ 122 & *WISE* W1 & 5\
4.60 & 699 $\pm$ 24 & *WISE* W2 & 5\
9.00 & 189 $\pm$ 13 & AKARI/IRC9 & 6\
12.0 & 114 $\pm$ 6 & *WISE* W3 & 5\
22.0 & 35 $\pm$ 2 & *WISE* W4 & 5\
24.0 & 29 $\pm$ 1 & *Spitzer*/MIPS & 7\
31.0 & 22 $\pm$ 7 & *Spitzer*/IRS & 8\
60.0 & 143 $\pm$ 3 & *ISO*/PHT & 9\
70.0 & 141 $\pm$ 10 & *Spitzer*/MIPS & 7\
70.0 & 132 $\pm$ 8 & *Herschel*/PACS & 10\
90.0 & 167 $\pm$ 8 & *ISO*/PHT & 9\
100.0 & 168 $\pm$ 8 & *Herschel*/PACS & 10\
160.0 & 112 $\pm$ 9 & *Herschel*/PACS & 10\
250.0 & 57 $\pm$ 10 & *Herschel*/SPIRE & 11\
350.0 & 38 $\pm$ 15 & *Herschel*/SPIRE & 11\
880.0 & 10.7 $\pm$ 5.9 & APEX/LABOCA & 12\
1300.0 & 2.0 $\pm$ 0.4 & ALMA & 11\
9000.0 &0.042 $\pm$ 0.014 & ATCA & 13\
Methodology and results {#sec:meth_res}
=======================
Here we determine the stellar parameters by fitting a grid of stellar atmosphere models to archival high resolution spectra. We also infer the same stellar parameters, along with the mass and age, from stellar evolution models combined with a Bayesian approach. We then present a self-consistent analysis of both the millimetre and scattered light imaging data to determine the disc architecture and dust albedo, respectively. The disc structure is applied as a constraint in modelling of the disc spectral energy distribution (SED) in combination with photometry from archival sources.
Stellar parameters
------------------
A summary of the stellar properties used in this work is given in Table \[table:star\_props\]. The stellar position, distance (parallax), proper motions, and $G$ band magnitude ($G~=~7.3833~\pm~0.0007$) are taken from the Gaia DR 2 catalogue [@2016Prusti; @2016Lindegren; @2018Brown]. The stellar physical properties were derived by modelling of available high resolution spectra along with optical and near-infrared photometry.
[lll]{} Right Ascension ([*hms*]{}) & 00 05 52.54 & 1\
Declination ([*dms*]{}) & -41 45 11.0 & 1\
Proper motions (mas/yr) & 97.96, -76.51 & 2\
Distance (pc) & 38.85 $\pm$ 0.08 & 2\
$V$ (mag) & 7.513 $\pm$ 0.005 & 3\
$B-V$ (mag) & 0.600 $\pm$ 0.003 & 3\
Spectral type & G0 & 4\
Luminosity ($L_{\odot}$) & [1.216 $\pm$ 0.005]{} & 4\
Radius ($R_{\odot}$) & [1.009 $\pm$ 0.003]{} & 4\
Mass ($M_{\odot}$) & [1.116 $\pm$ 0.012]{} & 4\
Temperature (K) & [6034 $\pm$ 8]{} & 4\
Surface gravity, $\log g$ & [4.478 $\pm$ 0.006]{} & 4\
Metallicity, \[Fe/H\] & 0.02 $\pm$ 0.04 & 5\
$\nu \sin i$ (km/s) & 17.5 $\pm$ 0.5 & 4\
Age (Myr) & [50 $\pm$ 16]{} & 4\
The stellar parameters of HD 105, i.e. luminosity, radius, effective temperature, surface gravity, age, and mass, were derived using the absolute $G$ magnitude, parallax, $B-V$ colour, and \[Fe/H\] as input parameters, following the Bayesian approach applied in [@2016delBurgo; @2018delBurgo]. We derived an age of 50 $\pm$ 16 Myr, consistent with the more tightly constrained value of 30 Myr obtained by assuming membership of Tucana-Horologium (del Burgo et al., in preparation). The colour $B-V = 0.600~\pm~0.003$ is taken from [@1997Mermilliod], and the metallicity \[Fe/H\]$~=~0.02~\pm~0.04$ from [@2014Tsantaki]. The observed $G$ magnitude was converted to absolute magnitude $M_{G}$ using the *Gaia* DR2 parallax of $\pi = 25.75~\pm~0.06$ mas [@2018Brown]. We find that HD 105 is a pre-main-sequence star. In order to derive the stellar parameters we downloaded and arranged a grid of PARSEC isochrones [version 1.2S, @2012Bressan; @2014Chen; @2015Chen; @2014Tang], with steps of 5 per cent in age, 0.1 per cent in mass, and 0.02 dex in \[Fe/H\]. The lower and upper limits of these parameters lie well beyond 6-$\sigma$ from the solution; see Table \[table:star\_props\]. Synthetic photometry estimates were obtained using filter curves from [@2018Evans] for the Gaia $G$ band, and from [@2006MaizApellaniz] for the $B$ and $V$ bands. For a more detailed description see @2018delBurgo.
In addition, high resolution FEROS and HARPS spectra were used to obtain the stellar effective temperature $T_{\rm eff}$, metallicity \[Fe/H\], projected rotational velocity $\nu \sin i$, and surface gravity $\log g$. We performed a comparison with BTSettl stellar atmosphere models [@2012Allard]. The models in the grid were modified in order to be compared with the FEROS spectrum, similar to the approach of [@2009delBurgo]. First, the synthetic spectra were transformed to take into account the stellar projected rotational velocity ($\nu \sin i$) using the formalism of [@1992Gray], with a limb darkening parameter equal to 0.8. These spectra were convolved with a Gaussian that mimics the instrumental profile along the dispersion axis. The resulting spectra were rebinned to the same resolution and grid as the observed spectrum. The observed spectrum was corrected for its velocity shift, determined from a cross-correlation analysis with the models. All modelled and observed spectra were normalized before performing the comparison.
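The rotational broadening step can be illustrated with the standard rotational profile of Gray (1992); a minimal sketch assuming the linear limb-darkening coefficient of 0.8 adopted in the text (the function name and grid choices are ours):

```python
import numpy as np

def rot_kernel(dl_max, eps=0.8, n=51):
    """Rotational broadening profile of Gray (1992).

    dl_max : half-width of the kernel, lambda * (v sin i) / c, in the
             same units as the wavelength grid spacing.
    eps    : linear limb-darkening coefficient (0.8 adopted in the text).
    """
    dl = np.linspace(-dl_max, dl_max, n)
    x = dl / dl_max
    g = (2.0 * (1.0 - eps) * np.sqrt(1.0 - x**2)
         + 0.5 * np.pi * eps * (1.0 - x**2)) / (np.pi * dl_max * (1.0 - eps / 3.0))
    return g / g.sum()        # normalise for discrete convolution

# broadened = np.convolve(model_flux, rot_kernel(dl_max), mode="same")
```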
We obtain values from the FEROS spectrum ($R = 48,000$) of $T_{\rm eff} = 6000~\pm~50~$K, $\log g = 4.50~\pm~0.25$, and $\nu \sin i = 17.5~\pm~0.5$ km/s. Values derived from the higher resolution HARPS spectrum ($R = 115,000$) are consistent with those of FEROS, with the FEROS values being determined at higher signal-to-noise. [These values are consistent with those derived from the aforementioned stellar atmosphere models.]{}
We find good agreement between the stellar evolution analysis and the stellar atmosphere model fitting for the parameters in common. The values obtained through this analysis are likewise consistent with available values from the literature, e.g. [@2005ValFis], [@2014Tsantaki].
The H$\alpha$ and Ca[ii]{} H&K lines in the FEROS spectrum are presented in Fig. \[fig:feros\]. The median FEROS spectrum, calculated from the available spectra (in black) is plotted together with our best fitting model. Some regions of the H$\alpha$ segment of the spectrum have been blanked out due to contamination from telluric lines at those wavelengths (see Fig. \[fig:feros\], top). Despite the low $\nu \sin i$ (consistent with a more pole-on viewing angle), we see clear evidence of Ca[ii]{} H&K emission in the spectrum (see Fig. \[fig:feros\], bottom). This is consistent with activity in the stellar chromosphere, an indicator of stellar youth.
We also examined the high resolution spectra for evidence of gas revealed by the presence of circumstellar absorption or emission lines. The origin of such gas may be primordial, or secondary from e.g. photodesorption or cometary activity. We note that the FEROS fibre input has a $2\arcsec$ footprint on the sky that could include light scattered from material in the circumstellar disc, depending on the seeing quality. No evidence of gas features in either absorption (implying cool gas) or emission (implying hot gas) was identified in the spectra.
ALMA millimetre imaging data
----------------------------
An ALMA band 6 (1.3 mm) observation of HD 105 was downloaded from the ESO ALMA Science Archive[^4]. The observation was originally carried out as part of project 2012.1.00437.S (PI: D. Rodriguez) during Cycle 1. The spectral setup consists of four windows. Three windows were set up to measure the continuum, each with 128 channels over 2 GHz bandwidth. The fourth covered the $^{12}$CO (2-1) line at 230.538 GHz and sampled its 0.94 GHz bandwidth with 3840 channels (0.32 kms$^{-1}$), providing a velocity resolution of 0.64 kms$^{-1}$. In combination, the four windows provide 6.9 GHz bandwidth to study the continuum emission. The on-source integration time for HD 105 was 2238 s. Neptune was used as the flux calibrator, J0006-0623 was the bandpass calibrator, and J0012-3954 was the phase calibrator.
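The quoted velocity sampling follows directly from this spectral set-up; a quick consistency check:

```python
C_KMS = 299792.458            # speed of light, km/s

nu0 = 230.538e9               # CO (2-1) rest frequency, Hz
bw, nchan = 0.94e9, 3840      # window bandwidth and channel count
dnu = bw / nchan              # channel width, Hz

dv = C_KMS * dnu / nu0        # channel width in velocity, km/s
print(round(dv, 2))           # ~0.32 km/s per channel; the quoted
                              # resolution of 0.64 km/s is twice this
```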
Calibration and reduction of the ALMA observation were carried out in CASA 4.1 using the provided scripts. Image reconstruction was carried out using the *clean* task, combining all four spectral windows for the greatest signal-to-noise. We reconstruct the image using natural weighting, for which the continuum image r.m.s. noise is 26.8 $\mu$Jy beam$^{-1}$. The dirty beam has an elliptical FWHM of $0\farcs95~\times~0\farcs67$ at a position angle of 88$\degr$, equivalent to a spatial resolution of 34$~\times~$26 au.
### Disc architecture {#sssec:resolv}
The disc is visible as an annular structure centred on the stellar position, but the low signal-to-noise of the observations means that it is not detected with high significance ($>~3-\sigma$) at all position angles. The ALMA image and the best fit annular disc model are presented in Fig. \[fig:hd105\_alma\].
We model the architecture of HD 105’s disc as a single annulus with a semi-major axis $R$ and width $\Delta R$ oriented at a position angle $\theta$ and inclination $i$. The disc surface brightness exponent $\alpha$ is assumed to be constant, and a vertical opening angle of 10$\degr$, similar to the Solar system, was assumed. A grid of models is generated spanning reasonable values for each of these parameters. To determine the best-fit properties for the disc we subtract each disc model, convolved with the dirty ALMA beam (FWHM $\sim~0.9\arcsec\times0.7\arcsec$), from the observed image and seek to minimise the residuals within the region of the model-subtracted image where there is significant emission in the observed image.
[lcccc]{} Inclination ($\degr$) & 0 – 90 & 19 & Linear & 50 $\pm$ 5\
Position Angle ($\degr$) & 0 – 90 & 19 & Linear & 15 $\pm$ 5\
Semi-major axis (au) & 50 – 150 & 41 & Linear & 85 $\pm$ 5\
Semi-major axis ($\arcsec$) & & & & 2.16 $\pm$ 0.13\
Belt width (au) & 10 – 50 & 21 & Linear & 30 $\pm$ 10\
Belt width ($\arcsec$) & & & & 0.75 $\pm$ 0.25\
S.B. exponent, $\alpha$ & 0.0 & 1 & Fixed & 0.0\
Total disc flux (mJy) & 1.0 – 4.0 & 301 & Linear & 1.45 $\pm$ 0.28\
Dust mass ($M_{\oplus}$) & & & & 0.035 $^{+0.013}_{-0.009}$\
The best-fit architecture derived from the model fitting is dominated by the disc ansae, which are the only regions of the disc detected at high signal-to-noise ($>$ 3-$\sigma$). The disc semi-major axis is $2.16\arcsec~\pm~0.13\arcsec$ (85 $\pm$ 5 au), with an inclination of 50$\degr$ $\pm$ 5$\degr$ with respect to the line of sight at a position angle of 15$\degr$ $\pm$ 5$\degr$. The width of the disc annulus is $0.75\arcsec~\pm~0.25\arcsec$ (30 $\pm$ 10 au). The belt is perhaps marginally resolved in the ALMA image, but the quality of the data does not allow us to conclude that, so we hereafter assume the belt is unresolved. The best-fit disc architecture obtained from this approach is summarised in Table \[table:disc\_arch\].
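The annular disc model underlying the grid search can be sketched as follows; this is a simplified illustration (parameter names and the flat surface-brightness normalisation are our own choices, and the actual fit convolves each model with the dirty beam before computing residuals):

```python
import numpy as np

def annulus_model(shape, scale, r0, dr, incl, pa, flux):
    """Flat annular disc model on a pixel grid.

    shape : (ny, nx); scale : arcsec/pixel; r0, dr : annulus semi-major
    axis and width (arcsec); incl, pa : degrees; flux : total flux (mJy).
    """
    ny, nx = shape
    y, x = np.indices(shape, dtype=float)
    x = (x - nx / 2) * scale
    y = (y - ny / 2) * scale
    pa_r = np.radians(pa)
    # rotate so the disc major axis lies along x', then deproject y'
    xr = x * np.cos(pa_r) + y * np.sin(pa_r)
    yr = (-x * np.sin(pa_r) + y * np.cos(pa_r)) / np.cos(np.radians(incl))
    r = np.hypot(xr, yr)                        # deprojected radius, arcsec
    img = ((r > r0 - dr / 2) & (r < r0 + dr / 2)).astype(float)
    return flux * img / img.sum()               # constant surface brightness

def chi2(image, model, rms):
    """Goodness of fit used to rank models in the grid."""
    return np.sum((image - model) ** 2) / rms ** 2
```

Each model in the grid is ranked by its residuals against the observed image; the best-fit parameters in Table \[table:disc\_arch\] correspond to the minimum of this statistic.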
The total flux of the disc in the sub-millimetre, 1.45 $\pm$ 0.28 mJy, may be used to calculate the mass of dust grains, assuming that the disc is fully optically thin at 1.3 mm. For a disc observed at a frequency $\nu$ with a flux $F_{\nu}$ the dust mass, $M_{\rm dust}$, is given by $$M_{\rm dust} = \frac{F_{\nu} d_{\star}^{2}}{\kappa_{\nu} B_{\nu}(T_{\rm dust})}$$ where $d_{\star}$ is the stellar distance, $\kappa_{\nu}$ is the dust opacity, $T_{\rm dust}$ is the dust temperature, and $B_{\nu}(T_{\rm dust})$ is the Planck function [@1993ZucBec; @2006Draine]. Assuming a dust opacity $\kappa$ of 1.7 cm$^{2}$g$^{-1}$ [@1990Beckwith; @1994Pollack; @2006Draine], as is commonly assumed for debris dust [e.g. @2013Panic; @2017Holland], we calculate a mass of $0.035^{+0.013}_{-0.009}~$M$_{\oplus}$ for the dust. This value does not include uncertainty on the parameter $\kappa$, which leads to an uncertainty in $M_{\rm dust}$ of a factor 3 to 5.
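The dust mass calculation can be reproduced in a few lines; a sketch in cgs units (the dust temperature passed in is illustrative, and the result carries the factor 3 to 5 opacity uncertainty noted above):

```python
import numpy as np

# Physical constants in cgs: Planck, Boltzmann, speed of light
h, k_B, c = 6.62607e-27, 1.38065e-16, 2.99792e10

def planck(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def dust_mass(F_jy, d_pc, nu, T_dust, kappa=1.7):
    """Dust mass in Earth masses from optically thin emission."""
    F = F_jy * 1e-23                  # Jy -> erg s^-1 cm^-2 Hz^-1
    d = d_pc * 3.0857e18              # pc -> cm
    return F * d**2 / (kappa * planck(nu, T_dust)) / 5.972e27

# e.g. dust_mass(1.45e-3, 38.85, 230.0e9, 50.0) for an assumed 50 K dust belt
```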
### Non-detection of cold CO emission {#sssec:no_gas}
Molecular carbon monoxide (CO) has been identified in around a dozen debris disc systems [e.g. @2016Greaves; @2017Moor], believed in most cases to be the result of secondary production after liberation from exocomets [@2017Kral]. Most of the stars identified with gas in their debris discs are young ($t_{\rm age}~<~30$ Myr) A-type stars, with relatively bright debris discs ($L_{d}/L_{\star}~\sim~10^{-3}$). Here we search the ALMA observation for emission from the disc associated with the CO (2-1) transition at 230.538 GHz following the method employed to extract CO emission at low signal-to-noise in ALMA data as presented in [@2015Matra; @2016Marino; @2017Marino].
To begin we produced a continuum subtracted measurement set from the HD 105 data using *uvcontsub*. The continuum subtracted data set is then *clean*ed to produce an image cube with the same weighting and pixel scale as the continuum image. The cube spans barycentric velocities between -25 and 25 kms$^{-1}$, with a step size of 0.5 kms$^{-1}$. The cube is corrected for the effect of the primary beam using the *impbcor* task before further analysis. No significant CO emission is observed in the velocity-integrated, continuum-subtracted image produced by integrating the cube.
We then proceed by implementing the spectro-spatial filtering technique presented in [@2017Matra]. Any circumstellar CO is assumed to originate from collisions between planetesimals and therefore reside in the same region around the star as the debris dust detected in the continuum image. The CO is further assumed to be situated in a vertically flat disc, orbiting with Keplerian velocity around a star of 1.114 $M_{\odot}$. We then determine the projected radial velocity for each position in the disc image given the extent and inclination of the disc for cases of the disc rotating toward or away from us. Using the predicted velocity field the spectrum at each position is then shifted to match the stellar velocity (i.e. spectral filtering). We then integrate over all positions where continuum emission is detected to avoid contributions to the CO spectrum from locations and velocities where no circumstellar CO is expected (i.e. spatial filtering). The spectro-spatial filtering yields a 3-$\sigma$ upper limit of 8.5 mJy kms$^{-1}$ to CO emission.
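The projected Keplerian velocity field used in the spectral filtering can be sketched as follows (a simplified flat-disc version with our own coordinate convention, $x$ along the disc major axis):

```python
import numpy as np

V_1AU = 29.78   # circular Keplerian speed at 1 au around 1 Msun, km/s

def projected_velocity(x_au, y_au, m_star=1.114, incl=50.0):
    """Line-of-sight Keplerian velocity (km/s) at disc-plane position
    (x, y) in au for a vertically flat disc; x lies along the major axis.
    """
    r = np.hypot(x_au, y_au)
    v_kep = V_1AU * np.sqrt(m_star / r)      # circular speed, km/s
    # only the azimuthal component projects onto the line of sight;
    # it is maximal on the major axis and zero on the minor axis
    return v_kep * (x_au / r) * np.sin(np.radians(incl))
```

The spectrum at each disc position is shifted by minus this velocity before co-adding, aligning the Keplerian signal at the stellar velocity.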
We use a non-local thermodynamic equilibrium (non-LTE) CO excitation code to calculate a CO mass upper limit, following [@2018aMatra]. This code includes fluorescent excitation by the central star at ultraviolet wavelengths. Depending on the excitation conditions, we determine a CO mass upper limit ranging from 2.5$\times10^{-7}$ to 2.5$\times10^{-6}$ $M_{\oplus}$. This translates to upper limits on the CO and CO$_{2}$ fraction (by mass) in exocomets of $<~88~\%$ and $<~46~\%$ for the respective mass upper limits. These limits are consistent with the range of values for other exocometary discs and Solar system bodies [see e.g. @2017Matra].
Detection of scattered light {#sssec:sca}
----------------------------
HD 105 has been observed by both *HST*/NICMOS and VLT/SPHERE. The disc was not directly visible in either data set. However, we recover the disc at the correct angular separation and orientation using an angular averaging technique to aggregate the disc emission to a detectable level. From this measurement of the disc scattered light brightness we determine the albedo of the dust grains at near-infrared wavelengths.
### HST/NICMOS
HD 105 was observed with the *Hubble Space Telescope* (HST) NICMOS instrument at two epochs using the coronagraphic mode (mask radius $0.3\arcsec$) of the mid-resolution NIC2 channel ($0.07565\arcsec$ pixel$^{-1}$). The first data were acquired on UT-1998-11-03 as part of program GTO-7226 (PI: E. Becklin), a survey looking for giant planets around young nearby stars [@2005Lowrance], using the F160W filter, whose bandpass is very similar to the H band (pivot wavelength $1.600~\mu$m, 98%-integrated bandwidth $0.410~\mu$m). The second dataset was obtained on UT-2006-07-23 as part of program GO-10527 (PI: D. Hines), a survey looking for debris discs around stars with infrared excess detected with *Spitzer* as part of the FEPS program [@2008Hillenbrand]. These data were obtained with the F110W filter, which extends roughly twice as far toward shorter wavelengths as the J band (pivot wavelength $1.116~\mu$m, 98%-integrated bandwidth $0.584~\mu$m).
The F160W data were acquired at two different spacecraft orientations, obtaining a total of 6 exposures with HST rolled by $\sim~30\degr$ in the middle of the sequence, to enable subtraction of the PSF with Roll Differential Imaging [@1999Lowrance]. The total exposure time is 1344 s for this dataset. The F110W dataset includes 7 exposures all obtained with the same telescope orientation, for a total exposure time of 2016 s.
We reprocessed both datasets as part of the *Archival Legacy Investigations for Circumstellar Environments* (ALICE) program (PI: R. Soummer), a consistent re-analysis of the NICMOS coronagraphic archive using modern PSF subtraction techniques [@2014dChoquet; @2018Hagan]. Assembling PSF libraries from multiple reference stars in the whole archive and using the PCA-based KLIP algorithm for PSF subtraction [@2012Soummer], this program demonstrated a gain in point source sensitivity by a factor of 20 at $1\arcsec$ from the star over classical reference star differential imaging [@2016Debes], and enabled the detection of 11 faint debris discs from NICMOS archival data [@2014Soummer; @2016Choquet; @2017Choquet; @2018Choquet].
For HD 105, we used the reprocessed data from the ALICE public database[^5] for the F110W dataset, and an unpublished ALICE reduction for the F160W dataset with an $11\times11\arcsec$ field of view, which is more favorable to disc detection, with sensitivity limits better than those of the data published in the ALICE database. PSF subtraction for the F160W dataset was achieved using the first 69 eigen-modes of a library assembling 125 images from reference stars exclusively, excluding a central zone within a radius of 12 pixels. The F110W dataset was PSF-subtracted with the first 211 eigen-modes of a 640-image library from reference stars only, excluding a central area of radius 5 pixels. The final images were obtained by rotating all exposures to North pointing up, co-adding and scaling them to surface brightness units based on NIC2’s plate-scale and its calibrated photometric factors ($F_{\nu}=1.21121~\mu$Jy s DN$^{-1}$ in F110W, $F_{\nu}=1.49585~\mu$Jy s DN$^{-1}$ in F160W). We are able to recover the disc scattered light in the F160W filter image in the disc’s ansae at low signal-to-noise, as shown in Fig. \[fig:hst\_scattered\].
To obtain a more significant measurement of the disc surface brightness than can be obtained from the reduced images, we make use of the high spatial resolution of the scattered light image and measure the surface brightness radial profiles. We computed the surface brightness averaged in 5-pixel-wide ($0.38\arcsec$) elliptical apertures, with the semi-major axis $a$ increasing in steps of 1 pixel (76 mas) and semi-minor axis $b = a \cos(50\degr)$, oriented $15\degr$ east-of-north, to trace the disc’s deprojected average radial profile according to the ALMA detection geometry. We estimate the corresponding uncertainties assuming uncorrelated noise, as $\sigma/\sqrt{N_{\rm pix}}$, with $\sigma$ the standard deviation in each aperture and $N_{\rm pix}$ the number of pixels in the aperture. Since only reference stars were used for PSF subtraction, these measurements are not affected by disc self-subtraction. They are, however, subject to over-subtraction by the PCA algorithm along with PSF features [@2012Soummer; @2016Pueyo]. We corrected our average profiles and uncertainties by estimating the algorithm throughput with analytical forward modeling. From this exercise we find a significant peak with a surface brightness of 15 $\pm$ 1 $\mu$Jy/arcsec$^2$ at a separation of around $2.2\arcsec$ from the star, consistent with the ALMA image. The binned radial profiles are presented in Fig. \[fig:hst\_scattered\].
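The elliptical-aperture averaging can be sketched as follows (a simplified version without the PCA throughput correction; the function name and conventions are ours):

```python
import numpy as np

def elliptical_profile(img, scale, incl=50.0, pa=15.0, width_pix=5):
    """Mean surface brightness in concentric elliptical annuli matching
    the ALMA disc geometry.

    img : 2-D image centred on the star; scale : arcsec per pixel.
    Returns semi-major axes (arcsec), mean and sigma/sqrt(N) per annulus.
    """
    ny, nx = img.shape
    y, x = np.indices(img.shape, dtype=float)
    x, y = x - nx / 2, y - ny / 2
    pa_r = np.radians(pa)
    xr = x * np.cos(pa_r) + y * np.sin(pa_r)
    yr = (-x * np.sin(pa_r) + y * np.cos(pa_r)) / np.cos(np.radians(incl))
    a_pix = np.hypot(xr, yr)                 # deprojected radius in pixels
    a_mid, prof, err = [], [], []
    for lo in np.arange(0, a_pix.max(), width_pix):
        sel = (a_pix >= lo) & (a_pix < lo + width_pix)
        if not sel.any():
            continue                         # skip empty corner annuli
        vals = img[sel]
        a_mid.append((lo + width_pix / 2) * scale)
        prof.append(vals.mean())
        err.append(vals.std() / np.sqrt(vals.size))
    return np.array(a_mid), np.array(prof), np.array(err)
```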
### VLT/SPHERE
HD 105 was observed with the Spectro-Polarimeter High-contrast Exoplanet REsearch instrument [SPHERE, @2008Beuzit] on the night of 2 October 2015. These observations used the Infra-Red Dual-beam Imager and Spectrograph [IRDIS, @2008Dohlen] of SPHERE to obtain high-contrast and high-angular resolution images of the circumstellar environment of HD 105 in the broad-band H filter (centred at 1.625 $\mu$m, width 291 nm), with the apodized Lyot coronagraph of diameter 185 mas. They are part of the SPHERE High-Angular Resolution Debris Disc Survey[^6] [SHARDDS, @2018Milli]. SHARDDS is an open-time program on SPHERE to perform the first comprehensive near-infrared survey of all bright (infrared excess greater than $10^{-4}$), nearby debris discs within 100 pc that are as yet undetected in scattered light. It has already led to detections of several discs and a brown dwarf companion: HD 114082 [@2016Wahhaj], 49 Ceti [@2017Choquet] and HD 206893 [@2017Milli]. HD 105 was observed for a total of 50 min on-source, of which the total exposure time is 2432 s, in pupil-stabilized mode to allow Angular Differential Imaging [@2006Marois]. The source was observed just before meridian crossing, which resulted in a parallactic angle rotation of $35\degr$.
The data were sky-subtracted, flat-fielded and bad pixel corrected using the official SPHERE Data Reduction and Handling pipeline [@2008Pavlov] in order to make a temporal cube of 606 frames. Each coronagraphic frame was re-centred using the set of four satellite spots imprinted in the image during the centring sequence. This sequence is obtained by applying a waffle pattern to the deformable mirror and was done prior to and after the deep science coronagraphic observations. Then, different data reduction techniques were applied to subtract the stellar halo and try to detect the scattered light of the disc.
At the expected semi-major axis of the disc of $2.16\arcsec$, the technique that yields the best sensitivity for point source detection is the classical Angular Differential Imaging [cADI, @2006Marois] technique. However, for an azimuthally extended structure such as the HD 105 disc, the throughput of the algorithm is very low because of self-subtraction [@2012Milli], and we estimated it to be a few percent. The 5-$\sigma$ sensitivity of the ADI reduction is 100 $\mu$Jy at $2.2\arcsec$ after throughput correction, and we are therefore unable to confirm the *HST*/NICMOS detection with this data reduction technique.
Therefore, we also applied the Reference Star Differential Imaging (R\[S\]DI) technique, commonly used in space-based observations [@1999Schneider; @2009Lafreniere; @2011Soummer], and more recently applied to ground-based observations [@2016Gerard]. We implemented this technique in the SHARDDS program by building a library of all the coronagraphic images obtained in the program. We then selected 610 images from 22 stars most correlated with the ones from HD 105 (excluding the HD 105 images themselves) and used this set of references to subtract the stellar halo, using a custom PCA algorithm, keeping 100 modes. The disc is not directly visible in the VLT/SPHERE image.
The image is binned by a factor 4, giving a new pixel size of 49 mas, corresponding to the measured FWHM in the non-coronagraphic image. In the image reduced with PCA-RDI, the flux is integrated in elliptical apertures with an aspect ratio matching the inclination of the disc as seen by ALMA, and a radial width of 10 binned pixels or 490 mas. This ellipse is grown iteratively in steps of 1 binned pixel (49 mas), and at each iteration the signal is computed as the sum of the flux over the elliptical aperture. It is corrected for the PCA throughput and converted to $\mu$Jy/arcsec$^{2}$ using the binned pixel size of 49 mas and the star flux as measured in the non-coronagraphic image. The noise is given by the standard deviation of the pixels over the same elliptical annulus.
This averaging technique increases the depth to which we can detect disc scattered light emission, using the geometrical prior from the resolved millimetre detection. We find a peak in emission of 5.9 $\mu$Jy/arcsec$^{2}$ at a semi-major axis of $2.3\arcsec$, consistent with the disc semi-major axis, as shown in Fig. \[fig:sphere\_scattered\]. This is a factor of $\sim$4 fainter than the disc detected by *HST*/NICMOS; examination of the separate channels reveals that the right channel is consistent with the NICMOS-detected signal in both semi-major axis and brightness, but the left channel is not. Both channels are consistent in the presence of emission between $1.5\arcsec$ and $3.5\arcsec$, consistent with HD 105’s disc. The origin of the discrepancy between the *HST*/NICMOS and VLT/SPHERE detections potentially lies in the estimation of the noise, which has been assumed to be Gaussian here, whereas the reduced image shows it to be non-Gaussian.
### Combined modelling
The scattered light measurements from *HST*/NICMOS and VLT/SPHERE can be used to calculate the albedo of the dust grains. Assuming optically thin dust, and using the approximation from [@1999Weinberger], the maximum allowed optical depth $\tau$ times the albedo $\omega$ is $4\pi\phi^2 S/F$, where $S$ is the surface brightness of the disc in mJy/arcsec$^2$, $F$ the total star flux in mJy, and $\phi$ the separation of the scatterers. Approximating the optical depth by $\tau = 2 f \phi / [ d\phi \cos(i) (1-\omega) ]$, where $d\phi$ is the disc width in arcsec and $f$ the fractional luminosity, we obtain: $$S = \frac{f F \omega}{(2\pi \times \phi \times d\phi \times \cos(i) )(1-\omega)}$$ Using a disc semi-major axis $\phi=2.16\arcsec$ with a width $d\phi=0.75\arcsec$ at an inclination $i=50\degr$, we obtain an albedo of 0.15 based on the NICMOS surface brightness (15 $\mu$Jy/arcsec$^{2}$), or 0.06 based on the SPHERE measurement (5.9 $\mu$Jy/arcsec$^{2}$). The apparent surface brightness can be significantly enhanced in the case of strong forward scattering [@2015HedSta; @Milli2017_hr4796; @2016Olofsson], but given the inclination of the HD 105 disc, scattering angles below $40\degr$ are not probed.
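Inverting the surface-brightness relation above for $\omega$ is straightforward; a sketch (the stellar flux to adopt at the observed band is an assumption of this illustration, so the numbers it returns are indicative only):

```python
import numpy as np

def albedo(S, F_star, f_dust, phi=2.16, dphi=0.75, incl=50.0):
    """Invert the surface-brightness relation above for the albedo omega.

    S : disc surface brightness (mJy/arcsec^2); F_star : stellar flux in
    the observed band (mJy, an assumed input); f_dust : fractional
    luminosity; phi, dphi : arcsec; incl : degrees.
    """
    x = S * 2.0 * np.pi * phi * dphi * np.cos(np.radians(incl)) / (f_dust * F_star)
    return x / (1.0 + x)       # since x = omega / (1 - omega)
```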
With the radial averaging, we have been able to probe dust albedos a factor of 3 to 10 deeper than expected. The detection of HD 105 in scattered light made here points the way to the recovery of similarly faint discs in scattered light where the disc geometry is already known from spatially resolved continuum imaging. Deeper near-infrared or optical images are required to spatially resolve the scattered light disc, in particular to determine the degree of asymmetry in the dust scattering.
[Constraints on the dust composition]{}
---------------------------------------
[Using the mean surface brightness from the scattered light detections, we can use the inferred albedos to constrain the dust grain composition. We calculate the scattering albedo $\omega = F_{\rm scat}/(F_{\rm scat}+F_{\rm therm})$, where $F_{\rm scat}$ is the scattered light fractional luminosity in the F160W filter, and $F_{\rm therm}$ is the dust continuum fractional luminosity. Following [@2018Choquet], we assume the debris disc to be composed entirely of dust grains of a single size and a single composition. It is further assumed that the same dust grains responsible for the scattered light are also responsible for the thermal emission; the value of $F_{\rm therm}$ is therefore held fixed for all the tested combinations of grain size and composition, i.e. $L_{\rm dust}/L_{\star} = 2.6\times10^{-4}$. For consistency, we keep the dust mass of the disc fixed to that calculated for HD 105 based on its ALMA mm flux.]{}
[For each combination of grain size and composition we calculate the scattered light brightness of the disc at the observed orientation and inclination of HD 105, convolving the disc’s scattered light SED with the F160W bandpass to determine the scattered light fractional brightness $F_{\rm scat}$, and hence the scattering albedo $\omega$. The scattered light modelling is carried out using the 3-D radiative transfer code [Hyperion]{} [@2011Robitaille]. We test grain sizes from 0.1 to 500 $\mu$m, and dust compositions of pure astronomical silicate, and water ice:astronomical silicate mixtures in the ratios 10:90, 30:70, and 50:50. Optical constants for the materials were taken from [@2003Draine] (astronomical silicate) and [@1998LiGre] (water ice). Optical constants for the composite materials were calculated using the Bruggeman effective medium theory [@1935Bruggeman].]{}
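For two components, the Bruggeman condition reduces to a quadratic in the effective permittivity, so the mixing step can be illustrated without an external optics package. This is a sketch under the assumption of exactly two constituents with volume fractions summing to one:

```python
import numpy as np

def bruggeman2(eps1, eps2, f1):
    """Effective permittivity of a two-component Bruggeman mixture.

    Solves f1*(eps1 - e)/(eps1 + 2e) + (1 - f1)*(eps2 - e)/(eps2 + 2e) = 0,
    which expands to 2*e**2 - b*e - eps1*eps2 = 0 with
    b = (3*f1 - 1)*eps1 + (2 - 3*f1)*eps2.
    For complex permittivities the physical root (non-negative imaginary
    part) should be selected; the '+' branch below is correct for ordinary
    lossless or weakly absorbing dielectrics.
    """
    b = (3.0 * f1 - 1.0) * eps1 + (2.0 - 3.0 * f1) * eps2
    return (b + np.sqrt(b * b + 8.0 * eps1 * eps2)) / 4.0
```

When the two constituents are identical the mixture collapses to the pure material, which provides a quick sanity check of the closed form.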
[We find that dust grain sizes of a few microns, and either pure silicate or moderately icy compositions, are consistent with the dust albedo derived from the NICMOS-observed mean disc surface brightness.]{}
![[Plot of dust grain size vs. scattering albedo illustrating the constraints on dust grain size and composition possible with the new scattered light detection of HD 105’s disc. The horizontal black lines denote the expected dust albedo required to reproduce the mean surface brightness from NICMOS (solid, $\omega = 0.15$) and SPHERE (dashed, $\omega = 0.06$). The coloured lines denote the scattering albedos for discs composed of dust grains with a single size and composition. We find that silicate (or slightly icy) dust grains around a few $\mu$m in size are consistent with the NICMOS-derived scattering albedo; larger grains ($\geq~10~\mu$m) are required to reproduce the SPHERE-derived albedo, but this would be inconsistent with the typical grain size of dust around Sun-like stars of a few to ten times the blow-out radius.]{} \[fig:dust\_alb\]](hd105_dust_albedoes.pdf){width="50.00000%"}
Revised SED model
-----------------
Here we combine the results previously determined from each dataset in order to model the full system. We use the disc architecture, as determined from the ALMA image, [and the dust scattering albedo measurement from the scattered light images, as constraints in a radiative transfer model of the debris disc]{}. We model the disc as being an annulus lying 70 to 100 au from the star with uniform radial surface brightness, as per the ALMA image. The disc is composed of [compact, spherical]{} dust grains described by a power law size distribution ($dn \propto a^{-q} da$) with exponent $q$ spanning a size range from $a_{\rm min}$ to $a_{\rm max}$. The minimum dust grain size is a free parameter, but the maximum grain size is fixed at 10 mm to best replicate the millimetre wavelength photometry. [The dust composition is assumed to be pure astronomical silicate [@2003Draine].]{}
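The role of the size-distribution exponent can be made concrete: for $dn \propto a^{-q}\,da$ with $3 < q < 4$, the geometric cross-section (which controls scattering and short-wavelength emission) is carried by the smallest grains, while the mass (which the millimetre flux constrains) is carried by the largest. A short numerical check, using the best-fit $q$ and $a_{\rm min}$ from the model purely as illustrative inputs:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (kept explicit to avoid numpy version quirks)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def small_grain_fractions(a_min, a_max, q, n=200_000):
    """For dn propto a^-q da, return the fractions of the total geometric
    cross-section and of the total mass carried by grains below 10*a_min."""
    a = np.logspace(np.log10(a_min), np.log10(a_max), n)
    sigma = a ** (2.0 - q)      # a^2 * dn/da, cross-section weighting
    mass = a ** (3.0 - q)       # a^3 * dn/da, mass weighting
    cut = a < 10.0 * a_min
    return (_trapz(sigma[cut], a[cut]) / _trapz(sigma, a),
            _trapz(mass[cut], a[cut]) / _trapz(mass, a))
```

For $a_{\rm min}=3.25~\mu$m, $a_{\rm max}=10^4~\mu$m and $q=3.55$, roughly 70 per cent of the cross-section but only a few per cent of the mass sits below $10\,a_{\rm min}$, which is why the scattered light and millimetre data constrain different parts of the distribution.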
The resultant disc SED based on the best-fit model parameters is presented in Fig. \[fig:sed\_bf\], and a summary of the best-fit disc properties (including the disc extent as fixed input) is provided in Table \[table:disc\_pwrlaw\]. The stellar photosphere component (dashed black line) is represented by a Castelli-Kurucz model [@2004CK] appropriate to the stellar spectral type ($\log g = 4.5$ cm/s$^{2}$, $T_{\rm eff} = 6000$ K, \[Fe/H\] = 0.02) scaled to the optical and near-infrared fluxes (green data points). Blue and red data points denote *Spitzer*/IRS and *Herschel*/PACS fluxes, respectively. Grey data points are the excess (i.e. total - star) fluxes for wavelengths $>~20\mu$m. The individual models comprising the model grid used to fit the observations are represented by the set of solid light grey lines, with the best-fit disc model denoted by the dashed dark grey line. The total model of the system (star+disc) is denoted by the solid black line.
[lcccc]{} Parameter & Range & Values & Spacing & Best fit\
Semi-major axis (au) & & 1 & Fixed & 85\
Belt width (au) & & 1 & Fixed & 30\
$\alpha$ & 0.0 & 1 & Fixed & 0.0\
$a_{\rm min}$ ($\mu$m) & 0.5 – 20.5 & 41 & Linear & 3.25$^{+3.00}_{-1.00}$\
$a_{\rm max}$ ($\mu$m) & 10$^{4}$ & 1 & Fixed & 10$^{4}$\
$q$ & 2.50 – 4.50 & 11 & Linear & 3.55$^{+0.30}_{-0.15}$\
Composition & — & 1 & — & Astro. sil.\
![SED of HD 105, from power law modelling. Green dots denote optical and infrared photometry used to scale the stellar photosphere model. Blue lines and dots denote *Spitzer*/IRS data. Orange dots denote *Herschel*/PACS data. White dots denote ancillary photometry from various literature sources (see Table \[table:disc\_phot\]). Uncertainties are 1$\sigma$. Dark grey data points denote excess values. The greyed-out region denotes the envelope of disc models produced from the grid parameter space. The dashed lines denote the individual stellar photosphere and disc components, whilst the solid line is the total emission.\[fig:sed\_bf\]](hd105_best_fit_astrosil_sed.pdf){width="50.00000%"}
Discussion {#sec:dis}
==========
There is a growing body of work examining the resolved extent of debris discs as a function of their host stars’ luminosities [e.g., @2013Booth; @2013Morales; @2014Pawellek; @2015PawKri]. Generally, the resolved extents of debris discs around lower luminosity stars are found to be greater than would be predicted from their blackbody temperatures when compared to debris discs around higher luminosity stars.
We calculate the parameter $\Gamma$ ($R_{\rm disc}/R_{\rm bb}$) for HD 105’s disc using the relationship between stellar luminosity and disc temperature (radius) derived in [@2015PawKri], tacitly assuming a dust composition of ice and astronomical silicate. We find $\Gamma = 2.43~\pm~0.42$, whereas a value of $4.94~\pm~0.42$ would have been predicted from the relationship shown in [@2015PawKri]. The dust grains that constitute the disc would thus be larger than we expect from [@2015PawKri].
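The blackbody radius entering $\Gamma$ follows from radiative equilibrium, $R_{\rm bb} = (278.3\,{\rm K}/T_{\rm dust})^2 \sqrt{L_\star/L_\odot}$ au. A quick sanity check of the quoted $\Gamma$, with an assumed stellar luminosity of $1.2~L_\odot$ and a $\sim$50 K dust temperature (both values are illustrative assumptions, not taken from this section):

```python
import math

def r_blackbody_au(t_dust_K, lstar_lsun):
    """Blackbody equilibrium radius in au:
    R_bb = (278.3 / T_dust)**2 * sqrt(Lstar / Lsun)."""
    return (278.3 / t_dust_K) ** 2 * math.sqrt(lstar_lsun)

# Gamma = resolved radius / blackbody radius, with the 85 au ALMA radius
gamma = 85.0 / r_blackbody_au(50.0, 1.2)
```

This lands near the quoted $\Gamma = 2.43$, well below the $\sim$5 expected from the luminosity scaling relation.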
However, the minimum grain size inferred from our revised SED model is 3.25$^{+3.00}_{-1.00}$ $\mu$m, consistent with the expectations of [@2015PawKri]. We also infer a size distribution exponent of $q = 3.55^{+0.30}_{-0.15}$. Our fit to the SED in the sub-millimetre is dominated by the far-infrared photometry, particularly the PACS 160 $\mu$m point. In combination, the best-fit grain size and slope parameters result in a steeper millimetre SED than inferred from the 9-mm ATCA data point. A relatively bright point source resides at 10\arcsec from HD 105 in the ALMA image. Whilst this could be responsible for contaminating the APEX/LABOCA 880 $\mu$m data point, the ATCA beam was 5\arcsec$\times$4\arcsec, so the point source would be well separated from the disc at 9 mm. Alternatively, stellar chromospheric emission could be contributing to the 9-mm flux measurement; enhanced emission (above that predicted from Rayleigh-Jeans extrapolation) has been observed at millimetre wavelengths for several stars [e.g. @2013Macgregor; @2015Liseau; @2016Chavez]. We might also infer that the dust grains have enhanced emissivity at sub-millimetre wavelengths, an inference supported by the relatively shallow millimetre spectral slope index $\beta = 0.3$. Increasing the emissivity of dust grains at millimetre wavelengths can be achieved in many ways, including increasing the porosity or adopting a core-mantle model.
As an additional data point in the menagerie of resolved debris discs around Sun-like stars, we can examine the properties of the system in contrast to discs around stars of similar spectral type. The Jena catalogue of resolved debris discs[^7] has 16 debris disc host stars that fit this criterion, taken from various literature sources [@2010Krist; @2011Ertel; @2012Krist; @2013Eiroa; @2014bMarshall; @2014Soummer; @2015Kennedy; @2016DodsonRobinson; @2016Choquet; @2016LiemanSifry; @2016Moor; @2016Morales]. Amongst this cohort, the properties of HD 105’s disc are fairly typical, having a radius of $\sim~90~$au (70 to 150 au), a typical dust temperature around 50 K (44 to 112 K), and a fractional luminosity of a few $\times10^{-4}$ (1$\times10^{-5}$ to 5$\times10^{-3}$). However, the constituents of this data set are heterogeneous, the discs having been resolved through various methods (optical, far-infrared, and millimetre) and the host stars having a wide variety of ages.
Through angular averaging we have managed to obtain a detection of the disc in scattered light. The angular averaging method, used here for the first time, facilitated a detection of disc scattered light and a deeper probe of the dust albedo than obtainable from standard analysis of the imaging observations through injection of a disc model; application of the angular averaging method to similar cases of continuum-bright discs that remain undetected in scattered light is expected to yield further detections. The architecture of HD 105’s disc bears a striking resemblance to those of both HD 207129 [@2010Krist; @2011Marshall; @2012Loehne] and HD 377 [@2016Steele; @2016Choquet]. The albedo determined from the scattered light brightness calculated here is comparable to that found for dust grains in imaged discs of comparable fractional luminosity around nominally similar stars, e.g. HD 207129 [albedo $\sim$ 0.05 @2010Krist], and to that of planetesimals in the Solar system.
Conclusions {#sec:con}
===========
We have presented an analysis combining the interpretation of high resolution stellar spectra alongside new and archival scattered light, far-infrared, and (sub-)millimetre imaging observations of HD 105 and its circumstellar disc. In combination, these observations have provided the basis for a more detailed interpretation of the system, including its architecture and evolutionary state.
From a Bayesian approach applied to stellar evolution models we confirm the star’s youth, with an age of [50 $\pm$ 16 Myr]{}. Additionally, we find activity indicators consistent with a pre-main-sequence evolutionary stage. The disc’s planetesimal belt is spatially resolved at millimetre wavelengths. The best-fitting architecture is a single annulus with a semi-major axis of 85 au and a width of 30 au at a moderate inclination of 50$\degr$. The disc is more compact than would be predicted from the application of simple stellar luminosity-disc radius relationships, suggesting that the dust grains comprising the disc are either large, or efficient emitters at (sub-)millimetre wavelengths [e.g. through increased porosity @2003delBurgo].
There is no evidence of any offset between the planetesimal belt and the stellar position, nor of non-axisymmetric structure in the disc, either of which might imply the presence of a low mass companion interacting with the disc. No evidence of significant CO 2-1 line emission was found from the area of the disc, consistent with recent detections of gas in young ($<$ 60 Myr) debris discs having so far been confined to discs around A-type stars.
Using an angular averaging technique to measure the scattered light radial profile, we have obtained a first detection of HD 105’s debris disc in scattered light, determining a value for the dust albedo of between 0.06 and 0.15. The application of this new technique to similar extant data sets, where the spatial extent of the disc is known from continuum emission but scattered light imaging has proven fruitless, should readily yield additional detections of debris discs in scattered light. This will provide an expanded pool of systems from which inferences can be drawn regarding their scattered light and thermal emission properties.
[Comparison of the dust albedo inferred from the scattered light with simple grain models of water ice/astronomical silicate compositions allows us to infer that the dust grains are consistent either with pure astronomical silicate, as is typically assumed for debris dust, or with a moderately icy composition, as might be expected for material lying well beyond the system’s snow-line and as found in SED modelling of a number of other debris discs.]{}
[We thank the anonymous referee for their constructive criticism that improved the manuscript.]{}
*Herschel* is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2012.1.00437.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
This research has made use of NASA’s Astrophysics Data System.
[This paper has made use of the Python packages [astropy]{} [@2013AstroPy; @2018AstroPy], [SciPy]{} [@SciPy], [matplotlib]{} [@2007Hunter], and [Hyperion]{} [@2011Robitaille].]{}
[JPM acknowledges research support by the Ministry of Science and Technology of Taiwan under grants MOST104-2628-M-001-004-MY3 and MOST107-2119-M-001-031-MY3, and Academia Sinica under grant AS-IA-106-M03.]{}
EC acknowledges support from NASA through Hubble Fellowship grant HF2-51355 awarded by STScI, which is operated by AURA, Inc. for NASA under contract NAS5-26555, for research carried out at the Jet Propulsion Laboratory, California Institute of Technology. This work is based on data reprocessed as part of the ALICE program, which was supported by NASA through grants HST-AR-12652 (PI: R. Soummer), HST-GO-11136 (PI: D. Golimowski), HST-GO-13855 (PI: E. Choquet), HST-GO-13331 (PI: L. Pueyo), and STScI Director’s Discretionary Research funds.
CdB acknowledges that this work has been supported by Mexican CONACyT research grant CB-2012-183007
GMK is supported by the Royal Society as a Royal Society University Research Fellow.
LM acknowledges support from the Smithsonian Institution as a Submillimeter Array (SMA) Fellow.
[^1]: The Cornell Atlas of Spitzer/IRS Sources (CASSIS) is a product of the Infrared Science Center at Cornell University, supported by NASA and JPL.
[^2]: *Herschel* is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
[^3]: [hipe]{} is a joint development by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS and SPIRE consortia.
[^4]: <http://almascience.eso.org/aq/>
[^5]: <https://archive.stsci.edu/prepds/alice/>
[^6]: ESO program ID 096.C-0388(A)
[^7]: <http://www.astro.uni-jena.de/index.php/theory/catalog-of-resolved-debris-disks.html>
---
abstract: 'We use the X-ray light curve, the column density toward the hard X-ray source, and the emission measure (density squared times volume) of the massive binary system $\eta$ Carinae to determine the orientation of its semi-major axis. The source of the hard X-ray emission is the shocked secondary wind. We argue that, by itself, the observed X-ray flux cannot teach us much about the orientation of the semi-major axis. Minor adjustment of some unknown parameters of the binary system allows us to fit the X-ray light curve with almost any inclination angle and orientation. The column density and X-ray emission measure, on the other hand, impose strong constraints on the orientation. We improve our previous calculations and show that the column density is more compatible with an orientation where for most of the time the secondary$-$the hotter, less massive star$-$is behind the primary star. The secondary comes closer to the observer only for a short time near periastron passage. The ten-week X-ray deep minimum, which results from a large decrease in the emission measure, implies that the regular secondary wind is substantially suppressed during that period. This suppression most likely results from accretion of mass from the dense wind of the primary luminous blue variable (LBV) star. The accretion from the equatorial plane might lead to the formation of a polar outflow. We suggest that the polar outflow contributes to the soft X-ray emission during the X-ray minimum; the other source is the shocked secondary wind in the tail. The conclusion that accretion occurs at each periastron passage, every five and a half years, implies that accretion occurred at a much higher rate during the Great Eruption of $\eta$ Car in the 19th century. This has far reaching implications for major eruptions of LBV stars.'
author:
- Amit Kashi and Noam Soker
title: 'USING X-RAY OBSERVATIONS TO EXPLORE THE BINARY INTERACTION IN ETA CARINAE'
---
INTRODUCTION {#sec:intro}
============
The $P=5.54 \yr$ ($P =2022.7 \pm 1.3~$d; Damineli et al. 2008a) periodicity of the massive binary system is observed in the entire electromagnetic band (e.g., radio, Duncan & White 2003; IR, Whitelock et al. 2004; visible, van Genderen et al. 2006, Fernandez Lajus et al. 2008; emission and absorption lines, Damineli et al. 1997, 2008a, b; X-ray, Corcoran 2005). The periodicity follows the 5.54 years periodic change in the orbital separation in this highly eccentric, $e \simeq 0.9$, binary system (e.g., Hillier et al. 2006).
It is generally agreed that the orbital plane lies in the equatorial plane of the bipolar structure$-$the Homunculus, such that the inclination angle (the angle between a line perpendicular to the orbital plane and the line of sight) is $i=42$ (Davidson et al. 2001; Smith 2002). However, there is a disagreement about the orientation of the semimajor axis in the orbital plane$-$the periastron longitude. We will use the commonly used periastron longitude angle $\omega$: $\omega=0^\circ$ for a case when the secondary is toward the observer at an orbital angle of $90^\circ$ before periastron, and so on, as summarized in equation (\[eq:omega\]), $$\label{eq:omega}
\omega =
\begin{cases}
~~0^\circ & \qquad {\rm secondary~toward~observer~}90^\circ~{\rm before~periastron}
\\
~90^\circ & \qquad {\rm secondary~toward~observer~at~periastron}
\\
180^\circ & \qquad {\rm secondary~toward~observer~}90^\circ~{\rm after~periastron}
\\
270^\circ & \qquad {\rm secondary~toward~observer~at~apastron}.
\end{cases}$$
While some groups argue that the secondary (less massive) star is away from us during periastron passages, $\omega=270^\circ$ (e.g., Nielsen et al. 2007; Damineli et al. 2008), others argue that the secondary is toward us during periastron passages, $\omega=90^\circ$ (Falceta-Gonçalves et al. 2005; Abraham et al. 2005; Kashi & Soker 2007, who use the angle $\gamma=90^\circ-\omega$). Other semimajor axis orientations have also been proposed (Davidson 1997; Smith et al. 2004; Dorland 2007; Henley et al. 2008; Okazaki et al. 2008).
In a recent paper, Kashi & Soker (2008; hereafter KS08) examined a variety of observations that shed light on the orientation of the semi-major axis: (1) The Doppler shifts of some He I P-Cygni lines that are attributed to the secondary wind, of one Fe II line that is attributed to the primary wind, and of the Paschen emission lines that are attributed to the shocked primary wind. (2) The hydrogen column density toward the binary system as deduced from X-ray observations by Hamaguchi et al. (2007; hereafter H07). (3) The ionization of surrounding gas blobs by the radiation of the hotter secondary star. KS08 found that all of these support an orientation where for most of the time the secondary$-$the hotter, less massive star$-$is behind the primary star ($\omega=90^\circ$). The secondary comes closer to the observer only for a short time near periastron passage.
In a more recent paper, Parkin et al. (2009, hereafter P09) built a model to fit the X-ray cyclical light curve in the $2-10 \keV$ band as observed by RXTE (Corcoran 2005). From their modelling they deduced that the secondary is away from us at, or somewhat after, periastron ($\omega=270-300^\circ$). This orientation is opposite to that deduced by us in a previous paper (KS08). In the present paper we compare the contradictory conclusions of P09 and KS08. To that end we critically follow the arguments of P09 (sections \[sec:xray\] and \[sec:collapse\]), and recheck and improve some of our previous calculations (section \[sec:N\_H\]). We find severe problems with many of their assumptions, and conclude that their model fails to account for the X-ray observations. We also introduce new calculations, and suggest that a polar outflow is responsible for some of the observed soft X-ray emission during the X-ray minimum (section \[sec:diss\]).
THE X-RAY EMISSION {#sec:xray}
===================
The hard X-ray emission observed in $\eta$ Car is emitted by the shocked secondary wind (Corcoran et al. 2001; Pittard & Corcoran 2002; Akashi et al. 2006, hereafter A06). For constant wind properties, the volume of the shocked secondary wind changes along the orbit as $V \propto r^3$, where $r$ is the orbital separation. The time scale for the shocked gas to flow out of the emitting volume goes as $r$, implying that the amount of gas in the volume goes as $dm_x \propto r$ as well. Therefore, the X-ray intrinsic emission goes as $$L_{xi} =n_e n_p V \Lambda \propto \left( \frac{dm_x}{V} \right)^2 V \propto r^{-1},
\label{eq:lxem}$$ where $\Lambda$ is the emissivity, and $n_e$ and $n_p$ are the electron and proton number densities, respectively. The $L_{xi} \propto r^{-1}$ variation is assumed by P09. There are small variations to this dependence. A06 considered the relative motion of the two stars, through its influence on the ram pressures of the two winds. They found that before periastron the X-ray emission is larger than that given by equation (\[eq:lxem\]), while after periastron it is lower.
The relation $L_{xi} \propto r^{-1}$ is generic to X-ray emission from colliding fast winds. For $\eta$ Car the eccentricity is very high, and the increase in the internal X-ray emission from apastron to periastron is by a factor of $\ga 20$. Any model, and any observer, from whatever direction, would detect a sharp increase in the X-ray emission when the system is near periastron. The direction to the observer would change the observed flux because of the dependence of the column density of the absorbing gas on the direction. However, there are several unknown parameters of the binary system and the winds which can be adjusted to accommodate almost any orientation. Therefore, the X-ray light curve by itself cannot be used to deduce the periastron longitude, or even the inclination angle. This was nicely shown by A06, who could fit the X-ray light curve by using both an inclination of $i=0^\circ$ (observer above the orbital plane) and $i=42^\circ$ with $\omega=180^\circ$, by slightly varying two parameters of the model that are related to the two winds (no fine tuning was required).
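The magnitude of the periastron brightening follows directly from the orbit: with $L_{xi} \propto r^{-1}$, the intrinsic apastron-to-periastron contrast is $(1+e)/(1-e) = 19$ for $e=0.9$. A sketch that obtains $r$ over the orbit by Newton iteration on Kepler's equation (eccentricity as quoted in the text; the semi-major axis is normalized to one):

```python
import numpy as np

def separation(phase, e=0.9, a=1.0):
    """Orbital separation r(phase), phase 0 = periastron.
    Solves Kepler's equation M = E - e*sin(E) by Newton-Raphson,
    then r = a * (1 - e*cos(E))."""
    M = 2.0 * np.pi * np.asarray(phase, dtype=float)
    E = M.copy()
    for _ in range(60):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return a * (1.0 - e * np.cos(E))

r = separation(np.linspace(0.0, 1.0, 1001))
contrast = r.max() / r.min()     # equals (1 + e) / (1 - e) for e = 0.9
```

The contrast depends only on the eccentricity, not on the viewing direction, which is why every observer sees a sharp periastron brightening regardless of orientation.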
There is a very deep X-ray minimum lasting for $\sim 10$ weeks following periastron passage (Corcoran 2005). In all models of the kind discussed here, the X-ray emission during the $\sim 10$ week minimum is fitted by simply *assuming* this minimum. Therefore, the behavior of the X-ray emission during the X-ray minimum, and a few days before and after, cannot be used to discriminate between different orientations. P09 reject the $\omega =90^\circ$ semi-major axis orientation ($\theta=180^\circ$ in their notation) based only on the behavior during the X-ray minimum. They presented only the case $\omega = 90^\circ$ with $i = 90^\circ$, rather than taking $i=42^\circ$.
They also examined our preferred case with $(i, \omega)=(42^\circ, 90^\circ)$, and found the resulting flux at phase $0.98$ to be $\sim 5$ times below the observed one (Parkin, R., private communication 2009). However, there are problems with their calculation. First, P09 assume the emission measure to increase as $EM\propto r^{-1}$, where $r$ is the orbital separation. However, the increase in the emission measure is smaller. For example, at phase $0.99$ the orbital separation is $16$ times smaller than at apastron, but the emission measure is only $\sim 7$ times larger than the apastron value (H07). In addition, before periastron the two stars approach each other, an effect that increases the X-ray emission for constant wind properties, e.g., by $\sim 30 \% $ at phase $0.98$ (A06); this effect was not considered by P09.
The conclusion is that the secondary wind starts being disturbed weeks before periastron, and the emission measure does not increase as much as assumed by P09. To explain the observed flux, the column density cannot increase by the large factor predicted by the $\omega\simeq 270 ^\circ$ periastron longitude. A shallower increase in the column density is reproduced by our preferred periastron longitude $\omega = 90 ^\circ$.
We can summarize this section by stating that fitting the X-ray light curve with a simple model based on the intrinsic X-ray emission alone cannot teach us much about the orientation. Almost any inclination and periastron orientation can be fitted with some adjustment of the poorly known binary and wind parameters.
THE COLUMN DENSITY {#sec:N_H}
==================
In this section we discuss the hydrogen column density as deduced from X-ray observations by H07. This section is based on the calculations of KS08, but significant improvements are added.
We start by describing the observations upon which we later rely. The column density toward the hot gas was deduced by H07 using their XMM-Newton observations. The binary system itself and its close surroundings are not resolved, but they are distinguished from the extended X-ray emission in the XMM-Newton observations. The XMM-Newton observations cover a very small fraction of the orbit. The RXTE observations, on the other hand, cover more than 2 orbits (Corcoran 2005), but do not have the spatial resolution to separate the central source from the extended source. In any case, most of the X-ray emission during most of the orbit results from the binary system.
We shall focus our discussion on the hot $>5 \keV$ component. At phase $0.47$ the column density toward this component is $N_H=1.7\pm 0.3 \times 10^{23} \cm^{-2}$, as deduced by XMM-Newton observations (H07). All the $N_H$ observations and their errors as stated by H07 appear in Fig. \[fig:NH\]. Though the compact source dominates the emission, the uncertainty can be somewhat larger due to the combination of multiple spectral components in the data. Our confidence in the column density values derived by H07 is also based on their finding that the column density did not vary between phases $0.47$ and $0.923$.
The binary parameters are as in our previous paper (where references are given): The assumed stellar masses are $M_1=120 M_\odot$ and $M_2=30 M_\odot$, the eccentricity is $e=0.9$, and the orbital period is $P=2024 \days$. The stellar winds’ mass loss rates and terminal velocities are $\dot M_1=3 \times 10^{-4} M_\odot \yr^{-1}$, $\dot M_2 =10^{-5} M_\odot \yr^{-1}$, $v_{\rm 1,\infty}=500 \km \s^{-1}$ and $v_{\rm 2,\infty}=3000 \km \s^{-1}$. For these parameters, the half opening angle of the winds collision region (WCR) near apastron is $\phi_a \simeq 60 ^\circ$ (A06). For the inclination angle we take $i = 42^\circ$.
H07 modeled the variable X-ray emission with two components: a hot component that explains all the emission above $5 \keV$, and an extra soft component that contributes to the emission below $5 \keV$. Near apastron the hot component has a temperature of $kT=3.3 \keV$, while the extra soft component has a temperature of $kT = 1.1 \keV$. In KS08 we studied only the hot component, because we know this emission comes from the strongly shocked (perpendicular shock) secondary wind before it suffers any adiabatic cooling. Therefore, we can safely estimate its location to be close to the stagnation point of the colliding winds (the apex), but somewhat closer to the secondary. The source and location of the $kT \sim 1 \keV$ gas, on the other hand, are less secure, but it probably resides in an extended region. The very important thing we do learn from the H07 analysis of the X-ray emission by the $kT \sim 1 \keV$ gas is that the contribution of absorbing gas around the wind interaction region to the column density is $N_{H-e} \la 5 \times 10^{22} \cm^{-2}$. This implies that any column density of $N_H > 5 \times 10^{22} \cm^{-2}$ must come from material close to the binary system, within $\sim 100 \AU$.
P09 tried to fit the RXTE light curve of the emission in the $2-10 \keV$ band (Corcoran 2005). The temperature of the extra soft component is $kT_{\rm soft} \simeq 1.1 \keV$, and its contribution to the $2-10 \keV$ band is expected to be small relative to that of the hot ($kT=3.3 \keV$) component (e.g., Sarazin & Bahcall 1977). Indeed, the hot component contributes $70 \%$ of the observed X-ray emission in the $2-10 \keV$ band (deduced from figures 8 and 9 of H07). Therefore, in fitting the column density based on the X-ray emission in the $2-10 \keV$ band, one should compare it to the column density toward the hot component of H07 (their table 5), rather than to the entire spectrum of H07 (the entire spectrum studied by H07 extends below $2 \keV$). In any case, we shall stay with the hot component as it is better defined.
The column density toward the hot gas at phase $0.47$ is $N_H=1.7\times 10^{23} \cm^{-2}$. This is hard to explain in a model where the secondary is toward us during this phase, i.e., $\omega \sim 270^\circ$. This is because the opening angle of the conical shell (the collision surface of the two winds) is $\phi_a \simeq 60^\circ$, while the inclination angle is $i\simeq 42^\circ$. This implies that $90^{\circ} -i< \phi_a$, such that our line of sight toward the shocked secondary wind would go through the undisturbed secondary wind. The column density through this low density gas is very low. Instead, we suggest that the secondary star resides on the far side during apastron, and the column density includes the undisturbed primary wind as well as the shocked primary wind.
To calculate the column density for our preferred orientation of $\omega=90^\circ$, we follow KS08 and use the geometry drawn schematically in Fig. \[fig:NHgeo\]. The contact discontinuity shape is approximated by a hyperboloid, located at a distance $D_{g2}\simeq0.3 r$ from the secondary at the stagnation point (the stagnation point is the point where the two winds’ momenta balance each other).
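The $D_{g2}\simeq 0.3 r$ figure follows from the standard momentum balance of colliding winds: with momentum ratio $\eta = \dot M_2 v_2 / (\dot M_1 v_1)$, the stagnation point lies at a fraction $\sqrt{\eta}/(1+\sqrt{\eta})$ of the separation from the secondary. Evaluated with the wind parameters listed above:

```python
import math

def stagnation_fraction(mdot1, v1, mdot2, v2):
    """Distance of the stagnation point from the secondary, as a fraction
    of the orbital separation, from wind ram-pressure balance."""
    eta = (mdot2 * v2) / (mdot1 * v1)      # wind momentum ratio
    return math.sqrt(eta) / (1.0 + math.sqrt(eta))

# Mdot in Msun/yr and v in km/s, as given in this section
frac = stagnation_fraction(3e-4, 500.0, 1e-5, 3000.0)
```

Only the ratio of the wind momenta enters, so the units of the mass-loss rates and velocities cancel as long as they are consistent.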
Our model of the primary wind includes a weak pre-shock magnetic field. In the post-shock region the gas is highly compressed, and the magnetic field becomes dominant. The magnetic pressure limits the compression of the post-shocked primary wind. The uncertainties in the intensity and geometry of the magnetic field introduce large uncertainties into the model (Kashi & Soker 2007a). The role of the magnetic field is parameterized by $\eta_B$, the ratio between the pre-shock magnetic pressure and the ram pressure (see equations 9-11 in Kashi & Soker 2007a). Following the model for the post-shocked primary wind presented in Kashi & Soker (2007a), we use the quantity $$\label{eq:fm}
f_m =
\begin{cases}
v_{\rm wind1} \sin^2\psi & \qquad v_{\rm wind1} \sin^2\psi < \left(\frac{3}{\eta_B \sin^2\psi}\right)^{1/2}
\\
\left(\frac{3}{\eta_B \sin^2\psi}\right)^{1/2} & \qquad v_{\rm wind1} \sin^2\psi \geq \left(\frac{3}{\eta_B \sin^2\psi}\right)^{1/2}
\end{cases}$$ where $\psi$ is the angle between the slow wind velocity and the primary wind shockwave, $v_{\rm wind1}$ is the relative speed between the secondary and the primary wind, given in equation (\[eq:vwind1\]) below, and $\eta_B=0.002$ is a parameter (Kashi & Soker 2007a).
The value of $\eta_B=0.002$ implies that the magnetic field has a negligible role before the shock, but not after it. The hydrogen number density of the post-shocked primary wind is $$n_H = \frac{0.43f_m \dot M_1}{4 \pi r_{1s}^2 v_{\rm wind1} \mu m_H } .
\label{eq:nH}$$ The compression factor, together with a few more assumptions, allows us to calculate the velocity of the post-shock primary wind flowing out of the shock region, and the width of the conical shock, $d_p$ (see equation 12 in Kashi & Soker 2007a). From the width we calculate the values of $l_{out}$ and $l_{in}$, defined in Fig. \[fig:NHgeo\].
The source of the hard X-ray emission is the post-shocked secondary wind (Pittard & Corcoran 2002; Corcoran 2005; A06), taken here at the point marked in Fig. \[fig:NHgeo\] by ‘X-ray source’. This point is located at a distance of $(1-u)D_{g2}$ from the stagnation point, or a distance of $uD_{g2}$ from the secondary. The X-ray emitting region is more extended, but it does not extend to large distances from the secondary, so our assumption is adequate. We assume $u = 0.7$. As evident from the figure, the column density has two main components: the post-shocked primary wind component, $N_{H,\rm{shock}}$ (the conical shell), and the undisturbed, freely expanding, primary wind component ($N_{Hi1}$). We calculate the contribution of each component to the total column density ($N_{H,\rm{tot}}$) as a function of orbital angle $\theta$ ($\theta=0$ at periastron).
As in our previous papers, we approximate the shape of the colliding winds conical shell as a hyperboloid. In the present study we consider some additional effects, listed next, that make the geometry more complicated and the results more accurate. Similar considerations were used by us (Kashi & Soker 2009b) to explain the complex P Cygni profile of the He I $\lambda 10830$[Å]{} high excitation line.
\(1) **Wind acceleration.** We take the primary wind acceleration into account. We describe it as a $\beta$-profile $$v_1(r_1)=v_s+(v_{\rm 1,\infty}-v_s)\left(1-\frac{R_1}{r_1}\right)^{\beta} ,
\label{eq:v1}$$ with a parameter $\beta=1$, where $v_s=20 \km \s^{-1}$ is the sound velocity at the primary surface, $v_{\rm 1,\infty}=500 \km \s^{-1}$ is the primary wind terminal velocity (already defined earlier), $r_1$ is the distance from the primary center, and $R_1 = 180 R_\odot$ is the primary radius. The acceleration of the primary wind becomes important close to periastron, when the binary separation becomes only slightly larger than the primary radius. Being slower and denser close to the surface, the accelerating primary wind yields a denser post-shocked primary wind, and affects other variables in this already complex geometry. Its lower velocity makes the hyperboloid asymptotic opening angle wider, and its rotation more pronounced, as we now explain.
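For concreteness, the $\beta$-profile of equation (\[eq:v1\]) can be evaluated with a minimal numerical sketch; the parameter values follow the text, and taking the primary radius as $180 R_\odot$ is our reading of the adopted stellar parameters:

```python
# Minimal sketch of the beta-law wind profile of equation (eq:v1).
# Parameter values follow the text; R_1 = 180 R_sun is an assumed reading
# of the primary radius.
R_SUN_KM = 6.957e5                    # solar radius in km
R1_KM = 180.0 * R_SUN_KM              # assumed primary radius

def v1(r1_km, v_s=20.0, v_inf=500.0, beta=1.0):
    """Primary wind speed (km/s) at distance r1_km from the primary center."""
    if r1_km <= R1_KM:
        return v_s
    return v_s + (v_inf - v_s) * (1.0 - R1_KM / r1_km) ** beta

# The wind is still far from its terminal speed a few stellar radii out,
# which is why the acceleration matters near periastron:
print(round(v1(2 * R1_KM)))   # 260 km/s at r = 2 R_1
print(round(v1(10 * R1_KM)))  # 452 km/s at r = 10 R_1
```

At twice the stellar radius the wind has reached only about half of $v_{\rm 1,\infty}$, illustrating the slow, dense flow the secondary encounters near periastron.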
\(2) **Hyperboloid asymptotic opening angle.** The radial (along the line joining the two stars) component of the relative velocity between the secondary star and the primary wind is $v_1-v_r$, where $v_r$ is the radial component of the orbital velocity; $v_r$ is negative when the two stars approach each other. The relative speed between the secondary and the primary wind is $$v_{\rm wind1} = \left[v_\theta^2 + (v_1-v_r)^2 \right]^{1/2},
\label{eq:vwind1}$$ where $v_\theta$ is the tangential component of the orbital velocity. The orbital motion and the variation of the primary wind velocity with distance from the primary influence the hyperboloid asymptotic opening angle $\phi_a$. We use the expression given by Eichler & Usov (1993) $$\phi_a \sim 2.1
\left(1-\frac{\eta_w^{{4}/{5}}}{4}\right)\eta_w^{{2}/{3}} ,
\label{eq:phia}$$ where $$\eta_w \equiv \sqrt{\frac{\dot M_2 v_{\rm 2,\infty}}{\dot M_1 v_{\rm wind1}}}
\label{eq:eta}$$ (note that this definition of $\eta_w$ is different from the one used by P09, where $\eta_{P09}=\eta_w^2$). $\phi_a$ has a minimum value of $\sim 58^\circ$ $33$ days before periastron, and reaches a maximum of $\sim 72^\circ$ $25$ days after periastron.
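The sensitivity of $\phi_a$ to the wind parameters can be checked with a short numerical sketch; the fiducial mass loss rates and wind speeds below are assumptions for illustration, not values fixed by this section:

```python
import math

def phi_a(mdot1, v_wind1, mdot2, v2_inf):
    """Asymptotic opening angle (radians) of the conical shell, using the
    Eichler & Usov (1993) fit of equations (eq:phia)-(eq:eta)."""
    eta_w = math.sqrt((mdot2 * v2_inf) / (mdot1 * v_wind1))
    return 2.1 * (1.0 - eta_w ** 0.8 / 4.0) * eta_w ** (2.0 / 3.0)

# Assumed fiducial values: Mdot_1 = 3e-4 and Mdot_2 = 1e-5 Msun/yr,
# v_2,inf = 3000 km/s, v_wind1 = 500 km/s (units cancel inside eta_w).
print(round(math.degrees(phi_a(3e-4, 500.0, 1e-5, 3000.0))))  # 61 deg
```

With these values the angle lands inside the $58^\circ-72^\circ$ range quoted above; a slower (still accelerating) primary wind raises $\eta_w$ and widens the cone, as stated in point (1).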
\(3) **Hyperboloid rotation.** We take into consideration the rotation of the hyperboloid relative to the line connecting the two stars, namely, the winding of the wind collision region (WCR) around the secondary star as the secondary orbits the primary across periastron. This rotation (winding) has a considerable influence close to periastron. We define $\delta \phi$ to be the angle, measured from the secondary, between the direction to the primary and that to the stagnation point (see Soker 2005 for further details) $$\cos (\delta \phi)=\frac{v_1-v_r}{v_{\rm wind1}}.
\label{eq:deltaphi}$$ We find that the maximum value of $\delta \phi$ is obtained $\sim 6$ days after periastron, where it reaches $\simeq 64^\circ$. This makes the rotation of the hyperboloid non-negligible and very important for the more accurate calculation we perform in this paper. The equatorial plane of the geometry described above is presented in Fig. \[fig:hyperboloid\].
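Equations (\[eq:vwind1\]) and (\[eq:deltaphi\]) combine into a two-line calculation; a sketch with purely illustrative velocity values (not taken from our orbital solution):

```python
import math

def winding_angle(v1, v_r, v_theta):
    """delta_phi (degrees): rotation of the hyperboloid axis away from the
    line joining the stars, from equations (eq:vwind1) and (eq:deltaphi)."""
    v_rad = v1 - v_r                      # radial relative velocity (km/s)
    v_wind1 = math.hypot(v_theta, v_rad)  # total relative speed (km/s)
    return math.degrees(math.acos(v_rad / v_wind1))

# Far from periastron the orbital velocity is small and delta_phi is small;
# near periastron v_theta is comparable to the slow, still-accelerating
# primary wind speed, and the winding becomes large. Illustrative numbers:
print(round(winding_angle(500.0, -50.0, 100.0)))  # 10 deg (far from periastron)
print(round(winding_angle(250.0, 100.0, 300.0)))  # 63 deg (near periastron)
```

The second case, with a slow wind and large tangential velocity, reproduces the magnitude of the maximal $\delta\phi \simeq 64^\circ$ found in our full calculation.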
We take the inclination angle to be $i=42^\circ$, and assume that the secondary is away from us during apastron passage ($\omega=90^\circ$). This geometry explicitly determines the direction from which the system is observed (i.e., the line of sight) at each orbital phase. For every orbital angle $\theta$ we calculate the relevant direction angle $\xi$ to the observer. Considering the orientation of the conical shell at that orbital angle, we calculate the thickness of the conical shell in that direction and integrate $n_H$ over the width to find the column density of the first component: $$N_{H,\rm{shock}} = \int_{l_{\rm{out}}}^l n_H\,dl \label{eq:NHshock},$$ where $l=l_{\rm{out}}+l_{\rm{in}}$ (see Fig. \[fig:NHgeo\]). The second component contributing to the column density, the undisturbed primary wind, is calculated from the point on the line of sight where the shock terminates, out to infinity (the contribution decreases rapidly with distance) $$N_{Hi1} = \int_l^\infty n_H\,dl \label{eq:NHi1}.$$ To the total column density we added a constant third component of $1 \times 10^{22} \cm^{-2}$ to account for the material residing in the outer regions, e.g., in the Homunculus and in the ISM, the same value as used by P09.
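The undisturbed-wind contribution can be checked at the order-of-magnitude level: for a radial line of sight through an $r^{-2}$ wind of constant speed, the integral of equation (\[eq:NHi1\]) reduces to $\dot M/(4\pi r_0 v \mu m_H)$. A sketch with assumed fiducial values (the mass loss rate, speed, and starting radius below are illustrative assumptions):

```python
import math

MSUN_G = 1.989e33   # g
YR_S = 3.156e7      # s
AU_CM = 1.496e13    # cm
M_H = 1.673e-24     # g

def n_wind(mdot_msun_yr, v_kms, r_au, mu=1.3):
    """Hydrogen number density (cm^-3) of an undisturbed r^-2 wind."""
    mdot = mdot_msun_yr * MSUN_G / YR_S
    return mdot / (4 * math.pi * (r_au * AU_CM) ** 2 * v_kms * 1e5 * mu * M_H)

def column_radial(mdot_msun_yr, v_kms, r0_au, mu=1.3):
    """N_H (cm^-2) integrated radially from r0 to infinity (constant v)."""
    return n_wind(mdot_msun_yr, v_kms, r0_au, mu) * r0_au * AU_CM

# Assumed values: Mdot_1 = 3e-4 Msun/yr, v = 500 km/s, starting 10 AU out:
print(f"{column_radial(3e-4, 500.0, 10.0):.1e}")  # of order 1e23 cm^-2
```

This crude estimate already yields a column of order $10^{23} \cm^{-2}$, comparable to the value observed toward the hot component near apastron.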
The three column density components and the total column density are plotted in Fig. \[fig:NH\]. The column density toward the hot component based on the spectrum above $5 \keV$, $N_H[>5 \keV]$, from H07 is also plotted.
Our model reproduces the results of H07 to within a factor of two, and is in agreement with their qualitative behavior. This is achieved without any parameter fitting: we simply take the values of the different parameters as in our previous papers, where some parameters were adjusted to fit other observations of $\eta$ Car, such as the radio emission, some helium lines, and more.
Our results clearly show that from our preferred line of sight ($\omega=90^\circ$, $i=42^\circ$) the column density hardly changes during most of the orbital cycle and can supply the required high column density, in accord with observations. When the system approaches periastron passage there is a fast increase of $N_{H,\rm{shock}}$ and $N_{Hi1}$, followed by a decrease after periastron passage.
One thing must be kept in mind. The ten-week X-ray minimum cannot be explained by the model; a different ingredient must be incorporated, one that extinguishes the conical shell. We take this process to be accretion onto the secondary star. Therefore, our calculation of $N_H$ with the WCR included is not applicable during the minimum, i.e., the phase period $\sim 0-0.035$. The flow structure around the secondary is more complicated, as it includes the accretion process, and possibly a polar outflow (jets) that results in the extra soft source during the X-ray minimum (see section \[sec:ex\]). The column density during the minimum might be better represented by an undisturbed primary wind filling the entire space. As we are not sure where the polar outflow is shocked and forms the X-ray emitting regions, we take three possibilities for its location: at the location of the secondary itself (which is the average of two opposite jets); a region above the primary (perpendicular to the equatorial plane) at $y_x=0.25r$; and at $y_x=0.5r$. In Fig. \[fig:min\] the column density to the X-ray emitting region in these three cases is plotted during the X-ray minimum, and $20$ days (a phase of $0.01$) before and after the minimum.
We note that during the X-ray minimum the column densities toward the hard and soft X-ray sources are about equal (a ratio of $\sim 1$), compared to a ratio of $> 3$ near apastron. The results presented in Fig. \[fig:min\] suggest that the hard and soft X-ray components reside close to the secondary, and do not originate in the tail of the WCR. We also note that the observation point at phase $0.042$ is better fitted with a model that includes the conical shell (Fig. \[fig:NH\]) than with a spherical primary wind, also shown as the solid blue line in Fig. \[fig:min\]. This point is well after the X-ray minimum, so this is an expected result. The point at phase $-0.01$ can be marginally fitted if the WCR does exist, by changing somewhat the unknown parameters. However, it seems we can better fit it by assuming that the WCR is already highly disrupted at that time. Namely, the collapse of the WCR starts around phase $-0.01$ or somewhat earlier, as suggested by A06.
We note that the observed column density at phase $0.47$ of $N_H=17\times10^{22} \cm^{-2}$ (H07) was measured when the $2-10 \keV$ emission was $10-15 \%$ above its average value for that time (Corcoran 2005). This could result from a higher density of the primary wind. According to the model of A06 the dependence of the X-ray emission on the primary mass loss rate is $L_x \sim \dot M_1^{1/2}$. Namely, it is quite possible that the primary mass loss rate, and hence $N_H$, were higher than the average value near apastron by a factor of $\sim 1.25$, and this is the reason H07 did not find the column density to increase between phases $0.47$ ($-0.53$) and $0.92$ ($-0.08$).
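Inverting the $L_x \sim \dot M_1^{1/2}$ scaling gives the factor quoted here; a one-line check:

```python
# Inverting L_x ~ Mdot_1^(1/2): a 10-15% flux excess implies a mass loss
# rate (and hence N_H) higher by the square of that factor.
for excess in (1.10, 1.15):
    print(round(excess ** 2, 2))  # 1.21 and 1.32, bracketing ~1.25
```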
Another effect not considered by us, one that introduces more variation both in time and in the absorption by the conical shell along different directions, is the corrugated structure of the shocked primary wind that results from instabilities (Pittard & Corcoran 2002; Pittard et al. 1998; Okazaki et al. 2008; P09).
As mentioned earlier, from behind the secondary shock (namely, if the secondary is toward us near apastron), it is not possible to account for the $N_H=17\times10^{22} \cm^{-2}$ column density near apastron. It cannot come from the nebula, as the nebula can supply $5 \times10^{22}\cm^{-2}$ at most, as deduced from the column density toward the low temperature gas (the extra soft component; H07). Also, from figure 2 of P05 we learn that when the system is at apastron, there is a region extending to $\sim 500 \AU$ behind the secondary star that is cleared of the primary wind by the tenuous secondary wind. Namely, when observed through the secondary wind region the column density of the primary wind is very low, practically negligible. In the case of $\omega\simeq 270 ^\circ$ (which we oppose), one would indeed observe through the secondary wind, because $90^\circ -i < \phi_a$.
To demonstrate this point, we have repeated our calculation for the opposite periastron longitude, $\omega=270^\circ$. The results are shown in Fig. \[fig:NH270\]. We see that in that case the column density long before and after periastron is highly underestimated. Moreover, close to periastron the calculated hydrogen column density reaches $\sim 3\times 10^{24}\cm^{-2}$ (outside the plotted region of Fig. \[fig:NH270\]). This is about 5 times the observed value, while our orientation yields values less than twice the observed value. Even though we admit our calculation of $N_H$ is not applicable during the X-ray minimum, we do crudely reproduce the increase factor of the column density between apastron and periastron as observed by H07. In the case of $\omega=270^\circ$, our calculations show a huge jump in the column density, not compatible with observations. This is most pronounced at phase $0.923$ ($-0.077$), when the calculated column density for $\omega=270^\circ$ is too low to explain the observed value, and the minimum has not started yet.
THE COLLAPSE OF THE WIND COLLISION REGION ONTO THE SECONDARY STAR {#sec:collapse}
=================================================================
P09 discussed the possibility that the WCR collapses onto the secondary. In their discussion they mentioned that if the collapse occurs, then the secondary wind continues to be blown on the other side of the secondary. They do not consider accretion. We now show that the regular secondary wind cannot be blown during the event, not even half of it, and that if the WCR collapse occurs, then accretion is inevitable.
The residual wind {#sec:residual}
-----------------
P09 assumed that during the X-ray minimum about half of the secondary wind continues to be blown at $\sim 3000 \km \s^{-1}$ from the side of the secondary not facing the primary. They then took the X-ray emission to come from regions away from the secondary, as if the colliding winds region were unaffected. We show that this cannot be the case.
Let the blown wind possess a fraction $\zeta$ of the regular wind, such that the mass loss rate is $\dot M_{2m} = 10^{-5} \zeta M_\odot \yr^{-1}$. From figure 3 of P09 we see that during the X-ray minimum the conical shell is wound up in such a way that the secondary wind cannot escape. At phase 1.02 (40 days after periastron passage) the wind can propagate at most $20 \AU$ before encountering the dense primary wind; most of the secondary wind is shocked at a much closer distance of $< 5 \AU$. It does so within $20 \AU/ 3000 \km \s^{-1} \simeq 12$ days, which is shorter than the time since the minimum started, but not by much. The volume of space available for the shocked secondary wind is $\Delta V \sim (20\AU)^3 \simeq 3 \times 10^{43} \cm^{3}$. The mass blown during this time (40 days) is $\Delta M=1.1 \times 10^{-6} \zeta M_\odot$. The proton number density is $$n_p \simeq 3 \times 10^7 \zeta \cm^{-3}.
\label{eq:np}$$ At a temperature of $10^8 \K$ the emissivity is $\Lambda =2.5 \times 10^{-23} \erg \s^{-1} \cm^3$, and the total X-ray luminosity of the bubble is (about half emitted in the $2-10 \keV$ band) $$L_{xm} \simeq 2 \times 10^{35}
\left( \frac{\zeta}{0.5} \right)^2 \erg \s^{-1}.
\label{eq:lx}$$ The emission measure ${\rm EM} \equiv n_p n_e \Delta V$, where $n_e$ is the electron density, is $${\rm EM}_{xm} \simeq 10^{58}\left( \frac{\zeta}{0.5} \right)^2 \cm^{-3}.
\label{eq:em}$$ This emission measure is an order of magnitude larger than that of the hot gas deduced from X-ray observations during the X-ray minimum (H07). If we reduce $\zeta$ much below $\zeta=0.5$, the volume of the shocked secondary wind will be smaller, which partially offsets the lower density. So in order to reduce the emission measure to ${\rm EM}_{xm} \simeq 3 \times 10^{57} \cm^{-3}$, we should have $\zeta < 0.1$, namely, an almost complete shutdown of the secondary wind.
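The chain of estimates in this subsection (mass blown in 40 days, filling a $\sim(20\AU)^3$ volume) can be retraced numerically. In the sketch below the helium correction $n_e \simeq 1.2 n_p$ is our assumption; the results agree with equations (\[eq:np\])-(\[eq:em\]) to within a factor of $\sim 2$, given the rounding of the intermediate quantities:

```python
# Rough re-derivation of the emission-measure argument of this subsection.
MSUN_G = 1.989e33   # g
AU_CM = 1.496e13    # cm
M_H = 1.673e-24     # g

zeta = 0.5
dM = 1.1e-6 * zeta * MSUN_G     # mass blown in ~40 days (g)
dV = (20 * AU_CM) ** 3          # ~3e43 cm^3
n_p = dM / (M_H * dV)           # proton number density (cm^-3)
n_e = 1.2 * n_p                 # assumed helium contribution to electrons
Lambda = 2.5e-23                # erg s^-1 cm^3 at ~1e8 K
EM = n_p * n_e * dV             # emission measure (cm^-3)
L_x = EM * Lambda               # total X-ray luminosity (erg/s)

print(f"n_p ~ {n_p:.1e} cm^-3; EM ~ {EM:.1e} cm^-3; L_x ~ {L_x:.1e} erg/s")
```

Even this crude sketch gives ${\rm EM} \sim 10^{58} \cm^{-3}$ for $\zeta=0.5$, an order of magnitude above the observed value, which is the core of the argument.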
As seen in figure 2 of P09, the volume available for the shocked secondary wind during the X-ray minimum is not closed, and the post-shocked wind will flow out, reducing the density and emission measure. However, the outflow time scale is not much shorter than the duration of the minimum, so this will not reduce the emission measure by much. On the other hand, there are effects that operate to increase the emission measure in the model of P09. (1) The side that blows the wind in their model is opposite to the primary direction. However, because of the winding of the tail (its spiral structure) this side is not toward the tail of the shocked region, and the secondary wind will be shocked at a short distance from the secondary of only several AU on average, at phase 1.02. (2) As mentioned above, the volume of $\Delta V \simeq (20\AU)^3$ is taken from their figure 2, which is drawn for a full-blown wind. If the secondary wind is weaker, the volume will be smaller. (3) As discussed below, the gravity of the secondary will bend the primary wind streamlines toward the secondary. This will increase the ram pressure of the primary wind, and thereby further reduce the volume.
The accretion phase {#sec:acc0}
-------------------
### The inevitability of accretion {#sec:acc}
P09 mentioned that the primary wind comes very close to the secondary, but did not consider accretion according to the Bondi-Hoyle-Lyttleton model; rather, they took the accretion radius to be the secondary radius. This is not consistent with the well established accretion process, where the accretion radius should be the Bondi-Hoyle accretion radius $$R_{\rm acc2} = \frac {2G M_2}{v^2_{\rm wind1}}= 0.2
\left( \frac{v_{\rm wind1}}{500 \km \s^{-1}} \right)^{-2} \AU,
\label{eq:racc2}$$ for a secondary mass of $M_2=30 M_\odot$. The accretion radius was calculated by A06, and was found to be $R_{\rm acc2} \simeq 0.5 \AU$ at phase $1.01-1.02$. This is $\sim 5$ times the radius of the secondary star.
As was discussed by Soker (2005), A06, and Kashi & Soker (2009a), the secondary stellar radiation pressure and wind cannot prevent accretion. We note that Soker (2005) considered the ram pressure of the secondary wind, as well as the radiation pressure. That ram pressure and radiation pressure cannot prevent accretion is true for a smooth wind, and more so as dense blobs are expected to exist in the primary wind due to stochastic mass loss processes and instabilities in the colliding winds (Pittard & Corcoran 2002). P09 mentioned that the secondary wind can destroy the falling dense blobs. This process should be studied in a future paper, but such a study must include the magnetic field within the blobs, which might prevent efficient ablation. Kashi & Soker (2009a) further included the acceleration zone of the primary wind, and found the secondary stellar gravity to be even more important than was found by Soker (2005) and A06. The neglect of the secondary's gravity by P09 makes their results questionable; e.g., their claim that no substantial accretion occurs is not supported.
We do note that the accreted mass cannot account for the strong soft X-ray component by itself. The temperature of the strong soft X-ray component during the X-ray minimum is $kT \simeq 0.5-1 \keV$ (H07), about an order of magnitude below that of the hard X-ray component. This temperature is formed in the post-shocked region of gas flowing with a velocity of $650-1300 \km \s^{-1}$. In the case of an inflow the gas is compressed, and a velocity range of $\sim 600-1200 \km \s^{-1}$ is required. In the case of an outflow, adiabatic cooling occurs and the required outflow velocity is $\sim 800-1600 \km \s^{-1}$. The free fall velocity onto the secondary star is $v_{\rm ff} = 750 (M_2/30 M_\odot)^{1/2}(R_2/20 R_\odot)^{-1/2} \km \s^{-1}$. However, the accreted flow will be shocked somewhere above the surface, and it seems it cannot account for the temperature of the soft component.
Nor can the accreted mass account for the emission measure. The high emission measure of the soft component is $EM_s=n_e n_p V \simeq 10^{59} - 10^{60} \cm^{-3}$ (H07), with an average of $\sim 10^{59.5} \cm^{-3}$. It is not clear how long this high emission measure phase lasts, as H07 give only two measurements during the X-ray minimum. We assume this high emission measure phase lasts 5 weeks. During a time period of $t_m \simeq 5$ weeks the total intrinsic X-ray emission (before absorption) is $E_{xs}=EM_s \Lambda t_m \simeq 5 \times 10^{43} \erg$. In Kashi & Soker (2009a) we estimated the accreted mass during the $10~{\rm weeks}~=0.2 \yr$ X-ray minimum to be $M_{\rm acc} \simeq 0.4 - 3.3 \times 10^{-6} M_\odot$. The available energy is at most $E_a \simeq G M_2 M_{\rm acc}/R_2 \simeq 10^{43} \erg$, for $M_{\rm acc} = 2 \times 10^{-6} M_\odot$. The accreted gas therefore cannot account for the high emission measure of the soft X-ray component. An alternative is discussed in section \[sec:ex\].
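The energy budget of this paragraph can be retraced as follows. The 5-week duration is the assumption made above, and for $\Lambda$ we reuse the fiducial emissivity from section \[sec:residual\] (the soft component's actual $\Lambda$ may differ by a factor of a few); with these numbers the shortfall is a factor of a few rather than five, but the conclusion is the same:

```python
# Energy-budget check: can the accreted mass power the soft X-ray component?
# Compare E_a ~ G M_2 M_acc / R_2 against E_xs ~ EM_s * Lambda * t_m.
G = 6.674e-8        # cgs
MSUN_G = 1.989e33   # g
RSUN_CM = 6.957e10  # cm

# Soft-component radiated energy over an assumed 5-week bright phase:
EM_s = 10 ** 59.5            # cm^-3 (H07 average)
Lambda = 2.5e-23             # erg s^-1 cm^3 (assumed, reused from above)
t_m = 5 * 7 * 86400.0        # 5 weeks in seconds
E_xs = EM_s * Lambda * t_m

# Gravitational energy released by the accreted mass:
M_acc = 2e-6 * MSUN_G
E_a = G * 30 * MSUN_G * M_acc / (20 * RSUN_CM)

print(f"E_xs ~ {E_xs:.0e} erg vs E_a ~ {E_a:.0e} erg")  # accretion falls short
```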
### X-ray emission during the accretion phase {#sec:ex}
In our calculations (here and in KS08) we studied the properties of the hot gas, i.e., the component that is the source of the X-ray emission above $5 \keV$ according to H07. Let us examine the properties of the extra soft component. Its temperature is $kT \simeq 0.5-1.1 \keV$, and its contribution to the X-ray emission is larger than that of the hot component in the $<3.2 \keV$ X-ray band (H07). In our calculations (here and in KS08) the column density is calculated to a region near the stagnation point of the colliding winds. This is the region where the secondary wind shock is strong and no adiabatic cooling has occurred yet, and hence hard X-ray emission is expected. For all cases and parameter choices, the average calculated column density during the X-ray minimum is above the observed value. This is true for both the hot and the extra soft component. This suggests that the X-ray emission region is located at a somewhat larger distance from the secondary star.
During the minimum the emission measure of the hard X-ray decreases substantially, while that of the soft X-ray increases by more than an order of magnitude (H07). As the emission measure of the soft component increases, so does the column density toward its emission region. The column densities of the hard and soft component become comparable; at all other phases the column density of the soft component is smaller than that of the hard component. The high column density shows that the emitting region cannot be too far from the secondary star. We suggest that the regular winds collision does not occur during the X-ray minimum. Rather, there is a polar outflow (jets).
During the X-ray minimum the regular secondary wind is substantially suppressed, to only a few percent of its regular intensity. This explains the small emission measure of the hard component. Instead, we suggest that the accretion disk around the secondary star, formed by the accreted primary wind material, forces the outflow into the polar directions. The outflowing material is composed of the secondary wind and accreted gas. In Kashi & Soker (2009a) we estimated the average rate at which the secondary accretes mass from the primary during the X-ray minimum to be $\dot M_{\rm acc} \sim 10^{-5} M_\odot \yr ^{-1}$. We note that the accretion rate is about equal to the mass loss rate of the undisturbed secondary wind. This suggests that the accretion process substantially disturbs the wind, but cannot completely prevent an outflow.
To account for the soft X-ray component, with a total radiated energy of $E_{xs} \simeq 5 \times 10^{43} \erg$ (see section \[sec:acc\]), we can take an average mass loss rate over the 10 weeks of $\dot M_{\rm polar} \sim 10^{-5} M_\odot \yr^{-1}$ with a velocity of $1600
\km \s^{-1}$. P09 found that the post shocked region of gas flowing at this velocity can match the spectrum at X-ray minimum. Namely, the secondary mass loss rate does not change much, but it is forced to the polar direction. The high mass loss rate over a smaller angle results in a less efficient acceleration process, and the terminal speed is about a half of its regular value. This polar outflow cannot escape from the dense primary wind, and most of its kinetic energy is converted to X-ray emission. The presence of a polar outflow in $\eta$ Car, before and during the X-ray minimum, was suggested before, based on the behavior of the He II $\lambda$4686 line (Soker & Behar 2006), and on the Doppler shift of X-ray lines (Behar et al. 2007). Theoretical motivation for a strong secondary polar outflow during and at the end of the X-ray minimum is given by Kashi & Soker (2009a).
DISCUSSION AND SUMMARY {#sec:diss}
======================
We studied the use of the X-ray properties to deduce the orientation of the semi-major axis of $\eta$ Car. It is agreed by most researchers that the inclination of the binary system is $i \simeq 42^\circ$. However, there is a dispute over the direction of the semi-major axis in the equatorial plane, the so-called periastron longitude, or orientation. The definition of the periastron longitude angle in the orbital plane is given in equation (\[eq:omega\]).
In section \[sec:xray\] we argued that the X-ray light curve by itself cannot be used to deduce the orientation. This conclusion is based in part on the results of A06, who showed that with some adjustment of the unknown binary parameters, e.g., the exact wind properties, the X-ray light curve can be fitted using almost any periastron longitude and any inclination angle. The only large differences between the different orientations occur near periastron passage. But there, all models must assume a suppression of X-ray emission, such that the period of $\sim 3$ months around periastron passage is useless in simple models that use only the X-ray light curve.
In section \[sec:N\_H\] we modelled the column density as deduced from X-ray observations by H07. We considered some additional effects that we did not take into account in KS08, which make our modelling more accurate and our conclusion that $\omega=90^\circ$ more reliable. The periastron longitude deduced in P09, $\omega=270^\circ - 300^\circ$, is more or less opposite to our preferred value of $\omega=90^\circ$. We showed that their periastron longitude is not in agreement with observations of the column density, particularly close to apastron.
In section \[sec:residual\] we considered the secondary wind during the X-ray minimum. We showed that the emission measure of the hard X-ray emission as measured by H07 during the X-ray minimum constrains the regular secondary wind, i.e., the one with the same terminal velocity of $\sim 3000 \km \s^{-1}$, to have a mass loss rate of $< 0.1$ times its regular value, and probably only $\sim 0.01$ times its regular value.
The emission measure of the soft X-ray component, on the other hand, is very large during the X-ray minimum (H07). In section \[sec:ex\] we suggested that the source of this emission is a shocked polar outflow (or a collimated polar wind, or two jets). The polar outflow is formed by the accretion process, which focuses the secondary wind to the polar directions, and launches some material from the accretion disk. The inevitability of the accretion process (Kashi & Soker 2009a) was discussed in section \[sec:acc\]. The denser wind along the polar directions makes the acceleration less efficient. This results in a slower outflow, $\sim 1000-2000 \km \s^{-1}$, and softer X-ray emission.
It is important to note that the presence of a polar outflow before and during the X-ray minimum was suggested before (Soker & Behar 2006; Behar et al. 2007; Kashi & Soker 2009a).
To summarize the discrepancy between the model of P09 (who argued for $\omega\simeq 270^\circ$) and ours ($\omega\simeq 90^\circ$), we think that P09 deduced a wrong semimajor axis orientation for the following reasons.
1. They gave heavy weight to the light curve. However, the light curve can be fitted by almost any inclination and orientation angles, with some adjustment of parameters (no fine tuning is required; A06).
2. In the $2-10 \keV$ band the main source ($70 \%$ of the observed flux) of the X-ray emission is the hot component defined by H07. The column density should be the one toward this component, and not toward the entire emitting gas, which also contains a component at a temperature of $\sim 1 \keV$. The column density calculated by P09 is much lower than observed.
3. P09 assumed that the emission measure goes as $r^{-1}$, where $r$ is the orbital separation. However, the emission measure deduced from observations increases by a smaller factor as the system approaches periastron (H07).
4. According to the semimajor axis orientation deduced by P09, near apastron our line of sight goes through the tenuous secondary wind. The column density is then very low, as evident from our Fig. \[fig:NH270\]. The calculation of P09 did not show this low column density.
5. The calculated behavior during the ten-week X-ray minimum (and a few days before and after) cannot be reproduced by the models (neither ours, nor that of A06, nor that of P09), and cannot be used to reject or accept a model; it seems accretion of the primary wind by the secondary is inevitable. In rejecting the $\omega\simeq 90^\circ$ orientation, P09 gave too much weight to the X-ray minimum period.
Recent observations of the RXTE lightcurve of $\eta$ Car reveal an early recovery from the X-ray minimum (Corcoran 2009). While the previous two X-ray minima lasted for 10 weeks, the 2009 X-ray minimum lasted for only $\sim 5$ weeks. The possibility that the X-ray emission can recover several days earlier than in previous cycles was mentioned by us before (section 8.3 in Kashi & Soker 2009a), when we considered small ($\sim 10\%$) fluctuations in the primary wind properties. The early recovery is attributed to a weaker primary wind that allows the secondary wind to recover earlier. In the accretion model, the very early recovery indicates that the major changes in the primary wind are related to the acceleration process (Kashi & Soker 2009a), and that the variations are large, in the sense that the wind reaches its terminal speed much closer to the primary. A quantitative study is the subject of a forthcoming paper. The implication of the accretion process goes beyond present day $\eta$ Car. During the 1837-1856 Great Eruption a mass of $\sim 10-20 M_\odot$ was lost by the primary (Smith et al. 2003; Smith 2006; Smith & Ferland 2007). If accretion occurs at present periastron passages, when the mass loss rate is lower by more than three orders of magnitude than during the Great Eruption, it must have occurred during the Great Eruption along most, or even all, of the orbit (Soker 2001, 2007). The gravitational energy released by the accreted mass could have been the major source of extra energy in the Great Eruption, both in extra radiation and in the wind's kinetic energy (Soker 2007). It is possible that major eruptions of luminous blue variables (LBVs) are related to such accretion events, as the primary loses large amounts of mass.
We thank Ehud Behar for helping us with the calculations of the X-ray emission. We also thank Ross Parkin and Julian Pittard and an anonymous referee for useful comments. This research was supported by grants from the Israel Science Foundation, and from Asher Space Research Institute at the Technion.
Abraham, Z., Falceta-Gonçalves, D., Dominici, T. P., Nyman, L.-A, Durouchoux, P., McAuliffe, F., Caproni, A., & Jatenco-Pereira, V. 2005, A&A, 437, 977
Akashi, M., Soker, N., & Behar, E. 2006, ApJ, 644, 451 (A06)
Behar, E., Nordon, R., & Soker, N. 2007, ApJ, 666, L97
Corcoran, M. F. 2005, AJ, 129, 2018
Corcoran, M. F. 2009, The RXTE X-ray Lightcurve of Eta Carinae, \texttt{http://asd.gsfc.nasa.gov/Michael.Corcoran/eta\_car/etacar\_rxte\_lightcurve/index.html}
Corcoran, M. F., Ishibashi, K., Swank, J. H., & Petre, R., 2001, ApJ, 547, 1034
Damineli, A., Conti, P. S., & Lopes, D. F. 1997, NewA, 2, 107
Damineli, A., Hillier, D. J., Corcoran, M. F., et al. 2008, MNRAS, 384, 1649
Davidson, K. 1997, NewA, 2, 387
Davidson, K., Smith, N., Gull, T.R., Ishibashi, K., & Hillier, D.J., 2001, AJ, 121, 1569.
Dorland, B. N., 2007, PhDT, 9D, “An Astrometric Analysis of Eta Carinae’s Eruptive History Using HST WF/PC2 and ACS Observations”
Duncan, R. A., & White, S. M. 2003, MNRAS, 338, 425
Eichler, D., & Usov, V. 1993, ApJ, 402, 271
Falceta-Gonçalves, D., Jatenco-Pereira, V., & Abraham, Z. 2005, MNRAS, 357, 895
Hamaguchi, K., Corcoran, M. F., Gull, T., Ishibashi, K., Pittard, J. M., Hillier, D. J., Damineli, A., Davidson, K., Nielsen, K. E., & Kober, G. V. 2007, ApJ, 663, 522 (H07)
Henley, D. B., Corcoran, M. F., Pittard, J. M., Stevens, I. R., Hamaguchi, K., & Gull T. R. 2008, ApJ, 680, 705
Hillier, D. J., Gull, T., Nielsen, K., Sonneborn, G., Iping, R., Smith, N., Corcoran, M., Damineli, A., Hamann, F. W., Martin, J. C., & Weis, K. 2006, ApJ, 642, 1098
Kashi, A., & Soker, N. 2007, NewA, 12, 590
Kashi, A., & Soker, N. 2008, MNRAS, 390, 1751 (KS08)
Kashi, A., & Soker, N. 2009a, NewA, 14, 11
Kashi, A., & Soker, N. 2009b, MNRAS, accepted (arXiv:0808.4132)
Fernandez Lajus, E., Schwartz, M., Salerno, N., Torres, A., Farina, C., Llinares, C., Calderon, J. P., Bareilles, F., Gamen, R., & Niemela, V. S. 2008, A&A, preprint
Nielsen, K. E., Corcoran, M. F., Gull T. R., Hillier, D. J., Hamaguchi, K., Ivarsson, S. & Lindler, D. J. 2007, ApJ, 660, 669
Okazaki, A. T., Owocki, S. P., Russell, C. M. P., & Corcoran, M. F. 2008, in Massive Stars as Cosmic Engines, IAU Symp. 250, eds. F. Bresolin, P. A. Crowther and J. Puls (Cambridge University Press), 133 (arXiv:0803.3977)
Parkin, E. R., Pittard, J. M., Corcoran, M. F., Hamaguchi, K., & Stevens I. R. 2009, MNRAS in press (arXiv:0901.0862) (P09)
Pittard, J. M., & Corcoran, M. F. 2002, A&A, 383, 636
Pittard, J. M., Stevens, I. R., Corcoran, M. F., & Ishibashi, K. 1998, MNRAS, 299, L5
Sarazin, C. L., & Bahcall, J. N. 1977, ApJS, 34, 451
Smith, N. 2002, MNRAS, 337, 1252
Smith, N. 2006, ApJ, 644, 1151
Smith, N., & Ferland, G. J. 2007, ApJ, 655, 911
Smith, N., Gehrz, R. D., Hinz, P. M., Hoffmann, W. F., Hora, J. L., Mamajek, E. E., & Meyer, M. R. 2003, AJ, 125, 1458
Smith, N., Morse, J. A., Collins, N. R., & Gull, T. R. 2004, ApJ, 610, L105
Soker, N. 2001, MNRAS, 325, 584
Soker, N. 2005, ApJ, 635, 540
Soker, N. 2007, ApJ, 661, 490
Soker, N., & Behar, E. 2006, ApJ, 652, 1563
van Genderen A. M., Sterken, C., Allen, W. H., & Walker, W. S. G. 2006, JAD, 12, 3
Whitelock, P. A., Feast, M. W., Marang, F., & Breedt, E. 2004, MNRAS, 352, 447
[Decompositions of linear spaces induced by $n$-linear maps]{}
$^{a}$ Departamento de Matemáticas, Universidad de Cádiz. Puerto Real, Cádiz, Spain.
$^{b}$ CMCC, Universidade Federal do ABC. Santo André, Brazil.
$^{c}$ Universidade de Coimbra, CeBER, CMUC and FEUC, Coimbra, Portugal
E-mail addresses:
Antonio Jesús Calderón (ajesus.calderon@uca.es),
Ivan Kaygorodov (kaygorodov.ivan@gmail.com),
Paulo Saraiva (psaraiva@fe.uc.pt).
[**ABSTRACT.**]{} Let $\mathbb V$ be an arbitrary linear space and $f:\mathbb V \times \ldots \times \mathbb V \to \mathbb V$ an $n$-linear map. It is proved that, for each choice of a basis ${\mathcal B}$ of $\mathbb V$, the $n$-linear map $f$ induces a (nontrivial) decomposition $\mathbb V= \oplus V_j$ as a direct sum of linear subspaces of $\mathbb V$, with respect to ${\mathcal B}$. It is shown that this decomposition is $f$-orthogonal in the sense that $f(\mathbb V, \ldots, V_j, \ldots, V_k, \ldots, \mathbb V) =0$ when $j \neq k$, and in such a way that any $V_j$ is strongly $f$-invariant, meaning that $f(\mathbb V, \ldots, V_j, \ldots, \mathbb V) \subset V_j.$ A sufficient condition for two different decompositions of $\mathbb V$, induced by an $n$-linear map $f$ with respect to two different bases of $\mathbb V$, to be isomorphic is deduced. The $f$-simplicity – an analogue of the usual simplicity in the framework of $n$-linear maps – of any linear subspace $V_j$ of a certain decomposition induced by $f$ is characterized. Finally, an application to the structure theory of arbitrary $n$-ary algebras is provided. This work is a close generalization of the results obtained by A. J. Calderón (2018) [@Yo4].
[*Keywords*]{}: Linear space, $n$-linear map, orthogonality, invariant subspace, decomposition theorem.
[*2010MSC*]{}: 15A03, 15A21, 15A69, 15A86.
Introduction
============
The main idea of this paper is to present an $n$-ary ($n>2$) generalization of the results achieved by the first author on the decomposition of linear spaces induced by bilinear maps on a linear space [@Yo4].
In the mentioned paper, given a linear space $\mathbb V$ of arbitrary dimension and a bilinear map $f$ on $\mathbb V$, Calderón introduced the notions of $f$-orthogonal, $f$-invariant and strongly $f$-invariant subspaces, as well as the notion of $f$-simplicity, which are just the usual notions of orthogonality, invariance and simplicity, but now defined with respect to $f$. Then, for a fixed basis of $\mathbb V$, he developed connection techniques allowing him to obtain a first nontrivial decomposition of $\mathbb V$ as the direct sum of $f$-orthogonal vector subspaces. In order to improve the obtained decomposition he introduced an adequate equivalence relation on the above family of linear subspaces, leading to the first main result: a nontrivial decomposition of $\mathbb V$ as an $f$-orthogonal direct sum of strongly $f$-invariant linear subspaces, with respect to a fixed basis. After that, observing that different choices of bases of $\mathbb V$ may lead to different decompositions, he studied sufficient conditions ensuring isomorphic induced decompositions of $\mathbb V$ with respect to different bases of $\mathbb V$. Another important result gives necessary and sufficient conditions for the $f$-simplicity of the linear subspaces in the second decomposition of $\mathbb V$. The author ends the paper by providing an application of the previous results to the structure theory of arbitrary algebras.
At this point, a parenthesis is due to underline the considerable amount of recent works where the above mentioned and similar connection techniques are applied as a tool to obtain interesting results in the frameworks of several types of algebras. Without being exhaustive, these techniques were used, for instance, along with the notions of multiplicative basis and quasi-multiplicative basis, not only in relation to algebras (see Calderón and Navarro, [@Yo; @Yo2]), but also to some $n$-ary generalizations (see, [*e.g.*]{}, the works of Calderón, Barreiro, Kaygorodov and Sánchez in [@bcks; @kmod; @Yo_n_algebras]). Further, connection techniques were also applied in the context of graded Lie algebras (see Calderón (2014) [@Yo3]) and to obtain structural results on graded Leibniz triple systems (see Cao and Chen (2016) [@Cao2]).
The present work follows an approach that uses, as close as possible, generalized $n$-ary versions of the techniques applied in [@Yo4], obtaining generalized results which are similar to those of Calderón.
The paper is organized as follows. In Section 2 we present the necessary basic notions related to $n$-linear maps and develop all the connection techniques needed to obtain the main results. As a consequence, we get that each choice of a basis ${\mathcal B} $ of $\mathbb V$ gives rise to a first nontrivial decomposition of $\mathbb V$, induced by $f$, as an $f$-orthogonal direct sum of linear subspaces with respect to ${\mathcal B} $. This decomposition is then enhanced by the introduction of an adequate equivalence relation on the above family of linear subspaces, leading to our first main result: $\mathbb V$ decomposes as a nontrivial $f$-orthogonal direct sum of strongly $f$-invariant linear subspaces, with respect to a fixed basis.
In Section 3 the relation among the previous decompositions of $\mathbb V$ given by different choices of its bases is discussed. Concretely, after defining the notion of orbit associated to an $n$-linear map $f$, it is shown that if two bases, ${\mathcal B} $ and ${\mathcal B}^{\prime} $ of $\mathbb V$ belong to the same orbit under an action of a certain subgroup of ${\rm GL}(\mathbb V)$ on the set of all bases of $\mathbb V$, then they induce isomorphic decompositions of $\mathbb V$.
In Section 4 we generalize the concept of $i$-division basis to the case of $n$-ary algebras. After that, we obtain a characterization of the $f$-simplicity of the components of the main decomposition obtained in Section 2. That is, we prove that any of the linear subspaces in the decomposition of $\mathbb V$ in $f$-orthogonal, strongly $f$-invariant linear subspaces of $\mathbb V$ is $f$-simple if and only if its annihilator is zero and it admits an $i$-division basis.
Finally, in Section 5 an application of the previous results to the structure theory of arbitrary $n$-ary algebras is included.
Development of the techniques. First decomposition theorem
==========================================================
We begin by noting that throughout the paper all of the linear spaces $\mathbb V$ considered are of arbitrary dimension and over an arbitrary base field ${\mathbb F}$. Hereinafter, $\mathbb V$ is a linear space and $f: \mathbb V\times \dots \times \mathbb V \to \mathbb V$ an $n$-linear map on $\mathbb V$, $n\geq 2$. We start recalling some notions concerning $\mathbb V$ and $f$.
Two linear subspaces $V_1$ and $V_2$ of $\mathbb V$ are called [*$f$-orthogonal*]{} if $$f(\mathbb V,\dots, V_1^{(i)},\dots,V_2^{(j)},\dots,\mathbb V)=0,$$ for all $i,j\in \left\{1,\dots,n \right\}$, $i \neq j$, where the notations $V_1^{(i)}$ and $V_2^{(j)}$ mean that $V_1$ and $V_2$ occupy the $i$-th and $j$-th entries of $f$, respectively.
It is also said that a decomposition of $\mathbb V$ as a direct sum of linear subspaces $$\mathbb V= \bigoplus_{j\in J} V_j$$ is [*$f$-orthogonal*]{} if $V_j$ and $V_k$ are $f$-orthogonal for any $j,k \in J$, with $j \neq k$.
A linear subspace $W$ of $\mathbb V$ is called [*$f$-invariant* ]{} if $$f(W,\dots,W)\subset W.$$ The linear subspace $W$ is called [*strongly $f$-invariant* ]{} if $$f(\mathbb V,\dots, W^{(i)},\dots,\mathbb V)\subset W,$$ for all $i \in \left\{1,\dots,n \right\}$. The linear space $\mathbb V$ will be called [*$f$-simple*]{} if $$f(\mathbb V,\dots,\mathbb V) \neq 0$$ and its only strongly $f$-invariant subspaces are $\{0\}$ and $\mathbb V$.
The [*annihilator*]{} of $f$ is defined as the set $${\rm Ann}(f)=\{v \in \mathbb V: f(\mathbb V,\dots, v^{(i)},\dots,\mathbb V)=0, \text{ for all } i \in \left\{1,\dots,n \right\}\}.$$
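When $\mathbb V$ is finite dimensional, the notions above can be tested mechanically. The following Python sketch is our own illustration, not part of the paper's development: it assumes, as a storage convention, that $f$ is given by its structure constants on basis tuples, and it checks whether a basis vector lies in ${\rm Ann}(f)$ and whether a coordinate subspace is strongly $f$-invariant.

```python
from itertools import product

def f_basis(c, idx, dim):
    """Coordinates of f(e_{i1},...,e_{in}); c stores only the nonzero products."""
    return c.get(tuple(idx), (0,) * dim)

def basis_vector_in_annihilator(c, dim, n, v):
    """e_v lies in Ann(f) iff every basis product having e_v in some slot vanishes."""
    return all(f_basis(c, idx, dim) == (0,) * dim
               for idx in product(range(dim), repeat=n) if v in idx)

def coord_subspace_strongly_invariant(c, dim, n, S):
    """W = span{e_s : s in S} is strongly f-invariant iff every basis product
    with at least one slot in S has its output supported inside S."""
    S = set(S)
    return all(all(j in S for j, x in enumerate(f_basis(c, idx, dim)) if x != 0)
               for idx in product(range(dim), repeat=n) if S.intersection(idx))
```

By multilinearity it suffices to check the defining identities on basis tuples; note that these routines only probe basis vectors and coordinate subspaces, so they do not compute ${\rm Ann}(f)$ in full.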
Let us fix a basis ${\mathcal B}=\{e_i\}_{ i \in I}$ of $\mathbb V$. For each $e_i \in {\mathcal B}$, we introduce a symbol $\overline{e}_i \notin {\mathcal B}$ and the following set $$\overline{{\mathcal B}} := \{\overline e_i : e_i \in {\mathcal B}\}.$$ We will also write $\overline{(\overline{e}_i)} := e_i \in {\mathcal B}$, $\mathbb V^{*}:=\mathbb V \setminus \{0\}$ and $\mathcal{P}(\mathbb V^{*})$ the power set of $\mathbb V^{*}$.
We define the $n$-linear mapping
$$\label{gene}
F: \mathcal{P}(\mathbb V^{*})\times \left( ({\mathcal B}\dot\cup \overline{{\mathcal B}})\times \dots \times ({\mathcal B}\dot\cup \overline{{\mathcal B}} )\right) \to \mathcal{P}(\mathbb V^{*})$$
as
- $ F(\emptyset , {\mathcal B}\dot\cup \overline{{\mathcal B}}, \dots, {\mathcal B}\dot\cup \overline{{\mathcal B}} )= \emptyset$.
- For any $\emptyset \neq U \in {\mathcal P}( \mathbb V^{*})$ and $\xi_i \in {\mathcal B} $, $i=1,\dots,n-1,$ $$\hspace{-1cm} F(U, \xi_{1},\dots, \xi_{n-1})=\left( \bigcup_{
\begin{array}{c}
k\in \{1,\dots,n\}\\
\sigma \in {\mathbb S}_{n-1}
\end{array}
} \{f(\xi_{\sigma (1)},\dots, u^{(k)}, \ldots, \xi_{\sigma (n-1)}):u \in U\} \right)\setminus \{0\} .$$
- For any $\emptyset \neq U \in {\mathcal P}(\mathbb V^{*})$ and $\ov{\xi}_i \in
\overline{{\mathcal B}}$, $i=1,\dots,n-1,$ $$\hspace{-1cm} F(U , \ov{\xi}_{1},\dots,\ov{\xi}_{n-1})=\left(
\bigcup_{
\begin{array}{c}
k\in \{1,\dots,n\}\\
\sigma \in {\mathbb S}_{n-1}
\end{array}
} \{u \in \mathbb V: f(\xi_{\sigma (1)},\dots, u^{(k)},\dots, \xi_{\sigma (n-1)})\in U\} \right)\setminus \{0\} .$$
- $F(U , \xi_{1},\dots, \xi_{n-1})=\emptyset$, if there are $i,j\in \{1,\dots,n-1\},\ i\neq j$, such that $\xi_{i}\in {\mathcal B}$, $\xi_{j}\in \overline{{\mathcal B}}$.
\[Fsym\] It is clear that $$F(U , \xi_{\sigma (1)},\dots, \xi_{\sigma (n-1)}) = F(U ,\xi_{1},\dots,\xi_{n-1}),$$ and $$F(U , \ov{\xi}_{\sigma (1)},\dots, \ov{\xi}_{\sigma (n-1)}) = F(U ,\ov{\xi}_{1},\dots,\ov{\xi}_{n-1}),$$ for all $\xi_{1},\dots,\xi_{n-1}\in {\mathcal B},\ \ov{\xi}_{1},\dots,\ov{\xi}_{n-1}\in \ov{{\mathcal B}},\ \sigma \in {\mathbb S}_{n-1}$.
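For concreteness, the unbarred (forward) case of $F$ can be evaluated on a finite-dimensional space. The sketch below uses our own conventions (vectors as coordinate tuples, $f$ recovered from structure constants by multilinearity); the barred case, which requires exact membership tests in $U$, is omitted.

```python
from itertools import permutations, product

def apply_f(c, vectors, dim):
    """f(v_1,...,v_n) by multilinear expansion over basis tuples."""
    out = [0] * dim
    for idx in product(range(dim), repeat=len(vectors)):
        coeff = 1
        for vec, i in zip(vectors, idx):
            coeff *= vec[i]
        if coeff:
            for j, cj in enumerate(c.get(idx, (0,) * dim)):
                out[j] += coeff * cj
    return tuple(out)

def F_forward(c, U, xi, dim):
    """F(U, xi_1,...,xi_{n-1}) with every xi_i = e_{xi[i]} unbarred: all
    products with u in one slot and the xi's (in any order) in the rest,
    with the zero vector discarded."""
    n = len(xi) + 1
    basis = [tuple(int(t == i) for t in range(dim)) for i in range(dim)]
    result = set()
    for u in U:
        for sigma in permutations(xi):
            args = [basis[i] for i in sigma]
            for k in range(n):
                w = apply_f(c, args[:k] + [u] + args[k:], dim)
                if any(w):
                    result.add(w)
    return result
```

By Remark \[Fsym\], iterating over all permutations of the $\xi_i$ is redundant for the value of $F$, but it mirrors the definition directly.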
\[lema1\] Concerning the mapping $F$ previously defined, we have
1. For any $v \in \mathbb V^{*}$ and $\xi_i \in {\mathcal B}$, $i=1,\dots,n-1$,\
$w \in F(\{v\} , \xi_{1},\dots,\xi_{n-1})$ if and only if $v \in F(\{w\},\ov{\xi}_{1},\dots,\ov{\xi}_{n-1})$.
2. For any $U \in \mathcal{P}(\mathbb V^{*})$ and $\xi_{i}\in {\mathcal B}\dot\cup \overline{{\mathcal B}}, \ i=1,\dots,n-1$,\
$v \in F(U ,\xi_{1},\dots,\xi_{n-1})$ if and only if $F(\{v\},\ov{\xi}_{1},\dots,\ov{\xi}_{n-1})\cap U \neq \emptyset $.
1\. Let us start by assuming that $w \in F(\{v\} , \xi_{1},\dots,\xi_{n-1})$, where $v \in \mathbb V^{*}$ and $\xi_i \in {\mathcal B},\ i=1,\dots,n-1$. This means that $$w=f(\xi_{\sigma (1)},\dots,v^{(k)}, \dots, \xi_{\sigma (n-1)}),$$ for some $k\in \{1, \dots ,n\}$ and $\sigma \in {\mathbb S}_{n-1}$, and thus $$v\in F(\{w\}, \ov{\xi}_{\sigma (1)},\dots, \ov{\xi}_{\sigma (n-1)}).$$ According to the previous remark, we have: $$v\in F(\{w\}, \ov{\xi}_{1},\dots, \ov{\xi}_{n-1}).$$ The converse can be proved analogously.
2\. Suppose that $U\in \mathcal{P}(\mathbb V^{*})$ and $\xi_{i}\in {\mathcal B}\dot\cup \overline{{\mathcal B}}, \ i=1,\dots,n-1.$ Assume first that $v \in F(U ,\xi_{1},\dots,\xi_{n-1})$. Then $v \in F(\{w\},\xi_{1},\dots,\xi_{n-1})$ for some $w\in U$. By item 1., this is equivalent to $w\in F(\{v\}, \ov{\xi}_{1},\dots, \ov{\xi}_{n-1})$, and thus $$w\in F(\{v\},\ov{\xi}_{1},\dots, \ov{\xi}_{n-1})\cap U \neq \emptyset .$$ The converse can be proved in a similar way.
\[connection\]Let $e_i, e_j \in {\mathcal B}$. We say that $e_i $ is [*connected*]{} to $e_j$ if either,
- $e_i=e_j$ or
- there exists an ordered list $(X_1,X_2,\dots,X_m)$, where $X_{i}=\left(a_{i1},\dots,a_{in-1} \right)$ such that $a_{ik} \in {\mathcal B}\dot\cup \overline{{\mathcal B}} $, $i \in \{1,\dots,m\},\ k\in \{1,\dots,n-1\},$ satisfying:
1. $F(\{e_i\} , X_1)\neq\emptyset$,\
$F(F(\{e_i\} , X_1),X_{2})\neq\emptyset,\\
\vdots\\
F (\dots (F(F(\{e_i\} , X_1),X_{2}),\dots,X_{m-1})\neq\emptyset$.
2. $e_j\in F( F (\dots (F(F(\{e_i\} , X_1),X_{2}),\dots,X_{m-1}), X_{m}).$
In this case we say that $(X_1,X_2,\dots,X_m)$ is a [*connection*]{} from $e_i$ to $e_j$.
\[lema3\] Let $(X_1, X_2, \dots, X_{m-1}, X_m)$ be any connection from $e_i$ to $e_j$, where $e_i$ and $e_j$ are arbitrary elements in ${\mathcal B}$, with $e_i \neq e_j$. Then the ordered list $(\overline{X}_m,\overline{X}_{m-1},\dots,\overline{X}_2,\overline{X}_1)$ is a connection from $e_j$ to $e_i$.
The proof will be done by induction on $m$. In the case $m=1$ we have that $e_j\in F(\{e_i\} , X_1)=F(\{e_i\},a_{11},\dots,a_{1 n-1})$ implying that $$e_i\in F(\{e_j\} , \overline{a}_{11},\dots,\overline{a}_{1 n-1})=F(\{e_j\} , \overline{X}_1),$$ by 1. of Lemma \[lema1\]. Thus $(\overline{X}_1)$ is a connection from $e_j$ to $e_i$.
Assume now that the assertion holds for any connection with $m\geq 1$ elements, and let us show that it also holds for any connection $$(X_1,X_2,\dots,X_m,X_{m+1})$$ with $m+1$ elements (each an $(n-1)$-tuple). So, consider a connection $(X_1,X_2,\dots,X_m,X_{m+1})$ from $e_i$ to $e_j$, and set $$U:=F( F (\dots (F(F(\{e_i\} , X_1),X_{2}),\dots,X_{m-1}), X_{m}).$$ Applying 2. of Definition \[connection\] we have that $e_j \in F(U , X_{m+1}).$ Then, by 2. of Lemma \[lema1\], $F(\{e_j\} , \overline{X}_{m+1})\cap U\neq\emptyset$. Take $$\label{eqq1}
x\in F(\{e_j\} , \overline{X}_{m+1})\cap U\neq \emptyset.$$ Since $x\in U$, we have that $(X_1, X_2, \dots, X_{m-1}, X_m)$ is a connection from $e_i$ to $x$ with $m$ elements. Hence, by the induction hypothesis, $(\overline{X}_m,\overline{X}_{m-1},\dots,\overline{X}_2,\overline{X}_1)$ connects $x$ to $e_i$. From here, and by equation (\[eqq1\]), we obtain $$e_i \in F(F(\dots (F(F(\{e_j\} , \overline{X}_{m+1}), \overline{X}_m),\dots , \overline{X}_{2}),\overline{X}_{1}),$$ which means that $$(\overline{X}_{m+1},\overline{X}_{m},\dots,\overline{X}_2,\overline{X}_1)$$ connects $e_j$ to $e_i$.
\[equi\] The relation $\sim$ in ${\mathcal B}$, defined by $e_i\sim e_j$ if and only if $e_i$ is connected to $e_j$, is an equivalence relation.
The relation $\sim$ is clearly reflexive (see (i) of Definition \[connection\]) and symmetric (see Lemma \[lema3\]). Hence let us verify its transitivity.
Assume that $e_i, e_j, e_k \in {\mathcal B}$ are pairwise distinct, with $e_i \sim e_j$ and $e_j \sim e_k$ (the cases in which two of these elements coincide are trivial). Then there are connections $(X_1,\dots,X_m)$ and $(Y_1,\dots,Y_p)$ from $e_i$ to $e_j$ and from $e_j$ to $e_k$, respectively. Therefore, $(X_1,\dots,X_m,Y_1,\dots,Y_p)$ is a connection from $e_i$ to $e_k$, showing the transitivity of $\sim$, and the result is proved.
Hence, by means of the equivalence relation defined above, we introduce the quotient set $${\mathcal B}/ \sim := \{[e_i] : e_i \in {\mathcal B}\},$$ where $[e_i]$ stands for the set of elements in ${\mathcal B}$ which are connected to $e_i$.
For each $[e_i] \in {\mathcal B}/ \sim$ we may introduce the linear subspace $$V_{[e_i]}:= \bigoplus_{e_j \in [e_i] } {\mathbb F} e_j,$$ allowing us to write $$\label{elp3}
\mathbb V = \bigoplus\limits_{[e_i]\in {\mathcal B}/ \sim} V_{[e_i]}.$$ Next we show that this is a decomposition of $\mathbb V$ in pairwise $f$-orthogonal subspaces.
\[elp1\] For any $[e_i],[e_j] \in {\mathcal B}/ \sim$ with $[e_i] \neq [e_j]$, we have that $$f(\mathbb V,\dots,V_{[e_i]}^{\left(k_{1}\right)},\dots,V_{[e_j] }^{\left(k_{2}\right)},\dots,\mathbb V)=0, \label{f-orth_dec}$$ for all $k_{1},k_{2}\in \{1,\dots,n\},\ k_{1} \neq k_{2}$.
In order to prove (\[f-orth\_dec\]) it is sufficient to show that $$f(\xi_{\sigma (1)},\dots,V_{[e_i]}^{\left(k_{1}\right)},\dots,V_{[e_j] }^{\left(k_{2}\right)},\dots,\xi_{\sigma (n-2)})=0,$$ for any permutation $\sigma \in {\mathbb S}_{n-2}$ and $\xi_{1},\dots,\xi_{n-2} \in {\mathcal B}$. Suppose, to the contrary, that this fails. Then there are $e_k \in [e_i]$, $e_p \in [e_j] $ and $v\in \mathbb V^{*}$ such that $$v=f(\xi_{\sigma (1)},\dots,e_{k}^{\left(k_{1}\right)},\dots,e_{p }^{\left(k_{2}\right)},\dots,\xi_{\sigma (n-2)}), \label{aux}$$ for some $\sigma \in {\mathbb S}_{n-2}$. By definition of $F$, from (\[aux\]) we may deduce two facts: $$\text{ (i) } v\in F(\{e_{k}\},e_{p},\xi_{1},\dots,\xi_{n-2}),$$ $$\text{ (ii) } v\in F(\{e_{p}\},e_{k},\xi_{1},\dots,\xi_{n-2}).$$ From (ii) and 1. of Lemma \[lema1\], we have $$\text{ (iii) } e_{p}\in F(\{v\},\ov{e}_{k},\ov{\xi}_{1},\dots,\ov{\xi}_{n-2}).$$ From (i) and (iii), we observe that $\left(X_{1},X_{2}\right)$, where $$X_{1}=\left(e_{p},\xi_{1},\dots,\xi_{n-2}\right) \text{ and }X_{2}=\left(\ov{e}_{k},\ov{\xi}_{1},\dots,\ov{\xi}_{n-2}\right),$$ is a connection from $e_{k}$ to $e_{p}$. Thus, $[e_i]=[e_k]=[e_p]=[e_j]$, a contradiction.
As a consequence of Lemma \[elp1\] and equation (\[elp3\]), we obtain the following.
\[meta\] Given $\mathbb V$ and $f$ as initially defined, $\mathbb V$ decomposes as the $f$-orthogonal direct sum of linear subspaces $$\mathbb V = \bigoplus\limits_{[e_i]\in {\mathcal B}/ \sim}V_{[e_i]}.$$
The family of linear subspaces of $\mathbb V$ formed by all of the $V_{[e_i]}$, $[e_i]\in {\mathcal B}/ \sim$, which gives rise to the decomposition in Proposition \[meta\], is not good enough for our purposes. So we need to introduce a new equivalence relation on this family, as follows.
We begin by observing that the above mentioned decomposition of $\mathbb V$ allows us to consider, for each $V_{[e_i]}$, the projection map $$\Pi_{V_{[e_i]}}: \mathbb V \to V_{[e_i]}.$$
Also, let us consider the following family of nonzero linear subspaces of $\mathbb V$: $$\hbox{${\mathcal{F}}:=\{V_{[e_i]}:[e_i]\in {\mathcal B}/ \sim \}.$}$$
\[ane3\]
We will say that $V_{[e_i]} \approx V_{[e_j]}$ if and only if either $V_{[e_i]} = V_{[e_j]}$ or there exists a subset $$\{[\xi_1],[\xi_2], \ldots,[\xi_m]\} \subset {\mathcal B}/ \sim,$$ such that
- $[\xi_1]=[e_i]$ and $[\xi_m]=[e_j].$
- [ $$\hspace{-0.75cm} \sum\limits_{1 \leq k_1<k_2\leq n} \left[\Pi_{V_{[\xi_1]}}(f(\mathbb V, \ldots, V_{[\xi_2]}^{(k_1)}, \ldots, V_{[\xi_2]}^{(k_2)}, \ldots, \mathbb V)) + \Pi_{V_{[\xi_2]}}(f(\mathbb V, \ldots, V_{[\xi_1]}^{(k_1)}, \ldots, V_{[\xi_1]}^{(k_2)}, \ldots, \mathbb V)) \right]\neq 0.$$ $$\hspace{-0.75cm} \sum\limits_{1 \leq k_1<k_2\leq n} \left[\Pi_{V_{[\xi_2]}}(f(\mathbb V, \ldots, V_{[\xi_3]}^{(k_1)}, \ldots, V_{[\xi_3]}^{(k_2)}, \ldots, \mathbb V)) + \Pi_{V_{[\xi_3]}}(f(\mathbb V, \ldots, V_{[\xi_2]}^{(k_1)}, \ldots, V_{[\xi_2]}^{(k_2)}, \ldots, \mathbb V)) \right]\neq 0.$$ $\vdots$\
$$\hspace{-0.75cm} \sum\limits_{1 \leq k_1<k_2\leq n} \left[\Pi_{V_{[\xi_{m-1}]}}(f(\mathbb V, \ldots, V_{[\xi_m]}^{(k_1)}, \ldots, V_{[\xi_m]}^{(k_2)}, \ldots, \mathbb V)) + \Pi_{V_{[\xi_m]}}(f(\mathbb V, \ldots, V_{[\xi_{m-1}]}^{(k_1)}, \ldots, V_{[\xi_{m-1}]}^{(k_2)}, \ldots, \mathbb V)) \right]\neq 0.$$]{}
Clearly $\approx$ is an equivalence relation on ${\mathcal{F}}$ and so we can introduce the quotient set $${\mathcal{F}} / \approx :=\{ [V_{[e_i]}]: V_{[e_i]}
\in {\mathcal{F}}\}.$$ For each $[V_{[e_i]}] \in {\mathcal{F}} / \approx$, we denote by $\overbrace{V_{[e_i]}}$ the linear subspace of $\mathbb V$ $$\overbrace{V_{[e_i]}}:= \bigoplus\limits_{V_{[e_j]} \in [V_{[e_i]}]} V_{[e_j]}.$$
By equation (\[elp3\]) and the definition of $\approx$, we clearly have
$$\label{hel1}
\mathbb V=\bigoplus\limits_{[V_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{V_{[e_i]}}.$$
Also, we can assert by Lemma \[elp1\] that
$$f(\mathbb V, \ldots, {{\overbrace{V_{[e_i]}}}}^{(k_1)}, \ldots, {{\overbrace{V_{[e_j]}}}}^{(k_2)}, \ldots, \mathbb V)=0$$ when $[V_{[e_i]}] \neq [V_{[e_j]}]$ in ${\mathcal{F}} / \approx$, for all $k_1,k_2\in \left\{1,\dots,n \right\}$, $k_1 \neq k_2$.
\[lema\_submodulo\] For any $[V_{[e_i]}] \in {\mathcal{F}} / \approx$, $\overbrace{V_{[e_i]}}$ is a strongly $f$-invariant linear subspace of $\mathbb V$.
We begin by proving that $$\label{panzer5}
f(\mathbb V, \ldots, {\overbrace{V_{[e_i]}}}^{(k_1)}, \ldots, {\overbrace{V_{[e_i]}}}^{(k_2)}, \ldots, \mathbb V) \subset \overbrace{V_{[e_i]}}.$$ Indeed, if some $0 \neq w \in f(\mathbb V, \ldots, {\overbrace{V_{[e_i]}}}^{(k_1)}, \ldots, {\overbrace{V_{[e_i]}}}^{(k_2)}, \ldots, \mathbb V)$, then decomposition (\[hel1\]) allows us to write $$w=w_1+w_2+ \dots +w_m$$ for some $0\neq w_j \in \overbrace{V_{[\xi_j]}}$, $j=1, \ldots ,m$, with $\xi_j \in \mathcal B$. Observe now that Lemma \[elp1\] gives us that there exist nonzero $x,y \in V_{[e_k]}$ with $V_{[e_k]} \subset \overbrace{V_{[e_i]}}$ and $z_1, \ldots, z_{n-2} \in \mathbb V$ such that $$\label{boli}
0\neq w=f(z_1, \ldots, x^{(k_1)},\ldots, y^{(k_2)}, \ldots, z_{n-2}).$$ Let us consider $0 \neq w_1 \in \overbrace{V_{[\xi_1]}}$, so that $w_1 \in V_{[e_r]}$ for some $V_{[e_r]} \subset \overbrace{V_{[\xi_1]}}$. By equation (\[boli\]) we have $$\Pi_{V_{[e_r]}}(f(z_1, \ldots, x^{(k_1)},\ldots, y^{(k_2)}, \ldots, z_{n-2}))=w_1 \neq 0.$$ That is, $$\Pi_{V_{[e_r]}}(f(\mathbb V, \ldots, V_{[e_k]}^{(k_1)},\ldots, V_{[e_k]}^{(k_2)}, \ldots, \mathbb V )) \neq 0,$$ and the set $\{[e_k],[e_r]\}$ gives us $V_{[e_k]} \approx V_{[e_r] }$. Hence $$V_{[e_i]} \approx V_{[e_k]} \approx V_{[e_r] } \approx
V_{[\xi_1] }$$ and we conclude $V_{[\xi_1] } \subset \overbrace{V_{[e_i]}}$. From here, $w_1 \in \overbrace{V_{[e_i]}}$. In a similar way we get that $w_j \in \overbrace{V_{[e_i]}}$ for $j=2,\dots,m$, and so $w \in \overbrace{V_{[e_i]}}$. Consequently, the inclusion (\[panzer5\]) holds, as desired.
Finally, by decomposition (\[hel1\]), Lemma \[elp1\] and equation (\[panzer5\]), we have the following inclusion $$\sum\limits_{j=1}^{n}f(\mathbb V, \ldots, {\overbrace{V_{[e_i]}}}^{(j)}, \ldots, \mathbb V) \subset \overbrace{V_{[e_i]}},$$ and thus $f(\mathbb V, \ldots, {\overbrace{V_{[e_i]}}}^{(j)}, \ldots, \mathbb V) \subset \overbrace{V_{[e_i]}}$ for all $j\in\{1,\dots,n\}.$
\[theo1\] Let $\mathbb V$ be a linear space equipped with an $n$-linear map $f: \mathbb V \times \ldots \times \mathbb V \to \mathbb V$. For any basis ${\mathcal B} =\{e_i: i \in I\}$ of $\mathbb V$ we have that $\mathbb V$ decomposes as the $f$-orthogonal direct sum of strongly $f$-invariant linear subspaces $$\mathbb V=\bigoplus\limits_{[V_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{V_{[e_i]}}.$$
Consider the decomposition, as direct sum of linear subspaces $$\mathbb V=\bigoplus\limits_{[V_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{V_{[e_i]}},$$ given by equation (\[hel1\]). Now Lemma \[elp1\] shows that this decomposition is $f$-orthogonal and Proposition \[lema\_submodulo\] that all of the linear subspaces $\overbrace{V_{[e_i]}}$ are strongly $f$-invariant.
On the relation among the decompositions given by different choices of bases
============================================================================
Observe that the decomposition of $\mathbb V$ as an $f$-orthogonal direct sum of strongly $f$-invariant linear subspaces given by Theorem \[theo1\] depends on the initial choice of basis. Indeed, as was exemplified in [@Yo4] for $n=2$, two different bases of $\mathbb V$ may lead to two different such decompositions of $\mathbb V$. The same happens in the $n$-ary case, with $n>2$, as shown in the following example.
Let $\mathbb V$ be the ${\mathbb R}$-linear space $\mathbb V:={\mathbb R}^4$ equipped with the $n$-linear map $f:{\mathbb R}^4 \times \dots \times {\mathbb R}^4 \to {\mathbb R}^4$ defined as $$f(\overline{x}_1,\dots,\overline{x}_n)=(x_{11}x_{21},x_{11}x_{21}, 0,0),$$ where $$\overline{x}_i=(x_{i1},\dots,x_{i4})$$ for each $i\in \{1,\dots,n\}.$
Let us consider the following two bases of ${\mathbb R}^4$: $${\mathcal B}:=\{e_1,\dots,e_4\},$$ that is, the canonical basis, and $${\mathcal B}^{\prime}:=\{(1,0,1,0), (1,0,-1,0),e_2, e_4\}.$$
Then it is possible to observe that the decomposition of $\mathbb V={\mathbb R}^4$, given in Theorem \[theo1\] with respect to the basis ${\mathcal B}$ is given by $${\mathbb R}^4=({\mathbb R}e_1 \oplus {\mathbb R}e_2) \bigoplus ({\mathbb R}e_3) \bigoplus ({\mathbb R}e_4).$$
However, the same kind of decomposition with respect to ${\mathcal B}^{\prime}$ is given by $${\mathbb R}^4=({\mathbb R}(1,0,1,0) \oplus {\mathbb R}(1,0,-1,0) \oplus {\mathbb R} e_2)\bigoplus ( {\mathbb R}e_4).$$
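The two decompositions above can be reproduced computationally. The Python sketch below is a heuristic of our own rather than a verbatim implementation of the relations $\sim$ and $\approx$: it takes connected components of the basis indices, linking the inputs of every nonzero basis product to the basis support of its output. On both bases of the example it returns the stated decompositions.

```python
def decompose(dim, products):
    """products: dict mapping input index tuples to output coordinate
    tuples (absent tuples are zero). Returns the sorted index blocks."""
    parent = list(range(dim))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for idx, out in products.items():
        support = [j for j, c in enumerate(out) if c != 0]
        if support:
            touched = list(idx) + support
            for k in touched[1:]:
                union(touched[0], k)

    comps = {}
    for i in range(dim):
        comps.setdefault(find(i), []).append(i)
    return sorted(sorted(c) for c in comps.values())

# Canonical basis of R^4 (n = 2): only f(e1, e1) = e1 + e2 is nonzero.
canonical = decompose(4, {(0, 0): (1, 1, 0, 0)})
# In B' = {(1,0,1,0), (1,0,-1,0), e2, e4} the same map has, in
# B'-coordinates, f(b_i, b_j) = (1/2, 1/2, 1, 0) for i, j in {0, 1}.
prime = decompose(4, {(i, j): (0.5, 0.5, 1, 0) for i in (0, 1) for j in (0, 1)})
```

Here `canonical` gives the blocks $\{e_1,e_2\},\{e_3\},\{e_4\}$ and `prime` gives $\{(1,0,1,0),(1,0,-1,0),e_2\},\{e_4\}$, matching the two displayed decompositions; we do not claim this heuristic computes the decomposition of Theorem \[theo1\] in general.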
Thus, it is an interesting task to find a sufficient condition for two different decompositions of a linear space $\mathbb V$, induced by an $n$-linear map $f$ with respect to two different bases of $\mathbb V$, to be isomorphic. The following notion will help us in this purpose.
Let $\mathbb V$ be a linear space equipped with an $n$-linear map $f: \mathbb V \times \dots \times \mathbb V \to \mathbb V$ and consider $$\Gamma:=\ \mathbb V=\bigoplus\limits_{i \in I} V_i \text{ and } \Gamma^{\prime}:=\ \mathbb V=\bigoplus\limits_{j \in J} W_j$$ two decompositions of $\mathbb V$ as an $f$-orthogonal direct sum of strongly $f$-invariant linear subspaces. It is said that $\Gamma$ and $\Gamma^{\prime}$ are [*isomorphic*]{} if there exists a linear isomorphism $g:\mathbb V \to \mathbb V$ satisfying $$f(g(v_1),\dots, g(v_n))=g(f(v_1,\dots, v_n))$$ for any $v_1,\dots, v_n \in \mathbb V$, and a bijection $\sigma: I \to J$ such that $$g(V_i)=W_{\sigma(i)}$$ for any $i \in I$.
\[genesis\] Let $\mathbb V$ be a linear space equipped with an $n$-linear map $f: \mathbb V \times \dots \times \mathbb V \to \mathbb V$ and consider ${\mathcal B}=\{e_i: i\in I\}$ a fixed basis of $\mathbb V$. Let also $g:\mathbb V \to \mathbb V$ be a linear isomorphism satisfying $$f\left(g\left(\xi_1\right),\dots,g\left(\xi_{n}\right)\right)=g\left(f\left(\xi_1,\dots,\xi_{n}\right)\right)$$ for any $\xi_i \in \mathcal B$. Then for any $U \in {\mathcal P}( \mathbb V^{*})$ and $\xi_k \in \mathcal B,\ k \in I$, the following assertions hold:
- $g\left(F\left(U,\xi_1,\dots, \xi_{n-1}\right)\right)
=F\left(g(U),g\left( \xi_1 \right),\dots,g\left( \xi_{n-1}\right)\right)$,
- $g\left(F\left(U,\ov{\xi}_1,\dots,\ov{\xi}_{n-1}\right)\right)
=F\left(g(U),\ov{g\left({\xi}_1\right)},\dots,\ov{g\left({\xi}_{n-1}\right)}\right)$,
where $F$ is the mapping defined by equation (\[gene\]).
\(i) We have $$\hspace{-1cm} g\left(F\left(U, \xi_1,\dots,\xi_{n-1}\right)\right)
=\left( \bigcup_{
\begin{array}{c}
k\in \{1,\dots,n\}\\
\sigma \in {\mathbb S}_{n-1}
\end{array}
} \left\{g\left(f(\xi_{\sigma (1)},\dots, u^{(k)}, \ldots, \xi_{\sigma (n-1)})\right):u \in U\right\} \right)\setminus \{0\}$$ $$\hspace{-1cm} =\left( \bigcup_{
\begin{array}{c}
k\in \{1,\dots,n\}\\
\sigma \in {\mathbb S}_{n-1}
\end{array}
} \left\{f \left(g\left(\xi_{\sigma (1)}\right),\dots, g(u)^{(k)}, \ldots, g\left(\xi_{\sigma (n-1)}\right)\right):u \in U\right\} \right)\setminus \{0\}$$ $$=F\left(g(U),g\left( \xi_1\right),\dots,g\left( \xi_{n-1}\right)\right).$$
\(ii) In this case we have
$$\hspace{-1cm} g\left(F\left(U,\ov{\xi}_1,\dots,\ov{\xi}_{n-1}\right)\right)$$ $$\hspace{-1cm}=\left( \bigcup_{
\begin{array}{c}
k\in \{1,\dots,n\}\\
\sigma \in {\mathbb S}_{n-1}
\end{array}
} \left\{u\in \mathbb V : f\left(\xi_{\sigma (1)},\dots, (g^{-1}(u))^{(k)}, \ldots, \xi_{\sigma (n-1)}\right) \in U\right\} \right)\setminus \{0\}$$ $$\hspace{-1cm} =\left( \bigcup_{
\begin{array}{c}
k\in \{1,\dots,n\}\\
\sigma \in {\mathbb S}_{n-1}
\end{array}
} \left\{u\in \mathbb V : f \left(g\left( \xi_{\sigma (1)}\right),\dots, u^{(k)}, \ldots, g\left(\xi_{\sigma (n-1)}\right)\right) \in g(U)\right\} \right)\setminus \{0\}$$ $$=F\left(g(U),\ov{g\left({\xi}_1\right)},\dots,\ov{g\left({\xi}_{n-1}\right)}\right).$$
Observe that in both cases we took into account Remark \[Fsym\].
\[188\]
Let $\mathbb V$ be a linear space equipped with an $n$-linear map $f: \mathbb V \times \dots \times \mathbb V \to \mathbb V$ and consider ${\mathcal B}=\{e_i: i\in I\}$ a fixed basis of $\mathbb V$. Further, assume that $g:\mathbb V \to \mathbb V$ is a linear isomorphism satisfying $$f\left(g\left(\xi_1\right),\dots,g\left(\xi_{n}\right)\right)=g\left(f\left(\xi_1,\dots,\xi_{n}\right)\right)$$ for any $\xi_i \in \mathcal B$. Then the decompositions $$\Gamma:=\mathbb V=\bigoplus\limits_{[V_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{V_{[e_i]}} \hbox{ and } \Gamma^{\prime}:=\mathbb V=\bigoplus\limits_{[V_{[g(e_i)]}] \in {\mathcal{F}^{\prime}} / \approx} \overbrace{V_{[g(e_i)]}},$$ corresponding to the choices of ${\mathcal B}$ and ${\mathcal B}^{\prime}:=\{g(e_i): i \in I\}$ respectively in Theorem \[theo1\], are isomorphic.
Firstly, let us observe that, according to the previous result, if $e_i$ is connected to $e_j$, for some $i,j \in I$ with $e_i,e_j \in \mathcal B$, through a connection $(X_1,X_2,\dots,X_m)$, where $X_{i}=\left(a_{i1},\dots,a_{in-1} \right)$ with $a_{ik} \in {\mathcal B}\dot\cup \overline{{\mathcal B}} $, $i \in \{1,\dots,m\},\ k\in \{1,\dots,n-1\}$, then $g(e_i)$ is connected to $g(e_j)$ through the connection $(g(X_1),g(X_2),\dots,g(X_m))$, where $g(X_{i}):=\left(g(a_{i1}),\dots,g(a_{in-1}) \right)$ and $g(a_{ik}) \in {\mathcal B^{\prime}}\dot\cup \ov{{\mathcal B}^{\prime}}$ (here $g(\ov{e}_k):=\ov{g(e_k)}$). Thus, it is possible to conclude that $$g(V_{[e_i]})=V_{[g(e_i)]}$$ for any $[e_i] \in {\mathcal B} / \sim$. Further, the mapping $\mu$ given by $$\mu(V_{[e_i]})=V_{[g(e_i)]}$$ defines a bijection between the families ${\mathcal{F}}:=\{V_{[e_i]}:[e_i]\in {\mathcal B}/ \sim \}$ and ${\mathcal{F}}^{\prime}:=\{V_{[g(e_i)]}:[g(e_i)]\in {\mathcal B}^{\prime}/ \sim \}$.
Now, from Lemma \[genesis\] we have $$\hspace{-1cm} g\left(\Pi_{V_{[e_i]}}\left(f(\mathbb V, \ldots, V_{[e_j]}^{(k_1)}, \ldots, V_{[e_j]}^{(k_2)}, \ldots, \mathbb V)\right)\right)
=\Pi_{V_{[g(e_i)]}}
\left( f(\mathbb V, \ldots, V_{[g(e_j)]}^{(k_1)}, \ldots, V_{[g(e_j)]}^{(k_2)}, \ldots, \mathbb V) \right)$$ for $i,j \in I$ and $k_1,k_2 \in \{1,\dots , n\}$, with $k_1<k_2$. This allows us to deduce that
$$\label{gene1}
g(\overbrace{V_{[e_i]}})=\overbrace{V_{[g(e_i)]}}$$
for any $[V_{[e_i]}] \in {\mathcal{F}} / \approx$, which induces a second bijection, $\sigma$, now between the families $ {\mathcal{F}} / \approx$ and ${\mathcal{F}^{\prime}} / \approx $ given by
$$\label{gene2}
\sigma([V_{[e_i]}])=[V_{[g(e_i)]}].$$
From equations (\[gene1\]) and (\[gene2\]) we conclude that the decompositions $\Gamma$ and $\Gamma^{\prime}$ are isomorphic.
Given an $n$-linear map $f$ on $\mathbb V$, the set $${\rm O}_{f}(\mathbb V)=\{g \in {\rm GL}(\mathbb V): f(g(v_1),\dots,g(v_n))=g(f(v_1,\dots,v_n)) \hbox{ for any } v_1,\dots,v_n \in \mathbb V\},$$ where $\rm{GL}(\mathbb V)$ denotes the group of all linear isomorphisms of $\mathbb V$, is known as the [*orbit*]{} of $\mathbb V$ (associated to $f$). We have that ${\rm O}_{f}(\mathbb V)$ is a subgroup of $\rm{GL}(\mathbb V)$. If we also denote by ${\mathfrak B}$ the set of all bases of $\mathbb V$, we get the action
$$\label{act}
{\rm O}_{f}(\mathbb V) \times {\mathfrak B} \to {\mathfrak B}$$
given by $(g, \{e_i\}_{i \in I}) \mapsto \{g(e_i)\}_{i \in I}$. The previous result states that if two bases ${\mathcal B}$ and ${\mathcal B}^{\prime}$ of $\mathbb V$ belong to the same orbit under the action given by equation (\[act\]), then they induce isomorphic decompositions of $\mathbb V$. Finally, this can be stated as follows.
Let $\mathbb V$ be a linear space equipped with an $n$-linear map $f: \mathbb V \times \dots \times \mathbb V \to \mathbb V$ and fix two bases ${\mathcal B} =\{e_i: i \in I\}$ and ${\mathcal B}^{\prime} =\{u_i: i \in I\}$ of $\mathbb V$. Suppose there exists a bijection $\mu:I \to I$ such that the linear isomorphism $g:\mathbb V \to \mathbb V$ determined by $g(e_i):=u_{\mu(i)}$ for any $i\in I$, satisfies
$$f\left(g(v_1),\dots,u_{\mu(i)}^{(k_1)},\dots, u_{\mu(j)}^{(k_2)},\dots,g(v_{n-2})\right)=g(f(v_1,\dots,e_i^{(k_1)},\dots, e_j^{(k_2)},\dots,v_{n-2}))$$ for any $i,j \in I$, $k_1, k_2 \in \{1,\dots,n\}$, with $k_1<k_2$. Then the decompositions $$\Gamma:=\mathbb V=\bigoplus\limits_{[V_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{V_{[e_i]}} \hbox{ and } \Gamma^{\prime}:=\mathbb V=\bigoplus\limits_{[V_{[u_i]}] \in {\mathcal{F}^{\prime}} / \approx} \overbrace{V_{[u_i]}},$$ corresponding to the choices of ${\mathcal B}$ and ${\mathcal B}^{\prime}$, respectively, in Theorem \[theo1\], are isomorphic.
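As a toy numerical illustration of membership in ${\rm O}_{f}(\mathbb V)$ (our own sketch, not taken from the paper, specialised to the bilinear case $n=2$ on $\mathbb{R}^2$ with the componentwise product), the commuting condition $f(g(u),g(v))=g(f(u,v))$ can be checked directly:

```python
import numpy as np

def f(u, v):
    # componentwise product: a bilinear (n = 2) map on R^2
    return u * v

def in_Of(g, trials=50, seed=0):
    """Numerically test f(g(u), g(v)) == g(f(u, v)) on random vectors.
    By bilinearity it would suffice to check on pairs of basis elements."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        u, v = rng.normal(size=2), rng.normal(size=2)
        if not np.allclose(f(g @ u, g @ v), g @ f(u, v)):
            return False
    return True

swap = np.array([[0.0, 1.0], [1.0, 0.0]])   # g(e_1) = e_2, g(e_2) = e_1
scale = np.array([[2.0, 0.0], [0.0, 1.0]])  # g(e_1) = 2 e_1, g(e_2) = e_2

print(in_Of(swap))   # True: swapping coordinates commutes with f
print(in_Of(scale))  # False: f picks up a factor 4 on the first slot, g only 2
```

Here `swap` lies in ${\rm O}_{f}(\mathbb V)$ and therefore maps any basis to one inducing an isomorphic decomposition, while `scale` does not.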
A characterization of the $f$-simplicity of the components
===========================================================
Our aim in this section is to establish a characterization theorem on the $f$-simplicity of the linear subspaces $\overbrace{V_{[e_i]}}$, which appear in the decomposition of $\mathbb V$ given in Theorem \[theo1\].
Let us begin by recalling several concepts from the theory of algebras.
Let $\mathbb{A}$ be an algebra equipped with an $n$-ary multiplication $[ .,\dots ,.]\ :\mathbb{A}\times \dots \times \mathbb{A} \to \mathbb{A}$ and ${\mathcal{B}}$ a basis of $\mathbb{A}$. The basis ${\mathcal{B}}$ is said to be an *$i$-division basis* if for any $e_i \in {\mathcal{B}}$ and $b_1,\dots,b_{n-1} \in \mathbb{A}$ such that $$[ b_1,\dots ,e_i^{(k)},\dots,b_{n-1} ] =w\neq 0$$ for some $k\in \{1,\dots,n\}$ we have that $e_i,b_1,\dots,b_{n-1} \in {\mathcal{I}}(w)$, where ${\mathcal{I}}(w)$ denotes the [*ideal of $\mathbb A$ generated by $w$*]{}.
The above notion can be generalized to the case of a linear space $\mathbb V$ equipped with an $n$-linear map $f: \mathbb V \times \dots \times \mathbb V \to \mathbb V$. We refer to the minimal strongly $f$-invariant subspace of $\mathbb V$ that contains $v$ as the [*strongly $f$-invariant subspace of $\mathbb V$ generated by $v$*]{}, and denote it by ${\mathcal{I}}(v)$. Observe that the sum of two strongly $f$-invariant subspaces of $\mathbb V$ is again a strongly $f$-invariant subspace, and that the whole of $\mathbb V$ is a trivial strongly $f$-invariant subspace.
Let $\mathbb V$ be a linear space, ${\mathcal B}=\{e_i:i \in I\}$ a fixed basis of $\mathbb V$ and $f: \mathbb V \times \dots \times \mathbb V \to \mathbb V$ an $n$-linear map. It is said that ${\mathcal B}$ is an [*$i$-division basis*]{} of $\mathbb V$ with respect to $f$ if for any $e_i \in {\mathcal{B}}$ and $b_1,\dots,b_{n-1} \in \mathbb{V}$ such that $$f\left( b_1,\dots ,e_i^{(k)},\dots,b_{n-1} \right) =w\neq 0$$ for some $k\in \{1,\dots,n\}$ we have that $e_i,b_1,\dots,b_{n-1} \in {\mathcal{I}}(w)$, where ${\mathcal{I}}(w)$ denotes the strongly $f$-invariant subspace of $\mathbb V$ generated by $w$.
Let us return to the decomposition of the linear space $\mathbb V$, given an $n$-linear map $f: \mathbb V \times \dots \times \mathbb V \to \mathbb V$ and a fixed basis ${\mathcal B}$, $$\mathbb V=\bigoplus\limits_{[V_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{V_{[e_i]}}$$ as deduced in Theorem \[theo1\]. For any $\overbrace{V_{[e_i]}}$ we can restrict $f$ to the $n$-linear map $$f^{\prime}: \overbrace{V_{[e_i]}} \times \dots \times \overbrace{V_{[e_i]}} \to \overbrace{V_{[e_i]}}$$ and consider on $\overbrace{V_{[e_i]}}$ the basis ${\mathcal B}^{\prime}:= {\mathcal B} \cap \overbrace{V_{[e_i]}}$. Then we can assert:
\[ane100\] The linear space $\overbrace{V_{[e_i]}}$ is $f^{\prime}$-simple if and only if ${\rm Ann}(f^{\prime})=0$ and ${\mathcal B}^{\prime}$ is an $i$-division basis of $\overbrace{V_{[e_i]}}$ with respect to $f^{\prime}$.
Suppose that $\overbrace{V_{[e_i]}}$ is $f^{\prime}$-simple. Observe firstly that ${\rm Ann}(f^{\prime})$ is a strongly $f^{\prime}$-invariant subspace of $\overbrace{V_{[e_i]}}$, and thus ${\rm Ann}(f^{\prime})=0.$ Additionally, if we consider some $e_j \in {\mathcal{B}}^{\prime}$ and $b_1,\dots,b_{n-1} \in \overbrace{V_{[e_i]}}$ such that $$f^{\prime}\left( b_1,\dots ,e_j^{(k)},\dots,b_{n-1} \right) =w\neq 0$$ for some $k\in \{1,\dots,n\}$, since $\overbrace{V_{[e_i]}}$ is $f^{\prime}$-simple, we have $${\mathcal{I}}(w)=\overbrace{V_{[e_i]}}$$ and so $e_j,b_1,\dots,b_{n-1} \in {\mathcal{I}}(w)$. Thus, the basis ${\mathcal B}^{\prime}$ is an $i$-division basis of $\overbrace{V_{[e_i]}}$ with respect to $f^{\prime}$.
Conversely, let us suppose that ${\rm Ann}(f^{\prime})=0$ and that the set ${\mathcal B}^{\prime}$ is an $i$-division basis of $\overbrace{V_{[e_i]}}$ with respect to $f^{\prime}$. Consider any nonzero strongly $f^{\prime}$-invariant linear subspace $W$ of $\overbrace{V_{[e_i]}}$ and take some nonzero $ w \in W$. Since ${\rm Ann}(f^{\prime})=0$, there are nonzero elements $$\xi_1,\dots,\xi_{n-1} \in {\mathcal B}^{\prime}$$ such that $$0 \neq f\left( \xi_1,\dots ,w^{(j)},\dots,\xi_{n-1} \right) \in W$$ for some $j\in \{1,\dots,n\}.$ Since ${\mathcal B}^{\prime}$ is an $i$-division basis, we get $$\label{ane1}
\xi_k \in W,$$ for all $k\in \{1,\dots,n-1\}$.
Let us now prove that $ V_{[\xi_k]} \subset W$ for each $k\in \{1,\dots,n-1\}$. To do so, we have to show that any $\nu_j \in [\xi_k]$ with $\nu_j \neq \xi_k$ belongs to $W$. It is clear that $\xi_k$ is connected to any $\nu_j\in [\xi_k]$, and thus there is a connection $(X_1,X_2,\dots,X_m)$, where $X_{i}=\left(a_{i1},\dots,a_{in-1} \right)$ with $a_{il} \in {\mathcal B}\dot\cup \overline{{\mathcal B}}$, $i \in \{1,\dots,m\},\ l\in \{1,\dots,n-1\}$, from $\xi_k$ to $\nu_j$.
Recall that we are dealing with an $f$-orthogonal and strongly $f$-invariant decomposition of $\mathbb{V}$ (by Theorem \[theo1\]). Thus, we may claim that the elements $a_{il}$ satisfy $$\label{ane2}
a_{il} \in {\mathcal B}^{\prime} \cup \overline{ {\mathcal B}^{\prime} },$$ and that the whole connection process from $\xi_k$ to $\nu_j$ can be deduced in $\overbrace{V_{[e_i]}}$.
We have that $$F(\{\xi_k\}, X_1)=F(\{\xi_k\}, a_{11},\dots,a_{1 n-1}) \neq \emptyset.$$ There are two cases to discuss.\
First case: $a_{1l} \in {\mathcal B}^{\prime}$, $l=1,\dots,n-1$ and so there exists $$0\neq x=f\left( a_{11},\dots ,\xi_k^{(r)},\dots,a_{1 n-1} \right),$$ for some $r\in \{1,\dots,n\}$.\
Second case: $a_{1l} \in \overline{{\mathcal B}^{\prime}}$, $l=1,\dots,n-1$ and so there exists $0\neq x \in \overbrace{V_{[e_i]}}$ such that $$f\left( \overline{a}_{11},\dots ,x^{(r)},\dots,\overline{a}_{1 n-1} \right)= \xi_k,$$ for some $r\in \{1,\dots,n\}$.
Consider the first case. As a consequence of the inclusion (\[ane1\]), we obtain $ x \in W.$
Consider now the second case. By the $i$-division property of the basis ${\mathcal B}^{\prime}$ and due to inclusion (\[ane1\]) we conclude that $x \in {\mathcal I}(\xi_k) \subset W$. So, in both cases we have shown that $$\label{fel1}
F(\{\xi_k\}, X_1) \subset W.$$
By the connection definition, we have $$F(F(\{\xi_k\}, X_1), X_2)\neq \emptyset,$$ where $F(\{\xi_k\}, X_1) \subset W$ as seen in (\[fel1\]).
Given an arbitrary $t \in F(F(\{\xi_k\}, X_1), X_2)$, as before, we have two cases to distinguish. In the first one $a_{2l} \in {\mathcal B}^{\prime}$, $l=1,\dots,n-1$, and so there exists $z \in F(\{\xi_k\}, X_1)$ such that $$0\neq t=f\left( a_{21},\dots ,z^{(r^{\prime})},\dots,a_{2 n-1} \right),$$ for some $r^{\prime}\in \{1,\dots,n\}$.\
In the second one $a_{2l} \in \overline{{\mathcal B}^{\prime}}$, and then there exists $z \in F(\{\xi_k\}, X_1)$ such that $0\neq f(\overline{a}_{21},\dots ,t^{(r^{\prime})},\dots,\overline{a}_{2 n-1}) =z$ .
In the first case the inclusion (\[fel1\]) shows that $t \in W.$ In the second case the $i$-division property of ${\mathcal B}^{\prime}$ gives us that $t \in {\mathcal I}(z) \subset W$.
In both cases, we have $$F(F(\{\xi_k\}, X_1), X_2)\subset W.$$
Iterating this argument on the connection (\[ane2\]), we obtain that $$\nu_j \in F(F(\dots F(F(\{\xi_k\},X_1),X_{2}),\dots,X_{m-1}),X_{m}) \subset W$$ and so we can assert that $$\label{ane4}
V_{[\xi_k]} \subset W.$$
To finish the proof, we must show that every $V_{[\nu_j]}$ with $V_{[\nu_j]}\approx V_{[\xi_k]}$ satisfies $V_{[\nu_j]} \subset W.$
Under the above assumption, there exists a subset $$\label{metalli}
\{[\xi_k],[\nu_2],...,[\nu_j]\} \subset {\mathcal B}/ \sim$$ satisfying the conditions in Definition \[ane3\]. From here,
$$\sum\limits_{1 \leq i<i^{\prime} \leq n} \left[\Pi_{V_{[\xi_k]}}(f(\mathbb V, \scriptsize{\ldots}, V_{[\nu_2]}^{(i)}, \scriptsize{\ldots}, V_{[\nu_2]}^{(i^{\prime})}, \scriptsize{\ldots}, \mathbb V)) + \Pi_{V_{[\nu_2]}}(f(\mathbb V, \scriptsize{\ldots}, V_{[\xi_k]}^{(i)}, \scriptsize{\ldots}, V_{[\xi_k]}^{(i^{\prime})}, \scriptsize{\ldots}, \mathbb V)) \right]\neq 0.$$ Therefore, there are $i,i^{\prime}\in \{1,\dots,n\}$ with $i<i^{\prime}$, such that $$\Pi_{V_{[\nu_2]}}(f(\mathbb V, \ldots, V_{[\xi_k]}^{(i)}, \ldots, V_{[\xi_k]}^{(i^{\prime})}, \ldots, \mathbb V))\neq 0$$ or $$\Pi_{V_{[\xi_k]}}(f(\mathbb V, \ldots, V_{[\nu_2]}^{(i)}, \ldots, V_{[\nu_2]}^{(i^{\prime})}, \ldots, \mathbb V))\neq 0.$$
Consider the first case, in which $$\Pi_{V_{[\nu_2]}}(f(\mathbb V, \ldots, V_{[\xi_k]}^{(i)}, \ldots, V_{[\xi_k]}^{(i^{\prime})}, \ldots, \mathbb V))\neq 0.$$ Then there exist $e_k^{\prime}, e_k^{\prime \prime } \in [\xi_k]$ and $b_1,\dots,b_{n-2}\in \mathbb V$ such that
$$0 \neq f(b_1,\dots,{e_k^{\prime}} ^{(i)}, \dots, {e_k^{\prime \prime}} ^{(i^{\prime})}, \dots, b_{n-2})=x_2+c$$ where $0 \neq x_2 \in V_{[\nu_2]}$ and $c \in \bigoplus\limits_{[\nu_j] \neq [\nu_2]} V_{[\nu_j]}$.
Since ${\rm Ann}(f^{\prime})=0$, and taking into account Lemma \[elp1\], there exist $e_{21}^{\prime},\dots,e_{2 n-1}^{\prime} \in [\nu_2]$ such that
$$0\neq f(e_{21}^{\prime},\dots,{x_2} ^{(r)}, \dots, e_{2 n-1}^{\prime}) =q$$ for some $r\in \{1,\dots,n\}$. By Lemma \[elp1\] and (\[ane4\]) we have that $$0\neq f(e_{21}^{\prime},\dots,{f(b_1,\dots,{e_k^{\prime}} ^{(i)}, \dots, {e_k^{\prime \prime}} ^{(i^{\prime})}, \dots, b_{n-2})} ^{(r)}, \dots, e_{2 n-1}^{\prime}) =$$
$$f(e_{21}^{\prime},\dots,{(x_2+c)} ^{(r)}, \dots, e_{2 n-1}^{\prime}) =f(e_{21}^{\prime},\dots,{x_2} ^{(r)}, \dots, e_{2 n-1}^{\prime}) =q\in W.$$ From here, by the $i$-division property of ${\mathcal B}^{\prime}$ we conclude that $$e_{21}^{\prime},\dots,e_{2 n-1}^{\prime} \in {\mathcal I}(q) \subset W.$$
Concerning the second case, recall that we have $$\Pi_{V_{[\xi_k]}}(f(\mathbb V, \ldots, V_{[\nu_2]}^{(i)}, \ldots, V_{[\nu_2]}^{(i^{\prime})}, \ldots, \mathbb V))\neq 0.$$ Similarly to the first case, there exist $e_2^{\prime}, e_2^{\prime \prime } \in [\nu_2]$ and $b_1,\dots,b_{n-2}\in \mathbb V$ such that $$0 \neq f(b_1,\dots,{e_2^{\prime}} ^{(i)}, \dots, {e_2^{\prime \prime}} ^{(i^{\prime})}, \dots, b_{n-2})=x_k+d$$ where $0 \neq x_k \in V_{[\xi_k]}$ and $d \in \bigoplus\limits_{[\nu_j] \neq [\xi_k]} V_{[\nu_j]}$. Again, since ${\rm Ann}(f^{\prime})=0$, there exist $e_{k1}^{\prime},\dots,e_{k n-1}^{\prime} \in [\xi_k]$ such that
$$0\neq f(e_{k1}^{\prime},\dots,{x_k} ^{(r)}, \dots, e_{k n-1}^{\prime}) =s$$ for some $r\in \{1,\dots,n\}$.
By Lemma \[elp1\] and inclusion (\[ane4\]) we have that $$0\neq f(e_{k1}^{\prime},\dots,{f(b_1,\dots,{e_2^{\prime}} ^{(i)}, \dots, {e_2^{\prime \prime}} ^{(i^{\prime})}, \dots, b_{n-2})} ^{(r)}, \dots, e_{k n-1}^{\prime}) =$$
$$f(e_{k1}^{\prime},\dots,{(x_k+d)} ^{(r)}, \dots, e_{k n-1}^{\prime}) =f(e_{k1}^{\prime},\dots,{x_k} ^{(r)}, \dots, e_{k n-1}^{\prime}) =s\in W.$$ From here, by the $i$-division property of ${\mathcal B}^{\prime}$ we conclude that $$e_{k1}^{\prime},\dots,e_{k\, n-1}^{\prime} \in {\mathcal I}(s) \subset W.$$ Applying the $i$-division property of ${\mathcal B}^{\prime}$, this leads to
$$f(b_1,\dots,{e_2^{\prime}} ^{(i)}, \dots, {e_2^{\prime \prime}} ^{(i^{\prime})}, \dots, b_{n-2})\in {\mathcal I}(s) \subset W.$$ A second application of the $i$-division property of ${\mathcal B}^{\prime}$ allows us to write $e_2^{\prime} \in W$.
At this point, we have shown in both cases that there are elements in $[\nu_2]$ belonging to $W$. Hence by using the same previous argument as done with $\xi_k$, (see inclusions (\[ane1\]) and (\[ane4\])), we get that $$V_{[\nu_2]} \subset W.$$
It is clear that this reasoning can be repeated for all other elements of the set (\[metalli\]). Hence $$V_{[\nu_j]} \subset W$$ and consequently, since $$\overbrace{V_{[e_i]}}=\overbrace{V_{[\xi_k]}}:= \bigoplus\limits_{V_{[e_j]} \in [V_{[\xi_k]}]} V_{[e_j]}$$ we have proved that $$\overbrace{V_{[e_i]}}=W,$$ that is, $\overbrace{V_{[e_i]}}$ is $f^{\prime}$-simple.
The above result can be restated as follows.
[*The linear space $\overbrace{V_{[e_i]}}$ is $f^{\prime}$-simple if and only if ${\rm Ann}(f^{\prime})=0$ and every non-zero element in $\overbrace{V_{[e_i]}}$ is an $i$-division element with respect to $f^{\prime}$.*]{}
Application to the structure theory of arbitrary $n$-ary algebras
=================================================================
In this section we will apply the results obtained in the previous sections to the structure theory of arbitrary $n$-ary algebras.
We will denote by ${\mathfrak A}$ an arbitrary $n$-ary algebra in the sense that there are no restrictions on the dimension of the algebra nor on the base field ${\mathbb F}$, and that no specific identity on the product ($n$-Lie (Filippov) [@Fil], $n$-ary Jordan [@kps], $n$-ary Malcev [@Pojidaev], etc.) is assumed. That is, ${\mathfrak A}$ is just a linear space over ${\mathbb F}$ endowed with an $n$-linear map $$[ \cdot, \ldots, \cdot ] :{\mathfrak A} \times \ldots \times {\mathfrak A} \to {\mathfrak A}$$ $$\hspace{1cm}(x_1, \ldots, x_n) \mapsto [x_1, \ldots, x_n]$$ called [*the product*]{} of ${\mathfrak A}$.
We recall that given an $n$-ary algebra $({\mathfrak A}, [\cdot, \ldots, \cdot ])$, a [*subalgebra*]{} of ${\mathfrak A}$ is a linear subspace ${\mathfrak B}$ closed under the product, that is, such that $[{\mathfrak B}, \ldots, {\mathfrak B}] \subset {\mathfrak B}$. A linear subspace ${\mathfrak I}$ of ${\mathfrak A}$ is called an [*ideal*]{} of ${\mathfrak A}$ if $[{\mathfrak A}, \ldots, {\mathfrak I}^{(r)}, \ldots, {\mathfrak A}] \subset {\mathfrak I}$ for all $ r\in\{1,\dots,n\}$. An $n$-ary algebra ${\mathfrak A}$ is said to be [*simple*]{} if its product is nonzero and its only ideals are $\{0\}$ and ${\mathfrak A}$. We finally recall that the [*annihilator*]{} of the algebra $({\mathfrak A}, [.,\dots,.])$ is defined as the linear subspace $${\rm Ann}({\mathfrak A})=\{x \in {\mathfrak A}: [{\mathfrak A}, \ldots, x^{(k)}, \ldots, {\mathfrak A}] =0, \mbox{ for all $k\in\{1,\dots,n\}$ }\}.$$
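For a finite-dimensional algebra given by structure constants, the annihilator is the null space of a system of linear conditions and can be computed numerically. The following sketch is our own illustration (binary case $n=2$, with hypothetical structure constants; it is not part of the paper):

```python
import numpy as np

# structure constants c[i, j, k]: [e_i, e_j] = sum_k c[i, j, k] e_k
# toy 3-dimensional algebra: [e_0, e_0] = e_1, [e_0, e_1] = e_2,
# all other basic products zero, so Ann(A) = span{e_2}
c = np.zeros((3, 3, 3))
c[0, 0, 1] = 1.0
c[0, 1, 2] = 1.0

def annihilator_basis(c, tol=1e-10):
    """Basis of Ann(A) = {x : [x, A] = [A, x] = 0}, obtained as the null
    space of the stacked linear conditions on the coordinates of x."""
    d = c.shape[0]
    # x in Ann iff sum_i x_i c[i, j, k] = 0 for all (j, k)  (x on the left)
    #         and sum_j x_j c[i, j, k] = 0 for all (i, k)  (x on the right)
    left = c.reshape(d, -1).T                      # rows indexed by (j, k)
    right = c.transpose(1, 0, 2).reshape(d, -1).T  # rows indexed by (i, k)
    m = np.vstack([left, right])
    _, s, vt = np.linalg.svd(m)
    rank = int((s > tol).sum())
    return vt[rank:]          # remaining right-singular vectors span the null space

print(annihilator_basis(c))   # one row, proportional to e_2
```

In this toy algebra $e_0$ and $e_1$ multiply nontrivially while $e_2$ annihilates everything, and the routine recovers exactly the one-dimensional annihilator.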
If we fix any basis ${\mathcal B}=\{e_i\}_{i \in I}$ of ${\mathfrak A}$ and denote the product $[.,\dots,.]$ of ${\mathfrak A}$ by $f$, Theorem \[theo1\] applies to show that ${\mathfrak A}$ decomposes as the $f$-orthogonal direct sum of strongly $f$-invariant linear subspaces $${\mathfrak A}=\bigoplus\limits_{[{\mathfrak A}_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{{\mathfrak A}_{[e_i]}}.$$
Now observe that the $f$-orthogonality of the linear subspaces means that, when $[{\mathfrak A}_{[e_i]}] \neq [{\mathfrak A}_{[e_j]}]$, we have $$[ \mathfrak A, \ldots, \overbrace{{\mathfrak A}_{[e_i]}}^{(k_1)}, \ldots, \overbrace{{\mathfrak A}_{[e_j]}}^{(k_2)}, \ldots, \mathfrak A]=0,$$ for all $k_1,k_2\in \left\{1,\dots,n \right\}$, $k_1 \neq k_2$, and that the strong $f$-invariance of a linear subspace $\overbrace{{\mathfrak A}_{[e_i]}}$ means that $\overbrace{{\mathfrak A}_{[e_i]}}$ is actually an ideal of ${\mathfrak A}$. From here, we can state:
Let $({\mathfrak A}, [\cdot, \ldots, \cdot ])$ be an arbitrary algebra. Then for any basis ${\mathcal B} =\{e_i: i \in I\}$ of ${\mathfrak A}$ one has the decomposition $${\mathfrak A}=\bigoplus\limits_{[{\mathfrak A}_{[e_i]}] \in {\mathcal{F}} / \approx} \overbrace{{\mathfrak A}_{[e_i]}},$$ where each $\overbrace{{\mathfrak A}_{[e_i]}}$ is an ideal of $ {\mathfrak A}$. Furthermore, any pair of distinct ideals in this decomposition is $f$-orthogonal.
In the same context, if we restrict the product $[\cdot, \ldots, \cdot ]$ of ${\mathfrak A}$ to any ideal $\overbrace{{\mathfrak A}_{[e_i]}}$, we get the algebra $(\overbrace{{\mathfrak A}_{[e_i]}}, [\cdot, \ldots, \cdot ])$. Now, by observing that the $f'$-simplicity of $(\overbrace{{\mathfrak A}_{[e_i]}}, [\cdot, \ldots, \cdot ])$ is equivalent to the simplicity of $(\overbrace{{\mathfrak A}_{[e_i]}}, [\cdot, \ldots, \cdot ])$ as an algebra, and that ${\rm Ann}(f^{\prime})={\rm Ann}(\overbrace{{\mathfrak A}_{[e_i]}})$, Theorem \[ane100\] allows us to assert the following.
The ideal $(\overbrace{{\mathfrak A}_{[e_i]}}, [\cdot, \ldots, \cdot ])$ is simple if and only if ${\rm Ann}(\overbrace{{\mathfrak A}_{[e_i]}})=0$ and ${\mathcal B}^{\prime}:={\mathcal B} \cap \overbrace{{\mathfrak A}_{[e_i]}}$ is an $i$-division basis of $\overbrace{{\mathfrak A}_{[e_i]}}$.
[99]{}
Barreiro, E.; Calderón, A.; Kaygorodov, I.; Sánchez, J.M.: $n$-Ary generalized Lie-type color algebras admitting a quasi-multiplicative basis, arXiv:1801.02071.

Barreiro, E.; Kaygorodov, I.; Sánchez, J.M.: $k$-Modules over linear spaces by $n$-linear maps admitting a multiplicative basis, arXiv:1707.07483.

Calderón, A.; Navarro, F.J.: Arbitrary algebras with a multiplicative basis. [*Linear Algebra Appl.*]{} 498 (2016), 106-116.

Calderón, A.J.: Associative algebras admitting a quasi-multiplicative basis. [*Algebr. Represent. Theory*]{} 17 (2014), no. 6, 1889-1900.

Calderón, A.J.: Lie algebras with a set grading. [*Linear Algebra Appl.*]{} 452 (2014), 7-20.

Calderón, A.J.: Decompositions of linear spaces induced by bilinear maps. [*Linear Algebra Appl.*]{} 542 (2018), 209-224.

Calderón, A.J.; Navarro, F.J.; Sánchez, J.M.: $n$-Algebras admitting a multiplicative basis. [*J. Algebra Appl.*]{} 16 (2018), no. 11, 1850025 (11 pages).

Cao, Y.; Chen, L.Y.: On the structure of graded Leibniz triple systems. [*Linear Algebra Appl.*]{} 496 (2016), 496-509.

Filippov, V.: $n$-Lie algebras. [*Sib. Math. J.*]{} 26 (1985), no. 6, 126-140.

Kaygorodov, I.; Pozhidaev, A.; Saraiva, P.: On a ternary generalization of Jordan algebras. [*Linear Multilinear Algebra*]{}, DOI: 10.1080/03081087.2018.1443426.

Pozhidaev, A.: $n$-ary Mal'tsev algebras. [*Algebra and Logic*]{} 40 (2001), no. 3, 309-329.
---
abstract: 'Simulated observations of a $10{^\circ}\times 10{^\circ}$ field by the Microwave Anisotropy Probe (MAP) are analysed in order to separate cosmic microwave background (CMB) emission from foreground contaminants and instrumental noise and thereby determine how accurately the CMB emission can be recovered. The simulations include emission from the CMB, the kinetic and thermal Sunyaev-Zel’dovich (SZ) effects from galaxy clusters, as well as Galactic dust, free-free and synchrotron. We find that, even in the presence of these contaminating foregrounds, the CMB map is reconstructed with an rms accuracy of about 20 $\mu$K per 12.6 arcmin pixel, which represents a substantial improvement as compared to the individual temperature sensitivities of the raw data channels. We also find, for the single $10{^\circ}\times 10{^\circ}$ field, that the CMB power spectrum is accurately recovered for $\ell \la 600$.'
author:
- |
A.W. Jones, M.P. Hobson and A.N. Lasenby\
Cavendish Astrophysics, Cavendish Laboratory, Madingley Road, Cambridge CB3 0HE, UK
date: 'Accepted ???. Received ???; in original form '
title: Separation of foregrounds from cosmic microwave background observations with the MAP satellite
---
\[firstpage\]
methods: data analysis – techniques: image processing – cosmic microwave background.
Introduction {#intro}
============
The NASA MAP satellite is due to be launched in 2000 and aims to make high-resolution, low-noise maps of the whole sky at several observing frequencies. The main goal of the mission is to use this multi-frequency data to produce an all-sky map of fluctuations in the cosmic microwave background (CMB). As with any CMB experiment, however, MAP is also sensitive to emission from several foreground components. The main contaminants are expected to be Galactic dust, free-free and synchrotron emission, the kinetic and thermal Sunyaev-Zel’dovich (SZ) effect from galaxy clusters, and extra-galactic point sources. In order to obtain a map of the CMB fluctuations alone, it is therefore necessary to separate the emission from these various components.
The contamination due to extra-galactic point sources is expected to be mainly from radio-loud AGNs, including flat-spectrum radio galaxies and QSOs, blazars and possibly some inverted-spectrum radio sources. Since the frequency spectra of many of these extra-galactic objects are, in general, rather complicated, any extrapolation of their emission over a wide frequency range must be performed with caution. For MAP observations it is possible that a significant fraction of point sources may be identified and removed using the satellite observations themselves, together perhaps with pre-existing surveys. Moreover, by including the point source predictions of Toffolatti et al. (1998) in simulated Planck Surveyor observations, Hobson et al. (1998b) find, using a maximum-entropy algorithm, that the quality of the component separation is relatively insensitive to the presence of the point sources. Therefore, the effects of point sources will be ignored in this paper.
Aside from extra-galactic point sources, the other physical components mentioned above have reasonably well defined spectral characteristics, and we may use this information, together with multi-frequency observations, to distinguish between the various foregrounds. In this paper, we perform a separation of the different components, in order to determine the accuracy to which the MAP satellite can recover the CMB emission in the presence of contaminating foreground emission. The separation is carried out using the Fourier space maximum-entropy method (MEM) developed by Hobson et al. (1998) (hereafter HJLB98), which in the absence of non-Gaussian signals reduces to linear Wiener filtering (Bouchet, Gispert & Puget 1996; Tegmark & Efstathiou 1996).
Simulated MAP observations {#simmobs}
==========================
The simulated input components used here are identical to those described in HJLB98, so that a direct comparison between the MAP and Planck Surveyor results can be made. These simulations were performed by Gispert & Bouchet (1997) and consist of a $10{^\circ}\times 10{^\circ}$ field for each of the six input components described above. The realisations of the input components used are shown in Figure \[fig1\]. Each component is plotted at 50 GHz and, for illustration purposes, has been convolved with a Gaussian beam of FWHM equal to 12.6 arcmin, which is the highest resolution of the current design for the MAP satellite. For convenience, the mean of each map is set to zero, in order to highlight the relative level of fluctuations due to each component.
The primary CMB fluctuations are a realisation of COBE-normalised standard CDM with critical density and $H_\circ=50$km s$^{-1}$ Mpc$^{-1}$. IRAS 100-$\mu$m maps are used as spatial templates for the dust and free-free components; the correlated emission from these two components is described by HJLB98. Haslam 408 MHz maps (Haslam et al. 1982) are used as the spatial template for the synchrotron emission to which was added Gaussian small scale structure following a $C_\ell \propto
\ell^{-3}$ power law on angular scales below 0.85 degrees. The kinetic and thermal SZ effects are generated using a Press-Schechter formalism, as discussed in Bouchet et al. (1997) and a King model for the cluster profiles. The MAP satellite is not designed to be sensitive to either of the SZ effects but they are included here for completeness.
Using the realisations for each physical component and the design specifications summarised in Table \[table1\], it is straightforward to simulate MAP satellite observations. The simulated observations are produced by integrating the emission due to each physical component across each waveband, assuming the transmission is uniform across the band. At each observing frequency, the total sky emission is convolved with a beam of the appropriate FWHM. Finally, isotropic noise is added to the maps, assuming a spatial sampling rate of FWHM/2.4 at each frequency. We have assumed that any striping due to the scanning strategy and $1/f$ noise has been removed to sufficient accuracy that any residuals become negligible.
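The per-channel simulation step described above (beam convolution followed by isotropic white noise) can be sketched as follows. This is our own minimal illustration; the function name and all numbers are placeholders rather than the MAP specification:

```python
import numpy as np

def simulate_channel(sky_uK, fwhm_arcmin, pix_arcmin, noise_rms_uK, seed=0):
    """Smooth an input sky map with a Gaussian beam (via FFT) and add
    isotropic white noise of the given rms per pixel."""
    # beam width in pixels: FWHM -> sigma conversion
    sigma_pix = fwhm_arcmin / (2.0 * np.sqrt(2.0 * np.log(2.0))) / pix_arcmin
    n = sky_uK.shape[0]
    k = np.fft.fftfreq(n)                      # spatial frequency, cycles per pixel
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Fourier transform of a Gaussian beam of width sigma_pix pixels
    beam = np.exp(-2.0 * (np.pi * sigma_pix) ** 2 * (kx ** 2 + ky ** 2))
    smoothed = np.fft.ifft2(np.fft.fft2(sky_uK) * beam).real
    rng = np.random.default_rng(seed)
    return smoothed + rng.normal(0.0, noise_rms_uK, size=sky_uK.shape)
```

A uniform sky passes through unchanged (the beam is normalised to unity at $k=0$), while small-scale fluctuations are suppressed before the noise is added.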
Figure \[fig2\] shows the rms temperature fluctuations as a function of observing frequency due to each physical component, after convolution with the beam. The rms noise per pixel at each frequency is also plotted (this noise is calculated for one year of observations and is based upon the current design of the satellite as reported in the MAP NASA home page). We see from the figure that, as expected, the rms temperature fluctuation of the CMB is almost constant across the frequency channels, the only variations being due to the convolution with beams of different sizes. Furthermore, it is seen that the CMB is consistently above the noise level, but only at the very lowest frequency do any of the other physical components become significant. At 22 GHz it is seen that the free-free and synchrotron components just reach above the noise level but are still well below the level of the CMB. This is to be expected since the MAP satellite is designed not to make separate maps of the individual components, but only to provide sufficient information about the foreground emission in order to perform an accurate subtraction from the CMB. This should be contrasted with the Planck Surveyor mission (Bersanelli et al. 1996), which is designed to produce all-sky maps of the foreground components as well as the CMB.
------------------------------------------------------ ----- ----- ----- ----- ------ --
Central frequency (GHz): 22 30 40 60 90
Fractional bandwidth ($\Delta\nu/\nu$): 0.2 0.2 0.2 0.2 0.2
Transmission: 1.0 1.0 1.0 1.0 1.0
Angular resolution (arcmin): 56 41 28 21 12.6
$\Delta T$ sensitivity ($\mu$K) ($17^\prime$ pixel): 26. 32. 27. 35. 35.
------------------------------------------------------ ----- ----- ----- ----- ------ --
: Proposed observational parameters for the MAP satellite. Angular resolution is quoted as FWHM for a Gaussian beam. Sensitivities are quoted for 12 months of observations.
\[table1\]
The observed maps at each of the five MAP frequencies are shown in Figure \[fig3\] in units of equivalent thermodynamic temperature measured in $\mu$K. The coarse pixelisation at the lower observing frequencies is due to the FWHM/2.4 sampling rate. Moreover, at these lower frequencies the effect of convolution with the relatively large beam is also easily seen. It is also seen that, as expected, the CMB emission dominates each of the frequency channels.
The component separation {#results}
========================
The component separation is performed using the Fourier space MEM algorithm developed by HJLB98, and the reader is referred to that paper for details of the method. In the absence of non-Gaussian signals the method reduces to linear Wiener filtering. In this Section, we apply the MEM algorithm to the simulated MAP data discussed in Section \[simmobs\]. Since the dominant contributions to the MAP data are the CMB and the pixel noise, which are both Gaussian, the results of the MEM algorithm should be very similar to those obtained using a Wiener filter approach (Bouchet, Gispert & Puget 1996; Tegmark & Efstathiou 1996). Our primary aim is simply to determine the accuracy to which the CMB emission can be recovered from MAP observations after such a component separation has been performed.
As discussed in HJLB98 there are several layers of information that we can use in the analysis of the data. For the CMB and SZ effects the frequency spectra are accurately known. For the Galactic emission this is not true. However, as discussed in Jones et al. (1998b), it is possible to use the data themselves to constrain the spectral dependence of any Galactic emission that is significant in the data. If a component is not significant, then the uncertainty in its spectral index will only affect the reconstruction of that Galactic component and not the reconstruction of any of the other components. Therefore, in this letter, we assume perfect knowledge of the frequency spectrum of each component.
Our other prior knowledge concerns the spatial distribution, or power spectrum, of the various emission components. We are never entirely ignorant of the shape of the power spectra of the foregrounds in the data. In HJLB98 two levels of power spectrum information were used. The first assumed no information on the power spectrum of any component and the second assumed perfect knowledge of the power spectra and cross-correlation information. In this letter we adopt a more conservative approach than the latter, while still remembering that we have some prior information on the components. We choose to use the best-guess theoretical power spectrum for each component: a white-noise power spectrum for the two SZ effects, standard CDM for the CMB and an $\ell^{-3}$ power spectrum for the Galactic components. In any case, as noted in HJLB98, the MEM algorithm is an iterative process and the initial guess for the power spectra does not greatly affect the final reconstructions.
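In the Gaussian limit, the mode-by-mode operation to which the MEM algorithm reduces is the Wiener filter of Bouchet, Gispert & Puget (1996) and Tegmark & Efstathiou (1996). The following is our own minimal per-mode sketch, with toy numbers rather than the MAP noise levels:

```python
import numpy as np

def wiener_matrix(F, S, N):
    """Per-Fourier-mode Wiener filter W = S F^T (F S F^T + N)^{-1}.
    F (n_freq x n_comp): frequency spectrum of each component;
    S (n_comp x n_comp): component signal covariance for this mode;
    N (n_freq x n_freq): noise power per channel.
    The component estimate for the mode is then W applied to the data vector."""
    return S @ F.T @ np.linalg.inv(F @ S @ F.T + N)

# one component (CMB, flat in thermodynamic temperature), two channels,
# signal power well above the noise power in each channel
F = np.array([[1.0], [1.0]])
S = np.array([[100.0]])
N = np.eye(2)
W = wiener_matrix(F, S, N)
print(W)   # ~[[0.4975, 0.4975]]: near-equal channel weights, mild suppression
```

For signal-dominated modes the filter approaches an inverse-variance average of the channels; for noise-dominated modes the weights shrink towards zero, which is why the recovered power spectrum degrades at high $\ell$.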
The reconstructed CMB map {#recicf}
-------------------------
The reconstruction of the CMB component is shown in Figure \[fig4\]. The greyscale in this figure was chosen to coincide with that of Fig. \[fig1\] in order to enable a more straightforward comparison with the true map. At least by eye, the CMB has been reproduced quite accurately. The reconstructions of the other physical components are significantly worse and are not plotted. In fact, the only other components for which the reconstruction differs significantly from zero are the free-free and synchrotron Galactic components. This is expected, as the other three components are well below the noise (see Figure \[fig2\]) in all frequency channels. The free-free and synchrotron reconstructions appear smoothed to a larger degree than the CMB reconstruction, which is also expected because the two Galactic components only appear above the noise in the lowest resolution (22 GHz) channel. Further tests on the thermal SZ effect show that only clusters with an integrated decrement of $> 250\mu$K (at 22 GHz) will be observable with the MAP satellite, and even then the reconstructed decrement is severely underestimated. This is because the frequency dependence of the thermal SZ effect at frequencies below 100 GHz closely follows that of the CMB, and it is very difficult to extract the information on the SZ effect from the data.
In order to obtain a quantitative description of the accuracy of the reconstructions, we calculate the rms of the residuals. For any particular physical component, this is given by $$e_{\rm rms} = \left[ {1\over N} \sum_{i=1}^N \left( T^{\rm rec}_i -
T^{\rm true}_i \right)^2 \right]^{1/2}
\label{eq:erms}$$ where $T^{\rm rec}_i$ and $T^{\rm true}_i$ are respectively the reconstructed and true temperatures in the $i$th pixel and $N$ is the total number of pixels in the map. The value of $e_{\rm rms}$ for the CMB reconstruction is $22\mu$K per $12.6\arcmin$ pixel. We note that this is very close to the desired $20\mu$K accuracy generally quoted for the MAP satellite. This error should be contrasted with the result obtained by using the 90 GHz data channel as the CMB map. In this case, the error on the map is $46\mu$K (of which $45\mu$K is due to instrumental noise and $10\mu$K is due to the presence of the unsubtracted foregrounds), so some form of component separation is clearly desirable.
As mentioned above, the other physical components were only poorly reconstructed. Indeed, no recovery of the dust or of either SZ effect was possible at all, and only low-resolution reconstructions of the other two components were obtained. The rms errors on the free-free and synchrotron reconstructions were $0.6\mu$K and $0.07\mu$K respectively. Comparing these rms errors with the peak amplitudes in each map, which are $1.0\mu$K and $0.10\mu$K respectively, we see that the reconstructions are not very accurate.
As a further test of the quality of the CMB reconstruction, we plot the amplitudes of the reconstructed pixel temperatures against those of the true map. The temperature range of the true input map is divided into 100 bins. Three contours are then plotted which correspond to the 68, 95 and 99 per cent points of the distribution of corresponding reconstructed temperatures in each bin. Clearly, a perfect reconstruction would be represented by a single diagonal contour of unit gradient with width equal to the bin size. Figure \[fig5\] shows this comparison for the CMB reconstruction from the MAP data. The gradient of the best fit line is 1.03 and the 68 per cent contours lie approximately $25\mu$K on either side of the true value. This agrees with the value for the $e_{\rm rms}$ quoted above. Figure \[fig6\] shows the same comparison for the case in which the highest frequency data channel was used as the CMB reconstruction at $12.6\arcmin$ resolution. We see that in this case the 68 per cent contour is much wider, corresponding to about 45 $\mu$K errors, in agreement with the $e_{\rm rms}$ value quoted above.
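The binned comparison described above can be sketched as follows; the binning scheme, percentile convention and toy error level are illustrative assumptions, not the values used to produce Figs \[fig5\] and \[fig6\]:

```python
import random

def binned_spread(t_true, t_rec, nbins=100, pct=68):
    """Per bin of true temperature, the half-width of the band containing
    pct per cent of the reconstructed temperatures around the true value."""
    lo, hi = min(t_true), max(t_true)
    width = (hi - lo) / nbins
    bins = [[] for _ in range(nbins)]
    for t, r in zip(t_true, t_rec):
        i = min(int((t - lo) / width), nbins - 1)
        bins[i].append(abs(r - t))
    out = []
    for residuals in bins:
        if not residuals:
            out.append(None)  # empty bin: no contour point
            continue
        residuals.sort()
        out.append(residuals[min(int(pct / 100 * len(residuals)),
                                 len(residuals) - 1)])
    return out

# toy check: 22 muK Gaussian errors give a 68 per cent half-width near 22 muK
rng = random.Random(0)
t_true = [rng.gauss(0.0, 100.0) for _ in range(100000)]
t_rec = [t + rng.gauss(0.0, 22.0) for t in t_true]
print(binned_spread(t_true, t_rec, nbins=1)[0])  # ~22
```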
The reconstructed CMB power spectrum
------------------------------------
Since the component separation is performed in the Fourier domain, it is straightforward to compute the reconstructed power spectrum of the CMB component. As noted in HJLB98 it is also straightforward to calculate the errors on the reconstructed power spectrum using the inverse Hessian. Figure \[fig7\] shows the power spectrum (bold line) of the reconstructed CMB map compared to the power spectrum (faint line) of the input map. The 68 per cent confidence limits on the power spectrum reconstruction are also shown (dotted lines). It is seen that the true power spectrum is almost always contained within the 68 per cent confidence intervals. These confidence limits on the power spectrum are calculated by assuming a Gaussian profile for the posterior probability distribution around its maximum (see HJLB98). We see that the power spectrum of the reconstructed map is reasonably accurate for $\ell \la 500$, at which point the reconstructed spectrum begins slightly to underestimate the true value, and beyond $\ell \approx 1000$ the input power spectrum is severely underestimated. This behaviour is due mainly to the effects of beam convolution and the presence of instrumental noise. By using some form of rescaled filter, it is possible to boost the reconstructed power spectrum at high $\ell$ so that it lies closer to the input spectrum, but only at the cost of increasing the rms error of the corresponding reconstructed CMB map (see HJLB98). Therefore, such a procedure has not been carried out here.
Discussion and conclusions {#conc}
==========================
In this paper, we have analysed simulated MAP observations in a $10{^\circ}\times 10{^\circ}$ field, in order to separate the CMB emission from various foreground contaminants and determine how accurately the CMB emission can be recovered. The algorithm used to perform the component separation is the Fourier space maximum-entropy method discussed by Hobson et al. (1998), which reduces to a Wiener filter in the absence of non-Gaussian signals. The simulated observations include contributions from primary CMB, kinetic and thermal SZ effects, and Galactic dust, free-free and synchrotron emission. Assuming knowledge of both the one-dimensional ensemble average power spectrum for each component and its frequency behaviour, we find that the CMB emission can be reconstructed with an rms accuracy of $22\mu$K per 12.6 arcmin pixel. We note that this represents a substantial improvement as compared to the individual temperature sensitivities of the raw data channels, and thus indicates that some form of component separation is desirable for MAP observations. We also find that the power spectrum of the reconstructed CMB map lies close to the true power spectrum for $\ell \la 1000$. In contrast to our analysis of simulated Planck Surveyor observations (Hobson et al. 1998), we find that it is not possible to recover the emission from Galactic dust or the kinetic/thermal Sunyaev-Zel’dovich effects, although low-resolution reconstructions of the Galactic free-free and synchrotron emission were obtained.
We have also repeated the component separation using a straightforward Wiener filter algorithm, and find that, as expected, the reconstructions are very similar to those presented above and the rms error on the reconstructed CMB map is almost identical. Moreover, both techniques take the same computational time as they are based upon similar calculations in Fourier space. As mentioned above, the similarity of the results is due to the fact that the dominant contributions to the MAP data are the CMB and instrumental noise, which are both Gaussian. Thus, although several non-Gaussian foregrounds were included in the simulated observations, the angular resolution and frequency coverage of the MAP satellite preclude the presence of significant non-Gaussian effects in the data (although this is certainly not the case for the Planck Surveyor satellite). If, however, the input CMB component in the MAP simulations is replaced by a realisation of CMB fluctuations due to cosmic strings, then a significant non-Gaussian signal is present in the data. In this case we find that the rms error on the CMB reconstruction is consistently $2\mu$K lower for the MEM algorithm as compared to the Wiener filter.
Acknowledgements {#acknowledgements .unnumbered}
================
AWJ acknowledges King’s College, Cambridge, for support in the form of a Research Fellowship. We would like to thank Francois Bouchet and Richard Gispert for kindly providing the simulations used in this paper.
[99]{}
Bouchet F.R., Gispert R., Boulanger F., Puget J.L., 1997, in Bouchet F.R., Gispert R., Guideroni B., Tran Thanh Van J., eds, Proc. 36th Moriond Astrophysics Meeting, Microwave Anisotropies. Editions Frontière, Gif-sur-Yvette, p. 481
Bouchet F.R., Gispert R., Puget J.L., 1996, in Dwek E., ed., Proc. AIP Conf. 348, The mm/sub-mm foregrounds and future CMB space missions. AIP Press, New York, p. 255
Gispert R., Bouchet F.R., 1997, in Bouchet F.R., Gispert R., Guideroni B., Tran Thanh Van J., eds, Proc. 16th Moriond Astrophysics Meeting, Microwave Anisotropies. Editions Frontière, Gif-sur-Yvette, p. 503
Haslam C.G.T. et al., 1982, A&AS, 47, 1
Hobson M.P., Jones A.W., Lasenby A.N., Bouchet F.R., 1998a, MNRAS, in press
Hobson M.P., Barreiro R.B., Toffolatti L., Lasenby A.N., Sanz J.L., Jones A.W., Bouchet F.R., 1998b, MNRAS, submitted
Jones A.W., Hancock S., Lasenby A.N., Davies R.D., Gutiérrez C.M., Rocha G., Watson R.A., Rebolo R., 1998a, MNRAS, 294, 582
Jones A.W., Hobson M.P., Mukherjee P., Lasenby A.N., 1998b, to appear in Proc. of ‘The CMB and the Planck Mission’, Santander
Maisinger K., Hobson M.P., Lasenby A.N., 1997, MNRAS, 290, 313
Tegmark M., Efstathiou G., 1996, MNRAS, 281, 1297
Toffolatti L., Argüeso Gómez F., De Zotti G., Mazzei P., Franceschini A., Danese L., Burigana C., 1998, MNRAS, 297, 117
\[lastpage\]
---
abstract: 'A complete extension theorem for linear codes over a module alphabet and the symmetrized weight composition is proved. It is shown that an extension property with respect to an arbitrary weight function does not hold for module alphabets with a noncyclic socle.'
author:
- Serhii Dyshko
bibliography:
- 'biblio.bib'
title: When the extension property does not hold
---
Introduction
============
The famous MacWilliams Extension Theorem states that each linear Hamming isometry of a linear code extends to a monomial map. The result was originally proved in MacWilliams’ Ph.D. thesis, see [@macwilliams-phd61], and was later generalized to linear codes over module alphabets.
Starting from the work [@ward-wood] of Ward and Wood, where character theory was used to give a short proof of the classical MacWilliams Extension Theorem, several generalizations of the result appeared in works of Dinh, López-Permouth, Greferath, Wood, and others, see [@dinh-lopez-1; @dinh-lopez; @greferath; @wood-foundations]. For finite rings and the Hamming weight, it was proved that the extension theorem holds for linear codes over a module alphabet if and only if the alphabet is pseudo-injective and has a cyclic socle, see [@wood-foundations].
Regarding the symmetrized weight composition, the extension theorem for the case of classical linear codes was proved by Goldberg in [@goldberg]. In a recent result, see [@elgarem-megahed-wood], the authors proved that if an alphabet has a cyclic socle, then an analogue of the extension theorem holds for the symmetrized weight composition built on an arbitrary group. The result was improved in [@assem], where the author showed, under some additional assumptions, the maximality of the cyclic socle condition for the symmetrized weight composition built on the full automorphism group of an alphabet. There, the author also showed the relation between extension properties with respect to the Hamming weight and the symmetrized weight composition.
There exist various results on the extension property for arbitrary weight functions, and particularly for homogeneous weights and the Lee weight, as for example in [@barra], [@greferath-biinvariant] and [@langevin].
In this paper we give a complete proof of the maximality of the cyclic socle condition for the extension theorem in the context of codes over a module alphabet and the symmetrized weight composition built on an arbitrary group, see \[thm:noncyclic-socle-swc\]. The result is used to show that for a module alphabet with a noncyclic socle the extension property with respect to an arbitrary weight function does not hold.
Preliminaries
=============
Let $R$ be a ring with identity and let $A$ be a finite left $R$-module. Consider a group $\operatorname{Aut}_R(A)$ of all $R$-linear automorphisms of $A$. Let $G$ be a subgroup of $\operatorname{Aut}_R(A)$. Consider the action of $G$ on $A$ and denote by $A/G$ the set of orbits.
Let $\operatorname{F}(X,Y)$ denote the set of all maps from the set $X$ to the set $Y$.
Let $n$ be a positive integer and consider a module $A^n$. Define a map $\operatorname{swc}_G: A^n \rightarrow \operatorname{F}(A/G,{\mathbb{Q}})$, called the *symmetrized weight composition* built on the group $G$. For each $a \in A^n$, $O \in A/G$, $$\operatorname{swc}_G(a)(O) = {|{ \{i \in {\{1,\dots,n\}} \mid a_i \in O\} }|}\;.$$ The *Hamming weight* $\operatorname{wt}: A^n \rightarrow \{0, \dots, n\}$ is a function that counts the number of nonzero coordinates. There is always a zero orbit $\{0\}$ in $A/G$. For each $a \in A^n$, $\operatorname{swc}_G(a)(\{0\}) = n - \operatorname{wt}(a)$.
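The symmetrized weight composition is easy to compute directly from its definition. In the toy Python sketch below, the alphabet $\mathbb{Z}/5$ and the sign-change group are illustrative choices, not taken from the text; the last line checks the stated relation $\operatorname{swc}_G(a)(\{0\}) = n - \operatorname{wt}(a)$:

```python
def orbits(A, G):
    """Partition the alphabet A into orbits of the group G acting on A."""
    seen, out = set(), []
    for a in A:
        if a in seen:
            continue
        orb = frozenset(g(a) for g in G)  # G is listed element by element
        seen |= orb
        out.append(orb)
    return out

def swc(word, orbs):
    """Symmetrized weight composition: number of coordinates in each orbit."""
    return {orb: sum(1 for c in word if c in orb) for orb in orbs}

# toy alphabet: A = Z/5 with G = {x -> x, x -> -x} (units +-1)
A = range(5)
G = [lambda x: x % 5, lambda x: (-x) % 5]
orbs = orbits(A, G)                        # {0}, {1,4}, {2,3}
w = (0, 1, 4, 2, 0, 3)
comp = swc(w, orbs)
wt = sum(1 for c in w if c != 0)           # Hamming weight of w
assert comp[frozenset({0})] == len(w) - wt  # swc_G(a)({0}) = n - wt(a)
```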
Consider a linear code $C \subseteq A^n$ and a map $f \in \operatorname{Hom}_R(C,A^n)$. The map $f$ is called an *$\operatorname{swc}_G$-isometry* if $f$ preserves $\operatorname{swc}_G$. We call $f$ a *Hamming isometry* if $f$ preserves the Hamming weight.
A *closure* of a subgroup $G \leq \operatorname{Aut}_R(A)$, denoted $\bar{G}$, is defined as, $$\bar{G}= \{ g \in \operatorname{Aut}_R(A) \mid \forall O \in A/G, g(O) = O\}\;.$$
Obviously, $G$ is a subgroup of $\bar{G}$. If $G = \bar{G}$, then the group is called *closed*. Also, $\operatorname{swc}_G = \operatorname{swc}_{\bar{G}}$, since both groups have the same orbits. For more on the closure of a group and its properties, see [@wood-aut-iso].
A map $h:A^n \rightarrow A^n$ is called *$G$-monomial* if there exist a permutation $\pi \in S_n$ and automorphisms $g_1, g_2, \dots, g_n \in \bar{G}$ such that for any $a \in A^n$, $$h\left( (a_1, a_2, \dots,a_n) \right) = \left( g_1(a_{\pi(1)}), g_2(a_{\pi(2)}), \dots, g_n(a_{\pi(n)})\right)\;.$$
It is not difficult to show that a map $f \in \operatorname{Hom}_R(A^n, A^n)$ is an $\operatorname{swc}_G$-isometry if and only if it is a $G$-monomial map.
We say that $A$ has an *extension property* with respect to $\operatorname{swc}_G$ if for any code $C \subseteq A^n$, each $\operatorname{swc}_G$-isometry $f \in \operatorname{Hom}_R(C,A^n)$ extends to a $G$-monomial map.
**Characters and the Fourier transform.** Let $A$ be a left $R$-module. The module $A$ can be seen as an abelian group, i.e., $A$ is a $\mathbb{Z}$-module. Consider the multiplicative group $\mathbb{C}^*$ as a $\mathbb{Z}$-module. Denote by $\hat{A} = \operatorname{Hom}_\mathbb{Z}(A, \mathbb{C}^*)$ the set of characters of $A$. The set $\hat{A}$ has a natural structure of a right $R$-module, see [@greferath] (Section 2.2).
Let $W$ be a left $R$-module. The Fourier transform of a map $f: W \rightarrow \mathbb{C}$ is a map $\mathcal{F}(f): \hat{W} \rightarrow \mathbb{C}$, defined as $$\mathcal{F}(f)(\chi) = \sum_{w \in W} f(w)\chi(w)\;.$$ Recall that the indicator function of a subset $Y$ of a set $X$ is a map ${\mathbbm{1}}_Y: X \rightarrow \{0,1\}$, such that ${\mathbbm{1}}_Y(x) = 1$ if $x \in Y$ and ${\mathbbm{1}}_Y(x) = 0$ otherwise. For a submodule $V \subseteq W$, $$\mathcal{F}({\mathbbm{1}}_V) = {|{V}|} {\mathbbm{1}}_{V^\perp}\;,$$ where the dual module $V^\perp \subseteq \hat{W}$ is defined as $$V^\perp = \{ \chi \in \hat{W} \mid \forall v \in V, \chi(v) = 1 \}\;.$$ Note that the Fourier transform is invertible, $V^{\perp\perp} \cong V$, see [@greferath] (Section 2.2), and it is true that for any $R$-submodules $V, U \subseteq W$, $(V \cap U)^\perp = V^\perp + U^\perp$.
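The identity $\mathcal{F}({\mathbbm{1}}_V) = |V|\,{\mathbbm{1}}_{V^\perp}$ can be verified numerically in the simplest case of the abelian group $W = \mathbb{Z}/4$ with subgroup $V = \{0,2\}$, identifying $\hat{W}$ with $\mathbb{Z}/4$ via $\chi_k(w) = e^{2\pi i k w/4}$. This is a toy $\mathbb{Z}$-module example, not the matrix-module setting used later:

```python
import cmath

n = 4                        # W = Z/n as an abelian group
V = {0, 2}                   # subgroup of W

def chi(k, w):
    """Character chi_k of Z/n: w -> exp(2*pi*i*k*w/n)."""
    return cmath.exp(2j * cmath.pi * k * w / n)

# Fourier transform of the indicator of V, evaluated at each chi_k
F = [sum(chi(k, w) for w in V) for k in range(n)]

# dual subgroup V^perp = {k : chi_k(v) = 1 for all v in V}
Vperp = {k for k in range(n) if all(abs(chi(k, v) - 1) < 1e-9 for v in V)}

for k in range(n):
    expected = len(V) if k in Vperp else 0.0
    assert abs(F[k] - expected) < 1e-9   # F(1_V) = |V| * 1_{V^perp}
print(sorted(Vperp))  # [0, 2]
```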
Extension criterion {#sec:ext-criterium}
===================
Let $W$ be a left $R$-module isomorphic to $C$. Let $\lambda \in \operatorname{Hom}_R(W, {A^n})$ be a map such that $\lambda(W) = C$. Write the map $\lambda$ in the form $\lambda = (\lambda_1,\dots, \lambda_n)$, where $\lambda_i \in \operatorname{Hom}_R(W,{A})$ is the projection onto the $i$th coordinate, for $i \in {\{1,\dots,n\}}$. Let $f: C \rightarrow A^n$ be a homomorphism of left $R$-modules. Define $\mu = f\lambda \in \operatorname{Hom}_R(W,A^n)$.
$$\begin{array}{ccc}
W & \stackrel{\lambda}{\longrightarrow} & C \\
 & \underset{\mu}{\searrow} & \downarrow f \\
 & & A^n
\end{array}$$
An $R$-module $A$ is called *$G$-pseudo-injective*, if for any submodule $B \subseteq A$, each injective map $f \in \operatorname{Hom}_R(B,A)$, such that for any $O \in A/G$, $f(O \cap B) \subseteq O$, extends to an element of $\bar{G}$.
\[thm-isometry-criterium\] The map $f \in \operatorname{Hom}_R (C,A^n)$ is an $\operatorname{swc}_G$-isometry if and only if for any $O \in A/G$, the following equality holds, $$\label{eq-main-space-equation}
\sum_{i=1}^n {\mathbbm{1}}_{\lambda_i^{-1}(O)} = \sum_{i=1}^n {\mathbbm{1}}_{\mu_i^{-1}(O)}\;.$$ If $f$ extends to a $G$-monomial map, then there exists a permutation $\pi \in S_n$ such that for each $O \in A/G$ the equality holds, $$\label{eq-condition}
\lambda_i^{-1}(O) = \mu_{\pi(i)}^{-1}(O)\;.$$ If $A$ is $G$-pseudo-injective and \[eq-condition\] holds, then $f$ extends to a $G$-monomial map.
For any $w \in W$, $O \in A/G$, $$\operatorname{swc}(\lambda(w))(O) = \sum_{i=1}^n {\mathbbm{1}}_{O}(\lambda_i(w)) = \sum_{i=1}^n {\mathbbm{1}}_{\lambda_i^{-1}(O)}(w)\;.$$ Therefore, the map $f$ is an $\operatorname{swc}_G$-isometry if and only if \[eq-main-space-equation\] holds.
If $f$ is extendable to a $G$-monomial map with a permutation $\pi \in S_n$ and automorphisms $g_1, \dots, g_n \in \bar{G}$, then for all $i \in {\{1,\dots,n\}}$, $\mu_{\pi(i)} = g_i \lambda_i$. Hence, for all $O \in A/G$, $\mu_{\pi(i)}^{-1}(O) = \lambda_i^{-1} (g_i^{-1}(O)) = \lambda_i^{-1} (O)$.
We prove the last part. Fix $i \in {\{1,\dots,n\}}$. From \[eq-condition\] applied to the orbit $\{0\}$, $\operatorname{Ker}\lambda_i = \operatorname{Ker}\mu_{\pi(i)} = N \subseteq W$. Consider the injective maps $\bar{\lambda}_i, \bar{\mu}_{\pi(i)} : W/N \rightarrow A$ such that $\bar{\lambda}_i(\bar{w}) = \lambda_i(w)$ and $\bar{\mu}_{\pi(i)}(\bar{w}) = \mu_{\pi(i)}(w)$ for all $w \in W$, where $\bar{w} = w + N$.
One can verify that for all $O \in A/G$, $\bar{\lambda}_i^{-1}(O) = \bar{\mu}_{\pi(i)}^{-1}(O)$. Then, for the injective map $h_i = \bar{\mu}_{\pi(i)}\bar{\lambda}_i^{-1} \in \operatorname{Hom}_R(\bar{\lambda}_i(W/N), A)$, it is true that $h_i(O \cap \bar{\lambda}_i(W/N)) \subseteq O$, for all $O \in A/G$. Since $A$ is $G$-pseudo-injective, there exists an automorphism $g_i \in \bar{G}$ such that $g_i = h_i$ on $\bar{\lambda}_i(W/N) = \lambda_i(W) \subseteq A$. It is easy to check that $\mu_{\pi(i)} = g_i \lambda_i$. Hence $f$ extends to a $G$-monomial map.
A module $A$ is $G$-pseudo-injective if and only if for any code $C \subset A^1$, each $\operatorname{swc}_G$-isometry $f \in \operatorname{Hom}_R(C,A)$ extends to a $G$-monomial map.
We prove the contrapositive. By definition, $A$ is not $G$-pseudo-injective if there exist a module $C \subseteq A$ and an injective map $f \in \operatorname{Hom}_R(C,A)$, such that for each $O \in A/G$, $f(O \cap C) \subseteq O$, but $f$ does not extend to an automorphism $g \in \bar{G}$. Equivalently, $\operatorname{swc}_G(x) = \operatorname{swc}_G(f(x))$ for all $x \in C$, yet $f$ does not extend to a $G$-monomial map.
A module $A$ is called *pseudo-injective*, if for any module $B \subseteq A$, each injective map $f \in \operatorname{Hom}_R(B,A)$ extends to an endomorphism in $\operatorname{Hom}_R(A,A)$. In [@dinh-lopez-1] and [@wood-foundations] the authors used the property of pseudo-injectivity to describe the extension property for the Hamming weight. They showed that an alphabet is not pseudo-injective if and only if there exists a linear code $C \subset A^1$ with an unextendable Hamming isometry.
Not all modules are $G$-pseudo-injective. It is even true that some pseudo-injective modules fail to be $G$-pseudo-injective for some $G \leq \operatorname{Aut}_R(A)$. In a future paper we will give a description of $G$-pseudo-injectivity for finite vector spaces. It turns out that, despite the fact that vector spaces are pseudo-injective, almost all vector spaces, except for a few families, are not $G$-pseudo-injective for some $G$.
Matrix module alphabet {#sec:mm}
======================
Let $R = \operatorname{M}_{k\times k} (\mathbb{F}_q)$ be the ring of $k \times k$ matrices over the finite field $\mathbb{F}_q$, where $k$ is a positive integer and $q$ is a prime power. It is proved in [@lang p. 656] that each left (right) $R$-module $U$ is isomorphic to $\operatorname{M}_{k \times t}(\mathbb{F}_q)$ (respectively $\operatorname{M}_{t \times k}(\mathbb{F}_q)$), for some nonnegative integer $t$. Call the integer $t$ the *dimension* of $U$ and denote $\dim U = t$.
Let $m$ be a positive integer, $m > k$. Let $M$ be an $m$-dimensional left (right) $R$-module. Let ${\mathcal{L}}(M)$ be the set of all $R$-submodules in $M$. Consider a poset $({\mathcal{L}}(M), \subseteq)$ and define a map $$E : \operatorname{F}({\mathcal{L}}(M), {\mathbb{Q}}) \rightarrow \operatorname{F}(M, {\mathbb{Q}})\;,\quad \eta \mapsto \sum_{U \in {\mathcal{L}}(M)} \eta(U){\mathbbm{1}}_{U}\;.$$ The set $\operatorname{F}({\mathcal{L}}(M),{\mathbb{Q}})$ has the structure of an ${|{{\mathcal{L}}(M)}|}$-dimensional vector space over the field ${\mathbb{Q}}$. In the same way, $\operatorname{F}(M, {\mathbb{Q}})$ is an ${|{M}|}$-dimensional ${\mathbb{Q}}$-linear vector space. The map $E$ is a ${\mathbb{Q}}$-linear homomorphism. Similar notions of a *multiplicity function* and the “*$W$ function*” appear in [@wood-aut-iso] and [@wood-foundations].
Let $V$ be a submodule of $M$. Consider a map in $\operatorname{F}({\mathcal{L}}(M),{\mathbb{Q}})$, $$\eta_V(U) = \left\lbrace\begin{array}{ll}
(-1)^{\dim V - \dim U} q^{\binom{\dim V - \dim U}{2}}& \text{, if } U \subseteq V;\\
0& \text{, otherwise}.
\end{array}
\right.$$ In fact, $\eta_V(U) = \mu(U,V)$, where $\mu$ is the Möbius function of the poset $({\mathcal{L}}(M),\subseteq)$, see [@wood-foundations] (Remark 4.1).
\[lemma:eta-v\] For any $V \subseteq M$, if $\dim V > k$, then $E(\eta_V) = 0$.
First, note that for a submodule $U \subseteq M$ and an element $x \in M$ the inclusion $x \in U$ holds if and only if for the cyclic module $xR$ the inclusion $xR \subseteq U$ holds. Calculate, for any $x \in M$, $$\begin{aligned}
E(\eta_V)(x) = \sum_{U \in {\mathcal{L}}(M)} \eta_V(U) {\mathbbm{1}}_U(x) = \sum_{U \subseteq V} \mu(U,V) {\mathbbm{1}}_U(x) = \sum_{xR \subseteq U \subseteq V} \mu(U,V)\;.\end{aligned}$$ From the duality of the Möbius function, see [@rota] (Proposition 3), the last sum is equal to $\sum_{V \supseteq U \supseteq xR} \mu^*(V,U)$, where $\mu^*$ is the Möbius function of the poset $({\mathcal{L}}(M), \supseteq)$. Since $\dim xR \leq \dim R_R = k < \dim V$, $V \supset xR$. From the definition of the Möbius function the resulting sum equals $0$.
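In the field case $k = 1$, where $R = \mathbb{F}_q$ and submodules are subspaces, the vanishing $E(\eta_V) = 0$ for $\dim V > k$ can be checked by brute force. The Python sketch below works over $\mathbb{F}_2$ and encodes vectors as bitmasks, an implementation choice:

```python
from itertools import combinations

def span(vectors):
    """Span over F_2 of bitmask-encoded vectors (XOR is vector addition)."""
    s = {0}
    for v in vectors:
        s |= {x ^ v for x in s}
    return frozenset(s)

def subspaces(n):
    """All F_2-subspaces of F_2^n, each a frozenset of n-bit integers."""
    out = {frozenset({0})}
    for r in range(1, n + 1):
        for basis in combinations(range(1, 2 ** n), r):
            out.add(span(basis))
    return out

def dim(U):
    return len(U).bit_length() - 1        # |U| = 2^dim(U)

n = 3
subs = subspaces(n)
V = max(subs, key=len)                    # V = F_2^3, of dimension 3 > k = 1
for x in range(2 ** n):                   # check E(eta_V)(x) = 0 pointwise
    total = 0
    for U in subs:
        if U <= V and x in U:
            d = dim(V) - dim(U)
            total += (-1) ** d * 2 ** (d * (d - 1) // 2)  # eta_V(U)
    assert total == 0
print("E(eta_V) = 0 on all of F_2^3")
```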
Let $\ell$ be a positive integer, $m \geq \ell > k$. Fix a submodule $X$ in $M$ of dimension $m - \ell$. Define two subsets of ${\mathcal{L}}(M)$, $$S_{=\ell} = \{ V \in {\mathcal{L}}(M) \mid \dim V = \ell, V \cap X = \{0\} \}\;,$$ $$S_{<\ell} = \{ V \in {\mathcal{L}}(M) \mid \dim V < \ell, V \cap X = \{0\} \}\;.$$ Consider a map, $$E': \operatorname{F}(S_{=\ell}, {\mathbb{Q}}) \rightarrow \operatorname{F}(S_{<\ell}, {\mathbb{Q}}), \quad \xi \mapsto \sum_{V \in S_{=\ell}} \xi(V) \eta_V\;.$$ The map $E'$ is a ${\mathbb{Q}}$-linear homomorphism of ${\mathbb{Q}}$-linear vector spaces.
\[lemma:number-nonint\] Let $A$ be an $a$-dimensional $\mathbb{F}_q$-linear vector space and let $B$ be a $b$-dimensional subspace. Then, $${|{ \{ C \subseteq A \mid C \cap B = \{0\}, \dim_{\mathbb{F}_q} C = c\} }|} = q^{bc} \binom{a-b}{c}_q\;.$$
See [@dist-reg-graphs] (Lemma 9.3.2).
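The counting formula can be checked by brute force over $\mathbb{F}_2$; the parameters $a = 4$, $b = 2$, $c = 1$ below are an arbitrary illustrative choice:

```python
from itertools import combinations

def span(vectors):
    """Span over F_2 of bitmask-encoded vectors (XOR is vector addition)."""
    s = {0}
    for v in vectors:
        s |= {x ^ v for x in s}
    return frozenset(s)

def subspaces_of_dim(n, c):
    """All c-dimensional F_2-subspaces of F_2^n (bitmask encoding)."""
    out = set()
    for basis in combinations(range(1, 2 ** n), c):
        S = span(basis)
        if len(S) == 2 ** c:              # keep only independent bases
            out.add(S)
    return out

def qbinom(a, b, q=2):
    """Gaussian binomial coefficient [a choose b]_q."""
    num = den = 1
    for i in range(b):
        num *= q ** (a - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

a, b, c, q = 4, 2, 1, 2
B = span([0b0001, 0b0010])                # a fixed b-dimensional subspace
count = sum(1 for C in subspaces_of_dim(a, c) if C & B == frozenset({0}))
assert count == q ** (b * c) * qbinom(a - b, c)   # 12 = 2^2 * [2 choose 1]_2
print(count)  # 12
```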
\[lemma:qbin-ineq\] For any positive integer $t$ there exists an integer $x > t$ such that $$\sum_{i=0}^{t-1} \binom{x}{i}_q < q^{t(x-t)}\;.$$
If $t = 1$ the inequality holds for any $x > 1$. Let $t \geq 2$ and consider only $x \geq 2t$. Then, $$\begin{aligned}
\sum_{i=0}^{t-1} \binom{x}{i}_q &< t \binom{x}{t-1}_q = t \frac{(q^x - 1) \dots (q^{x - t + 2} - 1)}{(q^{t-1} - 1) \dots (q - 1)}
\\&< c(t) q^x q^{x-1}\dots q^{x-t + 2}
= c(t)q^{\frac{(t-1)(2x - t + 2)}{2}} = q^{(t-1)x + c'(t)} \;,\end{aligned}$$ for some constants $c(t), c'(t)$ that depend on $t$. The inequality $q^{(t-1)x + c'(t)} \leq q^{t(x-t)}$ holds for any $x \geq t^2 + c'(t)$. Thus we can take $x$ large enough to be greater than $2t$ and to satisfy the inequality.
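The inequality can also be explored numerically. The sketch below finds, for each small $t$, the least $x > t$ for which it holds with $q = 2$:

```python
def qbinom(a, b, q):
    """Gaussian binomial coefficient [a choose b]_q."""
    num = den = 1
    for i in range(b):
        num *= q ** (a - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

def inequality_holds(t, x, q=2):
    """Does sum_{i=0}^{t-1} [x choose i]_q < q^(t(x-t)) hold?"""
    return sum(qbinom(x, i, q) for i in range(t)) < q ** (t * (x - t))

# least admissible x for each small t; q=2 gives (t, x) = (1, 2), (2, 5), (3, 7), ...
for t in range(1, 6):
    x = next(x for x in range(t + 1, 100) if inequality_holds(t, x))
    print(t, x)
```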
\[lemma:nonzero-kernel\] There exists $m$ such that $\operatorname{Ker}E' \neq \{0\}$.
For any positive integer $m$ there exists an isomorphism between the poset of subspaces of an $m$-dimensional vector space and the poset of submodules of an $m$-dimensional $R$-module, see [@yaraneri]. Therefore, we can use \[lemma:number-nonint\] for $R$-modules.
Calculate the cardinalities of the sets. From \[lemma:number-nonint\], $${|{S_{=\ell}}|} = q^{\ell(m-\ell)} \binom{\ell}{\ell}_q = q^{\ell(m-\ell)}\;.$$ Since $S_{<\ell} \subseteq \{ U \in {\mathcal{L}}(M) \mid \dim U < \ell \}$, $${|{S_{<\ell}}|} \leq \sum_{i=0}^{\ell - 1} \binom{m}{i}_q\;.$$ From \[lemma:qbin-ineq\], there exists $m$ such that ${|{S_{=\ell}}|} > {|{S_{<\ell}}|}$. Therefore $$\dim_{{\mathbb{Q}}} \operatorname{Ker}E' \geq \dim_{{\mathbb{Q}}} \operatorname{F}(S_{=\ell}, {\mathbb{Q}}) - \dim_{{\mathbb{Q}}} \operatorname{F}(S_{<\ell}, {\mathbb{Q}}) = {|{S_{=\ell}}|} - {|{S_{<\ell}}|} > 0\;.$$
From now on we assume that $\dim M = m$ is such that $\operatorname{Ker}E' \neq \{0\}$; this is possible due to \[lemma:nonzero-kernel\]. Let $0 \neq \xi \in \operatorname{Ker}E' \subseteq \operatorname{F}(S_{=\ell}, {\mathbb{Q}})$. Define a map, $$\eta(V) = \left\lbrace\begin{array}{ll}
\xi(V)& \text{, if } V \in S_{=\ell};\\
0& \text{, otherwise}.
\end{array}
\right.$$
\[lemma:eta-zero\] The equality $E(\eta) = 0$ holds.
From \[lemma:eta-v\], for any $V \in {\mathcal{L}}(M)$ of dimension $\ell > k$, $$0 = E(\eta_V) = \sum_{U \in {\mathcal{L}}(M)} \eta_V(U) {\mathbbm{1}}_U = {\mathbbm{1}}_V + \sum_{U \in S_{<\ell}} \eta_V(U) {\mathbbm{1}}_U\;.$$ Then, $$\begin{aligned}
E(\eta) =& \sum_{V \in {\mathcal{L}}(M)} \eta(V) {\mathbbm{1}}_V = \sum_{V \in S_{=\ell}} \xi(V) {\mathbbm{1}}_V = - \sum_{V \in S_{=\ell}}\xi(V) \sum_{U \in S_{<\ell}} \eta_V(U) {\mathbbm{1}}_U
\\=& - \sum_{U \in S_{<\ell}} \left(\sum_{V \in S_{=\ell}}\xi(V) \eta_V\right)(U) {\mathbbm{1}}_U = - \sum_{U \in S_{<\ell}} E'(\xi)(U) {\mathbbm{1}}_U = 0\;.\end{aligned}$$
Note that we may choose $\xi \in \operatorname{Ker}E'$ to have integer values, by multiplying by a suitable scalar $\lambda \in {\mathbb{Q}}$; we therefore assume that $\eta$ also has integer values.
The module of characters $W = \hat{M}$ is a right (left) $R$-module of dimension $m$. The poset $({\mathcal{L}}(M),\subseteq)$ is isomorphic to the poset $({\mathcal{L}}(W),\subseteq)$. For any module $V \in {\mathcal{L}}(W)$ the dual module $V^\perp$ is in ${\mathcal{L}}(M)$ and vice versa. Define a dual map $\eta^{\perp} \in \operatorname{F}({\mathcal{L}}(W), {\mathbb{Q}})$ as follows, for any $V \in {\mathcal{L}}(W)$, $$\eta^{\perp}(V) = \eta(V^\perp)\;.$$ Note that the map $\eta^\perp$ has only integer values.
\[lemma:etaperp-zero\] The equality $E(\eta^{\perp}) = 0$ holds.
The Fourier transform is a ${\mathbb{Q}}$-linear map. Note that since for all $V \in S_{=\ell}$, $\dim V = \ell$, ${|{V}|} = q^{k\ell}$ and ${|{V^\perp}|} = q^{(m - \ell) k}$. Calculate, $$\begin{aligned}
\mathcal{F} (E(\eta^{\perp})) &= \sum_{V \in {\mathcal{L}}(W)} \eta^{\perp}(V)\mathcal{F}({\mathbbm{1}}_V) = \sum_{V^{\perp} \in S_{=\ell}} \eta(V^\perp) {|{V}|}{\mathbbm{1}}_{V^{\perp}}
\\&= \sum_{U \in S_{=\ell}} {|{U^{\perp}}|} \eta(U){\mathbbm{1}}_{U} = q^{(m - \ell) k}\sum_{U \in S_{=\ell}} \eta(U){\mathbbm{1}}_{U}
\\&= q^{(m - \ell) k} \sum_{U \in {\mathcal{L}}(M)} \eta(U){\mathbbm{1}}_{U} = q^{(m - \ell) k} E(\eta)\;.\end{aligned}$$ From \[lemma:eta-zero\], $E(\eta) = 0$, so $\mathcal{F} (E(\eta^{\perp})) = 0$. The Fourier transform is invertible, thus $E(\eta^{\perp}) = \mathcal{F}^{-1}(0) = 0$.
Define $S^{\perp} = \{ V \in {\mathcal{L}}(W) \mid V^\perp \in S_{=\ell} \}$. For each $V \in S^{\perp}$, by duality, $V + X^\perp = W$. Recall $\dim V = m - \ell$ and $\dim X^\perp = m - (m - \ell) = \ell$, and therefore $V \cap X^{\perp} = \{0\}$. Alternatively, $S^{\perp}$ can be defined as, $$S^{\perp} = \{ V \in {\mathcal{L}}(W) \mid V \cap X^\perp = \{0\}, \dim V = m - \ell \}\;.$$ Note that $$\sum_{V \in S^{\perp}} \eta^{\perp}(V) = \sum_{V \in {\mathcal{L}}(W)} \eta^{\perp}(V){\mathbbm{1}}_V(0) = E(\eta^{\perp})(0) = 0\;.$$
\[thm:swc\_g\] Let $R = \operatorname{M}_{k\times k} (\mathbb{F}_q)$ and let $A$ be an $\ell$-dimensional $R$-module, where $\ell > k$. Then there exist a positive integer $n$ and an $R$-linear code $C \subset A^n$ with an unextendable $\operatorname{swc}_{G}$-isometry $f \in \operatorname{Hom}_R(C, A^n)$, for any $G \leq \operatorname{Aut}_R(A)$.
Use the notation of this section. Since $\dim X^{\perp} = \dim A$, there is a module isomorphism $\psi: X^{\perp} \rightarrow A$. Define the length $n$ of the code as the sum of the positive values $\eta^\perp(U)$, $U \in S^{\perp}$. From the calculations above, this equals minus the sum of the negative values $\eta^\perp(U)$, $U \in S^{\perp}$.
For any $i \in {\{1,\dots,n\}}$, let us define $\lambda_i \in \operatorname{Hom}_R(W,A)$. Choose a submodule $V_i \subseteq W$ such that $\eta^{\perp}(V_i) > 0$ (each submodule $V$ with $\eta^{\perp}(V) > 0$ is chosen exactly $\eta^{\perp}(V)$ times). Since $V_i \in S^{\perp}$, $V_i \cap X^\perp = \{0\}$, $\dim V_i = m - \ell$ and $\dim X^\perp = \ell$, and therefore $W = V_i \oplus X^{\perp}$. Define $$\lambda_i: W = V_i \oplus X^{\perp} \rightarrow A, \quad (v, x) \mapsto \psi(x)\;.$$ Then $\operatorname{Ker}\lambda_i = V_i \subseteq W$ and for any $a \in A$, $\lambda_i^{-1}(a) = \psi^{-1}(a) + V_i$. In the same way, for any $i \in {\{1,\dots,n\}}$, define maps $\mu_i: W \rightarrow A$ for the modules $U_i$ with $\eta^{\perp}(U_i) < 0$ (each such $U$ chosen exactly $-\eta^{\perp}(U)$ times).
We check that \[eq-main-space-equation\] is satisfied for the trivial group $\{e\} < \operatorname{Aut}_R(A)$. The trivial group has one-point orbits in $A$. Calculate, using the description of the fibres of $\lambda_i$ and $\mu_i$ above, for any $a \in A$ and any $w \in W$, $$\begin{aligned}
\sum_{i = 1}^n ({\mathbbm{1}}_{\lambda_i^{-1}(\{a\})} - {\mathbbm{1}}_{\mu_i^{-1}(\{a\})})(w) &= \sum_{U \in S^{\perp}} \eta^{\perp}(U) {\mathbbm{1}}_{\psi^{-1}(a) + U}(w)
\\= \sum_{U \in S^{\perp}} \eta^{\perp}(U) {\mathbbm{1}}_{U}(w - \psi^{-1}(a)) &= E(\eta^\perp)(w - \psi^{-1}(a)) = 0\;.\end{aligned}$$ Also, it is easy to see that for any $i,j \in {\{1,\dots,n\}}$, $\operatorname{Ker}\lambda_i \neq \operatorname{Ker}\mu_j$.
Since \[eq-main-space-equation\] holds for the orbit $\{0\}$, it is true that $\operatorname{Ker}\lambda = \bigcap_{i=1}^n \operatorname{Ker}\lambda_i = \bigcap_{i=1}^n \operatorname{Ker}\mu_i = \operatorname{Ker}\mu = N \subset W$. Let $\bar{\lambda}, \bar{\mu}$ be two canonical injective maps $\bar{\lambda}, \bar{\mu} \in \operatorname{Hom}_R(W/N, A^n)$ such that $\bar{\lambda}(\bar{w}) = \lambda(w)$ and $\bar{\mu}(\bar{w}) = \mu(w)$ for all $w \in W$, where $\bar{w} = w + N$.
Use the notation of Section \[sec:ext-criterium\]. Define a code $C \subset A^n$ as the image $\lambda(W)$. Define a map $f \in \operatorname{Hom}_R(C,A^n)$ as $f = \bar{\mu}\bar{\lambda}^{-1}$. It is true that $f\lambda = \mu$. From \[thm-isometry-criterium\], $f$ is an $\operatorname{swc}_{\{e\}}$-isometry. Therefore, $f$ is an $\operatorname{swc}_{G}$-isometry. However, $f$ does not extend even to an $\operatorname{Aut}_R(A)$-monomial map, since \[eq-condition\] does not hold for the orbit $\{0\}$.
Main result
===========
Let $R$ be a finite ring with identity. We recall several results from [@wood-foundations] in order to generalize to the case of an arbitrary module alphabet. For the finite ring $R$ there is an isomorphism, $$R/\operatorname{rad}(R) \cong \operatorname{M}_{r_1 \times r_1}(\mathbb{F}_{q_1}) \oplus \dots \oplus \operatorname{M}_{r_n \times r_n}(\mathbb{F}_{q_n})\;,$$ for nonnegative integers $n, r_1, \dots, r_n$ and prime powers $q_1, \dots, q_n$, see [@wood-foundations] and [@lam] (Theorem 3.5 and Theorem 13.1).
Consider a finite left $R$-module $A$. Since $\operatorname{soc}(A)$ is a sum of simple $R$-modules, there exist nonnegative integers $s_1, \dots, s_n$ such that, $$\operatorname{soc}(A) \cong s_1 T_1 \oplus \dots \oplus s_n T_n\;,$$ where $T_i \cong \operatorname{M}_{r_i \times 1}(\mathbb{F}_{q_i})$ is a simple $\operatorname{M}_{r_i \times r_i}(\mathbb{F}_{q_i})$-module, $i \in {\{1,\dots,n\}}$.
\[thm:noncyclic-socle-property\] The socle $\operatorname{soc}(A)$ is cyclic if and only if $s_i \leq r_i$, for all $i \in {\{1,\dots,n\}}$.
In [@elgarem-megahed-wood] (Theorem 3) the authors proved the following.
\[thm:elgarem-wood\] Let $R$ be a finite ring with identity. Let $A$ be a finite $R$-module with a cyclic socle. The alphabet $A$ has an extension property with respect to the symmetrized weight composition $\operatorname{swc}_G$, built on any subgroup $G \leq \operatorname{Aut}_R(A)$.
We prove the complementary part.
\[thm:noncyclic-socle-swc\] Let $R$ be a finite ring with identity. Let $A$ be a finite $R$-module with a noncyclic socle. The alphabet $A$ does not have an extension property with respect to the symmetrized weight composition $\operatorname{swc}_G$, built on any subgroup $G \leq \operatorname{Aut}_R(A)$.
Our proof follows the idea of [@wood-foundations] (Theorem 6.4). By , since $\operatorname{soc}(A)$ is not cyclic, there exists an index $i$ with $s_i > r_i$. Of course, $s_i T_i \subseteq \operatorname{soc}(A) \subseteq A$. Recall that $s_i T_i$ is the pullback to $R$ of the $\operatorname{M}_{r_i \times r_i}(\mathbb{F}_q)$-module $B = \operatorname{M}_{r_i \times s_i}(\mathbb{F}_q)$. Denote the ring $\operatorname{M}_{r_i \times r_i}(\mathbb{F}_q)$ by $R'$.
Because $r_i < s_i$, implies the existence of an $R'$-linear code $C \subset B^n$ and an $\operatorname{swc}_{\{e\}}$-isometry $f \in \operatorname{Hom}_{R'} (C, B^n)$ that does not extend to an $\operatorname{Aut}_{R'}(B)$-monomial map.
Recall the notation of . Denote $V = \operatorname{Ker}\lambda_1$. Define a subcode $C' = \lambda(V) \subseteq \lambda(W) = C$. The first column of $C'$ is a zero-column. Assume that the code $f(C')$ has a zero-column. Then there exists $i \in {\{1,\dots,n\}}$ such that $V \subseteq \operatorname{Ker}\mu_i$. The code $C$ is constructed from the map $\eta^\perp$, defined in , so $\dim V = \dim A = \dim \operatorname{Ker}\mu_i$ for all $i \in {\{1,\dots,n\}}$. Also, $\operatorname{Ker}\lambda_i \neq \operatorname{Ker}\mu_j$ for all $i,j \in {\{1,\dots,n\}}$. Therefore it is impossible to have $V \subseteq \operatorname{Ker}\mu_i$ for some $i \in {\{1,\dots,n\}}$ and thus $f(C')$ does not have a zero-column.
The projection mappings $R \rightarrow R/\operatorname{rad}(R) \rightarrow R'$ allow us to consider $C'$ and $f$ as an $R$-module and an $R$-linear homomorphism, respectively.
We have $C' \subset (s_i T_i)^n \subseteq \operatorname{soc}(A)^n \subset A^n$ as $R$-modules. The map $f$ is thus an $\operatorname{swc}_{\{e\}}$-isometry of an $R$-linear code over $A$. Since $\{e\} \leq G$, obviously, $f$ is an $\operatorname{swc}_{G}$-isometry. The codes $C'$ and $f(C')$ have different numbers of zero columns, and hence $f$ does not extend to an $\operatorname{Aut}_R(A)$-monomial map.
\[thm:cor1\] Let $R$ be a finite ring with identity. Let $A$ be a finite $R$-module. Let $G$ be a subgroup of $\operatorname{Aut}_R(A)$. The alphabet $A$ has an extension property with respect to $\operatorname{swc}_G$ if and only if $\operatorname{soc}(A)$ is cyclic.
See and .
Let $\omega: A \rightarrow \mathbb{C}$ be a function. For a positive integer $n$, extend it to a weight function $\omega: A^n \rightarrow \mathbb{C}$ by $\omega(a) = \sum_{i=1}^n \omega(a_i)$. Consider a code $C \subseteq A^n$. We say that a map $f \in \operatorname{Hom}_R(C,A^n)$ is *$\omega$-preserving* if $\omega(a) = \omega(f(a))$ for every $a \in C$.
Let $U(\omega) = \{ g \in \operatorname{Aut}_R(A) \mid \forall a\in A,\ \omega(g(a))= \omega(a) \}$ be the symmetry group of the weight. An alphabet $A$ is said to have the extension property with respect to the weight function $\omega$ if for any linear code $C \subseteq A^n$, every $\omega$-preserving map $f \in \operatorname{Hom}_R(C,A^n)$ extends to a $U(\omega)$-monomial map.
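As a toy illustration of these definitions (not taken from the text), the sketch below takes the alphabet $A = \mathbb{Z}/4\mathbb{Z}$ with the Lee weight, extends $\omega$ to $A^2$ coordinatewise, and checks that a sample $U(\omega)$-monomial map is $\omega$-preserving. The alphabet, weight, and map are all hypothetical choices made for illustration.

```python
# Toy setup (hypothetical): alphabet A = Z/4Z, omega = Lee weight.

def lee_weight(a):
    """Lee weight on Z/4Z: min(a, 4 - a)."""
    return min(a % 4, 4 - (a % 4))

def omega(word):
    """Extension of the weight to A^n by summing coordinatewise."""
    return sum(lee_weight(a) for a in word)

# U(omega) for the Lee weight on Z/4Z contains the negation automorphism a -> -a,
# since lee_weight(-a) = lee_weight(a).
negate = lambda a: (-a) % 4

def monomial_map(word):
    """A sample U(omega)-monomial map on A^2: swap coordinates, negate the first."""
    return (negate(word[1]), word[0])

# The map preserves omega on every word of A^2, so it is omega-preserving.
words = [(a, b) for a in range(4) for b in range(4)]
assert all(omega(w) == omega(monomial_map(w)) for w in words)
print("monomial map preserves omega on all", len(words), "words")
```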
\[thm:anyweight-noncyclic-socle\] Let $R$ be a finite ring with identity. Let $A$ be a finite $R$-module with a noncyclic socle. Let $\omega: A \rightarrow \mathbb{C}$ be an arbitrary weight function. The alphabet $A$ does not have the extension property with respect to the weight $\omega$.
Let $C \subseteq A^n$ be a code and let $f \in \operatorname{Hom}_R(C, A^n)$ be an unextendable $\operatorname{swc}_{U(\omega)}$-isometry, which exists by . The map $f$ is then $\omega$-preserving, and it does not extend to a $U(\omega)$-monomial map.
The length of the code with an unextendable isometry that is constructed in and later used in can be large. For the code constructed in we can give a lower bound $n \geq \prod_{i = 1}^{k} (1 + q^i)$, which is proved in [@d4]. In a future paper we will show that for the special case $k = 1$ there is an explicit construction for the code observed in ; the length of the resulting code is $q + 1$. Moreover, the resulting unextendable $\operatorname{swc}_G$-isometry can be taken to be an automorphism.
Unlike the case of the Hamming weight, for an arbitrary weight function it is not true that the cyclic socle and pseudo-injectivity conditions imply the extension property. For example, it is still an open question whether the extension property holds for the Lee weight and a cyclic group alphabet, see [@barra]. In recent work, Langevin [@langevin] proved the extension property for the Lee weight with an alphabet that is a cyclic group of prime order.
---
abstract: 'In this Letter, we report the observational constraints on the Hu-Sawicki $f(R)$ theory derived from weak lensing peak abundances, which are closely related to the mass function of massive halos. In comparison with studies using optical or x-ray clusters of galaxies, weak lensing peak analyses have the advantages of not relying on mass-baryonic observable calibrations. With observations from the Canada-France-Hawaii-Telescope Lensing Survey, our peak analyses give rise to a tight constraint on the model parameter $|f_{R0}|$ for $n=1$. The $95\%$ C.L. is $\log_{10}|f_{R0}| < -4.82$ given WMAP9 priors on $(\Omega_{\rm m}, A_{\rm s})$. With Planck15 priors, the corresponding result is $\log_{10}|f_{R0}| < -5.16$.'
author:
- Xiangkun Liu
- Baojiu Li
- 'Gong-Bo Zhao'
- 'Mu-Chen Chiu'
- Wei Fang
- Chuzhong Pan
- Qiao Wang
- Wei Du
- Shuo Yuan
- Liping Fu
- Zuhui Fan
bibliography:
- 'ms.bib'
title: 'Constraining $f(R)$ Gravity Theory Using Weak Lensing Peak Statistics from the Canada-France-Hawaii-Telescope Lensing Survey'
---
Introduction
============
While both are able to explain the observed late-time accelerating expansion of the universe [@Riess1998; @Perlmutter1999], modified gravity (MG) theories [e.g., @Koyama2008; @DurMaar2008; @JaKh2010] and dark energy models in general relativity (GR) [e.g., @Weinberg1989] lead to different formation and evolution of cosmic structures [e.g., @Schmidt2009; @Zhao2011a; @Zhao2011b; @Zhao2012; @Li2012a; @Li2012b; @Li2013; @Zhao2014; @Achitouv2015; @Joyce2015; @Chiu2015; @Stark2016]. Observations of large-scale structures are therefore critical in scrutinizing the underlying mechanism driving the global evolution of the universe and in revealing the fundamental law of gravity.
The $f(R)$ theory is a representative MG model, in which the integrand of the Einstein-Hilbert action is $R+f(R)$, where $f(R)$ is a function of the scalar curvature $R$ [@Sotiriou2010; @deFelice2010; @Zhao2011a]. By choosing $f(R)$ properly, such as the Hu-Sawicki model [@HuSaw2007 hereafter HS07], the theory can give rise to the late time cosmic acceleration without violating the gravity tests in the solar system and without affecting high redshift physics significantly. Matching the expansion history with that of the flat $\Lambda$CDM model with the matter density parameter $\Omega_{\rm m}$, an extra degree of freedom is $f_R=df/dR$. For HS07, $f_R\approx -n(c_1/c_2^2)[m^2/(-R)]^{n+1}$ (with the sign convention used in @Zhao2011a), and its current background value is $f_{R0}\approx -n(c_1/c_2^2)[3(1+4\Omega_{\Lambda}/\Omega_{\rm m})]^{-(n+1)}$. Here $m^2=H_0^2\Omega_{\rm m}$ with $H_0$ being the present Hubble constant, $c_1/c_2=6\Omega_\Lambda/\Omega_{\rm m}$, and $\Omega_\Lambda=1-\Omega_{\rm m}$. It satisfies the solar system tests for $n\ge 1$ [HS07, @Zhao2011a]. On the other hand, cosmic structures can be affected significantly. Thus independent observational studies of different scales are important in probing the nature of gravity [e.g., @Jain2013s; @Weinberg2013; @Jain2013; @Higuchi2016].
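To make the background relations above concrete, here is an illustrative arithmetic sketch (not from the Letter) that inverts the quoted expression for $f_{R0}$ to obtain $c_1/c_2^2$ for a chosen $|f_{R0}|$; the parameter values assume $n=1$ and the $\Omega_{\rm m}$ used later for the mock simulations.

```python
# Illustrative sketch: HS07 background relations for n = 1.
# Omega_m matches the mock-simulation value quoted later in the text.
n = 1
Omega_m = 0.281
Omega_L = 1.0 - Omega_m

# Matching the LCDM expansion history fixes c1/c2 = 6 Omega_L / Omega_m.
c1_over_c2 = 6.0 * Omega_L / Omega_m

# Inverting f_R0 = -n (c1/c2^2) [3(1 + 4 Omega_L/Omega_m)]^{-(n+1)} gives
# c1/c2^2 for a chosen |f_R0|; here the F5 value |f_R0| = 1e-5.
f_R0_abs = 1e-5
c1_over_c2sq = f_R0_abs * (3.0 * (1.0 + 4.0 * Omega_L / Omega_m)) ** (n + 1) / n

print(c1_over_c2, c1_over_c2sq)
```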
On cosmological scales, there have been different observational analyses [e.g., @Oyaizu2008; @Raveri2014; @Planck15XIV]. Among them, studies of clusters of galaxies provide the most sensitive constraints [e.g., @Schmidt2009; @Ferraro2011; @Lombriser2012a; @Cataneo2015; @Wilcox2015; @Li2016] and reach a level of $\log_{10}|f_{R0}|< -4.8 $ ($95\%$ C.L.) [@Cataneo2015].
Weak lensing effects (WL) are a key cosmological probe [e.g., @BartSch2001; @Albrecht2006; @LSST2012; @Amendola2013; @Weinberg2013; @FuFan2014]. Cosmic shear correlation analyses have been incorporated to constrain gravity theories [e.g., @Harnoise2015]. WL peak statistics, particularly high peaks, possess the cosmological sensitivities of both WL effects and massive clusters, and provide an important complement to shear correlation studies [e.g., @White2002; @Hamana2004; @TangFan2005; @DietHart2010; @Fan2010; @Yang2011; @Hamana2012; @Liu2014; @Lin2015; @Liujia2015; @Liu2015]. In comparison with cluster studies that normally involve baryonic observables, WL peak analyses are advantageous because of the gravitational origin of WL effects.
In this Letter, we derive constraints on the HS07 model parameter $|f_{R0}|$ for $n=1$, for the first time, from WL peak abundances using WL data from the Canada-France-Hawaii-Telescope Lensing Survey (CFHTLenS) [@Erben2013]. We perform mock tests to validate our pipeline before applying it to the actual data analyses.
Observational data
==================
CFHTLenS covers a total survey area of $\sim 154\deg^2$ from 171 individual pointings distributed in four regions [@Erben2013]. We note that for cosmic shear correlation analyses, 129 pointings pass the systematic tests [@Heymans2012]. For the high peak abundances, our analyses find that using all the pointings does not introduce any notable bias compared to using only the fields that pass. We therefore keep the full data set here. The photometric redshift is estimated for each galaxy from the five-band $u^*g'r'i'z'$ observations [@Hildebrandt2012]. The forward-modelling LENSFIT pipeline is applied for the shape measurement [@Miller2013]. After masking out bright stars and faulty CCD rows across the entire survey, the effective survey area is $\sim 127\deg^2$. We select source galaxies with weight $w>0$, $\hbox{FITCLASS}=0$, $\hbox{MASK}\le 1$ and redshift in the range $z=[0.2, 1.3]$ for our weak lensing analyses [@Miller2013]. This selection results in a total of 5,596,690 source galaxies. Taking into account their weights, the effective number of galaxies is $\sim 4.5 \times 10^6$, corresponding to a number density of $\sim 10 \hbox{ arcmin}^{-2}$. By summing up the photo-z probability distributions of the source galaxies, we obtain the redshift distribution of our source sample, fitted by $p_z(z)=A{(z^a+z^b)}/{(z^c+d)}$ with $A=0.5514$, $a=0.7381$, $b=0.7403$, $c=6.0220$ and $d=0.6426$.
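The fitted source redshift distribution can be evaluated directly. The sketch below (illustrative only, not part of the survey pipeline) uses simple trapezoidal integration to estimate the mean redshift of the selected sample over $z=[0.2, 1.3]$.

```python
# Evaluate the CFHTLenS source redshift distribution
# p_z(z) = A (z^a + z^b) / (z^c + d) with the fitted coefficients quoted above.
A, a, b, c, d = 0.5514, 0.7381, 0.7403, 6.0220, 0.6426

def p_z(z):
    return A * (z**a + z**b) / (z**c + d)

# Mean redshift over the selected range z in [0.2, 1.3], by trapezoidal
# integration on a uniform grid (illustrative numerical check).
npts = 1000
dz = (1.3 - 0.2) / npts
zs = [0.2 + i * dz for i in range(npts + 1)]
w = [p_z(z) for z in zs]
norm = sum(0.5 * (w[i] + w[i + 1]) for i in range(npts)) * dz
zbar = sum(0.5 * (zs[i] * w[i] + zs[i + 1] * w[i + 1]) for i in range(npts)) * dz / norm
print(f"mean source redshift over [0.2, 1.3]: {zbar:.3f}")
```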
Peak analyses
=============
We perform WL peak analyses following the procedures described in detail in @Liu2015. The steps are briefly summarized here. (1) We calculate the smoothed shear field, properly taking into account the additive and multiplicative bias corrections. The convergence $\kappa$ map is then reconstructed for each individual $\sim 1\times 1\deg^2$ field using the nonlinear Kaiser-Squires method [e.g., @KS1993; @Bartel1995; @SK1996]. The corresponding smoothed filling-factor map is also generated from the positions and weights of source galaxies. We apply a Gaussian smoothing with $W_{\theta_{\rm G}}(\boldsymbol{\theta})=1/({\pi\theta_{\rm G}^2})\exp{(-{|\boldsymbol{\theta}|^2}/{\theta_{\rm G}^2})}$, taking $\theta_{\rm G}=1.5$ arcmin. (2) For each convergence map, defined on $1024\times 1024$ pixels, we identify peaks by comparing their $\kappa$ values with those of their nearest $8$ neighboring pixels. To suppress mask effects, we exclude from the peak counting regions with filling-factor values $\le 0.5$ [@Liu2014; @Liu2015], as well as the outermost $50$ pixels on each side of an individual map to eliminate boundary effects. The total leftover area for peak counting is $\sim 112\hbox{ deg}^2$. (3) We divide peaks into different bins based on their signal-to-noise ratio $\nu=\kappa/\sigma_0$, where $\sigma_0$ is the average rms of the shape noise estimated by randomly rotating source galaxies to construct noise maps. For CFHTLenS and with $\theta_{\rm G}=1.5$ arcmin, $\sigma_0\approx 0.026$. In this paper, we only consider high peaks with $\nu\ge 3$. To avoid possible bias arising from a single bin with very few peaks and thus a large statistical fluctuation, we adopt unequal binning with comparable numbers of peaks in the different bins, specifically $\nu=[3, 3.1], (3.1, 3.25], (3.25, 3.5], (3.5, 4], (4, 6]$. The peak counts are then denoted by $N_i^d$ $(i=1,\ldots,5)$.
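The peak identification in step (2) amounts to a local-maximum search over the pixelized convergence map. A minimal sketch, using a made-up $5\times 5$ map rather than survey data:

```python
# Sketch of step (2): flag pixels exceeding all 8 nearest neighbours as peaks,
# then form nu = kappa / sigma_0. The 5x5 map below is invented for illustration.
sigma0 = 0.026

kappa = [
    [0.00, 0.01, 0.00, 0.00, 0.00],
    [0.01, 0.09, 0.02, 0.00, 0.00],
    [0.00, 0.02, 0.01, 0.00, 0.00],
    [0.00, 0.00, 0.00, 0.12, 0.01],
    [0.00, 0.00, 0.00, 0.01, 0.00],
]

def find_peaks(m):
    """Return (i, j, value) for interior pixels larger than all 8 neighbours."""
    peaks = []
    for i in range(1, len(m) - 1):
        for j in range(1, len(m[0]) - 1):
            neigh = [m[i + di][j + dj]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0)]
            if m[i][j] > max(neigh):
                peaks.append((i, j, m[i][j]))
    return peaks

peaks = find_peaks(kappa)
nus = [v / sigma0 for (_, _, v) in peaks]
high = [nu for nu in nus if nu >= 3]   # keep only high peaks, nu >= 3
print(high)
```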
To derive cosmological constraints from peak counts, we define the following $\chi^2$ to be minimized [@Liu2015] $$\chi_{\rm peak}^{2}=(\boldsymbol{dN}^{(p')})^{T}\,\widehat{\boldsymbol{C}^{-1}}\,\boldsymbol{dN}^{(p')},
\label{chipeak}$$ where $\boldsymbol{dN}^{(p')}=\boldsymbol{N}^d-\boldsymbol{N}^{p'}$ is the difference between the data vector $\boldsymbol {N}^d$ and the theoretical expectation of the peak counts $\boldsymbol {N}^{p'}$ for the cosmological model $p'$. The covariance matrix $\boldsymbol{C}$ is estimated from bootstrap analyses using the CFHTLenS data themselves. The matrix $\widehat{\boldsymbol{C}^{-1}}=[(R_{\rm s}-N_{\rm bin}-2)/(R_{\rm s}-1)]\,\boldsymbol{C}^{-1}$ is the debiased inverse covariance matrix, where $N_{\rm bin}=5$ and $R_{\rm s}=10,000$ is the total number of bootstrap samples.
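A minimal numerical sketch of Eq. (\[chipeak\]): the peak counts and diagonal covariance below are hypothetical toy numbers, not CFHTLenS measurements; only the debiasing factor uses the quoted $R_{\rm s}$ and $N_{\rm bin}$.

```python
# Toy evaluation of the peak-count chi^2 with the bootstrap covariance
# rescaled by (R_s - N_bin - 2) / (R_s - 1).
import numpy as np

R_s, N_bin = 10_000, 5
alpha = (R_s - N_bin - 2) / (R_s - 1)        # ~0.9994 for these values

N_data  = np.array([120., 118., 115., 119., 60.])   # hypothetical peak counts
N_model = np.array([118., 120., 113., 121., 63.])   # hypothetical model counts
C = np.diag([16., 15., 14., 15., 9.])               # hypothetical covariance

dN = N_data - N_model
chi2 = dN @ (alpha * np.linalg.inv(C)) @ dN
print(round(chi2, 3))
```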
For $\boldsymbol {N}^{p'}$, we use the theoretical model of @Fan2010 [hereafter F10]. The model assumes that a true high peak is dominated by the contribution from a single massive halo. The shape noise effects, the major contamination in WL peak analyses with relatively shallow surveys such as CFHTLenS, are fully accounted for. The cosmological quantities involved in F10 are the mass function and the internal density profile of dark matter halos, and the cosmological distances in the lensing efficiency factor as well as in the volume element. F10 has been tested extensively against simulations [@Liu2014; @Liu2015; @Lin2015]. It has also been applied to derive cosmological constraints, within the framework of the $\Lambda$CDM model, from observed WL peaks [@Liu2015].
We adopt the halo mass function given by @Kopp2013, valid for $10^{-7}\le |f_{R0}|\le 10^{-4}$. We compare its predictions with those from our $f(R)$ simulations, to be described in the next section, and find a good agreement. For the halo density profile in $f(R)$ theory, studies have shown that it does not differ significantly from that of the corresponding $\Lambda$CDM model for the massive halos of concern in our peak analyses [e.g., @Zhao2011a; @Lombriser2012b; @Achitouv2015; @Shi2015]. We therefore use the Navarro-Frenk-White (NFW) density profile [@NFW1996; @NFW1997] with the mass-concentration (M-c) relation given by @Duffy2008. We have checked that different choices of M-c relation do not significantly affect our constraint on $|f_{R0}|$, owing to the weak degeneracy between these two parameters in the current data. We note that in both the mass function of @Kopp2013 and in our $f(R)$ simulations and observational analyses, the $\sigma_8$ parameter, the rms of the present linearly extrapolated density perturbations smoothed with a top-hat window function of scale $8h^{-1}\hbox{Mpc}$, is defined to be the $\Lambda$CDM-equivalent value rather than its true value in $f(R)$ theory. This $\sigma_8$ should thus be regarded as a measure of the initial perturbations.
Our analyses concern high peaks physically related to halos with $M\sim 10^{14}{M}_{\odot}$ and above. The baryonic effects on their mass function and overall density profiles have been shown to be minimal [e.g., @Sawala2013; @Schaller2015a; @Schaller2015b]. Depending on the baryonic physics, the very central part of halos may be affected [e.g., @Schaller2015b]; however, our smoothing operation can effectively suppress the influence of the detailed central profiles. We therefore do not expect significant baryonic effects on high peak abundances for the current WL data with relatively large statistical errors [e.g., @Osato2015].
We focus on deriving constraints on $(|f_{R0}|, \Omega_{\rm m}, \sigma_8)$, the parameters that WL effects are most sensitive to.
We employ priors on $\Omega_{\rm m}$ and the initial curvature perturbation parameter $A_{\rm s}$ from WMAP9 [@Hinshaw2013] or Planck15 [@Planck2015], where $A_{\rm s}$ can be directly linked to $\sigma_8$ [e.g., @Schmidt2009]. Thus our total $\chi^2$ is $$\chi_{\rm tot}^2=\chi_{\rm peak}^2+\chi_{\Omega_{\rm m}}^2+\chi_{A_{\rm s}}^2.
\label{chitot}$$ Here $\chi_{\Omega_{\rm m}}^2=(\Omega_{\rm m}-\Omega_{\rm m}^{\rm prior})^2/\sigma_{\Omega_{\rm m}^{\rm prior}}^2$ and $\chi_{A_{\rm s}}^2=(A_{\rm s}-A_{\rm s}^{\rm prior})^2/\sigma_{A_{\rm s}^{\rm prior}}^2$, where $\Omega_{\rm m}^{\rm prior}$ and $A_{\rm s}^{\rm prior}$ are the prior central values, and $\sigma_{\Omega_{\rm m}^{\rm prior}}$ and $\sigma_{A_{\rm s}^{\rm prior}}$ are the corresponding 68% confidence limits. The specific priors are listed in Table \[tab:prior\]. These priors do not include the contributions from the constructed lensing potential that depends on gravity theories. On the other hand, for the small $|f_{R0}|$ concerned here, the impacts of modified gravity on the primordial cosmic microwave background (CMB) and on the late integrated Sachs-Wolfe (ISW) effect are negligible [e.g., @Song2007; @Planck15XIV]. Thus the priors we adopt here that are derived under $\Lambda$CDM are feasible and should not introduce biases to our constraints on $|f_{R0}|$. The other cosmological parameters, such as the baryon density $\Omega_{\rm b}$, the Hubble constant $h$ and the power index of initial density perturbations $n_{\rm s}$, are fixed to the corresponding values of WMAP9 or Planck15.
Mock tests
==========
To validate our pipeline, we generate mocks from ray-tracing simulations.
We run N-body simulations for flat $\Lambda$CDM under GR, and for $f(R)$ theory with $n=1$ and $|f_{R0}|=10^{-4}$ (F4), $10^{-5}$ (F5) and $10^{-6}$ (F6), respectively. Besides $|f_{R0}|$, all the other cosmological parameters are the same in all simulations with $\Omega_{\rm m}=0.281$, $\Omega_\Lambda=0.719$, $\Omega_{\rm b}=0.046$, $h=0.697$, $n_{\rm s}=0.971$ and the $\Lambda$CDM-equivalent $\sigma_8=0.819$.
The simulations start at redshift $z=49$ with the initial conditions generated by MPgrafic [@Zhao2011a]. The ECOSMOG code [@Li2012c] is used for the dynamical evolution. The box size is $1024h^{-1}\hbox{Mpc}$ and the particle number is $1024^3$. We compare the halo mass function from these simulations with the predictions from @Kopp2013, and find a good agreement.
We carry out mock WL analyses for GR, F5 and F4; here we mainly present the results for GR and F5. For each model, we run five independent N-body simulations and pad them together to form light cones out to $z=3$. The five simulations for F5 have exactly the same initial conditions as their GR counterparts.
Based on the padded simulations, we then use $36$ lens planes evenly distributed in comoving distance out to $z=3$ to perform multiple-plane ray-tracing calculations, following closely the procedures applied in our previous studies [@Liu2014; @Liu2015]. To generate mock data, we divide the simulated area into different fields of $1\times 1\hbox{ deg}^2$, and match them randomly to the observational fields. In each field, we preserve the relative positions and the photo-zs of the observed galaxies, as well as the masked areas. We randomly rotate source galaxies to eliminate the original WL signals, and then incorporate the reduced WL shears from the ray-tracing simulations to construct the mock shear data. To better estimate the shape noise effects, we apply $15$ sets of different random rotations to the source galaxies. Thus for each model, we finally have $15$ sets of mock data, each with a survey area of $\sim 150\hbox{ deg}^2$. We refer the readers to @Liu2015 for further details.
With mock data, we perform the same WL peak analyses as we do for observational data, and derive cosmological constraints to validate our analyzing pipeline.
  Parameter                          Obs. (WMAP9: WMAP+BAO+$H_0$)   Obs. (Planck15: TT,TE,EE+lowP)   Mock (F5 & GR)
  ---------------------------------- ------------------------------ -------------------------------- --------------------
  $\Omega_{\rm m}^{\rm prior}$       $0.288\pm 0.0093$              $0.3156\pm 0.0091$               $0.281\pm 0.0093$
  $10^9 A_{\rm s}^{\rm prior}$       $2.427\pm 0.079$               $2.207\pm 0.074$                 $2.372\pm 0.079$
  $k_{\rm pivot} ({\rm Mpc}^{-1})$   $0.002$                        $0.05$                           $0.002$
  $\Omega_{\rm b}$                   $0.0472$                       $0.0492$                         $0.046$
  $h$                                $0.6933$                       $0.6727$                         $0.697$
  $n_{\rm s}$                        $0.971$                        $0.9645$                         $0.971$

  : Priors and fixed cosmological parameters used in the analyses.[]{data-label="tab:prior"}
![\[fig:mockpeak\] Peak counts distribution from F5 (upper left) and GR (upper right) mock simulations. Different symbols with different colors correspond to different noise realizations. The blue ‘\*’ and the error bars are for the average values and the rms over the 15 realizations. The solid line is for our model predictions. The lower panel is for the difference ratios.](peakF5.eps "fig:"){width="0.495\columnwidth" height="0.36\columnwidth"} ![\[fig:mockpeak\] Peak counts distribution from F5 (upper left) and GR (upper right) mock simulations. Different symbols with different colors correspond to different noise realizations. The blue ‘\*’ and the error bars are for the average values and the rms over the 15 realizations. The solid line is for our model predictions. The lower panel is for the difference ratios.](peakGR.eps "fig:"){width="0.495\columnwidth" height="0.36\columnwidth"} ![\[fig:mockpeak\] Peak counts distribution from F5 (upper left) and GR (upper right) mock simulations. Different symbols with different colors correspond to different noise realizations. The blue ‘\*’ and the error bars are for the average values and the rms over the 15 realizations. The solid line is for our model predictions. The lower panel is for the difference ratios.](fractional_diff.eps "fig:"){width="0.9\columnwidth" height="0.32\columnwidth"}
Results
=======
We first present the results from mock simulations. Figure \[fig:mockpeak\] shows the peak number distributions for F5 (upper left) and GR (upper right). In both cases, the averaged mock results agree with our model predictions very well. The lower panel shows the difference ratios between F5 and GR (blue) and F4 and GR (red), respectively, which demonstrates the constraining potential of WL peak statistics on $|f_{R0}|$.
![\[fig:mockfitting\] Constraints derived from F5 (upper) and GR (lower) mock data. In 1-d distributions, blue solid and dashed lines indicate the locations of the maximum marginalized probabilities, and the corresponding 68% confidence intervals. Red solid lines and ‘+’ symbols are the input parameters of the mock simulations. For the GR case, $f_{R0}=0$, and we only indicate the input $\Omega_{\rm m}$ and $\sigma_8$. ](fitting_F5GR.eps){width="1\columnwidth" height="0.5\columnwidth"}
With these averaged data as our mock ‘observed’ data and the covariance matrix derived by bootstrapping from the $15$ sets of simulated catalogs, we perform Markov Chain Monte Carlo (MCMC) constraints on ($|f_{R0}|, \Omega_{\rm m}, \sigma_8$) using COSMOMC [@Cosmomc2002] modified to include our likelihood function from WL peak abundances. As in real observational analyses, we also add the priors on $\Omega_{\rm m}$ and $A_{\rm s}$ (Table \[tab:prior\]). Here the central value of $\Omega_{\rm m}$ is directly from the simulation input, and the $A_{\rm s}$ value is chosen to match the input $\sigma_8$. The $1\sigma$ ranges for the two parameters are taken from WMAP9. For $|f_{R0}|$, because its value spans orders of magnitude, we sample it in log-space and apply a flat prior in the range of $\log_{10}|f_{R0}|=[-7, -4]$.
The obtained constraints are shown in Figure \[fig:mockfitting\] for F5 (upper) and GR (lower). The red symbols and lines denote the input values of the corresponding mock simulations. The 1-d maximum probability values (blue solid lines) agree excellently with the input parameters. Similar results are also obtained for the F4 mock analyses. The flattening trend for $\log_{10}|f_{R0}|< {-6}$ in the GR case reflects the non-detectable differences in high peak abundances between GR and $f(R)$ with $|f_{R0}|< 10^{-6}$ due to the chameleon effect. The 1-d marginalized constraints on $\log_{10}|f_{R0}|$ for the GR and F5 mocks are shown in Table \[tab:constraint\]. For F5, we show the $68\%$ C.L. because the $95\%$ C.L. is beyond our considered range of $\log_{10}|f_{R0}|$.
We now show the observational results from CFHTLenS. We note that in the base Planck15 constraints, a minimum neutrino mass of $0.06\hbox{ eV}$ is included in their analyses. To be consistent, we therefore also include this neutrino mass in our peak abundance calculations when the Planck15 priors are applied.
The results are presented in Figure \[fig:obs\_results\]. The left panel shows the peak count distribution along with the theoretical predictions from the best-fit cosmological parameters obtained from the MCMC fittings using WMAP9 (green) and Planck15 (red) priors, respectively. The right panels show the derived constraints. The marginalized 1-d constraints for $|f_{R0}|$ are shown in Table \[tab:constraint\], where the results from linear sampling on $|f_{R0}|$ and the value whose posterior probability is $\exp(-2)$ ($2\sigma$) of the maximum probability are also listed.
It is seen that WL peak abundance analyses can provide strong constraints on $|f_{R0}|$ even with data from a survey area of $\sim 150\deg^2$. The $95\%$ C.L. from log-space sampling is $\log_{10}|f_{R0}|<-4.82$ and $<-5.16$ with WMAP9 and Planck15 priors, respectively. The stronger constraint from Planck15 is due to its somewhat larger value of $\Omega_{\rm m}$.
Our constraints are comparable to, and slightly tighter than, those from @Cataneo2015, namely $\log_{10}|f_{R0}|<-4.73$ (WMAP9) and $<-4.79$ (Planck2013), noting their wider prior of $[-10, -2.523]$ on $\log_{10}|f_{R0}|$. Comparing to the results in Table 8 of @Planck15XIV, our equivalent constraint on $B_0$ is $B_0<2.45\times 10^{-4}$ (Planck15, linear sampling), which is about $2$-$3$ times larger than theirs obtained by adding redshift-space distortion and WL 2-pt correlation data to Planck CMB data.
In the analyses above, we fix ($n_s$, $h$, $\Omega_{\rm b}$). WL peak abundances depend on them very weakly. On the other hand, given $A_s$, the derived $\sigma_8$ changes with their values, which in turn may affect our constraint on $|f_{R0}|$. To test this, we perform MCMC analyses by including them separately as additional free parameters and applying WMAP9 priors, which are larger than those of Planck15. We find that by adding $n_s$ or $h$ or $\Omega_{\rm b}$, the constraint is weakened by $\sim 1.4\%,0.9\%$ and $ 0.2\%$ with $\log_{10}|f_{R0}|<-4.75$, $-4.78$, and $-4.81$, respectively. Considering the negative degeneracy between $n_s$ ($h$) and $A_s$ from WMAP9, their influences on the $|f_{R0}|$ constraint should be even smaller.
Summary
=======
Using CFHTLenS data, we derive, for the first time, constraints on $f(R)$ theory from WL peak abundance analyses. To demonstrate the potential of the probe, we focus on the specific HS07 model with $n=1$. We find no evidence of deviations from GR and obtain strong limits on the $|f_{R0}|$ parameter. For other values with $n>1$, because of the $|f_{R0}|$-$n$ degeneracy [e.g., @Li2011], we expect the upper limit on $|f_{R0}|$ to be larger, i.e., weaker. We will perform more general studies in the future.
WL high peaks are closely associated with massive clusters, and thus the constraining power of WL high peak abundances is physically similar to that of cluster abundance studies. However, WL peak analyses are much less affected by baryonic physics than other cluster probes in which baryon-related observables are involved. On the other hand, the WL peak signal depends on the halo density profile, whose shape is determined by the concentration parameter for an NFW halo. Thus the uncertainty in the halo M-c relation can potentially affect the cosmological constraints from WL peak abundances. This impact is weak for our current analyses given the data statistics, but for future large surveys such effects need to be considered carefully. Our studies show that the M-c relation can be constrained simultaneously with the cosmological parameters from WL peak counts, avoiding potential biases from an assumed M-c relation [@Liu2015].
With improved WL data, we expect that our analyses can be applied to constrain a more general class of modified gravity theories that can affect the halo abundances significantly.
  Mock
  ----------------------------- -------------------------------- ---------------------- ----------------------
  Parameter                     Case
  $\mathrm{log}_{10}|f_{R0}|$   GR (1-d 95% limit)               $<-4.59$
  $\mathrm{log}_{10}|f_{R0}|$   F5 (1-d best fit and 68% C.L.)   $-5.08^{+0.81}_{-1.06}$

  CFHTLenS observation
  Parameter                     Case                             WMAP9                  Planck15
  $\mathrm{log}_{10}|f_{R0}|$   1-d limit (95%)                  $<-4.82$               $<-5.16$
  $|f_{R0}|$                    1-d limit (95%)                  $<7.59\times10^{-5}$   $<4.63\times10^{-5}$
  $\mathrm{log}_{10}|f_{R0}|$   1-d limit ($2\sigma$)            $<-4.50$               $<-4.92$

  : Constraints from mock and observational analyses.[]{data-label="tab:constraint"}
![\[fig:obs\_results\] Results from CFHTLenS observational data. Left: The peak counts distribution. The corresponding solid lines are the theoretical predictions with the best-fit cosmological parameters listed therein. The error bars are the square root of the diagonal terms of the covariance matrix. Right: The derived constraints. Green and red contours are the results with WMAP9 and Planck15 priors, respectively.](peakMCMC_CFHT.eps){width="1\columnwidth" height="0.7\columnwidth"}
Acknowledgement
===============
This work used the DIRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DIRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BIS National E-infrastructure Capital Grant No. ST/K00042X/1, STFC Capital Grants No. ST/H008519/1 and No. ST/K00087X/1, STFC DIRAC Operations Grant No. ST/K003267/1 and Durham University. DIRAC is part of the National E-Infrastructure. The MCMC calculations are partly done on the Laohu supercomputer at the Center of Information and Computing at National Astronomical Observatories, Chinese Academy of Sciences, funded by Ministry of Finance of People’s Republic of China under Grant No. ZDYZ2008-2. The research is supported in part by NSFC of China under Grants No. 11333001, No. 11173001, and No. 11033005. L.P.F. also acknowledges the support from NSFC Grants No. 11103012, Shanghai Research Grant No. 13JC1404400 $\&$ No. 16ZR1424800 of STCSM, and from Shanghai Normal University Grants No. DYL201603. G.B.Z. is supported by the 1000 Young Talents program in China. Z.H.F. and G.B.Z. are also supported by the Strategic Priority Research Program “The Emergence of Cosmological Structures” of the Chinese Academy of Sciences Grant No. XDB09000000. B.L. acknowledges the support by the U.K. STFC Consolidated Grant No. ST/L00075X/1 and No. RF040335. M.C.C. acknowledges the support from NSFC Grants No. 1143303 and No. 1143304. X. K. L. acknowledges the support from General Financial Grant from China Postdoctoral Science Foundation with Grant No. 2016M591006. To access the simulation data used in this work, please contact the authors.
---
abstract: 'Scalar field theories that possess a Vainshtein mechanism are able to dynamically suppress the associated fifth forces in the presence of massive sources through derivative non-linearities. The resulting equations of motion for the scalar are highly non-linear and therefore very few analytic solutions are known. Here we present a brief investigation of the structure of Vainshtein screening in symmetrical configurations, focusing in particular on the spherical, cylindrical and planar solutions that are relevant for observations of the cosmic web. We consider Vainshtein screening in both the Galileon model, where the non-linear terms involve second derivatives of the scalar, and a k-essence theory, where the non-linear terms involve only first derivatives of the scalar. We find that screening, and consequently the suppression of the scalar force, is most efficient around spherical sources, weaker around cylindrical sources and can be absent altogether around planar sources.'
author:
- 'Jolyon K. Bloomfield'
- Clare Burrage
- 'Anne-Christine Davis'
bibliography:
- 'bibrefs.bib'
title: The Shape Dependence of Vainshtein Screening
---
Introduction
============
Are there new, light degrees of freedom associated with the physics explaining the current acceleration of the expansion of the universe? The simplest explanation of the observed current behaviour of the universe is the introduction of a cosmological constant; however, the required value of this constant continues to defy explanation in a quantum theory. Alternative theories almost universally introduce new light scalars [@Copeland:2006wr; @Clifton:2011jh] that would mediate long range fifth forces, and yet no such force has been seen to date. In the absence of an explanation for why such scalars would be forbidden from interacting with matter fields, scalar fields are required to possess a screening mechanism in order to dynamically hide the resulting force from observations. Screening mechanisms rely on the presence of non-trivial self interactions of the scalar field in order to change the behaviour of the field dynamically on differing scales and in differing environments. We can classify screening mechanisms depending on the type of self-interactions that lead to the screening: $\phi$ screening, which includes chameleon [@Khoury:2003aq], symmetron [@Hinterbichler:2010es; @Olive:2007aj; @Pietroni:2005pv] and varying dilaton mechanisms [@Brax:2010gi; @Brax:2011ja]; $\partial \phi$ screening, which includes k-essence [@Brax:2012jr], k-mouflage [@Babichev:2009ee] and D-BIonic screening mechanisms [@Burrage:2014uwa]; and $\partial \partial \phi$ screening, which is a property of the Galileon and generalised Galileon models [@Nicolis2008; @Kimura:2011dc; @Narikawa:2013pjr; @Kase:2013uja].
In the two latter cases, the screening of the scalar field around a source occurs when the field gradients become large and the derivative non-linearities begin to dominate the evolution of the scalar field. This is known as Vainshtein screening [@Vainshtein:1972sx], and the distance scale within which the screening occurs is known as the Vainshtein radius. In this work, we discuss both types of Vainshtein screening, considering theories that rely on non-linearities in both first and second derivatives of the scalar field. In each case, we work with a specific model to illustrate the screening behaviour: we use the D-BIonic scalar, an example of $\partial \phi$ screening, and the Galileon model, an example of $\partial\partial \phi$ screening. These are particularly interesting examples of screening, as in the limit where the coupling to matter vanishes, both theories possess symmetries which protect the self-interactions of the scalar field from quantum corrections [@Luty:2003vm; @Nicolis:2004qq; @Burrage:2014uwa]; introducing the coupling to matter only mildly breaks this symmetry. While this property makes these theories particularly interesting to study, the phenomenology of each screening mechanism is common to the broader class of theories.
Screening mechanisms require non-linear interactions, and therefore the scalar field profile can be sensitive to the shape of a source in a way that Newtonian gravitational forces are not. For all of the screening mechanisms mentioned above, the phenomenon of screening has been demonstrated for static, spherically symmetric sources. To a good approximation, this configuration describes many objects in our universe, including galaxy halos, stars and planets. The efficiency of screening in such conditions is invoked to evade fifth-force constraints in the vicinity of such objects.
However, given the apparent necessity of introducing light degrees of freedom for cosmological purposes, we would like to consider environments in which screening is not so efficient, in order to place tighter constraints on scalar field theories with screening. For this reason it is important to study screening beyond the static, spherically symmetric approximation. Previous work in this field has investigated screening behaviors about spherical bodies in two-body systems and in slowly-rotating regimes in a fully relativistic description [@Hiramatsu:2012xj; @Chagoya:2014fza]. In this work we study the presence (or absence) of Vainshtein screening for a number of static, one-dimensional systems with completely different geometries. Vainshtein screening is a particularly interesting target for this investigation as it is already known that no screening of the Galileon field occurs around a planar source [@Brax:2011sv][^1]. We leave the possibility of relaxing the static assumption for future work.
We treat Galileon and D-BIonic theories in Sections \[sec:galileon\] and \[sec:dbionic\] respectively. In the Galileon case, we review known results in spherical and planar symmetry, and present new solutions in cylindrical symmetry. In the D-BIonic case, we present new solutions in planar and cylindrical symmetry, and review known results in spherical symmetry. In Section \[sec:discussion\] we discuss the implications of these results and their connections with cosmological observations.
The Galileon {#sec:galileon}
============
The flat space Galileon action introduced by Nicolis *et al.* [@Nicolis2008] is given by $$\begin{aligned}
\label{eq:fullaction}
S = \int d^4x \sqrt{-g} & \bigg[ - \frac{1}{2} {\cal L}_2 - \frac{1}{2 \Lambda^3} {\cal L}_3 - \frac{\lambda_4}{2\Lambda^6} {\cal L}_4
\nonumber\\
& \qquad - \frac{\lambda_5}{2\Lambda^9} {\cal L}_5 + \frac{\beta \phi}{M_P} \tensor{T}{^\mu_\mu}\bigg]\end{aligned}$$ where
$$\begin{aligned}
{\cal L}_2 &= (\nabla \phi)^2
\\
{\cal L}_3 &= \square \phi (\nabla \phi)^2
\\
{\cal L}_4 &= (\nabla \phi)^2 \left[ (\square \phi)^2 - \nabla_\mu \nabla_\nu \phi \nabla^\mu \nabla^\nu \phi \right]
\\
{\cal L}_5 &= (\nabla \phi)^2 \big[ (\square \phi)^3 - 3 (\square \phi) \nabla_\mu \nabla_\nu \phi \nabla^\mu \nabla^\nu \phi
\nonumber\\
& \qquad \qquad \quad + 2 \nabla^\mu \nabla_\nu \phi \nabla^\nu \nabla_\rho \phi \nabla^\rho \nabla_\mu \phi \big]\end{aligned}$$
with $(\nabla \phi)^2 = \nabla_\mu \phi \nabla^\mu \phi$ and $\square \phi = \nabla^\mu \nabla_\mu \phi$, and where $M_P = 1/\sqrt{8 \pi G}$ is the reduced Planck mass. The first four terms in this action are invariant under the Galileon symmetry $$\begin{aligned}
\phi\rightarrow \phi +b_{\mu}x^{\mu}+c\end{aligned}$$ for arbitrary constants $b_{\mu}$ and $c$, up to total derivative terms. Although a tadpole term is also compatible with the symmetry, we do not include it here. The covariant form was first given by Deffayet *et al.* [@Deffayet2009a]; however, in this work we restrict our attention to situations where the curvature is weak. For the static, non-relativistic sources that we investigate, corrections due to spacetime curvature will be governed by the size of the Newtonian potential and its derivatives. We thus expect the theory described by the action to be sufficient for our purposes. Furthermore, we will use flat metrics to investigate solutions to the scalar field equations. We expect that corrections to these solutions due to spacetime curvature effects will go as $O(\Phi)$, which for our purposes are negligible.
The final term in the action represents a conformal coupling to the trace of the stress-energy tensor of the matter sector, which breaks the Galileon symmetry. The presence of this coupling means that test particles of mass $m$ experience a Galileon force of the form $$\begin{aligned}
\vec{F}_{\phi}= - m \frac{\beta}{M_P}\vec{\nabla} \phi\,.\end{aligned}$$
Neglecting the coupling to matter, the Galileon action can be alternatively expressed as an action in $D$ dimensions in the following manner, as described by Deffayet *et al.* [@Deffayet2009b]. $$\begin{aligned}
\label{eq:fancyaction}
S = \int d^D x \sqrt{-g} \phi \; \sum_{n=1}^D \lambda_n \tensor*[^n]{A}{^{\mu_1 \ldots \mu_n}_{\nu_1 \ldots \nu_n}} \Pi_{i=1}^n \nabla_{\mu_i} \nabla^{\nu_i} \phi\end{aligned}$$ Here, we absorb various coefficients into the coupling constants $\lambda_n$. The tensor $\tensor*[^n]{A}{^{\mu_1 \ldots \mu_n}_{\nu_1 \ldots \nu_n}}$ is defined as the contraction of $D-n$ indices between two epsilon tensors as $$\begin{aligned}
\tensor*[^n]{A}{^{\mu_1 \ldots \mu_n}_{\nu_1 \ldots \nu_n}} = \epsilon^{\mu_1 \ldots \mu_n \alpha_{n+1} \ldots \alpha_D} \epsilon_{\nu_1 \ldots \nu_n \alpha_{n+1} \ldots \alpha_D}\end{aligned}$$ where $$\begin{aligned}
\epsilon^{\mu_1 \ldots \mu_n} = - \frac{1}{\sqrt{-g}} \delta_1^{[\mu_1} \delta_2^{\mu_2} \ldots \delta_n^{\mu_n]} \,.\end{aligned}$$ Note that $\tensor*[^n]{A}{}$ is completely antisymmetric on the top indices and the bottom indices.
Different $n$ correspond to different order Galileon terms. The $n=1$ term is the quadratic Galileon, the $n=2$ term the cubic Galileon, and so on. In this form, it is obvious that there are a finite number of Galileon terms, as the $\tensor*[^n]{A}{}$ tensor can only antisymmetrize over a number of indices equal to the spacetime dimension, and no more. In particular, in four-dimensional spacetime, the highest order Galileon possible is the quintic Galileon.
Static Solutions
----------------
We begin by looking at the Galileon equation of motion in Cartesian coordinates. Starting from the Galileon part of the action , the equation of motion can be expressed as $$\begin{aligned}
\sum_{n=1}^D \lambda_n \tensor*[^n]{A}{^{\mu_1 \ldots \mu_n}_{\nu_1 \ldots \nu_n}} \Pi_{i=1}^n \partial_{\mu_i} \partial^{\nu_i} \phi = 0 \,.\end{aligned}$$ In this form, it is straightforward to see that for a given order $n$, the terms in the equation of motion will be zero if the number of Cartesian coordinates that $\phi$ depends on is less than $n$ (modulo terms of the form $b_\mu x^\mu$, which vanish when twice differentiated). For example, if $\phi = \phi(x)$, then only the $n=1$ term will survive, as for $n>1$, all terms contain products of partial derivatives of $\phi$ that differentiate with respect to $y$, $z$ or $t$ and therefore vanish.
This suggests that for static configurations in planar symmetry, we expect only the quadratic Galileons to contribute. In cylindrical symmetry, the quadratic and cubic terms contribute, as $\phi(r) = \phi(\sqrt{x^2 + y^2})$ depends on both $x$ and $y$. Spherical symmetry will receive contributions from the quadratic, cubic and quartic Galileon terms. The quintic term can never contribute to static solutions; only configurations that depend non-trivially on $x$, $y$, $z$ and $t$ are influenced by the quintic term. Alternatively, notice that when flat dimensions are present in the metric and the Galileon configuration is independent of this dimension, the configuration is also a solution of a theory with fewer dimensions, where correspondingly fewer nontrivial Galileon terms exist in the action. In all static solutions, the flat time dimension could well have been integrated out of the action (effectively removing the quintic term), and similarly for further symmetric solutions. This greatly simplifies the structure of the equations of motion when appropriate symmetries are present. It also allows for the possibility of breaking the degeneracy between the Galileon parameters $\lambda_i$ by studying configurations with different spatial symmetries. This argument is more generally true for the class of theories which possess $(\partial\partial\phi)$ screening, known as generalised Galileons, because terms which include second derivatives of $\phi$ always have to enter with the same index structure as the Galileon terms in order to avoid the presence of ghost degrees of freedom.
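This counting can be verified symbolically. The following sketch is our own illustration (not part of the original analysis): it constructs the cubic-order contribution $(\square \phi)^2 - \nabla_\mu \nabla_\nu \phi \, \nabla^\mu \nabla^\nu \phi$ for static fields on flat space and confirms that it vanishes identically for a configuration depending on a single Cartesian coordinate, while surviving for one depending on two.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def cubic_contribution(phi, coords=(x, y, z)):
    """(box phi)^2 - (d_a d_b phi)(d^a d^b phi) on flat Euclidean space,
    i.e. the n=2 (cubic-Galileon) piece of the static equation of motion."""
    H = sp.Matrix([[sp.diff(phi, a, b) for b in coords] for a in coords])
    return sp.simplify(H.trace()**2 - (H * H).trace())

# planar-type configuration phi = phi(x): the cubic piece vanishes identically
print(cubic_contribution(sp.sin(x)))       # 0
# cylindrical-type configuration phi = phi(x, y): generically nonzero
print(cubic_contribution(x**2 * y**2))     # -24*x**2*y**2
```

The same construction with the quartic-order combination would vanish for any field of two Cartesian coordinates, in line with the argument above.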
Let us now look towards solving the full equations of motion under the assumption of static configurations. Screening is most important in non-relativistic scenarios where all of our searches for deviations from Newtonian gravity are carried out, including laboratory searches for fifth forces and solar system constraints on deviations from the $r^{-2}$ force law. These are the tests that screening mechanisms are designed to avoid. In this regime, the mass energy completely dominates the stress-energy tensor, and pressure and anisotropic stresses are negligible. We thus assume a matter configuration consisting only of an energy density with stress-energy tensor $\tensor{T}{^\mu_\mu} = \tensor{T}{^0_0} = -\rho$. The equation of motion from the action is written in covariant notation as the following, where we neglect the quintic term, which vanishes under the static assumption. $$\begin{aligned}
\frac{\beta}{M_P} \rho &= \square \phi + \frac{1}{\Lambda^3} \left[(\square \phi)^2 - (\nabla_\mu \nabla_\nu \phi) (\nabla^\mu \nabla^\nu \phi)\right]
\nonumber \\
& \qquad + \frac{\lambda_4}{\Lambda^6} \big[ (\square \phi)^3
- 3 \square \phi (\nabla_\mu \nabla_\nu \phi) (\nabla^\mu \nabla^\nu \phi)
\nonumber \\
& \qquad \qquad \quad
+ 2 (\nabla^\mu \nabla_\nu \phi) (\nabla^\nu \nabla_\gamma \phi) (\nabla^\gamma \nabla_\mu \phi)\big]\end{aligned}$$ We use step-functions for our energy density profiles, as we are primarily concerned with the exterior field solutions for the scalar field (such as outside a planet/star). We show below that the exterior solutions only ever depend on the total enclosed mass (or appropriate mass density in cylindrical or planar symmetries), which further justifies restricting our investigation to sources of constant density.
Far away from a source we expect the field to be close to the vacuum solution $\phi \approx \mathrm{const}$. Therefore gradients of the field will be small and the non-linear terms in the equation of motion can be neglected when compared with $\square \phi$. If Vainshtein screening occurs then as we approach a source, gradients of the field will increase and the non-linear terms will begin to dominate, changing the form of the scalar field profile. The distance scale within which the non-linear terms dominate is known as the Vainshtein radius.
### Planar Symmetry
We begin by investigating planar symmetry using the metric $$\begin{aligned}
ds^2 = - dt^2 + dx^2 + dy^2 + dz^2 \,.\end{aligned}$$ We choose $\phi = \phi(z)$, and assume that $\rho = \rho(z)$ also. Such a scenario was first considered for the Galileon in [@Brax:2011sv]. Only the quadratic and coupling terms survive in the equation of motion. $$\begin{aligned}
\frac{\beta}{M_P} \rho(z) = \partial_z^2 \phi\end{aligned}$$ For concreteness, let $\rho(z) = \rho_0$ between $\pm z_0$ and zero outside, and choose the zero of the potential to be $\phi(0)=0$. We can then integrate to obtain $$\begin{aligned}
\partial_z \phi &= \left\{
\begin{array}{cc}
\displaystyle \frac{\beta \rho_0}{M_P} z \qquad & |z| < z_0
\\
\displaystyle \frac{\beta \rho_0}{M_P} z_0 \qquad & |z| \ge z_0
\end{array}
\right.\end{aligned}$$ and $$\begin{aligned}
\phi &= \left\{
\begin{array}{lc}
\displaystyle \frac{\beta \rho_0}{2 M_P} z^2 \qquad & |z| < z_0
\\
\displaystyle \frac{\beta \rho_0 z_0}{M_P} \left(z - \frac{z_0}{2}\right) \qquad & |z| \ge z_0
\end{array}
\right.\end{aligned}$$ where $\partial_z \phi = 0$ at the origin by symmetry. The absence of the scale $\Lambda$ from these expressions clearly indicates that no non-linear or screening effects are present. As the gravitational force outside the plane has magnitude $F_G = \rho_0 z_0 m/2 M_P^2$, the ratio of the scalar force to the corresponding gravitational force $F_\phi / F_G$ is given by $$\begin{aligned}
\frac{F_\phi}{F_G} = 2 \beta^2 \,.\end{aligned}$$
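As a quick numerical illustration of this result (our own addition, with parameter values chosen arbitrarily and in arbitrary units), the exterior gradient is constant, independent of $\Lambda$, and yields the force ratio $2\beta^2$:

```python
import numpy as np

# Illustrative parameter values (our own choices, arbitrary units)
beta, rho0, z0, M_P = 0.7, 2.0, 1.5, 1.0

def dphi(z):
    """Field gradient of the planar quadratic-Galileon solution above."""
    z = np.asarray(z, dtype=float)
    return (beta * rho0 / M_P) * np.where(np.abs(z) < z0, z, np.sign(z) * z0)

# Exterior check: d(phi)/dz = beta*rho0*z0/M_P for z >= z0, with no Lambda
print(float(dphi(10.0)), beta * rho0 * z0 / M_P)

# Force ratio: F_phi = (beta/M_P) m dphi, F_G = m rho0 z0 / (2 M_P**2)
m = 1.0
ratio = (m * beta / M_P * float(dphi(10.0))) / (m * rho0 * z0 / (2 * M_P**2))
print(ratio, 2 * beta**2)  # the two agree
```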
### Cylindrical Symmetry
We next investigate cylindrical symmetry, using the metric $$\begin{aligned}
ds^2 = -dt^2 + dr^2 + r^2 d\theta^2 + dz^2 \,.\end{aligned}$$ We take $\phi = \phi(r)$ as well as $\rho = \rho(r)$. The quadratic, cubic and coupling terms contribute to the equation of motion $$\begin{aligned}
\frac{\beta}{M_P} \rho(r) = \phi'' + \frac{\phi'}{r} + \frac{2 \phi' \phi''}{r \Lambda^3}\end{aligned}$$ where we use primes to denote derivatives with respect to $r$.
Let us consider a cylinder with constant mass density $\rho = \rho_0$ for $r < r_0$, and zero outside. The equation of motion can be rearranged into $$\begin{aligned}
\frac{\beta}{M_P} r \rho(r) = (r \phi')' + \frac{(\phi'^2)'}{\Lambda^3}
\label{eq:cylallgal}\end{aligned}$$ which can be straightforwardly integrated over $r$. We choose our boundary conditions to be $\phi(0)=0$, and cylindrical symmetry demands $\phi^{\prime}(0)=0$. If the cubic term is absent ($\Lambda\rightarrow \infty$), we have $$\begin{aligned}
\phi' =
\left\{
\begin{array}{lc}
\displaystyle \frac{\beta \rho_0 r}{2 M_P} \qquad & r < r_0
\\
\displaystyle \frac{\beta \rho_0 r_0^2}{2 M_P r} \qquad & r \ge r_0
\end{array}
\right.\end{aligned}$$ giving the expected $\sim1/r$ force law in the exterior of the source. The gravitational force sourced by the same cylindrical object is $F_G = m \rho_0 r_0^2 / 4 M_P^2 r$, again yielding the ratio $$\begin{aligned}
\frac{F_\phi}{F_G} = 2 \beta^2 \,.\end{aligned}$$ The corresponding scalar potential is $$\begin{aligned}
\phi =
\left\{
\begin{array}{lc}
\displaystyle \frac{\beta \rho_0 r^2}{4 M_P} \qquad & r < r_0
\\
\displaystyle \frac{\beta \rho_0 r_0^2}{4 M_P} \left[1 + 2 \ln\left(\frac{r}{r_0}\right) \right] \qquad & r \ge r_0
\end{array}
\right. \,.\end{aligned}$$
We now turn to the full equation of motion. Solving Eq. for $\phi'$ yields $$\begin{aligned}
\phi' =
\left\{
\begin{array}{lc}
\displaystyle \frac{\Lambda^3 r}{2} \left(\sqrt{1 + \frac{r_v^2}{r_0^2}} - 1\right) \qquad & r < r_0
\\
\displaystyle \frac{\Lambda^3 r}{2} \left(\sqrt{1 + \frac{r_v^2}{r^2}} - 1\right) & r \ge r_0
\end{array}
\right.\end{aligned}$$ where the Vainshtein radius, within which the non-linear terms dominate the behaviour of the scalar, is $$\begin{aligned}
r_v = \sqrt{\frac{2 \beta \rho_0 r_0^2}{M_P \Lambda^3}} = \sqrt{\frac{2 \beta \lambda}{\pi M_P \Lambda^3}}\end{aligned}$$ where $\lambda = \pi r_0^2 \rho_0$ is the linear mass density. We have chosen a positive sign outside the square root to ensure that we recover the $1/r$ unscreened force law at large distances from the source. We also impose continuity of $\phi^{\prime}$ at $r=r_0$.
Integrating one last time, we obtain the scalar potentials $$\begin{aligned}
\phi &= \frac{\Lambda^3}{4} \left(\sqrt{1 + \frac{r_v^2}{r_0^2}} - 1\right) r^2\end{aligned}$$ for $r < r_0$ and $$\begin{aligned}
\phi &= \frac{\Lambda^3}{4} \bigg[r^2 \left(\sqrt{1 + \frac{r_v^2}{r^2}} - 1\right)
\nonumber\\
& \qquad + r_v^2 \ln \left(\frac{r + \sqrt{r^2 + r_v^2}}{r_0 + \sqrt{r_0^2 + r_v^2}}\right)
\bigg]\end{aligned}$$ for $r \ge r_0$[^2].
Deep inside the Vainshtein radius $r_0 < r \ll r_v$, the scalar force saturates at a constant magnitude $F_{\phi} = m \beta \Lambda^3 r_v /2 M_P$, meaning that in this region the scalar force is suppressed compared to the gravitational force sourced by the same cylindrical object by $$\begin{aligned}
\frac{F_\phi}{F_G} = 4 \beta^2 \frac{r}{r_v} \,.\end{aligned}$$
The behaviour of the screening around a cylindrical source is illustrated in Fig. \[fig:plot\].
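The closed-form gradient can be checked directly against the once-integrated equation of motion. The sketch below is our own illustrative addition, with parameter values chosen arbitrarily (and $\rho_0$ taken large so that $r_v \gg r_0$ and a screened regime exists):

```python
import numpy as np

# Illustrative parameters (our own choices, arbitrary units)
beta, rho0, r0, M_P, Lam = 0.5, 2.0e4, 1.0, 1.0, 1.0
rv = np.sqrt(2.0 * beta * rho0 * r0**2 / (M_P * Lam**3))

def dphi(r):
    """phi'(r) around the cylinder, inner and outer branches as in the text."""
    arg = np.where(r < r0, rv**2 / r0**2, rv**2 / r**2)
    return 0.5 * Lam**3 * r * (np.sqrt(1.0 + arg) - 1.0)

def residual(r):
    """Once-integrated equation of motion in the exterior (r >= r0):
    r phi' + phi'^2/Lam^3 - beta rho0 r0^2/(2 M_P); should vanish."""
    return r * dphi(r) + dphi(r)**2 / Lam**3 - beta * rho0 * r0**2 / (2.0 * M_P)

print([float(residual(r)) for r in (1.0, 2.0, 50.0, 500.0)])  # all ~ 0
# Deep inside r_v (r0 < r << r_v) the gradient saturates at Lam^3 rv / 2:
print(float(dphi(2.0)) / (0.5 * Lam**3 * rv))  # close to 1
```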
### Spherical Symmetry
Finally we turn to spherical symmetry, where we use the metric $$\begin{aligned}
ds^2 = -dt^2 + dr^2 + r^2 d\theta^2 + r^2 \sin^2\theta d\phi^2\end{aligned}$$ and take $\phi = \phi(r)$ and $\rho = \rho(r)$. Spherically symmetric solutions for the Galileon were first studied in [@Nicolis2008; @Burrage:2010rs]. Galileon terms up to quartic order contribute to the equation of motion $$\begin{aligned}
\frac{\beta}{M_P} \rho(r) = \phi'' + \frac{2 \phi'}{r} + \frac{2 \phi'^2}{r^2 \Lambda^3} + \frac{4 \phi' \phi''}{r \Lambda^3} + \frac{6 \lambda_4 \phi'^2 \phi''}{r^2 \Lambda^6}\end{aligned}$$ where a prime now indicates differentiation with respect to the radial coordinate of the spherically symmetric metric.
We begin by rearranging the equation of motion into the following form. $$\begin{aligned}
\frac{\beta}{M_P} r^2 \rho(r) = (r^2 \phi')' + \frac{2 (r \phi'^2)'}{\Lambda^3} + \frac{2 \lambda_4 (\phi'^3)'}{\Lambda^6}\end{aligned}$$ Let us take $\rho = \rho_0$ for $r < r_0$, and again choose $\phi(0) = 0$ as the zero of our potential. Spherical symmetry yields $\phi'(0) = 0$. Generally speaking, this equation is intractable, and full analytic solutions are only known for particular values of $\lambda_4$. However in all cases it is possible to determine the asymptotic form of the solutions.
When the cubic and quartic terms are turned off ($\Lambda \rightarrow \infty$), the field derivatives are simply given by $$\begin{aligned}
\phi' =
\left\{
\begin{array}{lc}
\displaystyle \frac{\beta \rho_0}{3 M_P} r \qquad & r < r_0
\\
\displaystyle \frac{\beta \rho_0 r_0}{3 M_P} \frac{r_0^2}{r^2} & r \ge r_0
\end{array}
\right.\end{aligned}$$ which can be integrated to give $$\begin{aligned}
\phi =
\left\{
\begin{array}{lc}
\displaystyle \frac{\beta M}{8 \pi M_P r_0} \frac{r^2}{r_0^2} \qquad & r < r_0
\\
\displaystyle \frac{\beta M}{4 \pi M_P r_0} \left(\frac{3}{2} - \frac{r_0}{r}\right) & r \ge r_0
\end{array}
\right.\end{aligned}$$ where we let $M = 4 \pi r_0^3 \rho_0 / 3$. As expected, this exhibits a $1/r^2$ force that is disallowed by solar system constraints unless $\beta \ll 1$. The magnitude of the gravitational force for $r>r_0$ is $F_G = Mm / 8 \pi M_P^2 r^2$, again giving the ratio $$\begin{aligned}
\frac{F_\phi}{F_G} = 2 \beta^2 \,.\end{aligned}$$
When the cubic term is present but the quartic term vanishes ($\lambda_4 \rightarrow 0$), $\phi'$ becomes $$\begin{aligned}
\phi' =
\left\{
\begin{array}{lc}
\displaystyle \frac{\Lambda^3}{4} r \left(\sqrt{1 + \frac{r_v^3}{r_0^3}} - 1\right) \qquad & r < r_0
\\
\displaystyle \frac{\Lambda^3}{4} r \left(\sqrt{1 + \frac{r_v^3}{r^3}} - 1\right) & r \ge r_0
\end{array}
\right.\end{aligned}$$ where we have identified the Vainshtein radius as $$\begin{aligned}
\label{eq:cubicrv}
r_v = \left(\frac{8 \beta \rho_0 r_0^3}{3 M_P \Lambda^3}\right)^{1/3} = \left(\frac{2 \beta M}{\pi M_P \Lambda^3}\right)^{1/3} \,.\end{aligned}$$ Deep inside the Vainshtein radius, the scalar force goes as $\sim 1/\sqrt{r}$, with the ratio of the Galileon force to the corresponding gravitational force being $$\begin{aligned}
\frac{F_\phi}{F_G} = 4 \beta^2 \left( \frac{r}{r_v}\right)^{3/2} \,.\end{aligned}$$ The expression for $\phi'$ can be integrated to obtain $$\begin{aligned}
\phi = \frac{\Lambda^3}{8} r^2 \left(\sqrt{1 + \frac{r_v^3}{r_0^3}} - 1\right)\end{aligned}$$ for $r < r_0$, and $$\begin{aligned}
\phi &= \frac{\Lambda^3}{8} \bigg(
r^2 \left[\sqrt{1 + \frac{r_v^3}{r^3}} - 1\right]
\nonumber\\
& \qquad \qquad + 3 \sqrt{r_v^3 r_0} \bigg[\sqrt{\frac{r}{r_0}} \;\; \tensor[_2]{F}{_1}\left(\frac{1}{6}, \frac{1}{2}; \frac{7}{6}; - \frac{r^3}{r_v^3}\right)
\nonumber\\
& \qquad \qquad \qquad \qquad - \tensor[_2]{F}{_1}\left(\frac{1}{6}, \frac{1}{2}; \frac{7}{6}; - \frac{r_0^3}{r_v^3}\right) \bigg]
\bigg)\end{aligned}$$ for $r \ge r_0$, where $\tensor[_2]{F}{_1}(a,b;c;d)$ is the hypergeometric function.
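Both the gradient and the hypergeometric form of the potential can be verified numerically. The following sketch (our own illustrative addition, with arbitrarily chosen parameter values in arbitrary units) confirms that the exterior $\phi'$ satisfies the once-integrated equation of motion, and that a central difference of $\phi$ reproduces $\phi'$:

```python
import numpy as np
from scipy.special import hyp2f1

# Illustrative parameters (our own choices, arbitrary units)
beta, M, M_P, Lam, r0 = 0.5, 40.0, 1.0, 1.0, 1.0
rv = (2.0 * beta * M / (np.pi * M_P * Lam**3))**(1.0 / 3.0)

def dphi(r):
    """Exterior phi'(r) for the cubic Galileon, r >= r0."""
    return 0.25 * Lam**3 * r * (np.sqrt(1.0 + rv**3 / r**3) - 1.0)

def phi(r):
    """Exterior potential from the hypergeometric expression, r >= r0."""
    F = lambda s: hyp2f1(1.0 / 6.0, 0.5, 7.0 / 6.0, -s**3 / rv**3)
    return (Lam**3 / 8.0) * (r**2 * (np.sqrt(1.0 + rv**3 / r**3) - 1.0)
            + 3.0 * np.sqrt(rv**3 * r0) * (np.sqrt(r / r0) * F(r) - F(r0)))

# Once-integrated equation: r^2 phi' + 2 r phi'^2/Lam^3 = beta M/(4 pi M_P)
r = 2.0
print(r**2 * dphi(r) + 2 * r * dphi(r)**2 / Lam**3, beta * M / (4 * np.pi * M_P))

# Central difference of phi reproduces phi'
h = 1e-6
print((phi(r + h) - phi(r - h)) / (2.0 * h), dphi(r))  # the two agree
```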
The presence of the quartic term requires solving the following cubic polynomial equation. $$\begin{aligned}
\label{eq:generalspherical}
r^2 \phi' + \frac{2 r \phi'^2}{\Lambda^3} + \frac{2 \lambda_4 \phi'^3}{\Lambda^6} =
\left\{
\begin{array}{lc}
\displaystyle \frac{\beta M}{4 \pi M_P} \frac{r^3}{r_0^3} \qquad & r < r_0
\\
\displaystyle \frac{\beta M}{4 \pi M_P} & r \ge r_0
\end{array}
\right.\end{aligned}$$ In general, these solutions are unpleasant. However, the distance scale controlling when the quartic Galileon term becomes important in the equation of motion can still be identified.
Due to stability constraints [@Nicolis2008], the coefficients appearing in the action are limited to $\Lambda>0$ and $0 \le \lambda_4 \le \frac{2}{3}$. For $\lambda_4>0$ but within these limitations, there will be a region about the origin in which the quartic term dominates, followed by a region in which the cubic term dominates, and subsequently a region in which the quadratic term dominates [@Burrage:2010rs]. The crossover at which the cubic and quadratic terms are equally important is just the cubic Vainshtein radius .
At the crossover radius $r_{v4}$ when the cubic and quartic terms are equally important, we have $$\begin{aligned}
\phi' = \frac{r_{v4} \Lambda^3}{\lambda_4} \,.\end{aligned}$$ Substituting this back in the equation of motion to solve for $r_{v4}$, we obtain $$\begin{aligned}
r_{v4} = \left(\frac{\lambda_4^2}{32}\right)^{1/3} \left(\frac{2 \beta M}{\pi M_P \Lambda^3}\right)^{1/3}\end{aligned}$$ where we neglect the subdominant quadratic term. The quantity to the right here is just the cubic Vainshtein radius . Deep inside this Vainshtein radius $r_0 < r \ll r_{v4}$, $\phi'$ saturates at the constant value $$\begin{aligned}
\phi' = \frac{2^{1/3} \Lambda^3}{\lambda_4} r_{v4}\end{aligned}$$ and the scalar force is suppressed compared to the corresponding gravitational force by $$\begin{aligned}
\frac{F_\phi}{F_G} = \beta^2 \lambda_4 2^{-2/3} \frac{r^2}{r_{v4}^2} \,.\end{aligned}$$

A particularly nice analytic solution in the quartic case can be found for $\lambda_4 = 2/3$. $$\begin{aligned}
\phi' =
\left\{
\begin{array}{lc}
\displaystyle
\frac{\Lambda^3}{2} r \left[\left(1 + \frac{r_v^3}{r_0^3} \right)^{1/3} - 1 \right]
\qquad & r < r_0
\\
\displaystyle
\frac{\Lambda^3}{2} r \left[\left(1 + \frac{r_v^3}{r^3} \right)^{1/3} - 1 \right]
& r \ge r_0
\end{array}
\right.\end{aligned}$$ The Vainshtein radius here is $$\begin{aligned}
r_v = \left(\frac{3}{4}\right)^{1/3} \left(\frac{2 \beta M}{\pi M_P \Lambda^3}\right)^{1/3}\end{aligned}$$ which is approximately 91% of the Vainshtein radius for the cubic term alone. Note that in this limiting case, there is only one screened regime rather than the two described above; this arises because the quadratic, cubic and quartic terms are all equally important at this radius. In this case, deep inside the Vainshtein radius, the force saturates at $$\begin{aligned}
F_\phi = \frac{m \beta \Lambda^3 r_v}{2 M_P}\end{aligned}$$ which yields a scalar to gravitational force ratio of $$\begin{aligned}
\frac{F_\phi}{F_G} = 6 \beta^2 \frac{r^2}{r_v^2} \,.\end{aligned}$$ This solution is included as the quartic case in Fig. \[fig:plot\].
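The $\lambda_4 = 2/3$ solution can be substituted back into the cubic polynomial equation as a consistency check. The sketch below is our own illustrative addition, with parameter values chosen arbitrarily in arbitrary units:

```python
import numpy as np

# Illustrative parameters (our own choices, arbitrary units)
beta, M, M_P, Lam, r0 = 0.5, 40.0, 1.0, 1.0, 1.0
lam4 = 2.0 / 3.0
rv = 0.75**(1.0 / 3.0) * (2.0 * beta * M / (np.pi * M_P * Lam**3))**(1.0 / 3.0)

def dphi(r):
    """Exterior phi'(r) in the lambda_4 = 2/3 quartic case, r >= r0."""
    return 0.5 * Lam**3 * r * ((1.0 + rv**3 / r**3)**(1.0 / 3.0) - 1.0)

# The once-integrated equation of motion,
#   r^2 phi' + 2 r phi'^2/Lam^3 + 2 lam4 phi'^3/Lam^6 = beta M/(4 pi M_P),
# holds exactly for all r >= r0.
rhs = beta * M / (4.0 * np.pi * M_P)
for r in (1.0, 2.0, 10.0, 100.0):
    lhs = (r**2 * dphi(r) + 2.0 * r * dphi(r)**2 / Lam**3
           + 2.0 * lam4 * dphi(r)**3 / Lam**6)
    print(lhs, rhs)  # the two agree at every radius
```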
The D-BIon {#sec:dbionic}
==========
We now look at the behavior of a model that exhibits $\partial \phi$ screening. The D-BIonic model [@Burrage:2014uwa] has the following action. $$\begin{aligned}
S = \int d^4x \sqrt{-g} \left[ \Lambda^4 \sqrt{1 - \frac{(\nabla \phi)^2}{\Lambda^4}} + \frac{\beta \phi}{M_P} \tensor{T}{^\mu_\mu} \right]
\label{eq:DBIaction}\end{aligned}$$ Compared to the standard DBI form, both the overall sign of the first term in this action and the sign of $(\nabla \phi)^2$ have been flipped. This is necessary to achieve screening, and it is straightforward to check that when the square root is expanded the scalar kinetic term has the correct sign for the theory to be free of ghosts. The DBI form means that the first term in the action is invariant under the following transformation of the field and the coordinates. $$\begin{aligned}
\tilde{\phi}(\tilde{x}) &= \gamma(\phi(x)+\Lambda^2 v_{\mu}x^{\mu})\;,
\\
\tilde{x}^{\mu} &= x^{\mu} +\frac{\gamma-1}{v^2}v^{\mu}v_{\nu}x^{\nu}+\gamma v^{\mu}\frac{\phi(x)}{\Lambda^2}\end{aligned}$$
The leading order term in Eq. expanded around $(\nabla \phi)^2 = 0$ is equivalent to the quadratic Galileon term by itself, so around any matter distribution we expect the same asymptotic behavior for the field profile as in the Galileon situation; in particular, we expect an attractive scalar force. The coupling term is identical to the Galileon coupling, and so the relationship between the scalar force and the gradient of the scalar is also identical.
The equation of motion resulting from the action is simply $$\begin{aligned}
\nabla_\mu \left(\frac{\nabla^\mu \phi}{\sqrt{1 - (\nabla \phi)^2 / \Lambda^4}} \right) = - \frac{\beta}{M_P} \tensor{T}{^\mu_\mu} \,.\end{aligned}$$ We now investigate the static symmetric solutions as we did for the Galileons.
Static Solutions
----------------
As previously, we investigate situations with stress-energy tensor $\tensor{T}{^\mu_\mu} = -\rho$.
### Planar Symmetry
Assuming that $\rho = \rho(z)$ and $\phi = \phi(z)$, the equation of motion becomes $$\begin{aligned}
\partial_z \left( \frac{\partial_z \phi}{\sqrt{1 - (\partial_z \phi)^2 / \Lambda^4}} \right) = \frac{\beta \rho}{M_P} \,.\end{aligned}$$ Again, we take $\rho(z) = \rho_0$ between $\pm z_0$ and zero outside, and choose the zero of the potential to be $\phi(0)=0$. We can then integrate to obtain $$\begin{aligned}
\partial_z \phi &= \left\{
\begin{array}{lc}
\displaystyle
\pm \frac{\Lambda^2}{\sqrt{1 + z_\ast^2/z^2}}
\qquad & |z| < z_0
\\
\displaystyle
\pm \frac{\Lambda^2}{\sqrt{1 + z_\ast^2/z_0^2}}
\qquad & |z| \ge z_0
\end{array}
\right.\end{aligned}$$ where $z_\ast = M_P \Lambda^2 / \beta \rho_0$ is a characteristic length scale. Here, we take the positive (negative) root for $z > 0$ ($z < 0$) to obtain the appropriate asymptotics, and to ensure the continuity of $\partial_z \phi$. These expressions can be integrated to obtain the following. $$\begin{aligned}
\phi &= \left\{
\begin{array}{lc}
\displaystyle
\Lambda^2 z_\ast \left(\sqrt{1 + \frac{z^2}{z_\ast^2}} - 1\right)
\qquad & |z| < z_0
\\
\displaystyle
\Lambda^2 \left(
\frac{z z_0 + z_\ast^2}{\sqrt{z_0^2 + z_\ast^2}}
- z_\ast
\right)
\qquad & |z| \ge z_0
\end{array}
\right.\end{aligned}$$ As is the case for the Galileon (and also purely canonical scalar fields), the scalar force is independent of $z$. However, unlike the Galileon, the strength of the force is not purely fixed by the coupling strength $\beta$. If $z_{\ast} \gg z_0$ then the D-BIon non-linearities are always subdominant, but if the density and size of the planar source are such that $z_{\ast} \ll z_0$, then the force is smaller than it would be in a theory with no non-linearities. The scale $z_{\ast}$ can still be thought of as the Vainshtein distance scale for this system. However, because the force around a planar source is constant with distance, we find that sources are either always screened if the width of the source is larger than the Vainshtein scale $z_{\ast}$, or always unscreened if the width is smaller than the Vainshtein scale.
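As an illustrative numerical check of the planar D-BIon solution (our own addition, with parameter values chosen arbitrarily in arbitrary units, and $z_\ast < z_0$ so the source is screened):

```python
import numpy as np

# Illustrative parameters (our own choices, arbitrary units)
beta, rho0, z0, M_P, Lam = 0.5, 4.0, 1.0, 1.0, 1.0
zstar = M_P * Lam**2 / (beta * rho0)

def dphi(z):
    """Planar D-BIon gradient from the text (sign chosen to match sign(z))."""
    zz = np.where(np.abs(z) < z0, np.abs(z), z0)
    return np.sign(z) * Lam**2 / np.sqrt(1.0 + zstar**2 / zz**2)

# Exterior check of the once-integrated equation of motion:
#   dphi / sqrt(1 - dphi^2/Lam^4) = beta rho0 z0 / M_P  for z >= z0
u = float(dphi(5.0))
print(u / np.sqrt(1.0 - u**2 / Lam**4), beta * rho0 * z0 / M_P)

# With z0 > zstar the force sits below the unscreened value Lam^2 z0/zstar
print(u, Lam**2 * z0 / zstar)
```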
### Cylindrical Symmetry
We take $\phi = \phi(r)$ as well as $\rho = \rho(r)$. The equation of motion becomes $$\begin{aligned}
\partial_r \left( \frac{r \phi'}{\sqrt{1 - \phi'^2 / \Lambda^4}} \right) = \frac{\beta r \rho}{M_P}\end{aligned}$$ where we use primes to denote derivatives with respect to the cylindrical radial coordinate $r$.
Let us again consider a cylinder with constant mass density $\rho = \rho_0$ for $r < r_0$. The equation of motion can be integrated over $r$ and solved for $\phi'$ to obtain the following. $$\begin{aligned}
\phi' &= \left\{
\begin{array}{lc}
\displaystyle
\pm \frac{\Lambda^2}{\sqrt{1 + r_0^4 / r^2 r_v^2}}
\qquad & r < r_0
\\
\displaystyle
\pm \frac{\Lambda^2}{\sqrt{1 + r^2 / r_v^2}}
\qquad & r \ge r_0
\end{array}
\right.\end{aligned}$$ Here, the Vainshtein radius is $$\begin{aligned}
r_v = \frac{\lambda_0 \beta}{2 \pi M_P \Lambda^2}\end{aligned}$$ where $\lambda_0 = \pi r_0^2 \rho_0$ is the linear mass density. Again, we choose the positive roots by matching to the appropriate asymptotic form, and requiring continuity at $r_0$. We can integrate to obtain $\phi(r)$. $$\begin{aligned}
\phi &= \left\{
\begin{array}{l}
\displaystyle
\frac{\Lambda^2 r_0^2}{r_v}\left(\sqrt{1 + \frac{r^2 r_v^2}{r_0^4}} - 1\right)
\qquad \qquad r < r_0
\\
\displaystyle
\frac{\Lambda^2 r_0^2}{r_v} \left(\sqrt{1 + \frac{r_v^2}{r_0^2}} - 1
+ \frac{r_v^2}{r_0^2} \ln \left[\frac{r + \sqrt{r^2 + r_v^2}}{r_0 + \sqrt{r_0^2 + r_v^2}}\right]
\right)
\end{array}
\right.\end{aligned}$$ Here, the second expression is for $r > r_0$. This expression, particularly outside the object, bears a striking resemblance to the corresponding Galileon expression.
Deep inside the Vainshtein radius ($r_0 < r \ll r_v$), the scalar force saturates at $F_\phi = - m \beta \Lambda^2 / M_P$, giving a scalar to gravitational force ratio of $$\begin{aligned}
\frac{F_\phi}{F_G} = 2 \beta^2 \frac{r}{r_v}\end{aligned}$$ which is the same as the Galileon case up to a factor of two.
### Spherical Symmetry
We take $\phi = \phi(r)$ as well as $\rho = \rho(r)$, where $r$ is now the spherical radius. The equation of motion becomes $$\begin{aligned}
\partial_r \left( \frac{r^2 \phi'}{\sqrt{1 - \phi'^2 / \Lambda^4}} \right) = \frac{\beta r^2 \rho}{M_P}\end{aligned}$$ where we use primes to denote derivatives with respect to $r$.
We again consider a sphere with constant mass density $\rho = \rho_0$ for $r < r_0$. The equation of motion can be integrated over $r$ and solved for $\phi'$ to obtain the following. $$\begin{aligned}
\phi' &= \left\{
\begin{array}{lc}
\displaystyle
\pm \frac{\Lambda^2}{\sqrt{1 + r_0^6 / r^2 r_v^4}}
\qquad & r < r_0
\\
\displaystyle
\pm \frac{\Lambda^2}{\sqrt{1 + r^4 / r_v^4}}
\qquad & r \ge r_0
\end{array}
\right.\end{aligned}$$ Here, the Vainshtein radius is $$\begin{aligned}
r_v = \sqrt{\frac{\beta M}{4 \pi M_P \Lambda^2}}\end{aligned}$$ where $M = 4 \pi r_0^3 \rho_0 / 3$. Again, we choose the positive roots by matching to the appropriate asymptotic form, and applying continuity at $r_0$. We can integrate to obtain $\phi(r)$. For $r < r_0$, we have $$\begin{aligned}
\phi &= \frac{\Lambda^2 r_0^3}{r_v^2}\left(\sqrt{1 + \frac{r^2 r_v^4}{r_0^6}} - 1\right)\end{aligned}$$ while for $r > r_0$, the integral again yields hypergeometric functions. $$\begin{aligned}
\phi = {}&\frac{\Lambda^2 r_0^3}{r_v^2}\left(\sqrt{1 + \frac{r_v^4}{r_0^4}} - 1\right)
\nonumber\\
& - \frac{\Lambda^2 r_v^2}{r_0} \bigg[ \frac{r_0}{r} \; \tensor[_2]{F}{_1}\left(\frac{1}{4}, \frac{1}{2}; \frac{5}{4}; - \frac{r_v^4}{r^4}\right)
\nonumber\\
& \qquad \qquad
- \tensor[_2]{F}{_1}\left(\frac{1}{4}, \frac{1}{2}; \frac{5}{4}; - \frac{r_v^4}{r_0^4}\right)
\bigg]\end{aligned}$$ Again, this bears a striking resemblance to the solution for the cubic Galileon in spherical symmetry.
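The hypergeometric exterior solution can be verified numerically against the closed form for $\phi'$. A sketch using `scipy.special.hyp2f1`, with arbitrary illustrative parameter values in units where $\Lambda = 1$:

```python
import numpy as np
from scipy.special import hyp2f1

# Illustrative parameters (units where Lambda = 1); r0 and rv are arbitrary.
Lam, r0, rv = 1.0, 1.0, 2.0

def phi_out(r):
    """Exterior solution phi(r), r > r0, from the hypergeometric expression."""
    const = Lam**2 * r0**3 / rv**2 * (np.sqrt(1 + rv**4 / r0**4) - 1)
    F = lambda x: hyp2f1(0.25, 0.5, 1.25, -rv**4 / x**4)
    return const - Lam**2 * rv**2 / r0 * (r0 / r * F(r) - F(r0))

def phi_prime(r):
    """Closed-form phi'(r) for r > r0."""
    return Lam**2 / np.sqrt(1 + r**4 / rv**4)

# A central finite difference of phi_out should reproduce phi_prime.
r, h = 1.5, 1e-5
fd = (phi_out(r + h) - phi_out(r - h)) / (2 * h)
print(fd, phi_prime(r))
```

The finite difference agrees with $\phi'$ to numerical precision, and $\phi_{\rm out}(r_0)$ matches the interior solution at $r_0$, confirming continuity.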
Deep in the Vainshtein radius, the force again saturates at the constant value $F_\phi = - m \beta \Lambda^2 / M_P$. This yields the scalar to gravitational force ratio of $$\begin{aligned}
\frac{F_\phi}{F_G} = 2 \beta^2 \frac{r^2}{r_v^2} \,.\end{aligned}$$ This is very similar to the form of the Galileon force.
The screening curves for this model are plotted alongside the Galileon results in Fig. \[fig:plot\].
Discussion {#sec:discussion}
==========
In this work we have derived the flat space solutions for theories with Vainshtein screening mechanisms around planar, cylindrical and spherical sources. We have considered Galileon theories as a typical example of $(\partial\partial \phi)$ screening and D-BIons as an example of $(\partial \phi)$ screening. Whilst the sources considered in this work represent a tiny subset of all the possible shapes that one could imagine for matter sources, they are sufficient to describe what is happening on large cosmological scales, where almost all matter lives either in walls, filaments or halos.
For the Galileon, there is no screening at all around a planar source, making such structures the best place to look for Galileon fields. Both cylindrical and spherical sources possess a Vainshtein radius within which the scalar force is screened. In the cylindrical case, the ratio of the Galileon force to the gravitational force scales as $r/r_v$ well within the Vainshtein radius, whereas the screening for spherical sources is more efficient, with ratios of either $(r/r_v)^{3/2}$ or $(r/r_v)^2$ depending on whether the cubic or quartic Galileon terms are dominant. Thus, Vainshtein screening is less efficient at hiding the scalar force for cylindrical sources than it is for spherical sources.
For a static system the quintic Galileon term never contributes to the equations of motion, and so observations of static systems can never constrain the Galileon parameter $\lambda_5$. We have shown that the quartic Galileon never contributes to the cylindrically symmetric Galileon equation of motion, and it has been previously shown that none of the Galileon operators contribute to the equation of motion for the field around a planar source. Therefore, if it were possible to measure the Galileon field profile around cosmological walls, filaments and halos, it would be possible to break the degeneracies between the Galileon parameters and determine $\beta$, $\Lambda$ and $\lambda_4$. Information about $\lambda_5$ can only be ascertained from four-dimensional dynamics.
In contrast, for a D-BIonic scalar there is always a Vainshtein radius (or more precisely, a Vainshtein distance scale) governing screening in all the geometries considered. As this does not rely on the symmetries of the D-BIonic Lagrangian, we expect this to be general to all theories with $(\partial \phi)$ screening. Around an infinite planar source the scalar force is constant, independent of distance. We find that planar objects are always screened or unscreened, depending on whether the width of the source is larger or smaller than the corresponding Vainshtein distance scale. This is in contrast to cylindrical or spherical sources, where only observers within the Vainshtein radius of the source see a screened force. Deep inside the Vainshtein radius, we found that the ratio of the scalar to gravitational forces had the same dependence on $r/r_v$ as the cubic Galileon in cylindrical symmetry and the quartic Galileon in spherical symmetry.
It is interesting to note that the scaling of the Vainshtein radius is quite different in the cylindrical and spherical cases, and also differs between the Galileon and D-BIonic theories. These expressions are displayed together in Table \[table:vainshtein\]. Compared side-by-side like this, we see that the Galileon scales always contain $M_P \Lambda^3 / \beta$, while the D-BIon scales always contain $M_P \Lambda^2 / \beta$. Up to numerical factors, the Vainshtein radius is simply the appropriate mass or linear mass density combined with these quantities.
Source Galileon D-BIon
---------- -------------------------------------------------------------------- -------------------------------------------------------
Plane      (no screening)                                                       $\displaystyle \frac{M_P \Lambda^2}{\beta \rho_0}$
Cylinder $\displaystyle \sqrt{\frac{\beta \lambda_0}{M_P \Lambda^3}}$ $\displaystyle \frac{\beta \lambda_0}{M_P \Lambda^2}$
Sphere $\displaystyle \left( \frac{\beta M }{M_P \Lambda^3}\right)^{1/3}$ $\displaystyle \sqrt{\frac{\beta M}{ M_P \Lambda^2}}$
: The Vainshtein distance scales in the different theories and symmetries considered in this article. Numerical coefficients have been suppressed in order to demonstrate how the radii scale with various quantities.[]{data-label="table:vainshtein"}
Source Sphere ($M_\odot$) (pc) Cylinder ($\lambda_0$) (Mpc)
---------- ------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------
Galileon $\displaystyle 500 \, \beta^{1/3} \left(\frac{10^{-13} \mbox{ eV}}{\Lambda}\right)$ $\displaystyle \sqrt{\beta} \left(\frac{10^{-13}\mbox{ eV}}{\Lambda}\right)^{3/2}$
D-BIon $\displaystyle \sqrt{\beta} \left( \frac{10^{-5} \mbox{ eV}}{\Lambda} \right)$ $\displaystyle \beta \left(\frac{10^{-5}\mbox{ eV}}{\Lambda}\right)^2$
: Approximate Vainshtein radii for a solar mass sphere and a filament with $\lambda_0 \sim 10^8 M_\odot / \mathrm{Mpc}$ in the Galileon and D-BIon models. For both models we expect $\beta \sim 1$ if the scalar arises from a modification of the gravitational sector.[]{data-label="table:vainshtein2"}
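The entries of Table \[table:vainshtein2\] can be reproduced to order of magnitude from the scalings in Table \[table:vainshtein\]. A rough sketch in natural units ($\hbar = c = 1$); the conversion factors are standard values, and numerical prefactors in the radii are suppressed, so only orders of magnitude are meaningful:

```python
# Order-of-magnitude check of the tabulated Vainshtein radii, hbar = c = 1.
hbarc_eV_m = 1.973e-7      # 1 eV^-1 = 1.973e-7 m
pc_m = 3.086e16            # 1 pc in metres
M_sun_eV = 1.12e66         # solar mass in eV
M_P_eV = 2.4e27            # reduced Planck mass in eV
beta = 1.0

# Galileon sphere: r_v ~ (beta M / (M_P Lambda^3))^(1/3), Lambda = 1e-13 eV.
rv_gal = (beta * M_sun_eV / (M_P_eV * (1e-13)**3))**(1 / 3)
rv_gal_pc = rv_gal * hbarc_eV_m / pc_m

# D-BIon sphere: r_v ~ (beta M / (M_P Lambda^2))^(1/2), Lambda = 1e-5 eV.
rv_dbi = (beta * M_sun_eV / (M_P_eV * (1e-5)**2))**0.5
rv_dbi_pc = rv_dbi * hbarc_eV_m / pc_m

print(rv_gal_pc, rv_dbi_pc)  # ~500 pc and ~10 pc respectively
```

The Galileon estimate lands close to the tabulated $500$ pc; the D-BIon estimate differs from the tabulated value by the suppressed $\mathcal{O}(1)$ factors, as expected for a scaling estimate.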
From Fig. \[fig:plot\], we see that the D-BIon is somewhat better at screening than the Galileon. However, this statement should be treated cautiously; the plot is shown in units of $r/r_v$, and comparing the Vainshtein radii of different models is a dubious proposition at best. The other thing to note from this figure is that spherical screening is stronger than cylindrical screening within the Vainshtein radius in all cases, which suggests that cylindrical systems may be useful environments in which to search for extra forces.
Cosmological implications
-------------------------
The large scale structure of the universe, sometimes referred to as the cosmic web, is built out of walls, filaments and halos. These are predominantly composed of dark matter, traced by visible galaxies. While the Vainshtein radii of spherical structures like the sun and the galaxy are typically expected to be extremely large, the cylindrical Vainshtein radius may, depending on the parameters, be somewhat reduced compared to spherical expectations. Simulations suggest the existence of filaments of radii $\sim 10 \, \mbox{kpc}$ with nearly constant linear mass densities $\lambda \sim 10^8 \, M_\odot / \mbox{kpc}$ [@Harford:2008bw]. Such filaments can be particularly long, with observations suggesting lengths of up to $\sim 100 \, \mbox{Mpc}$ [@Bharadwaj:2003xm; @Pandey:2010yj].
We estimate the Vainshtein radii for Galileons and D-BIons around solar mass objects and filaments with the above linear mass density mass in Table \[table:vainshtein2\]. The reference scale for Galileons $\Lambda = (H_0^2M_P)^{1/3} \sim 10^{-13} \mbox{ eV}$ is chosen because this scale allows the Galileon to be cosmologically relevant at the current epoch [@Nicolis2008], while for D-BIons the scale is taken to be the value which allows the D-BIon to evade lunar laser ranging searches for fifth forces [@Burrage:2014uwa].
We see that for appropriate values of $\beta$ and $\Lambda$, the screening radius for a filament may well be within its radius, although we would typically expect filaments to be screened. The filament screening radii are approximately the same in both models (for the given parameters), at around 100 times the filament radius. This is a significantly smaller ratio than that of the sun's screening radius to its physical radius, which for the D-BIon is around $5 \times 10^7$.
The dependence of Vainshtein screening on the morphology of structures in N-body simulations of the cosmic web has been studied by Falck *et al.* in [@Falck:2014jwa]. It was found that, relative to the gravitational force they experienced, dark matter particles in filaments and voids felt an unscreened Galileon force, whilst dark matter particles in halos felt a screened Galileon force. This supports the analytic results derived here and demonstrates that it is possible to separate cosmological observables by the morphology of the associated cosmological structure.
We have seen that Vainshtein screening is less efficient around objects that are not spherically symmetric. Therefore, the vicinity of walls and filaments may be ideal environments in which to look for the existence of Vainshtein screened fifth forces. If it is possible to observe the motion of particles towards cosmological structures with differing shapes, we may be able to determine whether a fifth force must be screened, and to what degree, around walls, filaments and halos separately. This will allow us to differentiate between $(\partial \phi)$ and $(\partial\partial \phi)$ screening, as the latter is unable to screen walls. It will also allow us to break the degeneracies between the parameters within one class of screening mechanism, since in Galileon models only the cubic coupling is important around cylindrical sources, while a combination of the cubic and quartic couplings is important around spherical sources.
We thank the Lorentz Center at Leiden University for their gracious hospitality while this work was performed. C.B. is supported by a Royal Society University Research Fellowship. ACD is supported in part by STFC.
[^1]: Asymptotic solutions for the chameleon field profile around an ellipsoidal source are also known [@2012PhRvL.108v1101J].
[^2]: The logarithm can also be written as a pair of arcsinh functions as $$\begin{aligned}
\ln \left(\frac{r + \sqrt{r^2 + r_v^2}}{r_0 + \sqrt{r_0^2 + r_v^2}}\right) = \mathrm{arcsinh} \left(\frac{r}{r_v}\right) - \mathrm{arcsinh} \left(\frac{r_0}{r_v}\right) \,.\end{aligned}$$
---
abstract: 'A necessary and sufficient condition for the differentiability of the distance function generated by an almost proximinal closed set is given for normed linear spaces with a locally uniformly convex and differentiable norm. We prove that the proximinal condition of Giles (Proc. Amer. Math. Soc., 104, No. 2, 1988, 458-464) holds for almost suns. In such spaces, if the proximinal condition is satisfied and the distance function is uniformly differentiable on a dense set, then the distance function is differentiable everywhere off the set generating it. In some spaces the proximinal condition ensures the convexity of an almost sun under a differentiability condition on the distance function. A necessary and sufficient condition is obtained for the convexity of Chebyshev sets in Banach spaces with rotund dual.'
author:
- |
Triloki Nath [^1]\
Department of Mathematics and Statistics\
School of Mathematical and Physical Sciences\
Dr. Harisingh Gour University, Sagar\
Madhya Pradesh-470003\
INDIA.
title: Differentiability of Distance Function and The Proximinal Condition implying Convexity
---
**Keywords.** Distance function, Proximinal set, Differentiability, Generalized subdifferential, Almost sun, Chebyshev set.
Introduction {#sec:intro}
============
Let $X$ be a real normed linear space. For a nonempty closed set $K$ in $X$, we define its distance function $d_K$ on $X$ by $$d_K (x)= \inf \left\{\left\|x-k\right\| : k\in K\right\}.$$ This function is not necessarily everywhere differentiable, but it is (globally) Lipschitz, with Lipschitz constant equal to 1. The metric projection of $x$ into $K$ is $$P_K (x)= \left\{k \in K : \left\|x-k\right\| = d_K(x)\right\}.$$ The set $K$ is called proximinal (Chebyshev) if for every $x \in X \diagdown K$, $P_K (x) $ is nonempty (a singleton). $K$ will be called almost proximinal if $P_K (x) $ is nonempty for a dense set of $ x \in X \diagdown K.$ A proximinal set $K$ in a normed linear space $X$ is a sun if for every $x \in X \diagdown K$ with a closest point $p(x)\in K$, the points $x + t\vec{x}$ have $p(x)$ as a closest point for all $t \geqslant 0,$ where $\vec{x}$ is a unit vector in the direction of $x-p(x).$ An almost proximinal set $K$ will be called an almost sun if for a dense set of $x \in X \diagdown K$ with a closest point $p(x)\in K,$ the points $x + t\vec{x}$ also have $p(x)$ as a closest point for all $t \geqslant 0.$
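For intuition, the definitions of $d_K$ and $P_K$ can be illustrated on a finite set in $\mathbb{R}^2$, where proximinality is automatic. A small sketch (names and the choice of $K$ are illustrative):

```python
import numpy as np

# K is a finite subset of R^2; finite sets are trivially proximinal.
K = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])

def d_K(x):
    """Distance function d_K(x) = inf_k ||x - k||."""
    return np.min(np.linalg.norm(K - x, axis=1))

def P_K(x):
    """Metric projection: the set of nearest points (may contain several)."""
    dists = np.linalg.norm(K - x, axis=1)
    return K[dists == dists.min()]

x, y = np.array([2.0, 0.0]), np.array([2.0, 0.5])
print(d_K(x), P_K(x))
# d_K is 1-Lipschitz: |d_K(x) - d_K(y)| <= ||x - y||
assert abs(d_K(x) - d_K(y)) <= np.linalg.norm(x - y)
```

Note that at a point equidistant from several elements of $K$ (e.g. the origin here), $P_K$ is genuinely set-valued, so $K$ is proximinal but not Chebyshev.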
Dutta [@Dutt1] has shown that if the norm on $X$ is locally uniformly convex (LUR) and (Fréchet) smooth, then the (Fréchet) smoothness of the distance function $d_{K}$ generated by an almost proximinal set $K$ is generic on $X \diagdown K.$ Further, if the norms on $X$ and $X^{\ast}$ are LUR, then he characterized the convexity of Chebyshev sets in terms of the Clarke generalized subdifferential of the distance function. His technique is based on the denseness of the set $E'(K)$, where $E'(K)$ denotes the set of points in $X \diagdown K$ for which every minimizing sequence in $K$ converges to a unique nearest point. A sufficient condition for $E'(K)$ to be dense is the local uniform convexity of the norm on $X$. We will use this result to improve (and in some sense extend) results of Giles [@Giles1].
In a normed linear space $X$, Giles [@Giles1] assumed a proximinal condition on a nonempty closed set $K$: for some $r > 0$ there exists a set of points $x_{\circ} \in X \diagdown K$ which have closest points $ p(x_{\circ}) \in K $ with $d_K(x_{\circ}) > r $, such that the set of points $x_{\circ}- r \vec{x}_{\circ}$ is dense in $ X \diagdown K $. It was shown that if the norm has sufficiently strong differentiability properties, then the distance function $ d_K $ generated by $K$ has similar differentiability properties, and it follows that, in some spaces, $K$ is convex.\
It is well known that in a smooth finite-dimensional normed linear space every Chebyshev set is convex, and the continuity of the metric projection on $X \diagdown K$ is used in the proof. So it is natural to consider the continuity of the metric projection while proving the convexity of Chebyshev sets in smooth infinite-dimensional spaces. The best known result is due to Vlasov [@Vlas1]: in a Banach space with rotund dual, Chebyshev sets with continuous metric projection are convex. A close look at Vlasov's proof shows that the continuity of the metric projection has been used only to establish a differentiability condition on the distance function generated by the set. In terms of a differentiability condition on the distance function, Vlasov's Theorem can be stated as follows.\
[ ( Borwein et al. [@Borwn], THEOREMS 14-18)]{}.\
\[prop\_open\] In a Banach space $X$ with rotund dual $X^{\ast},$ a nonempty closed set $K$ is convex if its distance function $ d_K $ satisfies $$\limsup_{\left\| y \right\| \rightarrow 0}\dfrac{d_K(x+y)-d_K(x)}{\left\| y \right\|} = 1 ~~{\rm for~ all}~~ x \in X \diagdown K.$$ In particular, this differentiability condition is satisfied if $ d_K $ is smooth and $\left\| d'_K(x) \right\| = 1 $ or if $d_K $ is Fréchet smooth for all $x \in X \diagdown K.$
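As a sanity check of this differentiability condition, for a convex set such as the closed unit ball in $\mathbb{R}^2$ one has $d_K(x)=\|x\|-1$ outside $K$, and the difference quotient in the radial direction already attains the value $1$. A small numerical sketch (the set and points are my illustrative choices):

```python
import numpy as np

# K = closed unit ball in R^2, so d_K(x) = ||x|| - 1 outside K.
def d_K(x):
    return max(np.linalg.norm(x) - 1.0, 0.0)

x = np.array([2.0, 1.0])              # a point outside K
u = x / np.linalg.norm(x)             # radial direction, ||u|| = 1
for s in [1e-2, 1e-4, 1e-6]:
    q = (d_K(x + s * u) - d_K(x)) / s
    print(s, q)                        # quotient equals 1 along this direction
```

Since $\|y\| = s$ for $y = s u$ and the quotient is $1$, the limsup over $\|y\| \to 0$ is exactly $1$ at every point outside the ball, as the proposition requires of convex sets.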
It is easily seen that in any normed linear space $X,$ if every point $x \in X \diagdown K$ is an interior point of an interval with end points $x_{\circ} \in X \diagdown K$ and closest point $p(x_{\circ}) \in K,$ then $ p(x) = p(x_{\circ})$ and the differentiability condition of Proposition \[prop\_open\] will be satisfied. For the convexity of $K$ in a normed linear space $X,$ it is a necessary condition that the distance function $d_K$ satisfies the differentiability condition for all $x \in X \diagdown K.$ We will use Vlasov's Theorem in the form of Proposition \[prop\_open\] to establish convexity results. Fitzpatrick [@Fitz1] observed a close connection between continuity of the metric projection and differentiability of the distance function, and showed that a differentiability condition on the distance function implies convexity of Chebyshev sets.
To make the paper self-contained, we reproduce some definitions and known results.
A function $h: X \rightarrow \mathbb R$ is said to be *Gâteaux differentiable* or *smooth* at $x \in X$ if there exists a continuous linear functional $h'(x)\in X^{\ast}$, called the Gâteaux derivative of $h$, such that for given $\epsilon > 0$ and $y \in X$ with $\left\|y\right\|=1$ there exists a $\delta(\epsilon, x,y) > 0$ such that
$$\left|\frac{h(x+ty)- h (x)}{t} - h'(x)(y)\right|< \epsilon \rm ~ {when} ~0 < \left|t\right| < \delta.
\label{c}$$
The function $h$ is said to be *Fréchet smooth* at $x$ if there exists a $\delta(\epsilon, x) > 0$ such that inequality (\[c\]) holds for all $y \in X$ with $\left\|y\right\|=1.$
The function $h$ is said to be *uniformly smooth* on a set $D$ if there exists a $\delta(\epsilon, y) > 0$ such that inequality (\[c\]) holds for all $x \in D, $ and is said to be *uniformly Fréchet smooth* on a set $D$ if there exists a $\delta(\epsilon) > 0$ such that inequality (\[c\]) holds for all $x \in D $ and for all $y \in X$ with $\left\|y\right\|=1.$
The space $X$ is said to be *smooth* (*Fréchet smooth*) at $x \neq 0$ if the norm is smooth (Fréchet smooth) at $x \neq 0$. We say that $X$ has *uniformly smooth* (*uniformly Fréchet smooth*) norm if the norm is uniformly smooth (uniformly Fréchet smooth) on the unit sphere $\left\{x \in X: \left\|x\right\| = 1\right\}.$
Let $h: X \rightarrow \mathbb R$ be a locally Lipschitz function. The *Clarke generalized directional derivative* of $h$ at a point $x$ and in the direction $y\in X$, denoted by $h^\circ(x; y)$, is given by: $$h^\circ(x; y)=\limsup_{{z\rightarrow x},~ {t\downarrow 0}} \frac{h (z+ty)- h(z)}{t}$$
and the *Clarke generalized subdifferential* of $h$ at $x$ is given by $$\partial h(x)= \left\{f \in X^\ast:h^\circ(x; y)\geqslant f (y),~ \forall y \in X\right\}.$$
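The Clarke generalized directional derivative can be approximated numerically by maximising the difference quotient over small $z$ near $x$ and small $t>0$. A sketch for $h(x)=|x|$ at $x=0$, where $h^\circ(0;\pm 1)=1$ and hence $\partial h(0)=[-1,1]$; the grid sizes below are arbitrary choices:

```python
import numpy as np

# Crude numeric surrogate for the limsup defining the Clarke derivative:
# maximise the quotient over small perturbations z of x and small t > 0.
h = abs

def clarke_dd(h, x, y, eps=1e-3):
    zs = x + np.linspace(-eps, eps, 201)
    ts = np.logspace(-6, -4, 20)
    return max((h(z + t * y) - h(z)) / t for z in zs for t in ts)

# For |x| at 0, both directional values are 1, so every f in [-1, 1]
# satisfies f(y) <= h°(0; y), i.e. the subdifferential is [-1, 1].
print(clarke_dd(h, 0.0, 1.0), clarke_dd(h, 0.0, -1.0))
```

Note the contrast with the one-sided derivative of $|x|$ at $0$ in the direction $-1$, which is $-1$: the Clarke derivative takes a limsup over nearby base points, which is what makes $\partial h$ a convex set.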
Differentiability and The Proximinal Condition {#sec:DPC}
==============================================
We denote by $E(K)$ the set of all points in $X \diagdown K$ which have nearest points in $K$, and by $E'(K)$ the set of $x \in E(K)$ such that every minimizing sequence in $K$ for $x$ converges to a unique nearest point of $x$. We denote by $f_x$ a subgradient of the norm at $x \in X$; the subdifferential $\partial \left\|x\right\|$, the set of all subgradients of the norm at $x \in X$, is given by $$\begin{aligned}
\partial \left\|x\right\|= \left\{f_x \in X^{\ast}: f_x (x) = \left\|x\right\|~ {\rm{and}}~ \left\|f_x\right\| = 1\right\}.\end{aligned}$$ Note that $\partial \left\| \frac{x}{\left\|x\right\|} \right\|= \partial \left\|x\right\|$ for $x \neq 0$. Clearly, if the norm is smooth at $x \neq 0 $ then $\partial \left\|x\right\|$ is a singleton. In this case the single subgradient $f_x$ is the Gâteaux derivative.\
The following two lemmas play a crucial role in establishing our results.
[( Borwein and Giles [[@Borw1], LEMMA 1]{})]{} For any $z \in E(K)$ and every $p(z) \in P _{K}(z)$, there exists an $f_{\vec{z}} \in \partial \left\|z- p(z)\right\|$ such that $f_{\vec{z}} \in \partial d_{K}(z).$ \[lm0\]
[(Dutta [[@Dutt1], LEMMA 1]{})]{} For any $z \in E'(K)$ we have $$\partial d_{K}(z) \subseteq \partial \left\|z- P_{K}(z)\right\|.$$ \[lm1\] Equality holds if the norm on $X$ is smooth at $z- P_{K}(z).$ Moreover, if the norm on $X$ is Fréchet smooth at $z- P_{K}(z)$, then $d_{K}$ is Fréchet smooth at $z.$
For a more detailed explanation of the generalized subdifferential, see Clarke [@Clarke1].\
Indeed, the smoothness of $d_{K} $ in Lemma \[lm1\] is strict [(see [@Clarke1], Proposition 2.2.4)]{}. For a locally Lipschitz function $h$, strict differentiability is equivalent to $\partial h$ being a singleton.
Now, we establish the following result, which will serve as a part of the next result of this paper.
\[Prop-01\] Let $X$ be a smooth normed linear space. Let $K$ be a nonempty closed set with the set $E'(K)$ dense in $X \diagdown K$. Suppose $x \in X \diagdown K$ is such that for every $y_n \in E'(K)$ with $y_n \rightarrow x$ the sequence $\left\{g_{\vec{y}_n}\right\}$ (where $g_{\vec{y}_n}$ denotes the unique subgradient of the norm at $y_n - P_{K}(y_n)$) is $w^*$-convergent. Then the distance function $d_K$ generated by $K$ is strictly smooth at $x$.
To prove that $d_K$ is strictly smooth at $x \in X \diagdown K$, it suffices to show that $\partial d_{K}(x)$ is a singleton.\
Let $y_n \in E'(K)$ be any sequence such that $y_n \rightarrow x$. By definition of upper limit, for each $n \in \mathbb N$ there exists $ z_n \in X \diagdown K$ and $t_n > 0$ such that $$\left\| z_n - y_n\right\|+t_n < \frac{1}{n}, {\rm and}$$ $$\begin{aligned}
d_{K}^\circ(y_n; y)-\frac{1}{n} & \leqslant & \dfrac{d_{K} (z_n+t_n y)- d_{K}(z_n)}{ t_n } \\
{\rm hence,~~} \limsup_{n\rightarrow \infty} d_{K}^\circ(y_n; y) & \leqslant & \limsup_{{z\rightarrow x},~ {t\downarrow 0}} \frac{ d_{K}(z+ty)- d_{K}(z)}{t}\\
& = & d_{K}^\circ(x; y).
\end{aligned}$$ Since $y_n \in E'(K)$ with $y_n \rightarrow x$, by Lemma \[lm1\] the set $\partial d_{K}(y_n)= \partial \left\| y_n - P_{K} (y_n)\right\| $ is a singleton, so $ d_{K}^\circ(y_n; y)= d'_{K} (y_n)(y)= g_n (y),$ where $g_n (y_n - P_{K} (y_n))= \left\| y_n - P_{K} (y_n)\right\|$, that is, $g_n = g_{\vec{y}_n}$. Since $y_n \rightarrow x$ with $y_n \in E'(K)$, the sequence $\left\{g_{\vec{y}_n}\right\}$ is $w^{\ast}$-convergent by hypothesis. Let $g_{\vec{y}_n} \rightarrow g$ in the $w^{\ast}$-topology; by the $w^{\ast}$-upper semicontinuity of $\partial d_{K}$, we must have $g \in \partial d_{K}(x).$\
Thus, for all $y \in X$ with $\|y\|=1$ and for every sequence $y_n \in E'(K)$ with $y_n \rightarrow x$, the limit $\lim_{n\rightarrow \infty} d_{K}^\circ(y_n;y) = g(y)$ exists with $g \in \partial d_{K}(x)$, and is therefore linear in $y$, and $$\begin{aligned}
\lim_{n\rightarrow \infty} d_{K}^\circ(y_n; y) & \leqslant & d_{K}^\circ(x; y).\end{aligned}$$
Now, we prove that the reverse of the last inequality holds for some $y_n \in E'(K)$ with $y_n \rightarrow x$, which proves that $d_{K}^\circ(x; y)$ is linear in $y$; it then follows that $\partial d_{K}(x)$ is a singleton.\
For, let $y \in X$ with $\|y\|=1$, then by definition of $d_{K}^\circ(x; y)$, corresponding to each $n$ there exists $z_n \in X \diagdown K$ and $t_n > 0$ such that $$\left\| z_n -x\right\|+t_n < \frac{1}{n}, {\rm and}$$ we have $$\begin{aligned}
d_{K}^\circ(x; y)-\frac{1}{n} & \leqslant & \dfrac{d_{K} (z_n+t_n y)- d_{K}(z_n)}{ t_n }.
\end{aligned}$$ Since $E'(K)$ is dense in $X \diagdown K$, choose $y_n \in E'(K)$ such that $\left\| z_n +t_ny -y_n\right\| < t_n^2.$ Then $d_{K} (z_n+t_n y) \leqslant d_{K} (y_n) + t_n^2$ and $d_{K} (z_n) \geqslant d_{K} (y_n-t_ny) - t_n^2.$ Thus for sufficiently large $n$, we have $$\begin{aligned}
d_{K}^\circ(x; y)-\frac{1}{n} & \leqslant & \dfrac{d_{K} (y_n)- d_{K}(y_n-t_ny)}{ t_n } + 2t_n \\
& = & \dfrac{d_{K} ((y_n-t_ny)+t_ny)- d_{K}(y_n-t_ny)}{ t_n } + 2t_n \\
& \leqslant & d_{K}^\circ(y_n; y)+ \frac{1}{n}+2t_n.
\end{aligned}$$ Thus $$\begin{aligned}
d_{K}^\circ(x; y) & \leqslant & \lim_{n\rightarrow \infty} d_{K}^\circ(y_n; y).\end{aligned}$$ This completes the proof of the proposition.
It may be noted that the density condition alone is not sufficient for the conclusion of Proposition \[Prop-01\]. Consider $K= \left\{ x \in \mathbb R^2 : \left\|x\right\| =1\right\}$, the unit sphere in the smooth space $X=\mathbb R^2$; then $E'(K)= X \diagdown (K \cup \left\{ 0 \right\})$. Put $x=0$; we can then find sequences $\left\{ y_n\right\}$ of nonzero vectors converging to zero for which $\left\{g_{\vec{y}_n}\right\}$ does not converge, so $d_K$ is not strictly smooth at $x=0$. In fact the subdifferential of $d_K$ at $0$ is the closed unit ball, and it is easy to see that $d_K$ is not even smooth at $x=0$.
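The remark above can be made concrete numerically: for the unit circle $K$ in $\mathbb{R}^2$, sequences $y_n \to 0$ along different directions give gradients $g_{\vec{y}_n}$ with different limits. A small sketch (function names are mine):

```python
import numpy as np

# K is the unit circle in R^2. For y != 0 the nearest point on K is y/||y||,
# and g(y) is the unit vector in the direction y - P_K(y), i.e. the gradient
# of the Euclidean norm at y - P_K(y).
def P_K(y):
    return y / np.linalg.norm(y)

def g(y):
    v = y - P_K(y)
    return v / np.linalg.norm(v)

# Approaching 0 along the two coordinate axes gives different limits.
for n in [10, 100, 1000]:
    print(g(np.array([1.0 / n, 0.0])), g(np.array([0.0, 1.0 / n])))
```

Along the first axis every $g(y_n)$ equals $(-1,0)$, along the second it equals $(0,-1)$, so no common $w^*$-limit exists and the hypothesis of Proposition \[Prop-01\] fails at $x=0$.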
Before proceeding further to establish our main results, note that if $X$ is smooth then for any $z_n \in E(K)$ and every $ p(z_n) \in P_K(z_n)$, the subdifferential $\partial \left\|z_n- p(z_n)\right\| $ is a singleton, which depends on both $z_n$ and $ p(z_n)$. Since $P_K(z_n)$ is set-valued, $f_{\vec{z}_n} \in \partial \left\|z_n- p(z_n)\right\| $ need not be unique for a given $z_n \in E(K).$ Indeed, every sequence $\left\{z_n\right\}$ in $E(K)$ determines (possibly uncountably) many sequences $ \left\{f_{\vec{z}_n}\right\}$. Hence when we say that $ \left\{f_{\vec{z}_n}\right\}$ is ($w^{\ast}$- or norm) convergent for every sequence $\left\{z_n\right\}$ in $E(K)$ with $ z_n \rightarrow x $, we mean that for every $ p(z_n) \in P_K(z_n)$ the sequence $ \left\{f_{\vec{z}_n}\right\}$, where $ f_{\vec{z}_n} \in \partial \left\|z_n- p(z_n)\right\| $, obtained in this way is ($w^{\ast}$- or norm) convergent. It may be noted that these sequences need not converge to the same ($w^{\ast}$- or norm) limit. For more details see Borwein et al. [@Borwn], Corollary $9$, and Giles [@Giles1].\
The following proposition provides a necessary and sufficient condition for the differentiability of the distance function when $E'(K)$ is dense in $X \diagdown K.$ We do not assume the uniform differentiability conditions of Giles [@Giles1], Proposition $2$; in this sense, we hope, our result advances that of Giles [@Giles1].
\[Prop-1\] Let $X$ be a normed linear space with smooth $($Fréchet smooth$)$ norm and $K$ be a nonempty closed set with the set $E'(K)$ dense in $X \diagdown K$. Then the distance function $d_K$ generated by $K$ is strictly smooth $($and Fréchet smooth$)$ at $x \in X \diagdown K$ if and only if $\left\{f_{\vec{z}_n}\right\}$ is $w^*$-convergent $($norm convergent$)$ for every sequence $\left\{z_n\right\}$ in $E(K)$ with $ z_n \rightarrow x $.
**First** we consider the case when norm is smooth. Suppose that for every sequence $\left\{z_n\right\}$ in $E(K)$ with $ z_n \rightarrow x $, the sequence $\left\{f_{\vec{z}_n}\right\}$ is $w^{\ast}-$convergent. Since $E(K)$ contains $E'(K)$, so it follows from Proposition \[Prop-01\] that $d_K$ is strictly smooth at $x.$
Conversely, suppose that $d_{K}$ is strictly smooth at $x$. Let $f_{\vec{z}_n} \in \partial \left\| z_n - p(z_n)\right\|$; since the norm is smooth, by Lemma \[lm0\] we have $f_{\vec{z}_n} \in \partial d_{K}(z_n)$. Let $f$ be a $w^{\ast}$-cluster point of $\left\{f_{\vec{z}_n}\right\}$; by the upper semicontinuity of $\partial d_{K}$, we have $f \in \partial d_{K}(x)$, but $d_{K}$ is strictly smooth at $x$, hence the sequence $\left\{f_{\vec{z}_n}\right\}$ is $w^{\ast}$-convergent to $d'_{K}(x).$\
We **next** consider the case when the norm is Fréchet smooth. Suppose that for every sequence $\left\{z_n\right\}$ in $E(K)$ with $ z_n \rightarrow x $, the sequence $f_{\vec{z}_n}$ is norm convergent (so $w^{\ast}-$ convergent to $d'_{K}(x)$). Then $d_{K}$ is strictly smooth at $x$. It remains to prove the Fréchet smoothness only.\
Since $d_K$ is smooth at $x$, so for any $t_n \rightarrow 0$ and any $y \in X$ with $ \|y\| = 1,$ $$\begin{aligned}
\lim_{n \rightarrow \infty} \dfrac{d_{K} (x+t_n y)- d_{K}(x)}{ t_n } & = & d'_K(x) (y)= \lim_{n \rightarrow \infty} f_{\vec{z}_n}(y)\\\end{aligned}$$ But $f_{\vec{z}_n} \rightarrow d'_K(x)$ in norm, so the last limit is uniform over $ \|y\| = 1.$ Hence $d_{K}$ is Fréchet smooth at $x$.\
Finally, suppose that $d_{K}$ is strictly smooth and Fréchet smooth at $x \in X \diagdown K.$ Then we prove that $f_{\vec{z}_n}$ is norm convergent to $d'_{K}(x)$ for every sequence $\left\{z_n\right\}$ in $E(K)$ with $ z_n \rightarrow x $.
First we prove that $f_{\vec{y}_n}$ is norm convergent to $d'_{K}(x)$ for every sequence $\left\{y_n\right\}$ in $E'(K)$ with $ y_n \rightarrow x $. Since norm is Fréchet smooth, so by Lemma \[lm1\] $d_K$ is Fréchet smooth at each $y_n \in E'(K)$ and by assumption $d_K$ is Fréchet smooth at $x \in X \diagdown K.$
So, for given $\epsilon > 0 $ there exist $\delta_{1} (\epsilon, y_n)>0$ and $\delta_{2} (\epsilon, x)>0 $ such that for all $y \in X$ with $ \left\|y\right\| =1$, we have $$\begin{aligned}
\left|\dfrac{ d_{K} (y_n + t y)- d_{K}(y_n)}{t } - f_{\vec{y_n}}(y)\right| & < & \epsilon~~~ \displaystyle {\rm ~for ~all~} 0< |t| < \delta_{1}.\end{aligned}$$ and $$\begin{aligned}
\left|\dfrac{ d_{K} (x + t y)- d_{K}(x)}{t } - d'_K (x)(y)\right| & < & \epsilon~~~ \displaystyle {\rm ~for ~all~} 0< |t| < \delta_{2}.\end{aligned}$$ Choose $\delta= \min \{ \delta_1, \delta_2 \}$, then for any $y \in X$ with $ \left\|y\right\| =1,$ we have $$\begin{aligned}
\left|f_{\vec{y_n}}(y) - d'_K (x)(y)\right| & \leqslant &
\left|\dfrac{ d_{K} (y_n + t y)- d_{K}(y_n)}{t } - f_{\vec{y_n}}(y)\right| \\
& + & \left|\dfrac{ d_{K} (y_n + t y)- d_{K}(y_n)}{t } - \dfrac{ d_{K} (x + t y)- d_{K}(x)}{t }\right| \\
& + & \left|\dfrac{ d_{K} (x + t y)- d_{K}(x)}{t } - d'_K (x)(y)\right| \\
& < & 2 \epsilon + \frac{4}{\delta }\left\|y_n - x\right\| ~~~ \displaystyle {\rm ~for ~all~} \frac{\delta}{2}< |t| < \delta \\
& < & 6 \epsilon ~~~ \displaystyle {\rm ~for ~all~} \left\|y_n - x\right\|< \epsilon \delta.\end{aligned}$$ Since $ y_n \rightarrow x $, hence the sequence $f_{\vec{y}_n}(y)$ is uniformly convergent to $d'_K (x)(y)$ over $\| y \| = 1$, that is $f_{\vec{y}_n}$ is norm convergent to $d'_K (x).$\
Since $$\begin{aligned}
\left|f_{\vec{z_n}}(y) - d'_K (x)(y)\right| & \leqslant & \left|f_{\vec{z_n}}(y) - f_{\vec{y_n}}(y)\right|+ \left|f_{\vec{y_n}}(y) - d'_K (x)(y)\right|\end{aligned}$$ hence to complete the proof of the result it is enough to show that for every sequence $\left\{z_n\right\}$ in $E(K)$ with $ z_n \rightarrow x $ there is some sequence $\left\{y_n\right\}$ in $E'(K)$ with $ y_n \rightarrow x $, such that $\left|f_{\vec{z_n}}(y) - f_{\vec{y_n}}(y)\right|$ converges to zero uniformly over $ \left\|y\right\| =1.$\
Suppose there exists a sequence $\left\{z_n\right\}$ in $E(K)$ with $ z_n \rightarrow x $ such that $f_{\vec{z}_n}$ is not norm convergent to $d'_K (x).$ Then for every sequence $\left\{y_n\right\}$ in $E'(K)$ with $ y_n \rightarrow x $, the sequence $\left\{f_{\vec{z_n}}-f_{\vec{y_n}}\right\}$ is not norm convergent to zero. So, there exists an $\epsilon > 0 $ and a subsequence of $\left\{z_n\right\}$ (which we may assume to be the sequence itself) such that for every sequence $\left\{y_n\right\}$ in $E'(K)$ with $ y_n \rightarrow x $, we have $$\begin{aligned}
\| f_{\vec{z_n}} - f_{\vec{y_n}} \| > 5 \epsilon ~~~ \displaystyle {\rm ~for ~all~} n.\end{aligned}$$ So there exists a sequence $\left\{v_n\right\}$ in $X$ with $\| v_n \|= 1$ (replacing $v_n$ by $-v_n$ if necessary) such that $$\begin{aligned}
f_{\vec{y_n}}(v_n)- f_{\vec{z_n}} (v_n)> 5 \epsilon ~~~ \displaystyle {\rm ~for ~all~} n.\end{aligned}$$
Since $d_K$ is Fréchet smooth at each $y_n \in E'(K)$, there exists a $\delta_{n} (\epsilon, y_n)>0$ such that for all $v \in X$ with $ \left\|v\right\| =1$, we have $$\begin{aligned}
\left|\dfrac{ d_{K} (y_n + t v)- d_{K}(y_n)}{t } - f_{\vec{y_n}}(v)\right| & < & \epsilon~~~ \displaystyle {\rm ~for ~all~} 0< |t| \leqslant \delta_{n}.\end{aligned}$$
So, for each $n$ and any $t_n > 0 $ satisfying $t_n \leqslant \delta_{n}$ and $t_n\downarrow 0, $ we have
$$\begin{aligned}
\left| d_{K} (y_n + t_n v_n)- d_{K}(y_n) - f_{\vec{y_n}}(t_n v_n)\right| & < & \epsilon t_n.\end{aligned}$$
Put $ w_n = t_n v_n $, so that $\| w_n\|=t_n$. Then for each $n$, we have $$\begin{aligned}
5 \epsilon t_n & < & f_{\vec{y_n}}(w_n)- f_{\vec{z_n}} (w_n)\\
& < & f_{\vec{y_n}}(w_n)- d_K (y_n + w_n) + d_K (y_n)+d_K (z_n + w_n) - d_K (z_n)- f_{\vec{z_n}} (w_n)\\
& ~ & + d_K (z_n)-d_K (y_n) + \|y_n -z_n\|\\
& < & \epsilon t_n + \left\| z_n - p(z_n)+ w_n\right\|-\left\| z_n - p(z_n)\right\|- f_{\vec{z_n}} (w_n) + 2 \|y_n -z_n\|.
\end{aligned}$$ Since the norm is Fréchet smooth, in particular at $z_n-p(z_n)$, there exists a $0 < \delta'_{n} (\epsilon, \vec{z_n})< \delta_{n}$ such that for all $v \in X$ with $ \left\|v\right\| =1$, we have
$$\begin{aligned}
\left|\dfrac{\left\| z_n - p(z_n)+ t v\right\|-\left\| z_n - p(z_n)\right\|}{t} - f_{\vec{z_n}}(v)\right| & < & \epsilon
~~~ \displaystyle {\rm ~for ~all~} 0 < |t| \leqslant \delta'_{n} \\\end{aligned}$$
So, for each $n$ and any $t'_n > 0$ satisfying $t'_n \leqslant \delta'_{n}$ and $t'_n\downarrow 0, $ we have $$\begin{aligned}
\left|\left\| z_n - p(z_n)+ t'_n v_n\right\|-\left\| z_n - p(z_n)\right\| - f_{\vec{z_n}}(t'_n v_n)\right| & < & \epsilon t'_n.\\\end{aligned}$$ In particular, putting $ w_n = \delta'_n v_n$ (that is, taking $t_n = \delta'_n \leqslant \delta_{n}$ above), for all $n$ we have $$\begin{aligned}
\left|\left\| z_n - p(z_n)+ w_n\right\|-\left\| z_n - p(z_n)\right\| - f_{\vec{z_n}}(w_n)\right| & < & \epsilon \delta'_n.\end{aligned}$$ Thus, we have $$\begin{aligned}
5 \epsilon \delta'_n < \epsilon \delta'_n + \epsilon \delta'_n + 2 \|y_n -z_n\|.\end{aligned}$$ Since $E'(K)$ is dense in $X \diagdown K$, for each $n$ we can choose $y_n \in E'(K)$ with $ y_n \rightarrow x $ and $\|y_n - z_n\|< \epsilon \delta'_n$; the above inequality then gives $5 \epsilon \delta'_n < 4 \epsilon \delta'_n$, which is impossible.
In Theorem $4$ of [[@Dutt1]]{}, the author showed that, for an almost proximinal set $K$, local uniform convexity [(LUR)]{} of the norm on $X$ is a sufficient condition for the set $E'(K)$ to be dense.
Let us denote by $ E_r (K)$ the set $$\left\{x_{\circ}- r \vec{x}_{\circ} = x_{\circ}- r \frac{x_{\circ}-p(x_{\circ})}{\left\| x_{\circ}-p(x_{\circ}) \right\|} : x_{\circ} \in E(K), p(x_{\circ}) \in P_{K}(x_{\circ}) ~ {\rm and}~ \left\| x_{\circ}-p(x_{\circ}) \right\| = d_K (x_{\circ})> r \right\}.$$
The following theorem shows that uniform differentiability on a dense set yields differentiability on $X \diagdown K$, provided the norm on $X$ is LUR and differentiable.
\[thm01\] Let $X$ be a normed linear space with LUR and smooth [(]{}Fréchet smooth[)]{} norm, and let $K$ be a nonempty closed and almost proximinal set. If for some $r > 0$ the set $E_r(K)$ is dense in $X \diagdown K$ and $d_{K}$ is uniformly smooth [(]{}uniformly Fréchet smooth[)]{} on this dense set, then the distance function $d_{K}$ is strictly smooth [(]{}and Fréchet smooth[)]{} on $X\diagdown K.$
Let $\bar{x} \in X \diagdown K$ and $\bar{r} > 0$ be chosen arbitrarily. Then, by the denseness of $E_r(K)$ in $X \diagdown K$, the set $E_r(K) \cap B(\bar{x}, \bar{r}) $ is nonempty. Since $d_{K}$ is uniformly smooth (uniformly Fréchet smooth) on $E_r(K) \cap B(\bar{x}, \bar{r})$, for given $\epsilon > 0 $ and $y \in X$ with $ \left\|y\right\| =1$, there exists a $\delta (\epsilon, y)>0 \left(\delta(\epsilon)>0\right)$ such that $$\begin{aligned}
\left|\dfrac{ d_{K} (x + t y)- d_{K}(x)}{t } - f_{\vec{x}}(y)\right| < \epsilon~~~ \displaystyle {\rm ~for ~all~} x \in E_r(K) \cap B(\bar{x}, \bar{r}), 0< |t| < \delta.\end{aligned}$$
So, for $x,z \in E_r(K) \cap B(\bar{x}, \bar{r})$ and for any $y \in X$ with $ \left\|y\right\| =1,$ we have $$\begin{aligned}
\left|f_{\vec{z}}(y) - f_{\vec{x}}(y)\right| & \leqslant &
\left|\dfrac{ d_{K} (z + t y)- d_{K}(z)}{t } - f_{\vec{z}}(y)\right| \\
& + & \left|\dfrac{ d_{K} (z + t y)- d_{K}(z)}{t } - \dfrac{ d_{K} (x + t y)- d_{K}(x)}{t }\right| \\ &+ & \left|\dfrac{ d_{K} (x + t y)- d_{K}(x)}{t } - f_{\vec{x}}(y)\right|\\
& < & 2 \epsilon + \frac{4}{\delta }\left\|z - x\right\| ~~~ \displaystyle {\rm ~for ~all~} \frac{\delta}{2}< |t| < \delta\\
& < & 6 \epsilon ~~~ \displaystyle {\rm ~for ~all~} \left\|z - x\right\|< \epsilon \delta.\end{aligned}$$ That is, the mapping $x \longrightarrow f_{\vec{x}}(y)$ $(x \longrightarrow f_{\vec{x}} )$ is uniformly continuous on $E_r(K) \cap B(\bar{x}, \bar{r})$. Since $E_r(K)$ is dense in $X \diagdown K$, this mapping has a unique continuous extension to $B(\bar{x}, \bar{r})$. But this implies that for any $x \in B(\bar{x}, \bar{r})$ and any sequence $\left\{z_n\right\}$ in $E_r(K) \cap B(\bar{x}, \bar{r})$ converging to $x$, the sequence $\left\{f_{\vec{z}_n}\right\}$ is $w^*$-convergent (norm convergent). From Proposition \[Prop-1\], it follows that $d_{K}$ is strictly smooth (and Fréchet smooth) at $x.$
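As a concrete illustration of the identification $d'_K(x)=f_{\vec{x}}$ used above (a numerical sketch, not part of the proof), consider the Euclidean plane, whose norm is both LUR and uniformly Fréchet smooth, and take $K$ to be the closed unit disk; the test point and directions below are arbitrary illustrative choices.

```python
import math

# Illustration in the Euclidean plane: K = closed unit disk, so for x
# outside K we have d_K(x) = ||x|| - 1, the nearest point is p(x) = x/||x||,
# and f_{vec x} acts as y |-> <x - p(x), y> / d_K(x).
def d_K(x):
    return max(math.hypot(x[0], x[1]) - 1.0, 0.0)

def f_x(x):
    # the functional f_{\vec{x}}, here represented by the unit vector
    # (x - p(x)) / d_K(x), which for the disk equals x/||x||
    n = math.hypot(x[0], x[1])
    return (x[0] / n, x[1] / n)

def numeric_directional(x, y, t=1e-6):
    # difference quotient (d_K(x + t*y) - d_K(x)) / t
    return (d_K((x[0] + t * y[0], x[1] + t * y[1])) - d_K(x)) / t

x = (2.0, 1.0)
for y in [(1.0, 0.0), (0.0, 1.0), (0.6, 0.8)]:
    g = f_x(x)
    analytic = g[0] * y[0] + g[1] * y[1]
    print(abs(numeric_directional(x, y) - analytic) < 1e-4)   # True each time
```

The difference quotients agree with $f_{\vec{x}}$ up to the discretization error, in line with the theorem.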
One of the main results of this paper is to investigate conditions on a nonempty closed set $K$ under which $E_r (K)$ is dense in $X \diagdown K.$ A simple observation reveals that for an almost sun $K$ the set $ E_r (K)$ is dense in $X \diagdown K$. Indeed, we have the following result.
\[lem03\] Let $K$ be a nonempty closed and almost sun in a normed linear space $X$. Then for every $r > 0,$ the set $ E_r (K)$ is dense in $X \diagdown K.$
Let $\mathbb S (K)$ denote the set of points $x\in X \diagdown K$ at which $K$ is a sun. Since $K$ is an almost sun, the set $\mathbb S (K)$ is dense in $X \diagdown K$, so it suffices to prove that $ E_r (K)$ is dense in $\mathbb S(K).$\
Let $y \in \mathbb S(K)$ be any point. For $0< \epsilon < \left\| y - p(y) \right\|$ and $|t|< \epsilon$, choose $$x_{\circ} = y + (r-t) \frac{y -p(y)}{\left\| y -p(y) \right\|}.$$ Then $x_{\circ} $ is also in $\mathbb S(K)$, and it is easy to verify that $p(y)\in K$ is the nearest point to $x_{\circ} $ and $d_K(x_{\circ})= \left\| x_{\circ}-p(x_{\circ}) \right\|= \left\| x_{\circ}-p(y) \right\|= \left\| y-p(y) \right\|-t+r > r.$ Now $\vec{x}_{\circ}= \vec{y}$, and therefore $$\begin{aligned}
&& x = x_{\circ}- r \vec{x}_{\circ} = x_{\circ}- r \frac{y-p(y)}{\left\| y-p(y) \right\|} \in E_r (K),\\
\text{that is,}&& x = y- t \frac{y-p(y)}{\left\| y-p(y)\right\|}, \\
\text{so that}&& \left\| x-y \right\|= |t|< \epsilon.\end{aligned}$$ Thus, for $y \in \mathbb S(K)$ and $0 <\epsilon< \left\| y-p(y) \right\| $, the point $x= y-t\vec{y}$ belongs to $ E_r (K)$ for all $|t|< \epsilon $, and $\left\| x-y \right\| < \epsilon.$ This proves that $ E_r (K)$ is dense in $\mathbb S(K)$ for all $r>0$.
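The construction in the proof can be checked numerically (an illustrative sketch with hypothetical values, not part of the proof): in the Euclidean plane take $K$ to be the closed unit disk, which is a sun at every exterior point, and verify that $x_{\circ}=y+(r-t)\vec{y}$ keeps the same nearest point while $x=x_{\circ}-r\vec{x}_{\circ}$ satisfies $\left\|x-y\right\|=|t|$.

```python
import math

# K = closed unit disk in the Euclidean plane; y, r, t are arbitrary
# illustrative choices with |t| < ||y - p(y)||.
def nearest_point(x):            # p(x) for the unit disk, x outside K
    n = math.hypot(x[0], x[1])
    return (x[0] / n, x[1] / n)

def d_K(x):                      # distance from x (outside K) to the disk
    return math.hypot(x[0], x[1]) - 1.0

y = (3.0, 4.0)                   # ||y|| = 5, so d_K(y) = 4
r, t = 2.0, 0.5
p = nearest_point(y)
u = ((y[0] - p[0]) / d_K(y), (y[1] - p[1]) / d_K(y))   # the unit vector \vec{y}

x0 = (y[0] + (r - t) * u[0], y[1] + (r - t) * u[1])
# x0 has the same nearest point and d_K(x0) = d_K(y) - t + r > r:
print(abs(d_K(x0) - (d_K(y) - t + r)) < 1e-12)          # True
# x = x0 - r*u then satisfies ||x - y|| = |t|:
x = (x0[0] - r * u[0], x0[1] - r * u[1])
print(abs(math.hypot(x[0] - y[0], x[1] - y[1]) - abs(t)) < 1e-12)   # True
```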
Using Lemma \[lem03\] and Theorem \[thm01\], we can deduce the differentiability of $d_{K}$ on $X\diagdown K,$ if it is uniformly differentiable on some dense set $ E_r (K).$
\[core-LUR\] Let $X$ be a normed linear space with LUR and smooth [(]{}Fréchet smooth[)]{} norm, and let $K$ be a nonempty closed subset of $X$ which is an almost sun. Suppose $d_{K}$ is uniformly smooth [(]{}uniformly Fréchet smooth[)]{} on the set $E_r(K)$ for some $r > 0$. Then the distance function $d_{K}$ is strictly smooth [(]{}and Fréchet smooth[)]{} on $X\diagdown K.$
Convexity of Almost Sun {#sec:MResl}
=======================
We observe that the notion of almost sun is not merely a tidier form of the condition of Giles [@Giles1], page 462; it also provides a nontrivial illustration of the proximinality condition. Moreover, Lemma \[lem03\] motivates an improvement of the results of Giles [@Giles1], page 462.
\[thm1\_open\] Let $X$ be a normed linear space with uniformly smooth [(uniformly Fréchet smooth)]{} norm, and let $K$ be a nonempty closed subset of $X$ which is an almost sun. Then $d_K$ is smooth (Fréchet smooth) on $X \diagdown K.$
Giles's proof requires only the denseness of $ E_r (K)$ in $X \diagdown K $ for some $r>0$, which follows from Lemma \[lem03\] when $K$ is an almost sun.
The above theorem enables us to give better characterizations of the convexity of a set, which follow directly from Proposition \[prop\_open\] and Corollary \[core-LUR\].
Let $X$ be a Banach space, and let $K$ be a nonempty closed subset of $X$ which is an almost sun. If\
\
$~~~~~~~(i)$ $X$ has uniformly smooth norm and the distance function $d_K$ satisfies $\left\| d'_K(x) \right\|=1$ for all $x \in X \diagdown K,$\
\
or $(ii)$ $X$ has LUR and smooth norm and the distance function $d_K$ is uniformly smooth on the set $E_r(K)$ for some $r > 0$ and satisfies $\left\| d'_K(x) \right\|=1$ for all $x \in X \diagdown K,$\
\
or $(iii)$ $X$ has uniformly Fréchet smooth norm,\
\
or $(iv)$ $X$ has LUR and Fréchet smooth norm and the distance function $d_K$ is uniformly Fréchet smooth on the set $E_r(K)$ for some $r > 0,$\
\
then $K$ is convex.
Observe that if the distance function $d_K$ generated by a proximinal set $K$ is smooth on $X \diagdown K,$ then we have $\left\| d'_K(x) \right\|=1$ for all $x \in X \diagdown K.$ So if $X$ has uniformly smooth norm, then every proximinal almost sun $K$ is convex.\
Let the norm of the space $X$ be uniformly smooth and rotund, and let $K$ be an almost sun. Then every point $x \in X \diagdown K$ which has a closest point in $K$ has a unique closest point in $K.$ To see this, suppose that a point $x \in X \diagdown K$ has two closest points $p_1(x), p_2(x) \in K.$ If $\vec{x}_1$ and $\vec{x}_2$ denote the unit vectors in the directions of $x-p_1(x)$ and $x-p_2(x)$ respectively, then by Theorem \[thm1\_open\], $d_K$ is smooth at $x$ and $d'_K(x) = f_{\vec{x}_1} = f_{\vec{x}_2}.$ Then $ f_{\vec{x}_1} (\frac{\vec{x}_1+ \vec{x}_2}{2}) = \left( \frac{ f_{\vec{x}_1} (\vec{x}_1)+ f_{\vec{x}_1}(\vec{x}_2)}{2} \right)= 1$, which implies that $\| \frac{\vec{x}_1+ \vec{x}_2}{2} \|= 1$, contradicting rotundity. This proves the uniqueness of the closest point.\
Thus we conclude the following result, which asserts that for an almost sun, proximinality is equivalent to the Chebyshev property.
Let $X$ be a normed linear space with uniformly smooth and rotund norm. Then a nonempty closed set $K$ which is an almost sun is Chebyshev if and only if $K$ is proximinal.
Since every Hilbert space has a rotund and uniformly smooth norm, and every Chebyshev set is proximinal, we have a partial result regarding the convexity of Chebyshev sets in a Hilbert space.
In a Hilbert space, every Chebyshev set which is an almost sun is convex.
Thus the problem of the convexity of Chebyshev sets in a Hilbert space is equivalent to the existence of a Chebyshev set $K$ which fails to be a sun at every point of some open ball in $X \diagdown K.$\
\
It is known that in any reflexive Banach space $X$ with Kadec norm, every nonempty closed set $K$ has the set $E(K)$ dense in $X \diagdown K $; in particular, every Hilbert space has this property. Thus, if for some $r>0$ the set $E_r (K)$ is dense in $X \diagdown K$, then it follows that in a Hilbert space every Chebyshev set must be convex.\
It is easy to verify that Vlasov's differentiability condition is a consequence of the almost sun property, so we have the following result, which is more general than the above theorem.\
In a Banach space $X$ with rotund dual $X^{\ast}$ every nonempty closed set $K$ which is almost sun, is convex.
Since $K$ is an almost sun, by Lemma \[lem03\] the set $ E_r (K)$ is dense in $X \diagdown K $ for every $r>0$. The proof then follows from the final theorem of Giles [@Giles1], page 463.
Since a convex proximinal set is a sun, we have the following characterization and equivalent conditions for the convexity of Chebyshev and proximinal sets, respectively.
Let $X$ be a Banach space with rotund dual $X^{\ast}$. Then a Chebyshev set $K$ is convex if and only if it is an almost sun.
Let $X$ be a Banach space with rotund dual $X^{\ast}$, and let $K$ be a proximinal set in $X$. Then the following are equivalent for $K$:\
$(i)$ $K$ is an almost sun;\
$(ii)$ $K$ is convex;\
$(iii)$ $K$ is a sun.
**Acknowledgment:** The present research work is supported by UGC, Govt. of India, New Delhi-110012, under UGC-BSR Start-Up Grant No. F.30.12/2014(BSR), dated $22^{nd}$ July, 2014.
J. M. Borwein, S. P. Fitzpatrick and J. R. Giles, The differentiability of real functions on normed linear space using generalized subgradients, J. Math. Anal. Appl. 128 (1987) 512-534.\
J. M. Borwein and J. R. Giles, The proximal normal formula in Banach space, Trans. Amer. Math. Soc. 302 (1) (1987) 371-381.\
F. H. Clarke, Optimization and Nonsmooth Analysis, Canadian Math. Soc. Series of Monographs and Advanced Texts, Wiley, New York, 1983.\
S. Dutta, Generalized subdifferential of the distance function, Proc. Amer. Math. Soc. 133 (10) (2005) 2949-2955.\
S. Fitzpatrick, Differentiation of real-valued functions and continuity of metric projections, Proc. Amer. Math. Soc. 91 (4) (1984) 544-548.\
J. R. Giles, Differentiability of distance functions and a proximinal property inducing convexity, Proc. Amer. Math. Soc. 104 (2) (1988) 458-464.\
L. P. Vlasov, Almost convexity and Chebyshev sets, Math. Notes Acad. Sci. USSR 8 (1970) 776-779.
[^1]: tnverma07@gmail.com
---
abstract: 'We study an $SU(N)$ gauge-Higgs model with $N_F$ massless fundamental fermions on $M^3\otimes S^1$. The model has two kinds of order parameters for gauge symmetry breaking: the component gauge field for the $S^1$ direction (Hosotani mechanism) and the Higgs field (Higgs mechanism). We find that the model possesses three phases called Hosotani, Higgs and coexisting phases for $N=$ odd, while for $N=$ even, the model has only two phases, the Hosotani and coexisting phases. The phase structure depends on a parameter of the model and the size of the extra dimension. The critical radius and the order of the phase transition are determined. We also consider the case that the representation of matter fields under the gauge group is changed. We find some models, in which there is only one phase independent of parameters of the models as well as the size of the extra dimension.'
---
KOBE-TH 03-03\
TIT/HEP-496\
OU-HET-444/2003
[Phase Structures of $SU(N)$ Gauge-Higgs Models on Multiply Connected Spaces]{}
Hisaki [Hatanaka]{}$^{(a),}$ [^1] Katsuhiko [Ohnishi]{}$^{(b),}$ [^2], Makoto [Sakamoto]{}$^{(c),}$ [^3] and\
Kazunori [Takenaga]{}$^{(d),}$ [^4]
${}^{(a)}$ [*Department of Physics, Tokyo Institute of Technology, Tokyo 152-8551, Japan*]{}\
${}^{(b)}$[*Graduate School of Science and Technology, Kobe University,\
Rokkodai, Nada, Kobe 657-8501, Japan*]{}\
${}^{(c)}$ [*Department of Physics, Kobe University, Rokkodai, Nada, Kobe 657-8501, Japan*]{}\
${}^{(d)}$ [*Department of Physics, Osaka University, Toyonaka, Osaka 560-0043, Japan*]{}
Introduction
============
Recently, physics with extra dimensions has been studied extensively in connection with long-standing problems in elementary particle physics, namely, new mechanisms for and/or the origin of (gauge, super)symmetry breaking. It can provide new insight into and understanding of low-energy physics. In fact, it has been pointed out[@SUSY] that a new mechanism of spontaneous supersymmetry breaking is possible in a certain class of models as a consequence of the breakdown of the translational invariance for the extra dimension $S^1$[@translation; @O(N); @model]. Furthermore, one of the authors (M.S.) and his collaborators have shown[@monopole] that the rotational invariance of $S^2$ is spontaneously broken in a monopole background above some critical radius due to the appearance of a vortex configuration as the vacuum configuration.
When one considers gauge-Higgs models on space-times with some of the space directions compactified on a multiply connected space, one should take account of gauge symmetry breaking through the Hosotani mechanism[@Hosotani]. The mechanism occurs essentially due to quantum corrections in the extra dimension, reflecting the topology of the compactified space. It is possible for the component gauge field for the compactified direction to acquire nonvanishing vacuum expectation values (VEVs). The Hosotani mechanism has been studied extensively in (supersymmetric) gauge models[@HosotaniS2; @HosotaniT2; @HP; @Hoso-ft; @Pattern-Matter; @Pattern-BC; @HIL] and, in particular, has received much attention in the context of orbifold compactifications[@orbifold; @Hoso-orbi]. On the other hand, the Higgs mechanism also breaks gauge symmetry through the nonvanishing VEV of the Higgs field, even at the tree level. This suggests that if we consider a gauge-Higgs model on such a space-time, the gauge symmetry can be broken by both or either of the two mechanisms, owing to the existence of the two kinds of order parameters for gauge symmetry breaking.
In a previous paper[@previous] we showed that the phase structures of gauge-Higgs models on $M^3\otimes S^1$ are nontrivial, where $M^3 (S^1)$ is three-dimensional Minkowski space-time (a circle). In that paper we studied the phase structure of the simplest $SU(2)$ gauge-Higgs model and found three different phases, called the Hosotani, Higgs and coexisting phases. In each phase the VEVs of the two order parameters take different forms and values. The structure depends on a parameter of the model and the size of $S^1$. The critical radius and the order of the phase transition were determined explicitly. We also pointed out that the phase structure could provide a new approach to the gauge hierarchy problem in grand unified theory (GUT).
This paper is a generalization of the previous work. In particular, we investigate the phase structure of an $SU(N)$ gauge-Higgs model with $N_F$ massless fermions on $M^3\otimes S^1$. In the next section we analyze the phase structure of the model, in which both the fermion and Higgs fields belong to the fundamental representation of $SU(N)$. We will find that the phase structure of the model is very different depending on whether $N$ is even or odd. For $N=$ odd, there are three phases, and the structure is similar to the one obtained in the previous paper. Only two phases, the Hosotani and the coexisting phases, appear for $N=$ even, and the Higgs phase does not exist for finite sizes of the extra dimension. We also determine the critical radius and the order of the phase transition. In the models, the Hosotani mechanism works as a restoration of the gauge symmetry. In Sec. $3$, we consider the case that the representation of the matter fields under the gauge group is changed. We find some models whose phase structures do not depend on the parameters of the models or on the size of the extra dimension. The final section is devoted to conclusions and discussions. Details of calculations are given in the Appendix.
$SU(N)$ Gauge-Higgs Model
=========================
We study the vacuum structure of an $SU(N)$ gauge-Higgs model with $N_F$ massless fundamental fermions. The Higgs field also belongs to the fundamental representation under $SU(N)$. We take our space-time to be $M^3\otimes S^1$ in order to perform analytic calculations, where $M^3$ and $S^1$ stand for three-dimensional Minkowski space-time and a circle with radius $R$, respectively. Our action is $$S=\int d^3x \int_0^Ldy\left(
-\half{\rm tr}F_{{\hat\mu}{\hat\nu}}F^{{\hat\mu}{\hat\nu}}+
\sum_{I=1}^{N_F}{\bar\psi}_I i\Gamma^{{\hat\mu}}D_{{\hat\mu}}\psi_I
+(D^{{\hat\mu}}\Phi)^{\dagger}D_{{\hat\mu}}\Phi - V(\Phi^{\dagger}, \Phi)\right),$$ where the Higgs potential is given by $$V(\Phi^{\dagger}, \Phi)=-m^2\Phi^{\dagger}\Phi
+{\lambda\over 2}(\Phi^{\dagger}\Phi)^2.
\label{potential}$$ We use the notation $x^{{\hat\mu}}\equiv (x^{\mu}, y)$ and denote the length of the circumference of $S^1$ by $L=2\pi R$.
As stated in the introduction, there are two kinds of order parameters for gauge symmetry breaking. One is the component gauge field $A_y$ for the $S^1$ direction, which is related to the Hosotani mechanism, caused essentially by quantum corrections in the extra dimension. The other is the Higgs field, for which the Higgs mechanism works even at the tree level. Taking the order parameters into account, we study the effective potential for $\vev{A_y}$ and $\vev{\Phi}$ parametrized by $$gL \vev{A_y}={\rm diag}(\theta_1, \theta_2, \cdots, \theta_N),~~
\vev{\Phi}={1\over\sqrt{2}}(v, 0, \cdots, 0)^T,
\label{vev}$$ where $\sum_{i=1}^{N}\theta_i=0$ and $v$ is a real constant. Here, we have arranged the $\theta_i$ in such a way that $\abs{\hat{\theta}_1}\leq \abs{\hat{\theta}_2}\leq
\cdots\leq\abs{\hat{\theta}_{N}}$, where $\hat{\theta}_i = \theta_i$ mod $2\pi$ with $\abs{\hat{\theta}_i} \le \pi$. This can be done without loss of generality. We show in the appendix that the parametrization of the vacuum expectation value (VEV) of the Higgs field given in Eq.(\[vev\]) is enough to study the vacuum structure of the model. We assume $N_F$ is so large that the leading order correction comes from the fermion one-loop correction to $\vev{A_y}$ alone. Then, the effective potential is given by $$\begin{aligned}
V &= &-\half m^2 v^2+
{\lambda\over 8}v^4+
{{\hat{\theta}_1^2v^2}\over {2L^2}}
+{A\over {\pi^2 L^4}}\sum_{n=1}^{\infty}\sum_{i=1}^{N}{1\over n^4}
\cos(n\theta_i)\label{effpotu}\\
&=&{1\over L^4}
\left(-\half \mbar^2 \vbar^2+
{\lambda\over 8}\vbar^4+
\half\hat{\theta}_1^2 \vbar^2
+{A\over {\pi^2}}\sum_{n=1}^{\infty}\sum_{i=1}^{N}{1\over n^4}
\cos(n\theta_i)\right)\equiv \Vb L^{-4},
\label{effpotd}\end{aligned}$$ where $A\equiv 2^2 N_{F}$ and the number $2^2$ counts the physical degrees of freedom of a Dirac fermion. Here, we have introduced the dimensionless quantities $\mbar\equiv mL, \vbar\equiv vL$. The first two terms in Eq. (\[effpotu\]) are nothing but the classical Higgs potential, and the third term comes from the interaction between the gauge and Higgs fields in $D_y\Phi$, which, as we will see later, plays an important role in determining the phase structure of the model. The fourth term stands for the one-loop correction from the fermions. We have neglected other one-loop corrections arising from the gauge and Higgs fields under the assumption that the number of fermions $N_F$ is sufficiently large and that the couplings $g$ and $\lambda$ are sufficiently small. We make use of this assumption throughout the paper.
If we look at the dependence of the effective potential on the scale $L$ in Eq.(\[effpotu\]), it suggests that the vacuum structure changes according to the size of the extra dimension. When $L$ is large enough, the quantum correction in the extra dimension is suppressed and the leading order contribution is given by the classical Higgs potential, so that the Higgs field acquires a nonvanishing VEV. The next-to-leading order contribution, the third term in Eq.(\[effpotu\]), then yields vanishing $\hat{\theta}_1$ in order to minimize the potential in the large $L$ limit. On the other hand, if $L$ is small enough, the quantum correction in the extra dimension dominates the effective potential, and we would obtain nonzero values of $\theta_i$. Then the next-to-leading order term, the third term, would force $v=0$. This simplified discussion implies that the vacuum structure depends on the size of the extra dimension. One, of course, needs to study the effective potential carefully in order to determine the vacuum structure of the model.
Let us now study the vacuum structure of the model. We follow the standard procedure to find the vacuum configuration. We first solve equations of the first derivative of the effective potential (\[effpotd\]) with respect to the order parameters, $$\begin{aligned}
{{\del \Vb}\over{\del\vbar}}&=&\vbar \left(-\mbar^2+{\lambda\over 2}\vbar^2
+\hat{\theta}_1^2\right)=0,
\label{deru}\\
{{\del \Vb}\over{\del\theta_1}}&=&
\hat{\theta}_1 \vbar^2
+{A\over \pi^2}\sum_{n=1}^{\infty}
{{-1}\over n^3}\left(\sin(n\theta_1)+\sin(n\sum_{i=1}^{N-1}\theta_i)
\right)=0,\label{derd}\\
{{\del \Vb}\over{\del\theta_k}}&=&
{A\over \pi^2}\sum_{n=1}^{\infty}{{-1}\over n^3}
\left(\sin(n\theta_k)+\sin(n\sum_{i=1}^{N-1}\theta_i)\right)=0,
\quad k=2, \cdots, N-1,
\label{dert}\end{aligned}$$ where we have used $\theta_N = - \sum_{i=1}^{N-1}\theta_i$. Solutions to the equations are candidates for the vacuum configuration. Then, we analyze the stability of the solutions against small fluctuations; this constrains the allowed regions of the solutions as local minima of the effective potential. Among the various candidates, if any, the vacuum configuration is the one that gives the lowest energy, i.e., the global minimum of the effective potential. Following these steps, one can obtain the vacuum structure of the model.
The equation (\[deru\]) leads to $$\begin{aligned}
&&\vbar =0,\label{hvevu}\\
&{\rm or}&\nonumber\\
&&-\mbar^2+{\lambda\over 2}\vbar^2+\hat{\theta}_1^2=0.
\label{hvevd}\end{aligned}$$ For the first case (\[hvevu\]), the equations (\[derd\]) and (\[dert\]) are unified into an equation, $${A\over \pi^2}\sum_{n=1}^{\infty}{{-1}\over n^3}
\left(\sin(n\theta_k)+\sin(n\sum_{i=1}^{N-1}\theta_i)\right)=0,
\quad k=1, \cdots, N-1.
\label{eqhosotani}$$ We call the solution to this equation type I, and the solution describes the Hosotani phase. On the other hand, for the second case (\[hvevd\]) we solve the coupled equations, $$\begin{aligned}
{2\over\lambda}(\mbar^2 -\hat{\theta}_1^2)\hat{\theta}_1
+{A\over \pi^2}\sum_{n=1}^{\infty}
{{-1}\over n^3}\left(\sin(n\theta_1)+\sin(n\sum_{i=1}^{N-1}\theta_i)
\right)&=&0,\label{eqcoexistu}\\
{A\over \pi^2}\sum_{n=1}^{\infty}{{-1}\over n^3}
\left(\sin(n\theta_k)+\sin(n\sum_{i=1}^{N-1}\theta_i)\right)&=&0,
\quad k=2, \cdots, N-1.
\label{eqcoexistd}\end{aligned}$$ A solution to the equation is called type II or type III, depending on whether the solution has the scale dependence on the extra dimension or not. The type II (III) corresponds to the Higgs (coexisting) phase, whose vacuum expectation values are independent of (dependent on) the scale of the extra dimension. In order to avoid unnecessary complexity, details of calculations to solve these equations will be given in the appendix. We find that it is convenient to discuss the vacuum structure separately, depending on whether $N$ is odd or even. Let us first study the case $N=$ odd.
$N=$ odd
--------
There are three types of possible vacuum configurations, as shown in the appendix, $$\begin{aligned}
{\rm type~~I}&\cdots&\left\{\begin{array}{l}
gL\vev{A_y}={\rm diag}\left({{N-1}\over N}\pi,
\cdots,{{N-1}\over N}\pi, -{{(N-1)^2}\over N}\pi\right),\\[0.3cm]
\vev{\Phi}={1\over\sqrt{2}}(0,\cdots, 0)^T,
\end{array}\right. \label{hosotani}\\[0.3cm]
{\rm type~~II}&\cdots&\left\{\begin{array}{l}
gL\vev{A_y}={\rm diag}\left(0, \pi, \pi, \cdots, -(N-2)\pi\right),\\[0.3cm]
\vev{\Phi}={1\over\sqrt{2}}(\sqrt{2\over\lambda}m,0,\cdots, 0)^T,
\end{array}\right.\label{higgs}\\[0.3cm]
{\rm type~~III}&\cdots&\left\{\begin{array}{l}
gL\vev{A_y}={\rm diag}\left(\theta_1^-, \pi-{\theta_1^-\over{N-1}},\cdots,
\pi-{\theta_1^-\over{N-1}},
-((N-2)\pi+{\theta_1^-\over{N-1}})\right),\\[0.3cm]
\vev{\Phi}={1\over\sqrt{2}}(v,\cdots, 0)^T.
\end{array}\right.\label{coexist}\end{aligned}$$ The type III solution depends on the scale $\mbar$, and accordingly, $\vbar=\sqrt{{2\over\lambda}(\mbar^2-(\theta_1^{-})^2)}$ does as well. The explicit form of $\theta_1^-(\mbar)$ is given by Eq.(\[cosol\]) in the appendix. As we stated before, we call the vacuum configuration corresponding to the type I, II and III solutions the Hosotani, Higgs and coexisting phases, respectively.
Given the vacuum configuration, the gauge symmetry in each phase is generated by the generators $T^a$ of $SU(N)$ which commute with the Wilson line, $$W\equiv {\cal P}{\rm exp}\left(ig\oint_{S^1}dy \vev{A_y}\right)
={\rm diag}\left(\e^{i\theta_1}, \e^{i\theta_2}, \cdots, \e^{i\theta_N}
\right)\quad
{\rm and} \quad T^a\vev{\Phi}=0.$$ Let us note that the phase $\theta_i$ is defined modulo $2\pi$. It is easy to observe that the $SU(N)$ gauge symmetry is not broken in the Hosotani phase because the Wilson line for the configuration (\[hosotani\]) is proportional to the identity matrix, $W={\rm exp}(i({{N-1}\over N}\pi)){\bf 1}_{N\times N}$. In the Higgs phase, the $SU(N)$ gauge symmetry is broken to $SU(N-1)\times U(1)$ by the Wilson line and the $U(1)$ symmetry is broken by the Higgs VEV, so that the residual gauge symmetry is $SU(N-1)$. Likewise, in the coexisting phase, the residual gauge symmetry is $SU(N-1)$.
It is important to note that each type of vacuum configuration has a restricted region, determined by the scale $\mbar$, in which the configuration is stable against small fluctuations. The region also depends on the parameter $t\equiv \lambda A=4\lambda N_F$. Let us quote the relevant results from the appendix that are necessary to determine the phase structure of the model. The Hosotani phase (type I) is stable for $$0 < \mbar < {{N-1}\over N}\pi\equiv \mbar_2,
\label{regionu}$$ and the Higgs phase (type II) is stable when $\mbar$ satisfies $$\mbar > \left({{2N-3}\over{N-1}}{t\over{24}}\right)^{\half}\equiv
\mbar_3.
\label{regiond}$$ In the coexisting phase (type III), $\theta_1^-(\mbar)$ must satisfy the reality condition $(\theta_1^-({\mbar}))^*=\theta_1^-(\mbar) $ and $$0 \leq \theta_1^-({\mbar})\leq {{N-1}\over N}\pi .
\label{restrictodd}$$ These requirements on $\theta_1^-(\mbar)$ restrict the allowed region of the coexisting phase in the parameter space of $(\bar{m}, t)$. The analysis in the appendix shows that the coexisting phase lies in the region $$\begin{aligned}
\mbar_2 \leq \mbar \le \mbar_3 && \quad{\rm for}\quad
t\ge 48\pi^2\frac{(N-1)^3}{N(N^2-3)} ,
\label{regionofcoexist1}\\
\mbar_1 \leq \mbar \le \mbar_3 && \quad{\rm for}\quad
t < 48\pi^2\frac{(N-1)^3}{N(N^2-3)} ,
\label{regionofcoexist2}\end{aligned}$$ where $\bar{m}_1$ is the critical scale above which the reality condition is fulfilled and is given by Eq.(\[m\_1\]) in the appendix.
Now, we are ready to determine the phase structure of the model. As shown in Fig.1, the lines $\bar{m}_i (i=1,2,3)$ divide the $\bar{m}$-$t$ plane into several regions. Some regions allow only one phase, which is then the vacuum configuration. There are, however, overlapping regions in which two of the three phases remain as candidates for the vacuum configuration. In this case, one has to determine which phase among them gives the lowest energy. Fig.1 will help us understand the phase structure of the model.
Since $\bar{m}_1=\bar{m}_2$ at $t=t_1\equiv
48\pi^2\frac{(N-1)^3}{N(N^2-3)}$ and $\bar{m}_2=\bar{m}_3$ at $t=t_2\equiv 24\pi^2\frac{(N-1)^3}{N^2(2N-3)}$ (no other intersections of the curves $\bar{m}_i$ for $t>0$), it is convenient to consider separately the three parameter regions of $t$: $$\begin{aligned}
48\pi^2{{(N-1)^3}\over{N(N^2-3)}} < & t, &
\hspace{3.9cm}(\mbar_2 < \mbar_3) ,\label{cou}\\
24\pi^2{{(N-1)^3}\over {N^2(2N-3)}}
< &t & \leq 48\pi^2{{(N-1)^3}\over{N(N^2-3)}},~~~
(\mbar_1 \leq \mbar_2 < \mbar_3),\label{cod}\\
& t & \leq 24\pi^2{{(N-1)^3}\over {N^2(2N-3)}},~~
(\mbar_1 < \mbar_3 \leq \mbar_2).\label{cot}\end{aligned}$$ Here, the relative magnitude of the $\bar{m}_i$ for each parameter region of $t$ is shown in parentheses.[^5]
(i) $t > 48\pi^2{{(N-1)^3}\over{N(N^2-3)}}$
We immediately observe that the scale $\mbar_2$ $(\mbar_3)$ is the phase boundary between the Hosotani phase and the coexisting one (the coexisting phase and the Higgs one). There is no overlapping region of the phases for this parameter region of $t$. Thus, the vacuum configuration is given by $$\mbox {vacuum configuration}=\left\{\begin{array}{lll}
\mbox{Hosotani phase} &\mbox{for}& \mbar < \mbar_2,\\
\mbox{coexisting phase} &\mbox{for}& \mbar_2 < \mbar < \mbar_3,\\
\mbox{Higgs phase} & \mbox{for}& \mbar_3 < \mbar.
\end{array}\right.
\label{solu}$$ The order parameters in the Hosotani and coexisting phases (the coexisting and Higgs phases) are connected continuously at the phase boundary $\mbar_2$ $(\mbar_3)$, so that the phase transition is of second order.
(ii) $24\pi^2{{(N-1)^3}\over {N^2(2N-3)}}
< t \leq 48\pi^2{{(N-1)^3}\over{N(N^2-3)}}$
For this parameter region of $t$, the Hosotani and coexisting phases overlap between $\mbar_1$ and $\mbar_2$. Let us consider the quantity $\Delta\Vb\equiv\Vb_{Hosotani}-\Vb_{coexisting}$, which is a monotonically increasing function with respect to $\bar{m}$, $${{\del\Delta\Vb}\over{\del\mbar^2}}=\half \vbar^2(\mbar)\geq 0,
\label{monou}$$ as shown in the appendix. By denoting the scale giving $\Vb_{coexisting}=\Vb_{Hosotani}$ by $\mbar_4$, we can conclude that for $\mbar \leq \mbar_4$ $(\mbar_4 < \mbar\leq \mbar_3)$, the Hosotani (coexisting) phase is realized as the vacuum configuration. The Higgs phase can exist for $ \mbar_3 < \mbar$. Thus, we obtain that $$\mbox {vacuum configuration}=\left\{\begin{array}{lll}
\mbox{Hosotani phase} &\mbox{for}& \mbar < \mbar_4,\\
\mbox{coexisting phase} &\mbox{for}& \mbar_4 < \mbar < \mbar_3,\\
\mbox{Higgs phase} & \mbox{for}& \mbar_3 < \mbar.
\end{array}\right.
\label{sold}$$ The explicit form of the critical scale $\bar{m}_4$ is given by $$\begin{aligned}
\bar{m}_4 =
2\pi\sqrt{\frac{1}{2a}\left( b + \sqrt{b^2 - 4ac}\right)} ,
\label{m_4} \end{aligned}$$ where $$\begin{aligned}
a &=& 48 N^4 (N^2 -3N+3)^2 \biggl( 3(N-1)^3 +
2N(N^2-3N+3)\frac{t}{16\pi^2} \biggr) ,\label{m_4a}\\
b &=& 24 N^2 (N-1)^2 \biggl( 3(N-1)^3 (N^2-N-1)(3N^2-5N+1)\nonumber\\
& &\ \ + 2N^3 (2N-3)^2 (N^2-3N+3) \frac{t}{16\pi^2}\biggr) ,
\label{m_4b}\\
c &=& (N-1)^3 \biggl( -9(N-1)^4 (N^2-N-1)^2
- 6N(N-1)(N^2-N-1)\nonumber\\
& &\ \ \times(11N^4-36N^3+33N^2-9)\frac{t}{16\pi^2}
+ 4N^2(N^2-3)^3 \biggl( \frac{t}{16\pi^2}\biggr)^2
\biggr) . \label{m_4c}\end{aligned}$$ The phase transition at $\mbar=\mbar_3$ is of second order, while that at $\mbar=\mbar_4$ is of first order because the order parameters are not connected continuously.
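Since Eqs.(\[m\_4\])–(\[m\_4c\]) involve many powers of $N$, a direct transcription is a useful check against typos. The sketch below is our own helper, with the combination $t/16\pi^2$ factored out as in the text:

```python
import math

def mbar_4(N, t):
    """Critical scale mbar_4 of Eq. (m_4), with a, b, c as in
    Eqs. (m_4a)-(m_4c)."""
    s = t / (16 * math.pi**2)
    a = 48 * N**4 * (N**2 - 3*N + 3)**2 * (
        3 * (N - 1)**3 + 2 * N * (N**2 - 3*N + 3) * s)
    b = 24 * N**2 * (N - 1)**2 * (
        3 * (N - 1)**3 * (N**2 - N - 1) * (3*N**2 - 5*N + 1)
        + 2 * N**3 * (2*N - 3)**2 * (N**2 - 3*N + 3) * s)
    c = (N - 1)**3 * (
        -9 * (N - 1)**4 * (N**2 - N - 1)**2
        - 6 * N * (N - 1) * (N**2 - N - 1)
          * (11*N**4 - 36*N**3 + 33*N**2 - 9) * s
        + 4 * N**2 * (N**2 - 3)**3 * s**2)
    # mbar_4 = 2*pi*sqrt((b + sqrt(b^2 - 4ac)) / (2a))
    return 2 * math.pi * math.sqrt((b + math.sqrt(b*b - 4*a*c)) / (2*a))
```

For parameters inside region (ii) the discriminant is positive and the result is a finite, positive scale, as the text requires.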
(iii) $ t \leq 24\pi^2{{(N-1)^3}\over {N^2(2N-3)}}$
Let us first compare the potential energy of the Hosotani phase with that of the Higgs phase. Let $\mbar_5$ denote the critical scale at which $\Vb_{Hosotani}=\Vb_{Higgs}$ holds. Then, as shown in the appendix, we obtain that $$\begin{aligned}
\Vb_{Hosotani}< \Vb_{Higgs}\qquad {\rm for}
\qquad \mbar < \mbar_5,\label{hhu}\\
\Vb_{Hosotani}> \Vb_{Higgs}
\qquad {\rm for}\qquad \mbar > \mbar_5,
\label{hhd}\end{aligned}$$ where $$\mbar_5\equiv \left({{(N-1)(N^2-N-1)}\over N^3}
{\pi^2\over{24}}t\right)^{1\over 4}.
\label{regiont}$$ The parameter region of $t$ is further classified into two cases, depending on the relative magnitude between $\mbar_5$ and $\mbar_3$:
(iii-a) $24\pi^2 {{(N-1)^3(N^2-N-1)}\over {N^3 (2N-3)^2}}
< t \leq 24\pi^2{{(N-1)^3}\over {N^2(2N-3)}}$
In this case, the relative magnitude of $\bar{m}_i\ (i=1,\cdots,5)$ is given by $\bar{m}_1<\bar{m}_4\le\bar{m}_5\le\bar{m}_3<\bar{m}_2$ (see Fig.1). It immediately follows that the vacuum configuration is uniquely determined as $$\mbox {vacuum configuration}=\left\{\begin{array}{lll}
\mbox{Hosotani phase} &\mbox{for}& \mbar < \mbar_4,\\
\mbox{coexisting phase} &\mbox{for}& \mbar_4 < \mbar < \mbar_3,\\
\mbox{Higgs phase} & \mbox{for}& \mbar_3 < \mbar.
\end{array}\right.
\label{solt}$$ The vacuum structure is similar to that in the case (ii), and the phase transition at $\bar{m}=\bar{m}_3$ ($\bar{m}=\bar{m}_4$) is of second (first) order.
(iii-b) $ t\leq 24\pi^2 {{(N-1)^3(N^2-N-1)}\over {N^3 (2N-3)^2}}$
In this case, we have $\mbar_1 \le \mbar_3 \le \bar{m}_5 < \bar{m}_2$ (see Fig.1). We observe that the Hosotani and coexisting phases overlap between $\mbar_1$ and $\mbar_3$. Let us recall that the difference of the potential energy between the Hosotani phase and the coexisting one, $\Delta\Vb$, is a monotonically increasing function with respect to $\bar{m}$, and we find that $$\Delta \Vb(\mbar=\mbar_3)={t\over{2\lambda}}
\left({{2N-3}\over{24(N-1)}}\right)^2
\left(t- 24\pi^2 {{(N-1)^3(N^2-N-1)}\over {N^3 (2N-3)^2}}\right) \leq 0,
\label{valu}$$ [*i.e.*]{} $\Vb_{Hosotani} \leq \Vb_{coexisting}$ for the parameter region of $t$ under consideration. This implies that there is no coexisting phase for this parameter region of $t$. Thus, taking Eq.(\[hhd\]) into account, we obtain that $$\mbox {vacuum configuration}=\left\{\begin{array}{lll}
\mbox{Hosotani phase} &\mbox{for}& \mbar < \mbar_5,\\
\mbox{Higgs phase} & \mbox{for}& \mbar > \mbar_5.
\end{array}\right.
\label{solq}$$ The order parameters are not connected continuously at $\mbar_5$, so that the phase transition between the two phases is of first order.
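The critical scale $\mbar_5$ of Eq.(\[regiont\]) used in the sub-cases above can be evaluated directly. A minimal sketch (our own helper name):

```python
import math

def mbar_5(N, t):
    """Critical scale of Eq. (regiont), where
    Vbar_Hosotani = Vbar_Higgs."""
    return ((N - 1) * (N**2 - N - 1) / N**3
            * math.pi**2 * t / 24) ** 0.25
```

The quartic root makes $\mbar_5$ grow slowly with $t$, which is why the boundary between the Hosotani and Higgs phases in Fig.2 flattens at large $t$.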
Collecting all the results obtained above, we depict the phase structure of the model in Fig.2. It should be noted that the Hosotani mechanism, which usually breaks gauge symmetry, here provides a mechanism for the [*restoration*]{} of the gauge symmetry in the model.
$N=$ even $\geq 4$
------------------
Let us study the case $N=$ even $(\geq 4)$. The type I solution corresponding to the Hosotani phase is given by solving Eq.(\[eqhosotani\]). We obtain, as shown in the appendix, that $${\rm type~~I}~~\cdots~~\left\{\begin{array}{l}
gL\vev{A_y}={\rm diag}\left(\pi, \pi,\cdots,
\pi, -(N-1)\pi\right),\\[0.3cm]
\vev{\Phi}={1\over\sqrt{2}}(0,0,\cdots, 0)^T,
\end{array}\right.
\label{hosotanid}$$ where the phase is stable for the region given by $$0<\mbar < \pi.
\label{regionq}$$ The Wilson line for the configuration (\[hosotanid\]) is $-{\bf 1}_{N\times N}$ and commutes with all the generators of $SU(N)$, so that the $SU(N)$ symmetry is not broken in the phase.
In order to investigate other phases, one has to solve the third-order equation with respect to $\theta_1$, $$\left(1+{{t\alpha}\over{24\pi^2}}\right)\theta_1^3
-\left({{t\alpha}\over{8\pi}} +6\pi\right)\theta_1^2
+\left({{t\beta}\over{24}}-\mbar^2+12\pi^2\right)\theta_1
+{{t\gamma}\over{24}}\pi +2\pi\mbar^2 -8\pi^3=0,
\label{eqevendt}$$ where $\alpha, \beta$ and $\gamma$ are constants and any solution to Eq.(\[eqevendt\]) has to lie in the range $$\begin{aligned}
\pi \le \theta_1 \le 2\pi ,
\label{restricteven}\end{aligned}$$ as shown in the appendix. This equation has a very different structure from the one for $N=$ odd (see Eq.(\[cohiggsu\]) in the appendix). It has no solution with $\theta_1=2\pi$ for any values of $t$ and $\mbar$. This implies that the Higgs phase $${\rm type~~II}~~\cdots~~\left\{\begin{array}{l}
gL\vev{A_y}={\rm diag}\left(0, {{N-2}\over{N-1}}\pi,\cdots,
{{N-2}\over{N-1}}\pi, -{{(N-2)^2}\over{N-1}}\pi\right),\\[0.3cm]
\vev{\Phi}={1\over\sqrt{2}}(\sqrt{2\over\lambda}m,0,\cdots, 0)^T,
\end{array}\right.
\label{higgsd}$$ does not exist, unlike in the case $N=$ odd. The Higgs phase can be realized only in the limit $L \rightarrow \infty$ (or $\bar{m}\rightarrow \infty$), as we will see later. The $SU(N)$ gauge symmetry is broken to $SU(N-1)\times U(1)$ by the Wilson line and the Higgs VEV breaks the $U(1)$, so that the residual gauge symmetry is $SU(N-1)$ for the configuration (\[higgsd\]).
In order to see that the vacuum configuration is expected to approach the Higgs phase in the limit $\bar{m}\rightarrow \infty$, let us first note that the classical Higgs potential dominates the effective potential in this limit. Then, we obtain the nonvanishing Higgs VEV $v=\sqrt{2/\lambda}\,m$. It follows that the equation (\[deru\]) results in $\hat{\theta}_1 =0$ or $\theta_1 =0$ mod $2\pi$. This result is also derived from Eq.(\[eqevendt\]) by taking the limit $\bar{m}\rightarrow \infty$. For these values of the order parameters, the equation we have to solve becomes the same as the one that produces the Hosotani phase for the case $N=$ odd, but with $N$ replaced by $N-1$ ($=$ odd) and with nonvanishing $\bar{v}$. Hence, we finally arrive at the solution (\[higgsd\]).
Our task is now to solve the equation (\[eqevendt\]) for finite sizes of $S^1$ and to confirm the phase structure for the case $N=$ even depicted in Fig.3. The coexisting phase is given by $${\rm type~~III}~~\cdots~~\left\{\begin{array}{l}
gL\vev{A_y}={\rm diag}\left(\theta_c, {N\over{N-1}}\pi
-{\theta_c\over{N-1}},\cdots,{N\over{N-1}}\pi-{\theta_c\over{N-1}},
{{-\theta_c}\over{N-1}}-{{N(N-2)}\over{N-1}}\pi\right),\\[0.3cm]
\vev{\Phi}={1\over\sqrt{2}}(\sqrt{2\over\lambda}v,0,\cdots, 0)^T,
\end{array}\right.
\label{coexistd}$$ where $\vbar =\sqrt{{2\over\lambda}(\mbar^2-(2\pi-\theta_c)^2)}$ and $\theta_c$ is the solution for the coexisting phase (see below). The $SU(N)$ gauge symmetry is broken to $SU(N-1)\times U(1)$ by the Wilson line and the Higgs VEV breaks the $U(1)$, so that the residual gauge symmetry is $SU(N-1)$ for the vacuum configuration (\[coexistd\]).
In order to confirm that the phase structure is actually given by Fig.3, it is convenient to consider intersections of the two functions defined by $$\begin{aligned}
F(\theta_1)&\equiv &2(\mbar^2-(2\pi - \theta_1)^2)(2\pi -\theta_1),\\
G(\theta_1)& \equiv &{{-t}\over{12\pi^2}}
\left(\alpha\theta_1^3 -3\pi \alpha\theta_1^2 +\beta \pi^2\theta_1
+ \gamma \pi^3\right).
\label{fgeq}\end{aligned}$$ Let us note that $F(\theta_1)=G(\theta_1)$, in fact, reproduces Eq.(\[eqevendt\]) and that $G(\theta_1)$ is independent of $\bar{m}$. Since the intersections of the functions $F(\theta_1)$ and $G(\theta_1)$ behave differently for $t > 48\pi^2 (N-1)/N$ and $t < 48\pi^2 (N-1)/N$, as discussed in the appendix, it is convenient to investigate the phase structure separately for each region of $t$ (see Figs.4 and 5).
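The intersection $F(\theta_1)=G(\theta_1)$ can also be located numerically. The sketch below uses bisection on $(\pi, 2\pi)$; since the constants $\alpha$, $\beta$, $\gamma$ are left unspecified in the text, the default values here are placeholders for illustration only:

```python
import math

def theta_c(mbar, t, alpha=1.0, beta=1.0, gamma=1.0, tol=1e-10):
    """Find a root of F(theta) - G(theta) in (pi, 2*pi) by bisection.
    alpha, beta, gamma are illustrative placeholder values."""
    F = lambda th: 2 * (mbar**2 - (2*math.pi - th)**2) * (2*math.pi - th)
    G = lambda th: -t / (12 * math.pi**2) * (
        alpha*th**3 - 3*math.pi*alpha*th**2
        + beta*math.pi**2*th + gamma*math.pi**3)
    h = lambda th: F(th) - G(th)
    lo, hi = math.pi + 1e-9, 2*math.pi - 1e-9
    if h(lo) * h(hi) > 0:
        return None  # no sign change in the bracket
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Bisection is adequate here because the solution of interest is restricted to the interval (\[restricteven\]) by construction.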
(i) $t> 48\pi^2 {{N-1}\over N}$
In this parameter region of $t$, there is one solution, denoted by $\theta_c$, for $\bar{m} > \pi$, as shown in Fig.4. The solution satisfies the condition (\[restricteven\]) and is found to be stable, as discussed in the appendix. Since the Hosotani phase is unstable for $\bar{m}>\pi$, the coexisting phase must be the vacuum configuration for $\bar{m}>\pi$.
When the scale $\bar{m}$ approaches $\pi$, $\theta_c$ moves closer to $\pi$ and finally coincides with $\pi$ at $\bar{m}=\pi$. This implies that the type III solution (coexisting phase) becomes identical to the type I solution (Hosotani phase). As the scale becomes smaller than $\pi$, the solution moves outside the required region (\[restricteven\]). Hence, there is no coexisting phase for $\bar{m}<\pi$, so that the Hosotani phase must be the vacuum configuration for $\bar{m}<\pi$. Thus, we obtain that $$\mbox {vacuum configuration}=\left\{\begin{array}{lll}
\mbox{Hosotani phase} &\mbox{for}& \mbar < \pi,\\
\mbox{coexisting phase} &\mbox{for}& \mbar > \pi.
\end{array}\right.
\label{vaceven1}$$ Since the order parameters are connected continuously at the phase boundary $\bar{m}=\pi$, the phase transition is of second order.
(ii) $t < 48\pi^2 {{N-1}\over N}$
In this parameter region of $t$, we observe in Fig.5 that there is one solution denoted by $\theta_c$ for $\bar{m}>\pi$. The solution satisfies the condition (\[restricteven\]) and is stable, so that the coexisting phase is the vacuum configuration for $\bar{m}>\pi$, as in the case (i).
Unlike the case (i), $\theta_c$ does not approach $\pi$ as $\bar{m}\rightarrow \pi$. When the scale $\bar{m}$ is equal to $\pi$, there appears a new solution, denoted by $\theta_{c}^{\prime} = \pi$, while $\theta_c$ still lies between $\pi$ and $2\pi$, as shown in Fig.5. If we go to scales smaller than $\pi$, there are two solutions $\theta_{c}^{\prime}$ and $\theta_c$ with $\pi<\theta_{c}^{\prime}\le \theta_c <2\pi$ for $\bar{m}_1^{\prime} \le \bar{m} < \pi$, where the two solutions coincide at $\bar{m}=\bar{m}_1^{\prime}$ (see Fig.5). Since there are no solutions in the required region of $\theta_1$ below the scale $\bar{m}_1^{\prime}$, the coexisting phase disappears and the Hosotani phase must be the vacuum configuration for $\bar{m}<\bar{m}_1^{\prime}$.
One has to be careful about what happens in the region $\mbar_1^{\prime} \leq \mbar \leq \pi$. For this region, there are two solutions, $\theta_c$ and $\theta_{c}^{\prime}$, to Eq.(\[eqevendt\]) or the equation $F(\theta_1)=G(\theta_1)$. It turns out that the solution $\theta_c$ is stable but the other one is unstable, as shown in the appendix. Thus, there are two candidates for the vacuum configuration, [*i.e.*]{} the Hosotani phase and the coexisting phase given by $\theta_c$. Since $\theta_{c}^{\prime}=\theta_c$ ($\theta_{c}^{\prime}=\pi$) at $\bar{m}=\bar{m}_1^{\prime}$ ($\bar{m}=\pi$), we find that the unstable solution $\theta_{c}^{\prime}$ becomes identical to the coexisting phase given by $\theta_c$ (the Hosotani phase) at $\bar{m}=\bar{m}_1^{\prime}$ ($\bar{m}=\pi)$. This shows that the coexisting phase (the Hosotani phase) is not the vacuum configuration at the boundary $\bar{m}=\bar{m}_1^{\prime}$ $(\bar{m}=\pi)$. This observation implies that there exists a critical scale $\bar{m}_4^{\prime}$ such that[^6] $$\Vb(\theta_i, \vbar,\mbar_{4}^{\prime})\bigg|_{\rm type~I}
=\Vb(\theta_i, \vbar,\mbar_{4}^{\prime})\bigg|_{\rm type~III(\theta_c)},$$ at which the first-order phase transition must occur. Since the above equation is satisfied only once for $\bar{m}_1^{\prime}\le\bar{m}\le\pi$, we obtain the phase structure for $t<48\pi^2\frac{N-1}{N}$ as $$\mbox {vacuum configuration}=\left\{\begin{array}{lll}
\mbox{Hosotani phase} &\mbox{for}& \mbar < \bar{m}_{4}^{\prime},\\
\mbox{coexisting phase} &\mbox{for}& \mbar > \bar{m}_{4}^{\prime}.
\end{array}\right.
\label{vaceven2}$$
Collecting all the discussions we have made in this subsection, we confirm the phase structure depicted in Fig.3. We should emphasize again that the Hosotani mechanism works to restore the gauge symmetry, as in the case $N=$ odd.
Other Models
============
In this section we study how phase structures change if we consider different representations of matter fields under the gauge group. Let us first introduce $N_F$ fermions in the adjoint representation under $SU(N)$ instead of those in the fundamental representation. Then, the last term in Eq.(\[effpotu\]), which stands for the fermion one-loop correction, is replaced by $${A\over{\pi^2 L^4}}\sum_{n=1}^{\infty}\sum_{i,j=1}^{N}
{1\over n^4}\cos(n(\theta_i -\theta_j)).
\label{adjfermi}$$ It is known that the function (\[adjfermi\]) is minimized by the configuration that breaks the $SU(N)$ gauge symmetry to $U(1)^{N-1}$ [@adjointmodel], $$gL\vev{A_y}={\rm diag}\left({{N-1}\over N}\pi, {{N-3}\over N}\pi,
\cdots,
-{{N-3}\over N}\pi, -{{N-1}\over N}\pi \right).
\label{configadj}$$ Note that a zero eigenvalue located at the ${{N+1}\over 2}$th component appears for the case $N=$ odd, while all the components are nonzero for $N=$ even. This implies, again, that the phase structure is different, depending on whether $N=$ odd or even.
If $N=$ odd, it is possible for the Higgs VEV to take nonzero values, keeping the cross term vanishing and minimizing the Higgs potential, thanks to the zero in Eq.(\[configadj\]). Then, one of the $U(1)$'s is broken by the Higgs VEV, so that the residual gauge symmetry is $U(1)^{N-2}$. It is important to note that the model has only one phase, whose structure does not depend on the size of the extra dimension. This is a new feature that has not been observed in the phase structures obtained in the previous section. If $N=$ even $(\geq 4)$, there appears no zero component in Eq.(\[configadj\]). One needs to study the effective potential carefully in order to determine the vacuum structure of the model. In the case of the $SU(2)$ gauge group with $N_F$ massless adjoint fermions, we can perform fully analytic calculations, and the phase structure is found to be very similar to the one obtained in the previous paper, though the residual gauge symmetry in the Hosotani phase is given by $U(1)$ in this case.
Let us next consider the adjoint Higgs instead of the fundamental Higgs. Both $\vev{A_y}$ and $\vev{\Phi}$ belong to the adjoint representation under the $SU(N)$ gauge group. The cross term corresponding to the third term in the effective potential (\[effpotu\]) is replaced by $$g^2{\rm tr}\left([\vev{A_y},~\vev{\Phi}]^2\right),$$ which is positive semidefinite. The diagonal form of $\vev{\Phi}$ makes the term vanish, thus minimizing the effective potential. Then, the effective potential is divided into two parts written in terms of $\theta_i$ or $v$ alone. As a result, the minimization can be carried out separately with respect to the order parameters, so that the phase structure does not depend on the size of $S^1$. The residual gauge symmetry is determined by the generators of $SU(N)$ which commute with both $\vev{A_y}$ and $\vev{\Phi}$. Though we already know that the gauge symmetry remains $SU(N)$ (is broken to $U(1)^{N-1}$) through the Hosotani mechanism if the fermions belong to the fundamental (adjoint) representation under $SU(N)$, the actual residual gauge symmetry depends on the structure of the Higgs potential.
If we assume the same type of Higgs potential as Eq.(\[potential\]), for example, there are generally flat directions parametrized as $\vbar_1^2 +\cdots +\vbar_{N-1}^2
+(\vbar_1 +\cdots + \vbar_{N-1})^2=\mbar^2/\lambda$ in the effective potential. The residual gauge symmetry is not uniquely determined in this case[^7]. For the fermions belonging to the fundamental representation under $SU(N)$, depending on the form of the Higgs VEV, the $SU(N)$ gauge symmetry is broken to its subgroup. On the other hand, for the fermions in the adjoint representation, the residual gauge symmetry is $U(1)^{N-1}$, irrespective of the flat directions.
Conclusions and Discussions
===========================
We have studied the phase structure of the $SU(N)$ gauge-Higgs models with $N_F$ massless fermions on the space-time $M^3\otimes S^1$. There are two kinds of order parameters for gauge symmetry breaking in the models. One is the vacuum expectation value of the Higgs field $\vev{\Phi}$ (the Higgs mechanism) and the other is the vacuum expectation value of the component gauge field for the $S^1$ direction $\vev{A_y}$ (the Hosotani mechanism). The former works at the tree level, while the latter is effective at the quantum level and sensitive to the size of $S^1$. There is also the interaction between $\vev{A_y}$ and $\vev{\Phi}$, which depends on the size as well. Thus, the dominant contribution to the effective potential comes from different physical origins, depending on the size of the extra dimension. Therefore, the phase structure depends on the size (in addition to the parameters of the models) in general. This is expected to be a general feature of gauge-Higgs models on such space-times.
We have computed the effective potential for the two kinds of order parameters in a one-loop approximation. In the calculation we have assumed that the number of massless fermions is large enough that the one-loop contributions from the gauge and Higgs fields to the effective potential can be neglected. Then, we have obtained the effective potential given by Eq.(\[effpotu\]). It turns out that the existence of the cross term in the potential, which comes from the interaction between $\vev{A_y}$ and $\vev{\Phi}$ in $D_{y}\Phi$, plays a crucial role in determining the phase structure of the model.
We have first considered the case where both the fermion and Higgs fields belong to the fundamental representation under $SU(N)$. The model possesses three phases, called the Hosotani, Higgs and coexisting phases, for $N=$ odd, while for $N=$ even the model has only two phases, the Hosotani and coexisting phases. The Higgs phase does not exist for finite sizes of $S^1$ when $N=$ even. The phase structure depends on both the size of the extra dimension and the parameters of the model. We have obtained the phase structure depicted in Fig. 2 (3) for $N=$ odd (even). It should be noted that, contrary to the usual case, the Hosotani mechanism can play the role of restoring gauge symmetry in the model.
We have next considered the case where the representation of the fermions is changed to the adjoint representation under $SU(N)$. The $SU(N)$ gauge symmetry is maximally broken to $U(1)^{N-1}$ through the Hosotani mechanism. If $N=$ odd, the Higgs field can acquire a nonvanishing vacuum expectation value, keeping the cross term vanishing. Then, one of the $U(1)$'s is further broken by the Higgs VEV, so that the residual gauge symmetry is $U(1)^{N-2}$. There is only one phase in the model, which does not depend on the size of the extra dimension. On the other hand, if $N=$ even $(\geq 4)$, due to the absence of a zero component in $\vev{A_y}$, unlike in the case $N=$ odd, one has to study the effective potential carefully in order to investigate the phase structure of the model. The phase structure for $SU(2)$, however, can be fully studied analytically and is similar to the one obtained in the previous paper [@previous]. The residual gauge symmetry in the Hosotani phase is given by $U(1)$ in this case.
We have also considered the case that the Higgs field belongs to the adjoint representation under $SU(N)$. Both $\vev{A_y}$ and $\vev{\Phi}$ belong to the adjoint representation, and they cannot be diagonalized simultaneously, in general. The cross term, however, requires the diagonal form of $\vev{\Phi}$ in order for the effective potential to be minimized. Then, the effective potential is separated into two parts with respect to the order parameters in our approximation, and the minimization of the potential is carried out separately. This implies that the phase structure of the model does not depend on the size of the extra dimension and there is only one phase in the model. The residual gauge symmetry in the phase is generated by the generators of $SU(N)$ commuting with both $\vev{A_y}$ and $\vev{\Phi}$, and it depends on the detailed structure of the Higgs potential.
Our models have been studied on the space-time $M^3\otimes S^1$. One may wonder what will happen if we consider models on $M^4\otimes S^1$, or more generally, $M^{D-1}\otimes S^1$. Qualitative features such as the existence of the several phases and their structure with respect to the scale will not change even if we go to higher dimensions. The phase structure comes from the fact that each term in the effective potential (\[effpotu\]) has a different dependence on the size of the extra dimension. In other words, each term has its own physical origin, distinct from the others, which persists in higher dimensions. If we start with the space-time $M^{D-1}\otimes S^1$, the fermion one-loop correction is given by $${{2^{[{D\over 2}]}
N_F \Gamma(\frac{D}{2})}
\over{\pi^{{D\over 2}}L^D}}
\sum_{i=1}^{N}\sum_{n=1}^{\infty}{1\over n^D}\cos(n\theta_i).$$ The scale of the term is governed by the factor $1/L^D$ in place of $1/L^4$ in $D$ dimensions. The minimum of this function is given by the same configuration as Eqs.(\[hosotani\]) or (\[hosotanid\]) in Sec. 2. This means that the global minimum of the correction does not depend on the total dimension. This is because the Hosotani mechanism is controlled by infrared physics, like the Casimir effect. In fact, the one-loop potential is governed only by the light Kaluza-Klein modes, so that they mainly determine the dynamics. On the contrary, the contributions of the heavy modes to the effective potential are suppressed more strongly as the dimension becomes higher. The cross term, which is crucial for the phase structure, also persists in higher dimensions in the same way. Therefore, we expect that the qualitative features found in this paper do not change even if we start with higher dimensions.
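The claim that the location of the minimum does not depend on $D$ can be spot-checked numerically for a single angle. The sketch below (our own helper names) scans a truncation of $\sum_n \cos(n\theta)/n^D$ over a grid and confirms that the minimum sits at $\theta=\pi$ for different $D$:

```python
import math

def one_loop_sum(theta, D, n_max=2000):
    """Partial sum of sum_n cos(n*theta)/n**D, the theta-dependence of
    the fermion one-loop correction in D dimensions (prefactor dropped)."""
    return sum(math.cos(n * theta) / n**D for n in range(1, n_max + 1))

def argmin_theta(D, grid=181):
    """Grid minimizer of the partial sum over [0, 2*pi]."""
    thetas = [2 * math.pi * k / (grid - 1) for k in range(grid)]
    return min(thetas, key=lambda th: one_loop_sum(th, D))
```

The truncation error is bounded by the tail $\sum_{n>n_{\max}} n^{-D}$, which is negligible here since $D\ge 4$.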
In computing the effective potential, we have neglected the one-loop corrections to $\vev{A_y}$ from the gauge and Higgs fields. We have assumed that the number of massless fermions is large enough that these contributions are suppressed. One needs to take account of these contributions to the effective potential for small $N_F$ in order to understand the whole vacuum structure of the model. Namely, it is expected that the phase structure for small radius of $S^1$ is more involved because the neglected terms start to come into play in the effective potential.
We have also ignored the one-loop corrections to the Higgs potential from the gauge and Higgs fields, such as $c\frac{g^2}{L^2}\Phi^{\dagger}\Phi$ and $c^{\prime}\frac{\lambda}{L^2}\Phi^{\dagger}\Phi$, by assuming that the couplings $g$ and $\lambda$ are sufficiently small. Those mass corrections are irrelevant to the model considered in Sec.2, but they could cause gauge symmetry restoration at very small scales for models with nonvanishing Higgs VEV, like the models with one phase found in Sec.3. Mass corrections to Higgs potentials at finite temperatures or finite scales of extra dimensions have been investigated in the literature [@O(N); @model; @finitetemperature; @finiteradius] and their effects on gauge symmetry breaking/restoration are well understood. Since the subject is not our main concern, we do not discuss it further.
There are several directions in which to extend our studies. It is interesting to investigate how rich the gauge symmetry breaking patterns in the phase diagram can become by introducing matter fields belonging to various representations of gauge groups[^8]. This study is related to the new approach to the gauge hierarchy problem we proposed in the previous paper. We have considered massless fermions throughout the analyses. In connection with the suppression of the effective potential by the large fermion number, a massive fermion also modifies the size of the fermion one-loop correction, like $\e^{-mL}/L^4$ for $L>m^{-1}$. It is interesting to see how massive fermions affect the phase structure. We should finally stress that if the Standard Model were embedded in a higher dimensional theory with a multiply connected space, our studies would have physical importance because the theory would belong to the class of gauge-Higgs systems on multiply connected spaces. It would be of importance to investigate the phase structure and clarify its physical consequences at low energies. Those will be reported elsewhere.
[**Acknowledgements**]{}
K.T. would like to thank the Dublin Institute for Advanced Study for warm hospitality, where part of this work was done. This work was supported in part by a JSPS Research Fellowship for Young Scientists (H.H.).
[**Appendix**]{}
(A) [*Parametrization of the vacuum expectation value for the Higgs field*]{}
We shall show that the parameterization (\[vev\]) of the vacuum expectation value for the Higgs field minimizes the effective potential (\[effpotu\]).
The classical part of the effective potential for $\vev{A_y}$ and a general $\vev{\Phi}$ with all components $v_i$ kept is given by $$\begin{aligned}
V_{cl}&=&-\half m^2\sum_{i=1}^{N}|v_i|^2
+ {\lambda\over 8}\left(\sum_{i=1}^{N} |v_i|^2\right)^2
+\frac{1}{2L^2}\sum_{i=1}^N\hat{\theta}_i^2 |v_i|^2 ,
\label{classical}\end{aligned}$$ where $\hat{\theta}_i = \theta_i$ mod $2\pi$ with $|\hat{\theta}_i| \le \pi$ for $i=1,\cdots, N$. Without loss of generality, we can assume that $\abs{\hat{\theta}_1}\leq\abs{\hat{\theta}_2}\leq\cdots\leq
\abs{\hat{\theta}_{N}}$. Then, it is convenient to rewrite $V_{cl}$ into the form $$\begin{aligned}
V_{cl} &=& V_1 +V_2 ,\end{aligned}$$ where $$\begin{aligned}
V_{1}&=&-\frac{1}{2L^2}( m^2 L^2 - \hat{\theta}_{1}^{2})
\sum_{i=1}^{N}|v_i|^2
+ {\lambda\over 8}\left(\sum_{i=1}^{N} |v_i|^2\right)^2 ,
\\
V_{2}&=& \frac{1}{2L^2}
\sum_{i=2}^{N}(\hat{\theta}_i^2 - \hat{\theta}_{1}^{2})|v_i|^2 .
\label{V1V2}\end{aligned}$$ Note that $V_1$ depends only on $\sum_{i=1}^{N}|v_i|^2 $ (and $\hat{\theta}_i$) and $V_2$ is positive semidefinite.
Let us now consider the minimization problem of $V_{cl}$ for fixed $\theta_i$ ($i=1,\cdots, N$). Suppose that the minimum of $V_1$ for fixed $\theta_i$ is realized by $\sum_{i=1}^{N} |v_i|^2 = v^2$ for a real constant $v$. Then, it is easy to see that the configuration $\vev{\Phi} = \frac{1}{\sqrt{2}}(v_1, 0,\cdots, 0)^T$ with $|v_1|^2 = v^2$ gives the minimum of $V_{cl}$ for fixed $\theta_i$, because it realizes the minimum values of $V_1$ and $V_2$ simultaneously. By using a $U(1)$ symmetry to make $v_1$ real, we arrive at the expression (\[vev\]). Since the incorporation of quantum corrections to $\vev{A_y}$ does not alter the above discussion, we have justified the parameterization (\[vev\]) used in the text.
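The argument that concentrating the whole VEV on the component with the smallest $|\hat{\theta}_i|$ minimizes $V_{cl}$ can be spot-checked numerically. The helper below is ours; it transcribes Eq.(\[classical\]) directly:

```python
def V_cl(thetas_hat, vs, m, lam, L):
    """Classical potential of Eq. (classical) for given |v_i| and
    theta_hat_i (all real, thetas_hat assumed sorted ascending)."""
    s = sum(v * v for v in vs)
    return (-0.5 * m * m * s
            + lam / 8 * s * s
            + 0.5 / L**2 * sum(th*th * v*v
                               for th, v in zip(thetas_hat, vs)))
```

At fixed total norm $\sum_i |v_i|^2$, only the $V_2$ piece varies between configurations, so the concentrated VEV always wins.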
(B) [*Expressions and results*]{}
We shall derive some expressions and results used in the text.
(B)-1 [*Hosotani phase and its stability*]{}
The Hosotani phase is obtained by solving the equation (\[eqhosotani\]), and the Higgs VEV is given by Eq.(\[hvevu\]) in the text. We work on the space-time $M^3\otimes S^1$, so that we can use the formula $$\sum_{n=1}^{\infty}{1\over n^4}\cos(nx)={{-1}\over {48}}x^2(x-2\pi)^2
+{{\pi^4}\over {90}}\qquad \mbox{for} \ 0\le x \le 2\pi
\label{formulad}$$ from which we have $$\begin{aligned}
\sum_{n=1}^{\infty}{1\over n^3}\sin(nx)&=&{1\over {12}}x(x-\pi)(x-2\pi),
\label{formulau}\\
\sum_{n=1}^{\infty}{1\over{n^2}}\cos(nx)&=&{1\over 4}x(x-2\pi)+{\pi^2\over
6}.
\label{formulat}\end{aligned}$$ Note that the minimum of the function (\[formulad\]) is located at $x=\pi$.
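The closed form (\[formulad\]) and its stated minimum at $x=\pi$ are easy to verify against a truncated series. A minimal numerical check (helper names are ours):

```python
import math

def closed_form(x):
    """Right-hand side of Eq. (formulad), valid for 0 <= x <= 2*pi."""
    return -x * x * (x - 2 * math.pi)**2 / 48 + math.pi**4 / 90

def partial_sum(x, n_max=2000):
    """Truncation of sum_n cos(n*x)/n**4."""
    return sum(math.cos(n * x) / n**4 for n in range(1, n_max + 1))
```

The absolute truncation error is below $\sum_{n>2000} n^{-4} \approx 4\times 10^{-11}$, so agreement to $10^{-8}$ is a solid check.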
To simplify our analysis, let us assume that the $\theta_i\ (i=1,\cdots, N-1)$ lie in the range $0\le\theta_i<2\pi$. This can be done without loss of generality. Applying the formula (\[formulau\]) to Eq.(\[eqhosotani\]), we obtain $$\begin{aligned}
&&\biggl(\theta_k +\sum_{i=1}^{N-1}\theta_i-2\pi q\biggr)
\biggl(\theta_k^2+\Bigl(\sum_{i=1}^{N-1}\theta_i
- 2\pi(q-1)\Bigr)^2 - \theta_k\Bigl(\sum_{i=1}^{N-1}\theta_i
-2\pi(q-1)\Bigr)\nonumber\\
&&\qquad
-\pi \theta_k - \pi \Bigl(\sum_{i=1}^{N-1}\theta_i
- 2\pi(q-1)\Bigr)\biggr)
=0 \qquad \mbox{for}\ k=1,2,\cdots, N-1,
\label{extuno}\end{aligned}$$ where the integer $q$ is defined by the requirement $$\begin{aligned}
0 \le \sum_{i=1}^{N-1} \theta_i - 2\pi (q-1) < 2\pi .\end{aligned}$$ Let us first study the case given by $$\begin{aligned}
\theta_k + \sum_{i=1}^{N-1} \theta_i = 2\pi q\qquad
\mbox{for}\ k=1,2,\cdots, N-1.
\label{extdue}\end{aligned}$$ The solution to the equation is obtained as $$\begin{aligned}
\theta \equiv \theta_k =\frac{2\pi q}{N}\ \ (k=1,\cdots, N-1),
\quad q=0,1,\cdots, N-1.\end{aligned}$$ Noting that the effective potential is now recast as $$\Vb={{AN}\over{\pi^2}}\sum_{n=1}^{\infty}{1\over n^4}
\cos(n{{2\pi q}\over{N}}),$$ we find that the potential is minimized at $q=\frac{N-1}{2}$ for $N=$ odd and at $q=\frac{N}{2}$ for $N=$ even. Thus, we have $$(\vbar, \theta)=\left\{\begin{array}{ll}
(0,~~{{N-1}\over N}\pi),& N={\rm odd},\\[0.4cm]
(0,~~\pi),& N={\rm even},\end{array}\right.
\label{solap}$$ which give the solutions (\[hosotani\]) and (\[hosotanid\]) corresponding to the Hosotani phase in the text.
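The minimization over $q$ can be reproduced directly by combining the closed form (\[formulad\]) with the potential above. In the sketch below (our own helper names) the overall positive prefactor $A/\pi^2$ is dropped, since it does not affect the minimizer:

```python
import math

def Vbar_hosotani(N, q):
    """N * f(2*pi*q/N), where f is the closed form of Eq. (formulad);
    the positive prefactor A/pi^2 is omitted."""
    x = 2 * math.pi * q / N
    return N * (-x * x * (x - 2 * math.pi)**2 / 48 + math.pi**4 / 90)

def best_q(N):
    """q in {0, ..., N-1} minimizing the potential."""
    return min(range(N), key=lambda q: Vbar_hosotani(N, q))
```

For even $N$ the minimum at $q=N/2$ is unique ($x=\pi$ exactly); for odd $N$ the two neighbors $q=(N\pm1)/2$ are degenerate by the symmetry of (\[formulad\]) about $x=\pi$, and the text picks $q=(N-1)/2$.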
Let us next discuss the stability of the above solution against small fluctuations. The stability is guaranteed if all the eigenvalues of the Hessian are positive. The Hessian is given by the second derivative of the effective potential with respect to the order parameters, $$H \equiv
\left(\begin{array}{cccc}
{{\del^2\Vb}\over{\del\vbar^2}} &{{\del^2\Vb}\over{\del\vbar\del\theta_1}} &
\cdots & {{\del^2\Vb}\over{\del\vbar\del\theta_{N-1}}} \\
{{\del^2\Vb}\over{\del\vbar\del\theta_1}}
&{{\del^2\Vb}\over{\del\theta_1^2}} &
\cdots & {{\del^2\Vb}\over{\del\theta_1\del\theta_{N-1}}} \\
\vdots & \vdots & \ddots & \vdots \\
{{\del^2\Vb}\over{\del\vbar\del\theta_{N-1}}} &
{{\del^2\Vb}\over{\del\theta_1\del\theta_{N-1}}} &
\cdots & {{\del^2\Vb}\over{\del\theta^2_{N-1}}}
\end{array}\right).$$ The matrix $H$ evaluated at the solution becomes $$H=\left(\begin{array}{ccccc}
F & 0 &\cdots &\cdots & 0\\
0 &2C & C &\cdots &C \\
\vdots &C &\ddots & &C \\
\vdots &\vdots & & \ddots&\vdots\\
0 &C &\cdots&\cdots&2C\\
\end{array}\right),$$ where we have defined $$F\equiv -\mbar^2+\left({{2\pi q}\over{N}}\right)^2,~~C\equiv {A\over\pi^2}
\sum_{n=1}^{\infty}{{-1}\over n^2}\cos\left(n{{2\pi q}\over N}\right)$$ with $q={{N-1}\over 2}$ or ${N\over 2}$. The eigenvalues of the matrix $H$ are found to be $F$, $C$ (with $(N-2)$-fold degeneracy) and $NC$; hence all the eigenvalues of $H$ for the given values of $q$ are positive as long as $0 < \mbar < {{2\pi q}\over N}$. This means that each solution in Eq.(\[solap\]) is stable in the scale region given by $$\begin{array}{lll}
0< \mbar < {{N-1}\over N}\pi \quad
& {\rm for }\quad & N={\rm odd},\\[0.3cm]
0< \mbar < \pi \quad & {\rm for }\quad & N={\rm even}.
\end{array}$$ We have obtained Eqs.(\[regionu\]) and (\[regionq\]) in the text.
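The quoted spectrum follows from the structure of the lower block, $C(\mathbb{1}+J)$ with $J$ the all-ones matrix. A quick numerical confirmation (with arbitrary test values for $N$, $F$ and $C$):

```python
import numpy as np

# Hessian at the solution: F in the vbar-vbar entry, and an (N-1)x(N-1)
# block with 2C on the diagonal and C off the diagonal.
N, F, C = 7, 0.3, 0.5
H = np.zeros((N, N))
H[0, 0] = F
H[1:, 1:] = C * (np.eye(N - 1) + np.ones((N - 1, N - 1)))

eig = np.sort(np.linalg.eigvalsh(H))
expected = np.sort([F] + [C] * (N - 2) + [N * C])   # F, C ((N-2)-fold) and NC
assert np.allclose(eig, expected)
```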
Let us next consider the other solutions to Eq.(\[extuno\]), $$\begin{aligned}
&&\biggl(\theta_k^2+\Bigl(\sum_{i=1}^{N-1}\theta_i
- 2\pi(q-1)\Bigr)^2 - \theta_k\Bigl(\sum_{i=1}^{N-1}\theta_i
-2\pi(q-1)\Bigr)\nonumber\\
&&\quad -\pi \theta_k - \pi \Bigl(\sum_{i=1}^{N-1}\theta_i
- 2\pi(q-1)\Bigr)\biggr)
=0
\label{altrosol}\end{aligned}$$ for $k=1,\cdots, N-1$. Any solution satisfying Eq.(\[altrosol\]) gives a negative diagonal component $\frac{\partial^2 \bar{V}}{\partial \theta_k^2}$ in $H$. It is not difficult to show that $$\begin{aligned}
{{\del^2\Vb}\over{\del\theta_k^2}}
&=&
{A\over\pi^2}\sum_{n=1}^{\infty}
{{-1}\over{n^2}}
\biggl(\cos(n\theta_k)+\cos(n\sum_{i=1}^{N-1}\theta_i)\biggr)
\nonumber\\
&=&
-{{A}\over{12\pi^2}}
\biggl(\theta_k +\sum_{i=1}^{N-1}\theta_i -2\pi q \biggr)^2 <0,\end{aligned}$$ where we have used the formula (\[formulat\]) and Eq.(\[altrosol\]). This implies that any solution satisfying Eq.(\[altrosol\]) is unstable against small fluctuations, so we exclude such solutions from the discussion hereafter.
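The evaluation above presumably uses the standard Fourier-series result $\sum_{n\ge1}\cos(n\theta)/n^2=\pi^2/6-\pi\theta/2+\theta^2/4$ for $0\le\theta\le2\pi$; assuming this is the content of the formula (\[formulat\]), it is easily checked numerically:

```python
import numpy as np

def cos_series_n2(theta, nmax=200000):
    """Truncated sum_{n>=1} cos(n*theta)/n^2."""
    n = np.arange(1, nmax + 1)
    return np.sum(np.cos(n * theta) / n**2)

# closed form pi^2/6 - pi*theta/2 + theta^2/4, valid on 0 <= theta <= 2*pi
for theta in (0.3, 1.0, np.pi, 5.0):
    closed = np.pi**2 / 6 - np.pi * theta / 2 + theta**2 / 4
    assert abs(cos_series_n2(theta) - closed) < 1e-5
```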
(B)-2 [*Higgs and coexisting phases and their stabilities*]{}
Let us next consider the case given by Eq.(\[hvevd\]), $$-\mbar^2+{\lambda\over 2}\vbar^2 + \hat{\theta}_1^2=0.
\label{altrovev}$$ In this case, the equations we solve are given by Eqs.(\[eqcoexistu\]) and (\[eqcoexistd\]). Applying the formula (\[formulau\]) to Eq.(\[eqcoexistd\]), we obtain the same equation as Eq.(\[extdue\]), except that the case $k=1$ is now excluded. The equations obtained imply that $$\theta_2=\theta_3=\cdots = \theta_{N-1}\equiv \btheta.
\label{samecond}$$ As a result, Eq.(\[eqcoexistd\]) finally yields the relation $$\theta_1+(N-1)\btheta=2\pi l
\label{relation}$$ for some integer $l$. Since it is enough to consider the region $0\leq \bar{\theta} \leq \pi$ and we have required the sequence $\abs{\hat{\theta}_1}\leq\abs{\hat{\theta}_2}\leq\cdots\leq
\abs{\hat{\theta}_{N}}$, our solutions also have to satisfy the constraint $\abs{\hat{\theta}_1}\leq \abs{\btheta}$ in addition to the relation (\[relation\]). Among the possible solutions satisfying these conditions, we need the one that minimizes the effective potential, which is now recast as $$\Vb=-\half \mbar^2\vbar^2 + {\lambda\over 8}\vbar^4
+ \half \hat{\theta}_1^2 \vbar^2
+{A\over \pi^2}\sum_{n=1}^{\infty}{1\over n^4}
\left(\cos(n\theta_1) + (N-1)\cos(n\btheta)\right),
\label{effpotential1}$$ where we have used Eqs.(\[samecond\]) and (\[relation\]). The integer $l$ must be determined in such a way that the potential energy is minimized. For general $l$, Eq.(\[relation\]) and other constraints restrict the allowed regions of $\theta_1$ and $\bar{\theta}$. It is not difficult to see that the minimum of the effective potential (\[effpotential1\]) can be realized when $l={{N-1}\over 2}$ for $N=$ odd or $l={N\over 2}$ for $N=$ even and $$\begin{aligned}
{{N-1}\over N}\pi \leq \btheta \leq \pi, &&
\quad 0\leq \theta_1 \leq {{N-1}\over N}\pi,\quad
\mbox{for}\ N=\mbox{odd},
\label{desiu}\\
{{N-2}\over{N-1}}\pi \leq \btheta \leq \pi,&&
\quad \pi \leq \theta_1 \leq 2\pi, \hspace{1.3cm}
\mbox{for}\ N=\mbox{even}.
\label{desid}\end{aligned}$$ Thus, we have obtained Eqs.(\[restrictodd\]) and (\[restricteven\]) in the text.
Now, the effective potential is rewritten, depending on whether $N$ is even or odd, in terms of $\vbar$ and $\theta_1$ alone, as $$\begin{aligned}
\Vb_{N=odd}&=&-\half \mbar^2\vbar^2 + {\lambda\over 8}\vbar^4 +
\half \hat{\theta}_1^2 \vbar^2\nonumber\\
&+&{A\over \pi^2}\sum_{n=1}^{\infty}{1\over n^4}
\biggl(\cos(n\theta_1)
+ (N-1)\cos\Bigl(n\bigl(\pi +{{\theta_1}\over{N-1}}\bigr)\Bigr)
\biggr),\label{effapu}\\
\Vb_{N=even}&=&-\half \mbar^2\vbar^2 + {\lambda\over 8}\vbar^4 +
\half {\hat{\theta}}_1^2 \vbar^2\nonumber\\
&+&{A\over \pi^2}\sum_{n=1}^{\infty}{1\over n^4}
\biggl(\cos(n\theta_1) + (N-1)\cos\Bigl(n\bigl({{\theta_1}\over{N-1}}+
{{N-2}\over{N-1}}\pi\bigr)\Bigr)
\biggr).\label{effapd}\end{aligned}$$ It follows from Eqs.(\[desiu\]) and (\[desid\]) that the relation between $\hat{\theta}_1$ and $\theta_1$ is given by $\hat{\theta}_1=\theta_1$ for $N=$ odd and $\hat{\theta}_1=\theta_1 -2\pi$ for $N=$ even. Our remaining task is to solve the equation (\[eqcoexistu\]) under the relation (\[relation\]) with $l=\frac{N-1}{2}$ $(\frac{N}{2})$ for $N=$ odd (even) or, equivalently, to solve the equation from the first derivative of the potential (\[effapu\]) or (\[effapd\]) with respect to $\theta_1$ with Eq.(\[altrovev\]).
(B)-3 [*$N=$ odd*]{}
As explained above, the equation we solve becomes $$\theta_1\left(
{2\over \lambda}(\mbar^2-\theta_1^2)-{A\over{12\pi^2}}
\Bigl({{N(N^2-3N+3)}\over{(N-1)^3}}\theta_1^2
-3\pi\theta_1+{{2N-3}\over {N-1}}\pi^2\Bigr)
\right)=0,
\label{cohiggsu}$$ which reads $$\begin{aligned}
&&\theta_1=0,\label{higgssou}\\
&\mbox{or}&\nonumber\\
&&\left(1+{t\over{24\pi^2}}{{N(N^2-3N+3)}\over{(N-1)^3}}\right)\theta_1^2
-{t\over{8\pi}}\theta_1-\mbar^2 +{t\over{24}}{{2N-3}\over{N-1}}=0.
\label{kyouzon}\end{aligned}$$ Here we have introduced $t\equiv \lambda A(=4\lambda N_F)$.
Let us first study the case $\theta_1=0$. The relation (\[relation\]) with $l={{N-1}\over 2}$ yields $\theta_2=\theta_3=\cdots=\theta_{N-1}=\pi$ and $\theta_N=-(N-2)\pi$, while Eq.(\[altrovev\]) gives $\vbar=\sqrt{2/\lambda}~\mbar$. Thus, we have obtained the type II solution corresponding to the Higgs phase (\[higgs\]) in the text. The stability of the type II solution is studied through the eigenvalues of the matrix $H$ evaluated at the solution, which is given by $$H=
\left(\begin{array}{ccccc}
2\mbar^2 & 0 &\cdots &\cdots & 0\\
0 &G & B &\cdots &B \\
\vdots &B &2B & &B \\
\vdots &\vdots & & \ddots&\vdots\\
0 &B &\cdots&\cdots&2B\\
\end{array}\right),$$ where we have defined $B\equiv {A\over{12}}$ and $G\equiv
{2\over \lambda}\mbar^2-B$. The eigenvalues of the matrix are found to be $2\bar{m}^2$, ${A\over{12}}$ ($(N-3)$ degeneracy) and $x_{\pm}$, where $$x_{\pm}\equiv\half\Biggl(G+(N-1)B
\pm \sqrt{(G+(N-1)B)^2-4B((N-1)G-(N-2)B)}\Biggr).$$ The condition that the eigenvalues $x_{\pm}$ are positive is given by $$\mbar^2 > {{2N-3}\over{N-1}}{{\lambda A}\over{24}}
={{2N-3}\over{N-1}}{t\over{24}}\equiv \mbar_3^2.$$ Thus, we have obtained Eq.(\[regiond\]) in the text.
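The threshold can be checked numerically: $x_-$ vanishes exactly at $\mbar^2=\mbar_3^2$ and changes sign there (the parameter values below are arbitrary test numbers):

```python
import numpy as np

N, lam, A = 5, 0.8, 2.0
B = A / 12
m3sq = (2 * N - 3) / (N - 1) * lam * A / 24     # threshold \bar{m}_3^2

def x_minus(msq):
    """Smaller eigenvalue x_- of the lower block, from trace and determinant."""
    G = 2 * msq / lam - B
    tr = G + (N - 1) * B                        # x_+ + x_-
    det = B * ((N - 1) * G - (N - 2) * B)       # x_+ * x_-
    return 0.5 * (tr - np.sqrt(tr**2 - 4 * det))

assert x_minus(1.1 * m3sq) > 0 > x_minus(0.9 * m3sq)
assert abs(x_minus(m3sq)) < 1e-12               # x_- = 0 exactly at the threshold
```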
Let us next study the solutions given by Eq.(\[kyouzon\]), [*i.e.*]{} $$\theta_1^{\pm}(\mbar)=
{1\over {2(1+{{t}\over{24\pi^2}}C_0)}}\left(
{{t\over{8\pi}}\pm {{N^2-3}\over{24(N-1)^2}}{t\over \pi}
\sqrt{S(\mbar)}}\right),
\label{cosol}$$ where $$S(\mbar)\equiv 1-C_1\left({\pi^2\over t}\right)+
{1\over t^2}\left(C_2\pi^2 +C_3 t \right)\mbar^2$$ with $C_i(i=0,1,2,3)$ being defined by $$\begin{aligned}
C_0 &\equiv & {{N(N^2-3N+3)}\over {(N-1)^3}},~~
C_1\equiv {{96(2N-3)(N-1)^3}\over{(N^2-3)^2}},\\
C_2 &\equiv & {{
2304(N-1)^4}\over{(N^2-3)^2}},~~
C_3\equiv {{96 N(N-1)(N^2-3N+3)}\over{(N^2-3)^2}}.\end{aligned}$$ Let us study the stability of the solutions. To this end, we note that the order parameters in this case are reduced to two, that is, $\vbar$ and $\theta_1$. Then, the matrix $H$ becomes the $2\times 2$ matrix, $$H \equiv
\left(\begin{array}{cc}
{{\del^2\Vb}\over{\del\vbar^2}} &
{{\del^2\Vb}\over{\del\vbar\del\theta_1}}\\
{{\del^2\Vb}\over{\del\vbar\del\theta_1}}
&{{\del^2\Vb}\over{\del\theta_1^2}}\\
\end{array}\right),
\label{hessian2}$$ where each component evaluated at the solutions is given by $$\begin{aligned}
{{\del^2\Vb}\over{\del\vbar^2}}&=&\lambda\vbar^2,\qquad
{{\del^2\Vb}\over{\del\vbar\del\theta_1}}=2\theta_1^{\pm}\vbar\\
{{\del^2\Vb}\over{\del\theta_1^2}}&=&
\vbar^2 -{A\over{\pi^2}}\sum_{n=1}^{\infty}{1\over n^2}
\left(\cos(n\theta_1^{\pm})+
{1\over{N-1}}\cos(\pi+{\theta_1^{\pm}\over{N-1}})\right)
\nonumber\\
&=&{A\over{12\pi^2}}
\left(-{{2N(N^2-3N+3)}\over{(N-1)^3}}(\theta_1^{\pm})^2+3\pi\theta_1^{\pm}
\right),\end{aligned}$$ where we have used the formula (\[formulat\]). Then, the determinant of $H$ is calculated as $${\rm det}~H= \mp\vbar^2(\mbar)\theta_1^{\pm}(\mbar)
\left(
{{t(N^2-3)}\over{12\pi (N-1)^2}}\sqrt{S(\mbar)}\right).$$ Since $\theta_1$ is larger than zero, the solution $\theta_1^+$ gives a negative determinant of $H$, so that $\theta_1^+(\mbar)$ is unstable and is excluded from our discussions. Hence, we have obtained the type III solution $\theta_1^-(\mbar)$ with $\vbar =\sqrt{{2\over\lambda}
(\mbar^2-(\theta_1^-)^2)}$ corresponding to the coexisting phase (\[coexist\]) in the text.
The solution $\theta_1^-(\mbar)$ must satisfy the reality condition $(\theta_1^-(\mbar))^*=\theta_1^-(\mbar)$ and $ 0 \leq \theta_1^-(\mbar) \leq {{N-1}\over N}\pi$, as shown in Eq.(\[desiu\]). The reality condition is satisfied if $t\ge C_1 \pi^2$, or if $$\mbar\geq \left({{C_1\pi^2 t -t^2}\over{C_2\pi^2+C_3 t}}\right)^{\half}
\equiv \mbar_1
\qquad \mbox{for}\ t<C_1 \pi^2 .
\label{m_1}$$ The condition $0 \leq \theta_1^-(\mbar)$ yields that $$\mbar \leq \left({{2N-3}\over{N-1}}{t\over{24}}\right)^{\half}\equiv
\mbar_3,$$ while the condition $\theta_1^-(\mbar) \leq {{N-1}\over N}\pi$ requires that $$\mbar \geq {{N-1}\over{N}}\pi \equiv \mbar_2
\quad {\rm for}\quad
t\geq 48\pi^2{{(N-1)^3}\over{N(N^2-3)}}.$$ The latter condition is always satisfied for $t< 48\pi^2{{(N-1)^3}\over{N(N^2-3)}}$.
The relative magnitude of the scales $\mbar_i$ $(i=1,2,3)$ is important for understanding the allowed region of the coexisting phase. It is easy to show that $\mbar_1 < \mbar_3$ always holds, irrespective of the values of $N$ and $t$, while the relative magnitude of $\bar{m}_2$ and $\bar{m}_3$ depends on the parameter $t$: $$\mbar_2 \leq (>) \mbar_3 \quad {\rm for} \quad t\geq (<)
24\pi^2{{(N-1)^3}\over {N^2(2N-3)}}.$$ The relation $\mbar_1 \leq \mbar_2$ is always satisfied, with equality holding for $t=48\pi^2{{(N-1)^3}\over{N(N^2-3)}}$. This establishes the scale relations given in the parentheses in Eqs.(\[cou\]), (\[cod\]) and (\[cot\]). In Fig.1, the curves of the critical scales $\bar{m}_i$ are depicted in the $\bar{m}$-$t$ plane, where their relative magnitude can be seen clearly. Noting that $C_1\pi^2 > 48\pi^2\frac{(N-1)^3}{N(N^2-3)}$ and collecting the results obtained above, we find that the allowed region of the coexisting phase is given by $$\begin{aligned}
\bar{m}_2 \le \bar{m} \le \bar{m}_3\qquad &\mbox{for}&\
t > 48\pi^2\frac{(N-1)^3}{N(N^2-3)},\label{relativem1}\\
\bar{m}_1 \le \bar{m} \le \bar{m}_3\qquad &\mbox{for}&\
t \le 48\pi^2\frac{(N-1)^3}{N(N^2-3)}.\label{relativem2}\end{aligned}$$ It will be useful to evaluate the values of $\theta_1^{-}(\bar{m})$ at the boundaries in Eqs.(\[relativem1\]) and (\[relativem2\]). One can show that $$\begin{aligned}
\theta_1^{-}(\bar{m}_2) = \frac{N-1}{N}\pi\ \ \mbox{and}\ \
\theta_1^{-}(\bar{m}_3) = 0 \quad
&\mbox{for}&\ t\ge 48\pi^2\frac{(N-1)^3}{N(N^2-3)},\label{boundary1}\\
\theta_1^{-}(\bar{m}_1) = \frac{t}
{16\pi\left(1+\frac{t}{24\pi^2}C_0\right)}\ \ \mbox{and}\ \
\theta_1^{-}(\bar{m}_3) = 0 \quad
&\mbox{for}&\ t< 48\pi^2\frac{(N-1)^3}{N(N^2-3)}.\label{boundary2}\end{aligned}$$
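These boundary values can be confirmed directly from the quadratic Eq.(\[kyouzon\]): at $\bar{m}=\bar{m}_2$ and at $\bar{m}^2=\bar{m}_3^2$ its smaller root reproduces $(N-1)\pi/N$ and $0$ respectively. A numerical check with illustrative $N$ and $t$ on the large-$t$ branch:

```python
import numpy as np

N = 5
t = 1.2 * 48 * np.pi**2 * (N - 1)**3 / (N * (N**2 - 3))   # t > 48*pi^2*(N-1)^3/(N(N^2-3))
C0 = N * (N**2 - 3 * N + 3) / (N - 1)**3

def theta1_minus(msq):
    """Smaller root of (1 + t*C0/(24 pi^2)) th^2 - (t/(8 pi)) th - msq + (t/24)(2N-3)/(N-1) = 0."""
    a = 1 + t * C0 / (24 * np.pi**2)
    b = t / (8 * np.pi)
    c = -msq + t / 24 * (2 * N - 3) / (N - 1)
    return (b - np.sqrt(b * b - 4 * a * c)) / (2 * a)

m2 = (N - 1) * np.pi / N
m3sq = (2 * N - 3) / (N - 1) * t / 24
assert abs(theta1_minus(m2**2) - (N - 1) * np.pi / N) < 1e-9   # theta_1^-(m_2) = (N-1)pi/N
assert abs(theta1_minus(m3sq)) < 1e-9                          # theta_1^-(m_3) = 0
```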
Let us study the behavior of the type III solution with respect to the scale $\bar{m}$. We first note that the solution $\theta_1^{-}(\bar{m})$ is a monotonically decreasing function of $\bar{m}$, since $${{\del\theta_1^-(\mbar)}\over{\del\mbar^2}}=-
{{24\pi (N-1)^2}\over{t(N^2-3)}}{1\over{\sqrt{S(\mbar)}}}<0.$$ On the other hand, $\bar{v}^2(\bar{m})$ is a monotonically increasing function of $\bar{m}$: $${{\del\vbar^2}\over{\del\mbar^2}}
={2\over\lambda}\left(1+\theta_1^-(\mbar)
{{48(N-1)^2}\over{(N^2-3)\sqrt{S(\mbar)}}}{\pi\over t}\right)>0$$ for the region (\[desiu\]). Since $\bar{v}^2(\bar{m})$ can be written as $$\begin{aligned}
\bar{v}^2(\bar{m})
=
\frac{N(N^2-3N+3)t}{12\pi^2(N-1)^3\lambda}
\biggl(\theta_1^{-}(\bar{m}) - \frac{N-1}{N}\pi\biggr)
\biggl(\theta_1^{-}(\bar{m}) - \frac{(N-1)(2N-3)}{N^2-3N+3}\pi\biggr),\end{aligned}$$ $\bar{v}^2(\bar{m})$ is positive semidefinite for $0\le \theta_1^{-}(\bar{m}) \le \frac{N-1}{N}\pi$, as it should be. One can also show that $$\begin{aligned}
\bar{v}^2(\bar{m}_2) &=& 0\qquad
\mbox{for}\ t\ge 48\pi^2\frac{(N-1)^3}{N(N^2-3)}, \\
\bar{v}^2(\bar{m}_3) &=& \frac{2}{\lambda}\bar{m}_3^2 .\end{aligned}$$ Together with Eqs.(\[boundary1\]) and (\[boundary2\]), it follows that the coexisting phase is continuously connected to the Hosotani phase (the Higgs phase) at the boundary $\bar{m} = \bar{m}_2$ ($\bar{m} = \bar{m}_3$) for $t\ge 48\pi^2 \frac{(N-1)^3}{N(N^2-3)}$.
We have studied the allowed region of the coexisting phase with respect to the parameters $\bar{m}$ and $t$. We have found that (i) when $t > 48\pi^2{{(N-1)^3}\over{N(N^2-3)}}$, the relative magnitude of the scales[^9] is given by $\mbar_1 < \mbar_2 < \mbar_3$, and the coexisting phase exists between $\mbar_2$ and $\mbar_3$, (ii) when $24\pi^2{{(N-1)^3}\over {N^2(2N-3)}}< t \leq
48\pi^2{{(N-1)^3}\over{N(N^2-3)}}$, the relative magnitude of the scales is given by $\mbar_1 \leq \mbar_2 < \mbar_3$, and the coexisting phase lies between $\mbar_1$ and $\mbar_3$, (iii) when $t \leq 24\pi^2{{(N-1)^3}\over {N^2(2N-3)}}$, we have $\mbar_1 < \mbar_3 \leq \mbar_2$, and the coexisting phase is between $\mbar_1$ and $\mbar_3$. We have arrived at the classification used in the text, Eqs.(\[cou\]), (\[cod\]) and (\[cot\]). Fig.1 will help our understanding of the phase structure.
Let us finally calculate the potential energy for each phase. $$\begin{aligned}
\Vb_{Hosotani}&=&A\pi^2
\left(-\frac{(N^2-1)^2}{48N^3}+{N\over{90}}\right),\\
\Vb_{Higgs}&=&-{\mbar^4\over{2\lambda}}+A\pi^2
\left(-{{(N-1)}\over{48}}+{{N}\over{90}}\right),\\
\Vb_{coexisting}&=&-{{1}\over{2\lambda}}\left(\mbar^2-\theta_1^-(\mbar)^2
\right)^2
+{A\over\pi^2}\Biggl(-{1\over{48}}
{\theta_1^-(\mbar)}^2\Bigl(\theta_1^-(\mbar) - 2\pi\Bigr)^2 \nonumber \\
&&\ \ -{{N-1}\over{48}}\Bigl(\pi+{{\theta_1^-(\mbar)}\over{N-1}}\Bigr)^2
\Bigl({{\theta_1^-(\mbar)}\over{N-1}}-\pi\Bigr)^2
+{N\over{90}}\pi^4\Biggr),\end{aligned}$$ where $\theta_1^-(\mbar)$ is given by Eq.(\[cosol\]). It is not difficult to show that the energy difference $\Delta\Vb\equiv \Vb_{Hosotani}-\Vb_{coexisting}$ is a monotonically increasing function of $\bar{m}^2$, $${{\del}\over{\del\mbar^2}}\Delta \Vb(\mbar)
=\half \vbar(\mbar)^2 \geq 0,$$ where we have used the equation (\[kyouzon\]). We also observe that $$\begin{aligned}
\Vb_{Hosotani}-\Vb_{Higgs}
&=&{1\over{2\lambda}}
\left(\mbar^4 -
{{(N-1)(N^2-N-1)}\over{N^3}}{{\pi^2}\over{24}}t\right)\nonumber\\
&\equiv & {1\over{2\lambda}}\left( \mbar^4 - (\mbar_5)^4\right),\end{aligned}$$ which gives the critical scale given by Eq. (\[regiont\]) in the text.
(B)-4 [*$N=$ even*]{}
Let us study the case $N=$ even. The equation we solve is given, from Eq.(\[effapd\]), by $$\begin{aligned}
&-&{2\over\lambda}\left(\mbar^2-(2\pi-\theta_1)^2\right)(2\pi-\theta_1)
\nonumber\\
&+&{A\over\pi^2}
\sum_{n=1}^{\infty}{{-1}\over n^3}\left(\sin(n\theta_1)+
\sin(n({\theta_1\over{N-1}}+{{N-2}\over {N-1}}\pi))\right)=0,
\label{eqevenu}\end{aligned}$$ where we have eliminated $\vbar^2$ by $\vbar^2={2\over\lambda}
(\mbar^2-(2\pi - \theta_1)^2)$. Using the formula (\[formulau\]), the above equation becomes $$\left(1+{{t\alpha}\over{24\pi^2}}\right)\theta_1^3
-\left({{t\alpha}\over{8\pi}} +6\pi\right)\theta_1^2
+\left({{t\beta}\over{24}}-\mbar^2+12\pi^2\right)\theta_1
+{{t\gamma}\over{24}}\pi +2\pi\mbar^2 -8\pi^3=0,
\label{eqevend}$$ where $$\alpha \equiv {{N(N^2-3N+3)}\over{(N-1)^3}},~
\beta \equiv {{N(2N^2-7N+8)}\over{(N-1)^3}},~
\gamma \equiv {{N(N-2)}\over{(N-1)^3}}.$$ This is the equation (\[eqevendt\]) in the text. Instead of solving the equation (\[eqevend\]) directly, it turns out to be convenient to study the intersections of two functions $F(\theta_1)$ and $G(\theta_1)$ derived from Eq.(\[eqevend\]). Here, $F(\theta_1)$ and $G(\theta_1)$ are $$\begin{aligned}
F(\theta_1)&\equiv &2(\mbar^2-(2\pi - \theta_1)^2)(2\pi -\theta_1),\\
G(\theta_1)& \equiv &{{-t}\over{12\pi^2}}
\left(\alpha\theta_1^3 -3\pi \alpha\theta_1^2 +\beta \pi^2\theta_1
+ \gamma \pi^3\right), \\
&=&{{-t\alpha}\over{12\pi^2}}
(\theta_1 - \pi)(\theta_1 -\theta_1^{-})(\theta_1 - \theta_1^{+}),\end{aligned}$$ where $$\theta_1^{\pm}
=\pi\left(1\pm {{N-1}\over{\sqrt{N^2-3N+3}}}\right).$$ Let us note that $G(\theta_1)$ is independent of $\bar{m}$ and that $F(\theta_1)=G(\theta_1)$, of course, reproduces the equation (\[eqevend\]).
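The factorized form of $G(\theta_1)$ can be verified numerically against the cubic coefficients $\alpha$, $\beta$, $\gamma$ (the value of $N$ is an arbitrary even test choice):

```python
import numpy as np

N = 6
a = N * (N**2 - 3 * N + 3) / (N - 1)**3        # alpha
b = N * (2 * N**2 - 7 * N + 8) / (N - 1)**3    # beta
g = N * (N - 2) / (N - 1)**3                   # gamma
tp = np.pi * (1 + (N - 1) / np.sqrt(N**2 - 3 * N + 3))   # theta_1^+
tm = np.pi * (1 - (N - 1) / np.sqrt(N**2 - 3 * N + 3))   # theta_1^-

th = np.linspace(0, 2 * np.pi, 7)
lhs = a * th**3 - 3 * np.pi * a * th**2 + b * np.pi**2 * th + g * np.pi**3
rhs = a * (th - np.pi) * (th - tm) * (th - tp)
assert np.allclose(lhs, rhs)   # the two forms of (-12 pi^2 / t) * G agree
```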
We study the behavior of the intersections of $F(\theta_1)$ and $G(\theta_1)$ with respect to the scale $\bar{m}$ and the parameter $t$. We first note that the number of intersections of $F(\theta_1)$ and $G(\theta_1)$ is either one or three. The Higgs VEV is also written, from Eq.(\[eqevenu\]) after using the formula (\[formulau\]), as $$\vbar^2={1\over{2\pi -\theta_1}}
\left({{-A\alpha}\over{12\pi^2}}\right)
(\theta_1 -\pi)(\theta_1 - \theta_1^-)(\theta_1 - \theta_1^+)\geq 0$$ for $\pi \leq \theta_1 \leq 2\pi$.
It may be useful here to study the matrix $H$ in this case, which is given by a $2\times 2$ matrix, as in the previous case (\[hessian2\]). We can show that the determinant of $H$ is evaluated as $$\begin{aligned}
{\rm det}~H&=&2\vbar^2(\mbar)\left(-3(1+{{t\alpha}\over
{24\pi^2}})\theta_1^2
+12\pi(1+{{t\alpha}\over {48\pi^2}})\theta_1-{{t\beta}\over {24}}+\mbar^2
-12\pi^2 \right)\nonumber\\
&=&2\vbar^2(\mbar)
\left({{-1}\over 2}\right)
{\del\over{\del\theta_1}}\left(F(\theta_1)-G(\theta_1)\right).
\label{det}\end{aligned}$$ We observe that the stability of the solutions to the equation $F(\theta_1)=G(\theta_1)$ is controlled by the sign of ${\del\over{\del\theta_1}}\left(F(\theta_1)-G(\theta_1)\right)$. It is also useful to know that $F(\theta_1=\pi)=G(\theta_1=\pi)$ $=$ $0$ at $\bar{m}=\pi$ and $${\del\over{\del\theta_1}}\left(F(\theta_1)-G(\theta_1)\right)
\bigg|_{\mbar=\pi,~~\theta_1=\pi}
=\left\{\begin{array}{lll}
\leq 0 & \mbox{for} & t\geq 48\pi^2 {{N-1}\over N},\\[0.3cm]
> 0 & \mbox{for} & t < 48 \pi^2 {{N-1}\over N}.
\end{array}\right.$$ This observation implies that the intersections of the functions $F(\theta_1)$ and $G(\theta_1)$ for $t\ge 48\pi^2\frac{N-1}{N}$ and $t<48\pi^2\frac{N-1}{N}$ have different behavior. We also obtain that $$F\left(\theta_1=2\pi\pm {\mbar\over{\sqrt 3}}\right)=
\mp {4\over{3\sqrt{3}}}\mbar^3,$$ where $\theta_1=2\pi\pm {\mbar\over{\sqrt 3}}$ are the solutions to $\del F(\theta_1)/\del\theta_1 =0$. Since $${{\del F(\theta_1)}\over
{\del\bar{m}}
}=4~\mbar(2\pi -\theta_1),$$ the function $F(\theta_1)$ increases (decreases) as $\bar{m}$ increases for fixed $\theta_1$ with $\theta_1<2\pi$ ($\theta_1>2\pi$). Note that $G(\theta_1)$ is independent of $\bar{m}$.
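Both properties of $F(\theta_1)$ quoted above — the stationary values at $\theta_1=2\pi\pm\mbar/\sqrt{3}$ and the fact that these are the critical points — are easy to confirm numerically (arbitrary $\mbar$):

```python
import numpy as np

mbar = 1.3
F = lambda th: 2 * (mbar**2 - (2 * np.pi - th)**2) * (2 * np.pi - th)

for s in (+1, -1):
    th_c = 2 * np.pi + s * mbar / np.sqrt(3)
    # F(2*pi +/- mbar/sqrt(3)) = -/+ 4*mbar^3/(3*sqrt(3))
    assert abs(F(th_c) - (-s) * 4 * mbar**3 / (3 * np.sqrt(3))) < 1e-9
    # these points are critical points: dF/dtheta_1 = 0 (central difference)
    h = 1e-6
    assert abs((F(th_c + h) - F(th_c - h)) / (2 * h)) < 1e-4
```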
One can now draw the graphs of $F(\theta_1)$ and $G(\theta_1)$ for various $\mbar$ and $t$ and understand the behavior of their intersections. In Fig.4, we depict the case $t> 48\pi^2 {{N-1}\over N}$; the solution corresponding to the coexisting phase is denoted by $\theta_c$. Likewise, in Fig.5, we depict the case $t<48\pi^2 {{N-1}\over N}$, with the solution for the coexisting phase again denoted by $\theta_c$. The other solutions give negative determinants of $H$, so that they are unstable against small fluctuations. We observe from Eq.(\[det\]) that the solution $\theta_c$ in Figs.4 and 5 gives a positive determinant of $H$, and hence the solution, if it exists, is stable. It is important to note that for any $t$, the intersections in the region $\pi \leq \theta_1 \leq 2\pi$ tend to disappear as the scale $\mbar$ decreases.
[99]{} M. Sakamoto, M. Tachibana, K. Takenaga, [*Phys. Lett.*]{} [**458B**]{} (1999) 231; [*Prog. Theor. Phys.*]{} [**104**]{} (2000) 633; in [*Proceedings of the 30th International Conference on High Energy Physics, 2000*]{}, edited by C. S. Lim, T. Yamanaka (World Scientific, Singapore, 2001). M. Sakamoto, M. Tachibana, K. Takenaga, [*Phys. Lett.*]{} [**457B**]{} (1999) 33. K. Ohnishi, M. Sakamoto, [*Phys. Lett.*]{} [**486B**]{} (2000) 179.\
H. Hatanaka, S. Matsumoto, K. Ohnishi, M. Sakamoto, [*Phys. Rev.*]{} [**D63**]{} (2001) 105003; in [*Proceedings of the 30th International Conference on High Energy Physics, 2000*]{}, edited by C. S. Lim, T. Yamanaka (World Scientific, Singapore, 2001). S. Matsumoto, M. Sakamoto, S. Tanimura, [*Phys. Lett.*]{} [**518B**]{} (2001) 163.\
M. Sakamoto, S. Tanimura, [*Phys. Rev.*]{} [**D65**]{} (2002) 065004. K. Takenaga, [*Phys. Lett.*]{} [**425B**]{} (1998) 114; [*Phys. Rev.*]{} [**D58**]{} (1998) 026004, [**D61**]{} (2000) 129902(E); [*Phys. Rev.*]{} [**D64**]{} (2001) 066001; [*Phys. Rev.*]{} [**D66**]{} (2002) 085009. Y. Hosotani, [*Phys. Lett.*]{} [**126B**]{} (1983) 309; [*Ann. Phys. (N.Y.)*]{} [**190**]{} (1989) 233. Y. Hosotani, [*Phys. Lett.*]{} [**129B**]{} (1983) 193. J.E. Hetrick, C.L. Ho, [*Phys. Rev.*]{} [**D40**]{} (1989) 4085.\
A. McLachlan, [*Nucl. Phys.*]{} [**B338**]{} 188 (1989). A. Higuchi, L. Parker, [*Phys. Rev.*]{} [**D37**]{} (1988) 2853. K. Shiraishi, [*Prog. Theor. Phys.*]{} [**77**]{} (1987) 975. H. Hatanaka, T. Inami, C.S. Lim, [*Mod. Phys. Lett.*]{} [**A13**]{} (1998) 2601. H. Hatanaka, [*Prog. Theor. Phys.*]{} [**102**]{} (1999) 407. C.L. Ho, Y. Hosotani, [*Nucl. Phys.*]{} [**B345**]{} 445 (1990). L. J. Dixon, J. A. Harvey, C. Vafa, E. Witten, [*Nucl. Phys.*]{} [**B261**]{} 678 (1985). M. Kubo, C.S. Lim, H. Yamashita, [*Mod. Phys. Lett.*]{} [**A17**]{} (2002) 2249.\
N. Haba, M. Harada, Y. Hosotani, Y. Kawamura, [*Nucl. Phys.*]{} [**B657**]{} (2003) 169. H. Hatanaka, K. Ohnishi, M. Sakamoto, K. Takenaga, [*Prog. Theor. Phys.*]{} [**107**]{} (2002) 1191. L. Dolan, R. Jackiw, [*Phys. Rev.*]{} [**D9**]{} (1974) 3320.\
S. Weinberg, [*Phys. Rev.*]{} [**D9**]{} (1974) 3357. L.H. Ford, T. Yoshimura, [*Phys. Lett.*]{} [**70A**]{} (1979) 89.\
D.J. Toms, [*Phys. Rev.*]{} [**D21**]{} (1980) 928; [*Phys. Rev.*]{} [**D21**]{} (1980) 2805.\
G. Denardo, E. Spallucci, [*Nucl. Phys.*]{} [**B169**]{} 514 (1980); [*Nuovo Cim.*]{} [**A58**]{} (1980) 243. A.T. Davies, A. McLachlan, [*Nucl. Phys.*]{} [**B317**]{} 237 (1989).
[^1]: E-mail: hatanaka@th.phys.titech.ac.jp
[^2]: E-mail: ohnishi@phys.sci.kobe-u.ac.jp
[^3]: E-mail: sakamoto@phys.sci.kobe-u.ac.jp
[^4]: E-mail: takenaga@het.phys.sci.osaka-u.ac.jp
[^5]: We do not need to take $\bar{m}_1$ into account for the first case (\[cou\]) in our analysis.
[^6]: Although we can give an analytic expression for $\bar{m}_4^{\prime}$, it will not be useful for practical purposes.
[^7]: This is the case within the approximation we have made.
[^8]: The twisted boundary condition of matter for the $S^1$ direction also affects the phase structure [@SUSY; @translation; @O(N); @model].
[^9]: $\bar{m}_1$ is defined only for $t\le C_1\pi^2$.
---
abstract: 'Conditions are obtained for the existence of a warm inflationary attractor in the system of equations describing an inflaton coupled to radiation. These conditions restrict the temperature dependence of the dissipative terms and the size of thermal corrections to the inflaton potential, as well as the gradient of the inflaton potential. When these conditions are met, the evolution approaches a slow-roll limit and only curvature fluctuations survive on super-horizon scales. Formulae are given for the spectral indices of the density perturbations and the tensor/scalar density perturbation amplitude ratio in warm inflation.'
author:
- 'Ian G. Moss'
- Chun Xiong
bibliography:
- 'paper.bib'
title: On the consistency of warm inflation
---
introduction
============
Inflationary models [@guth81; @linde82; @albrecht82] have proved very successful in explaining many of the large scale features of the universe (see e.g. [@Liddle:2000cg]). An essential feature of these inflationary models is their stability, meaning in particular that inflationary solutions are attractors in the solution space of the relevant cosmological equations (see e.g. [@Salopek:1990jq; @Liddle:1994dx]), at least up until inflation ends and the universe becomes radiation dominated. Without this feature, inflation might never have begun, and certainly would not have lasted long enough to affect the large scale structure of the universe.
Warm inflation is an alternative inflationary scenario in which a small but significant amount of radiation survives during the inflationary era due to continuous particle production [@Moss85; @bererafang95; @berera95]. The coupling between radiation and the inflaton field leads to thermal dissipation and fluctuations in the time evolution of the inflaton field. The stability of the inflationary solutions in warm inflationary models has only been addressed in a limited form previously [@deOliveira:1997jt], and we will present the full stability analysis here. We shall examine conditions under which warm inflation is an attractor and give conditions for a prolonged period of warm inflation.
We shall show that the stability of warm inflation can be related to conditions on two parameters describing the temperature dependence of terms in the inflaton equation of motion which were not taken into account in the earlier stability analysis [@deOliveira:1997jt]. The first condition says that if the dissipation term in the equation of motion falls off too rapidly at low temperature, then the temperature is driven to zero and we fall into the conventional inflationary scenario. The second condition limits the temperature dependence of the inflaton potential. In many models, large dissipation implies large thermal corrections to the potential which prevent warm inflation from taking place. This argument is essentially the one first presented in Ref. [@yokoyama99]. Nowadays, we know that there are models where thermal corrections to the inflaton potential are suppressed by supersymmetry, and these models may allow warm inflation as an attractor [@BasteroGil:2006vr; @BuenoSanchez:2008nc; @review].
The duration of the period of inflation is related to a set of slow-roll parameters which were introduced in [@Hall:2003zp]. We shall re-derive the slow-roll conditions for warm inflation as part of the stability analysis. A well understood feature of the slow-roll conditions for warm inflation is that they can be less restrictive than the slow-roll conditions for conventional inflation.
We stress that we are concerned here with the self-consistency of the warm inflationary scenario for given equations of motion. We shall not address how the equation of motion for the inflaton field is obtained from non-equilibrium thermal field theory. A discussion of the derivation of the equations of motion can be found in a recent review [@review]. However, we would like to point out that some of the criticisms of warm inflation have been based on models which do not satisfy the fundamental stability conditions derived here, and are therefore not inconsistent with the validity of warm inflation in general [@Aarts:2007ye].
The stability of warm inflation has consequences for the origin and evolution of cosmological density fluctuations [@Hall:2003zp]. In warm inflation, density fluctuations originate from thermal fluctuations [@bererafang95; @berera00]. In particular, the fact that the inflationary solution depends on only one parameter means that only one perturbation mode, the curvature perturbation, survives on super-horizon scales despite the fact that there are entropy perturbations present on sub-horizon scales. We shall give formulae for the spectral indices of the scalar and tensor modes and say a little about the tensor/scalar ratio.
basic equations {#be}
===============
We start with a flat, homogeneous universe with expansion rate $H$. The matter content consists of a homogeneous inflaton field $\phi$ and thermal radiation of temperature $T$. We restrict attention to the warm inflationary regime where $T>H$, with the radiation close to thermal equilibrium. The evolution of the inflaton is governed by a potential $V(\phi,T)$ and a damping coefficient $\Gamma(\phi,T)$, such that the inflaton field satisfies the basic equation $$\ddot\phi+(3H+\Gamma)\dot\phi+V_{,\phi}=0,\label{phieq}$$ where a comma after a function denotes a derivative. The expansion rate is related to the energy density $\rho$ by the Friedman equation $$3H^2=8\pi G\rho,\label{heq}$$ where $G$ is Newton’s constant.
It is important to realise that the potential appearing in the inflaton equation is the free energy density, rather than the potential energy density. The potential energy density is given by the thermodynamic relation $V+Ts$, where $s$ is the entropy density $$s=-V_{,T}.\label{sdef}$$ The total energy density, including the inflaton’s kinetic energy density, is therefore $$\rho=\frac12\dot\phi^2+V+Ts.\label{rhoeq}$$ This includes the contributions of the scalar field and the radiation. It is not always possible to separate the scalar and radiation components of the energy density in an unambiguous way.
The final equation is the one which governs the transfer of energy from the inflaton to the radiation field. This can easily be derived from the stress-energy tensor $T_{\mu\nu}$ [@Hall:2003zp], $$T_{\mu\nu}=Ts\,u_\mu u_\nu-V g_{\mu\nu}+
\nabla_\mu\phi\nabla_\nu\phi-\frac12(\nabla\phi)^2 g_{\mu\nu},$$ where $u_\mu$ is the radiation fluid $4-$velocity vector, $g_{\mu\nu}$ is the metric and $\nabla_\nu$ is the spacetime derivative operator. Conservation of the stress-energy gives $$T(\dot s+3Hs)=\Gamma\dot\phi^2.\label{seq}$$ This version of the second law of thermodynamics shows clearly how the friction converts the inflaton’s energy into heat. We now have a complete set of evolution equations which can be solved for $\phi$ and $s$ given a set of initial conditions.
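As a concrete illustration, this closed system can be integrated numerically for a toy model. All numbers below are illustrative (a quadratic potential, constant $\Gamma$, units $8\pi G=1$) and are not taken from the paper; for pure radiation, $\rho_r=\frac34 Ts$, so Eq.(\[seq\]) becomes $\dot\rho_r+4H\rho_r=\Gamma\dot\phi^2$. The run shows the evolution relaxing onto the slow-roll solution:

```python
import numpy as np

# Toy warm-inflation system, units 8*pi*G = 1:
#   phi'' + (3H + Gamma)*phi' + V_phi = 0,  with V = phi^2/2 and constant Gamma,
#   rho_r' + 4*H*rho_r = Gamma*phi'^2       (T(s' + 3Hs) = Gamma*phi'^2 for radiation),
#   3H^2 = rho = phi'^2/2 + V + rho_r.
Gamma = 50.0

def rhs(y):
    phi, phidot, rho_r = y
    H = np.sqrt((0.5 * phidot**2 + 0.5 * phi**2 + rho_r) / 3)
    return np.array([phidot,
                     -(3 * H + Gamma) * phidot - phi,
                     -4 * H * rho_r + Gamma * phidot**2])

y, dt = np.array([15.0, 0.0, 0.0]), 1e-3
for _ in range(5000):                       # classical 4th-order Runge-Kutta
    k1 = rhs(y); k2 = rhs(y + dt / 2 * k1)
    k3 = rhs(y + dt / 2 * k2); k4 = rhs(y + dt * k3)
    y += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

phi, phidot, rho_r = y
H = np.sqrt((0.5 * phidot**2 + 0.5 * phi**2 + rho_r) / 3)
Q = Gamma / (3 * H)
# the trajectory has relaxed onto phidot = -V_phi / (3H(1+Q))
assert abs(phidot / (-phi / (3 * H * (1 + Q))) - 1) < 0.05
```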
Inflation is associated with a slow-roll approximation which consists of dropping the leading derivative term in each equation. The slow-roll equations are therefore $$\begin{aligned}
\dot\phi&=&{-V_{,\phi}\over 3H(1+Q)}\label{sr1}\\
Ts&=&Q\dot\phi^2\label{sr2}\\
H^2&=&{8\pi G V\over 3}\label{sr3}\end{aligned}$$ where the strength of the dissipation is quantified by the parameter $Q$, $$Q={\Gamma\over 3H}.$$ Note that, since Eq. (\[sr1\]) is first order in time derivatives, any solution to the slow-roll equations has just one constant of integration.
The validity of the slow-roll approximation will depend on the size of a set of slow-roll parameters [@Hall:2003zp]. We use the following set of ‘small’ parameters: $$\begin{aligned}
\epsilon={1\over 16\pi G}\left({V_{,\phi}\over V}\right)^2,\quad
\eta={1\over 8\pi G}{V_{,\phi\phi}\over V},\quad
\beta={1\over 8\pi G}{V_{,\phi}\Gamma_{,\phi}\over V\Gamma}.\end{aligned}$$ An additional pair of parameters describe the temperature dependence, $$b={TV_{,\phi T}\over V_{,\phi}},\quad
c={T\Gamma_{,T}\over \Gamma}\label{srp}.$$ We use $b$ in place of the parameter $\delta$ defined in Ref. [@Hall:2003zp].
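The definitions above translate directly into code. As a sanity check, for an illustrative model with $V=m^2\phi^2/2$ and $\Gamma\propto T^3$ (the temperature dependence relevant for the $c=3$ case discussed later), the parameters come out as expected:

```python
from sympy import symbols, diff, simplify, pi

phi, T, G, m, g0 = symbols('phi T G m g0', positive=True)

def slow_roll_params(V, Gamma):
    """Direct implementation of the slow-roll parameter definitions."""
    eps  = (diff(V, phi) / V)**2 / (16 * pi * G)
    eta  = diff(V, phi, 2) / V / (8 * pi * G)
    beta = diff(V, phi) * diff(Gamma, phi) / (V * Gamma) / (8 * pi * G)
    b    = T * diff(V, phi, T) / diff(V, phi)
    c    = T * diff(Gamma, T) / Gamma
    return eps, eta, beta, b, c

V, Gamma = m**2 * phi**2 / 2, g0 * T**3      # illustrative model
eps, eta, beta, b, c = slow_roll_params(V, Gamma)

assert simplify(c - 3) == 0                  # Gamma ~ T^3 gives c = 3
assert b == 0 and beta == 0                  # no T or phi cross-dependence
assert simplify(eps - 1/(4*pi*G*phi**2)) == 0
```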
The parameter $b$ is very important in the theory of warm inflation. It measures the size of contributions to the potential from thermal quantum fields. In ordinary circumstances, we would expect $b$ to be of order 1. Consider, for example, an inflaton of mass $m_\phi$ coupled to a set of $g_*$ light scalar fields at a temperature $T>H>m_\phi$. The effective potential is $$V(\phi,T)=-{\pi^2\over 90}g_*T^4-{1\over 12}m_\phi^2T^2
+v(\phi),\label{corpot}$$ where $v(\phi)$ is the effective potential at $T=0$. The parameter $b$ for the potential (\[corpot\]) is approximately 2. Values of $b$ can be much smaller in supersymmetric theories, where there is a cancellation of leading order thermal corrections when the temperature $T<\Lambda_S$, where $\Lambda_S$ is the supersymmetry breaking scale [@Hall:2004zr]. We shall find limits on $b$, and conclude that warm inflation only takes place when there is a mechanism, like supersymmetry, which reduces the size of the thermal corrections to the potential [@berera02].
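The value $b\approx 2$ can be checked symbolically, under the assumption (not spelled out above) that $m_\phi^2=v''(\phi)$, so that the $T^2$ term in (\[corpot\]) depends on $\phi$. When the thermal term dominates $V_{,\phi}$, $b$ tends to exactly 2:

```python
from sympy import symbols, diff, limit, oo, pi, simplify

# Illustrative zero-temperature potential; m_phi^2 = v''(phi) is an assumption.
phi, T, lam, g = symbols('phi T lam g', positive=True)
v = lam * phi**4 / 4
m2 = diff(v, phi, 2)                         # field-dependent mass^2
V = -pi**2/90 * g * T**4 - m2 * T**2 / 12 + v

b = simplify(T * diff(V, phi, T) / diff(V, phi))
assert limit(b, T, oo) == 2                  # thermal term dominant => b -> 2
```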
stability analysis
==================
We shall consider the consistency of the slow-roll approximation by performing a linear stability analysis to determine the conditions which are sufficient for the system to remain close to the slow-roll solution for many Hubble times. It may be worthwhile considering first what happens in the alternative cold inflationary scenario (see e.g. [@Liddle:2000cg]). The slow-roll equation in this case is first order in time derivatives, and the general solution has the form $\phi=f(t-t_0)$, where $t_0$ is an arbitrary constant. There always exists a homogeneous perturbation which is equivalent to changing the value of $t_0$, and this mode is hugely important for the existence of density perturbations. Another mode decays on the Hubble timescale. It proves convenient to exclude the time-translation mode by using the value of the field as the time coordinate, which is possible if the inflaton field has a non-vanishing time derivative. We shall follow the same procedure for the analysis of warm inflation.
With the inflaton as independent variable we can rewrite the system of equations in first order form, $$x'=F(x).$$ Prime denotes derivatives with respect to $\phi$ and $$x=\pmatrix{u\cr s}$$ where $u=\dot\phi$. Eqs. (\[phieq\]) and (\[seq\]) become $$\begin{aligned}
u'&=&-3H-\Gamma-V_{,\phi}u^{-1},\\
s'&=&-3Hsu^{-1}+T^{-1}\Gamma u,\end{aligned}$$ with the temperature determined implicitly by Eq. (\[sdef\]) and $H$ given by Eq. (\[heq\]). We take a background $\bar x$ which satisfies the slow-roll equations (\[sr1\]-\[sr3\]) and then the linearised perturbations satisfy $$\delta x'=M(\bar x)\delta x-{\bar x}'.\label{dxeq}$$ where $M$ is the matrix of first derivatives of $F$ evaluated at the slow-roll solution.
Consider stability first of all. We can express all the components of the matrix $M$ in terms of the slow-roll parameters (\[srp\]). For example, $$\Gamma_{,s}=\Gamma_{,T}T_{,s}={c\Gamma\over T}T_{,s}=
{c\over 3}{\Gamma\over s}=cQ{H\over s}.$$ where we have used $T_{,s}=(s_{,T})^{-1}=T/3s$. Following a similar procedure for all of the first derivatives gives a final expression for $M$, $$M=\pmatrix{
A&B\cr C&D\cr
}$$ where $$\begin{aligned}
A&=&{H\over u}\left\{
-3(1+Q)-{\epsilon\over (1+Q)^2}\right\}\\
B&=&{H\over s}\left\{
-cQ-{Q\over(1+Q)^2}\epsilon+(1+Q)b\right\}\\
C&=&{Hs\over u^2}\left\{
6-{\epsilon\over (1+Q)^2}\right\}\\
D&=&{H\over u}\left\{
c-4-{Q\epsilon\over (1+Q)^2}\right\}\\\end{aligned}$$ We require the determinant, $$\det M={H^2\over u^2}\left\{
12(1+Q)+3(Q-1)c-6(1+Q)b
+\left({3Q^2+9Q+4\over (1+Q)^2}-{c\over 1+Q}\right)\epsilon
+{b\epsilon\over 1+Q}
\right\}$$ and the trace, $${\rm tr}\,M={H\over u}\left\{
c-4-3(1+Q)-{\epsilon\over 1+Q}
\right\}.$$ In the cold inflationary case, where $Q=b=c=s=0$, the decaying modes have the approximate form $\delta u\propto \exp(-3N)$ and $\delta s\propto \exp(-4N)$, where $N$ is the number of e-folds of inflation [@Salopek:1990jq].
[*Sufficient*]{} conditions for stability of the warm inflationary solution are that the matrix $M$ varies slowly and $$|c|\le 4-2b,\qquad b\ge 0.$$ Evidently, the trace is negative and the determinant is positive if these conditions hold. The linear equation therefore has two negative eigenvalues and both eigenmodes decay. The slow variation of $M$, which allows us to diagonalise the linear system, follows if the forcing term in Eq. (\[dxeq\]) is small, and so we turn to this term next.
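The leading-order (in $\epsilon$) trace and determinant can be scanned numerically to confirm that these sufficient conditions do give ${\rm tr}\,M<0$ and $\det M>0$, taking the prefactor $H/u$ positive as in the text:

```python
import numpy as np

# Random scan over the region |c| <= 4 - 2b, b >= 0, at epsilon -> 0.
rng = np.random.default_rng(0)
for _ in range(20000):
    Q = rng.uniform(0.001, 100.0)
    b = rng.uniform(0.0, 1.99)
    c = rng.uniform(-(4 - 2*b), 4 - 2*b)
    tr  = c - 4 - 3*(1 + Q)                        # tr M in units of H/u
    det = 12*(1 + Q) + 3*(Q - 1)*c - 6*(1 + Q)*b   # det M in units of H^2/u^2
    assert tr < 0 and det > 0
```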
The forcing term in Eq. (\[dxeq\]) depends on $\bar x'$. This term is present because the background chosen is not an exact solution to the full set of equations. The slow-roll approximation can only be valid when $\bar x'$ is small. If we work with time derivatives, the magnitude of $\bar x'$ depends on the dimensionless quantities $\dot u/(Hu)$ and $\dot s/(Hs)$, and small values represent slow variation on the timescale of the Hubble time.
We only quote the leading terms in $\epsilon$. Starting from the slow-roll Eq. (\[sr3\]) and taking the time derivative gives $${\dot H\over H^2}=-{1\over 1+Q}\epsilon.\label{hdot}$$ By combining the other slow-roll equations (\[sr1\]) and (\[sr2\]) we eventually arrive at $$\begin{aligned}
{\dot u\over Hu}&=&
{1\over \Delta}\left\{
-3c(1+Q)b
-{c(1+Q)-4\over 1+Q}\epsilon+(c-4)\eta+{4Q\over 1+Q}\beta
\right\}\label{udot}\\
{\dot s\over Hs}&=&
{1\over \Delta}\left\{
+{3(cQ+Q+1-c)(1+Q)\over Q}b
+{3Q+9\over 1+Q}\epsilon-6\eta+{3(Q-1)\over 1+Q}\beta
\right\}\label{sdot}\end{aligned}$$ where $\Delta=4(1+Q)+(Q-1)c$. The slow-roll approximation requires $\dot u<<Hu$ and $\dot s<<Hs$, and sufficient conditions for this are $$\epsilon,\ |\beta|\ ,|\eta|\ <<1+Q;\ 0<b<<{Q\over 1+Q},\ |c|<4.$$ The conditions on $\epsilon$ and $\eta$ agree with a previous stability analysis [@deOliveira:1997jt]. They are weaker than the corresponding conditions for cold inflation, a well-known feature of warm inflation.
The physical interpretation of the condition $c<4$ is evident from Eq. (\[seq\]): radiation must be produced at a rate ($\Gamma\propto T^c$) exceeding the rate at which radiation is removed by the expansion of the universe ($Ts \propto T^4$). The condition on $b$ can only be met if there is a mechanism for suppressing thermal corrections to the potential because, as we mentioned at the end of Sect. \[be\], high temperature thermal corrections would otherwise lead to $b\approx 2$. Models which include a mechanism for suppressing thermal corrections can be found, for example, in [@berera02; @BasteroGil:2006vr; @BuenoSanchez:2008nc].
density fluctuations
====================
The results which have been obtained as part of the stability analysis are also helpful for analysing various features of the density fluctuation spectrum. We therefore take the opportunity, whilst the results are to hand, of giving some formulae which might be useful for observational tests of warm inflation.
Density fluctuations in warm inflationary scenarios originate from thermal fluctuations in the radiation. These are coupled to the inflaton as a consequence of the friction term in the inflaton equation of motion, and their amplitude is fixed by a fluctuation-dissipation theorem. This means that both entropy and curvature perturbations must be present. However, on length scales larger than the horizon, we know from the stability argument that the coupled inflaton plus radiation system approaches the slow-roll solution, which has only one free parameter. Consequently, on large scales only the pure curvature perturbation survives. This has been confirmed in particular models by solving the full set of density fluctuation equations numerically [@Hall:2003zp].
Even though the entropy perturbations decay on large scales, they can sometimes leave behind an impression on the curvature fluctuations. If the friction term depends on temperature, then the entropy and curvature fluctuations on sub-horizon scales become coupled. The situation is similar to the sympathetic oscillations of a double pendulum [@som]. When the curvature fluctuations ‘freeze-out’, the amplitude of the sympathetic oscillation may be anywhere between zero and its maximum value, leading to oscillations in the wave-number dependence of the curvature fluctuation spectrum. The amplitude given below therefore has only limited use when $b$ and $c$ are non-zero and refers only to the envelope of these oscillations.
The thermal fluctuations produce a power spectrum of scalar density fluctuations of the form [@Hall:2003zp; @Moss:2007cv], $${\cal P}_S={\sqrt{\pi}\over 2}{H^3T\over u^2}(1+Q)^{1/2}.\label{ps}$$ The spectral index $n_s$ is defined by $$n_S-1={\partial \ln{\cal P}_S\over \partial \ln k},$$ evaluated when the amplitudes ‘freeze out’. To leading order in the slow-roll parameters we can take the freeze-out time to be the horizon crossing time when $k=aH$, and then $$n_S-1={\dot{\cal P}_S\over H{\cal P}_S}.$$ We can use eqs. (\[hdot\]-\[sdot\]) to obtain $$n_S-1={1\over\Delta}
\left\{
-{3(2Q+2+5Qc)(1+Q)\over Q}b-
{9Q+17-5c\over 1+Q}\epsilon
-{9Q+1\over 1+Q}\beta
-{3Qc-6-6Q+2c\over 1+Q}\eta
\right\}$$ If we consider $b=c=0$, then important limits include the strong regime of warm inflation, $Q>>1$, $$n_S-1=-{9\over 4Q}\epsilon-{9\over 4Q}\beta+{3\over 2Q}\eta.$$ This agrees with a partial result in [@taylor00] and the full result in [@Hall:2003zp]. In the weak regime of warm inflation, $Q<<1$, thermal fluctuations lead to the spectral index $$n_S-1=-{17\over 4}\epsilon-{1\over 4}\beta+{3\over 2}\eta.$$ Previous results for the weak regime, though expressed in a less useful form, can be found in Refs [@BasteroGil:2004tg; @Hall:2007qw]. Finally, the case $Q<<1$ and $c=3$ is important because it corresponds to a class of warm inflationary models where the friction coefficient $\Gamma$ has been calculated [@Moss:2006gt], $$n_S-1=-2\epsilon-\beta.$$
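The strong- and weak-regime limits quoted above can be recovered symbolically from the general expression with $b=c=0$:

```python
from sympy import symbols, limit, simplify, oo, Rational

Q, eps, bet, eta = symbols('Q epsilon beta eta', positive=True)
Delta = 4*(1 + Q)                                  # Delta with c = 0
# n_S - 1 with b = c = 0 (eta coefficient is +6 in this limit):
ns1 = (-(9*Q + 17)/(1 + Q)*eps - (9*Q + 1)/(1 + Q)*bet + 6*eta) / Delta

strong = limit(Q * ns1, Q, oo)     # leading 1/Q behaviour for Q >> 1
weak   = simplify(ns1.subs(Q, 0))  # Q << 1

assert simplify(strong + Rational(9,4)*eps + Rational(9,4)*bet - Rational(3,2)*eta) == 0
assert simplify(weak + Rational(17,4)*eps + Rational(1,4)*bet - Rational(3,2)*eta) == 0
```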
The tensor modes have the same amplitude as they do in the cold inflationary models, [^1] $${\cal P}_T=8\pi G H^2.$$ The spectral index for the tensor modes is simply $$n_T-1=-{2\over 1+Q}\epsilon.$$ Unlike in the cold inflationary scenario, the tensor-scalar amplitude ratio cannot be expressed in terms of slow-roll parameters. Instead [@taylor00], $${{\cal P}_T\over {\cal P}_S}=
{2\epsilon\over (1+Q)^3}{H\over T}.$$ Since $T>H$ for warm inflation, the tensor-scalar ratio is likely to be smaller than $1-n_T$. Tensor modes are strongly suppressed relative to the scalar modes in the strong regime of warm inflation, $Q>>1$, but they could be significant in the weak regime.
We conclude this section by finding the lower limit on the friction term which is required for warm inflation. We shall express this limit in terms of $Q=\Gamma/3H$. Re-write the scalar amplitude Eq. (\[ps\]) as $${\cal P}_S\approx{T^4\over u^2}{H^3\over T^3}(1+Q)^{1/2}$$ The first factor can be replaced using the slow-roll equation Eq. (\[sr2\]) and the potential (\[corpot\]), and we obtain $${T\over H}\approx\left({45\over 2\pi^2g_*}\right)^{1/3}(1+Q)^{1/6}
\left( {Q\over {\cal P}_S}\right)^{1/3}.$$ The condition for warm inflation, $T>H$, is therefore $$Q>g_*{\cal P}_S.$$ Cosmic microwave background observations give a scalar power spectrum of $1\times 10^{-10}$ on large scales, so even very small amounts of dissipation can result in warm inflation.
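A short numerical illustration makes the point concrete. Here $g_*=100$ is an assumed effective particle number and ${\cal P}_S=10^{-10}$ the observed scalar amplitude; $Q$ of order $g_*{\cal P}_S$ marks the $T=H$ boundary up to $O(1)$ factors:

```python
import numpy as np

g_star, P_S = 100.0, 1.0e-10     # g_* is an assumed value

def T_over_H(Q):
    """T/H from the slow-roll estimate derived above."""
    return (45.0/(2*np.pi**2*g_star))**(1/3) * (1+Q)**(1/6) * (Q/P_S)**(1/3)

Q_min = g_star * P_S             # = 1e-8: tiny dissipation already suffices
assert T_over_H(100*Q_min) > 1.0 > T_over_H(0.01*Q_min)
```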
conclusion
==========
We shall recapitulate the main points of this paper. There are conditions on six of the parameters defined in Sect. \[be\] for the possibility of a stable period of warm inflation in the early universe:
- The parameter $Q$ which measures the strength of the friction term must satisfy $$Q>g_*{\cal P}_S,$$ where $g_*$ is the effective particle number and ${\cal P}_S$ is the scalar perturbation power spectrum on large scales.
- The parameters which describe the inflaton dependence of the effective potential and friction term satisfy $$\epsilon<<1+Q,\qquad |\eta|<<1+Q,\qquad |\beta|<<1+Q.$$
- The temperature dependence of the potential and the friction term is restricted by $$|b|<<{Q\over 1+Q},\qquad |c|<4.$$ The condition on $b$ implies that warm inflation is only possible when a mechanism, such as supersymmetry, reduces the size of the thermal corrections to the potential.
Models of elementary particles exist in which these conditions can be satisfied. The most convincing of these models use a combination of supersymmetry and a two-stage decay process, in which there is no direct coupling between the inflaton and the radiation and all thermal effects are suppressed by factors of $T/\Lambda_S$, where $\Lambda_S$ is the supersymmetry breaking scale [@berera02; @BasteroGil:2006vr; @BuenoSanchez:2008nc].
When the conditions listed above are satisfied, then the solutions to the equations of motion approach a slow-roll approximation during inflation. As a result, large scale density perturbations have only one degree of freedom, which we identify as the curvature perturbation. (Entropy perturbations can only be introduced by adding extra degrees of freedom to the system.)
Chun Xiong was supported by an O.R.S. scholarship and by the School of Mathematics and Statistics, Newcastle University.
[^1]: Our power spectrum convention for $\delta_k$ is $\langle\delta_k\delta_{k'}\rangle=(2\pi)^3k^{-3}{\cal P}_\delta(k)\delta({\bf k}_1-{\bf k}_2)$.
---
abstract: |
The $\babar$ experiment has recorded the decays of more than 465$\times 10^6\ B\bar{B}$ pairs since 1999, and is reaching an unprecedented precision in the measurement of hadronic $B$ decays. The following results are presented: tests of QCD factorization with the decays $B\rightarrow \chi_{c0}K^{*} $, $B\rightarrow
\chi_{c1,2}K^{(*)} $, and $\bar{B}^{0}\rightarrow D^{(*)0}h^{0}$, $h^0=\pi^0,\ \eta,\ \omega,\ \eta'$, study of the decays to charmonium $B\rightarrow \eta_c K^{(*)},\ \eta_c(2S) K^{(*)}$ and $h_cK^{(*)}$, measurement of the mass difference between neutral and charged $B$’s, measurement of the “r” parameters for the extraction of $\sin(2\beta+\gamma)$, where $\beta$ and $\gamma$ are CKM angles, with the decays $B\rightarrow D_s^{(*)}h,\ h=\pi^-,\ \rho^-,\ K^{(*)+}$, study of the three-body rare decays $B\rightarrow J/\psi\phi K$, study of the baryonic decays $\bar{B}^0\rightarrow\Lambda_c^+\bar{p}$, $B^-\rightarrow\Lambda_c^+\pi^-\bar{p}$, and $\bar{B}^0\rightarrow\Lambda_c^+\pi^0\bar{p}$. Except for the results presented in Sections \[charmo\], \[Bmassdiff\] and \[mesratior\], all the given numbers are preliminary.
author:
- Xavier Prudent
title: 'Hadronic $\boldmath{B}$ Decays to Charm and Charmonium with the BaBar Experiment'
---
BABAR-PROC-08/101\
SLAC-PUB-13400\
arXiv:0809.2929\[hep-ex\]
TESTS OF QCD FACTORIZATION
==========================
Weak decays of hadrons provide direct access to the parameters of the CKM matrix and thus to the study of CP violation. Gluon scattering in the final state, related to the confinement of quarks and gluons into hadrons, can modify the decay dynamics and so must be well understood. In the factorization model [@ref:BauerStechWirbel; @ref:NeubertPetrov], the non-factorizable interactions in the final state by soft gluons are neglected. The matrix element in the effective weak Hamiltonian of the $B$ decay is then factorized into a product of independent hadronic currents.
Measurement of the branching fractions (BF) of the decays [$B\rightarrow \chi_{c0}K^{*}$]{} [@ref:Kchic0]
---------------------------------------------------------------------------------------------------------
In the factorization model of the decay $b\rightarrow c\bar{c}s$, the charge conjugation invariance of the current-current operator forbids the hadronization of $c\bar{c}$ into $\chi_{c0}$. The branching fractions (BF) of the decays $B^0\rightarrow \chi_{c0}K^{*0}$ and $B^+\rightarrow
\chi_{c0}K^{*+} $ are measured from exclusive reconstruction using a data sample of 454$\times 10^6\ B\bar{B}$ pairs in units of $10^{-4}$: $BF(B^0\rightarrow \chi_{c0}K^{*0}) =
1.7 \pm 0.3 \pm 0.2$ and $BF(B^+\rightarrow
\chi_{c0}K^{*+}) = 1.4 \pm 0.5 \pm 0.2$, where the first quoted errors are statistical and the second are systematic. The decay $B^0\rightarrow \chi_{c0}K^{*0}$ is observed with an 8.9 standard deviation (quoted as $\sigma$) significance, and evidence is found for $B^+\rightarrow \chi_{c0}K^{*+}$ with a 3.6$\sigma$ significance. An upper limit $BF(B^+\rightarrow \chi_{c0}K^{*+})<2.1$ is set at the 90 $\%$ confidence level (quoted as $CL$). The $B^0\rightarrow \chi_{c0}K^{*0}$ BF does not agree with the zero value expected from factorization and is about half that of the favored mode $B^0\rightarrow \chi_{c1}K^{*0}$ ($(3.2\pm 0.6)\times
10^{-4}$ [@ref:PDG]).
Measurement of the BFs of the decays [$B\rightarrow\chi_{c1,2}K^{(*)}$]{} [@ref:Kchic12]
----------------------------------------------------------------------------------------
In the factorization model, no operators exist for the hadronization of $c\bar{c}$ into $\chi_{c2}$, while the hadronization to $\chi_{c1}$ is favored. The BFs of the decays $B\rightarrow
\chi_{c1}K^{(*)}$ and $B\rightarrow \chi_{c2}K^{(*)} $ are measured from exclusive reconstruction using a data sample of 465$\times 10^6\ B\bar{B}$ pairs in units of $10^{-5}$: $BF(B^+\rightarrow\chi_{c1}K^+)=46 \pm
2 \pm 3$, $BF(B^0\rightarrow\chi_{c1}K^0)= 41 \pm
3 \pm 3$, $BF(B^+\rightarrow\chi_{c1}K^{*+})=
27\pm 5 \pm 4$, $BF(B^0\rightarrow\chi_{c1}K^{*0})= 25\pm 2 \pm 2$, $BF(B^+\rightarrow\chi_{c2}K^+)<1.8~@
90~\%~CL$, $BF(B^0\rightarrow\chi_{c2}K^0)<2.8~@
90~\%~CL$, $BF(B^+\rightarrow\chi_{c2}K^{*+})<12~@
90~\%~CL$, and $BF(B^0\rightarrow\chi_{c2}K^{*0})= 6.4\pm 1.7 \pm
0.5$, where the first quoted errors are statistical and the second are systematic. The measured values of $BF(B^+\rightarrow\chi_{c1}K^+)$, $BF(B^0\rightarrow\chi_{c1}K^0)$, and $BF(B^+\rightarrow\chi_{c1}K^{*+})$ are the most precise to date. The upper limit on $BF(B^+\rightarrow\chi_{c2}K^+)$ is improved and evidence for the decay $B^0\rightarrow\chi_{c2}K^{*0}$ is seen for the first time.
Measurement of the BFs of the color-suppressed decays [$\bar{B}^{0}\rightarrow D^{(*)0}h^{0}$]{}, [@ref:D0h0]
--------------------------------------------------------------------------------------------------------------
Previous measurements of the BFs of the color-suppressed decays $\bar{B}^{0}\rightarrow D^{(*)0}h^{0}$ were in conflict with the factorization model [@ref:Babar2004; @ref:Belle2005; @ref:Belle2006]. However, more precise measurements are needed to confirm that result and to constrain the different QCD models: SCET (Soft Collinear Effective Theory) and pQCD (perturbative QCD). The BFs are measured from exclusive reconstruction using a data sample of 454$\times 10^6\ B\bar{B}$ pairs; the measured values are given in Table \[tab:table2\].
$\bar{B}^0$ mode $(BF\pm\textrm{stat.}\pm\textrm{syst.})\times 10^{-4}$ Signif.
--------------------------------------- -------------------------------------------------------- --------------
$D^0\pi^0$ 2.78 $\pm$ 0.08 $\pm$ 0.20 35.5$\sigma$
$D^0\eta(\gamma\gamma)$ 2.34 $\pm$ 0.11 $\pm$ 0.17 26.1$\sigma$
$D^0\eta(\pi\pi\pi^0)$ 2.51 $\pm$ 0.16 $\pm$ 0.17 20.3$\sigma$
$D^0\eta$ 2.41 $\pm$ 0.09 $\pm$ 0.17 -
$D^0\omega$ 2.77 $\pm$ 0.13 $\pm$ 0.22 29.4$\sigma$
$D^0\eta'(\pi\pi\eta(\gamma\gamma))$ 1.29 $\pm$ 0.14 $\pm$ 0.09 14.7$\sigma$
$D^0\eta'(\rho^0\gamma)$ 1.95 $\pm$ 0.29 $\pm$ 0.30 7.2$\sigma$
$D^0\eta'$ 1.38 $\pm$ 0.12 $\pm$ 0.22 -
$D^{*0}\pi^0$ 1.78 $\pm$ 0.13 $\pm$ 0.23 15.1$\sigma$
$D^{*0}\eta(\gamma\gamma)$ 2.37 $\pm$ 0.15 $\pm$ 0.24 19.4$\sigma$
$D^{*0}\eta(\pi\pi\pi^0)$ 2.27 $\pm$ 0.23 $\pm$ 0.18 12.2$\sigma$
$D^{*0}\eta$ 2.32 $\pm$ 0.13 $\pm$ 0.22 -
$D^{*0}\omega$ 4.44 $\pm$ 0.23 $\pm$ 0.61 22.3$\sigma$
$D^{*0}\eta'(\pi\pi\eta)$ 1.12 $\pm$ 0.26 $\pm$ 0.27 8.0$\sigma$
$D^{*0}(D^0\pi^0)\eta'(\rho^0\gamma)$ 1.64 $\pm$ 0.53 $\pm$ 0.20 3.3$\sigma$
$D^{*0}\eta'$ 1.29 $\pm$ 0.23 $\pm$ 0.23 -
These results are consistent with the SCET prediction $BF(D^{*0}h^0)/BF(D^0h^0)\sim 1$ for $h^0\ne\omega$, but only marginally consistent with the pQCD predictions for the BFs. The measurements are 3 to 7 times higher than the predictions of the naive factorization model.
STUDY OF THE [$B$]{}-MESON DECAYS TO [$\eta_c K^{(*)},\ \eta_c(2S) K^{(*)},$]{} AND [$h_c K^{(*)}$]{} [@ref:etac_hc_etac2S] {#charmo}
==========================================================================================================
The $B$ decays to charmonium singlet states $h_c$ and $\eta_c$ are still poorly known. A better knowledge of the relative abundances of the various charmonium states allows a deeper understanding of the underlying strong processes. In the non-relativistic QCD model, the productions of $\chi_{cJ}$ ($J=0,1,2$) and $h_c$ are predicted to be comparable in magnitude, however $BF(B\rightarrow
\chi_{c1}K)\sim 3\times 10^{-4}$ and $BF(B^+\rightarrow h_c
K^+)<3.8\times 10^{-5}$. Similarly, no exclusive measurements of the BF of $\eta_c(2S)$ production have been performed. The knowledge of the mass parameters of the charmonium state $\eta_c$ is pivotal for models of the $c\bar{c}$ spectrum, but the measurements available so far are in poor agreement with one another. The large uncertainties on $BF(\eta_c\rightarrow K\bar{K}\pi)$ and on $BF(\eta_c(2S)\rightarrow K\bar{K}\pi)$ are cancelled by measuring the ratios with respect to $BF(B^+\rightarrow \eta_c K^+)$ and $BF(B^+\rightarrow \eta_c(2S) K^+)$. The BFs of $h_c$ and $\eta_c$ production are measured using 384$\times 10^6\ B\bar{B}$ pairs (with $BF(B^+\rightarrow \eta_c
K^+)=(9.1\pm 1.3)\times 10^{-4}$): $BF(B^0\rightarrow\eta_c
K^{*0}) = ( 5.7 \pm 0.6 (\textrm{stat.}) \pm 0.4 (\textrm{syst.})
\pm 0.8 (\textrm{bf.}) )\times 10^{-4}$, $BF(B^+\rightarrow h_c
K^+)\times BF(h_c\rightarrow\eta_c\gamma) < 4.8\times 10^{-5}~@
90~\%~CL$, and $BF(B^0\rightarrow h_c K^{*0})\times
BF(h_c\rightarrow\eta_c\gamma) <2.2\times 10^{-4}~@ 90~\%~CL$. The uncertainty noted bf. is related to the error on $BF(B^+\rightarrow
\eta_c K^+)$. These are the first upper limits and confirm the $h_c$ suppression. Using $BF(B^+\rightarrow\eta_c(2S)K^+)=(3.4\pm
1.8)\times 10^{-4}$, the upper limit on BF for $\eta_c(2S)$ production is $BF(B^0\rightarrow \eta_c(2S)K^{*0}) < 3.9 \times
10^{-4}~@90~\%\ CL$. Using $BF(B^+\rightarrow\eta_c K^+)\times
BF(\eta_c\rightarrow K\bar{K}\pi)=(6.88\pm
0.77^{+0.55}_{-0.66})\times 10^{-4}$, the first measurement is reported for $BF(\eta_c(2S)\rightarrow K\bar{K}\pi) = (1.9 \pm
0.4\textrm{(stat.)} \pm 0.5\textrm{(syst.)} \pm 1.0\textrm{(bf.)})
$. Both the mean and width of the $\eta_c$ mass distribution are extracted: $m(\eta_c) = (2985.8 \pm 1.5\textrm{(stat.)} \pm
3.1\textrm{(syst.)})~\textrm{MeV}/c^2$, $\Gamma(\eta_c) =
(36.3^{+3.7}_{-3.6}\textrm{(stat.)} \pm
4.4\textrm{(syst.)})~\textrm{MeV}$, which are in agreement with the previous $\babar$ measurements.
MEASUREMENT OF THE MASS DIFFERENCE [$m(B^0)-m(B^+)$]{} [@ref:BmassDiff] {#Bmassdiff}
=======================================================================
The measurement of the mass difference $\Delta m_B =
m(B^0)-m(B^+)$ probes the Coulomb contributions to the quark structure, which affect the relative production rates of $\Upsilon(4S)\rightarrow B^0\bar{B}^0$ and $\Upsilon(4S)\rightarrow B^+B^-$. The decay modes $B^0\rightarrow J/\psi K^+\pi^-$ and $B^+\rightarrow J/\psi K^+$ with $J/\psi\rightarrow e^+e^-,\
\mu^+\mu^-$, are reconstructed exclusively using 230$\times 10^6\ B\bar{B}$ pairs. The mass difference $\Delta m_B$ is then computed as:
$$\Delta m_B = - \Delta p^* \times
\frac{p^*(B^0)+p^*(B^+)}{(m(B^0)+m(B^+))\cdot c^2},$$
where $p^*$ is the momentum in the $\Upsilon(4S)$ rest frame. The measured value is $\Delta m_B = (0.33 \pm 0.05\textrm{(stat.)} \pm
0.03\textrm{(syst.)} )~\textrm{MeV}/c^2$, which excludes the null value at the 5$\sigma$ level.
MEASUREMENT OF THE BFs OF [$B^0\rightarrow D_s^{(*)+}\pi^-$]{}, [$B^0\rightarrow D_s^{(*)+}\rho^-$]{}, AND [$B^0\rightarrow
D_s^{(*)-}K^+$]{} [@ref:sin2betaG] {#mesratior}
===========================================================================================================================
The quantity $\sin(2\beta+\gamma)$, with the CKM parameters $\beta$ and $\gamma$, can be measured from the study of the time evolution of the doubly-Cabibbo and CKM-suppressed decays $B^0\rightarrow D^{(*)-}\pi^+$ and $B^0\rightarrow
D^{(*)-}\rho^+$. That study requires the knowledge of the ratios of the decay amplitudes $r(D^{(*)}\pi)= |A(B^0\rightarrow
D^{(*)+}\pi^-)/A(B^0\rightarrow D^{(*)-}\pi^+)|$, which cannot be directly measured. Assuming $SU(3)$ flavor symmetry, $r(D^{(*)}\pi)$ can be related to the decay $B^0\rightarrow D_s^{(*)+}\pi^-$: $$\label{eq:SU3}
r(D^{(*)}\pi)=\tan(\theta_c)\frac{f_{D^{(*)}}}{f_{D_s^{(*)}}}\sqrt{\frac{BF(B^0\rightarrow
D_s^{(*)+}\pi^-)}{BF(B^0\rightarrow D^{(*)-}\pi^+)}},$$ where $\theta_c$ is the Cabibbo angle, and $f_{D^{(*)}}/f_{D_s^{(*)}}$ is the ratio of $D^{(*)}$ and $D_s^{(*)}$ meson decay constants.
The contribution from W-exchange diagrams is evaluated from the study of $B^0\rightarrow D_s^{(*)-}K^+$, which proceeds through a W-exchange diagram only.
Using 381$\times 10^6\ B\bar{B}$ pairs, the measured BFs are (in units of $10^{-5}$): $BF(D_s^+\pi^-) = 2.5 \pm 0.4 \pm
0.2$, $BF(D_s^{*+}\pi^-) = 2.6^{+0.5}_{-0.4} \pm 0.2$, $BF(D_s^+\rho^-) < 2.4~@ 90~\%~CL$, $BF(D_s^{*+}\rho^-) =
4.1^{+1.3}_{-1.2} \pm 0.4$, $BF(D_s^-K^+) = 2.9 \pm 0.4 \pm 0.2$, $BF(D_s^{*-}K^+) = 2.4 \pm 0.4 \pm 0.2$, $BF(D_s^-K^{*+}) =
3.5^{+1.0}_{-0.9} \pm 0.4$, and $BF(D_s^{*-}K^{*+}) =
3.2^{+1.4}_{-1.2} \pm 0.4$.
The measured longitudinal fractions are: $f_L(D_s^{*+}\rho^-) = 0.84^{+0.26}_{-0.28} \pm 0.13$ and $f_L(D_s^{*-}K^{*+}) = 0.92^{+0.37}_{-0.31} \pm 0.07$. The values of $r(D^{(*)}\pi)$ are computed with Equation (\[eq:SU3\]): $r(D\pi) =
(1.78^{+0.14}_{-0.13} \pm 0.08 \pm 0.10 (\textrm{th.}))~\%$, $r(D^{*}\pi) = (1.81^{+0.16}_{-0.15} \pm 0.09 \pm 0.10
(\textrm{th.}))~\%$, $r(D\rho) = (0.71^{+0.29}_{-0.27}
\pm 0.10 \pm 0.04
(\textrm{th.}))~\%$, and $r(D^{*}\rho) = (1.45^{+0.23}_{-0.22}
\pm 0.12 \pm 0.08
(\textrm{th.}))~\%$. The quoted first errors are statistical and the second are systematic. The errors denoted th. are related to the theoretical uncertainties.
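As a rough numerical cross-check of Equation (\[eq:SU3\]), the inputs below reproduce the percent-level size of $r(D\pi)$. Every input value is an assumed, illustrative number (the Cabibbo angle, the decay-constant ratio, and the Cabibbo-favored BF are not taken from this paper):

```python
import math

tan_theta_c = 0.231     # tan of the Cabibbo angle (assumed value)
f_ratio     = 0.86      # f_D / f_Ds, lattice-level estimate (assumed)
BF_Ds_pi    = 2.5e-5    # BF(B0 -> Ds+ pi-), as quoted above
BF_D_pi     = 2.68e-3   # BF(B0 -> D- pi+), PDG-level value (assumed)

r_D_pi = tan_theta_c * f_ratio * math.sqrt(BF_Ds_pi / BF_D_pi)
# r_D_pi comes out at the percent level, in line with the r values above.
```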
MEASUREMENT OF THE BFs OF THE RARE DECAYS [$B\rightarrow J/\psi\phi K$]{} [@ref:JPsiPhiK]
=========================================================================================
Many charmonium-like resonances have been discovered recently, and more are expected in the $J/\psi\phi$ decay channel. The decays $B^0\rightarrow J/\psi\phi
K^0$ and $B^+\rightarrow J/\psi\phi K^+$ were exclusively reconstructed using 433$\times 10^6\ B\bar{B}$ pairs. The measured BFs are: $BF(B^0\rightarrow J/\psi\phi K^0) = (
5.40 \pm 1.20(\textrm{stat.}) \pm 0.40(\textrm{syst.}) )\times
10^{-5}$ and $BF(B^+\rightarrow J/\psi\phi K^+) = (5.81 \pm
0.73(\textrm{stat.}) \pm 0.29(\textrm{syst.}) )\times 10^{-5}$. The study of the $J/\psi\phi$ mass spectrum is on-going.
STUDY OF BARYONIC DECAYS
========================
Baryonic decays of $B$ mesons provide a laboratory for searches for excited charm baryon states and for the investigation of the dynamics of 3-body decays.
Study of the decays [$\bar{B}^0\rightarrow\Lambda_c^+\bar{p}$]{} and [$B^-\rightarrow\Lambda_c^+\pi^-\bar{p}$]{} [@ref:Lambdacppi]
---------------------------------------------------------------------------------------
The BFs of the decay channels $\bar{B}^0\rightarrow\Lambda_c^+\bar{p}$ and $B^-\rightarrow\Lambda_c^+\pi^-\bar{p}$ are measured from exclusive reconstruction using 383$\times 10^6\ B\bar{B}$ pairs: $BF(\bar{B}^0\rightarrow\Lambda_c^+\bar{p}) = (1.89 \pm
0.21(\textrm{stat.}) \pm 0.06(\textrm{syst.}) \pm
0.49(\textrm{bf.}))\times 10^{-5}$ and $BF(B^-\rightarrow\Lambda_c^+\pi^-\bar{p})
= (3.38 \pm 0.12(\textrm{stat.}) \pm 0.12(\textrm{syst.}) \pm
0.88(\textrm{bf.}))\times 10^{-4} $, where the error denoted bf. is related to the uncertainty on $BF(\Lambda_c^+\rightarrow p K^-\pi^+)$. One notices an enhancement of the 3-body channel by a factor of 15 with respect to the 2-body channel. An enhancement is seen in the Dalitz plot of $B^-\rightarrow\Lambda_c^+\pi^-\bar{p}$ at the threshold of the phase space in $m^2(\Lambda_c\bar{p})$. Such a threshold enhancement has been seen in other baryon-antibaryon decay modes and is thus expected to be a dynamical effect rather than a resonance. Three resonances are investigated in the $\Lambda_c\pi$ mass spectrum: $\Sigma_c(2455)^0$, $\Sigma_c(2520)^0$ and $\Sigma_c(2800)^0$. The measured relative BFs are: $BF(B^-\rightarrow\Sigma_c(2455)^0\bar{p})/BF(B^-\rightarrow\Lambda_c^+\pi^-\bar{p})
= (12.3 \pm 1.2(\textrm{stat.}) \pm 0.8(\textrm{syst.}))\times
10^{-2}$, $BF(B^-\rightarrow\Sigma_c(2800)^0\bar{p})/BF(B^-\rightarrow\Lambda_c^+\pi^-\bar{p})
= (11.7 \pm 2.3(\textrm{stat.}) \pm 2.4(\textrm{syst.}))\times
10^{-2}$, and $BF(B^-\rightarrow\Sigma_c(2520)^0\bar{p})/BF(B^-\rightarrow\Lambda_c^+\pi^-\bar{p})
< 0.9\times 10^{-2}~@ 90~\%~CL$. No signal is seen for $\Sigma_c(2520)^0$.
The parameters of the mass distributions of these resonances are extracted: $m(\Sigma_c(2455)^0) = 2454.0 \pm
0.2~\textrm{MeV}/c^2$, $\Gamma(\Sigma_c(2455)^0) = 2.6\pm
0.5~\textrm{MeV}$, $m(\Sigma_c(2800)^0) = 2846.0 \pm
8.0~\textrm{MeV}/c^2$, and $\Gamma(\Sigma_c(2800)^0) =
86^{+33}_{-22}~\textrm{MeV}$. The measured mass for $\Sigma_c(2800)^0$ is 3$\sigma$ higher than the resonance seen by Belle [@ref:BelleSigma2800], which may indicate a new $J=1/2$ state. The angular distribution of $B^-\rightarrow\Sigma_c(2455)^0\bar{p}$ is consistent with a spin of $J=1/2$ for $\Sigma_c(2455)^0$ and the hypothesis $J=3/2$ is rejected at the $> 4\sigma$ level.
Study of the decay [$\bar{B}^0\rightarrow\Lambda_c^+\pi^0\bar{p}$]{}
--------------------------------------------------------------------
This channel is the isospin counterpart of $B^-\rightarrow\Lambda_c^+\pi^-\bar{p}$ and has never been observed. The BF is measured in the restricted phase-space region $m(\Lambda_c^+\pi^0)>3.0~\textrm{GeV}/c^2$ and so does not include contributions from $\Sigma_c(2455,2520,2800)^0$ resonances. An enhancement, similar to the one seen for $B^-\rightarrow\Lambda_c^+\pi^-\bar{p}$, is seen at the threshold of the phase space of the $\Lambda_c^+\bar{p}$ mass spectrum. Using 467$\times 10^6\ B\bar{B}$ pairs, the measured BF is: $BF(\bar{B}^0\rightarrow\Lambda_c^+\pi^0\bar{p}) = (1.61 \pm
0.26(\textrm{stat.}) \pm 0.13(\textrm{syst.}) \pm
0.42(\textrm{bf.}))\times 10^{-4}$, where the error denoted bf. is related to the uncertainty on $BF(\Lambda_c^+\rightarrow p
K^-\pi^+)$.
[9]{} M. Bauer, B. Stech and M. Wirbel, Z. Phys. C [**34**]{}, 103 (1987).
M. Neubert and A.A. Petrov, Phys. Lett. B [**519**]{}, 50 (2001).
$\babar$ Collaboration, B. Aubert [*et al.*]{}, SLAC-PUB-13362, \[hep-ex/arXiv:0808.1487v1\].
Particle Data Group, W.-M. Yao [*et al.*]{}, J. Phys. G 33, 1 (2006), and partial 2007 update for the 2008 edition.
$\babar$ Collaboration, B. Aubert [*et al.*]{}, SLAC-PUB-13375, \[hep-ex/arXiv:0809.0042v1\].
$\babar$ Collaboration, B. Aubert [*et al.*]{}, SLAC-PUB-13347, \[hep-ex/arXiv:0808.0697v1\].
$\babar$ Collaboration, B. Aubert [*et al.*]{}, Phys. Rev. D [**69**]{}, 032004 (2004).
Belle Collaboration, J. Schümann [*et al.*]{}, Phys. Rev. D [**72**]{}, 011103 (2005).
Belle Collaboration, K. Abe [*et al.*]{}, Phys. Rev. D [**74**]{}, 092002 (2006).
$\babar$ Collaboration, B. Aubert [*et al.*]{}, Phys. Rev. D [**78**]{}, 012006 (2008).
$\babar$ Collaboration, B. Aubert [*et al.*]{}, Phys. Rev. D [**78**]{}, 011103 (2008).
$\babar$ Collaboration, B. Aubert [*et al.*]{}, Phys. Rev. D [**78**]{}, 032005 (2008).
$\babar$ Collaboration, B. Aubert [*et al.*]{}, SLAC-PUB-13344.
$\babar$ Collaboration, B. Aubert [*et al.*]{}, SLAC-PUB-13341, \[hep-ex/arXiv:0807.4974v1\].
Belle Collaboration, R. Mizuk [*et al.*]{}, Phys. Rev. Lett. [**94**]{}, 122002 (2005).
---
abstract: |
We use a functional approach to evaluate the Casimir free energy for a self-interacting scalar field in $d+1$ dimensions, satisfying Dirichlet boundary conditions on two parallel planes.
When the interaction is turned off, exact results for the free energy in some particular cases may be found, as well as low and high temperature expansions based on a duality relation that involves the inverse temperature $\beta$ and the distance between the mirrors, $a$.
For the interacting theory, we derive and implement two different approaches. The first one is a perturbative expansion built with a thermal propagator that satisfies Dirichlet boundary conditions on the mirrors. The second approach uses the exact finite-temperature generating functional as a starting point. In this sense, it allows one to include, for example, non-perturbative thermal corrections into the Casimir calculation, in a controlled way.
We present results for calculations performed using those two approaches.
author:
- |
C. Ccapa Ttira and C. D. Fosco\
[*Centro Atómico Bariloche and Instituto Balseiro*]{}\
[*Comisión Nacional de Energía Atómica*]{} [*R8402AGP Bariloche, Argentina.*]{}
title: Casimir effect at finite temperature in a real scalar field theory
---
Introduction {#sec:intro}
============
Casimir and related effects, where quantum effects depend upon the existence of boundary conditions for a quantum field, have been extensively studied [@rev]. Many different points of view and approaches to this kind of problem have been followed, under quite different sets of assumptions regarding the system; i.e., its intrinsic properties, and the conditions under which one wants to evaluate the Casimir force.
In particular, there has been much interest in studying the Casimir effect at a finite temperature ($T>0$), a study that can be undertaken at different levels, distinguished by the way of taking into account all the possible $T>0$ effects. One could, for example, include thermal effects in the description of the matter on the mirrors (a very active research topic [@dispers]), or rather for the physical description of the vacuum field [@Lim], or for both.
In this article, we shall assume perfect (at any temperature) mirrors, with thermal effects restricted to the vacuum field.
It has been noted that thermal and Casimir-like effects share some similarities; this should hardly be surprising, since both may be regarded, essentially, as finite-size effects, the former in the Euclidean time interval $[0,\beta]$ (with periodicity/antiperiodicity conditions), and the latter for a spatial coordinate with Dirichlet or Neumann conditions.
Even though the nature of the boundary conditions is different, it should be expected that, when both effects are present, interesting relations (‘dualities’) will arise between the dependence on the inverse temperature $\beta = T^{-1}$ and that on the distance between the mirrors.
In this article, we present an approach to the calculation of the Casimir free energy for a scalar field at finite temperature in $d+1$ dimensions, subject to Dirichlet boundary conditions, including self-interactions. This approach is based on the use of a path-integral formulation, whereby the $d+1$-dimensional problem is mapped to a dimensionally reduced one, with fields living on the boundaries.
For all the cases considered, we analyze the high and low temperature expansions, also computing the perturbative corrections, due to the self-interaction, to the non-interacting Casimir free energy.
This work is organized as follows: in section \[sec:method\] we present our approach, introducing conventions and definitions. In section \[sec:scalarfree\] we deal with the calculations and the corresponding results for a free real scalar field. Interactions are introduced in \[sec:scalarint\] and, in \[sec:concl\], we present our conclusions.
The method {#sec:method}
==========
The main object we shall be interested in is the free energy $F(\beta,a)$ for a real scalar field $\varphi$ in $d+1$ spacetime dimensions, which is subject to Dirichlet boundary conditions on two plane mirrors, separated by a distance $a$ along the direction corresponding to the $x_d$ coordinate.
In terms of the corresponding partition function ${\mathcal Z}(\beta,a)$, $F(\beta,a)$ is given by: $$F(\beta,a) \;=\; -\frac{1}{\beta} \, \ln {\mathcal Z}(\beta,a) \;,$$ while for ${\mathcal Z}(\beta,a)$ we shall use its standard functional integral representation: $${\mathcal Z}(\beta,a) \;=\; \int \big[{\mathcal D}\varphi\big] \,
e^{- S(\varphi)}\;,$$ where $S$ is the Euclidean action at $T>0$; i.e., with an imaginary time variable restricted to the $[0,\beta]$ interval [@Kapusta:2006pm]; periodic boundary conditions for the fields with respect to this coordinate are implicitly assumed.
In this work, we consider an action of the form $S = S_0 + S_I$, where the free part of the action, $S_0$, is given by: $$\label{eq:defs0}
S_0(\varphi) \;=\; \frac{1}{2} \, \int_0^\beta d\tau \int d^dx
\big(\partial_\mu \varphi \partial_\mu \varphi + m^2 \varphi^2
\big) \;,$$ while the interaction part, $S_I$, is given by: $$\label{eq:defsi}
S_I(\varphi) \;=\;\frac{\lambda}{4!} \int_0^\beta d\tau \int d^dx \,
\big[ \varphi(\tau, {\mathbf x}) \big]^4 \;.$$ We have used brackets in $\big[{\mathcal D}\varphi\big]$ to indicate that the path-integral measure includes only those field configurations satisfying Dirichlet boundary conditions on the loci of the mirrors, which correspond to two parallel planes. Accordingly, we use a coordinate system such that, if the Euclidean coordinates are denoted by $x_0, x_1, \ldots, x_d$ ($x_0 \equiv \tau$), the mirrors then correspond to the regions: $x_d=0$ and $x_d=a$, and will be parametrized as follows: $$x = (x_\parallel, 0) \;,\;\; x = (x_\parallel, a) \;,$$ where $x_\parallel=(\tau,\mathbf{x_{\parallel}})=(\tau,x_1,\dots,x_{d-1})$.
Then we have: $$\label{eq:defmeasure}
\big[{\mathcal D}\varphi\big] \;=\; {\mathcal D}\varphi \;
\delta\big[\varphi(\tau,{\mathbf x}_\parallel,0)\big] \;
\delta\big[\varphi(\tau,{\mathbf x}_\parallel,a)\big] \;,$$ where the $\delta$’s with bracketed arguments are understood in the functional sense; for example: $$\delta\big[\varphi(\tau,{\mathbf x}_\parallel,a)\big] \;=\;
\prod_{\tau,{\mathbf x}_\parallel} \, \delta\big(\varphi(\tau,{\mathbf
x}_\parallel,a)\big) \;\;,$$ while the ones on the right hand side are ordinary ones. Besides, we assume that the ($\beta$-dependent) factor that comes from the integration over the canonical momentum has been absorbed into the definition of ${\mathcal
D}\varphi$, so that performing the integral over $\varphi$ does indeed reproduce the partition function (without any missing factors).
Following [@Kardar:1997cu], the functional $\delta$-functions are exponentiated by means of two auxiliary fields, $\xi_1(x_\parallel)$ and $\xi_2(x_\parallel)$, living in $d$ spacetime dimensions: $$\label{eq:z00f}
{\mathcal Z}(\beta,a) \;=\; \int {\mathcal D}\varphi \, {\mathcal
D}\xi_1 \, {\mathcal D}\xi_2 \; e^{-S(\varphi) \,+\, i\,\int d^{d+1}x
J_p(x) \varphi(x)}\,,$$ where we introduced the singular current: $$J_p(x)\;\equiv\; \delta (x_d) \xi_1(x_\parallel) \,+\, \delta (x_d- a)
\xi_2(x_\parallel) \,,$$ whose support is the region occupied by the mirrors.
Since we shall treat interactions in a perturbative approach, it is convenient to deal first with the free theory, as a necessary starting point to include the interactions afterwards.
Free theory {#sec:scalarfree}
===========
When the theory is free ($\lambda=0$), $S = S_0$, and the integral over $\varphi$ becomes Gaussian. The resulting partition function, denoted by ${\mathcal Z}^{(0)}(\beta,a)$ is then: $$\label{eq:zint1}
{\mathcal Z}^{(0)}(\beta,a) \;=\; {\mathcal Z}^{(0)}(\beta) \,\times \,
\int {\mathcal D}\xi_1 {\mathcal D}\xi_2 \; e^{-S_p(\xi)} \;,$$ where ${\mathcal Z}^{(0)}(\beta)$ is the (free) partition function in the absence of the mirrors, and: $$S_p(\xi) \;=\; \frac{1}{2} \,\int d^dx_\parallel d^dy_\parallel
\xi_a(x_\parallel) \Omega_{ab}(x_\parallel,y_\parallel)
\xi_b(y_\parallel) \;,$$ where $a,b=1,2$, and $$\label{eq:omega}
\Omega(x_{\parallel};y_{\parallel})=\left[ \begin{array}{cc}
\Delta(x_{\parallel},0;y_{\parallel},0) &
\Delta(x_{\parallel},0;y_{\parallel},a) \\
\Delta(x_{\parallel},a;y_{\parallel},0) &
\Delta(x_{\parallel},a;y_{\parallel},a) \\
\end{array} \right] \;,$$ where $\Delta$ is the free, imaginary-time, propagator. It may be written explicitly as follows: $$\Delta(\tau_x,\mathbf{x};\tau_y,\mathbf{y})=\frac{1}{\beta}\sum_n \int
\frac{d^d\mathbf{k}}{(2\pi)^d} e^{i\omega_n(\tau_x\ - \tau_y)
+i\mathbf{k}(\mathbf{x}-\mathbf{y})}\,
\widetilde{\Delta}(\omega_n,\mathbf{k})\;,$$ with $\widetilde{\Delta}(\omega_n,\mathbf{k}) \,=\, \big(\omega_n^2 + \mathbf{k}^2 + m^2\big)^{-1}$.
Here, $\omega_n = \frac{2 \pi n}{\beta}$ ($n \in {\mathbb Z}$) denotes the Matsubara frequencies. $\Delta$ is the inverse, in the space of $\tau$-periodic functions, of $K\equiv(-\partial^2 +m^2)$.
Expression (\[eq:zint1\]) allows one to extract from the free energy the term that would correspond to a free field in the absence of mirrors, $F^{(0)}$, plus another contribution, which we shall denote by $F^{(0)}_p$: $$F^{(0)}(\beta,a) \;=\; F^{(0)}(\beta) \,+\,F^{(0)}_p(\beta,a) \;,$$ $$F^{(0)}_p(\beta,a)\,=\, - \frac{1}{\beta} \,
\ln\Big[\frac{{\mathcal Z}^{(0)}(\beta,a)}{{\mathcal Z}^{(0)}(\beta)}\Big] \;,$$ which clearly contains the Casimir effect information (including thermal corrections). However, it also carries information that is usually unwanted, associated to the ever-present self-energy of the mirrors. Indeed, we see that it includes a divergent contribution to the free energy, which can be neatly identified, for example, by noting that, when $a \to \infty$: $$\frac{{\mathcal Z}^{(0)}(\beta,a)}{{\mathcal Z}^{(0)}(\beta)}\,\to\,
\big[{\mathcal Z}^{(0)}_m(\beta)\big]^2 \;,$$ where ${\mathcal Z}_m^{(0)}$ is the contribution corresponding to one mirror: $$\begin{aligned}
{\mathcal Z}^{(0)}_m(\beta)&=& \int {\mathcal D}\xi_1 \;
e^{-\frac{1}{2}\int d^dx_\parallel \xi_1(x_\parallel) \Delta(x_{\parallel},0;y_{\parallel},0)
\xi_1(y_\parallel)} \nonumber\\
&=& \int {\mathcal D}\xi_2 \; e^{-\frac{1}{2}\int d^dx_\parallel \xi_2(x_\parallel)
\Delta(x_{\parallel},a;y_{\parallel},a) \xi_2(y_\parallel)} \;.\end{aligned}$$ We then identify $$F^{(0)}_m(\beta) \;\equiv\; -\frac{1}{\beta}
\ln\big[{\mathcal Z}^{(0)}_m(\beta)\big]$$ as the free energy term that measures a mirror’s self-interaction. It is, of course, independent of $a$. Extracting also this contribution, we have the following decomposition for $F^{(0)}$ $$\label{eq:decomp}
F^{(0)}(\beta,a)\;=\; F^{(0)}(\beta) \,+\, 2\, F^{(0)}_m(\beta) \,+\,
F^{(0)}_c(\beta,a)$$ where $F^{(0)}_c(\beta,a)$ has a vanishing limit when $a \to \infty$.
We can produce a more explicit expression for the interesting term $F^{(0)}_c$, starting from its defining properties above. Indeed, we have: $$F^{(0)}_c(\beta,a) \,=\, -\frac{1}{\beta} \,
\ln \Big\{\frac{{\mathcal Z}^{(0)}(\beta,a)}{{\mathcal Z}^{(0)}(\beta)
\big[{\mathcal Z}^{(0)}_m(\beta)\big]^2 }\Big\}\;,$$ which, by integrating out the auxiliary fields, may be written in terms of the matrix kernel $\Omega$: $$F^{(0)}_c(\beta,a) \,=\, \frac{1}{2 \beta} {\rm Tr} \ln \Omega
\,-\, \frac{1}{2 \beta} {\rm Tr} \ln \Omega_\infty$$ where $\Omega_\infty \equiv\Omega|_{a \to \infty}$ and the trace is over both spacetime coordinates and $a, b$ indices.
Assuming that a UV regularization is introduced in order to make sense of each one of the traces above, and using ‘reg’ to denote UV regularized objects, we see that: $$\begin{aligned}
F^{(0)}_{c,reg}(\beta,a) &=& \frac{1}{2 \beta} \Big[{\rm Tr} \ln
\Omega \Big]_{reg}
\,-\, \frac{1}{2 \beta} \Big[{\rm Tr} \ln \Omega_\infty
\Big]_{reg} \nonumber\\
&=& \frac{1}{2 \beta} \Big[{\rm Tr} \ln\big(\Omega_\infty^{-1}
\Omega \big) \Big]_{reg} \;.\end{aligned}$$ As we shall see, the trace on the second line is finite, and has a finite limit when the regulator is removed. Using some algebra we may reduce the trace to one where only continuous indices appear: $$F^{(0)}_c(\beta,a) \,=\, \frac{1}{2\beta} \,{\rm Tr}
\ln \Omega_c \;,$$ where the ‘reduced’ kernel $\Omega_c$ is given by: $$\Omega_c (x_\parallel,y_\parallel) \,=\, \delta(x_\parallel - y_\parallel)
\,-\, T(x_\parallel,y_\parallel) \;,$$ with: $$\begin{aligned}
T(x_\parallel,y_\parallel)&=& \int d^dz_\parallel d^dw_\parallel d^du_\parallel
[\Omega_{11}]^{-1}(x_\parallel,z_\parallel)
\Omega_{12}(z_\parallel,w_\parallel)\nonumber\\
&\times&
[\Omega_{22}]^{-1}(w_\parallel,u_\parallel)\Omega_{21}(u_\parallel,y_\parallel)
\;.\end{aligned}$$
Finally, note that, due to translation invariance along the mirrors, the free energy shall be proportional to $V_p$, the area of the mirrors. Then, to absorb this divergence we shall rather consider the corresponding free energy [*density*]{}, obtained by dividing the extensive quantity by $V_p$. In particular, $${\mathcal F}^{(0)}_c(\beta,a) \,\equiv \, \lim_{V_p \to \infty}
\Big[\frac{1}{V_p} F^{(0)}_{c,V_p}(\beta,a)\Big]$$ where $ F^{(0)}_{c,V_p}$ on the right hand side is evaluated for a system with a finite parallel volume. Moreover, taking advantage of the translation invariance along the parallel coordinates, we may use a Fourier transformation to write: $$\label{eq:denercas}
{\mathcal F}^{(0)}_c(\beta,a) \;=\; \frac{1}{2 \beta}
\int \frac{d^{d-1}{\mathbf k}_\parallel}{(2\pi)^{d-1}}
\sum_{n =-\infty}^{+\infty}
\ln \widetilde{\Omega_c}^{(n)}({\mathbf k}_\parallel)
\;,$$ where $$\widetilde{\Omega_c}^{(n)}({\mathbf k}_\parallel)
\;=\; 1 \,-\, \widetilde{T}^{(n)}({\mathbf k}_\parallel)\;,$$ with: $$\label{eq:defT}
\widetilde{T}^{(n)}({\mathbf k}_\parallel) \;\equiv\;
[{\widetilde\Omega}_{11}^{(n)}({\mathbf k}_\parallel) ]^{-1}
\,{\widetilde\Omega}_{12}^{(n)}({\mathbf k}_\parallel ) \,
[ {\widetilde\Omega}_{22}^{(n)}({\mathbf k}_\parallel )]^{-1} \,
{\widetilde\Omega}_{21}^{(n)}({\mathbf k}_\parallel )\;,$$ where the tilde denotes Fourier transformation in both $\tau$ and ${\mathbf
x_\parallel}$. Note the appearance of the reciprocals of the matrix elements of ${\widetilde \Omega}^{(n)}$ (not to be confused with the matrix elements of the inverse of that matrix). The explicit form of the objects entering in (\[eq:defT\]) may be obtained from (\[eq:omega\]): $$\begin{aligned}
{\widetilde\Omega}_{11}^{(n)}({\mathbf k}_\parallel)
= {\widetilde\Omega}_{22}^{(n)}({\mathbf k}_\parallel) &=&
\frac{1}{2 \sqrt{\omega_n^2 + {\mathcal E}^2({\mathbf k}_\parallel)}} \nonumber\\
{\widetilde\Omega}_{12}^{(n)}({\mathbf k}_\parallel) =
{\widetilde\Omega}_{21}^{(n)}({\mathbf k}_\parallel) &=&
\frac{e^{- a\, \sqrt{\omega_n^2 +
{\mathcal E}^2({\mathbf k}_\parallel)}}}{2 \sqrt{\omega_n^2 + {\mathcal
E}^2({\mathbf k}_\parallel)}} \;,\end{aligned}$$ so that: $$\widetilde{\Omega}_c^{(n)}({\mathbf k}_\parallel) \;=\; 1 \,-\, e^{- 2 a \sqrt{\omega_n^2 +
{\mathcal E}^2({\mathbf k}_\parallel)}} \;.$$
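The entries of ${\widetilde\Omega}^{(n)}$ follow from Fourier-inverting the propagator along $x_d$: with $M^2 \equiv \omega_n^2 + {\mathcal E}^2({\mathbf k}_\parallel)$, one uses $\int \frac{dk_d}{2\pi}\, \frac{e^{i k_d a}}{k_d^2 + M^2} \,=\, \frac{e^{-a M}}{2 M}$. This integral can be checked numerically; a minimal quadrature sketch (cutoff and grid size are illustrative choices, not from the text):

```python
import math

def propagator_xd(a, M, kmax=2000.0, n=2_000_000):
    """Midpoint quadrature of int dk_d/(2 pi) cos(k_d a)/(k_d^2 + M^2)
    over (-kmax, kmax); the exact answer is e^{-a M}/(2 M)."""
    h = kmax / n
    total = 0.0
    for i in range(n):
        k = (i + 0.5) * h
        total += math.cos(k * a) / (k * k + M * M)
    # factor 2 for the (-kmax, 0) half, measure dk/(2 pi)
    return 2.0 * total * h / (2.0 * math.pi)

a, M = 1.0, 1.5                      # illustrative values
exact = math.exp(-a * M) / (2.0 * M)
approx = propagator_xd(a, M)
```

Setting $a = 0$ in the same integral recovers the diagonal entries $1/(2M)$.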
It is then evident that the sum and the integral in (\[eq:denercas\]) converge, since the integrand falls off exponentially for large values of the momenta/indices. This behaviour should be expected, since we have constructed this object subtracting explicitly the would-be self-energy parts, the possible source of UV divergences.
Thus, the main result of the previous calculations is a (finite) expression for the free energy, which can be written as follows: $$\label{eq:freexact}
{\mathcal F}^{(0)}_c(\beta,a) \;=\; \frac{1}{2 \beta}
\int \frac{d^{d-1}{\mathbf k}_\parallel}{(2\pi)^{d-1}}
\sum_{n =-\infty}^{+\infty}
\ln \Big[ 1 \,-\, e^{- 2 a \sqrt{\omega_n^2 + {\mathcal
E}^2({\mathbf k}_\parallel)}} \Big] \;.$$ It is worth noting that the expression above has been obtained by subtracting the free energy corresponding to a situation where the mirrors are separated by an infinite distance; on the other hand, we note that, when the $\beta \to \infty$ limit is taken, the sum is replaced by an integral, and we obtain: $$\label{eq:enexact}
{\mathcal E}^{(0)}_c(a) \,\equiv\,{\mathcal F}^{(0)}_c(\infty,a) \,=\,
\frac{1}{2} \int \frac{d^dk_\parallel}{(2\pi)^d}
\ln \Big( 1 \,-\, e^{- 2 a \sqrt{ k_\parallel^2 + m^2 }} \Big) \;,$$ which is the Casimir energy per unit area.
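For $d=3$ and $m=0$, the integral in (\[eq:enexact\]) can be done in closed form and gives the familiar value ${\mathcal E}^{(0)}_c(a) = -\pi^2/(1440\, a^3)$, the leading term of the low-temperature expansion below. A numerical sketch of the check (quadrature parameters are illustrative):

```python
import math

def casimir_energy_d3(a, kmax=60.0, n=600_000):
    """E_c^(0)(a) = (1/2) int d^3k/(2 pi)^3 ln(1 - e^{-2 a |k|}) for d = 3, m = 0.
    Angular integration gives 4 pi; midpoint rule on the radial integral."""
    h = kmax / n
    total = 0.0
    for i in range(n):
        k = (i + 0.5) * h
        total += k * k * math.log1p(-math.exp(-2.0 * a * k))
    total *= h
    return 0.5 * 4.0 * math.pi * total / (2.0 * math.pi) ** 3

a = 1.0                                    # illustrative separation
exact = -math.pi ** 2 / (1440.0 * a ** 3)  # closed form for d = 3, m = 0
approx = casimir_energy_d3(a)
```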
With this in mind, one may introduce yet another quantity; the [*temperature dependent*]{} part of the free energy, whose area density we will denote by ${\mathcal F}_t^{(0)}(\beta,a)$, and, from the discussion above, it is given by: $${\mathcal F}_t^{(0)}(\beta,a) = {\mathcal F}_c^{(0)}(\beta,a) \,-\,
{\mathcal F}_c^{(0)}(\infty,a)\;.$$ By using a rather simple rescaling in the integral in the second term, we may write: $$\begin{aligned}
{\mathcal F}_t^{(0)}(\beta,a) &=& \frac{1}{2 \beta}
\int \frac{d^{d-1}{\mathbf k}_\parallel}{(2\pi)^{d-1}}
\Big\{
\sum_{n =-\infty}^{+\infty} \ln\big[ 1 \,-\, e^{ - \frac{2 a}{\beta}
\sqrt{(2 \pi n)^2 + (\beta {\mathcal E}({\mathbf k}_\parallel))^2 }}\big]
\nonumber\\
&-& \int_{-\infty}^\infty d\nu \, \ln\big[ 1 \,-\,
e^{ - \frac{2 a}{\beta} \sqrt{(2 \pi \nu)^2 + (\beta {\mathcal E}({\mathbf
k}_\parallel))^2}}\big] \Big\} \;,\end{aligned}$$ which yields the thermal part of the Casimir free energy as a difference between a series and an integral; note that the sum over the energy modes has already been (implicitly) performed.
Before evaluating the free energy for different numbers of spacetime dimensions, we explore below some consequences of a duality between $\beta$ and $a$.
Duality {#ssec:dual}
-------
The dual role played by $\beta$ and $a$ in the free energy may be understood, for example, by attempting to calculate that object by following an alternative approach, based on the knowledge of the exact energies of the field modes, emerging from the existence of Dirichlet boundary conditions.
The energies $w_l$ of the stationary modes are: $$w_l(\mathbf{k_\parallel})\,=\,\sqrt{\frac{\pi^2
l^2}{a^2}+\mathbf{k}_\parallel^2 + m^2}\;\;, \;\;\;\; l \in {\mathbb N}\;.$$
Since each one of these modes behaves as a harmonic oscillator degree of freedom, its free energy $f\big[w_l({\mathbf k}_\parallel)\big]$ has the following form: $$\begin{aligned}
f\big[w_l({\mathbf k}_\parallel)\big]&=&
-\frac{1}{\beta} \,\ln \big[ \sum_{N=0}^\infty
e^{-\beta w_l({\mathbf k}_\parallel) (N+\frac{1}{2})} \big] \nonumber\\
&=& \frac{1}{2} w_l({\mathbf k}_\parallel) + \frac{1}{\beta} \,
\ln \big[ 1 \, - \, e^{-\beta w_l({\mathbf k}_\parallel)} \big]\;.\end{aligned}$$ Now we consider ${\mathcal F}_t^{(0)}(\beta,a)$, the [*temperature dependent*]{} part of the free energy density, obtained by summing the second term in the expression above over all the degrees of freedom, and dividing by the parallel area: $${\mathcal F}_t^{(0)}(\beta,a)\,=\, \frac{1}{\beta} \,
\int \frac{d^{d-1}{\mathbf k}_\parallel}{(2\pi)^{d-1}}
\sum_{l = 1}^{+\infty} \ln \big[ 1 \, - \,
e^{-\beta w_l({\mathbf k}_\parallel)} \big] \;.$$ This may also be written as follows: $$\begin{aligned}
\label{eq:freet}
{\mathcal F}_t^{(0)}(\beta,a) &=& \frac{1}{2 \beta} \, \int
\frac{d^{d-1}{\mathbf k}_\parallel}{(2\pi)^{d-1}}
\sum_{l = - \infty}^{+\infty} \ln \big[ 1 \, - \, e^{-\beta w_l({\mathbf
k}_\parallel)} \big] \nonumber\\
&-& \frac{1}{\beta} \, \int \frac{d^{d-1}{\mathbf k}_\parallel}{(2\pi)^{d-1}}
\ln \big[ 1 \, - \, e^{-\beta {\mathcal E}({\mathbf k}_\parallel)} \big] \;.\end{aligned}$$ We now recall that we are considering the free energy for one and the same system, albeit with different normalizations: subtraction at $a \to \infty$ in (\[eq:freexact\]), and at $\beta \to \infty$ in (\[eq:freet\]). Then we may write: $${\mathcal F}_c^{(0)}(\beta,a) \,-\, \lim_{\beta \to \infty}
{\mathcal F}_c^{(0)}(\beta, a) \,=\,
{\mathcal F}_t^{(0)}(\beta,a) \,-\, \lim_{a \to \infty}
{\mathcal F}_t^{(0)}(\beta, a) \;,$$ or: $${\mathcal F}_c^{(0)}(\beta,a) \,-\, {\mathcal E}_c^{(0)}(a) \,=\,
{\mathcal F}_t^{(0)}(\beta,a) \,-\, {\mathcal F}_t^{(0)}(\beta) \;,$$ where ${\mathcal F}_t^{(0)}(\beta)$ is an $a$-independent object, which corresponds to the free energy of a free field.
Thus, if we are going to be concerned with temperature and $a$-dependent quantities, for example, to study the temperature dependence of the Casimir force, we only need derivatives with respect to $\beta$ and $a$ of the expression above, and we see that: $$\frac{\partial^2}{\partial a \partial \beta} \big[{\mathcal
F}_c^{(0)}(\beta,a) \big]
\,=\,\frac{\partial^2}{\partial \beta \partial a}
\big[ {\mathcal F}_t^{(0)}(\beta,a) \big]\;,$$ or: $$\label{eq:aux1}
\widetilde{\mathcal F}_c^{(0)}(\beta,a) \,=\,
\widetilde{\mathcal F}_t^{(0)}(\beta,a) \, \equiv \,
\widetilde{\mathcal F}^{(0)}(\beta,a) \;,$$ where the tildes denote subtraction of any term which has a vanishing mixed second partial derivative.
On the other hand, from the explicit form of the free energy density as sum over modes (Matsubara or Casimir), we find the identity: $$\big[\beta \widetilde{\mathcal F}_t^{(0)}\big] (2 a , \beta/2) \,=\,
\big[ \beta \widetilde{\mathcal F}_c^{(0)}\big] (\beta,a) \;,$$ which, combined with (\[eq:aux1\]) yields a duality between $a$ and $\beta$: $$\label{eq:duality}
\widetilde{\mathcal F}^{(0)}(2 a , \beta/2) \,=\, \frac{\beta}{2 a}
\, \widetilde{\mathcal F}^{(0)}(\beta,a)\;.$$ To proceed, we make the simplifying assumption that $m=0$, and transform variables in the integral over $\mathbf{k_\parallel}$ to render the integral dimensionless, obtaining $$\beta \,\widetilde{\mathcal F}^{(0)}(\beta,a)\;=\;\big(\frac{2\pi}{\beta}\big)^{d-1} g(\gamma,d) \,,$$ where $g(\gamma,d)$ is given by $$\label{eq:G}
g(\gamma,d)\,=\,C_d \,\sum_n \int_0^{\infty} dx\, x^{d-2}\,\ln
\left(1-e^{-\gamma \sqrt{n^2+x^2}} \right) \;,$$ where $\gamma= 4\pi a/\beta$ and $C_d$ is a constant factor that depends solely on the dimension $d$.
Now, using the duality formula ($\ref{eq:duality}$), we obtain an interesting relation involving $g$: $$\label{eq:Tdualidad}
\gamma^{d-1}g(\gamma,d)\,=\,\alpha^{d-1}g(\alpha,d)\,,$$ where $\alpha=\beta \pi /a$ (or $\alpha = (2 \pi)^2/\gamma$). Note that this relation directly connects the low and high temperature regimes. Indeed, writing the expression above more explicitly, we see that: $$\label{eq:Tdualidadp}
g(\frac{4 \pi}{\beta} , d)\,=\, \big(\frac{\beta}{2 a}\big)^{d-1} \,
g(\frac{\pi \beta}{a} , d)\,.$$
This duality relation had been pointed out, for the $d=3$ case, by Balian and Duplantier [@balian:2004]. In the remainder of this section, we apply the previous results and study some particular properties of the free energy corresponding to the free scalar field, first for the $d=1$ case, which we single out since it can be solved exactly, and then for $d > 1$.
$d=1$
-----
The free energy (neglecting an irrelevant constant) is given by the expression: $$\label{eq:ecas1+1}
{\mathcal F_c^{(0)}}(\beta,a)= \frac{1}{2 \beta} \sum_{n\ne0} \, \ln
(1-e^{- 2 a \mid \omega_n \mid})\,,$$ which may be written as an infinite product: $$\label{eq:ecas1+1Tb}
{\mathcal F_c^{(0)}}(\beta,a)= \frac{1}{\beta} \, \ln \prod_{n=1}^{+\infty}(1-q^{2n}),$$ where $q=e^{-2 \pi a /\beta}$, and $|q| < 1$ for $T>0$. Using standard properties of elliptic functions, we get a more explicit result for the free energy: $$\label{eq:defzz0}
{\mathcal F_c^{(0)}}(\beta,a)= \frac{\pi a}{6 \beta^2}+ \frac{1}{\beta} \, \ln[\eta(2a/\beta \,i)],$$ where $\eta(z)$ is Dedekind’s eta function. Although it has been obtained for $T>0$, one can verify that it also yields the proper result for $T \to 0$, namely, ${\mathcal F_c^{(0)}}(\infty,a) \,=\, {\mathcal E}^{(0)}_c(a) \,=\, -\frac{\pi}{24\, a}$.
Besides, the $\eta$-function satisfies the modular property $\eta(-1/z) \,=\, \sqrt{-i z}\; \eta(z)$, which, in this context, is tantamount to the duality relation.
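The modular transformation of Dedekind’s function, $\eta(i x) = \eta(i/x)/\sqrt{x}$ for real $x>0$, exchanges $2a/\beta \leftrightarrow \beta/(2a)$ and can be verified numerically from the product representation; a small sketch (the truncation order is an illustrative choice):

```python
import math

def dedekind_eta(x, nmax=200):
    """eta(i x) = e^{-pi x/12} * prod_{n >= 1} (1 - e^{-2 pi n x}), real x > 0."""
    prod = 1.0
    for n in range(1, nmax + 1):
        prod *= 1.0 - math.exp(-2.0 * math.pi * n * x)
    return math.exp(-math.pi * x / 12.0) * prod

x = 0.3                                    # illustrative value of 2a/beta
lhs = dedekind_eta(x)
rhs = dedekind_eta(1.0 / x) / math.sqrt(x)
```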
$d > 1$
-------
Now, we exploit the duality relation to extract the high and low temperature behavior of the Casimir energy, for $d>1$. In a high temperature expansion, formula ($\ref{eq:G}$) can be expanded for small $e^{-\gamma}$. The infinite-temperature limit corresponds to $\gamma \to
\infty$, thus $n=0$ yields the leading contribution in this expansion, with the $n=1,\,2, \ldots $ terms producing higher-order corrections.
The explicit form of the leading high temperature term is: $$\label{eq:Ftt}
\tilde{\mathcal F}_c^{(0)}(\beta,a) \,\sim\,-
\frac{1}{2\pi^{d/2} \beta}\frac{\Gamma(d/2)\zeta(d)}{(2a)^{d-1}} \;,\;\;
\beta \sim 0\;,$$ whereas, applying the duality formula (\[eq:duality\]) with $a \to \beta /2$, we obtain the leading contribution in the low temperature regime, $$\label{eq:Ft0}
{\mathcal F_c^{(0)}}(2a,\beta/2)\,\sim\,-\frac{1}{2\pi^{d/2}}
\frac{\Gamma(d/2)\zeta(d)}{(\beta)^d}\;\;\; (\beta \to \infty),$$ which is $a$ independent. The sub-leading contributions ($n=1,2,...$) include more involved integrals; nevertheless, when $d=3$ and $\gamma \to
\infty$, they reduce to terms like $e^{-n \,\gamma }$. Thus, the most significant contribution for $T\to \infty$ is: $$\label{eq:Ftt1}
-\frac{1}{2 a \beta^2} e^{-4\pi a /\beta}\,.$$ Therefore, the duality formula implies that, when $T \to 0$, $$\label{eq:Ft01}
\tilde{\mathcal F}_c^{(0)}(\beta,a) \sim -\frac{1}{2 a \beta^2} e^{-\pi \beta /a}\,.$$ Collecting these results for $d=3$, we see that, for high temperatures ($T \to \infty$): $${\mathcal F_c^{(0)}}(\beta,a)\,\sim\,-\frac{1}{16 \pi}
\frac{\zeta(3)}{a^2 \beta}-\frac{1}{2 a \beta^2} e^{-4\pi a /\beta}\,\,,$$ while, for low temperatures the corresponding behaviour is: $${\mathcal F_c^{(0)}}(2a, \beta/2)\,
\sim\,-\frac{\pi^2}{1440 a^3} \left[1+ \frac{360}{\pi^3}\frac{a^3}{\beta^3}\zeta(3)+
\frac{720}{\pi^2}\frac{a^2}{\beta^2} e^{-\pi \beta /a} \right]\,.$$
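The high-temperature coefficient above can be checked directly from (\[eq:freexact\]): for $d=3$ and $m=0$ the $n=0$ Matsubara term is exactly $-\zeta(3)/(16\pi a^2\beta)$, and it dominates when $\beta \ll a$. A numerical sketch (quadrature parameters are illustrative):

```python
import math

def free_energy_n0(a, beta, kmax=40.0, npts=400_000):
    """n = 0 Matsubara term of eq. (freexact) for d = 3, m = 0:
    (1/(2 beta)) int d^2k/(2 pi)^2 ln(1 - e^{-2 a |k|})."""
    h = kmax / npts
    total = 0.0
    for i in range(npts):
        k = (i + 0.5) * h
        total += k * math.log1p(-math.exp(-2.0 * a * k))
    total *= h
    # angular integration over d^2k gives 2 pi
    return 2.0 * math.pi * total / (2.0 * beta * (2.0 * math.pi) ** 2)

zeta3 = sum(1.0 / j ** 3 for j in range(1, 200_000))  # zeta(3), truncated series
a, beta = 1.0, 0.1                                    # illustrative values
exact = -zeta3 / (16.0 * math.pi * a ** 2 * beta)
approx = free_energy_n0(a, beta)
```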
Interacting theory {#sec:scalarint}
==================
Let us now consider the theory that results from turning on the quartic self-interaction term, studying the corresponding corrections to the free energy.
This can be done in (at least) two different ways. We shall consider first the would-be more straightforward approach, which amounts to expanding the interaction term in powers of the coupling constant first, postponing the integration over the scalar field until the end of the calculation. We call it ‘perturbative’ since it is closer in spirit to standard perturbation theory.
Perturbative approach {#ssec:pert}
---------------------
We have to evaluate: $$\begin{aligned}
{\mathcal Z}(\beta, a) &=& \int [{\mathcal D}\varphi] \,e^{-S_I(\varphi)}
\, e^{-S_0(\varphi)} \nonumber\\
&=& {\mathcal Z}^{(0)}(\beta, a) \;\; \langle e^{-S_I(\varphi)} \rangle \;, \end{aligned}$$ where we have used $\langle \ldots \rangle$ to denote functional averaging with the free (Matsubara) action and the scalar-field integration measure satisfying Dirichlet boundary conditions. Namely, $$\langle \ldots \rangle \;\equiv\;
\frac{\int [{\mathcal D}\varphi] \ldots e^{-S_0(\varphi)}}{\int [{\mathcal
D}\varphi] \, e^{-S_0(\varphi)}} \;.$$
Thus, the different terms in the perturbative expansion are obtained by expanding in powers of the coupling constant, $\lambda$. For example, the first order term, ${\mathcal F}^{(1)}$, may be written as follows: $${\mathcal F}^{(1)}(\beta,a) \;=\; \frac{\lambda}{4! \beta} \int
d^{d+1}x \, \langle [\varphi(x)]^4 \rangle \;,$$ which may be expressed (via Wick’s theorem), in terms of the average of just two fields. Indeed, the generating functional for these averages, $Z_0^J(\beta,a)$, is simply: $$\label{eq:zJcomplete}
{\mathcal Z}_0^J(\beta,a)\;=\; \int \big[{\mathcal D}\varphi\big] \, e^{-
S_0(\varphi) + \int d^{d+1}x \, J(x) \varphi(x)}\;.$$ We again introduce the auxiliary fields to impose the periodicity constraints on the measure, so that: $$\label{eq:genfunct0}
{\mathcal Z}_0^J(\beta,a) \,=\,
\int {\mathcal D}\varphi {\mathcal D}\xi \,
\, e^{ - S_0[\varphi] + \int d^{d+1}x [i J_p(x)+J(x)]\varphi (x)}\,.$$ Integrating out the scalar field $\varphi$, and using a simplified notation for the integrals, we obtain: $$\begin{aligned}
{\mathcal Z}_0^J(\beta,a) &=& {\mathcal Z}_0(\beta,a) \, \int {\mathcal D}\xi \,
\exp \Big[ - \frac{1}{2} \int_{x_\parallel,y_\parallel}
\xi_\alpha(x_\parallel)
\Omega_{\alpha\beta}(x_\parallel,y_\parallel) \xi_\beta(y_\parallel) \nonumber\\
&+& i \int_{x_\parallel}\xi_\alpha(x_\parallel) L_\alpha(x_\parallel) \,+\,
\frac{1}{2} \int_{x,y} J(x) \Delta(x,y) J(y) \Big] \;,\end{aligned}$$ where: $L_\alpha (x_\parallel) \equiv \int_y \Delta(x_\parallel,a_\alpha;y)
J(y)$, with $a_\alpha = (\alpha - 1) a$, $\alpha=1,2$. Integrating now the auxiliary fields, we see that: $${\mathcal Z}_0^J(\beta,a) \;=\; {\mathcal Z}_0 (\beta,a)\,
e^{\frac{1}{2} \int_{x,y} J(x) G(x,y) J(y) } \;,$$ where $G$, the (free) thermal correlation function in the presence of the mirrors, may be written more explicitly as follows: $$\begin{aligned}
G(x;y) \;= \; \langle \varphi(x) \varphi(y) \rangle &=& \Delta(x;y) \;-\;
\int_{x'_\parallel,y'_\parallel} \Big\{ \Delta(x_\parallel,x_d;
x'_\parallel,a_\alpha) \nonumber\\
&\times& [\Omega^{-1}]_{\alpha\beta}(x'_\parallel,y'_\parallel)
\Delta(y'_\parallel,a_\beta; y_\parallel,y_d) \Big\} \;,\end{aligned}$$ and this is the building block for all the perturbative corrections.
It is quite straightforward to check that the propagator above does verify the Dirichlet boundary conditions, when each one of its arguments approaches a mirror. For example: $$\begin{aligned}
\lim_{x_d \to a_\gamma} G(x;y) &=& \Delta(x_\parallel,a_\gamma;y) \;-\;
\int_{x'_\parallel,y'_\parallel} \Big\{ \Omega_{\gamma \alpha}(x_\parallel; x'_\parallel)
[\Omega^{-1}]_{\alpha\beta}(x'_\parallel,y'_\parallel)
\Delta(y'_\parallel,a_\beta; y_\parallel,y_d) \Big\} \nonumber\\
&=& \Delta(x_\parallel,a_\gamma;y) \;-\; \int_{y'_\parallel}
\delta_{\gamma \beta} \delta(x_\parallel,y'_\parallel)
\Delta(y'_\parallel,a_\beta; y_\parallel,y_d) \nonumber\\
&=& \Delta(x_\parallel,a_\gamma;y) \,-\, \Delta(x_\parallel,a_\gamma;y)
\;=\;0 \;,\end{aligned}$$ and analogously for the second argument.
The thermal correlation function $G(x;y)$ above is composed of two contributions: one is the usual free thermal propagator in the absence of boundary conditions, and the other comes from reflections on the mirrors. In this way, we write $$G(x;y)\,=\, \Delta(x;y)-M(x;y)\,,$$ where $M(x;y)$ is given by: $$\label{eq:Mpropagador}
M(x;y)=\frac{1}{\beta}\sum_n \int \frac{d^{d-1}\mathbf{k}_\parallel}{(2\pi)^{d-1}} e^{iw_n(\tau_x\ - \tau_y)+i\mathbf{k}_\parallel(\mathbf{x}_\parallel-\mathbf{y}_\parallel)}\,
\widetilde{M}(w_n,\mathbf{k}_\parallel;x_d,y_d)\,\,,$$ with $\widetilde{M}(w_n,\mathbf{k}_\parallel;x_d,y_d)$ equal to: $$\label{eq:Mpropagfourier}
\frac{e^{-(|x_d|+|y_d|)E}-e^{-(|x_d-a|+|y_d|+a)E}-e^{-(|x_d|+|y_d-a|+a)E}
+e^{-(|x_d-a|+|y_d-a|)E}}{2E(1-e^{-2aE})}$$ where $E=\sqrt{w_n^2+\mathbf{k}_\parallel^2+m^2}$.
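In this mixed representation the free propagator is $\widetilde{\Delta}(x_d,y_d) = e^{-|x_d - y_d| E}/(2E)$, and the cancellation $\widetilde{G} = \widetilde{\Delta} - \widetilde{M} = 0$ on the mirrors can be checked numerically; a sketch (the values of $E$, $a$ and $y_d$ are illustrative):

```python
import math

def D(xd, yd, E):
    """Free propagator in the mixed representation: e^{-|xd - yd| E}/(2 E)."""
    return math.exp(-abs(xd - yd) * E) / (2.0 * E)

def M(xd, yd, E, a):
    """Mirror contribution of eq. (Mpropagfourier)."""
    num = (math.exp(-(abs(xd) + abs(yd)) * E)
           - math.exp(-(abs(xd - a) + abs(yd) + a) * E)
           - math.exp(-(abs(xd) + abs(yd - a) + a) * E)
           + math.exp(-(abs(xd - a) + abs(yd - a)) * E))
    return num / (2.0 * E * (1.0 - math.exp(-2.0 * a * E)))

E, a, yd = 1.3, 1.0, 0.4                # illustrative values
G0 = D(0.0, yd, E) - M(0.0, yd, E, a)   # x_d on the mirror at x_d = 0
Ga = D(a, yd, E) - M(a, yd, E, a)       # x_d on the mirror at x_d = a
```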
We now apply the previous approach to the calculation of the first-order correction to the free energy (in fact, to the force) and to the self-energy function.
### First-order correction to the free energy
We want to evaluate thermal corrections to the Casimir energy in the interacting theory and, in particular, to understand how its divergences should be dealt with. To automatically subtract $a$-independent contributions, we work with the derivative of the free energy with respect to $a$. Obviously, this is a force, and it carries the same information as an energy from which the $a$-independent infinities have been subtracted. Indeed, it can be integrated over any finite range of distances to obtain the energy difference.
Using $F_\textrm{cas}$ to denote these derivatives, we have: $$F_\textrm{cas}\,\,=\,\,F_\textrm{cas}^{(0)}+F_\textrm{cas}^{(1)}\,,$$ where the term $F_\textrm{cas}^{(1)}$ is the first-order correction to the free Casimir force $F_\textrm{cas}^{(0)}$. There is a term where only $\Delta(x,y)$ appears; this can be renormalized as usual in finite-temperature field theory, by a zero-temperature subtraction plus the inclusion of the first-order (temperature-dependent) mass counterterm contribution [@Kapusta:2006pm].
Then, keeping temperature [*and*]{} $a$ dependent terms only, we see that $$F_\textrm{cas}^{(1)}\,=\,F_{\textrm{cas},\,\Delta \,M}^{(1)}+F_{\textrm{cas},\,M\,M}^{(1)}\,\,,$$ where: $$F_{\textrm{cas},\,\Delta \,M}^{(1)}\,=\,-\frac{\lambda}{2 \beta}
\,\Delta^T(x,x)\int d^{d+1}x \,\frac{\partial M(x,x)}{\partial a}$$ and $$F_{\textrm{cas},\,M\,M}^{(1)}\,\,=\frac{\lambda}{4 \beta} \int d^{d+1}x \,M(x,x) \frac{\partial M(x,x)}{\partial a}\,\,.$$ Using the notation $f_{\Delta,M}$ and $f_{M\,M}$ for the corresponding area densities (we omit super and subscripts): $$f_{\Delta,M}\,=\,-\frac{\lambda}{2 \beta} \Delta^T(0) \sum_n \int
\frac{d^{d-1}\mathbf{k_\parallel}}{(2 \pi)^{d-1}}
\int_{-\infty}^{\infty}dx_d \frac{\partial
\widetilde{M}(\mathbf{k_\parallel},w_n,x_d)}{\partial a}\,\,,$$ and $$\begin{aligned}
f_{M,M}&=&\frac{\lambda}{4 \beta^2} \sum_{n,l} \int \frac{d^{d-1}
\mathbf{k_\parallel}}{(2 \pi)^{d-1}}
\int \frac{d^{d-1}\mathbf{p_\parallel}}{(2 \pi)^{d-1}}
\int_{-\infty}^{\infty}dx_d \nonumber\\
&\times& \widetilde{M}(\mathbf{k_\parallel},w_n,x_d)
\frac{\partial \widetilde{M}(\mathbf{p_\parallel},w_l,x_d)}{\partial a}\;,\end{aligned}$$ and the first-order correction to the Casimir force will be given by: $$\frac{F_\textrm{cas}^{(1)}}{A_{d-1}}\,=\,f_{\Delta,M}+f_{M,M}\,\,.$$
The term $\widetilde{M}(w_n,\mathbf{k}_\parallel;x_d)$ can be obtained from (\[eq:Mpropagfourier\]) by setting $y_d=x_d$; after the change of variables $x_d \to x_d+\frac{a}{2}$ (to obtain a symmetrized form), we perform an integration over the $(d+1)$-dimensional spacetime, obtaining: $$f_{\Delta,M}\,=\,-\frac{\lambda}{4 \beta} \Delta^T(0) \sum_n \int
\frac{d^{d-1}\mathbf{k_\parallel}}{(2 \pi)^{d-1}}
\left(\frac{1}{2E_k}+\frac{a}{2}\,\textrm{csch}^2(aE_k)-\frac{\textrm{coth}(aE_k)}{2E_k}
\right)\;.$$ The $T$-dependent function $\Delta^T(0)$ is given by $$\Delta^T(0)=\int \frac{d^d \mathbf{p}}{(2\pi)^d} \frac{1}{w} n_B(\beta,w)\,\,,$$ where $n_B(\beta,\omega)$ is the Bose distribution function, with $\omega=\sqrt{\mathbf{p}^2+m^2}$.
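For $d=3$ and $m=0$ the integral has the well-known closed form $\Delta^T(0)=T^2/12$, which provides a convenient numerical check. A minimal sketch (the grid parameters are ours):

```python
import math

def delta_T0(beta, N=100000, pmax=60.0):
    """Midpoint-rule evaluation of Delta^T(0) for d = 3, m = 0:
    (1/(2 pi^2)) * int_0^inf  p dp / (e^{beta p} - 1)."""
    h = pmax / N
    total = 0.0
    for i in range(N):
        p = (i + 0.5) * h               # midpoint rule avoids p = 0
        total += p / (math.exp(beta * p) - 1.0) * h
    return total / (2.0 * math.pi ** 2)

beta = 2.0                              # T = 0.5
T = 1.0 / beta
exact = T * T / 12.0                    # known massless closed form
assert abs(delta_T0(beta) - exact) / exact < 1e-3
```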
$f_{\Delta,M}$ may be expressed in yet another way, using the definition of the Bose distribution function but with $\beta$ replaced by $2a$ while $\omega$ is replaced by $E_k=\sqrt{\omega_n^2+\mathbf{k}_\parallel^2+m^2}$, i.e., $n_B(\beta,\omega)$ is replaced by $$n_k(2a,E_k) \equiv \frac{1}{e^{2a\, E_k}-1} \;.$$ Then $$f_{\Delta,M}\,=\,\frac{\lambda}{4 \beta} \Delta^T(0) \sum_n
\int \frac{d^{d-1}\mathbf{k_\parallel}}{(2 \pi)^{d-1}}
\left[\frac{n_k}{E_k}-2a \,n_k\,(1+n_k) \right]\,\,,$$ which is clearly a convergent quantity.
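The convergence is easy to verify numerically. The following sketch (ours) evaluates the Matsubara sum of the bracket in the simplest setting, $d=1$ with a mass $m$ (so that the $\mathbf{k}_\parallel$ integral is absent and the $n=0$ term is regular); the parameter values are arbitrary:

```python
import math

def bracket_sum(beta, a, m, nmax):
    """Partial Matsubara sum of n_k/E_k - 2a n_k (1 + n_k), d = 1 case."""
    total = 0.0
    for n in range(-nmax, nmax + 1):
        w_n = 2.0 * math.pi * n / beta
        E = math.sqrt(w_n * w_n + m * m)
        x = 2.0 * a * E
        if x > 700.0:                 # term is far below double precision
            continue
        nk = 1.0 / (math.exp(x) - 1.0)     # n_k(2a, E_k)
        total += nk / E - 2.0 * a * nk * (1.0 + nk)
    return total

beta, a, m = 1.0, 0.8, 0.5
s1, s2 = bracket_sum(beta, a, m, 50), bracket_sum(beta, a, m, 200)
assert abs(s2 - s1) < 1e-12           # tail terms decay like e^{-2 a E_k}
```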
For the remaining term we proceed in a similar way, where now $p$ stands for $(\omega_l,\mathbf{p_\parallel})$: $$f_{M,M}\,=\,\frac{\lambda}{4 \beta^2} \sum_{n,l} \int \frac{d^{d-1}\mathbf{k_\parallel}}{(2 \pi)^{d-1}}
\int \frac{d^{d-1}\mathbf{p_\parallel}}{(2 \pi)^{d-1}} f(k,p)\,,$$ where $$\begin{aligned}
f(k,p) &=&
\left(n_p+1\right)\left(\frac{n_p}{E_k^2}-\frac{n_k
n_p}{E_k/2a}\right)\nonumber\\
&+&\left(n_p+\frac{1}{2}\right)\left(\frac{n_k}{E_k
E_p}-\frac{E_k\,n_p-E_p\,n_k}{E_k(E_k^2-E_p^2)}\right)-\frac{n_p}{2E_k(E_k+E_p)}
\, \end{aligned}$$ which is also convergent.
### First-order correction to the self-energy
We conclude the application of this method with the calculation of the tadpole diagram contribution to the self-energy. There is here a surface divergence due to the $M(x;x)$ contribution: $$\label{eq:selener}
\Pi=\frac{\lambda}{2}(\Delta(x,x)- M(x,x))\,.$$ As before, it is convenient to separate from the self-energy the contribution which is present even when the mirrors are infinitely distant; namely, $$\Pi=\Pi_{free}+\Pi_{mir}\,,$$ where $\Pi_{free}=\frac{\lambda}{2} \Delta(x,x)$ and $\Pi_{mir}=-
\frac{\lambda}{2} M(x,x)$, the latter containing the contribution to the self-energy coming from the Dirichlet boundary conditions. The term $\Pi_{free}$ has UV and IR divergences, which are analyzed in standard finite-temperature calculations.
On the other hand, the $\Pi_{mir}$ term has no UV divergences, although it is IR divergent in low dimensions. Moreover, it presents surface divergences when the $x_d$ coordinate tends to $x_d=0$ or $x_d=a$. We will classify these divergences according to the dimension of the theory.
When $d=1$ and $m=0$, we obtain, $$\Pi_{mir}=-\frac{\lambda}{2 \beta} \sum_n \frac{e^{-2|x|E}-2e^{-(|x|+|x-a|+a)E}+e^{-2|x-a|E} }{2E(1-e^{-2aE})} ,$$ where $E=|\omega_n|$. Then we see that $$\Pi_{mir} = -\frac{\lambda}{4 \pi} \sum_{n=1}^{\infty} \frac{1}{n} \times \left\{ \begin{array}{ll}
e^{\gamma s n} & \textrm{if $s<0$}\\
\\
\frac{\cosh((s-1/2)\gamma n)-e^{-\gamma n /2}}{\sinh (\gamma n/2)} & \textrm{if $0<s<1$}\\
\\
e^{-\gamma(s-1)n} & \textrm{if $s>1$}
\end{array} \right.$$
where $\gamma=4\pi a/\beta$ and $s=x/a$. The sums for $s<0$ and $s>1$ can be performed. For $s<0$, the self-energy is proportional to $\ln (1-e^{\gamma s})$, which diverges as $s \to 0^-$, regardless of the temperature.
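The $s<0$ result is just the elementary series $\sum_{n\geq 1} e^{\gamma s n}/n = -\ln(1-e^{\gamma s})$, which can be checked directly (a minimal sketch with arbitrary parameter values):

```python
import math

gamma, s = 2.0, -0.3          # gamma = 4 pi a / beta, and s = x/a < 0
q = math.exp(gamma * s)       # 0 < q < 1, so the series converges
partial = sum(q ** n / n for n in range(1, 2000))
closed = -math.log(1.0 - q)   # sum_{n>=1} q^n / n = -ln(1 - q)
assert abs(partial - closed) < 1e-12
# The closed form -ln(1 - e^{gamma s}) indeed diverges as s -> 0^-.
```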
For $d\geq 2$, it is convenient to separate the self-energy into two pieces, one coming from $n=0$ and the other from the remaining modes: $$\Pi_{mir}=\Pi_{mir}^{n=0}+\Pi_{mir}^{n \neq 0}\,,$$ and $$\Pi_{mir}^{n=0} = -\frac{\lambda}{4 \pi \beta} \int_{0}^{\infty} \, dp \, \frac{1}{p} \times \left\{ \begin{array}{ll}
e^{2s\,p} & \textrm{if $s<0$}\\
\\
\frac{\cosh((2s-1)p)-e^{-p}}{\sinh (p)} & \textrm{if $0<s<1$}\\
\\
e^{-2(s-1)p} & \textrm{if $s>1$}
\end{array} \right.$$
$$\Pi_{mir}^{n \neq 0} = -\frac{\lambda}{2 \pi \beta} \sum_{n=1}^{\infty} \times \left\{ \begin{array}{ll}
K_0 (-n \gamma s) & \textrm{if $s<0$}\\
\\
\int_1^{\infty} \frac{dp}{\sqrt{p^2-1}}\,\frac{\cosh((s-1/2)\gamma n p)-e^{-\gamma n p/2}}{\sinh (\gamma n p/2)} & \textrm{if $0<s<1$}\\
\\
K_0 (n \gamma (s-1)) & \textrm{if $s>1$}
\end{array} \right. \;.$$
Again, the zero-mode contribution $\Pi_{mir}^{n=0}$ has an IR-divergent behaviour at low momenta, and it diverges on the mirrors: for instance, for $s<0$ the self-energy is proportional to $\ln(-2s)$.
The $\Pi_{mir}^{n \neq 0}$ term can be analyzed in the high- and low-temperature limits, i.e. $\gamma \gg 1$ and $\gamma \ll 1$, respectively. The divergence of the function $K_0 (x)$ at $x=0$ is logarithmic, so the surface divergences are also logarithmic. Moreover, we see that $\Pi_{mir}^{n = 0} \gg \Pi_{mir}^{n \neq 0}$.
For $d>2$, the IR divergences are absent, and the surface divergences appear explicitly. The two contributions are given by: $$\Pi_{mir}^{n=0} = -\frac{\lambda \,\,\Gamma(\frac{d}{2}-1)}{\beta \,2^{d+1} \pi^{\frac{d}{2}} a^{d-2}} \times \left\{ \begin{array}{ll}
\frac{1}{s^{d-2}} & \textrm{if $s<0$}\\
\\
\zeta(d-2,1-s)+\zeta(d-2,s)-2\zeta(d-2) & \textrm{if $0<s<1$}\\
\\
\frac{1}{(s-1)^{d-2}} & \textrm{if $s>1$}
\end{array} \right.$$
$$\Pi_{mir}^{n \neq 0} = -\frac{\lambda \, \pi^{\frac{d-3}{2}}}{2 \beta^{d-1} \Gamma(\frac{d-1}{2})} \sum_{n=1}^\infty \int_1^{\infty}dp\, n^{d-2} (p^2-1)^{\frac{d-3}{2}} \left\{ \begin{array}{ll}
e^{\gamma n s\, p} & \textrm{if $s<0$}\\
\\
\frac{\cosh((s-\frac{1}{2})\gamma n p)-e^{-\gamma n \frac{p}{2}}}{\sinh (\gamma n \frac{p}{2})} & \textrm{if $0<s<1$}\\
\\
e^{-\gamma n (s-1)p}& \textrm{if $s>1$}
\end{array} \right.$$
The $p$-integral for $s<0$ or $s>1$ can be performed: $$\Pi_{mir}^{n \neq 0} = -\frac{\lambda}{\beta^{\frac{d}{2}}}\frac{1}{(2a)^{\frac{d}{2}-1}} \sum_{n=1}^\infty
n^{\frac{d}{2}-1} \times \left\{ \begin{array}{ll}
\frac{K_{\frac{d}{2}-1}(-\gamma n s)}{(-s)^{\frac{d}{2}-1}} & \textrm{if $s<0$}\\
\\
\frac{K_{\frac{d}{2}-1}(\gamma n (s-1))}{(s-1)^{\frac{d}{2}-1}}& \textrm{if $s>1$}
\end{array} \right.$$ In $d=3$ dimensions we have a simple expression: $$\Pi_{mir}^{n \neq 0} = -\frac{\lambda}{8 \pi a \beta } \times \left\{ \begin{array}{ll}
-\frac{1}{s (e^{-\gamma s}-1)} & \textrm{if $s<0$}\\
\\
\frac{1}{(s-1) (e^{\gamma (s-1)}-1)}& \textrm{if $s>1$}
\end{array} \right.$$
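The simplification in $d=3$ rests on the fact that $K_{1/2}$ is elementary, $K_{1/2}(z)=\sqrt{\pi/(2z)}\,e^{-z}$, so the Bessel sum collapses into a geometric series. A short numerical confirmation of this step (our sketch, with arbitrary parameter values):

```python
import math

def K_half(z):
    """K_{1/2}(z) = sqrt(pi/(2z)) e^{-z}: half-integer Bessel K is elementary."""
    return math.sqrt(math.pi / (2.0 * z)) * math.exp(-z)

gamma, s = 1.5, 1.4                   # arbitrary values with s > 1
x = gamma * (s - 1.0)
# sum_n sqrt(n) K_{1/2}(n x) = sqrt(pi/(2x)) sum_n e^{-n x}  (geometric series)
lhs = sum(math.sqrt(n) * K_half(n * x) for n in range(1, 400))
rhs = math.sqrt(math.pi / (2.0 * x)) / (math.exp(x) - 1.0)
assert abs(lhs - rhs) < 1e-12
```

This is precisely the $1/(e^{\gamma(s-1)}-1)$ structure appearing in the quoted $d=3$ result.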
From the above results we conclude that the surface divergences of the $\Pi_{mir}^{n = 0}$ term are polynomial, $\sim s^{-(d-2)}$, whereas the term $\Pi_{mir}^{n \neq 0}$, which contains the sum over non-zero modes, presents a more severe divergence, proportional to $s^{-(d-1)}$. This has just been shown explicitly for the $d=3$ case.
In figures 1, 2, 3 and 4, we plot the self-energy for 1, 2, and 3 dimensions, for different values of the parameters.
Non-perturbative approach {#ssec:nonpert}
-------------------------
Let us conclude this section by considering a second, alternative procedure to the one just explained. It amounts to integrating out, albeit formally, the scalar field [*before*]{} the perturbative expansion. Indeed, introducing, from the very beginning, the auxiliary fields used to impose the Dirichlet conditions, we see that the partition function becomes $${\mathcal Z}(\beta,a) \;=\;\int {\mathcal D}\xi_1 {\mathcal D}\xi_2 \;
e^{- {\mathcal W}[i J_p]} \; ,$$ where ${\mathcal W}$ denotes the generating functional of connected correlation functions, at finite temperature, and with an unconstrained (no Dirichlet conditions) scalar field integration measure: $$e^{- {\mathcal W}[J]} \;\equiv\; \int {\mathcal D}\varphi \, e^{-S[\varphi]
+ \int d^{d+1}x J(x) \varphi(x) } \;.$$ Of course, the arbitrary current $J$ in the definition above must be replaced by $J_p$, which does depend on the auxiliary fields and on $a$, the distance between the two mirrors. Besides, we assume that, in the course of evaluating ${\mathcal W}$, a renormalization procedure has been used to make sense of the possible infinities, in the usual way.
To proceed, we recall that ${\mathcal W}[J]$ does have a functional expansion: $${\mathcal W}[J] \;=\; {\mathcal W}_0 \, + \, \frac{1}{2} \int d^{d+1}x \int
d^{d+1}y \; {\mathcal W}_2(x,y) J(x) J(y) \,+\, \ldots$$ where ${\mathcal W}_k$ corresponds to the connected $k$-point correlation function. Odd terms are absent from the expansion for the quartic perturbation. Then one may invoke some approximation that allows one to truncate the functional expansion. Of course, it cannot be a naive perturbative expansion in the coupling constant; one should rather use, for example, a mean-field or large-$N$ expansion. Then the leading term will be just the quadratic one: $${\mathcal Z}(\beta,a) \;\sim\; {\mathcal Z}(\beta) \,\times \,
{\mathcal Z}_q(\beta,a)$$ where ${\mathcal Z}(\beta)= e^{{\mathcal W}_0}$ is the thermal partition function in the absence of mirrors, while $${\mathcal Z}_q(\beta,a) \;=\; \int {\mathcal D}\xi_1 {\mathcal D}\xi_2 \;
e^{- \frac{1}{2} \int d^{d+1}x \int
d^{d+1}y \; {\mathcal W}_2(x,y) J_p(x) J_p(y)} \;,$$ depends on ${\mathcal W}_2(x,y)$, the full renormalized thermal propagator. Using the explicit form of $J_p$, we see that the integral is a Gaussian: $$\label{eq:zq1}
{\mathcal Z}_q(\beta,a) \;=\; \int {\mathcal D}\xi_1 {\mathcal D}\xi_2 \; e^{-S_q(\xi)}$$ where $$S_q(\xi) \;=\; \frac{1}{2} \,\int d^dx_\parallel d^dy_\parallel
\xi_a(x_\parallel) \Omega^{q}_{ab}(x_\parallel,y_\parallel)
\xi_b(y_\parallel) \;,$$ and $$\Omega^q(x_{\parallel};y_{\parallel})=\left[ \begin{array}{cc}
D(x_{\parallel},0;y_{\parallel},0) &
D(x_{\parallel},0;y_{\parallel},a) \\
D(x_{\parallel},a;y_{\parallel},0) &
D(x_{\parallel},a;y_{\parallel},a) \\
\end{array} \right] \;,$$ where $D$ is the [*full*]{} imaginary-time propagator. It may be written quite generally as follows: $$D(\tau_x,\mathbf{x};\tau_y,\mathbf{y})=\frac{1}{\beta}\sum_n \int
\frac{d^d\mathbf{k}}{(2\pi)^d} e^{i\omega_n(\tau_x\ - \tau_y)
+i\mathbf{k}(\mathbf{x}-\mathbf{y})}\,
\widetilde{D}(\omega_n,\mathbf{k})\;,$$ with $\widetilde{D}(\omega_n,\mathbf{k})$ a generally complicated function of its arguments. However, we note that some non-perturbative corrections do produce a simple result. For example, considering the IR-resummed version of the massless scalar field yields an expression of the form: $$\tilde{D}(\omega_n,\mathbf{k}) \,=\, \big[ \omega_n^2 + {\mathbf
k}_\parallel^2 + \Pi_\beta(T) \big]^{-1}$$ where $\Pi_\beta(T)$ is the thermal mass. For example, the first two non-trivial contributions correspond to a term which is linear in $\lambda$ plus a non-analytic term: $$\Pi_\beta(T) \,=\, \frac{\lambda T^2}{24} \big[
1 \,-\, 3 ( \frac{\lambda}{24 \pi^2} )^{\frac{1}{2}} \,+\, \ldots \big]
\;.$$ Upon insertion of this expression, we see that the corresponding contribution to the Casimir free energy becomes: $${\mathcal F}_c(\beta,a) \;=\; \frac{1}{2 \beta} \int
\frac{d^{d-1}{\mathbf k}_\parallel}{(2\pi)^{d-1}} \sum_{n
=-\infty}^{+\infty} \ln \Big[ 1 \,-\, e^{- 2 a \sqrt{\omega_n^2 + {\mathbf
k}_\parallel^2 + \Pi_\beta(T)}} \Big] \;,$$ where the $a \to \infty$ contribution has already been subtracted.
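A direct numerical evaluation of this expression is straightforward. The sketch below (our own; grid sizes and parameter values are arbitrary) computes the free energy per unit area in $d=3$ with the one-loop thermal mass, and checks that the resulting interaction is attractive and weakens with distance:

```python
import math

def casimir_free_energy(beta, a, lam, nmax=100, kmax=30.0, N=1500):
    """Casimir free energy per unit area, d = 3, with the thermal mass Pi_beta."""
    T = 1.0 / beta
    # leading terms of the thermal mass quoted in the text
    Pi = lam * T * T / 24.0 * (1.0 - 3.0 * math.sqrt(lam / (24.0 * math.pi ** 2)))
    h = kmax / N
    total = 0.0
    for i in range(N):
        k = (i + 0.5) * h                 # midpoint rule in |k_parallel|
        for n in range(-nmax, nmax + 1):  # Matsubara sum
            w = 2.0 * math.pi * n / beta
            E = math.sqrt(w * w + k * k + Pi)
            total += k * math.log(1.0 - math.exp(-2.0 * a * E)) * h
    return total / (2.0 * math.pi) / (2.0 * beta)

F_half = casimir_free_energy(beta=1.0, a=0.5, lam=0.1)
F_one = casimir_free_energy(beta=1.0, a=1.0, lam=0.1)
assert F_half < F_one < 0.0   # attractive, and weaker at larger separation
```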
Conclusions {#sec:concl}
===========
We have obtained exact results for the free energy of a Casimir system in $1+1$ dimensions, as well as low- and high-temperature expansions for $d>1$, based on a duality relation between inverse temperatures and distances between mirrors in $d+1$ spacetime dimensions.
For the interacting theory, we have developed two different approaches. In the first one, a perturbative expansion was used to obtain the first-order correction to the thermal Casimir force, showing that it is finite once the standard renormalization of thermal field theory in the absence of mirrors is performed.
On the other hand, we also studied the self-energies, showing that they have surface divergences in their coordinate dependence, and that those divergences are of a polynomial type, with a degree that depends on the dimension.
Finally, we have shown that a variation in the order used to integrate the auxiliary fields yields a different method, whereby the one particle irreducible functions naturally appear.
Acknowledgements {#acknowledgements .unnumbered}
================
C.C.T and C.D.F. thank CONICET, ANPCyT and UNCuyo for financial support.
[bib]{} G. Plunien, B. Müller, and W. Greiner, Phys. Rep. **134**, 87 (1986); P. Milonni, [*The Quantum Vacuum*]{} (Academic Press, San Diego, 1994); V. M. Mostepanenko and N. N. Trunov, [*The Casimir Effect and its Applications*]{} (Clarendon, London, 1997); M. Bordag, [*The Casimir Effect 50 Years Later*]{} (World Scientific, Singapore, 1999); M. Bordag, U. Mohideen, and V. M. Mostepanenko, Phys. Rep. **353**, 1 (2001); K. A. Milton, [*The Casimir Effect: Physical Manifestations of the Zero-Point Energy*]{} (World Scientific, Singapore, 2001); S. Reynaud [*et al.*]{}, C. R. Acad. Sci. Paris **IV-2**, 1287 (2001); K. A. Milton, J. Phys. A: Math. Gen. **37**, R209 (2004); S.K. Lamoreaux, Rep. Prog. Phys. **68**, 201 (2005); Special Issue [*“Focus on Casimir Forces”*]{}, New J. Phys. **8** (2006). See, for example:\
G. Bimonte, J. Phys. A [**41**]{}, 164013 (2008) \[arXiv:0801.2832 \[quant-ph\]\]; I. Brevik and K. A. Milton, arXiv:0802.2542 \[quant-ph\]; B. E. Sernelius, J. Phys. A [**39**]{} (2006) 6741; V. B. Bezerra [*et al.*]{}, Phys. Rev. E [**73**]{}, 028101 (2006) \[arXiv:quant-ph/0503134\]. S. C. Lim and L. P. Teo, arXiv:0808.0047 \[hep-th\]; S. C. Lim and L. P. Teo, arXiv:0804.3916 \[hep-th\]. J. I. Kapusta and C. Gale, [*Cambridge, UK: Univ. Pr. (2006) 428 p*]{}. M. Kardar and R. Golestanian, Rev. Mod. Phys. [**71**]{}, 1233 (1999) \[arXiv:cond-mat/9711071\]. R. Balian and B. Duplantier, To appear in the proceedings of 15th SIGRAV Conference on General Relativity and Gravitational Physics, Rome, Italy, 9-12 Sep 2002. \[arXiv:quant-ph/0408124\]. F. Ravndal and D. Tollefsen, Phys. Rev. D **40**, 4191 (1989) L.S. Brown and G.J. Maclay, Phys. Rev. **184** 1272 (1969) C.A. Lutken and F. Ravndal, J. Phys. A**21** L792 (1988) S.A. Gundersen and F. Ravndal, Ann. of Phys. **182**, 90 (1988) N. Graham, R. L. Jaffe, V. Khemani, M. Quandt, O. Schroeder and H. Weigel, Nucl. Phys. B [**677**]{}, 379 (2004) \[arXiv:hep-th/0309130\]. P. Sundberg and R. L. Jaffe, Annals Phys. [**309**]{}, 442 (2004) \[arXiv:hep-th/0308010\]. See, for example, J. Zinn-Justin, [*Quantum Field Theory and Critical Phenomena*]{}, Oxford Science Publications, 4th. Ed., (2002).
[**Figure Captions**]{}
- [**Figure 1:**]{} Self-energy in $1+1$ dimensions, for three values of $\gamma \equiv 4 \pi a / \beta$: upper curve: $\gamma=10$; middle curve: $\gamma=1$ and lower curve: $\gamma=0.1$.
- [**Figure 2:**]{} Self-energy in $2+1$ dimensions, for three values of $\gamma \equiv 4 \pi a / \beta$: upper curve: $\gamma=10$; middle curve: $\gamma=1$ and lower curve: $\gamma=0.1$.
- [**Figure 3:**]{} Self-energy in $3+1$ dimensions, zero mode contribution.
- [**Figure 4:**]{} Self-energy in $3+1$ dimensions, for three values of $\gamma \equiv 4 \pi a / \beta$: upper curve: $\gamma=10$; middle curve: $\gamma=1$ and lower curve: $\gamma=0.1$.
---
author:
- 'Vladimir Blinovsky[^1]'
date:
- 'July 10, 2013'
- |
Instituto de Matematica e Estatistica, USP,\
Rua do Matao 1010, 05508- 090, Sao Paulo, Brazil\
Institute for Information Transmission Problems,\
B. Karetnyi 19, Moscow, Russia,\
vblinovs@yandex.ru
title: 'Erdős’s Matching Conjecture and $s$-wise $t$-intersection Conjecture via Symmetrical Smoothing Method'
---
[**Abstract**]{}
We find a formula for the maximal cardinality of a family of sets from ${[n]\choose k}$ which does not have an $\ell$-matching. After some analytical work, this formula can be reduced to the formula in Erdős's Matching Conjecture. We also prove the conjecture about the cardinality of a maximal $s$-wise $t$-intersecting family of $k$-element subsets of $[n]$. In the proofs we use an original method which we already used in the proof of the Manickam–Miklós–Singhi conjecture in [@1]; we call this method the symmetrical smoothing method.
[**I Introduction and Formulation of Results**]{}
Define $[n]=\{ 1,\ldots ,n\}$ and ${[n]\choose k}=\{ E\subset [n]:\ |E|=k\}$. We say that a family ${\cal A}\subset {[n]\choose k}$ has an $\ell$-matching if there exists a set $\{ E_i ,\ i\in [\ell ]\}\subset {\cal A}$ such that $E_i \bigcap E_j =\emptyset$ when $i\neq j$.
The first problem which we would like to introduce is to find the maximal cardinality $M(\ell ,n ,k)$ of a family ${\cal A}\subset {[n]\choose k}$ which has no $\ell$-matching.
In 1965 Erdős [@2] formulated the following
\[co1\] The value $M(\ell , n,k)$ satisfies the following equality $$\label{e1}
M(\ell ,n,k)=\max\left\{ {k\ell -1\choose k},{n\choose k}-{n-\ell +1\choose k}\right\} .$$
This conjecture is one of the main statements in extremal hypergraph theory. Erdős wrote in [@2] that he managed to prove this conjecture for $k=2$; for $\ell =2$ it is the Erdős–Ko–Rado result, but the general case seems elusive.
Later this conjecture was confirmed under several conditions on the parameters of the problem. We mention the proof of the conjecture for $n\geq(2\ell -1)k-\ell+1$ in [@4], where it was proved that in this case $$M(\ell ,n,k)={n\choose k}-{n-\ell +1\choose k} .$$ The conjecture was also proved for $k=3$ in [@5].
Let us also mention that the asymptotic equality for $M(\ell ,n,k)$, which follows from the conjecture, is proved for some ranges of parameters in [@6].
Our first result is the proof of the following
\[le2\] The following equality is valid $$\label{eq1}
M(\ell ,n,k)=\max_{1\leq i\leq k}\sum_{j\geq i}{\ell i-1\choose j}{n-\ell i+1\choose k-j} .$$
Thus the proof that Conjecture \[co1\] holds for all parameters $\ell ,n,k$ is reduced to the proof of the technical identity $$\begin{aligned}
&&
\max_{1\leq i\leq k}\sum_{j\geq i}{\ell i-1\choose j}{n-\ell i+1\choose k-j}\\
&=&
\max\left\{ {k\ell -1\choose k},{n\choose k}-{n-\ell +1\choose k}\right\} .\end{aligned}$$
Note that for arbitrary $i\in [k]$ the choice of the family $${\cal A}=\left\{ A\in{[n]\choose k}:\ |A\bigcap [\ell i-1]|\geq i \right\}$$ shows that $$M(\ell ,n,k)\geq\max_{1\leq i\leq k}\sum_{j\geq i}{\ell i-1\choose j}{n-\ell i+1\choose k-j} .$$ So we need to prove the opposite inequality.
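Both the conjectured value and the lower bound above can be checked by brute force for very small parameters. The following sketch (ours) enumerates all families of $2$-subsets of $[6]$ and confirms that $M(2,6,2)=5$, in agreement with both (\[e1\]) and (\[eq1\]):

```python
from itertools import combinations
from math import comb

def max_no_matching(n, k, l):
    """Brute-force M(l,n,k): largest family of k-subsets of [n] with no
    l pairwise disjoint members (feasible only for tiny parameters)."""
    masks = [sum(1 << (e - 1) for e in c)
             for c in combinations(range(1, n + 1), k)]
    best = 0
    for fam in range(1 << len(masks)):            # every family of k-subsets
        members = [m for i, m in enumerate(masks) if fam >> i & 1]
        if len(members) <= best:
            continue
        has_matching = any(all(a & b == 0 for a, b in combinations(sub, 2))
                           for sub in combinations(members, l))
        if not has_matching:
            best = len(members)
    return best

n, k, l = 6, 2, 2
erdos = max(comb(k * l - 1, k), comb(n, k) - comb(n - l + 1, k))    # eq. (e1)
lemma = max(sum(comb(l * i - 1, j) * comb(n - l * i + 1, k - j)
                for j in range(i, k + 1)) for i in range(1, k + 1)) # eq. (eq1)
assert max_no_matching(n, k, l) == erdos == lemma == 5
```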
To introduce our second result we need some additional notation. We say that a family ${\cal B}\subset {[n]\choose k}$ is $s$-wise $t$-intersecting if for an arbitrary subset $\{ E_i , i\in [s]\}\subset{\cal B}$ the following relation holds: $|E_1 \bigcap E_2 \bigcap\ldots \bigcap E_s |\geq t$. Let $N(s,n,k,t)$ be the maximal cardinality of an $s$-wise $t$-intersecting family from ${[n]\choose k}$.
There is the following long-standing
\[co2\] Let $sk<(s-1)n +t$. The following equality is valid: $$N(s,n,k,t)=\max_{r\geq 0}\,\biggl|\biggl\{ E\in {[n]\choose k}:\ |E\bigcap [t+rs]|\geq t+(s-1)r\biggr\}\biggr| .$$
Note that the choice of the family $$\biggl\{ E\in {[n]\choose k}:\ |E\bigcap [t+rs]|\geq t+(s-1)r\biggr\}$$ for $r\geq 0$ shows that $$N(s,n,k,t)\geq\max_{r\geq 0}\,\biggl|\biggl\{ E\in {[n]\choose k}:\ |E\bigcap [t+rs]|\geq t+(s-1)r\biggr\}\biggr| .$$ So we need to prove the opposite inequality.
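For small parameters one can verify directly both that these families are $s$-wise $t$-intersecting and what the maximum over $r$ is. A small sketch (ours; the parameter values are arbitrary, chosen so that $sk<(s-1)n+t$):

```python
from itertools import combinations

def frankl_family(n, k, s, t, r):
    """Candidate extremal family {E in ([n] choose k): |E ∩ [t+rs]| >= t+(s-1)r}."""
    base = set(range(1, t + r * s + 1))
    return [set(E) for E in combinations(range(1, n + 1), k)
            if len(base & set(E)) >= t + (s - 1) * r]

def is_swise_t_intersecting(fam, s, t):
    """Check that every s members of the family intersect in >= t elements."""
    return all(len(set.intersection(*sub)) >= t
               for sub in combinations(fam, s))

n, k, s, t = 6, 3, 3, 1           # sk = 9 < (s-1)n + t = 13
sizes = []
for r in range(0, k):
    fam = frankl_family(n, k, s, t, r)
    assert is_swise_t_intersecting(fam, s, t)
    sizes.append(len(fam))
# the maximum over r is attained at r = 0: the star of one element, C(5,2) = 10
assert max(sizes) == sizes[0] == 10
```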
Note also that if $s k\geq (s -1)n +t$, then the whole set ${[n]\choose k}$ is an $s$-wise $t$-intersecting family. There are many publications devoted to the solution of this problem in particular cases. The most important result was obtained by Ahlswede and Khachatrian in the celebrated paper [@7], where they confirmed the validity of this conjecture for the case $s=2$. In all other cases there are partial solutions (when some parameters are fixed and $n$ is sufficiently large); we mention the papers [@9]-[@10].
Our second result is the proof of this conjecture for all parameters $s ,n,k,t$.
We note that in [@11] we proved the fractional analog of Lemma \[le2\]. Hence we have confirmed an expression similar to (\[eq1\]) for fractional matchings as well.
The paper is organized as follows: in Section II we introduce the [*symmetrical smoothing method*]{}, which we have actually already used in [@1] and in [@11]. We also formulate and prove the technical Lemma \[le1\], which we use later, in Section III. In Section III, using Lemma \[le1\], we complete the proof of Conjecture \[co2\]. In the proof of Lemma 1 in Section II we use Lemma 2.
[**Section II**]{}
Next we use the natural bijection between $2^{[n]}$ and the set of binary $n$-tuples $ \{ 0,1\}^n$, and make no difference between these two sets.
We say that a family ${\cal A}\subset {[n]\choose k}$ is (left) compressed if $A=(a_1 ,\ldots , a_k )\in {\cal A}$ together with the conditions $b_j \leq a_j$ implies $B=(b_1 ,\ldots ,b_k )\in{\cal A} $. Note that we can assume that the extremal families are left compressed.
We can also assume that a left compressed family ${\cal A}\subset{[n]\choose k}$ is defined by inequalities $$\label{er1}
{\cal A}=\left\{ x\in{[n]\choose k}:\ (\omega_i ,x)> 0 ,\ i\in [N]\right\} ,$$ where $$\omega_i =(\omega_{i,1},\ldots ,\omega_{i,n})\in R^n$$ and $$\label{er3}
\omega_{i,j}\geq\omega_{i,j+1}$$ for $j\in [n-1]$. Indeed, the set ${\cal A}$ defined by the inequalities (\[er1\]) is shifted. An arbitrary left compressed family can be written as the intersection of sets determined by inequalities of the form (\[er1\]) (with different $\omega$'s). However, we will see later that we can restrict ourselves to the case where the family is generated by only one inequality.
Next we assume that the extremal families in both problems are left compressed and are defined by one inequality of the form (\[er1\]), with condition (\[er3\]) satisfied. It is easy to see that if the family ${\cal A}$ has an $\ell$-matching, then the pairwise disjoint sets $(x_1 ,\ldots ,x_\ell )\subset{\cal A}$ can be chosen in such a way that $x_i \subset [\ell k]$.
The Symmetrical Smoothing Method consists in approximating the number $|{\cal A}|$ by a smooth symmetric function of $\omega$, which allows us to use analytic methods to determine the values of $\omega_{j}$ at which the extremum of $|{\cal A}|$ is attained.
Some of the values $\omega_{j}$ can be negative. Next we transform $\omega$ and write the system (\[er1\]) in an equivalent form in which all coefficients are nonnegative. Consider the following set of basis vectors (for the representation of $\omega$): $z_{j} = (k\ell -d j,\ldots , k\ell -d j, -d j,\ldots , -d j)\in R^n$, where the number of coordinates equal to $k\ell -dj$ is $j$, and $j\in [k\ell -1]$. Because the maximal family ${\cal A}$ is compressed, we only need to choose the first $k\ell$ coordinates $\omega_{i,j}$; the others we can choose as large as possible, i.e., all equal to $\omega_{k\ell}$.
Then it is easy to check that the vectors $\omega$ whose coordinates satisfy the inequalities (\[er3\]) and which determine the maximal family in the first problem can be represented as the sum $$\label{er4}
\omega =\sum_{j=1}^{k\ell -1}\alpha_{j}z_{j}$$ with nonnegative coefficients $\alpha_{j}\geq 0$ and some $d$. Indeed, from (\[er4\]) it follows that for $j\leq k\ell -1$ $$\omega_{j}-\omega_{j+1} =\alpha_{j}k\ell$$ or $$\alpha_{j}=\frac{\omega_{j}-\omega_{j+1}}{k\ell} \geq 0.$$ Since the last equation contains only differences of the $\omega_{j}$, we have one degree of freedom left to determine $\omega_{k\ell}$; to do this we choose a proper $d$. It can easily be shown that $$d =\frac{k\ell}{n}-\frac{\sum_{j=1}^{n}\omega_{j}}{n\sum_{j=1}^{k\ell-1}j\alpha_{j}}.$$
Substituting the expansion (\[er4\]) into the inequality $(\omega ,x)> 0$ we obtain $$\begin{aligned}
\label{er6}
&& \left(\sum_{j=1}^{k\ell -1}\alpha_{j}z_{j} ,x\right) =\sum_{j=1}^{k\ell -1}\alpha_{j}\left( k\ell \sum_{m=1}^{j}x_m -jkd \right)\\
&=& k\ell \sum_{j=1}^{k\ell -1}\left(\sum_{m=j}^{k\ell -1}\alpha_{m}\right) x_j -k d \sum_{j=1}^{k\ell -1}\alpha_{j}j\geq 0.\nonumber\end{aligned}$$ Define $$\beta_{j}=\frac{\sum_{m=j}^{k\ell -1}\alpha_{m}}{\sum_{m=1}^{k\ell -1}m\alpha_{m}}.$$ We can rewrite inequality in (\[er6\]) as follows $$\label{er7}
\sum_{j=1}^{k\ell -1}\beta_{j}x_j \geq \delta ,$$ where $\beta_{j}\geq 0$ and $\beta_{j}\geq\beta_{j+1}$. Without loss of generality we can assume that $\delta > 0$; otherwise inequality (\[er7\]) does not impose any restriction on the choice of $x$.
Thus the maximal family ${\cal A}$ which does not have an $\ell$-matching can be determined by the inequality $$\label{er8}
\sum_{j=1}^{k\ell -1}\beta_{j}x_j \geq \delta ,$$ with $\delta >0$, where the choice of $\beta_{j}$ for $ j\in[k\ell -1]$ is such that $$\label{e100}
\beta_{j}\geq\beta_{j+1},\
\beta_{j}\geq 0,\ \sum_{j=1}^{k\ell -1}\beta_{j}=1.$$
Also note that we can assume that, for the $\beta$ which determines the maximal family ${\cal A}$, $$\biggl| \sum_{j=1}^{k\ell -1}\beta_{j}x_j - \delta\biggr| >\delta_0$$ for all $x\in{[n]\choose k}$ and some sufficiently small $\delta_0 >0$. This is because the number of $k$-element subsets of $[n]$ is finite, as is the number of relations (\[er8\]) which determine ${\cal A}$, so we can vary the coordinates of $\beta$ in a small range without changing the family determined by (\[er8\]). For the same reason we can also assume that the inequality in (\[er8\]) is strict.
Define $$\varphi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-\frac{\xi^2}{2}}d\xi .$$ It is easy to see that $x\in {\cal A}$ if and only if, for arbitrarily small but fixed $\mu >0$ and sufficiently small $\sigma =\sigma (\mu ) >0$, the following inequality is satisfied: $$\label{e0}
Z(x, \sigma)=\varphi\left( ((\beta ,x)-\delta )/\sigma \right)>1-\mu .$$
Define $$Y({\cal A})=\sum_{x\in{[n]\choose k}}Z(x,\sigma ) .$$ We can approximate $|{\cal A}|$ as follows: $$\label{er12}
||{\cal A}|- Y({\cal A})|< \epsilon_1 ,$$ where $\epsilon_1 >0$ can be chosen arbitrarily small. Hence finding the maximum of $|{\cal A}|$ is equivalent to finding the maximum of $Y({\cal A})$ over the allowed choices of $\beta $ and $\delta $.
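The following sketch (ours) illustrates the approximation on a toy example: for a step-function $\beta$ and a threshold $\delta$, the smoothed count $Y({\cal A})$ converges to $|{\cal A}|$ as $\sigma\to 0$. Here $\varphi$ is the Gaussian distribution function defined above.

```python
import math
from itertools import combinations

def phi(x):
    """Standard Gaussian distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def counts(beta_vec, delta, sigma, n, k):
    """Exact |A| and its smoothed version Y(A) = sum_x phi(((beta,x)-delta)/sigma)."""
    Y, exact = 0.0, 0
    for E in combinations(range(n), k):
        dot = sum(beta_vec[j] for j in E if j < len(beta_vec))
        Y += phi((dot - delta) / sigma)
        exact += 1 if dot > delta else 0
    return exact, Y

n, k, a = 8, 3, 3
beta_vec = [1.0 / a] * a      # step-function beta supported on the first a coordinates
delta = 0.3                   # then A = {E : |E ∩ [a]| >= 1}
exact, Y = counts(beta_vec, delta, sigma=1e-3, n=n, k=k)
assert exact == 46            # C(8,3) - C(5,3) = 56 - 10
assert abs(Y - exact) < 1e-6  # the smoothed count matches for small sigma
```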
Next we show that the maximum of $Y({\cal A})$ is achieved on step functions $\beta $, i.e., when $\beta_{j}=\frac{1}{a}$ for $j\in [a ]$, for some $a \in [k\ell -1]$.
\[le1\] If we impose the condition that $\beta_{j}=0$ for $j=a +1,\ldots ,k\ell -1$, then, varying $\delta$ within an arbitrarily small range, we can ensure that the maximum of $Y({\cal A})$ is achieved when $\beta_{j}=\frac{1}{a}$ for $j\in [a ]$.
[**Proof.**]{} We consider the case $a >4$; the case $a \leq 4$ is easy. The proof proceeds by differentiation with respect to $\beta_{j}$ under the conditions $\beta_{j}=0$ for $j=a+1,\ldots ,k\ell -1$. Since $\beta_{a}=1-\sum_{j=1}^{a -1}\beta_{j}$, we have for $j\in [a -1]$ $$\begin{aligned}
\label{er90}
&&
Y^\prime_{\beta_{j}}({\cal A})=\frac{1}{2\pi \sigma}\biggl(\sum_{x\in{[n]\choose k}:\ j\in x,\ a \not\in x}e^{-\frac{((\beta ,x)-\delta )^2}{2\sigma^2}}\\
&-&\nonumber
\sum_{x\in{[n]\choose k}:\ a \in x,\ j \not\in x}e^{-\frac{((\beta ,x)-\delta )^2}{2\sigma^2}}\biggr) =0.\end{aligned}$$ Next we show that these equalities can hold simultaneously only for the step function $\beta_{j}=1/a$, $j\in [a ]$.
Letting $\sigma\to 0$, we find that to satisfy the equality in (\[er90\]) it is necessary that the exponents from the left sum be equal to the corresponding exponents from the right sum, i.e., for each given $j\in [a -1]$ $$\label{er89}
((\beta ,x)-\delta )^2 =((\beta ,y)-\delta )^2$$ where $x\in{[n]\choose k}$ with $j\in x$, $y\in{[n]\choose k}$ with $a \in y$, and $x\setminus \{j\} ,\ y\setminus \{a\} $ run over all sets of cardinality $k-1$ from $[n]\setminus\{ a ,j\}$. This is true because $\sigma$ can be chosen arbitrarily small.
We rewrite equalities (\[er89\]) as follows: $$\begin{aligned}
&&
\beta_{j}^2 +(\beta_{j_1}+\ldots +\beta_{j_{k-1}})^2 -2\delta \beta_{j} -2\delta (\beta_{j_1}+\ldots +\beta_{j_{k-1}})\\
&+&2\beta_{j}(\beta_{j_1}+\ldots +\beta_{j_{k-1}})=\\
&&
\beta_{a}^2 +(\beta_{m_1}+\ldots +\beta_{m_{k-1}})^2 -2\delta \beta_{a} -2\delta (\beta_{m_1}+\ldots +\beta_{m_{k-1}})\\
&+&2\beta_{a }(\beta_{m_1}+\ldots +\beta_{m_{k-1}}).\end{aligned}$$ Summing both sides of this equality over all permissible choices of $j_1 ,\ldots ,j_{k-1}$ and $m_1 ,\ldots ,m_{k-1}$ leads to the equality $$\begin{aligned}
&&\label{er56}
{n-2\choose k-1}(\beta_{j}^2 -2\delta \beta_{j})-2\delta R +2\beta_{j}R\\
&=&\nonumber
{n-2\choose k-1}(\beta_{a}^2 -2\delta \beta_{a})-2\delta R +2\beta_{a}R\end{aligned}$$ where $$\begin{aligned}
&& R= \sum_{x\in{[n]\setminus\{j, a \}\choose k-1}} (\beta ,x)={n-3\choose k-2}\sum_{m\neq j,a}\beta_{m}={n-3\choose k-2}(1-\beta_{j}-\beta_{a} ).\end{aligned}$$ From (\[er56\]) follows, that $\beta_{j}$ can take at most two values: $$\begin{aligned}
\label{er91}
&& \beta_{j}=\beta_{a},\\
&& \beta_{j}+\beta_{a}=\gamma \stackrel{\Delta}{=}2\frac{\delta -\frac{k-1}{n-2}}{1-2\frac{k-1}{n-2}} .
\nonumber\end{aligned}$$ Next we show how to eliminate the possibility that $\beta_{j}$ takes the second value. Assume first that to each $x$ such that $ |x\bigcap [a]|=p$ there corresponds some $y$ such that $|y\bigcap [a ]|=p$, for all $x\in{[n]\choose k}$ and all possible values of $p$. For given $p$ we sum the left- and right-hand sides of relation (\[er89\]) over $x$ and the corresponding $y$ such that $|x\bigcap [a ]|=p$. Then, similarly to the case of summation over all $x$, we obtain two possibilities: $$\beta_{j}=\beta_{a}$$ or $$\label{et1}
\beta_{j}+\beta_{a}=2\frac{\delta -\frac{p-1}{a-2 }}{1-2\frac{p-1}{a -2 }} .$$ Since we can vary $p$, it follows that for some $p$ the last equality contradicts the second equality from (\[er91\]).
Now assume that for some $b$ $$\label{er77}
\beta_{j}=\left\{\begin{array}{ll}
\gamma-\beta_{a},& j\leq b ,\\
\beta_{a},& j\in [b+1,a].
\end{array}
\right.$$ Because $\sum_j \beta_{j}=1$ we have the following condition on $\beta_{a}$ and $\delta$: $$\label{ek1}
b\gamma +(a -2b)\beta_{a} =1.$$ Let $\beta_{j}=\gamma -\beta_{a}$. Assume also that some $x$ such that $ |x\bigcap [a ]|=p$ corresponds to some $y$ such that $|y\bigcap [a]|=q$ for some $p\neq q$. From (\[er89\]) it follows that there are two possibilities: $$(\beta ,x)=(\beta ,y)$$ or $$\label{e34}
(\beta ,x)+(\beta ,y)=2\delta .$$ Each of these equalities imposes a condition: the first one gives (for some integers $p_1 , p_2$) $$p_1 \beta_{a}+ p_2 \gamma =0 ,$$ which is either inconsistent with equality (\[ek1\]) or, together with equality (\[ek1\]), determines the value of $\delta$.
On the other hand, equality (\[e34\]) imposes the condition (for some integers $p_3 , p_4$) $$\label{ed1}
p_3 \beta_{a}+p_4 \gamma =2\delta .$$ It is possible that equality (\[ek1\]) together with equality (\[ed1\]) does not determine the value of $\delta$. In this case we consider the following three possibilities. The first possibility is that there exists $x$ with $|x\bigcap [a]|=m$ (where $m$ may equal $p$ or $q$) and a corresponding $y$ with $|y\bigcap [a]|=v$, where $v\neq p,q$.
The second possibility is that to each $x$ with $|x\bigcap [a]|=m$, where $m\neq p,q$, there corresponds a $y$ with $|y\bigcap [a]|=m$. In this second case we return to the situation which leads to the equalities (\[et1\]) (because when $a \geq 5$ the number of such $m\neq p,q$ is greater than $1$).
The third possibility is that some $x$ with $|x\bigcap [a]|=m$, where $m\neq p,q$, corresponds to some $y$ with $|y\bigcap [a]|\neq p,q,m$.
If we have the first or the third possibility, then we have one additional equation $$\label{el1}
q_3 \beta_{a}+q_4 \gamma =2\delta ,$$ which together with (\[ek1\]) and (\[ed1\]) is either inconsistent or determines a unique value of $\delta$.
We see that if $b>1$ and $\beta_{j}=\gamma -\beta_{ a} >\beta_{a}$ for $j\leq b$, then $\beta_{j}$ can take values only in some finite discrete set. By a small variation of $\delta$ we can ensure that none of these values coincides with the true value of $\delta$. Once more we note that such a variation can always be performed without violating relation (\[er12\]). The lemma is proved.
To prove Lemma \[le2\] we formulate the basic Optimization Problem 1.
[**Optimization Problem 1.**]{}
Find the maximum, over the choice of $\{ \beta_{j}\}$ and $\delta$, of the function $$Y({\cal A})$$ under the conditions $$\label{ew1}
\sum_{m\in [\ell ]}Z(x_m ,\sigma )<\ell N-1+\mu_1$$ and $$\label{ett1}
\beta_{j+1}-\beta_{j}\leq 0,$$ where $\{ x_1 ,\ldots ,x_\ell \}$ runs over all sets of $\ell$ different nonintersecting $n$-tuples from ${[n]\choose k}$ and $\mu_1$ is some small positive number less than $1$.
Next we concentrate on the case of only one $\beta$, which is the case we actually need.
As we showed before, the parameters $\beta ,\delta$ which maximize the value of $Y({\cal A})$ also maximize the value of $|{\cal A}|$, and conditions (\[ew1\]) rule out the event that ${\cal A}$ has an $\ell$-matching.
We will show that the conditional maximum of $Y({\cal A})$ is achieved at $\beta$ such that $$\beta_{j}=\frac{1}{a} ,\ j\in [a]$$ for some $a \in [k\ell -1]$, and $\beta_{j}=0$ when $j >a$.
Next we proceed as follows: we drop all conditions (\[ew1\]) except one. This can only increase $Y({\cal A})$. We choose only one set $\{ x \} =\{ x_1 ,\ldots ,x_\ell \}$ of nonintersecting elements from ${[n]\choose k}$: $$x_j =\{ j , j+\ell ,\ldots , j+(k-1)\ell \} ,\ j\in [\ell ].$$ This set is forbidden from being included in ${\cal A}$ by the inequality $$\label{es1}
(\beta,x_j )\leq \delta .$$
Let $a = (m-1)\ell +p$ for some $m\in [k]$ and $p <\ell$, and let $\beta_{j}=1/a$ when $j\in [a]$. Restriction (\[es1\]) means that it is necessary and sufficient to choose $\delta <\psi$, where $$\label{et11}
\psi =\frac{m-1}{a}=\frac{m-1}{(m-1)\ell+p}.$$ For this choice of $\beta$, and $\delta$ sufficiently close to $\psi$, we have $$\label{e4r}
\sum_{j\geq m}{(m-1)\ell+p\choose j}{n-(m-1)\ell -p\choose k-j}$$ choices of admissible $x\in{[n]\choose k}$.
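The sum in (\[e4r\]) simply counts the $k$-subsets of $[n]$ that meet $[a]$ in at least $m$ elements. A quick numerical check of this combinatorial identity (an illustrative sketch, not part of the paper's argument):

```python
from itertools import combinations
from math import comb

def count_formula(n, k, a, m):
    """sum_{j >= m} C(a, j) * C(n - a, k - j)."""
    return sum(comb(a, j) * comb(n - a, k - j) for j in range(m, k + 1))

def count_direct(n, k, a, m):
    """Number of k-subsets x of [n] with |x \cap [a]| >= m, by enumeration."""
    return sum(1 for x in combinations(range(1, n + 1), k)
               if sum(1 for e in x if e <= a) >= m)

# the two counts agree on a small range of parameters
for n in range(1, 10):
    for k in range(1, n + 1):
        for a in range(1, n + 1):
            for m in range(0, k + 1):
                assert count_formula(n, k, a, m) == count_direct(n, k, a, m)
```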
It remains to find the maximum, over the choices of $a \in [k\ell -1]$, of the sum in (\[e4r\]): $$\begin{aligned}
\label{edd1}
&&\max_{a\in [k\ell -1]}\sum_{j> a/\ell}{a\choose j}{n-a\choose k-j}\\
&=& \nonumber
\max_{i\in [k]}\sum_{j\geq i}{\ell i -1\choose j}{n-\ell i +1\choose k-j} .
\end{aligned}$$ The equality in (\[edd1\]) follows from the fact that $$\sum_{j >a/\ell }{a\choose j}{n-a\choose k-j}$$ decreases as $a$ decreases from $\ell i-1$ to $\ell (i-1)$.
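Both sides of (\[edd1\]) can be compared numerically for small parameters; the sketch below (illustrative only) evaluates the two maxima directly.

```python
from math import comb

def lhs_max(n, k, l):
    """max over a in [k*l - 1] of sum_{j > a/l} C(a,j) * C(n-a,k-j)."""
    return max(sum(comb(a, j) * comb(n - a, k - j)
                   for j in range(k + 1) if j * l > a)
               for a in range(1, k * l))

def rhs_max(n, k, l):
    """max over i in [k] of sum_{j >= i} C(l*i-1,j) * C(n-l*i+1,k-j)."""
    return max(sum(comb(l * i - 1, j) * comb(n - l * i + 1, k - j)
                   for j in range(i, k + 1))
               for i in range(1, k + 1))

# the two maxima coincide (here k*l < n, as assumed in the text)
for l in (2, 3):
    for k in (2, 3):
        for n in range(k * l + 1, k * l + 8):
            assert lhs_max(n, k, l) == rhs_max(n, k, l)
```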
It remains to deal with Optimization Problem 1. Denote the corresponding sum $\sum_{m\in [\ell ]}Z(x_m ,\sigma )$ by $Z (\{ x\},\sigma )$.
Next we use the Kuhn-Tucker necessary conditions on the $\beta =\{ \beta_{j} ,\ j\in [a ]\}$ at which the conditional maximum of $Y({\cal A})$ is achieved.
Assume that $\sum_{j=1}^{a}\beta_{j}=1$ for some $a\in [k\ell -1]$. It follows that the $\beta$ at which the conditional maximum of $Y({\cal A})$ is achieved satisfies the equalities $$\begin{aligned}
\label{t100}
&&Y^\prime_{\beta_{j}}({\cal A})=\lambda Z^\prime_{\beta_{j}}(\{ x\} ,\sigma )-2\lambda_j ,\\
&&Y^\prime_{\delta }({\cal A} )=\lambda Z_{\delta }^\prime (\{ x\} ,\sigma ),
\label{et101}
\end{aligned}$$ where $j\in [a-1]$ and the $\lambda_j \geq 0$ satisfy the relations $\lambda_j (\beta_{a}-\beta_{j})=0$. Here we relax conditions (\[ett1\]) to $$\beta_{a}-\beta_{j}\leq 0.$$
Since $Z^\prime_{\delta } (\{ x\} ,\sigma )<0$, these conditions allow us to find all $\beta_j$ at which the conditional maximum of $Y({\cal A})$ is achieved.
From (\[t100\]) and (\[et101\]) follow the equations $$\label{et200}
Y^\prime_{\beta_{j}}({\cal A})Z^\prime_{\delta}(\{ x \} ,\sigma )=Y^\prime_{\delta }({\cal A})Z^\prime_{\beta_{j}}(\{ x\} ,\sigma )-2\lambda_j Z^\prime_{\delta } (\{ x\} ,\sigma ) .$$ Note that the parameters $\lambda_j$ arise when one takes the conditions $\beta_{a}- \beta_{j}\leq 0$ into account in the Kuhn-Tucker conditions.
We start our analysis with $j=1$. There are two possible cases: $\beta_{1}>\beta_{a}$ or $\beta_{1}=\beta_{a}$. In the second case we are done; we only need to choose $\lambda_1 >0$ (although, as we will see later, we in fact choose $\lambda_1 =0$ to keep the equality $\beta_1 =\beta_a$). Hence it remains to consider the first case, which we now show can be eliminated. In the first case we have $\lambda_1 =0$, and thus equation (\[et200\]) reduces to the equality $$\label{t400}
Y^\prime_{\beta_{1}}({\cal A})Z^\prime_{\delta}(\{x \},\sigma )=Y^\prime_{\delta }({\cal A})Z^\prime_{\beta_{1}}(\{ x\} ,\sigma ).$$ It follows that if $$Z^\prime_{\beta_{1}}(\{ x\} ,\sigma )=0$$ then $$Y^\prime_{\beta_{1}}({\cal A})=0.$$ Later we collect, step by step, the cases (for different $j$) in which $$Y^\prime_{\beta_{j}}({\cal A})=0;$$ the earlier analysis of the extremum of the function $Y({\cal A})$, with the cases (\[er91\]), shows that in this situation a proper choice of $\delta$ yields the equality $\beta_{1}=\beta_{a}$, which contradicts the assumption that $\beta_{1}>\beta_{a}$. Thus we can assume that $ Z^\prime_{\beta_{1}}(\{ x\} ,\sigma )\neq 0$. Next we show that by a proper choice of $\delta$ we can make it impossible for equation (\[t400\]) to hold. We do this by a method similar to the one used to eliminate the second value of $\beta_{j}$ when determining the unconditional extremum of $Y({\cal A})$. We will show that the value $\beta_{1}$ satisfying equation (\[t400\]) can take at most two values, and both of them can be eliminated by a small shift of $\delta$. From this it will follow that we can exclude the possibility $\beta_{1}>\beta_{a}$ when determining the conditional maximum of $Y({\cal A})$, and can assume that $\beta_{1}=\beta_{a}$. Next we present the calculations which support these considerations.
Recall the definition $$\gamma = 2\frac{\delta -\frac{k-1}{n-2}}{1-2\frac{k-1}{n-2}} .$$ Summing the exponents of the terms on both sides of (\[t400\]) with the proper signs, as we did when analyzing the unconditional maximum of $Y({\cal A})$, we obtain the relation $$\begin{aligned}
&&\ell \left(\sum_{x\in{[n]\choose k}:\ 1\in x,\ a\not\in x}((\beta ,x)-\delta )^2 -\sum_{y\in{[n]\choose k}:\ a \in y,\ 1\not\in y}((\beta ,y)-\delta )^2\right)\\
&=& {n\choose k}\left( ((\beta ,x)-\delta )^2 -((\beta ,y)-\delta )^2 \right) , \end{aligned}$$ where on the right-hand side $x\in \{x\}$ with $1\in x$ and $y\in \{x\}$ with $a \in y$. From here we obtain the equality, quadratic in $\beta_{1}$, $$\begin{aligned}
&& \ell\cdot {n-2\choose k-1}(\beta_{1}-\beta_{a})\left( 1-2\frac{k-1}{n-2}\right) (\beta_{1}+\beta_{a} -\gamma ) \label{ej1}\\
&=& {n\choose k}( (\xi +\beta_{1}-\delta )^2 -(\psi +\beta_{a}-\delta )^2 ) .\nonumber\end{aligned}$$ Here $$\begin{aligned}
\xi &=&(\beta ,x)-\beta_{1},\label{oi}\\
\psi &=& (\beta ,y)-\beta_{a}
\end{aligned}$$ for some $x,y\in \{ x \} $ such that $ 1\in x$ and $a \in y$.
The last relation generates solutions for $\beta_{1}$ whose dependence on $\delta$ is an essentially algebraic or (possibly) linear function. Next we make an important remark. The number of positive or negative terms on the left-hand side of (\[et200\]) is $\ell {n-2\choose k-1}$, and on the right-hand side ${n\choose k}$. We can assume that $k\ell <n$, since otherwise the answer to the matching problem is clear. Then, to satisfy equality (\[et200\]), we should assume that some two terms with different signs on the right-hand side of (\[et200\]) are equal. This gives the equation $$((\beta ,x )-\delta )^2 +((\beta ,x^\prime )-\delta )^2 =((\beta ,y)-\delta )^2 +((\beta ,y^\prime )-\delta )^2 ,$$ where $x,y$ are chosen as in (\[oi\]) (they appear in the definitions of $\xi$ and $\psi$) and $x^\prime_1 =1$. From this equation it follows that $\beta_{1}$ is an essentially algebraic or linear function of $\delta$. It can be shown (via somewhat cumbersome calculations) that this function differs from the function generated by equality (\[ej1\]). In both cases we can shift $\delta$ in such a way that these two equations become inconsistent.
This proves that only the equation $\beta_{1}=\beta_{a}$ is possible. Step by step, using similar considerations, one can show that the only possible case is $\beta_{1}=\ldots =\beta_{a}$. We only need to choose $\lambda_j \geq 0$; the choice is as follows.
There are (at most) three parts of $[a]$: the first, where $Z^\prime_{\beta_j} >0$; the second, where $Z^\prime_{\beta_j} =0$; and the third, where $Z^\prime_{\beta_j} <0$. In the first part we choose $\lambda_j$ such that the right-hand side of equation (\[t100\]) equals zero, and likewise in the second part. In the third part we choose $\lambda_j$ such that the right-hand side of (\[t100\]) equals $\pm$ the left-hand side of this equation; hence in this case we obtain either an identity or $Y^\prime_{\beta_j}({\cal A})=0$, which is consistent with the solution $\beta_j =\beta_a$.
It remains to show that we can choose $\delta$ in such a way that all $\lambda_j \geq 0$. But this is obvious from the choice considered above: it is necessary to choose $\delta$ sufficiently close to $(\beta ,x)$ when $(\beta ,x)\neq (\beta ,y)$, and to note that $(\beta ,x)$, when the last inequality holds and $\beta_{j} =\frac{1}{a}$ for $j\in [a]$, does not depend on the choice of $x\in \{ x \}$.
Lemma \[le2\] is proved.
[**Section III. Proof of the Conjecture \[co2\]**]{}
To prove Conjecture \[co2\] we follow the same procedure as in the previous section. First we formulate the optimization problem:
[**Optimization Problem 2.**]{}
Maximize, over the choice of $\{\beta_{i ,j}\}$ and $\{\delta_i \}$, the function $$Y({\cal A})$$ under the restrictions $$\sum_{m\in [s]}Z(x_m ,\sigma )<Ns-1+\mu_2 ,$$ $$\beta_{i,j+1}\leq\beta_{i,j},$$ where $(x_1 ,\ldots ,x_s )$ runs over all subsets of $s$ different $n$-tuples from ${[n]\choose k}$ which are not $s$-wise $t$-intersecting, and $\mu_2 >0$ is a small number.
This problem differs somewhat from the previous one, because we must consider the case when $\beta_{i,j}$ can be positive for all $j\in [n-1]$, not only for $j\leq ks-1$ as before. But literally the same procedure, with one value $Z(\{ x_m \} ,\sigma )$ for the set of $x_m$ defined below, shows that for each $i\in [N]$ all positive $\beta_{i,j}$ must be equal. We omit the details.
Let $a$ be the number of positive (equal) $\beta_{j}=1/a$. As in the previous section, we keep only one restriction from Optimization Problem 2: the restriction generated by the following $s$ elements: $$\begin{aligned}
x_1 &=&(1,2,\ldots ,t-1, t+1,\ldots ,t+s-1, t+s+1,\ldots , t+2s-1,t+2s+1 ,\\
&&\ldots ,t+3s-1,\ldots , c (1)), \\
&& x_m =(1,2,\ldots ,t, c_1 (m),c_2 (m),\ldots , c_d (m), c(m)),\ m=2,\ldots ,s,
\end{aligned}$$ where $c_j (m)$ is the cyclic shift of the $s$-tuple $(t+(j-1)s+1,\ldots ,t+js-1 ,\emptyset )$ by $m-1$ positions to the right (shifting here means that we place $\emptyset$ successively in each position, starting from the leftmost one, and renumber the elements of the sequence, giving each the number of its right neighbor; at the beginning the empty set has number $t+js$), except possibly the last $c(m)$, which may be reduced (have fewer elements) because of the restriction on the number $k$ of elements in $x_m$. We choose $c(1)=(t+ds+1,\ldots ,k,\emptyset ,\ldots ,\emptyset ), c(2)=(t+ds+1,\ldots ,k+1,\emptyset ,\ldots ,\emptyset )$, where the length of the tuples $c(1), c(2)$ is $s+1$, and $c(m),\ m\in\{ 3,\ldots ,s\}$, is obtained from $c(1)$ by the same shift as $c_j (m)$ from $c_j (1)$. The set $\{ x_m ;\ m\in [s]\}$ is not an $s$-wise $t$-intersecting set, because the $n$-tuple $x_1$ does not contain the element $t$, and apart from the elements of $[t]$ these $n$-tuples do not have even one element in common.
Now, as before, assume that the restriction $$(\beta ,x)>\delta$$ forbids this set of $n$-tuples. It is easy to see that $a\in [t-1]$ is impossible, because in this case we would need $\delta \geq 1$ and ${\cal A}=\emptyset$. If $a=t$, then we should choose $\delta \in [(t-1)/t ,1)$. This choice forbids the $n$-tuple $x_1$ as a member of ${\cal A}$ and allows the other $x_i$. If $a \in\{ t+ps+1,\ldots , t+(p+1)s\}$, then we should choose $\delta \in [ (a -p-1)/a , (a -p)/a )$. This choice of $\delta$ forbids at least one $n$-tuple $x_i$, but does not impose any further restrictions. Finally, if $(s-1)$ does not divide $(k-t)$, then for $a\in \{ t+ds+1,\ldots ,k+d \}$ we should choose $\delta \in [(a -d-1)/a ,(a -d)/a )$; and if $a >k+d$, then even allowing all $n$-tuples $x$ such that $|x\bigcap [a ]|=k$ does not guarantee $s$-wise $t$-intersection, hence $a \leq k+d$.
Collecting these possibilities for the choice of pairs $( a ,\delta )$ together, we see that ($p<s$) $$\label{ey7}
N(s,n,k,t)\leq\max_{a=t+sp+r}\sum_{i\geq t+(s-1)p+r} {a\choose i}{n-a\choose k-i} .$$ It is enough to carry out the optimization in (\[ey7\]) only over $a$ such that $s|(a-t)$. Indeed, this easily follows from the inequality $$\sum_{j\geq i-1}{a-1\choose j}{n-a+1\choose k-j}\geq\sum_{j\geq i}{a\choose j}{n-a\choose k-j},$$ which can be proved using the identity $${k\choose m}={k-1\choose m}+{k-1\choose m-1}.$$
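The inequality and the Pascal identity used here are easy to confirm numerically; a small illustrative check:

```python
from math import comb

def tail(n, k, a, i):
    """sum_{j >= i} C(a, j) * C(n - a, k - j)."""
    return sum(comb(a, j) * comb(n - a, k - j) for j in range(max(i, 0), k + 1))

# the inequality: a k-set with >= i elements in [a] has >= i-1 elements in [a-1]
for n in range(2, 12):
    for k in range(1, n + 1):
        for a in range(1, n + 1):
            for i in range(0, k + 2):
                assert tail(n, k, a - 1, i - 1) >= tail(n, k, a, i)

# the Pascal identity
for kk in range(2, 30):
    for m in range(1, kk):
        assert comb(kk, m) == comb(kk - 1, m) + comb(kk - 1, m - 1)
```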
[99]{}

V. Blinovsky, Minimal number of edges in a hypergraph guaranteeing perfect fractional matching and the MMS conjecture, http://arxiv.org/pdf/1311.2671.pdf

P. Erdős, A problem on independent $r$-tuples, Ann. Univ. Sci. Budapest 8 (1965), 93-95

P. Erdős, T. Gallai, On maximal paths and circuits of graphs, Acta Math. Acad. Sci. Hungar. 10 (1959), 337-356

P. Frankl, Improved bounds for Erdős’ Matching Conjecture, J. Combin. Theory Ser. A 120 (2013), 1068-1072

P. Frankl, Extremal set systems, in: Handbook of Combinatorics, Elsevier, Amsterdam, 1995, pp. 1293-1329

H. Aydinian and V. Blinovsky, A remark on the problem of nonnegative $k$-sums, Problems of Information Transmission 48 (2012), no. 4, 347-351

P. Frankl, V. Rödl, A. Ruciński, The maximum number of edges in a triple system not containing a disjoint family of given size, Combinatorics, Probability and Computing 21 (2012), 141-148

N. Alon, P. Frankl, H. Huang, V. Rödl, A. Ruciński, B. Sudakov, Large matchings in uniform hypergraphs and the conjectures of Erdős and Samuels, J. Combin. Theory Ser. A 119 (2012), 1200-1215

R. Ahlswede, L. Khachatrian, The complete intersection theorem for systems of finite sets, European J. Combin. 18 (1997), 125-136

N. Tokushige, The maximum size of $3$-wise $t$-intersecting families, European J. Combin. 28 (2007), no. 1, 152-166

N. Tokushige, EKR type inequalities for $4$-wise intersecting families, J. Combin. Theory Ser. A 114 (2007), no. 4, 575-596

V. Blinovsky, Fractional matching in hypergraphs, http://arxiv.org/pdf/1311.2671.pdf
[^1]: The author was supported by NUMEC/USP (Project MaCLinC/USP).
---
abstract: 'For any positive integer $n$, let $f(n)$ denote the number of solutions to the Diophantine equation $$\frac{4}{n}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$$ with $x,y,z$ positive integers. The *Erdős-Straus conjecture* asserts that $f(n) > 0$ for every $n {\geqslant}2$. In this paper we obtain a number of upper and lower bounds for $f(n)$ or $f(p)$ for typical values of natural numbers $n$ and primes $p$. For instance, we establish that $$N \log^2 N \ll \sum_{p{\leqslant}N} f(p) \ll N \log^2 N \log \log N.$$ These upper and lower bounds show that a typical prime has a small number of solutions to the Erdős-Straus Diophantine equation; small, when compared with other additive problems, like Waring’s problem.'
address:
- 'Institut für Mathematik A, Steyrergasse 30/II, Technische Universität Graz, A-8010 Graz, Austria'
- 'Department of Mathematics, UCLA, Los Angeles CA 90095-1555'
author:
- Christian Elsholtz
- Terence Tao
title: 'Counting the number of solutions to the Erdős-Straus equation on unit fractions'
---
Introduction
============
For any natural number $n \in {\mathbb{N}}= \{1,2,\ldots\}$, let $f(n)$ denote the number of solutions $(x,y,z) \in {\mathbb{N}}^3$ to the Diophantine equation $$\label{xyz}
\frac{4}{n} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z}$$
(we do not assume $x,y,z$ to be distinct or in increasing order). Thus for instance $$f(1)=0, f(2)=3, f(3) = 12, f(4) = 10, f(5)=12, f(6)=39, f(7)=36,
f(8)=46, \ldots$$ We plot the values of $f(n)$ for $n {\leqslant}1000$, and separately restricting to primes $p{\leqslant}1000$ in Figures \[fig1\], \[fig2\].
![The value $f(n)$ for all $n {\leqslant}1000$.[]{data-label="fig1"}](f_all1000_11pt.pdf)
![The value $f(p)$ for all primes $p {\leqslant}1000$.[]{data-label="fig2"}](f_prime1000_11pt.pdf)
From these graphs one might be tempted to draw conclusions, such as “$f(n) \gg n$ infinitely often”, that we will refute in our investigations below.
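The initial values of $f$ listed above are easy to reproduce by brute force (an illustrative sketch, not part of the paper's argument): for sorted triples $x \leqslant y \leqslant z$ one has $n/4 < x \leqslant 3n/4$ and $y \leqslant 2/(4/n - 1/x)$, and then $z$ is determined by $(x,y)$.

```python
from fractions import Fraction
from math import floor

def f(n):
    """Count ordered triples (x, y, z) of positive integers with 4/n = 1/x + 1/y + 1/z."""
    t = Fraction(4, n)
    total = 0
    # enumerate x <= y <= z; the smallest term satisfies n/4 < x <= 3n/4
    for x in range(n // 4 + 1, 3 * n // 4 + 1):
        r = t - Fraction(1, x)               # r = 1/y + 1/z > 0
        y_min = max(x, floor(1 / r) + 1)     # need 1/y < r
        y_max = floor(2 / r)                 # need 1/y >= r/2, i.e. y <= z
        for y in range(y_min, y_max + 1):
            s = r - Fraction(1, y)
            if s > 0 and s.numerator == 1:   # 1/z must be a unit fraction
                z = s.denominator
                if x == y == z:
                    total += 1               # one ordered triple
                elif x == y or y == z:
                    total += 3               # three permutations
                else:
                    total += 6               # six permutations
    return total

print([f(n) for n in range(1, 9)])  # [0, 3, 12, 10, 12, 39, 36, 46]
```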
The *Erdős-Straus conjecture* (see e.g. [@guy]) asserts that $f(n) >
0$ for all $n {\geqslant}2$; it remains unresolved, although there are a number of partial results. The earliest references to this conjecture are papers by Erdős [@Erdos:1950] and Obláth [@Oblath:1950], and we draw attention to the fact that the latter paper was submitted in 1948.
Most subsequent approaches list parametric solutions, which solve the conjecture for $n$ lying in certain residue classes. These soluble classes are either used for analytic approaches via a sieve method, or for computational verifications. For instance, it was shown by Vaughan [@vaughan] that the number of $n < N$ for which $f(n)=0$ is at most $N \exp( - c \log^{2/3} N )$ for some absolute constant $c>0$ and all sufficiently large $N$. (Compare also [@nakayama; @Webb:1970; @Li:1981; @Yang:1982] for some weaker results).
The conjecture was verified for all $n {\leqslant}10^{14}$ in [@swett]. In Table \[table:history\] we list a more complete history of these computations, but there may be further unpublished computations as well.
[\[table:history\]]{}

  ----------------------- --------------------
  $5000$                  ${\leqslant}$ 1950
  $8000$                  1962
  $20000$                 ${\leqslant}$ 1969
  $106128$                1948/9
  $141648$                1954
  $10^7$                  1964
  $1.1 \times 10^7$       1976
  $10^8$                  1971
  $10^9$                  1994
  $10^{10}$               1995
  $1.6 \times 10^{11}$    1996
  $10^{10}$               1999
  $10^{14}$               1999
  $2\times 10^{14}$       2012
  ----------------------- --------------------
: Numerical verifications of the Erdős-Straus conjecture. It appears that Terzi’s set of soluble residue classes is correct, but that the set of checked primes in these classes is incomplete. Another reference to a calculation up to $10^8$ due to N. Franceschine III (1978) (see [@guy; @erdosandgraham] and frequently restated elsewhere) only mentions Terzi’s calculation, but is not an independent verification. We are grateful to I. Kotsireas for confirming this (private communication).
Most of these previous approaches concentrated on the question whether $f(n)>0$ or not. In this paper we will instead study the average growth or extremal values of $f(n)$.
Since we clearly have $f(nm) {\geqslant}f(n)$ for any $n,m \in {\mathbb{N}}$, we see that to prove the Erdős-Straus conjecture it suffices to do so when $n$ is equal to a prime $p$.
In this paper we investigate the *average* behaviour of $f(p)$ for $p$ a prime. More precisely, we consider the asymptotic behaviour of the sum $$\sum_{p{\leqslant}N} f(p)$$ where $N$ is a large parameter, and $p$ ranges over all primes up to $N$. As we are only interested in asymptotics, we may ignore the case $p=2$, and focus on the odd primes $p$.
Let us call a solution $(x,y,z)$ a *Type I solution* if $n$ divides $x$ but is coprime to $y,z$, and a *Type II solution* if $n$ divides $y,z$ but is coprime to $x$. Let $f_{{\operatorname{I}}}(n), f_{{{\operatorname{II}}}}(n)$ denote the number of Type I and Type II solutions respectively. By permuting the $x,y,z$ we clearly have $$\label{3ff}
f(n) {\geqslant}3f_{{\operatorname{I}}}(n) + 3f_{{\operatorname{II}}}(n)$$ for all $n>1$. Conversely, when $p$ is an odd prime, it is clear from considering the denominators in the Diophantine equation $$\label{4pn}
\frac{4}{p} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z}$$ that at least one of $x,y,z$ must be divisible by $p$; also, it is not possible for all three of $x,y,z$ to be divisible by $p$, as this forces the right-hand side of (\[4pn\]) to be at most $3/p$. We thus have $$\label{4p}
f(p) = 3f_{{\operatorname{I}}}(p) + 3f_{{\operatorname{II}}}(p)$$ for all odd primes $p$. Thus, to understand the asymptotics of $\sum_{p {\leqslant}N} f(p)$, it suffices to understand the asymptotics of $\sum_{p {\leqslant}N} f_{{\operatorname{I}}}(p)$ and $\sum_{p {\leqslant}N} f_{{\operatorname{II}}}(p)$. As we shall see, Type II solutions are somewhat easier to understand than Type I solutions, but we will nevertheless be able to control both types of solutions in a reasonably satisfactory manner.
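The decomposition $f(p) = 3f_{{\operatorname{I}}}(p) + 3f_{{\operatorname{II}}}(p)$ can be checked numerically for small odd primes; the sketch below (illustrative only) enumerates all ordered solutions and classifies them by divisibility by $p$.

```python
from fractions import Fraction
from math import floor
from itertools import permutations

def ordered_solutions(n):
    """All ordered triples (x, y, z) of positive integers with 4/n = 1/x + 1/y + 1/z."""
    t = Fraction(4, n)
    sols = set()
    for x in range(n // 4 + 1, 3 * n // 4 + 1):  # smallest term: n/4 < x <= 3n/4
        r = t - Fraction(1, x)
        for y in range(max(x, floor(1 / r) + 1), floor(2 / r) + 1):
            s = r - Fraction(1, y)
            if s > 0 and s.numerator == 1:
                sols.update(permutations((x, y, s.denominator)))
    return sols

def counts(p):
    """(f(p), f_I(p), f_II(p)) for an odd prime p, by direct classification."""
    sols = ordered_solutions(p)
    fI  = sum(1 for (x, y, z) in sols if x % p == 0 and y % p != 0 and z % p != 0)
    fII = sum(1 for (x, y, z) in sols if x % p != 0 and y % p == 0 and z % p == 0)
    return len(sols), fI, fII

for p in (3, 5, 7, 11, 13):
    total, fI, fII = counts(p)
    assert total == 3 * fI + 3 * fII
```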
We can now state our first main theorem.
\[main\] For all sufficiently large $N$, one has the bounds $$\begin{aligned}
N \log^3 N \ll \sum_{n {\leqslant}N} f_{{\operatorname{I}}}(n) &\ll N \log^3 N\\
N \log^3 N \ll \sum_{n {\leqslant}N} f_{{\operatorname{II}}}(n) &\ll N \log^3 N\\
N \log^2 N \ll \sum_{p {\leqslant}N} f_{{\operatorname{I}}}(p) &\ll N \log^2 N \log \log N \\
N \log^2 N \ll \sum_{p {\leqslant}N} f_{{\operatorname{II}}}(p) &\ll N \log^2 N.\end{aligned}$$
Here, we use the usual asymptotic notation $X \ll Y$ or $X=O(Y)$ to denote the estimate $|X| {\leqslant}CY$ for an absolute constant $C$, and use subscripts if we wish to allow dependencies on the implied constant $C$, thus for instance $X \ll_{\varepsilon}Y$ or $X = O_{\varepsilon}(Y)$ denotes the estimate $|X| {\leqslant}C_{\varepsilon}Y$ for some $C_{\varepsilon}$ that can depend on ${\varepsilon}$. We remark that in a previous version of this manuscript, the weaker bound $\sum_{p {\leqslant}N} f_{{\operatorname{II}}}(p) \ll N \log^2 N \log\log N$ was claimed. As pointed out subsequently by Jia [@jia], the argument in that previous version in fact only gave $\sum_{p {\leqslant}N} f_{{\operatorname{II}}}(p) \ll N \log^2 N \log\log^2 N$, but can be repaired to give the originally claimed bound $\sum_{p {\leqslant}N} f_{{\operatorname{II}}}(p) \ll N \log^2 N \log \log N$. These bounds are of course superseded by the results in Theorem \[main\].
As a corollary of this and (\[4p\]), we see that $$N \log^2 N \ll \sum_{p {\leqslant}N} f(p) \ll N \log^2 N \log \log N.$$ From this, the prime number theorem, and Markov’s inequality, we see that for any ${\varepsilon}> 0$, we can find a subset $A$ of primes of relative lower density at least $1-{\varepsilon}$, thus $$\label{liminf}
\liminf_{N \to \infty} \frac{|\{ p \in A: p {\leqslant}N\}|}{|\{ p: p {\leqslant}N \}|} {\geqslant}1-{\varepsilon},$$ such that $f(p) = O_{\varepsilon}(\log^3 p \log \log p)$ for all $p \in A$. Informally, a typical prime has only $O(\log^3 p \log \log p)$ solutions to the Erdős-Straus Diophantine equation; or alternatively, for any function $\xi(p)$ of $p$ that goes to infinity as $p \to \infty$, one has $f(p) = O(\xi(p) \log^3 p \log \log p)$ for all $p$ in a subset of the primes of relative density $1$. This may provide an explanation as to why analytic methods (such as the circle method) appear to be insufficient to resolve the Erdős-Straus conjecture, as such methods usually only give non-trivial lower bounds on the number of solutions to a Diophantine equation in the case when the number of such solutions grows polynomially with the height parameter $N$. (There are however some exceptions to this rule, such as Gallagher’s results [@gallagher] on representing integers as the sum of a prime and a bounded number of powers of two, but such results tend to require a large number of summands in order to compensate for possible logarithmic losses in the analysis.)
The double logarithmic factor $\log \log N$ in the above arguments arises from technical limitations to our method (and specifically, in the inefficient nature of the Brun-Titchmarsh inequality when applied to very short progressions), and we conjecture that it should be eliminated.
In view of these results, one can naively model $f(p)$ as a Poisson process with intensity at least $c \log^3 p$ for some absolute constant $c$. Using this probabilistic model as a heuristic, one expects any given prime to have a “probability” $1-O(\exp(-c \log^3 p))$ of having at least one solution, which by the Borel-Cantelli lemma suggests that the Erdős-Straus conjecture is true for all but finitely many $p$. Of course, this is only a heuristic and does not constitute a rigorous argument. (However, one can view the results in [@vaughan], [@els], based on the large sieve, as a rigorous analogue of this type of reasoning.)
From Theorem \[main\] we have the lower bound $\sum_{n {\leqslant}N} f(n) \gg N \log^3 N$. In fact one has the stronger bound $\sum_{n {\leqslant}N} f(n) \gg N \log^6 N$ (Heath-Brown, private communication) using the methods from [@heath]; see Remark \[heath-remark\] for further discussion. Thus, for composite $n$, most solutions are in fact neither of Type I or Type II. It would be of interest to get matching upper bounds for $\sum_{n {\leqslant}N} f(n)$, but this seems to be beyond the scope of our methods. It would of course also be interesting to control higher moments such as $\sum_{p {\leqslant}N} f_{{\operatorname{I}}}(p)^k$ or $\sum_{p {\leqslant}N} f_{{\operatorname{II}}}(p)^k$, but this also seems to unfortunately lie out of reach of our methods, as the level of the relevant divisor sums becomes too great to handle.
To prove Theorem \[main\], we first use some solvability criteria for Type I and Type II solutions to obtain more tractable expressions for $f_{{\operatorname{I}}}(p)$ and $f_{{{\operatorname{II}}}}(p)$. As we shall see, $f_{{\operatorname{I}}}(p)$ is essentially (up to a factor of two) the number of quadruples $(a,c,d,f) \in {\mathbb{N}}^4$ with $4acd = p+f$, $f$ dividing $4a^2d+1$, and $acd {\leqslant}3p/4$, while $f_{{\operatorname{II}}}(p)$ is essentially the number of quadruples $(a,c,d,e) \in {\mathbb{N}}^4$ with $4acde = p + 4a^2d + e$ and $acde {\leqslant}3p/2$. (We will systematically review the various known representations of Type I and Type II solutions in Section \[represent-sec\].) This, combined with standard tools from analytic number theory such as the Brun-Titchmarsh inequality and the Bombieri-Vinogradov inequality, already gives most of Theorem \[main\]. The most difficult bounds are the upper bounds on $f_{{\operatorname{I}}}$, which eventually require an upper bound for expressions of the form $$\sum_{a {\leqslant}A} \sum_{b {\leqslant}B} \tau( kab^2 + 1 )$$ for various $A,B,k$, where $\tau(n) := \sum_{d \mid n} 1$ is the number of divisors of $n$, and $d \mid n$ denotes the assertion that $d$ divides $n$. By using an argument of Erdős [@erdos], we obtain the following bound on this quantity:
\[kab\] For any $A, B > 1$, and any positive integer $k \ll (AB)^{O(1)}$, one has $$\sum_{a {\leqslant}A} \sum_{b {\leqslant}B} \tau(kab^2+1) \ll A B \log(A+B) \log(1+k).$$
Using the heuristic that $\tau(n) \sim \log n$ on the average, one expects the true bound here to be $O( A B \log(A+B) )$. The $\log(1+k)$ loss can be reduced (for some ranges of $A,B,k$, at least) by using more tools (such as the Polya-Vinogradov inequality), but this slightly inefficient bound will be sufficient for our applications.
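For concreteness, the divisor sum in Proposition \[kab\] can be evaluated numerically for small parameters; the sketch below (illustrative only, with a naive trial-division divisor count) computes $\sum_{a\leqslant A}\sum_{b\leqslant B}\tau(kab^2+1)$.

```python
def tau(n):
    """Number of divisors of n, by trial division up to sqrt(n)."""
    count, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 1 if d * d == n else 2
        d += 1
    return count

def divisor_sum(A, B, k):
    """sum_{a <= A} sum_{b <= B} tau(k * a * b^2 + 1)."""
    return sum(tau(k * a * b * b + 1)
               for a in range(1, A + 1) for b in range(1, B + 1))
```

For example, `divisor_sum(50, 50, 1)` can be compared against $AB\log(A+B)\log(1+k)$ to get a rough feel for the implied constant.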
We prove Proposition \[kab\] (as well as some variants of this estimate) in Section \[kab-sec\]. Our main tool is a more quantitative version of a classical bound of Erdős [@erdos] on the sum $\sum_{n {\leqslant}N} \tau(P(n))$ for various polynomials $P$, which may be of independent interest; see Theorem \[erdos-bound\].
We also collect a number of auxiliary results concerning the quantities $f_i(n)$, some of which were in previous literature. Firstly, we have a vanishing property at odd squares:
\[odd\] For any odd perfect square $n$, we have $f_{{\operatorname{I}}}(n)=f_{{\operatorname{II}}}(n) = 0$.
This observation essentially dates back to Schinzel (see [@guy], [@mordell], [@schinzel]) and Yamamoto (see [@yamamoto]) and is an easy application of quadratic reciprocity: for the convenience of the reader, we give the proof in Section \[odd-sec\]. A variant of this proposition was also established in [@bello]. Note that this does not disprove the Erdős-Straus conjecture, since the inequality (\[3ff\]) does not hold with equality on perfect squares; but it does indicate a key difficulty in attacking this conjecture, in that when showing that $f_{{\operatorname{I}}}(p)$ or $f_{{\operatorname{II}}}(p)$ is non-zero, one can only use methods that *must necessarily fail* when $p$ is replaced by an odd square such as $p^2$, which already rules out many strategies (e.g. a finite set of covering congruence strategies, or the circle method).
Next, we establish some upper bounds on $f_{{\operatorname{I}}}(n), f_{{\operatorname{II}}}(n)$ for fixed $n$:
\[upper\] For any $n \in {\mathbb{N}}$, one has $$f_{{\operatorname{I}}}(n) \ll n^{3/5 + O(1/\log \log n)}$$ and $$f_{{\operatorname{II}}}(n) \ll n^{2/5 + O(1/\log \log n)}.$$ In particular, from this and (\[4p\]) one can conclude that for any prime $p$ one has $$f(p) \ll p^{3/5 + O(1/\log \log p)}.$$
This should be compared with the recent result in [@browning], which gives the bound $f(n) \ll_{\varepsilon}n^{2/3+{\varepsilon}}$ for all $n$ and all ${\varepsilon}>0$. For composite $n$ the treatment of parameters dividing $n$ appears to be more complicated and here we concentrate on those two cases that are motivated by the Erdős-Straus equation for prime denominator.
We prove this proposition in Section \[upper-sec\].
The main tools here are the multiple representations of Type I and Type II solutions available (see Section \[represent-sec\]) and the divisor bound. The values of $f(p)$ appear to fluctuate in some respects like the values of the divisor function. The average values of $f(p)$ behave much more regularly.
Moreover, in view of Theorem \[main\], one might also expect to have $f(n) \ll_{\varepsilon}n^{\varepsilon}$ for any ${\varepsilon}>0$, but such logarithmic-type bounds on solutions to Diophantine equations seem difficult to obtain in general (Proposition \[upper\] appears to be the limit of what one can obtain purely from the divisor bound alone).
In the reverse direction, we have the following lower bounds on $f(n)$ for various sets of $n$:
[\[lowerboundturankubilius\]]{} For infinitely many $n$, one has $$f(n){\geqslant}\exp ((\log 3 +o(1))\frac{\log n}{\log \log n}),$$ where $o(1)$ denotes a quantity that goes to zero as $n \to \infty$.
For any function $\xi(n)$ going to $+\infty$ as $n \to \infty$, one has $$f(n){\geqslant}\exp \left( \frac{\log 3}{2}\log \log n -
O(\xi(n) \sqrt{\log \log n})\right) \gg (\log n)^{0.549}$$ for all $n$ in a subset $A$ of natural numbers of density $1$ (thus $|A \cap \{1,\ldots,N\}|/N \to 1$ as $N \to \infty$).
Finally, one has $$f(p){\geqslant}\exp \left( (\frac{\log 3}{2} - o(1)) \log \log p \right) \gg (\log p)^{0.549}$$ for all primes $p$ in a subset $B$ of primes of relative density $1$ (thus $|\{ p \in B: p {\leqslant}N\}|/{|\{ p: p {\leqslant}N \}|} \to 1$ as $N \to \infty$).
As the proof shows, the first two lower bounds are already valid for sums of two unit fractions. The results follow directly from the growth of certain divisor functions; an even better model for $f(n)$ is a suitable superposition of several divisor functions. The proof is given in Section \[lowerboundsec-2\].
Finally, we consider (following [@mordell], [@schinzel]) the question of finding polynomial solutions to . Let us call a primitive residue class $n = r \mod q$ *solvable by polynomials* if there exist polynomials $P_1(n), P_2(n), P_3(n)$ which take positive integer values for all sufficiently large $n$ in this residue class (so in particular, the coefficients of $P_1,P_2,P_3$ are rational), and such that $$\frac{4}{n} = \frac{1}{P_1(n)} + \frac{1}{P_2(n)} + \frac{1}{P_3(n)}$$ for all such $n$. Here we recall that a residue class $r \mod q$ is *primitive* if $r$ is coprime to $q$. One could also consider non-primitive congruences, but these congruences only contain finitely many primes and are thus of less interest for the Erdős-Straus conjecture (and if the Erdős-Straus conjecture held for a common factor of $r$ and $q$, then the residue class $r \mod q$ would trivially be solvable by polynomials).
By Dirichlet’s theorem, the primitive residue class $r \mod q$ contains arbitrarily large primes $p$. For each large prime $p$ in this class, we either have one or two of the $P_1(p), P_2(p), P_3(p)$ divisible by $p$, as observed previously. For $p$ large enough, note that $P_i(p)$ can only be divisible by $p$ if there is no constant term in $P_i$. We thus conclude that either one or two of the $P_i(n)$ have no constant term, but not all three. Let us call the congruence *Type I solvable* if one can take exactly one of $P_1,P_2,P_3$ to have no constant term, and *Type II solvable* if exactly two have no constant term. Thus every solvable primitive residue class $r \mod q$ is either Type I or Type II solvable.
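To make the notion of Type II solvability concrete, one can use the classical identity $\frac{4}{n} = \frac{4}{n+1} + \frac{4}{n(n+1)}$: for $n = 3 \mod 4$ one may take $P_1 = (n+1)/4$ and $P_2 = P_3 = n(n+1)/2$, so that exactly two of the three polynomials have no constant term. The following numerical check is our own illustration, not part of the argument:

```python
from fractions import Fraction

# For n = 3 (mod 4), the classical identity 4/n = 4/(n+1) + 4/(n(n+1))
# gives 4/n = 1/P1 + 1/P2 + 1/P3 with
#   P1 = (n+1)/4,  P2 = P3 = n(n+1)/2,
# so exactly two of the polynomials lack a constant term (Type II).
for n in range(3, 200, 4):
    P1 = (n + 1) // 4          # integer since n = 3 (mod 4)
    P2 = P3 = n * (n + 1) // 2  # integer, divisible by n
    assert Fraction(1, P1) + Fraction(1, P2) + Fraction(1, P3) == Fraction(4, n)
```

This family covers the residue class $3 \mod 4$; the point of the proposition below is to classify *all* residue classes admitting such polynomial families.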
It is well-known (see [@Oblath:1950; @mordell]) that any primitive residue class $n = r \mod 840$ is solvable by polynomials unless $r$ is a perfect square. On the other hand, it is also known (see [@mordell], [@schinzel]) that a primitive congruence class $n = r \mod q$ in which $r$ is a perfect square cannot be solved by polynomials (this also follows from Proposition \[odd\]). The next proposition classifies all solvable primitive congruences.
\[res-class\] Let $r \mod q$ be a primitive residue class. If this class is Type I solvable by polynomials, then all sufficiently large primes in this class belong to one of the following sets:
- $\{ n = -f \mod 4ad\}$, where $a,d,f \in {\mathbb{N}}$ are such that $f|4a^2 d+1$. [@nakayama]
- $\{ n = -f \mod 4ac \} \cap \{n = -c/a \mod f \}$, where $a,c,f \in {\mathbb{N}}$ are such that $(4ac,f)=1$.
- $\{ n = -f \mod 4cd \} \cap \{n^2 = -4c^2d \mod f \}$, where $c,d,f \in {\mathbb{N}}$ are such that $(4cd,f)=1$.
- $\{ n = -1/e \mod 4ab \}$, where $a,b,e \in {\mathbb{N}}$ are such that $e \mid a+b$ and $(e,4ab)=1$. [@aigner], [@rosati]
Conversely, any residue class in one of the above four sets is solvable by polynomials.
Similarly, $r \mod q$ is Type II solvable by polynomials if and only if it is a subset of one of the following residue classes:
- $-e \mod 4ab$, where $a,b,e \in {\mathbb{N}}$ are such that $e \mid a+b$ and $(e,4ab)=1$. [@aigner]
- $-4a^2d \mod f$, where $a,d,f \in {\mathbb{N}}$ are such that $4ad \mid f+1$. [@vaughan], [@rosati]
- $-4a^2d-e \mod 4ade$, where $a,d,e \in {\mathbb{N}}$ are such that $(4ad,e)=1$. [@nakayama]
As indicated by the citations, many of these residue classes were observed to be solvable by polynomials in previous literature, but some of the conditions listed here appear to be new, and they form the complete list of all such classes. We prove Proposition \[res-class\] in Section \[solution\].
The results in this paper would also extend (with minor changes) to the more general situation in which the numerator $4$ in is replaced by some other fixed positive integer, a situation considered first by Sierpiński and Schinzel (see e.g. [@sierpinski; @vaughan; @Palama:1958; @Palama:1959; @Stewart:1964]).
We will not detail all of these extensions here but in Section \[lower-3\] we extend our study of the average number of solutions to the more general question on sums of $k$ unit fractions $${\label{m/nsumofk}}
\frac{m}{n}=\frac{1}{t_1} + \frac{1}{t_2} + \cdots +
\frac{1}{t_k}.$$ If $m {\leqslant}k$, the greedy algorithm (in this case also known as the Fibonacci-Sylvester algorithm) shows that there is a solution. Indeed, write $n=my+r$ with $0 <r<m$; then $\frac{m}{n}-\frac{1}{y+1}=\frac{m-r}{n(y+1)}$ has a smaller numerator, and inductively a solution with at most $m {\leqslant}k$ unit fractions is constructed. For an alternative method (especially if $m=k=4$) see also Schinzel [@schinzel:1956].
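The greedy step just described is easy to implement; the sketch below (our own, with a function name of our choosing) uses exact rational arithmetic, subtracting the largest admissible unit fraction at each step:

```python
from fractions import Fraction

def greedy_unit_fractions(m, n):
    """Fibonacci-Sylvester greedy expansion of m/n into unit fractions.

    At each step subtract 1/ceil(1/r) from the remainder r; writing
    n = m*y + r' with 0 < r' < m, the numerator of the remainder drops
    from m to m - r', so at most m terms are produced."""
    r = Fraction(m, n)
    terms = []
    while r > 0:
        t = -(-r.denominator // r.numerator)  # ceil(1/r)
        terms.append(t)
        r -= Fraction(1, t)
    return terms

# e.g. 4/5 = 1/2 + 1/4 + 1/20
```

For instance, `greedy_unit_fractions(4, 5)` returns the denominators `[2, 4, 20]`.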
If $m>k{\geqslant}3$, and the $t_i$ are positive integers, then it is an open problem if for each sufficiently large $n$ there is at least one solution. The Erdős-Straus conjecture with $m=4, k=3$, discussed above, is the most prominent case. If $m$ and $k$ are fixed, one can again establish sets of residue classes, such that is generally soluble if $n$ is in any of these residue classes.
The problem of classifying solutions of has been studied by Rav [@rav], Sós [@Sos:1905] and Elsholtz [@els]. Moreover, Viola [@viola], Shen [@shen] and Elsholtz [@elsholtz:2001] have used a suitable subset of these solutions to give (for fixed $m > k{\geqslant}3$) quantitative bounds on the number of those integers $n {\leqslant}N$ for which does not have any solution.
In order to study whether there is at least one solution, it is again sufficient to concentrate on prime denominators. The average number of solutions is smaller when averaging over the primes only, but we will prove that even in the prime case the average number of solutions grows quickly as $k$ increases.
We will focus on the case of *Type II solutions*, in which $t_2,\ldots,t_k$ are divisible by $n$. The classification of solutions that we give below also works for other divisibility patterns, but Type II solutions are the easiest to count, and so we shall restrict our attention to this case. Strictly speaking, the definition of a Type II solution here is slightly different from that discussed previously, because we do not require that $t_1$ is coprime to $n$. However, this coprimality is automatic when $n$ is prime (otherwise the right-hand side of would only be at most $k/n$). For composite $n$, it is possible to insert this condition and still obtain the lower bound , but this would complicate the argument slightly and we have chosen not to do so here.
For given $m,k,n$, let $f_{m,k,{{\operatorname{II}}}}(n)$ denote the number of Type II solutions. Our main result regarding this quantity is the following lower bound:
\[many-solutions\] Let $m > k{\geqslant}3$ be fixed. Then, for $N$ sufficiently large, one has $$\label{many-1}
\sum_{n {\leqslant}N} f_{m,k,{{\operatorname{II}}}}(n) \gg_{m,k} N (\log N)^{2^{k-1}-1}$$ and $$\label{many-2}
\sum_{p {\leqslant}N} f_{m,k,{{\operatorname{II}}}}(p) \gg_{m,k} \frac{N (\log N)^{2^{k-1}-2}}{\log \log N}.$$
Our emphasis here is on the exponential growth of the exponent. In particular, as $k$ increases by one, the average number of solutions is roughly squared. The denominator of $\log \log N$ is present for technical reasons (due to use of the crude lower bound on the Euler totient function), and it is likely that it could be eliminated (much as it is in the $m=4,k=3$ case) with additional effort.
If we let $f_{m,k}(n)$ be the total number of solutions to (not just Type II solutions), then we of course obtain as a corollary that $$\sum_{n {\leqslant}N} f_{m,k}(n) \gg_k N (\log N)^{2^{k-1}-1}.$$ We do not expect the power of the logarithm to be sharp in this case (cf. Remark \[heath-remark\]). For instance, in [@huangandvaughan] it is shown that $$\sum_{n {\leqslant}N} f_{m,2}(n) = \left(\frac{1}{\phi(m)}+o(1)\right) N \log^2 N$$ for any fixed $m$.
Note that the equation can be rewritten as $$\frac{1}{mt_1} + \cdots + \frac{1}{mt_k} + \frac{1}{-n} = 0,$$ which is primitive when $n$ is prime. As a consequence, we obtain a lower bound for the number of integer points on the (generalised) Cayley surface:
Let $k{\geqslant}3$. The number of integer points of the following generalization of Cayley’s cubic surface, $$0=\sum_{i=0}^k \frac{1}{t_i},$$ with $t_i$ non-zero integers with $\min_i |t_i| {\leqslant}N$, is at least $c_k N(\log N)^{2^{k-1}-2} / \log\log N$ for some $c_k>0$ depending only on $k$.
Again, the double logarithmic factor should be removable with some additional effort, although the exponent $2^{k-1}-2$ is not expected to be sharp, and should be improvable also.
Finally, let us mention that there are many other problems on the number of solutions of $$\frac{m}{n}=\frac{1}{t_1} + \frac{1}{t_2} + \cdots +
\frac{1}{t_k}$$ which we do not study here. Let us point to some further references: [@sandor:2003], [@browning], [@chen-elsholtz-jiang], [@elsholtz-heuberger-prodinger] study the number of representations of $1$ as a sum of unit fractions. [@CrootandDobbsandFriedlanderandHetzelandPappalardi] and [@huangandvaughan] study the case $k=2$, also with varying numerator $m$.
Part of the first author’s work on this project was supported by the German National Merit Foundation. The second author is supported by a grant from the MacArthur Foundation, by NSF grant DMS-0649473, and by the NSF Waterman award. The authors thank Nicolas Templier for many helpful comments and references, and the referee and editor for many useful corrections and suggestions. The first author is very grateful to Roger Heath-Brown for very generous advice on the subject (dating back as far as 1994). Both authors are particularly indebted to him for several remarks (including Remark \[heath-remark\]), and also for contributing some of the key arguments here (such as the lower bound on $\sum_{n {\leqslant}N} f_{{\operatorname{II}}}(n)$ and $\sum_{p {\leqslant}N}
f_{{\operatorname{II}}}(p)$) which have been reproduced here with permission. The first author also wishes to thank Tim Browning, Ernie Croot and Arnd Roth for discussions on the subject.
Representation of Type I and Type II solutions {#represent-sec}
==============================================
We now discuss the representation of Type I and Type II solutions. There are many such representations in the literature (see e.g. [@aigner], [@bello], [@bernstein], [@nakayama], [@rav], [@rosati], [@vaughan], [@webb]); we will remark how each of these representations can be viewed as a form of the one given here after describing a certain algebraic variety in coordinates.
For any non-zero complex number $n$, consider the algebraic surface $$S_n := \{ (x,y,z) \in {\mathbb{C}}^3: 4xyz = nyz + nxz + nxy\} \subset {\mathbb{C}}^3.$$ Of course, when $n$ is a natural number, $f(n)$ is nothing more than the number of ${\mathbb{N}}$-points $(x,y,z) \in S_n \cap {\mathbb{N}}^3$ on this surface.
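For small $n$, the count $f(n)$ of ${\mathbb{N}}$-points on $S_n$ can be computed by brute force; the sketch below (our own, not needed for any proof) iterates over $x \leqslant y \leqslant z$ and counts the ordered triples obtained by permutation:

```python
from fractions import Fraction

def f(n):
    """Count ordered triples (x,y,z) of naturals with 4/n = 1/x + 1/y + 1/z,
    by enumerating x <= y <= z and counting permutations."""
    count = 0
    target = Fraction(4, n)
    x = 1
    while Fraction(3, x) >= target:        # need 1/x >= target/3
        r1 = target - Fraction(1, x)
        if r1 > 0:
            y = x
            while Fraction(2, y) >= r1:    # need 1/y >= r1/2
                r2 = r1 - Fraction(1, y)
                if r2 > 0 and r2.numerator == 1 and r2.denominator >= y:
                    z = r2.denominator
                    # number of distinct orderings of the multiset {x,y,z}
                    count += {1: 1, 2: 3, 3: 6}[len({x, y, z})]
                y += 1
        x += 1
    return count
```

For example, $4/5 = 1/2+1/4+1/20 = 1/2+1/5+1/10$ are the only unordered solutions for $n=5$, so `f(5)` returns `12`.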
It is somewhat inconvenient to count ${\mathbb{N}}$-points on $S_n$ directly, due to the fact that $x,y,z$ are likely to share many common factors. To eliminate these common factors, it is convenient to lift $S_n$ to higher-dimensional varieties $\Sigma^{{\operatorname{I}}}_n$, $\Sigma^{{{\operatorname{II}}}}_n$ (and more specifically, to three-dimensional varieties in ${\mathbb{C}}^6$), which are adapted to parameterising Type I and Type II solutions respectively. This will replace the three original coordinates $x,y,z$ by six coordinates $a,b,c,d,e,f$, any three of which can be used to parameterise $\Sigma^I_n$ or $\Sigma^{{{\operatorname{II}}}}_n$. This multiplicity of parameterisations will be useful for many of the applications in this paper; rather than pick one parameterisation in advance, it is convenient to be able to pick and choose between them, depending on the situation.
We begin with the description of Type I solutions. More precisely, we define $\Sigma^{{\operatorname{I}}}_n$ to be the set of all sextuples $(a,b,c,d,e,f) \in {\mathbb{C}}^6$ which are non-zero and obey the constraints $$\begin{aligned}
4abd &= ne+1 \label{I-1}\\
ce &= a+b \label{I-2}\\
4abcd &= na + nb + c \label{I-3}\\
4acde &= ne + 4a^2 d + 1 \label{I-4}\\
4bcde &= ne + 4b^2 d + 1\label{I-5}\\
4acd &= n + f \label{I-6}\\
ef &= 4a^2 d + 1 \label{I-7}\\
bf &= na + c \label{I-8}\\
n^2 + 4c^2d &= f(4bcd-n).\label{I-9}\end{aligned}$$
There are multiple redundancies in these constraints; to take just one example, follows from and . One could in fact specify $\Sigma^{{\operatorname{I}}}_n$ using just three of these nine constraints if desired. However, this redundancy will be useful in the sequel, as we will be taking full advantage of all nine of these identities.
The identities - form an algebraic set that can be parameterised (perhaps up to some bounded multiplicity) by fixing three of the six coordinates $a,b,c,d,e,f$ and solving for the other three coordinates. For instance, using the coordinates $a,c,d$, one easily verifies that $$\Sigma^{{\operatorname{I}}}_n = \left\{ (a, \frac{na+c}{4acd-n}, c, d, \frac{4a^2d+1}{4acd-n}, 4acd-n): (a,c,d) \in {\mathbb{C}}^3; 4acd \neq n \right\}$$ and similarly for the other $\binom{6}{3}-1 = 14$ choices of three coordinates; we omit the elementary but tedious computations. Thus we see that $\Sigma^{{\operatorname{I}}}_n$ is a three-dimensional algebraic variety. From we see that the map $$\pi^{{\operatorname{I}}}_n: (a,b,c,d,e,f) \mapsto (abdn, acd, bcd)$$ maps $\Sigma^{{\operatorname{I}}}_n$ to $S_n$. After quotienting out by the dilation symmetry $$\label{dilation}
(a,b,c,d,e,f) \mapsto (\lambda a, \lambda b, \lambda c, \lambda^{-2} d, e, f)$$ of $\Sigma^I_n$, this map is injective.
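The redundancy of the nine constraints can be spot-checked numerically: solving for $b,e,f$ from the free coordinates $(a,c,d)$ as above, all nine identities hold automatically. The sketch below is our own check (function names ours), using exact rational arithmetic:

```python
from fractions import Fraction

def sigma_I_point(n, a, c, d):
    """Solve for b, e, f on Sigma^I_n from the coordinates (a, c, d)."""
    f = 4*a*c*d - n
    assert f != 0
    b = Fraction(n*a + c, f)
    e = Fraction(4*a*a*d + 1, f)
    return a, b, c, d, e, Fraction(f)

def check_constraints(n, pt):
    a, b, c, d, e, f = pt
    assert 4*a*b*d == n*e + 1                  # (I-1)
    assert c*e == a + b                        # (I-2)
    assert 4*a*b*c*d == n*a + n*b + c          # (I-3)
    assert 4*a*c*d*e == n*e + 4*a*a*d + 1      # (I-4)
    assert 4*b*c*d*e == n*e + 4*b*b*d + 1      # (I-5)
    assert 4*a*c*d == n + f                    # (I-6)
    assert e*f == 4*a*a*d + 1                  # (I-7)
    assert b*f == n*a + c                      # (I-8)
    assert n*n + 4*c*c*d == f*(4*b*c*d - n)    # (I-9)

for n in range(1, 6):
    for (a, c, d) in [(1, 2, 3), (2, 5, 1), (3, 1, 4)]:
        check_constraints(n, sigma_I_point(n, a, c, d))
```

Of course the ${\mathbb{C}}$-point produced this way need not be an ${\mathbb{N}}$-point; the divisibility conditions for that are the content of Proposition \[type-1\].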
If $n$ is a natural number, then $\pi^I_n$ clearly maps ${\mathbb{N}}$-points of $\Sigma^I_n$ to ${\mathbb{N}}$-points of $S_n$, and if $c$ is coprime to $n$, gives a Type I solution (note that $abd$ is automatically coprime to $n$, thanks to ). In the converse direction, all Type I solutions arise in this manner:
\[type-1\] Let $n \in {\mathbb{N}}$, and let $(x,y,z)$ be a Type I solution. Then there exists a unique $(a,b,c,d,e,f) \in {\mathbb{N}}^6 \cap \Sigma^{{\operatorname{I}}}_n$ with $abcd$ coprime to $n$ and $a,b,c$ having no common factor, such that $\pi^{{\operatorname{I}}}_n(a,b,c,d,e,f) = (x,y,z)$.
The uniqueness follows since $\pi^{{\operatorname{I}}}_n$ is injective after quotienting out by dilations. To show existence, we factor $x=ndx', y = dy', z=dz'$, where $x',y',z'$ are coprime, then after multiplying by $ndx'y'z'$ we have $$\label{dxy}
4dx'y'z' = y'z' + nx'y' + nx'z'.$$ As $y',z'$ are coprime to $n$, we conclude that $x'$ divides $y'z'$, $y'$ divides $x'z'$, and $z'$ divides $x'y'$. Splitting into prime factors, we conclude that $$\label{xabc}
x' = ab, y' = ac, z' = bc$$ for some natural numbers $a,b,c$; since $x',y',z'$ have no common factor, $a,b,c$ have no common factor also. As $y,z$ were coprime to $n$, $abcd$ is coprime to $n$ also.
Substituting into we obtain , which in particular implies (as $c$ is coprime to $n$) that $c$ divides $a+b$. If we then set $e := (a+b)/c$ and $f := 4acd - n = (na+c)/b$, then $e,f$ are natural numbers, and we obtain the other identities - by routine algebra. By construction we have $\pi^{{\operatorname{I}}}_n(a,b,c,d,e,f) = (x,y,z)$, and the claim follows.
In particular, for fixed $n$, a Type I solution exists if and only if there is an ${\mathbb{N}}$-point $(a,b,c,d,e,f)$ of $\Sigma^{{\operatorname{I}}}_n$ with $abcd$ coprime to $n$ (the requirement that $a,b,c$ have no common factor can be removed using the symmetry ). By parameterising $\Sigma^{{\operatorname{I}}}_n$ using three or four of the six coordinates, we recover some of the known characterisations of Type I solvability:
Let $n$ be a natural number. Then the following are equivalent:
- There exists a Type I solution $(x,y,z)$.
- There exist $a,b,e \in {\mathbb{N}}$ with $e \mid a+b$ and $4ab \mid ne+1$. [@aigner]
- There exist $a,b,c,d \in {\mathbb{N}}$ such that $4abcd = na+nb+c$ with $c$ coprime to $n$. [@bernstein]
- There exist $a,c,d,e \in {\mathbb{N}}$ such that $ne+1=4ad (ce-a)$ with $c$ coprime to $n$. [@rosati; @mordell]
- There exist $a,c,d,f \in {\mathbb{N}}$ such that $n=4acd-f$ and $f \mid 4a^2d+1$, with $c$ coprime to $n$. [@nakayama]
- There exist $b,c,d,e \in {\mathbb{N}}$ with $ne = (4bcde-1)- 4b^2 d$ and $c$ coprime to $n$. [@bello]
The proof of this proposition is routine and is omitted.
Type I solutions $(x,y,z)$ have the obvious reflection symmetry $(x,y,z) \mapsto (x,z,y)$. With and the corresponding symmetry for $\Sigma^{{\operatorname{I}}}_n$ is given by $$(a,b,c,d,e,f) \mapsto \left(b,a,c,d,e,\frac{n^2+4c^2d}{f}\right).$$ We will typically only use the $\Sigma^{{\operatorname{I}}}_n$ parameterisation when $y {\leqslant}z$ (or equivalently when $a {\leqslant}b$), in order to keep the sizes of various parameters small.
If we consider ${\mathbb{N}}$-points $(a,b,c,d,e,f)$ of $\Sigma^{{\operatorname{I}}}_n$ with $a=1$, they can be explicitly parameterised as $$\left(1, ce-1, c, \frac{ef-1}{4}, e, f \right)$$ where $e,f$ are natural numbers with $ef=1 \mod 4$ and $n=cef-c-f$. This shows that any $n$ of the form $cef-c-f$ with $ef=1\mod 4$ solves the Erdős-Straus conjecture, an observation made in [@bello]. However, this is a relatively small set of solutions (corresponding to roughly $\log^2 n$ solutions for a given $n$ on average, rather than $\log^3 n$), due to the restriction $a=1$. Nevertheless, in [@bello] it was verified that all primes $p = 1 \mod 4$ with $p {\leqslant}10^{14}$ were representable in this form.
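The $a=1$ parameterisation is easy to test directly: given $e,f$ with $ef = 1 \mod 4$ and any $c$, the point $(1, ce-1, c, (ef-1)/4, e, f)$ maps under $\pi^{{\operatorname{I}}}_n$ to a solution for $n = cef-c-f$. The sketch below (our own; the function name is ours) verifies this:

```python
from fractions import Fraction

def bello_solution(c, e, f):
    """Given e*f = 1 (mod 4), build the a = 1 point on Sigma^I_n with
    n = c*e*f - c - f, and return (n, x, y, z) via the map pi^I_n.

    For an N-point one also needs n >= 1, c*e - 1 >= 1 and e*f >= 5."""
    assert (e * f) % 4 == 1
    n = c*e*f - c - f
    d = (e*f - 1) // 4
    b = c*e - 1
    x, y, z = b*d*n, c*d, b*c*d   # pi^I_n with a = 1
    return n, x, y, z

n, x, y, z = bello_solution(2, 3, 3)   # n = 13
assert Fraction(1, x) + Fraction(1, y) + Fraction(1, z) == Fraction(4, n)
```

Here $(c,e,f)=(2,3,3)$ gives $n=13$ and the solution $4/13 = 1/130 + 1/4 + 1/20$.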
Now we turn to Type II solutions. Here, we replace $\Sigma^{{\operatorname{I}}}_n$ by the variety $\Sigma^{{\operatorname{II}}}_n$, defined as the set of all sextuples $(a,b,c,d,e,f) \in {\mathbb{C}}^6$ which are non-zero and obey the constraints $$\begin{aligned}
4abd &= n+e \label{II-1}\\
ce &= a+b \label{II-2}\\
4abcd &= a + b + nc \label{II-3}\\
4acde &= n + 4a^2 d + e \label{II-4}\\
4bcde &= n + 4b^2 d + e\label{II-5}\\
4acd &= f+1 \label{II-6}\\
ef &= n + 4a^2 d \label{II-7}\\
bf &= nc + a \label{II-8}\\
4c^2 dn + 1 &= f(4bcd-1)\label{II-9}.\end{aligned}$$ This is a very similar variety to $\Sigma^{{\operatorname{I}}}_n$; indeed the non-isotropic dilation $$(a,b,c,d,e,f) \mapsto (a, b, c/n^2, d n, n^2 e, f/n)$$ is a bijection from $\Sigma^{{\operatorname{I}}}_n$ to $\Sigma^{{\operatorname{II}}}_n$. Thus, as with $\Sigma^{{\operatorname{I}}}_n$, $\Sigma^{{\operatorname{II}}}_n$ is a three-dimensional algebraic variety in ${\mathbb{C}}^6$ which can be parameterised by any three of the six coordinates in $(a,b,c,d,e,f)$. As before, many of the constraints can be viewed as redundant; for instance, is a consequence of and . Note that $\Sigma^{{\operatorname{II}}}_n$ enjoys the same dilation symmetry as $\Sigma^{{\operatorname{I}}}_n$, and also has the reflection symmetry (using and ) $$(a,b,c,d,e,f) \mapsto \left(b,a,c,d,e,\frac{4c^2dn+1}{f}\right).$$ Analogously to $\pi^{{\operatorname{I}}}_n$, we have the map $\pi^{{\operatorname{II}}}_n: \Sigma^{{\operatorname{II}}}_n \to S_n$ given by $$\label{pi2-n}
\pi^{{\operatorname{II}}}_n: (a,b,c,d,e,f) \mapsto (abd, acdn, bcdn)$$ which is injective up to the dilation symmetry and which, when $n$ is a natural number, maps ${\mathbb{N}}$-points of $\Sigma^{{\operatorname{II}}}_n$ to ${\mathbb{N}}$-points of $S_n$, and when $abd$ is coprime to $n$, gives Type II solutions. (Note that this latter condition is automatic when $n$ is prime, since $x,y,z$ cannot all be divisible by $n$.)
We have an analogue of Proposition \[type-1\]:
\[type-2\] Let $n \in {\mathbb{N}}$, and let $(x,y,z)$ be a Type II solution. Then there exists a unique $(a,b,c,d,e,f) \in {\mathbb{N}}^6 \cap \Sigma^{{\operatorname{II}}}_n$ with $abd$ coprime to $n$ and $a,b,c$ having no common factor, such that $\pi^{{\operatorname{II}}}_n(a,b,c,d,e,f) = (x,y,z)$.
Uniqueness follows from injectivity modulo dilations of $\pi^{{\operatorname{II}}}_n$ as before. To show existence, we factor $x=dx', y = ndy', z=ndz'$, where $x',y',z'$ are coprime, then after multiplying by $ndx'y'z'$ we have $$\label{dxy-2}
4dx'y'z' = ny'z' + x'y' + x'z'.$$ As $x'$ is coprime to $n$, we conclude that $x'$ divides $y'z'$, $y'$ divides $x'z'$, and $z'$ divides $x'y'$. Splitting into prime factors, we again obtain the representation for some natural numbers $a,b,c$; since $x',y',z'$ have no common factor, $a,b,c$ have no common factor also. As $x$ was coprime to $n$, $abd$ is coprime to $n$ also.
Substituting into we obtain , which in particular implies that $c$ divides $a+b$. If we then set $e := (a+b)/c$ and $f := 4acd - 1$, then $e,f$ are natural numbers, and we obtain the other identities - by routine algebra. By construction we have $\pi^{{\operatorname{II}}}_n(a,b,c,d,e,f) = (x,y,z)$, and the claim follows.
Again, we can recover some known characterisations of Type II solvability:
Let $n$ be a natural number. Then the following are equivalent:
- There exists a Type II solution $(x,y,z)$.
- There exist $a,b,e \in {\mathbb{N}}$ with $e \mid a+b$ and $4ab \mid n+e$, and $(n+e)/4$ coprime to $n$. [@aigner]
- There exist $a,b,c,d \in {\mathbb{N}}$ such that $4abcd = a+b+nc$ with $abd$ coprime to $n$. [@bernstein; @mordell]
- There exist $a,b,d \in {\mathbb{N}}$ with $4abd-1 \mid b+nc$ and $abd$ coprime to $n$. [@vaughan]
- There exist $a,c,d,e \in {\mathbb{N}}$ such that $n = (4acd-1)e - 4a^2 d$ with $(n+e)/4$ coprime to $n$. [@rosati]
- There exist $a,c,d,f \in {\mathbb{N}}$ such that $n=4ad(ce-a)-e=e(4acd-1)-4a^2d$ with $ad(ce-a)$ coprime to $n$. [@nakayama]
Next, we record some bounds on the order of magnitude of the parameters $a,b,c,d,e,f$ assuming that $y {\leqslant}z$.
\[size\] Let $n \in {\mathbb{N}}$, and suppose that $(x,y,z) = \pi^{{\operatorname{I}}}_n(a,b,c,d,e,f)$ is a Type I solution such that $y {\leqslant}z$. Then $$\begin{aligned}
a &{\leqslant}b \\
\frac{1}{4} n < acd &{\leqslant}\frac{3}{4} n \\
b < ce &{\leqslant}2b\\
an {\leqslant}bf &{\leqslant}\frac{5}{3} an.\end{aligned}$$ If instead $(x,y,z) = \pi^{{\operatorname{II}}}_n(a,b,c,d,e,f)$ is a Type II solution such that $y {\leqslant}z$, then $$\begin{aligned}
a &{\leqslant}b \\
\frac{1}{4} n < acde &{\leqslant}n \\
b < ce &{\leqslant}2b\\
3acd {\leqslant}f &< 4acd\end{aligned}$$
Informally, the above lemma asserts that the magnitudes of the quantities $(a,b,c,d,e,f)$ are controlled entirely by the parameters $(a,c,d,f)$ (in the Type I case) and $(a,c,d,e)$ (in the Type II case), with the bounds $acd \sim n, f \ll n$ in the Type I case and $acde \sim n$ in the Type II case. The constants in the bounds here could be improved slightly, but such improvements will not be of importance in our applications.
First suppose we have a Type I solution. As $y {\leqslant}z$, we have $a {\leqslant}b$. From we then have $b < ce {\leqslant}2b$, and thus from we have $$an {\leqslant}bf {\leqslant}an + \frac{2}{ef} bf.$$ Now, from , $ef = 1 \mod 4$. If $e=f=1$, then from and we would have $b = na + c = na + a + b$, which is absurd, thus $ef
{\geqslant}5$. This gives $bf {\leqslant}5 an/3$ as claimed. From this implies that $c {\leqslant}2an/3$, which in particular implies that $bcd< abdn$ and so $y {\leqslant}z < x$. From we conclude that $$\frac{4}{3n} {\leqslant}\frac{1}{y} < \frac{4}{n}$$ which gives the bound $n/4 < acd {\leqslant}3n/4$ as claimed.
Now suppose we have a Type II solution. Again $a {\leqslant}b$ and $b < ce {\leqslant}2b$. From we have $$nc < 4abcd {\leqslant}nc + 2abcd$$ and thus $n/4 < abd {\leqslant}n/2$, which by the $ce$ bound gives $n/4 < acde {\leqslant}n$. Since $f = 4acd-1$, we have $3acd {\leqslant}f < 4acd$, and the claim follows.
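As a sanity check (ours, not from the text), the ${\mathbb{N}}$-point $(a,b,c,d,e,f) = (1,5,2,2,3,3)$ of $\Sigma^{{\operatorname{I}}}_{13}$, obtained by hand from the $a=1$ family with $(c,e,f)=(2,3,3)$, satisfies all of the Type I bounds of Lemma \[size\]:

```python
# N-point of Sigma^I_13 from the a = 1 family with (c, e, f) = (2, 3, 3),
# so n = c*e*f - c - f = 13
n = 13
a, b, c, d, e, f = 1, 5, 2, 2, 3, 3
assert 4*a*b*d == n*e + 1 and c*e == a + b and 4*a*c*d == n + f
x, y, z = a*b*d*n, a*c*d, b*c*d            # pi^I_n
assert 4*x*y*z == n*(y*z + x*z + x*y)      # i.e. 4/n = 1/x + 1/y + 1/z
assert y <= z
# the Type I bounds of the lemma:
assert a <= b
assert n < 4*a*c*d <= 3*n                  # n/4 < acd <= 3n/4
assert b < c*e <= 2*b
assert 3*a*n <= 3*b*f <= 5*a*n             # an <= bf <= (5/3)an
```

Here $(x,y,z) = (130,4,20)$, so the largest denominator is indeed the one divisible by $n$.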
From the above bounds one can also easily deduce the following observation: if $4/p = 1/x + 1/y + 1/z$, then the largest denominator $\max(x,y,z)$ is always divisible by $p$. (This observation also appears in [@els].)
\[heath-remark\] Propositions \[type-1\], \[type-2\] can be viewed as special cases of the classification by Heath-Brown [@heath] of primitive integer points $(x_1,x_2,x_3,x_4) \in ({\mathbb{Z}}\backslash \{0\})^4$ on Cayley’s surface $$\left\{ (x_1,x_2,x_3,x_4): \frac{1}{x_1} + \frac{1}{x_2} + \frac{1}{x_3} + \frac{1}{x_4} = 0 \right\},$$ where by “primitive” we mean that $x_1,x_2,x_3,x_4$ have no common factor. Note that if $n,x,y,z$ solve , then $(-n, 4x, 4y, 4z)$ is an integer point on this surface, which will be primitive when $n$ is prime. In [@heath Lemma 1] it is shown that such integer points $(x_1,x_2,x_3,x_4)$ take the form $$x_i = \epsilon y_j y_k y_l z_{ij} z_{ik} z_{il}$$ for $\{i,j,k,l\}=\{1,2,3,4\}$, where $\epsilon \in \{-1,+1\}$ is a sign, and the $y_i, z_{ij}$ are non-zero integers obeying the coprimality constraints $$(y_i,y_j) = (z_{ij},z_{kl}) = (y_i,z_{ij}) = 1$$ for $\{i,j,k,l\}=\{1,2,3,4\}$, and obeying the equation $$\label{da}
\sum_{\{i,j,k,l\}=\{1,2,3,4\}} y_i z_{jk} z_{kl} z_{lj} = 0.$$ Conversely, any $\epsilon, y_i, z_{ij}$ obeying the above conditions induces a primitive integer point on Cayley’s surface. The Type I (resp. Type II) solutions correspond, roughly speaking, to the cases when one of the $z_{1i}$ (resp. one of the $y_i$) in the factorisation $$n = x_1 = \epsilon y_2 y_3 y_4 z_{12} z_{13} z_{14}$$ are equal to $\pm n$. The $y_i, z_{ij}$ coordinates are closely related to the $(a,b,c,d,e,f)$ coordinates used in this section; in [@heath] it is observed that these coordinates obey a number of algebraic equations in addition to , which essentially describe (the closure of) the universal torsor [@cts] of Cayley’s surface.
In [@heath] it was shown that the number of integer points $(x_1,x_2,x_3,x_4)$ on Cayley’s surface of maximal height $\max(|x_1|,\ldots,|x_4|)$ bounded by $N$ was comparable to $N \log^6 N$. This is not quite the situation considered in our paper; a solution to with $n {\leqslant}N$ induces an integer point $(x_1,x_2,x_3,x_4)$ whose *minimal* height $\min(|x_1|,\ldots,|x_4|)$ is bounded by $N$. Nevertheless, the results in [@heath] can be easily modified (by minor adjustments to account for the restriction that three of the $x_i$ are positive, and restricting $n$ to be a multiple of $4$ to eliminate divisibility constraints) to give a *lower bound* $\sum_{n {\leqslant}N} f(n) \gg N \log^6 N$ for the number of such points, though it is not immediately obvious whether this lower bound can be matched by a corresponding upper bound. In any case, we see that there are several logarithmic factors separating the general solution count from the Type I and Type II solution count; in particular, for generic $n$, the majority of solutions to will be neither Type I nor Type II. In spite of this, the number of Type I and Type II solutions is the relevant quantity for studying the Erdős-Straus conjecture, since it is natural to restrict that conjecture to prime denominators.
We close this section with a small remark on the well-known classification of solutions in Mordell’s book: his two cases (in his notation) $$\frac{m}{p} = \frac{1}{abd} + \frac{1}{acd} + \frac{1}{bcdp}$$ with $(a,b)=(a,c)=(b,c)=1$ and $p\nmid abcd$ and $$\frac{m}{p} = \frac{1}{abd} + \frac{1}{acdp} + \frac{1}{bcdp}$$ with $(a,b)=(a,c)=(b,c)=1$ and $p\nmid abd$ suggest that $p\mid c$ might be possible. Here we prove, for $m>3$ and $p$ coprime to $m$, that none of the denominators can be divisible by $p^2$. In particular $p \nmid abcd$ in both of the cases above.
[\[refinement\]]{} Let $m/p = 1/x + 1/y +1/z$ where $m > 3$, $p$ is a prime not dividing $m$, and $x,y,z$ are natural numbers. Then none of $x,y,z$ are divisible by $p^2$.
Note that there are a small number of counterexamples to this proposition for $m {\leqslant}3$, such as ${3}/{2} = {1}/{1} + {1}/{4} + {1}/{4}$.
We may assume that $(x,y,z)$ is either a Type I or Type II solution (replacing $4$ by $m$ as needed). In the Type I case $(x,y,z) = (abdp, acd, bcd)$, the claim is already clear since $abcd$ is known to be coprime to $p$. In the Type II case $(x,y,z) = (abd, acdp, bcdp)$ it is known that $abd$ is coprime to $p$, so the only remaining task is to establish that $c$ is coprime to $p$ also.
Suppose $c$ is not coprime to $p$; then $y, z$ are both divisible by $p^2$. In particular $$\frac{1}{y}+\frac{1}{z} {\leqslant}\frac{2}{p^2}$$ and hence $$\frac{m}{p} > \frac{1}{x} {\geqslant}\frac{m}{p} - \frac{2}{p^2}.$$ Taking reciprocals, we conclude that $$p < mx {\leqslant}p (1 - \frac{2}{mp})^{-1}.$$ Bounding $(1-{\varepsilon})^{-1} < 1+2{\varepsilon}$ when $0 < {\varepsilon}< 1/2$, we conclude that $$p < mx < p + \frac{4}{m}.$$ Since $m > 3$ we have $4/m {\leqslant}1$, so $mx$ lies strictly between the consecutive integers $p$ and $p+1$, contradicting the fact that $mx$ is an integer.
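Proposition \[refinement\] is easy to test numerically for small parameters; the brute-force enumeration below (our own sketch) confirms it for $m=5$, $p=7$:

```python
from fractions import Fraction

def solutions(m, n):
    """All triples x <= y <= z with m/n = 1/x + 1/y + 1/z (brute force)."""
    out = []
    target = Fraction(m, n)
    x = 1
    while Fraction(3, x) >= target:
        r1 = target - Fraction(1, x)
        if r1 > 0:
            y = x
            while Fraction(2, y) >= r1:
                r2 = r1 - Fraction(1, y)
                if r2 > 0 and r2.numerator == 1 and r2.denominator >= y:
                    out.append((x, y, r2.denominator))
                y += 1
        x += 1
    return out

# m = 5 > 3, p = 7 prime, p coprime to m: no denominator is divisible by p^2
for (x, y, z) in solutions(5, 7):
    assert all(t % 49 != 0 for t in (x, y, z))
```

For instance, $5/7 = 1/2 + 1/7 + 1/14$ has one denominator divisible by $7$ but none by $49$.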
Upper bounds for $f_i(n)$ {#upper-sec}
=========================
We may now prove Proposition \[upper\].
We begin with the bound for $f_{{\operatorname{I}}}(n)$. By symmetry we may restrict attention to Type I solutions $(x,y,z)$ for which $y {\leqslant}z$. By Proposition \[type-1\] and Lemma \[size\], these solutions arise from sextuples $(a,b,c,d,e,f) \in {\mathbb{N}}^6 \cap \Sigma^{{\operatorname{I}}}_n$ obeying the Type I bounds in Lemma \[size\]. In particular we see that $$e \cdot f \cdot (cd)^2 \cdot ac = (acd)^2 (\frac{ce}{b}) (\frac{bf}{a}) \ll n^3,$$ and hence at least one of $e, f, cd, ac$ is $O(n^{3/5})$.
Suppose first that $e \ll n^{3/5}$. For fixed $e$, we see from and the divisor bound that there are $n^{O({1}/{\log\log n})}$ choices for $a,b,d$, giving a net total of $n^{3/5+O({1}/{\log\log n})}$ points in $\Sigma^{{\operatorname{I}}}_n$ in this case.
Similarly, if $f \ll n^{3/5}$, and the divisor bound gives $n^{O({1}/{\log\log n})}$ choices for $a,c,d$ for each $f$, giving $n^{3/5+O({1}/{\log\log n})}$ solutions. If $cd \ll n^{3/5}$, one uses and the divisor bound to get $n^{O({1}/{\log\log n})}$ choices for $b,f,c,d$ for each choice of $cd$, and if $ac \ll n^{3/5}$, then and the divisor bound gives $n^{O({1}/{\log\log n})}$ choices for $a,b,c,f$ for each fixed $ac$. Putting all this together (and recalling that any three coordinates in $\Sigma^{{\operatorname{I}}}_n$ determine the other three) we obtain the first part of Proposition \[upper\].
Now we prove the bound for $f_{{\operatorname{II}}}(n)$, which is similar. Again we may restrict attention to sextuples $(a,b,c,d,e,f) \in {\mathbb{N}}^6 \cap \Sigma^{{\operatorname{II}}}_n$ obeying the Type II bounds in Lemma \[size\]. In particular we have $$e^2 \cdot (ad) \cdot (ac) \cdot (cd) = (acde)^2 {\leqslant}n^2$$ and so at least one of $e, ad, ac, cd$ is $O(n^{2/5})$.
If $e \ll n^{2/5}$, we use and the divisor bound to get $n^{O({1}/{\log \log n})}$ choices for $a,b,d$ for each $e$. If $ad \ll n^{2/5}$, we use and the divisor bound to get $n^{O({1}/{\log \log n})}$ choices for $a,d,e,f$ for each fixed $ad$. If $ac \ll n^{2/5}$, we use to get $n^{O({1}/{\log \log n})}$ choices for $a,c,b,f$ for each fixed $ac$. If $cd \ll n^{2/5}$, we use and the divisor bound to get $n^{O({1}/{\log \log n})}$ choices for $b,c,d,f$ for each fixed $cd$. Putting all this together we obtain the second part of Proposition \[upper\].
This argument, together with the fact that a large number $n$ can be factorised in expected $O(n^{o(1)})$ time (using, say, the quadratic sieve [@pomerance]), gives an algorithm to find all Type I solutions for a given $n$ in expected run time $O(n^{3/5+o(1)})$, and an algorithm to find all the Type II solutions in expected run time $O(n^{2/5+o(1)})$.
Insolubility for odd squares {#odd-sec}
============================
We now prove Proposition \[odd\]. Suppose for contradiction that $n$ is an odd perfect square (in particular, $n=1 \mod 8$) with a Type I solution. Then by Proposition \[type-1\], we can find an ${\mathbb{N}}$-point $(a,b,c,d,e,f)$ in $\Sigma^{{\operatorname{I}}}_n$.
Let $q$ be the largest odd factor of $ab$. From we have $ne+1=0 \mod q$. Since $n$ is a perfect square, we conclude that $$\left( \frac{e}{q} \right) = \left( \frac{-1}{q} \right) = (-1)^{(q-1)/2}$$ thanks to . Since $n=1 \mod 8$, we see from that $e=3 \mod 4$. By quadratic reciprocity we thus have $$\left( \frac{q}{e} \right) = 1.$$ On the other hand, from we see that $ab = -a^2 \mod e$, and thus $$\left( \frac{ab}{e} \right) = \left( \frac{-1}{e} \right) = -1$$ by . This forces $ab \neq q$, and so (by definition of $q$) $ab$ is even. By , this forces $e=7 \mod 8$, which by implies that $$\left( \frac{2}{e} \right) =1$$ and thus $$\left( \frac{q}{e} \right) = \left( \frac{ab}{e} \right),$$ a contradiction.
The proof in the Type II case is almost identical, using , in place of , ; we omit the details.
Lower bounds I {#lower-sec}
==============
Now we prove the lower bounds in Theorem \[main\].
We begin with the lower bound $$\label{2n-lower}
\sum_{n {\leqslant}N} f_{{\operatorname{II}}}(n) \gg N \log^3 N.$$
Suppose $a,c,d,e$ are natural numbers with $d$ square-free, $e$ coprime to $ad$, $e > a$, and $acde {\leqslant}N/4$. Then the quantity $$\label{acde}
n := 4acde - e - 4a^2 d$$ is a natural number of size at most $N$, and $(a,ce-a,c,d,e,4acd-1)$ is an ${\mathbb{N}}$-point of $\Sigma^{{\operatorname{II}}}_{n}$. Applying $\pi^{{\operatorname{II}}}_n$, we obtain a solution $$(x,y,z) = (a(ce-a)d, acdn, (ce-a)cdn)$$ to . We claim that this is a Type II solution, or equivalently that $a(ce-a)d$ is coprime to $n$. As $e$ is coprime to $ad$, we see from that $n$ is coprime to $ade$, so it suffices to show that $n$ is coprime to $b := ce-a$. But if $q$ is a common factor of both $n$ and $b$, then from the identity (with $f = 4acd-1$) we see that $q$ is also a common factor of $a$, a contradiction. Thus we have obtained a Type II solution. Also, as $d$ is square-free, any two distinct quadruples $(a,c,d,e)$ will generate different solutions, as the associated sextuples $(a,ce-a,c,d,e,4acd-1)$ cannot be related to each other by the dilation . Thus, it will suffice to show that there are at least $\delta N \log^3 N$ quadruples $(a,c,d,e) \in {\mathbb{N}}^4$ with $d$ square-free, $e$ coprime to $ad$, $e > a$, and $acde {\leqslant}N/4$ for some absolute constant $\delta>0$. Restricting $a,c,d$ to be at most $N^{0.1}$ (say), we see that the number of possible choices of $e$ is at least $\delta' ({N}/{acd}) {\phi(ad)}/{ad}$, where $\phi$ is the Euler totient function and $\delta'>0$ is another absolute constant. It thus suffices to show that $$\sum_{a,c,d {\leqslant}N^{0.1}} \mu^2(d) \frac{\phi(ad)}{ad} \frac{1}{adc} \gg \log^3 N,$$ where $\mu$ is the Möbius function (so $\mu^2(d)=1$ exactly when $d$ is square-free). Using the elementary estimate $\phi(ad) {\geqslant}\phi(a) \phi(d)$ and factorising, we see that it suffices to show that $$\label{doe}
\sum_{d {\leqslant}N^{0.1}} \frac{\mu(d)^2 \phi(d)}{d^2} \gg \log N.$$ But this follows from Lemma \[upper-crude\].
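The Type II construction just given is easy to test numerically: an admissible quadruple $(a,c,d,e)$ determines $b = ce-a$, $n = 4acde-e-4a^2d$, and the solution $(abd, acdn, bcdn)$. A minimal sketch (the helper name is ours, not the paper's):

```python
from fractions import Fraction
from math import gcd

def type_ii_from_quadruple(a, c, d, e):
    """Illustrative helper (name ours): map an admissible quadruple
    (a, c, d, e) to n and the associated Type II solution (x, y, z)."""
    assert e > a and gcd(e, a * d) == 1
    b = c * e - a
    n = 4 * a * c * d * e - e - 4 * a * a * d
    return n, (a * b * d, a * c * d * n, b * c * d * n)

# check the construction on a few admissible quadruples
for (a, c, d, e) in [(1, 1, 1, 3), (1, 2, 1, 5), (2, 1, 3, 7)]:
    n, (x, y, z) = type_ii_from_quadruple(a, c, d, e)
    assert Fraction(1, x) + Fraction(1, y) + Fraction(1, z) == Fraction(4, n)
    assert gcd(x, n) == 1 and y % n == 0 and z % n == 0   # Type II pattern
```

For instance the quadruple $(1,1,1,3)$ produces $n=5$ with the Type II solution $(2,5,10)$.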
Now we prove the lower bound $$\sum_{n {\leqslant}N} f_{{\operatorname{I}}}(n) \gg N \log^3 N,$$ which follows by a similar method.
Suppose $a,c,d,f$ are natural numbers with $d$ square-free, $f$ dividing $4a^2d+1$ and coprime to $c$, $d {\geqslant}f$, and $acd {\leqslant}N/4$. Then the quantity $$\label{acdef}
n := 4acd - f$$ is a natural number which is at most $N$, and $(a, b, c, d, {4a^2 d+1}/{f}, f)$ is an ${\mathbb{N}}$-point of $\Sigma^{{\operatorname{I}}}_n$, where $$b := c \frac{4a^2d+1}{f}-a = \frac{na+c}{f}.$$ Applying $\pi^{{\operatorname{I}}}_n$, this gives a solution $$(x,y,z) = (abdn, acd, bcd)$$ to , and as before the square-free nature of $d$ ensures that each quadruple $(a,c,d,f)$ gives a different solution. We claim that this is a Type I solution, i.e. that $abcd$ is coprime to $n$. As $f$ divides $4a^2d+1$, $f$, and with it also $n$, is coprime to $ad$. As $f$ and $c$ are coprime by assumption, $n$ is coprime to $acd$ by . As $b = (na+c)/f$, we conclude that $n$ is also coprime to $b$.
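This Type I construction can likewise be verified numerically: a quadruple $(a,c,d,f)$ with $f \mid 4a^2d+1$ determines $e = (4a^2d+1)/f$, $n = 4acd-f$, $b = ce-a$, and the solution $(abdn, acd, bcd)$. A minimal sketch (the helper name is ours):

```python
from fractions import Fraction
from math import gcd

def type_i_from_quadruple(a, c, d, f):
    """Illustrative helper (name ours): map an admissible quadruple
    (a, c, d, f) to n and the associated Type I solution (x, y, z)."""
    assert (4 * a * a * d + 1) % f == 0 and gcd(f, c) == 1 and d >= f
    e = (4 * a * a * d + 1) // f
    n = 4 * a * c * d - f
    b = c * e - a
    assert b == (n * a + c) // f           # the two formulas for b agree
    return n, (a * b * d * n, a * c * d, b * c * d)

for (a, c, d, f) in [(1, 1, 1, 1), (1, 2, 1, 1), (1, 1, 6, 5)]:
    n, (x, y, z) = type_i_from_quadruple(a, c, d, f)
    assert Fraction(1, x) + Fraction(1, y) + Fraction(1, z) == Fraction(4, n)
    assert x % n == 0 and gcd(y * z, n) == 1   # Type I pattern: n divides x only
```

For instance the quadruple $(1,2,1,1)$ produces $n=7$ with the Type I solution $(63,2,18)$.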
Thus it will suffice to show that there are at least $\delta N \log^3 N$ quadruples $(a,c,d,f) \in {\mathbb{N}}^4$ with $f$ coprime to $2ac$, and $d$ square-free with $f$ dividing $4a^2d+1$, $d {\geqslant}f$, and $acd {\leqslant}N/4$, for some absolute constant $\delta>0$.
We restrict $a, c, f$ to be at most $N^{0.1}$. If $f$ is coprime to $2ac$, then there is a unique primitive residue class modulo $f$ such that $4a^2d+1$ is a multiple of $f$ for all $d$ in this class. Also, there are at least $\delta {N}/{acf}$ elements $d$ of this residue class with $d {\geqslant}f$ and $acd {\leqslant}N/4$ for some absolute constant $\delta>0$; a standard sieving argument shows that a positive proportion of these elements are square-free. Thus, we have a lower bound of $$\sum_{a,c,f {\leqslant}N^{0.1}: (f,2ac)=1} \frac{N}{acf}$$ for the number of quadruples. Restricting $f$ to be odd and then using the crude sieve $$\label{crude-sieve}
1_{(f,2ac)=1} {\geqslant}1 - \sum_p 1_{p \mid f} 1_{p \mid a} - \sum_p 1_{p \mid f} 1_{p \mid c}$$ where $p$ ranges over odd primes, where $1_E$ denotes the indicator function of a statement $E$ (i.e. $1_E=1$ if $E$ holds, and $1_E=0$ otherwise), one easily verifies that the above expression is at least $\delta N \log^3 N$ for some absolute constant $\delta>0$, and the claim follows.
Now we establish the lower bound $$\sum_{p {\leqslant}N} f_{{\operatorname{II}}}(p) \gg N \log^2 N.$$ We will repeat the proof of , but because we are now counting primes instead of natural numbers we will need to invoke the Bombieri-Vinogradov inequality at a key juncture.
Suppose $a,c,d,e$ are natural numbers with $d$ square-free, $a,c,d {\leqslant}N^{0.1}$, and $e$ between $N^{0.6}$ and $N/4acd$ with $$\label{pacd}
p := 4acde - e - 4a^2 d$$ prime. Then $p$ is at most $N$ and at least $N^{0.6}$, and in particular is automatically coprime to $ade$ (and thus $ce-a$, by previous arguments). Thus, as before, each such $(a,c,d,e)$ gives a Type II solution for a prime $p {\leqslant}N$, with different quadruples giving different solutions. Thus it suffices to show that there are at least $\delta N \log^2 N$ quadruples $(a,c,d,e)$ with the above properties for some absolute constant $\delta>0$.
Fix $a,c,d$. As $e$ ranges from $N^{0.6}$ to $N/4acd$, the expression traces out a primitive residue class modulo $4acd-1$, omitting at most $O(N^{0.6})$ members of this class that are less than $N$. Thus, the number of primes of the form for fixed $acd$ is $$\pi(N;4acd-1,-4a^2d) - O(N^{0.6}),$$ where $\pi(N;q,t)$ denotes the number of primes $p<N$ that are congruent to $t$ mod $q$. We replace $\pi(N;4acd-1,-4a^2d)$ by a good approximation, and bound the error. If we set $$D(N;q) := \max_{(a,q)=1} \left|\pi(N;q,a) - \frac{{ {\rm li } }(N)}{\phi(q)}\right|$$ (as in ), where ${ {\rm li } }(x) := \int_0^x {dt}/{\log t}$ is the Cauchy principal value of the logarithmic integral, the number of primes of the form for fixed $acd$ is at least $$\frac{{ {\rm li } }(N)}{\phi(4acd-1)} - D(N;4acd-1) - O(N^{0.6}).$$ The overall contribution of the $O(N^{0.6})$ error terms, summed over all choices of $a,c,d$, is at most $O( (N^{0.1})^3 N^{0.6} ) = o(N \log^2 N)$, while ${ {\rm li } }(N)$ is comparable to $N/\log N$, so it will suffice to show the lower bound $$\label{lower-ii} \sum_{a,c,d {\leqslant}N^{0.1}} \frac{\mu^2(d)}{\phi(4acd-1)} \gg \log^3 N$$ and the upper bound $$\label{upper-ii} \sum_{a,c,d {\leqslant}N^{0.1}} D(N;4acd-1) = o( N \log^2 N ).$$ We first prove . Using the trivial bound $\phi(4acd-1) {\leqslant}4acd$, it suffices to show that $$\sum_{a,c,d {\leqslant}N^{0.1}} \frac{\mu^2(d) }{acd} \gg \log^3 N$$ which upon factorising reduces to showing $$\sum_{d {\leqslant}N^{0.1}} \frac{\mu^2(d)}{d} \gg \log N.$$ But this follows from Lemma \[upper-crude\].
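The quantity $\pi(N;q,t)$, and the near-equidistribution among reduced residue classes that the discrepancy $D(N;q)$ measures (and that Bombieri-Vinogradov controls on average over $q$), can be illustrated directly. The sketch below is our own, using a basic sieve and the classes mod $3$; the tolerance in the final assertion is a deliberately loose stand-in for the genuine error term:

```python
def primes_upto(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b'\x00\x00'
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [p for p in range(n + 1) if sieve[p]]

def pi_prog(N, q, t, primes):
    """pi(N; q, t): number of primes p < N with p congruent to t mod q."""
    return sum(1 for p in primes if p < N and p % q == t)

primes = primes_upto(10 ** 4)
c1 = pi_prog(10 ** 4 + 1, 3, 1, primes)
c2 = pi_prog(10 ** 4 + 1, 3, 2, primes)
assert c1 + c2 + 1 == len(primes)          # every prime except 3 is +-1 mod 3
assert abs(c1 - c2) < 0.05 * len(primes)   # the two classes are nearly balanced
```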
Now we show . Writing $q := 4acd-1$, we can upper bound the left-hand side of somewhat crudely by $$\sum_{q {\leqslant}N^{0.3}} D(N;q) \tau(q+1)^2.$$ From divisor moment estimates (see ) we have $$\sum_{q {\leqslant}N^{0.3}} \frac{\tau(q+1)^4}{q} \ll \log^{O(1)} N;$$ hence by Cauchy-Schwarz, we may bound the preceding quantity by $$\ll \log^{O(1)} N \left(\sum_{q {\leqslant}N^{0.3}} q D(N;q)^2\right)^{1/2}.$$ Using the trivial bound $D(N;q) \ll N/q$, we bound this in turn by $$\ll N^{1/2} \log^{O(1)} N \left(\sum_{q {\leqslant}N^{0.3}} D(N;q)\right)^{1/2}.$$ But from the Bombieri-Vinogradov inequality , we have $$\sum_{q {\leqslant}N^{0.3}} D(N;q) \ll_A N \log^{-A} N$$ for any $A > 0$, and the claim follows.
Finally, we establish the lower bound $$\sum_{p {\leqslant}N} f_{{\operatorname{I}}}(p) \gg N \log^2 N.$$ Unsurprisingly, we will repeat many of the arguments from preceding cases. Suppose $a,c,d,f$ are natural numbers with $a,c,f {\leqslant}N^{0.1}$ with $(a,c)=(2ac,f)=1$, $N^{0.6} {\leqslant}d {\leqslant}N/4ac$, such that $f$ divides $4a^2d+1$, and the quantity $$\label{acdef-p}
p := 4acd - f$$ is prime. Then $p$ is at most $N$ and is at least $N^{0.4}$, and in particular is coprime to $a,c,f$; from it is coprime to $d$ also. This thus yields a Type I solution for $p$; by the coprimality of $a,c$, these solutions are all distinct as no two of the associated sextuples $(a, b, c, d, {4a^2 d+1}/{f}, f)$ can be related by . Thus it suffices to show that there are at least $\delta N \log^2 N$ quadruples $(a,c,d,f)$ with the above properties for some absolute constant $\delta>0$.
For fixed $a,c,f$, the parameter $d$ traverses a primitive congruence class modulo $f$, and $p=4acd-f$ traverses a primitive congruence class modulo $4acf$, that omits at most $O(N^{0.6})$ of the elements of this class that are less than $N$. By , the total number of $d$ that thus give a prime $p$ for fixed $acf$ is at least $$\frac{{ {\rm li } }(N)}{\phi(4acf)} - D(N;4acf) - O(N^{0.6})$$ and so by arguing as before it suffices to show the bounds $$\sum_{a,c,f {\leqslant}N^{0.1}} 1_{(a,c) = (2ac,f)=1} \frac{1}{\phi(4acf)} \gg \log^3 N$$ and $$\sum_{a,c,f {\leqslant}N^{0.1}} D(N;4acf) = o( N \log^2 N).$$ But this is proven by a simple modification of the arguments used to establish , (the constraints $(a,c)=(2ac,f)=1$ being easily handled by an elementary sieve such as ). This concludes all the lower bounds for Theorem \[main\].
Lower bounds II
===============
[\[lowerboundsec-2\]]{}
Here we prove Theorem \[lowerboundturankubilius\].
For any natural numbers $m,n$, let $g_2(m,n)$ denote the number of solutions $(x,y)\in {\mathbb{N}}^2$ to the Diophantine equation ${m}/{n}={1}/{x}+{1}/{y}$. Since $$\frac{1}{x} + \frac{1}{y} = \frac{1}{x} + \frac{1}{2y} + \frac{1}{2y}$$ we conclude the crude bound $f(n) {\geqslant}g_2(4,n)$ for any $n$.
In [@browning Theorem 1] it was shown that $g_2(m,n) \gg 3^s$ whenever $n$ is the product of $s$ distinct primes congruent to $-1 \mod m$. Since $g_2(m, kn) {\geqslant}g_2(m, n)$ for any $k$, we conclude that $$\label{flower}
f(n) {\geqslant}g_2(4,n) \gg 3^{w_4(n)}$$ for all $n$, where $w_m(n)$ is the number of distinct prime factors of $n$ that are congruent to $-1 \mod m$.
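The quantity $g_2(m,n)$ is cheap to compute directly for small inputs. The brute-force sketch below is our own (it counts solutions with $x \leqslant y$; the paper's normalisation may differ by ordering, which does not affect the growth statements), together with a check of the scaling monotonicity $g_2(m,kn) \geqslant g_2(m,n)$ used above:

```python
def g2(m, n):
    """Solutions (x, y) with x <= y to m/n = 1/x + 1/y, by brute force."""
    count = 0
    for x in range(n // m + 1, 2 * n // m + 1):   # m/n > 1/x >= (m/n)/2
        num, den = m * x - n, n * x               # 1/y = (m*x - n)/(n*x)
        if num > 0 and den % num == 0:
            count += 1
    return count

assert g2(1, 2) == 2      # 1/2 = 1/3 + 1/6 = 1/4 + 1/4
assert g2(4, 3) == 1      # 4/3 = 1/1 + 1/3
# scaling (x, y) -> (kx, ky) shows g2(m, kn) >= g2(m, n):
assert g2(4, 3 * 7 * 11) >= g2(4, 3 * 7) >= g2(4, 3)
```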
Now we prove the first part of the theorem. Let $s$ be a large number, and let $n$ be the product of the first $s$ primes equal to $-1 \mod 4$; then from the prime number theorem in arithmetic progressions we have $\log n = (1+o(1)) s \log s$, and thus $s = (1+o(1)) {\log n}/{\log \log n}$. From we then have $$f(n) \gg \exp\left( \log 3 (1+o(1)) \frac{\log n}{\log \log n}\right).$$ Letting $s \to \infty$ we obtain the claim.
For the second part of the theorem, we apply the Turán-Kubilius inequality (Lemma \[turan-kubilius\]) to the additive function $w_4$. This inequality gives that $$\sum_{n {\leqslant}N} |w_4(n)- \frac{1}{2}\log \log N|^2 \ll N \log \log N.$$ From this and Chebyshev’s inequality (see also [@tenenbaum p. 307]), we see that $$w_4(n) {\geqslant}\frac{1}{2}\log \log n +O(\xi(n) \sqrt{\log \log n})$$ for all $n$ in a density $1$ subset of ${\mathbb{N}}$. The claim then follows from .
Now we turn to the third part of the theorem. We first deal with the case when $p = 4t-1$ is prime. In this case $$\frac{4}{p} = \frac{4}{p+1} + \frac{1}{t(4t-1)}$$ which in particular implies that $$f(p) {\geqslant}g_2(4,p+1)$$ and thus $$f(p) \gg 3^{w_4(p+1)}.$$ By Lemma \[barban-turankubilius\] we know that $$\label{w4}
w_4(p+1) {\geqslant}\left(\frac{1}{2}-o(1)\right) \log \log p$$ for all $p$ in a set of primes of relative density $1$.
It remains to deal with those primes $p$ congruent to $1 \mod 4$. Writing $$\frac{4}{p} = \frac{1}{(p+3)/4} + \frac{3}{p(p+3)/4}$$ we see that $$f(p) {\geqslant}g_2(3,p(p+3)/4) \gg 3^{w_3((p+3)/4)} \gg 3^{w_3(p+3)}.$$ It thus suffices to show that $$w_3(p+3) {\geqslant}\left(\frac{1}{2}-o(1)\right) \log \log p$$ for all $p$ in a set of primes of relative density $1$. But this can be established by the same techniques used to establish .
Sums of divisor functions {#kab-sec}
=========================
Let $P: {\mathbb{Z}}\to {\mathbb{Z}}$ be a polynomial with integer coefficients, which for simplicity we will assume to be non-negative, and consider the sum $$\sum_{n {\leqslant}N} \tau(P(n)).$$ In [@erdos], Erdős established the bounds $$\label{oed}
N \log N \ll_{P} \sum_{n {\leqslant}N} \tau(P(n)) \ll_{P} N \log N$$ for all $N>1$ and for $P$ irreducible; note that the implied constants here can depend on both the degree and the coefficients of $P$. This is of course consistent with the heuristic $\tau(n) \sim \log n$ “on average”. Of course, the irreducibility hypothesis is necessary as otherwise $P(n)$ would be expected to have many more divisors.
In this section we establish a refinement of the Erdős upper bound that gives a more precise description of the dependence of the implied constant on $P$ (and with irreducibility replaced by a much weaker hypothesis), which may be of some independent interest:
\[erdos-bound\] Let $N > 1$, and let $P$ be a polynomial of degree $D$ whose coefficients are non-negative integers of magnitude at most $N^l$. For any natural number $m$, let $\rho(m)$ be the number of roots of $P \mod m$ in ${\mathbb{Z}}/m{\mathbb{Z}}$, and suppose one has the bound $$\label{pj}
\rho( p^j ) {\leqslant}C$$ for all primes $p$ and all $j {\geqslant}1$. Then $$N \sum_{m {\leqslant}N} \frac{\rho(m)}{m} \ll \sum_{n {\leqslant}N} \tau(P(n)) \ll_{D,l,C} N \sum_{m {\leqslant}N} \frac{\rho(m)}{m}.$$
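As a numerical illustration (ours, not part of the proof), one can compare the two sides of the theorem for the sample polynomial $P(n) = n^2+1$ at a modest $N$. The asserted window for the ratio is deliberately generous, since the theorem only promises agreement up to bounded factors:

```python
def tau(n):
    """Number of divisors of n, by trial division."""
    cnt, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            cnt += 1 if d * d == n else 2
        d += 1
    return cnt

def rho(m):
    """Number of roots of the sample polynomial P(x) = x^2 + 1 mod m."""
    return sum(1 for r in range(m) if (r * r + 1) % m == 0)

N = 200
lhs = sum(tau(n * n + 1) for n in range(1, N + 1))
rhs = N * sum(rho(m) / m for m in range(1, N + 1))
assert 0.2 < lhs / rhs < 5   # the two sides agree up to a bounded factor
```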
For any fixed $P$, one has the bound for some $C = C_P$ (by many applications of Hensel’s lemma, and treating the case of small $p$ separately), and when $P$ is irreducible one can use tools such as Landau’s prime ideal theorem to show that $\sum_{m {\leqslant}N} {\rho(m)}/{m} \ll_P \log N$ (indeed, much more precise asymptotics are available here). See [@stewart] for more precise bounds on $C$ in terms of quantities such as the discriminant $\Delta(P)$ of $P$; bounds of this type go back to Nagell [@nagell] and Ore [@ore] (see also [@sandor], [@huxley]). One should in fact be able to establish a version of Theorem \[erdos-bound\] in which the implied constant depends explicitly on the discriminant $\Delta(P)$ rather than on $C$ by using the estimates of Henriot [@henriot] (which build upon earlier work of Barban-Vehov [@barban], Daniel [@daniel], Shiu [@shiu], Nair [@nair], and Nair-Tenenbaum [@nairt]), but we will not do so here, as we will need to apply this bound in a situation in which the discriminant may be large, but for which the bound $C$ can still be taken to be small. However, the version of Nair’s estimate given in [@breteche Theorem 2], having no explicit dependence on the discriminant, may be able to give an alternate derivation of Theorem \[erdos-bound\]; we thank the referee for this observation.
Thus we see that Erdős’ original result is a corollary of Theorem \[erdos-bound\]. For special types of $P$ (e.g. linear or quadratic polynomials), more precise asymptotics on $\sum_{n {\leqslant}N} \tau(P(n))$ are known (see e.g. [@fouvry], [@fi] for the linear case, and [@hooley], [@scourfield], [@mckee], [@mckee2], [@mckee3] for the quadratic case), but the methods used are less elementary (e.g. Kloosterman sum bounds in the linear case, and class field theory in the quadratic case), and do not cover all ranges of coefficients of $P$ for the applications to the Erdős-Straus conjecture. See also [@pom] for another upper bound in the quadratic case which is uniform over large ranges of coefficients but gives weaker bounds (losing some powers of $\log N$).
Our argument will be based on the methods in [@erdos]. In this proof all implied constants will be allowed to depend on $D,l$ and $C$.
We begin with the lower bound, which is very easy. Clearly $$\label{low}
\tau(P(n)) {\geqslant}\sum_{m {\leqslant}N: m \mid P(n)} 1$$ and thus $$\sum_{n {\leqslant}N} \tau(P(n)) {\geqslant}\sum_{m {\leqslant}N} \sum_{n {\leqslant}N: m \mid P(n)} 1.$$ The expression $P(n) \mod m$ is periodic in $n$ with period $m$, and thus for $m {\leqslant}N$ one has $$\label{period}
N \frac{\rho(m)}{m} \ll \sum_{n {\leqslant}N: m \mid P(n)} 1 \ll N \frac{\rho(m)}{m}$$ which gives the lower bound on $\sum_{n {\leqslant}N} \tau(P(n))$.
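The counting bound just used can be seen numerically for a sample polynomial; the following sketch (ours) verifies the exact count $N\rho(m)/m$ in the clean case $m \mid N$:

```python
def rho(P, m):
    """Number of roots of P modulo m, by brute force."""
    return sum(1 for r in range(m) if P(r) % m == 0)

P = lambda n: n * n + 1          # sample polynomial (our choice, not the paper's)
N, m = 100, 5
count = sum(1 for n in range(1, N + 1) if P(n) % m == 0)
assert rho(P, m) == 2                 # n^2 + 1 = 0 mod 5 at n = 2, 3
assert count == N * rho(P, m) // m    # exactly N*rho(m)/m here since m | N
```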
Now we turn to the upper bound, which is more difficult. We first establish a preliminary bound $$\label{note}
\sum_{n {\leqslant}N} \tau(P(n))^2 \ll N \log^{O(1)} N$$ using an argument of Landreau [@landreau]. Let $n {\leqslant}N$. By the coefficient bounds on $P$ we have $$\label{n-bound}
P(n) \ll N^{O(1)}.$$ Using the main lemma from [@landreau], we conclude that $$\tau(P(n))^2 \ll \sum_{m {\leqslant}N: m \mid P(n)} \tau(m)^{O(1)}$$ and thus $$\sum_{n {\leqslant}N} \tau(P(n))^2 \ll \sum_{m {\leqslant}N} \tau(m)^{O(1)} \sum_{n {\leqslant}N: m \mid P(n)} 1.$$ Using , we may crudely bound $\sum_{n {\leqslant}N: m \mid P(n)} 1 \ll \tau(m)^{O(1)} {N}/{m}$, thus $$\sum_{n {\leqslant}N} \tau(P(n))^2 \ll N \sum_{m {\leqslant}N} \frac{\tau(m)^{O(1)}}{m}$$ and the claim then follows from Lemma \[upper-crude\].
In view of and the Cauchy-Schwarz inequality, we may discard from the $n$ summation any subset of $\{1,\ldots,N\}$ of cardinality at most $N \log^{-C'} N$ for sufficiently large $C'$. We will take advantage of this freedom in the sequel.
Suppose for the moment that we could reverse and obtain the bound $$\label{dpn}
\tau(P(n)) \ll \sum_{m {\leqslant}N: m \mid P(n)} 1.$$ Combining this with , we would obtain $$\begin{aligned}
\sum_{n {\leqslant}N} \tau(P(n)) &\ll \sum_{m {\leqslant}N} \sum_{n {\leqslant}N: m \mid P(n)} 1 \\
&\ll \sum_{m {\leqslant}N} \frac{N}{m} \rho(m)\end{aligned}$$ which would give the theorem. Unfortunately, while this bound is certainly true when $P(n) {\leqslant}N^2$, it can fail for larger values of $P(n)$, and from the coefficient bounds on $P$ we only have the weaker upper bound .
Nevertheless, as observed by Erdős, we have the following substitute for :
\[leo\] Let $C'$ be a fixed constant. For all but at most $O(N \log^{-C'} N)$ values of $n$ in the range $1 {\leqslant}n {\leqslant}N$, either holds, or one has $$\tau(P(n)) \ll O(1)^r \sum_{m \in S_r: m \mid P(n)} 1$$ for some $2 {\leqslant}r \ll (\log \log N)^2$, where $S_r$ is the set of all $m$ with the following properties:
- $m$ lies between $N^{1/4}$ and $N$.
- $m$ is $N^{1/r}$-smooth (i.e. $m$ is not divisible by any prime larger than $N^{1/r}$).
- $m$ has at most $(\log \log N)^2$ prime factors.
- $m$ is not divisible by any prime power $p^k$ with $p {\leqslant}N^{1/2}$, $k > 1$, and $p^k {\geqslant}N^{1/8(\log \log N)^2}$.
The point here is that the exponential loss in the $O(1)^r$ factor will be more than compensated for by the $N^{1/r}$-smooth requirement, which as we shall see gains a factor of $r^{-cr}$ for some absolute constant $c>0$.
The claim follows from when $P(n) {\leqslant}N^2$, so we may assume that $P(n) > N^2$.
We factorise $P(n)$ as $$P(n) = p_1 \ldots p_J$$ where the primes $p_1 {\leqslant}\ldots {\leqslant}p_J$ are arranged in non-decreasing order. Let $0 {\leqslant}j < J$ be the largest integer such that $p_1 \ldots p_j {\leqslant}N$. If $j=0$ then all prime factors of $P(n)$ are greater than $N$, and thus by we have $J=O(1)$ and thus $\tau(P(n)) = O(1)$, which makes the claim trivial. Thus we may assume that $j {\geqslant}1$.
Suppose first that all the primes $p_{j+1},\ldots,p_J$ have size at least $N^{1/2}$. Then from we in fact have $J = j+O(1)$, and so $$\tau(P(n)) \ll \tau(p_1 \ldots p_j).$$ Note that every factor of $p_1 \ldots p_j$ divides $P(n)$ and is at most $N$, which gives . Thus we may assume that $p_{j+1}$, in particular, is less than $N^{1/2}$, which forces $$\label{naples} N^{1/2} < p_1 \ldots p_j {\leqslant}N$$ and $p_j <N^{1/2}$.
Following [@erdos], we eliminate some small exceptional sets of natural numbers $n$. First we consider those $n$ for which $P(n)$ has at least $(\log \log N)^2$ distinct prime factors. For such $P(n)$, one has $\tau(P(n)) {\geqslant}2^{(\log \log N)^2}$, which is asymptotically larger than any given power of $\log N$; thus by , the set of such $n$ has size at most $O( N \log^{-C'} N )$ and can be discarded.
Next, we consider those $n$ for which $P(n)$ is divisible by a prime power $p^k$ with $p {\leqslant}N^{1/2}$, $k > 1$, and $p^k {\geqslant}N^{1/8(\log \log N)^2}$. By reducing $k$ if necessary we may assume that $p^k {\leqslant}N$. For each $p$ and $k$, there are at most $O( ({N}/{p^k}) \rho(p^k) ) = O( {N}/{p^k} )$ numbers $n$ with $P(n)$ divisible by $p^k$, thanks to ; thus the total number of such $n$ is bounded by $$\ll N \sum_{p {\leqslant}N^{1/2}} \sum_{j {\geqslant}2: p^j {\geqslant}N^{1/8(\log \log N)^2}} \frac{1}{p^j}$$ which can easily be computed to be $O( N \log^{-C'} N)$. Thus we may discard all $n$ of this type.
After removing all such $n$, we must have $p_j > N^{1/8(\log \log N)^2}$. Indeed, after eliminating the exceptional $n$ as above, $p_1 \ldots p_j$ is the product of at most $(\log \log N)^2$ prime powers, each of which is bounded by $N^{1/8(\log\log N)^2}$, or is a single prime larger than $N^{1/8(\log \log N)^2}$. The former possibility thus contributes at most $N^{1/8}$ to the final product $p_1 \ldots p_j$; from we conclude that the latter possibility must occur at least once, and the claim follows.
Let $r$ be the positive integer such that $$N^{1/(r+1)} < p_j {\leqslant}N^{1/r},$$ then $2 {\leqslant}r \ll (\log \log N)^2$. The primes $p_{j+1},\ldots,p_J$ have size at least $N^{1/(r+1)}$, so by we have $J = j + O(r)$, which implies that $$\tau(P(n)) \ll O(1)^r \tau(p_1 \ldots p_j).$$ As $p_1\ldots p_j$ is at least $N^{1/2}$, we have $$\tau(p_1 \ldots p_j)
{\leqslant}2 \sum_{m \mid p_1 \ldots p_j; m {\geqslant}(p_1 \ldots p_j)^{1/2}} 1
{\leqslant}2 \sum_{m \mid p_1 \ldots p_j; m {\geqslant}N^{1/4}} 1.$$ Note that all $m$ in the above summand lie in $S_r$ and divide $P(n)$. The claim follows.
Invoking the above lemma, it remains to bound $$\begin{aligned}
&\sum_{m {\leqslant}N} \sum_{n {\leqslant}N: m \mid P(n)} 1
\quad + \sum_{r=2}^{O((\log \log N)^2)} O(1)^r \sum_{m \in S_r} \sum_{n {\leqslant}N: m \mid P(n)} 1\end{aligned}$$ by $O( N \sum_{m {\leqslant}N} {\rho(m)}/{m} )$. The first term was already shown to be acceptable by . For the second sum, we also apply and bound it by $$\label{modo}
\ll N \sum_{r=2}^{O((\log \log N)^2)} O(1)^r \sum_{m \in S_r} \frac{\rho(m)}{m}.$$ To estimate this expression, let $r, m$ be as in the above summation, and factor $m$ into primes. As in the proof of Lemma \[leo\], the contribution to $m$ coming from primes less than $N^{1/8(\log\log N)^2}$ is at most $N^{1/8}$, and the primes larger than $N^{1/8(\log\log N)^2}$ that divide $m$ are distinct. Hence, by the pigeonhole principle (as in [@erdos]), there exists $t {\geqslant}1$ with $r2^t \ll (\log \log N)^2$ such that the $N^{1/r}$-smooth number $m$ has at least $\lfloor{rt}/{100}\rfloor$ distinct prime factors between $N^{1/2^{t+1} r}$ and $N^{1/2^t r}$, and can thus be factored as $m = q_1 \ldots q_{\lfloor{rt}/{100}\rfloor} u$ where $q_1 < \ldots < q_{\lfloor{rt}/{100}\rfloor}$ are primes between $N^{1/2^{t+1} r}$ and $N^{1/2^t r}$, and $u$ is an integer of size at most $N$. From the Chinese remainder theorem and we have the crude bound $$\rho(m) \ll O(1)^{rt} \rho(u)$$ and thus $$\sum_{m \in S_r} \frac{\rho(m)}{m} \ll \sum_{t=1}^\infty O(1)^{rt} \frac{1}{\lfloor\frac{rt}{100}\rfloor!} \left(\sum_{N^{1/2^{t+1}r} {\leqslant}p {\leqslant}N^{1/2^t r}} \frac{1}{p}\right)^{\lfloor{rt}/{100}\rfloor} \sum_{u {\leqslant}N} \frac{\rho(u)}{u}.$$ By the standard asymptotic $\sum_{p<x} {1}/{p} = \log\log x + O(1)$, we have $$\sum_{N^{1/2^{t+1}r} {\leqslant}p {\leqslant}N^{1/2^t r}} \frac{1}{p} = O(1);$$ putting this all together, we can bound by $$\ll \left(\sum_{r=2}^\infty \sum_{t=1}^\infty \frac{O(1)^{rt}}{\lfloor\frac{rt}{100}\rfloor!}\right) \sum_{m {\leqslant}N} \frac{\rho(m)}{m}$$ and the claim follows.
We isolate a simple special case of Theorem \[erdos-bound\], when the polynomial $P$ is linear:
\[eb\] If $a, b, N$ are natural numbers with $a,b \ll N^{O(1)}$, then $$\sum_{n {\leqslant}N} \tau(an+b) \ll \tau( (a,b) ) N \log N$$ where $(a,b)$ is the greatest common divisor of $a$ and $b$.
By the elementary inequality $\tau(nm) {\leqslant}\tau(n) \tau(m)$ we may factor out $(a,b)$ and assume without loss of generality that $a,b$ are coprime.
We apply Theorem \[erdos-bound\] with $P(n) := an+b$. From the coprimality of $a,b$ and elementary modular arithmetic, we see that $\rho(m) {\leqslant}1$ for all $m$, and the claim follows.
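The fact that $\rho(m) \leqslant 1$ for a linear polynomial with coprime coefficients is easy to confirm by brute force. In the sketch below (ours; the values $a=6$, $b=35$ are arbitrary coprime choices), the root count is $1$ when $(a,m)=1$ and $0$ otherwise, since a common factor of $a$ and $m$ cannot divide $b$:

```python
from math import gcd

def rho_linear(a, b, m):
    """Number of roots of a*n + b = 0 (mod m), by brute force."""
    return sum(1 for n in range(m) if (a * n + b) % m == 0)

# with gcd(a, b) = 1: exactly one root mod m when gcd(a, m) = 1, else none
a, b = 6, 35
for m in range(1, 60):
    expected = 1 if gcd(a, m) == 1 else 0
    assert rho_linear(a, b, m) == expected
```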
We may now prove Proposition \[kab\] from the introduction.
We divide into two cases, depending on whether $A {\geqslant}B$ or $A {\leqslant}B$.
First suppose that $A {\geqslant}B$. From Corollary \[eb\] we have $$\sum_{a {\leqslant}A} \tau(kab^2+1) \ll A \sum_{m {\leqslant}A} \frac{1}{m} \ll A \log A,$$ for each fixed $b {\leqslant}B$, and the claim follows on summing in $b$. (Note that this argument in fact works whenever $A {\geqslant}B^{\varepsilon}$ for any fixed ${\varepsilon}>0$.)
Now suppose that $A {\leqslant}B$. For each fixed $a {\leqslant}A$, we apply Theorem \[erdos-bound\] to the polynomial $P_{ka}(b) := kab^2+1$. To do this we first must obtain a bound on $\rho_{ka}(p^j)$, where $\rho_{ka}(m)$ is the number of solutions $b \mod m$ to $ka b^2+1=0 \mod m$. Clearly $\rho_{ka}(m)$ vanishes whenever $m$ is not coprime to $ka$, so it suffices to consider $\rho_{ka}(p^j)$ when $p$ does not divide $ka$. Then $P_{ka}$ is quadratic, and a simple application of Hensel’s lemma reveals that $\rho_{ka}(p^j) {\leqslant}2$ for all odd prime powers $p^j$ and $\rho_{ka}(p^j) {\leqslant}4$ for $p=2$. We may therefore apply Theorem \[erdos-bound\] and conclude that $$\sum_{b {\leqslant}B} \tau(kab^2+1) \ll B \sum_{m {\leqslant}B} \frac{\rho_{ka}(m)}{m}.$$ It thus suffices to show that $$\label{ab}
\sum_{a {\leqslant}A} \sum_{m {\leqslant}B} \frac{\rho_{ka}(m)}{m} \ll A \log B \log(1+k).$$
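The Hensel-type bound on $\rho_{ka}$ just invoked can be confirmed by brute force on small prime powers. A sketch (ours; $k=4$, $a=1$ are arbitrary sample values):

```python
def rho_ka(k, a, m):
    """Brute-force count of b mod m with k*a*b^2 + 1 = 0 (mod m)."""
    return sum(1 for b in range(m) if (k * a * b * b + 1) % m == 0)

# at most 2 roots for odd prime powers coprime to ka, and at most 4 for p = 2
for pj in (3, 9, 27, 5, 25, 7, 49, 11, 13):
    assert rho_ka(4, 1, pj) <= 2
for pj in (2, 4, 8, 16):
    assert rho_ka(4, 1, pj) <= 4
```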
To control $\rho_{ka}(m)$, the obvious tool to use here is the quadratic reciprocity law . To apply this law, it is of course convenient to first reduce to the case when $a$ and $m$ are odd. If $m = 2^j m'$ for some odd $m'$, then $\rho_{ka}(m) \ll \rho_{ka}(m')$, and from this it is easy to see that the bound follows from the same bound with $m$ restricted to be odd. Similarly, by splitting $a = 2^l a'$ and absorbing the $2^l$ factor into $k$ (and dividing $A$ by $2^l$ to compensate), we may assume without loss of generality that $a$ is odd.
As previously observed, $\rho_{ka}(m)$ vanishes unless $ka$ and $m$ are coprime, so we may also restrict to the case $(ka,m)=1$, where $(n,m)$ denotes the greatest common divisor of $n,m$. If $p$ is an odd prime not dividing $ka$, then from elementary manipulation and Hensel’s lemma we see that $$\rho_{ka}(p^j) = \rho_{ka}(p) {\leqslant}1 + \left( \frac{-ka}{p} \right),$$ and thus for odd $m$ coprime to $ka$ we have $$\rho_{ka}(m) {\leqslant}\prod_{p \mid m} \left(1 + \left( \frac{-ka}{p} \right)\right).$$ For odd $m$, not necessarily coprime to $ka$, we thus have $$\rho_{ka}(m) {\leqslant}\prod_{p \mid m; (p,2ka)=1} \left(1 + \left( \frac{-ka}{p} \right)\right).$$ Using the multiplicativity properties of the Jacobi symbol, one has $$1 + \left( \frac{-ka}{p} \right) {\leqslant}\sum_{j:p^j \mid m} \left( \frac{-ka}{p^j} \right)$$ whenever $p \mid m$ and $(p,2ka)=1$, and thus $$\rho_{ka}(m) {\leqslant}\prod_{p \mid m; (p,2ka)=1} \sum_{j:p^j \mid m} \left( \frac{-ka}{p^j} \right).$$ The right-hand side can be expanded as $$\sum_{q \mid m; (q,2ka)=1} \left( \frac{-ka}{q} \right).$$ We can thus bound the left-hand side of by $$\sum_{q {\leqslant}B: (q,2k)=1} \sum_{a {\leqslant}A; (a,2q)=1} \left( \frac{-ka}{q} \right)
\sum_{m {\leqslant}B; q \mid m} \frac{1}{m}.$$ The final sum is of course $({\log \frac{B}{q}})/{q}+O({1}/{q})$. The contribution of the error term is bounded by $$O( \sum_{q {\leqslant}B} \sum_{a {\leqslant}A} \frac{1}{q} ) = O( A \log B )$$ which is acceptable, so it suffices to show that $$\label{qo}
\left|\sum_{q {\leqslant}B: (q,2k)=1} \sum_{a {\leqslant}A; (a,2q)=1} \left( \frac{-ka}{q} \right) \frac{\log \frac{B}{q}}{q}\right| \ll A \log B \log(1+k).$$ We first dispose of an easy contribution, when $q$ is less than $A$. The expression $$a \mapsto \left( \frac{-ka}{q} \right) 1_{(a,2q)=1}$$ is periodic with period $2q$ and sums to zero (being essentially a quadratic character on ${\mathbb{Z}}/2q{\mathbb{Z}}$), and so in this case we have $$\sum_{a {\leqslant}A; (a,2q)=1} \left( \frac{-ka}{q} \right) = O( q ).$$ One could obtain better estimates and deal with somewhat larger $q$ here by using tools such as the Pólya-Vinogradov inequality, but we will not need to do so here; similarly for the treatment of the regime $A {\leqslant}q {\leqslant}kA$ below. In any event, the contribution of the $q < A$ case is bounded by $$O\left( \sum_{q {\leqslant}A} q \frac{\log \frac{B}{q}}{q} \right) = O( A \log B )$$ which is acceptable.
Next, we deal with the contribution when $q$ is between $A$ and $kA$. Here we crudely bound the Jacobi symbol in magnitude by $1$ and obtain a bound of $$O( \sum_{A {\leqslant}q {\leqslant}kA} \sum_{a {\leqslant}A} \frac{\log B}{q} ) = O( A \log B \log(1+k) )$$ which is acceptable.
Finally, we deal with the case when $q$ exceeds $kA$. We write $k = 2^m k'$ where $k'$ is odd, then from quadratic reciprocity and its supplements we have $$\left( \frac{-ka}{q} \right) = c(q) \left( \frac{q}{k'a} \right)$$ where $c(q) := (-1)^{(q-1)/2 + m(q^2-1)/8}$ is periodic with period $8$. We can thus rewrite this contribution as $$\left|\sum_{a {\leqslant}A; (a,2)=1} \sum_{kA {\leqslant}q {\leqslant}B: (q,2ak)=1} c(q) \left( \frac{q}{k'a} \right) \frac{\log\frac{B}{q}}{q}\right|.$$ For any fixed $a$ in the above sum, the expression $$q \mapsto c(q) \left( \frac{q}{k'a} \right) 1_{(q,2ak)=1}$$ is periodic with period $8k'a = O(kA)$, is bounded in magnitude by $1$ and has mean zero. A summation by parts then gives $$\left|\sum_{kA {\leqslant}q {\leqslant}B: (q,2ak)=1} c(q) \left( \frac{q}{k'a} \right) \frac{\log\frac{B}{q}}{q}\right| \ll \log B$$ and so on summing in $a$ we see that this contribution is acceptable. This concludes the proof of the proposition.
We now record some variants of Proposition \[kab\] that will also be useful in our applications.
\[ab3\] For any $A, B > 1$, one has $$\label{abba}
\sum_{a {\leqslant}A} \sum_{b {\leqslant}B} \tau_3(ab+1) \ll A B \log^2(A+B).$$
By symmetry we may assume that $A {\leqslant}B$, so that $ab \ll B^2$ for all $a {\leqslant}A$ and $b {\leqslant}B$. For any $n$, $\tau_3(n)$ is the number of ways to represent $n$ as the product $n=d_1 d_2 d_3$ of three terms. One of these terms must be at most $n^{1/3}$, and so $$\tau_3(n) \ll \sum_{d \mid n: d {\leqslant}n^{1/3}} \tau(\frac{n}{d}).$$ We can thus bound the left-hand side of by $$\ll \sum_{d \ll B^{2/3}} \sum_{a {\leqslant}A} \sum_{b {\leqslant}B: d \mid ab+1} \tau(\frac{ab+1}{d}).$$ Note that for fixed $a,d$, the constraint $d \mid ab+1$ is only possible if $a$ is coprime to $d$, and restricts $b$ to some primitive residue class $q \mod d$ for some $q = q_{a,d}$ between $1$ and $d$. Writing $b = cd+q$, we can thus bound the above expression by $$\ll \sum_{d \ll B^{2/3}} \sum_{a {\leqslant}A} \sum_{c \ll B/d} \tau( ac + r )$$ where $r = r_{a,d} := ({aq+1})/{d}$. Note that $r$ is clearly coprime to $a$. Thus by Corollary \[eb\], we may bound the preceding expression by $$\ll \sum_{d \ll B^{2/3}} \sum_{a {\leqslant}A} \frac{B}{d} \log B$$ which is $O(AB \log^2 B)$. The claim follows.
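The pivot inequality for $\tau_3$ used in this proof, in the explicit form $\tau_3(n) \leqslant 3\sum_{d \mid n,\, d \leqslant n^{1/3}} \tau(n/d)$ (the factor $3$ accounting for which of the three factors is the small one, and absorbed into $\ll$ above), can be confirmed on small $n$; a sketch (ours):

```python
def tau(n):
    """Divisor-counting function, by brute force."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def tau3(n):
    """Number of ordered factorisations n = d1*d2*d3."""
    return sum(tau(n // d) for d in range(1, n + 1) if n % d == 0)

# some factor of any triple d1*d2*d3 = n is at most n^(1/3), hence the bound
for n in (12, 30, 64, 210):
    bound = 3 * sum(tau(n // d) for d in range(1, n + 1)
                    if n % d == 0 and d ** 3 <= n)
    assert tau3(n) <= bound
```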
\[abcd-prop\] For any $A,B,C,D > 1$, one has $$\label{abcd}
\sum_{\substack{a {\leqslant}A, b {\leqslant}B, c {\leqslant}C, d {\leqslant}D: \\ (a,b,c,d) = 1}} \tau(ab+cd) \ll ABCD \log(A+B+C+D).$$
By symmetry we may assume that $A,B,C {\leqslant}D$. Then for fixed $a,b,c$ coprime, we have $$\sum_{d {\leqslant}D} \tau(ab+cd) \ll D \log D$$ by Corollary \[eb\], and the claim follows by summing in $a,b,c$.
Informally, one can view the above propositions as asserting that the heuristics $\tau(n) \ll \log n$, $\tau_3(n) \ll \log^2 n$ are valid on average (in a first moment sense) on the range of various polynomial forms in several variables. A result similar to Proposition \[abcd-prop\] was established in [@heath Lemma 3], but with the coprimality condition $(a,b,c,d)=1$ replaced by $(ab,cd)=1$, and also the divisor function $\tau$ being restricted by forcing one of the divisors to live in a given dyadic range, with the logarithm being removed as a consequence. Also, products of three factors were permitted instead of the terms $ab, cd$. As remarked after [@heath Lemma 4], the logarithmic term in is necessary.
Upper bound for $\sum_{n {\leqslant}N} f_{{\operatorname{I}}}(n)$ and $\sum_{p {\leqslant}N} f_{{\operatorname{I}}}(p)$
=======================================================================================================================
Now that we have established Proposition \[kab\], we can obtain upper bounds on sums of $f_{{\operatorname{I}}}$.
We begin with the bound $$\sum_{n {\leqslant}N} f_{{\operatorname{I}}}(n) \ll N \log^3 N.$$ By Proposition \[type-1\] and symmetry followed by Lemma \[size\], it suffices to show that there are at most $O(N \log^3 N)$ septuples $(a,b,c,d,e,f,n) \in {\mathbb{N}}^7$ obeying - and the Type I estimates from Lemma \[size\]. In particular, $acd \ll N$, $f$ is a factor of $4a^2 d+1$, and $n = 4acd-f$. As $a,c,d,f$ determine the remaining components of the septuple, we may thus bound the number of such septuples as $$\sum_{a,c,d: acd \ll N} \tau(4a^2 d + 1).$$ Dividing $a,c,d$ into dyadic blocks ($A/2 {\leqslant}a {\leqslant}A$, etc.) and applying Proposition \[kab\] (with $k=4$) to each block, we obtain the desired bound $O( N \log^3 N)$.
Now we establish the bound $$\sum_{p {\leqslant}N} f_{{\operatorname{I}}}(p) \ll N \log^2 N \log \log N.$$ As before, it suffices to count quadruples $(a,c,d,f)$ with $acd \ll N$, and $f$ a factor of $4a^2d+1$; but now we can restrict $p = 4acd-f$ to be prime. Also, from Proposition \[type-1\] we may assume that $p$ is coprime to $acd$ (and hence to $4acd$, if we discard the prime $p=2$).
Thus we may assume without loss of generality that $-f \mod 4ad$ is a primitive residue class. From the Brun-Titchmarsh inequality , we conclude that for each fixed $a,d,f$, there are $O( {N}/({\phi(4ad) \log(N/4ad)}) )$ primes $p$ in this residue class that are less than $N$ if $ad {\leqslant}N/100$ (say); if instead $ad > N/100$, then we of course only have $O(1) = O( {N}/{\phi(4ad)})$ primes in this class. Thus, in any event, we can bound the number of such primes as $O( {N}/({\phi(4ad) \log(2 + N/ad)}) )$. We therefore have the bound $$\label{logo}
\sum_{p {\leqslant}N} f_{{\operatorname{I}}}(p) \ll \sum_{a,d: ad \ll N} \tau(4a^2 d+1) \frac{N}{\phi(4ad) \log(2 + N/ad)}.$$ By dyadic decomposition (and bounding $\phi(4ad) {\geqslant}\phi(ad)$), it thus suffices to show that $$\label{phoad}
\sum_{a,d: N/2 {\leqslant}ad {\leqslant}N} \frac{\tau(4a^2 d + 1)}{\phi(ad)} \ll \log^2 N.$$ Indeed, assuming this bound for all $N$, we can bound the right-hand side of by $$\sum_{j=1}^{O(\log N)} \frac{N \log^2 N}{j} \ll N \log^2 N \log \log N$$ and the claim follows.
To prove , we would like to again apply Proposition \[kab\], but we must first deal with the $\phi(ad)$ denominator. From one has $$\frac{1}{\phi(ad)} \ll \frac{1}{ad} \sum_{s \mid a} \sum_{t \mid d} \frac{1}{st}.$$ Writing $a = sa'$, $d = td'$, we may thus bound the left-hand side of by $$\ll \frac{1}{N} \sum_{s,t: st {\leqslant}N} \frac{1}{st} \sum_{a',d': a'd' {\leqslant}N/st} \tau(4s^2 t (a')^2 d' + 1).$$ Applying Proposition \[kab\] to the inner sum (decomposed into dyadic blocks, and setting $k = 4s^2 t$), we see that $$\sum_{a',d': a'd' {\leqslant}N/st} \tau(4s^2 t (a')^2 d' + 1) \ll \frac{N}{st} \log^2 \frac{N}{st} \log(1+s^2 t).$$ Inserting this bound and summing in $s,t$, we obtain the claim.
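The totient expansion used above admits an explicit constant: since $\prod_p p^2/(p^2-1) = \zeta(2) < 2$, one has $1/\phi(ad) \leqslant (2/ad) \sum_{s \mid a} \sum_{t \mid d} 1/(st)$ (the constant $2$ is my own bookkeeping; the text only needs $\ll$). A brute-force check in exact arithmetic:

```python
from fractions import Fraction

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def phi(n):
    """Euler's totient, via trial factorisation."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        result -= result // m
    return result

def totient_divisor_bound(a, d):
    """Check 1/phi(ad) <= (2/ad) * sum_{s|a, t|d} 1/(st) exactly."""
    lhs = Fraction(1, phi(a * d))
    rhs = Fraction(2, a * d) * sum(Fraction(1, s * t)
                                   for s in divisors(a) for t in divisors(d))
    return lhs <= rhs
```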
Upper bound for $\sum_{n {\leqslant}N} f_{{\operatorname{II}}}(n)$ and $\sum_{p {\leqslant}N} f_{{\operatorname{II}}}(p)$
=========================================================================================================================
Now we prove the upper bound $$\sum_{n {\leqslant}N} f_{{\operatorname{II}}}(n) \ll N \log^3 N.$$ By Proposition \[type-2\] followed by Lemma \[size\] (and symmetry), it suffices to show that there are at most $O(N \log^3 N)$ ${\mathbb{N}}$-points $(a,b,c,d,e,f)$ that lie in $\Sigma_n^{{\operatorname{II}}}$ for some $n {\leqslant}N$ and also obey the Type II bound $acde{\leqslant}N$ from Lemma \[size\].
Observe from - that $a,c,d,e$ determine the other variables $b,f,n$. Thus, it suffices to show that there are $O(N \log^3 N)$ quadruples $(a,c,d,e) \in {\mathbb{N}}^4$ with $acde {\leqslant}N$. But this follows from with $k=4$.
Finally, we prove the upper bound $$\sum_{p {\leqslant}N} f_{{\operatorname{II}}}(p) \ll N \log^2 N.$$ By dyadic decomposition, it suffices to show that $$\label{ppn}
\sum_{N/2 {\leqslant}p {\leqslant}N} f_{{\operatorname{II}}}(p) \ll N \log^2 N.$$ As before, we can bound the left-hand side (up to constants) by the number of quadruples $(a,c,d,e) \in {\mathbb{N}}^4$ with $acde \ll N$. However, by , we may also add the restriction that $4acde-4a^2 d - e$ is a prime between $N/2$ and $N$. Also, if we set $b := ce-a$, then by Lemma \[size\] we may also add the restrictions $a {\leqslant}b$ and $b < ce$, and from Proposition \[type-2\] we can also require that $a,b$ be coprime. Since $$\begin{aligned}
(ade) (acd) (ab)^{1/2} &\ll (ade) (acd) b\\
&\ll (ade) (acd) (ce) \\
&= (acde)^2 \\
&\ll N^2\end{aligned}$$ we see that one of the quantities $ade, acd, ab$ must be at most $O(N^{4/5})$ (cf. Section \[upper-sec\]). As we shall soon see, the ability to take one of these quantities to be significantly less than $N$ allows us to avoid the inefficiencies in the Brun-Titchmarsh inequality that led to a double logarithmic loss in the Type I case. (Unfortunately, it does not seem that a similar trick is available in the Type II case.)
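The pigeonhole step here can be phrased in exact integer form: when $b := ce - a$ satisfies $a \leqslant b$, the chain of inequalities above gives $\min(ade, acd, ab)^5 \leqslant (acde)^4$, which is the statement that one of the three quantities is $O(N^{4/5})$ once $acde \ll N$. A sketch (function name mine):

```python
def min_factor_bound(a, c, d, e):
    """If b := ce - a satisfies a <= b, then min(ade, acd, ab)^5 <= (acde)^4."""
    b = c * e - a
    if b < a:
        return True  # hypothesis a <= b not satisfied; nothing to check
    m = min(a * d * e, a * c * d, a * b)
    return m ** 5 <= (a * c * d * e) ** 4
```

The point is that $m^5 = m^2 \cdot m^2 \cdot m \leqslant (ade)^2 (acd)^2 (ab)$, while $(ade)^2(acd)^2(ab) \leqslant (acde)^4$ is equivalent to $ab \leqslant (ce)^2$, which holds since $a \leqslant b < ce$.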
Let us first consider those quadruples with $ade \ll N^{4/5}$, which is the easiest case. For fixed $a,d,e$, $4acde-4a^2d-e$ traverses (a possibly non-primitive) residue class modulo $4ade$. As $ade \ll N^{4/5}$, there are no primes in this class that are at least $N/2$ if the class is not primitive. If it is primitive, we may apply the Brun-Titchmarsh inequality to bound the number of primes between $N/2$ and $N$ in this class by $O(\frac{N}{\phi(4ade) \log(N)})$, noting that $\log(N/4ade)$ is comparable to $\log N$. Thus, we can bound this contribution to the left-hand side of by $$\ll \frac{N}{\log N} \sum_{a,d,e: ade \ll N^{4/5}} \frac{1}{\phi(4ade)};$$ setting $m:= ade$ and bounding $\phi(4ade) {\geqslant}\phi(ade)$, we can bound this in turn by $$\ll \frac{N}{\log N} \sum_{m \ll N^{4/5}} \frac{\tau_3(m)}{\phi(m)}$$ where $\tau_3(m) := \sum_{a,d,e: ade=m} 1$. Applying Lemma \[upper-crude\], we have $$\label{easy}
\sum_{m \ll N^{4/5}} \frac{\tau_3(m)}{\phi(m)} \ll \log^3 N,$$ and so this contribution is acceptable.
Now we consider the case $acd \ll N^{4/5}$. Here, we rewrite $4acde-4a^2d-e$ as $(4acd-1) e - 4a^2 d$, which then traverses a (possibly non-primitive) residue class modulo $4acd-1$. Applying the Brun-Titchmarsh inequality as before, we may bound this contribution by $$\ll \frac{N}{\log N} \sum_{a,c,d: acd \ll N^{4/5}} \frac{1}{\phi(4acd-1)}$$ and hence (setting $m := 4acd-1$) by $$\ll \frac{N}{\log N} \sum_{m \ll N^{4/5}} \frac{\tau_3(m+1)}{\phi(m)},$$ so that it suffices to establish the bound $$\label{hard}
\sum_{m \ll N^{4/5}} \frac{\tau_3(m+1)}{\phi(m)} \ll \log^3 N.$$ This is superficially similar to , but this time the summand is not multiplicative in $m$, and we can no longer directly apply Lemma \[upper-crude\]. To deal with this, we apply and bound by $$\ll \sum_{m \ll N^{4/5}} \sum_{d \mid m} \frac{\tau_3(m+1)}{dm};$$ writing $m = dn$, we can rearrange this as $$\ll \sum_{d \ll N^{4/5}} \frac{1}{d^2} \sum_{n \ll N^{4/5}/d} \frac{\tau_3(dn+1)}{n}.$$ Applying dyadic decomposition of the $d,n$ variables and using Proposition \[ab3\], we obtain as required.
Finally, we consider the case $ab \ll N^{4/5}$. Here, we rewrite $4acde-4a^2d-e$ as $4abd - e$, and note that $e$ divides $a+b=ce$. If we fix $a,b$, there are thus at most $\tau(a+b)$ choices for $e$ (which also fixes $c$), and once one fixes such a choice, $4abd-e$ traverses a (possibly non-primitive) residue class modulo $4ab$. Applying the Brun-Titchmarsh inequality again, we may bound this contribution by $$\ll \frac{N}{\log N} \sum_{a,b: ab \ll N^{4/5}; (a,b)=1} \frac{\tau(a+b)}{\phi(4ab)}.$$ Bounding $\phi(4ab) {\geqslant}\phi(ab)$ and using , we can bound this by $$\ll \frac{N}{\log N} \sum_{a,b: ab \ll N^{4/5}; (a,b)=1} \sum_{k \mid a} \sum_{l \mid b} \frac{\tau(a+b)}{abkl}.$$ Writing $a = km$, $b=ln$, we may bound this by $$\ll \frac{N}{\log N} \sum_{\substack{k,l,m,n: klmn \ll N^{4/5};\\ (k,l,m,n) = 1}} \frac{1}{k^2l^2mn} \tau(km+ln).$$ Dyadically decomposing in $k,l,m,n$ and using Proposition \[abcd-prop\], we see that this contribution is also $O(N \log^2 N)$. The proof of (and thus Theorem \[main\]) is now complete.
Solutions by polynomials {#solution}
========================
We now prove Proposition \[res-class\]. We first verify that each of the sets is solvable by polynomials (which of course implies that any residue class contained in such a set is also solvable by polynomials). We first do this for the Type I sets. In view of the $\pi^{{\operatorname{I}}}_n$ map (which clearly preserves polynomiality), it will suffice to find polynomials $a=a(n),\ldots,f=f(n)$ in $n$ that take values in ${\mathbb{N}}$ for sufficiently large $n$ in these sets, and such that $(a(n),\ldots,f(n)) \in \Sigma^{{\operatorname{I}}}_n$ for all $n$. This is achieved as follows:
- If $n=-f \mod 4ad$, where $a,d,f \in {\mathbb{N}}$ are such that $f \mid 4a^2 d+1$, then we take $$(a,b,c,d,e,f) := \left(a, \frac{n+f}{4ad}e - a, \frac{n+f}{4ad}, d, e,\frac{4a^2d+1}{e}\right).$$
- If $n = -f \mod 4ac$ and $n = -{c}/{a} \mod f$, where $a,c,f \in {\mathbb{N}}$ are such that $(4ac,f)=1$, then we take $$(a,b,c,d,e,f) := \left(a, \frac{na+c}{f}, c, \frac{n+f}{4ac}, \frac{na+af+c}{fc}, f\right);$$ note from the hypotheses that $na+af+c$ is divisible by the coprime moduli $f$ and $c$, and is thus also divisible by $fc$.
- If $n = -f \mod 4cd$ and $n^2 = -4c^2d \mod f$, where $c,d,f \in {\mathbb{N}}$ are such that $(4cd,f)=1$, then we take $$(a,b,c,d,e,f) := \left(\frac{n+f}{4cd}, \frac{n^2+4c^2d+nf}{4cdf}, c, d, \frac{(n+f)^2+4c^2d}{4c^2df}, f\right);$$ note from the hypotheses that $(n+f)^2+4c^2d$ is divisible by the coprime moduli $4c^2d$ and $f$, and is thus also divisible by $4c^2df$.
- If $n = -{1}/{e} \mod 4ab$, where $a,b,e \in {\mathbb{N}}$ are such that $e \mid a+b$ and $(e,4ab)=1$, then we take $$(a,b,c,d,e,f) := \left(a,b, \frac{a+b}{e}, \frac{ne+1}{4ab}, e, 4a\frac{a+b}{e} \frac{ne+1}{4ab}-n\right).$$
One easily verifies in each of these cases that one has an ${\mathbb{N}}$-point of $\Sigma^{{\operatorname{I}}}_n$ for $n$ large enough.
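For the first family in the list above, the verification amounts to the exact unit-fraction identity $\frac{4}{n} = \frac{1}{abdn} + \frac{1}{acd} + \frac{1}{bcd}$ (the Type I shape $(x,y,z)=(abdn,acd,bcd)$ that reappears in the converse argument below). A brute-force sketch in exact rational arithmetic (helper names and parameter ranges are mine):

```python
from fractions import Fraction

def type_I_point(a, d, e, j):
    """First family above: f = (4a^2*d + 1)/e (so e must divide 4a^2*d + 1),
    n = 4ad*j - f with c = j, b = ce - a; then n = -f mod 4ad holds by design."""
    M = 4 * a * a * d + 1
    assert M % e == 0
    f = M // e
    n = 4 * a * d * j - f
    c = j
    b = c * e - a
    return n, (a, b, c, d, e, f)
```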
Now we turn to the Type II case. We use the same arguments as before, but using $\Sigma^{{\operatorname{II}}}_n$ in place of $\Sigma^{{\operatorname{I}}}_n$ of course:
- If $n=-e \mod 4ab$, where $a,b,e \in {\mathbb{N}}$ are such that $e \mid a+b$ and $(e,4ab)=1$, then we take $$(a,b,c,d,e,f) := \left(a,b,\frac{a+b}{e},\frac{n+e}{4ab}, e, \frac{a+b}{e} \frac{n+e}{b} - 1\right).$$
- If $n=-4a^2d \mod f$, where $a,d,f \in {\mathbb{N}}$ are such that $4ad \mid f+1$, then we take $$(a,b,c,d,e,f) := \left(a,\frac{f+1}{4ad} \frac{n+4a^2d}{f}-a,\frac{f+1}{4ad},d,\frac{n+4a^2d}{f}, f\right).$$
- If $n=-4a^2d-e \mod 4ade$, where $a,d,e \in {\mathbb{N}}$ are such that $(4ad,e)=1$, then we take $$(a,b,c,d,e,f) := \left(a,\frac{n+e}{4ad},\frac{n+4a^2 d + e}{4ade},d,e,\frac{n+4a^2d}{e}\right).$$
Again, one easily verifies in each of these cases that one has an ${\mathbb{N}}$-point of $\Sigma^{{\operatorname{II}}}_n$ for $n$ large enough.
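As in the Type I case, the algebra behind the third Type II family can be checked exactly: with $n = 4acde - 4a^2d - e$ and $b = ce - a$ one has $\frac{4}{n} = \frac{1}{abd} + \frac{1}{acdn} + \frac{1}{bcdn}$, which is the Type II ansatz used again in Section \[lower-3\]. A sketch (function name and ranges mine):

```python
from fractions import Fraction

def type_II_from_acde(a, c, d, e):
    """Third family above: returns (n, b) with n = 4acde - 4a^2*d - e, b = ce - a."""
    return 4 * a * c * d * e - 4 * a * a * d - e, c * e - a
```

For instance $(a,c,d,e) = (1,2,1,1)$ gives $n = 3$, $b = 1$ and the decomposition $\frac43 = \frac11 + \frac16 + \frac16$.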
Now we establish the converse claim. Suppose first that we have a primitive residue class $q \mod r$ that can be Type I solved by polynomials, and is maximal with respect to this property. Then we have $$\frac{4}{p} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z}$$ for all sufficiently large primes $p$ in this class, where $x=x(p), y=y(p), z=z(p)$ are polynomials in $p$ that take natural number values for all large $p$ in this class. Since $y-z$ is a polynomial, we either have $y(p) {\leqslant}z(p)$ for all sufficiently large $p$, or $y(p) {\geqslant}z(p)$ for all sufficiently large $p$; by symmetry we may assume the latter.
Applying Proposition \[type-1\], we see that $$(x,y,z) = (abdp, acd, bcd)$$ for some ${\mathbb{N}}$-point $(a,\ldots,f) = (a(p),\ldots,f(p))$ in $\Sigma^{{\operatorname{I}}}_p$ with $a(p), b(p), c(p)$ having no common factor. In particular, $d=d(p)$ is the greatest common divisor of $x(p), y(p), z(p)$. Applying the Euclidean algorithm to the polynomials $x(p), y(p), z(p)$, we conclude that for sufficiently large $p$ in the primitive residue class, $d$ is also a polynomial in $p$, which divides the polynomials $x,y,z$. Dividing out by $d$ and repeating these arguments, we conclude that $a = a(p)$, $b = b(p)$, and $c = c(p)$ are also polynomials in $p$ for sufficiently large $p$ in the primitive residue class. Applying the identities - we also see that $e = e(p)$ and $f=f(p)$ are polynomials in $p$ for sufficiently large $p$.
From Lemma \[size\] we have $a(p) c(p) d(p) = O(p)$ and $f(p) = O(p)$ for all $p$, which implies that at least two of the polynomials $a(p), c(p), d(p)$ must be constant in $p$, and that $f(p)$ has degree at most $1$ in $p$. We now divide into several cases.
First suppose that $a, d$ are independent of $p$. By this forces $e, f$ to be independent of $p$ as well, and $f$ divides $4a^2d+1$. By we have $$p = -f \mod 4ad$$ for all sufficiently large primes $p=q \mod r$ and thus (by Dirichlet’s theorem on primes in arithmetic progressions) the primitive residue class $q \mod r$ is contained in the residue class $-f \mod 4ad$, and the claim follows in this case.
Now suppose that $a, c$ are independent of $p$, and $f$ has degree $0$ (i.e. is also independent of $p$). Then from we have $p = -f \mod 4ac$, and from we have $p = -{c}/{a} \mod f$; since $p$ is a large prime this also forces $(4ac,f)=1$, and the claim follows.
Now suppose that $a, c$ are independent of $p$, and $f$ has degree $1$ (and thus grows linearly in $p$). By Lemma \[size\], $b, e$ are then bounded and thus constant in $p$. From we have $e \mid a+b$, and from we have $p = -{1}/{e} \mod 4ab$. As $p$ is an arbitrarily large prime, this forces $(4ab,e)=1$, and the claim follows.
Next, suppose that $c, d$ are independent of $p$, and $f$ has degree $0$. Then from one has $p = -f \mod 4cd$, which in particular forces $(4cd,f)=1$. From one has $p^2 = -4c^2 d \mod f$, and the claim follows.
Finally, suppose that $c, d$ are independent of $p$, and $f$ has degree $1$. By , $f(p)$ divides $p^2 + 4c^2 d$ for all large primes $p$ in the primitive residue class. Applying the Euclidean algorithm, we conclude that $f$ in fact divides $p^2+4c^2d$ *as a polynomial* in $p$. But as $c,d$ are positive, $p^2+4c^2 d$ is irreducible over the reals, a contradiction. This concludes the treatment of the Type I case.
We now turn to the Type II case. Let $q \mod r$ be a residue class that is Type II solvable by polynomials. Arguing as in the Type I case, we obtain an ${\mathbb{N}}$-point $(a,\ldots,f) = (a(p),\ldots,f(p))$ in $\Sigma^{{\operatorname{II}}}_p$ for all sufficiently large primes $p$ in this class, and obeying the bounds in Lemma \[size\], with $a(p),\ldots,f(p)$ all depending in a polynomial fashion on $p$.
From Lemma \[size\] we have $a(p) c(p) d(p) e(p) = O(p)$, and so three of these polynomials $a(p), c(p), d(p), e(p)$ must be independent of $p$.
Suppose first that $a, c, e$ are independent of $p$. By , $b$ is independent of $p$ also, and $e \mid a+b$. By , $p = -e \mod 4ab$, and thus $(e,4ab)=1$, and the claim then follows from Dirichlet’s theorem.
Now suppose that $a, c, d$ are independent of $p$. By , $f$ is independent of $p$ also, and $4ad \mid f+1$. From one has $p=-4a^2 d \mod f$, and the claim follows.
Next, suppose $a, d, e$ are independent of $p$. By one has $p = -4a^2d -e \mod 4ade$, which implies $(4ad,e)=1$, and the claim follows.
Finally, suppose $c,d,e$ are independent of $p$. By this forces $a,b$ to be bounded, and hence also independent of $p$; and so this case is subsumed by the preceding cases.
Lower bounds III {#lower-3}
================
Generation of solutions
-----------------------
We begin the proof of Theorem \[many-solutions\]; the method of proof will be a generalisation of that in Section \[lower-sec\]. For the rest of this section, $m$ and $k$ are fixed, and all implied constants in asymptotic notation are allowed to depend on $m,k$. We assume that $N$ is sufficiently large depending on $m,k$.
In the $m=4,k=3$ case, Type II solutions were generated by the ansatz $$(t_1,t_2,t_3) = (abd, acdn, bcdn)$$ for various quadruples $(a,b,c,d)$ (or equivalently, quadruples $(a,c,d,e)$, setting $b := ce-a$); see . We will use a generalisation of this ansatz for higher $k$; for instance, when $k=4$ we will construct solutions of the form $$(t_1,t_2,t_3,t_4) = (b x_{12} x_{123} x_{124} x_{1234}, x_{12} x_{23} x_{24} x_{123} x_{124} x_{234} x_{1234} n, b x_{23} x_{123} x_{234} x_{1234} n, b x_{24} x_{124} x_{234} x_{1234} n)$$ for various octuples $(b,x_{12}, x_{23}, x_{24}, x_{123}, x_{124}, x_{234}, x_{1234})$, or equivalently, using octuples $$(x_{12}, x_{23}, x_{24}, x_{123}, x_{124}, x_{234}, x_{1234}, e),$$ and setting $$b = e x_{23} x_{24} x_{234} - x_{12} x_{24} x_{124} - x_{12} x_{23} x_{123}.$$ More generally, we will generate Type II solutions via the following lemma.
Let ${\mathcal P}$ denote the $(2^{k-1}-1)$-element set $${\mathcal P} := \{ I \subset \{1,\ldots,k\}: 2 \in I; I \neq \{2\} \}.$$ Let $(x_I)_{I \in {\mathcal P}}$ be a tuple of natural numbers, and let $e$ be another natural number, obeying the inequalities $$\begin{aligned}
\frac{1}{2m} N {\leqslant}e \prod_{I \in {\mathcal P}} x_I &{\leqslant}\frac{1}{m} N\label{eprod} \end{aligned}$$ and $$\label{xi-bound}
1 < x_I {\leqslant}N^{1/2^{k+2}}$$ whenever $I \in {\mathcal P}$. Suppose also that the quantity $$\label{square}
w := \prod_{I \in {\mathcal P}: I \neq \{1,2\}} x_I$$ is square-free. Set $$\begin{aligned}
b &:= e \prod_{I \in {\mathcal P}: 1 \not \in I} x_I - \sum_{j=3}^k \prod_{I \in {\mathcal P}: j \not \in I} x_I\label{x13}\\
t_1 &:= b \prod_{I \in {\mathcal P}: 1 \in I} x_I \label{t1-def} \\
n &:= m t_1 - e \label{ndef} \\
t_2 &:= n \prod_{I \in {\mathcal P}} x_I \label{t2-def}\end{aligned}$$ and $$\label{tj-def}
t_j := b\, n \prod_{I \in {\mathcal P}: j \in I} x_I \qquad (3 {\leqslant}j {\leqslant}k).$$ Then $n$ is a natural number with $n {\leqslant}N$, and $(t_1,\ldots,t_k)$ is a Type II solution for this value of $n$. Furthermore, each choice of $(x_I)_{I \in {\mathcal P}}$ and $e$ generates a distinct Type II solution.
In the $m=4,k=3$ case, the parameters $x_I$ are related to the coordinates $(a,b,c,d,e,f)$ appearing in Proposition \[type-2\] by the formula $$(a,b,c,d,e,f) = (x_{12}, b, x_{23}, x_{123}, e, 4 x_{12} x_{23} x_{123} - 1);$$ however, the constraint that $a,b,c$ have no common factor and $abd$ is coprime to $n$ has been replaced by the slightly different criterion that $d$ is squarefree, which turns out to be more convenient for obtaining lower bounds (note that the same trick was also used to prove ). Parameterisations of this type have appeared numerous times in the previous literature (see [@gupta; @hall; @ruzsa; @els], or indeed Propositions \[type-1\], \[type-2\]), though because most of these parameterisations were focused on dealing with *all* solutions of a given type, as opposed to an easily countable subset of solutions, there were more parameters $x_I$ (indexed by all non-empty subsets of $\{1,\ldots,k\}$, not just the ones in ${\mathcal P}$), and there were some coprimality conditions on the $x_I$ rather than square-free conditions.
Let the notation be as in the lemma. Then from one has $$\sum_{j=3}^k \prod_{I \in {\mathcal P}: j \not \in I} x_I {\leqslant}(k-2) N^{2^{k-2}/2^{k+2}} \ll N^{1/16}$$ while since $$\prod_{I \in {\mathcal P}} x_I \ll N^{2^{k-1}/2^{k+2}} \ll N^{1/8}$$ we see from that $$e \gg N^{7/8}.$$ From we then have that $$\frac{1}{2} e \prod_{I \in {\mathcal P}: 1 \not \in I} x_I {\leqslant}b {\leqslant}e \prod_{I \in {\mathcal P}: 1 \not \in I} x_I$$ and thus by $$\frac{1}{2} e \prod_{I \in {\mathcal P}} x_I {\leqslant}t_1 {\leqslant}e \prod_{I \in {\mathcal P}} x_I$$ and thus by (noting that $m {\geqslant}4$) $$\frac{1}{4} m e \prod_{I \in {\mathcal P}} x_I {\leqslant}n {\leqslant}m e \prod_{I \in {\mathcal P}} x_I.$$ These bounds ensure that $b, n, t_1,\ldots,t_k$ are natural numbers with $n
{\leqslant}N$, and with $t_2,\ldots,t_k$ divisible by $n$. Dividing by $b\, n\prod_{I \in {\mathcal P}} x_I$ and using , , , we conclude that $$\frac{1}{t_2} = \frac{e}{nt_1} - \sum_{j=3}^k \frac{1}{t_j};$$ applying one concludes that $(t_1,\ldots,t_k)$ is a Type II solution.
It remains to demonstrate that each choice of $(x_I)_{I \in {\mathcal P}}$ and $e$ generates a distinct Type II solution, or equivalently that the Type II solution $(t_1,\ldots,t_k)$ uniquely determines $(x_I)_{I \in {\mathcal P}}$ and $e$. To do this, first observe from that $(t_1,\ldots,t_k)$ determines $n$, and from we see that $e$ is determined also. Next, observe from , , that for any $3 {\leqslant}j {\leqslant}k$, one has $$\label{2j1}
\frac{t_2 t_j}{n^2 t_1} = \left(\prod_{I \in {\mathcal P}: j \in I; 1 \not \in I} x_I\right)^2 \left(\prod_{I \in {\mathcal P}: j \in I \hbox{ XOR } 1 \not \in I} x_I\right)$$ where XOR denotes the exclusive or operator; in particular, the left-hand side is necessarily a natural number. Note that all the factors $x_I$ appearing on the right-hand side are components of the square-free quantity $w$ given by . We conclude that $(\prod_{I \in {\mathcal P}: j \in I; 1 \not \in I} x_I)^2$ is the largest perfect square dividing $\frac{t_2 t_j}{n^2 t_1}$. It follows that the Type II solution $(t_1,\ldots,t_k)$ determines all the products $$\label{xii}
\prod_{I \in {\mathcal P}: j \in I; 1 \not \in I} x_I$$ for $3 {\leqslant}j {\leqslant}k$. Note (from the square-free nature of $w$) that the $x_I$ with $1 \not \in I$ are all coprime. Taking the greatest common divisor of the for all $3 {\leqslant}j {\leqslant}k$, we see that the Type II solution determines $x_{\{2,3,\ldots,k\}}$. Dividing this quantity out from all the expressions , and then taking the greatest common divisor of the resulting quotients for $4 {\leqslant}j {\leqslant}k$, one recovers $x_{\{2,4,\ldots,k\}}$; a similar argument gives $x_I$ for any $I \in {\mathcal P}$ with $1 \not \in I$ of cardinality $k-2$. Dividing out these quantities and taking greatest common divisors again, one can then recover $x_I$ for any $I \in {\mathcal P}$ with $1 \not \in I$ of cardinality $k-3$; continuing in this fashion we can recover all the $x_I$ with $I \in {\mathcal P}$ and $1 \not \in I$.
Returning to , we can then recover the products $\prod_{I \in {\mathcal P}: 1, j \in I} x_I$ for all $3 {\leqslant}j {\leqslant}k$. Taking greatest common divisors iteratively as before, we can then recover all the $x_I$ with $I \in {\mathcal P}$ and $1 \in I$, thus reconstructing all of the data $(x_I)_{I \in {\mathcal P}}$ and $e$, as claimed.
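The construction in the lemma can be exercised directly for small $k$. The sketch below implements the defining formulas for $b$, $t_1$, $n$, $t_2$ and $t_j$, and checks that $\frac{m}{n} = \sum_{j=1}^k \frac{1}{t_j}$ with $n$ dividing $t_2,\ldots,t_k$; the size conditions of the lemma are ignored, as they matter only for the counting (the algebraic identity holds whenever $b, n > 0$; the parameter values below are mine).

```python
from fractions import Fraction
from itertools import combinations
from math import prod

def calP(k):
    """Subsets I of {1,...,k} with 2 in I and I != {2}."""
    rest = [i for i in range(1, k + 1) if i != 2]
    return [frozenset(S) | {2} for r in range(1, len(rest) + 1)
            for S in combinations(rest, r)]

def type_II_solution(m, k, x, e):
    """x: dict mapping each I in calP(k) to a natural number."""
    P = calP(k)
    pr = lambda sel: prod(x[I] for I in P if sel(I))
    b = e * pr(lambda I: 1 not in I) \
        - sum(pr(lambda I, j=j: j not in I) for j in range(3, k + 1))
    t1 = b * pr(lambda I: 1 in I)
    n = m * t1 - e
    t = [t1, n * pr(lambda I: True)]
    t += [b * n * pr(lambda I, j=j: j in I) for j in range(3, k + 1)]
    return n, t
```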
In view of the above lemma, we see that to prove , it suffices to show that the number of tuples $( (x_I)_{I \in {\mathcal P}}, e )$ obeying the hypotheses of the lemma is at least $c N (\log N)^{2^{k-1}-1}$ for an absolute constant $c>0$.
Observe that if we fix $x_I$ with $I \in {\mathcal P}$ obeying and such that the quantity $w$ defined by is square-free, then there are $$\gg \frac{N}{\prod_{I \in {\mathcal P}} x_I}$$ choices of $e$ that obey . Thus, noting that $\mu^2(w) {\geqslant}\mu^2(\prod_{I \in {\mathcal P}} x_I)$, the number of tuples obeying the hypotheses of the lemma is $$\label{mush}
\gg N \sum_* \frac{\mu^2(\prod_{I \in {\mathcal P}} x_I)}{\prod_{I \in {\mathcal P}} x_I},$$ where the sum $\sum_*$ ranges over all choices of $(x_I)_{I \in {\mathcal P}}$ obeying the bounds . To estimate , we make use of [@elsholtz:2001 Theorem 6.4], which we restate as a lemma:
[\[squarefree-sum\]]{} Let $l {\geqslant}1$, and for each $1 {\leqslant}i {\leqslant}l$, let $\alpha_i < \beta_i$ be positive real numbers. Then $$\sum_{N^{\alpha_i} {\leqslant}n_i {\leqslant}N^{\beta_i} \hbox{ for all } 1 {\leqslant}i {\leqslant}l}
\frac{\mu^2(n_1 \cdots n_l)}{n_1 \cdots n_l}
\gg_l (\log N)^l \prod_{i=1}^l(\beta_i - \alpha_i),$$ for $N$ sufficiently large depending on $l$ and the $\alpha_1,\ldots,\alpha_l,\beta_1,\ldots,\beta_l$.
From this lemma (and noting that there are $2^{k-1}-1$ parameters $x_I$ in the sum $\sum_*$) we see that $$\label{mush-2}
\sum_* \frac{\mu^2(\prod_{I \in {\mathcal P}} x_I)}{\prod_{I \in {\mathcal P}} x_I} \gg \log^{2^{k-1}-1} N;$$ inserting this into we obtain the claim.
Now we prove . As in Section \[lower-sec\], the arguments are similar to those used to prove , but with the additional input of the Bombieri-Vinogradov inequality.
As in the proof of , it suffices to obtain a lower bound (in this case, $c {N (\log N)^{2^{k-1}-2}}/{\log \log N}$ for some $c>0$) on the number of tuples $( (x_I)_{I \in {\mathcal P}}, e )$, but now with the additional constraint that the quantity $$\begin{aligned}
p := mt_1 - e = m b \prod_{I \in {\mathcal P}: 1 \in I} x_I - e\end{aligned}$$ is prime.
Suppose we fix $(x_I)_{I \in {\mathcal P}}$ obeying with $w$ squarefree. We may write $$p = qe + r$$ where $$\label{qmash}
q := m \prod_{I \in {\mathcal P}} x_I - 1$$ and $$r := - m \prod_{I \in {\mathcal P}: 1 \in I} x_I \sum_{j=3}^k \prod_{I \in {\mathcal P}: j \not \in I} x_I.$$ Thus as $e$ varies in the range given by , $qe+r$ traces out an arithmetic progression of spacing $q$ whose convex hull contains $[0.6 N, 0.9 N]$ (say). Thus, every prime $p$ in this interval $[0.6 N, 0.9 N]$ that is congruent to $r \mod q$ will provide an $e$ that will give a Type II solution with $n=p$ prime, and different choices of $(x_I)_{I \in {\mathcal P}}$ and $p$ will give different Type II solutions.
For fixed $(x_I)_{I \in {\mathcal P}}$, if $r$ is coprime to $q$, then we see from (and estimating ${ {\rm li } }(x) = (1+o(1)) {x}/{\log x}$) that the number of such $p$ is at least $$c \frac{N}{\log N \phi(q)} - D(0.6 N; q) - D(0.9 N; q)$$ for some absolute constant $c>0$. It thus suffices to show that $$\label{many-2a}
\sum_* \mu^2(w) 1_{(r,q)=1} \frac{N}{\log N \phi(q)} \gg \frac{N (\log N)^{2^{k-1}-2}}{\log \log N}$$ and $$\label{many-2b}
\sum_* D(cN;q) = o\left( \frac{N (\log N)^{2^{k-1}-2}}{\log \log N} \right)$$ for $c=0.6, 0.9$.
We first show . Since ${ {\rm li } }(N/100)$ is comparable to $N/\log N$, and $\phi(q) {\leqslant}q \ll \prod_{I \in {\mathcal P}} x_I$, we may simplify as $$\label{many-2c}
\sum_* \frac{\mu^2(w)}{\prod_{I \in {\mathcal P}} x_I} 1_{(r,q)=1} \gg \frac{(\log N)^{2^{k-1}-1}}{\log \log N}.$$ The expression on the left-hand side is similar to , but now one also has the additional constraint $1_{(r,q)=1}$. To deal with this constraint, we restrict the ranges of the $x_I$ parameters somewhat to perform an averaging in the $x_{\{1,2\}}$ parameter (taking advantage of the fact that this parameter does not appear in the $\mu^2(w)$ term). More precisely, we restrict to the ranges where $$\label{x-bang}
x_I {\leqslant}N^{1/2^{100k}}$$ (say) for $I \neq \{1,2\}$, and $$\label{y-bang}
x_{\{1,2\}} {\leqslant}N^{1/2^{k+2}}.$$ We now analyse the constraint that $r$ and $q$ are coprime. We can factor $$r = - m x_{\{1,2\}}^2 s$$ where $$s := \left(\prod_{I \in {\mathcal P}: 1 \in I; I \neq \{1,2\}} x_I\right) \sum_{j=3}^k \prod_{I \in {\mathcal P}: j \not \in I; I \ne \{1,2\}} x_I;$$ the point is that $s$ does not depend on $x_{\{1,2\}}$. Since $q+1$ is divisible by $m x_{\{1,2\}}$, we see that $m x_{\{1,2\}}^2$ is coprime to $q$, and thus $(q,r)=1$ iff $(q,s)=1$. We can write $q = u x_{\{1,2\}} - 1$, where $u := m \prod_{I \in {\mathcal P}: I \neq \{1,2\}} x_I$, and so $(q,r)=1$ iff $(u x_{\{1,2\}}-1,s)=1$.
We may replace $s$ here by the largest square-free factor $s'$ of $s$. If we then factor $s' = vy$, where $v := (s',u)$ and $y := s'/v$, then $u x_{\{1,2\}}-1$ is already coprime to $v$, and so we conclude that $(q,r)=1$ iff $(u x_{\{1,2\}}-1,y)=1$.
Fix $x_I$ for $I \neq \{1,2\}$. By construction, $u$ and $y$ are coprime, and so the constraint $(u x_{\{1,2\}}-1,y)=1$ restricts $x_{\{1,2\}}$ to $\phi(y)$ distinct residue classes modulo $y$. Since $$y {\leqslant}s \ll N^{1/2^{90k}}$$ (say) thanks to , we conclude that $$\sum_{x_{\{1,2\}} {\leqslant}N^{1/2^{k+2}}} \frac{1_{(q,r)=1}}{x_{\{1,2\}}} \gg \frac{\phi(y)}{y} \log N.$$ Using the crude bound , we may lower bound ${\phi(y)}/{y} \gg {1}/{\log \log N}$. (It is quite likely that by a finer analysis of the generic divisibility properties of $y$, one can remove this double logarithmic loss, but we will not attempt to do so here.) We may thus lower bound the left-hand side of by $$\frac{\log N}{\log \log N} \sum_{**} \frac{\mu^2(w)}{w},$$ where $\sum_{**}$ sums over all $x_I$ for $I \neq \{1,2\}$ obeying . But by Lemma \[squarefree-sum\] we have $$\sum_{**} \frac{\mu^2(w)}{w} \gg (\log N)^{2^{k-1}-2},$$ and the claim follows.
Finally, we show . Observe that each $q$ can be represented in the form in at most $\tau_{2^{k-1}-1}(q+1)$ different ways; also, from we have $q \ll N^{2^{k-1}/2^{k+2}} = N^{1/8}$. We may thus bound the left-hand side of by $$\sum_{q \ll N^{1/8}} D(cN;q) \tau_{2^{k-1}-1}(q+1).$$ From the Bombieri-Vinogradov inequality and the trivial bound $D(cN;q) \ll N/q$ one has $$\sum_{q \ll N^{1/8}} q D(cN;q)^2 \ll_A N \log^{-A} N$$ for any $A>0$, while from Lemma \[upper-crude\] (and shifting $q$ by $1$) one has $$\sum_{q \ll N^{1/8}} \frac{\tau_{2^{k-1}-1}(q+1)^2}{q} \ll \log^{O(1)} N.$$ The claim then follows from the Cauchy-Schwarz inequality (taking $A$ large enough). The proof of Theorem \[many-solutions\] is now complete.
Some results from number theory
===============================
In this section we record some well-known facts from number theory that we will need throughout the paper. We begin with some standard asymptotics for divisor sums, and then give a crude estimate for averages of multiplicative functions.
Now we record some asymptotic formulae for the divisor function $\tau$. From the Dirichlet hyperbola method we have the asymptotic $$\label{tau-1}
\sum_{n {\leqslant}N} \tau(n) = N \log N + O(N)$$ (see e.g. [@iwaniec §1.5]). More generally, we have $$\label{tau-k}
\sum_{n {\leqslant}N} \tau_k(n) = \frac{1}{(k-1)!} N \log^{k-1} N + O_k(N \log^{k-2} N)$$ for all $k {\geqslant}1$, where $\tau_k(n) := \sum_{d_1,\ldots,d_k: d_1 \ldots d_k = n} 1$. Indeed, the left-hand side of can be rearranged as $$\sum_{d_1 {\leqslant}N} \sum_{d_2 {\leqslant}N/d_1} \ldots \sum_{d_k {\leqslant}N/d_1 \ldots d_{k-1}} 1$$ and the claim follows by evaluating each of the summations in turn.
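The convolution structure $\tau_k = \tau_{k-1} * 1$ behind these formulae can be checked directly. The sketch below sieves $\tau_k$ by iterated Dirichlet convolution and verifies some exact identities, among them the hyperbola-method rearrangement $\sum_{n \leqslant N} \tau_3(n) = \sum_{d \leqslant N} \sum_{m \leqslant N/d} \tau(m)$:

```python
def tau_k_table(k, N):
    """tau_k(n) for 0 <= n <= N, via iterated Dirichlet convolution with 1."""
    t = [0] + [1] * N  # tau_1(n) = 1
    for _ in range(k - 1):
        s = [0] * (N + 1)
        for d in range(1, N + 1):
            for n in range(d, N + 1, d):
                s[n] += t[n // d]
        t = s
    return t
```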
We can perturb this asymptotic:
\[upper-crude\] Let $f(n)$ be a multiplicative function obeying the bounds $$f(p) = m + O(\frac{1}{p})$$ for all primes $p$ and some integer $m {\geqslant}1$, and $$|f(p^j)| \ll j^{O(1)}$$ for all primes $p$ and $j > 1$. Then one has $$\sum_{n {\leqslant}N} f(n) \ll_m N \log^{m-1} N$$ for $N$ sufficiently large depending on $m$; from this and summation by parts we have in particular that $$\sum_{n {\leqslant}N} \frac{f(n)}{n} \ll_m \log^{m} N.$$ If $f$ is non-negative, we also have the corresponding lower bound $$\sum_{n {\leqslant}N} f(n) \gg_m N \log^{m-1} N$$ and hence $$\sum_{n {\leqslant}N} \frac{f(n)}{n} \gg_m \log^{m} N.$$
One can of course get much better estimates by contour integration methods (and these estimates also follow without much difficulty from the more general results in [@halb]), but the above crude bounds will suffice for our purposes.
We allow all implied constants to depend on $m$. By Möbius inversion, we can write $$f(n) = \sum_{d \mid n} \tau_{m}(d) g(\frac{n}{d})$$ where $g$ is a multiplicative function obeying the bounds $$g(p) = O(\frac{1}{p})$$ and $$|g(p^j)|\ll j^{O(1)}$$ for all $j > 1$. In particular, the Euler product $$\sum_{n=1}^\infty \frac{|g(n)|}{n} = \prod_p \left(1 + \frac{|g(p)|}{p} +
\sum_{j=2}^\infty \frac{|g(p^j)|}{p^j}\right) =
\prod_p \left(1 + O\left(\frac{1}{p^2}\right)\right)$$ is absolutely convergent.
We may therefore write $\sum_{n {\leqslant}N} f(n)$ as $$\label{loo}
\sum_{k {\leqslant}N} g(k) \sum_{d {\leqslant}N/k} \tau_{m}(d).$$ Applying \eqref{tau-k}, we conclude $$\Big|\sum_{n {\leqslant}N} f(n)\Big| \ll \sum_{k {\leqslant}N} \frac{|g(k)|}{k} N \log^{m-1} N$$ and the upper bound follows from the absolute convergence of $\sum_{n=1}^\infty {|g(n)|}/{n}$.
Now we establish the lower bound. By zeroing out $f$ at various small primes $p$ (and all their multiples), we may assume that $f(p^j)=g(p^j)=0$ for all $p {\leqslant}w$, for any fixed threshold $w$. By making $w$ large enough, we may ensure that $$1 - \sum_{n=2}^\infty \frac{|g(n)|}{n} > 0.$$ If we then insert the lower bound from \eqref{tau-k} into \eqref{loo} we obtain the claim.
As a typical application of Lemma \[upper-crude\] we have $$\label{tau-2}
\sum_{n {\leqslant}N} \tau^k(n) \ll_k N \log^{2^k-1} N$$ for any $N>1$ and $k {\geqslant}1$ (see also [@Mardjanichvili:1939]).
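This moment bound can likewise be tested numerically for $k=2$: here $\tau^2$ is multiplicative with $\tau^2(p)=4$, so $m=4$ and the exponent is $2^2-1=3$. The sketch below (illustrative cut-off, not from the paper) computes the second moment and checks that the ratio against $N\log^3 N$ stays bounded; classically the leading constant is $1/\pi^2 \approx 0.10$.

```python
import math

N = 20000
tau = [0] * (N + 1)
for d in range(1, N + 1):           # divisor sieve: tau[n] counts divisors of n
    for n in range(d, N + 1, d):
        tau[n] += 1

second_moment = sum(t * t for t in tau[1:])
ratio = second_moment / (N * math.log(N) ** 3)
# ratio is bounded; at this N the lower-order terms still contribute a lot,
# so ratio sits well above the asymptotic constant 1/pi^2 but far below 1
```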
To study the distribution of divisors and prime divisors in more detail we recall the *Turán-Kubilius inequality* for additive functions. A function $w$ is called *additive* if $w(n_1 n_2)=w(n_1)+w(n_2)$ whenever $\gcd(n_1,n_2)=1$.
[\[turan-kubilius\]]{} Let $w:{\mathbb{N}}\rightarrow {\mathbb{R}}$ be an additive function (thus $w(nm)=w(n)+w(m)$ whenever $n,m$ are coprime). Let $A(N):=\sum_{p^k{\leqslant}N} {w(p^k)}/{p^k}$ and $D^2(N):=\sum_{p^k{\leqslant}N} {|w(p^k)|^2}/{p^k}$, where $\sum_{p^k}$ denotes a sum over all prime powers. Then for every $N{\geqslant}2$ the following inequality holds: $$\sum_{n {\leqslant}N} |w(n)-A(N)|^2{\leqslant}30 N D^2(N).$$
Let $\omega(n)$ denote the number of distinct prime factors of $n$. Then $A(N)=\sum_{p^k{\leqslant}N} {\omega(p^k)}/{p^k}=\log \log N+ O(1)$ and $D^2(N)=\sum_{p^k{\leqslant}N} {\omega(p^k)^2}/{p^k}=A(N)=\log \log N + O(1)$. The Turán-Kubilius inequality then gives $$\sum_{n {\leqslant}N} |\omega(n)-\log \log N|^2{\leqslant}30 N \log \log N +O(N).$$ In particular, if $\xi(n) \to \infty$ as $n \to \infty$, then one has $|\omega(n) - \log \log n| {\leqslant}\xi(n) \sqrt{\log\log n}$ for all $n$ in a set of integers of density $1$. For more details see [@tenenbaum].
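The concentration of $\omega(n)$ around $\log\log N$ is visible already at small cut-offs. The sketch below (illustrative parameters) sieves $\omega$ and counts how many $n \le N$ lie within two "standard deviations" $\sqrt{\log\log N}$ of the mean.

```python
import math

N = 100000
omega = [0] * (N + 1)
for p in range(2, N + 1):
    if omega[p] == 0:            # no smaller prime divides p, so p is prime
        for n in range(p, N + 1, p):
            omega[n] += 1        # count the distinct prime factor p of n

mu = math.log(math.log(N))       # ~ 2.44 at N = 10^5
sigma = math.sqrt(mu)
inside = sum(1 for n in range(2, N + 1) if abs(omega[n] - mu) <= 2 * sigma)
frac = inside / (N - 1)          # fraction of n within 2 sigma of log log N
```

The fraction is very close to $1$: integers with $\omega(n)\geqslant 6$ are rare below $10^5$ (the smallest is $30030 = 2\cdot3\cdot5\cdot7\cdot11\cdot13$).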
From \eqref{tau-1} one might guess the heuristic $$\label{tau-heuristic}
\tau(n) \approx \log n$$ *on average*. But it follows from the Turán-Kubilius inequality that for “typical” $n$, the number of divisors is about $2^{\log \log n}=(\log n)^{\log 2}$, which is considerably smaller, and that a small number of integers with an exceptionally large number of divisors heavily influences this average. The influence of these integers with a very large number of divisors dominates even more for higher moments. The extremal examples heuristically consist of integers with many small prime factors, and the following “divisor bound” holds: $$\label{tau-divisor}
\tau(n) {\leqslant}2^{(1+o(1)) \frac{\log n}{\log \log n} } = O(n^{\frac{1}{\log \log n}})$$ as $n \to \infty$; see [@ramanujan].
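One can watch the exponent in the divisor bound numerically. Writing $\tau(n) = 2^{\theta(n)\log n/\log\log n}$, the bound says $\max \theta(n) \to 1$, but the convergence of the $o(1)$ term is slow: at $n \approx 10^5$ the maximum (attained near highly composite numbers such as $55440$) is still around $1.5$. The following sketch (illustrative range) measures this.

```python
import math

N = 100000
tau = [0] * (N + 1)
for d in range(1, N + 1):           # divisor sieve
    for n in range(d, N + 1, d):
        tau[n] += 1

# theta(n) = log2(tau(n)) * log log n / log n; start at n = 16 so that
# log log n > 1 and the quantity is well-defined and positive
worst = max(math.log2(tau[n]) * math.log(math.log(n)) / math.log(n)
            for n in range(16, N + 1))
```

That `worst` is around $1.5$ (rather than $\leqslant 1$) illustrates how slowly the $(1+o(1))$ factor decays.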
Turán-Kubilius type inequalities have also been studied for shifted primes. We make use of the following result of Barban (see Elliott [@elliott], Theorem 12.10).
[\[barban-turankubilius\]]{} A function $w: {\mathbb{N}}\rightarrow {\mathbb{R}}^+$ is said to be strongly additive if it is additive and $w(p^k)=w(p)$ holds for every prime power $p^k$, $k {\geqslant}1$. Let $w$ denote a real nonnegative strongly additive function. Define $S(N) :=\sum_{p {\leqslant}N} {w(p)}/{(p-1)}$ and $\Lambda(N):=\max_{p {\leqslant}N} w(p)$. Suppose that $\Lambda(N)=o(S(N))$, as $N
\rightarrow \infty$. Then for any fixed $\varepsilon >0$, the prime density $$\nu_N(p; |w(p+1)-S(N)| > \varepsilon S(N)) \rightarrow 0 \text{ as } N
\rightarrow \infty.$$ The same holds for other shifts $p+a$, where $a \neq 0$.
The function $\omega(n)$ is strongly additive. The lemma implies that, for a set of primes $p$ of relative density $1$, the shift $p+1$ has about $\frac{1}{2}\log \log p$ distinct prime factors congruent to $1 \bmod 4$. To see this one chooses $w(p)=1$ if $p\equiv 1 \bmod 4$, and $0$ otherwise. In this example one has $S(N)\sim \frac{1}{2}\log \log N$ and $\Lambda(N)=1$.
We recall the quadratic reciprocity law $$\label{quadratic}
\left( \frac{m}{n} \right) \left( \frac{n}{m} \right) = (-1)^{(n-1)(m-1)/4}$$ for all odd $m,n$, where $\left( \frac{m}{n} \right)$ is the Jacobi symbol, as well as the companion laws $$\label{quadratic-1}
\left( \frac{-1}{n} \right) = (-1)^{(n-1)/2}$$ and $$\label{quadratic-2}
\left( \frac{2}{n} \right) = (-1)^{(n^2-1)/8}$$ for odd $n$.
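These laws are straightforward to test numerically. The sketch below computes Legendre symbols by Euler's criterion (a choice made for this check only, not used elsewhere in the paper) and verifies the reciprocity law and both supplements for all odd primes below $100$.

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a % p, (p - 1) // 2, p)   # a^((p-1)/2) mod p is 1 or p-1
    return r - p if r == p - 1 else r  # map p-1 to -1

primes = [p for p in range(3, 100)
          if all(p % d for d in range(2, int(p ** 0.5) + 1))]

for p in primes:
    assert legendre(-1, p) == (-1) ** ((p - 1) // 2)       # first supplement
    assert legendre(2, p) == (-1) ** ((p * p - 1) // 8)    # second supplement
    for q in primes:
        if p != q:                                          # reciprocity law
            assert legendre(p, q) * legendre(q, p) == (-1) ** ((p - 1) * (q - 1) // 4)
```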
For any primitive residue class $a \mod q$ and any $N > 0$, let $\pi(N;q,a)$ denote the number of primes $p<N$ that are congruent to $a$ mod $q$. We recall the *Brun-Titchmarsh inequality* (see e.g. [@iwaniec Theorem 6.6]) $$\label{brun}
\pi(N;q,a) \ll \frac{N}{\phi(q) \log \frac{N}{q}}$$ for any such class with $N {\geqslant}q$. This bound suffices for upper bound estimates on primes in residue classes. Due to the $q$ in the denominator of $\log({N}/{q})$, it will only be efficient to apply this inequality when $q$ is much smaller than $N$, e.g. $q {\leqslant}N^c$ for some $c<1$.
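As an illustrative check (not needed for the argument), one can test the Brun-Titchmarsh inequality on a few residue classes. The sharp implied constant $2$, valid for $N > q$ by a theorem of Montgomery and Vaughan, is used below only as a convenient explicit bound; the moduli and residues are arbitrary.

```python
import math

N = 100000

# sieve of Eratosthenes up to N
is_p = bytearray([1]) * (N + 1)
is_p[0] = is_p[1] = 0
for i in range(2, int(N ** 0.5) + 1):
    if is_p[i]:
        is_p[i * i::i] = bytearray(len(range(i * i, N + 1, i)))

def phi(q):
    """Euler totient by trial division."""
    res, d = q, 2
    while d * d <= q:
        if q % d == 0:
            res -= res // d
            while q % d == 0:
                q //= d
        d += 1
    if q > 1:
        res -= res // q
    return res

def pi_progression(q, a):
    """Number of primes p < N with p congruent to a mod q."""
    return sum(is_p[p] for p in range(a, N, q))

for q, a in [(3, 1), (3, 2), (4, 1), (7, 3), (100, 17), (1000, 7)]:
    bound = 2 * N / (phi(q) * math.log(N / q))
    assert pi_progression(q, a) <= bound
```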
The Euler totient function $\phi(q)$ in the denominator is also inconvenient; it would be preferable if one could replace it with $q$. Unfortunately, this is not possible; the best bound on ${1}/{\phi(q)}$ in terms of $q$ that one has in general is $$\label{phi-lower}
\frac{1}{\phi(q)} \ll \frac{\log \log q}{q}$$ (see e.g. [@rosser]). Using this bound would simplify our arguments, but one would lose an additional factor of $\log \log N$ or so in the final estimates. To avoid this loss, we observe the related estimate $$\label{phiq-bound}
\frac{1}{\phi(q)} \ll \frac{1}{q} \sum_{d \mid q} \frac{1}{d}.$$ Indeed, we have $$\begin{aligned}
\frac{q}{\phi(q)} &= \prod_{p \mid q} \frac{p}{p-1} \\
&= \prod_{p \mid q} (1 + \frac{1}{p}) (1 + O(\frac{1}{p^2})) \\
&\ll \prod_{p \mid q} (1 + \frac{1}{p}) \\
&{\leqslant}\sum_{d \mid q} \frac{1}{d},\end{aligned}$$ and \eqref{phiq-bound} follows. (One could restrict $d$ to be square-free here if desired, but we will not need to do so in this paper.)
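One can also check that the implied constant in \eqref{phiq-bound} is small: from the displayed computation, the ratio of the two sides equals $\prod_{p^{a}\| q} (1-p^{-(a+1)})^{-1} \leqslant \zeta(2) = \pi^2/6 \approx 1.645$, with near-equality at primorials. The sketch below (illustrative range, chosen to include the primorial $2310$) sieves both sides and finds the worst ratio.

```python
Q = 3000

# totient sieve: ph[q] = phi(q)
ph = list(range(Q + 1))
for p in range(2, Q + 1):
    if ph[p] == p:               # p is prime
        for n in range(p, Q + 1, p):
            ph[n] -= ph[n] // p

# s[q] = sum over d | q of 1/d
s = [0.0] * (Q + 1)
for d in range(1, Q + 1):
    for q in range(d, Q + 1, d):
        s[q] += 1.0 / d

# worst ratio of q/phi(q) against sum_{d|q} 1/d; attained at q = 2310
worst = max(q / ph[q] / s[q] for q in range(1, Q + 1))
```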
The Brun-Titchmarsh inequality only gives upper bounds for the number of primes in an arithmetic progression. To get lower bounds, we let $D(N;q)$ denote the quantity $$\label{dnq}
D(N;q) := \max_{(a,q)=1} \left|\pi(N;q,a) - \frac{{ {\rm li } }(N)}{\phi(q)}\right|,$$ where ${ {\rm li } }(x) := \int_0^x {dt}/{\log t}$ is the Cauchy principal value of the logarithmic integral. The Bombieri-Vinogradov inequality (see e.g. [@iwaniec Theorem 17.1]) implies in particular that $$\label{bombieri}
\sum_{q {\leqslant}N^\theta} D(N;q) \ll_{\theta,A} N \log^{-A} N$$ for all $0 < \theta < 1/2$ and $A > 0$. We remark that the above inequality is usually phrased using the summatory von Mangoldt function $\psi(N;q,a) = \sum_{n {\leqslant}N; n = a \mod q} \Lambda(n)$; a summation by parts converts it to an estimate for the prime counting function, see [@Bruedern:1995] for details. Informally, this gives lower bounds on $\pi(N;q,a)$ on the average for $q$ much smaller than $N^{1/2}$.
[10]{}
A. Aigner, ‘Brüche als Summe von Stammbrüchen’, *J. Reine Angew. Math.* **214/215** (1964), 174–179.
M. B. Barban, P. P. Vehov, ‘Summation of multiplicative functions of polynomials’, *Mat. Zametki* **5** (1969), 669–680.
P. Bartoš, ‘K Riešitel’nosti Diofantickej Rovnice $\sum_{j=1}^{n} {1}/{x_j} = {a}/{b} $’, *Časopis pro pěstování matematiky*, **98** (1973), 261–264.
P. Bartoš and K. [Pehatzová-Bošanká]{}. ‘K Riešeniu Diofantickej Rovnice ${1}/{x} +{1}/{y} +
{1}/{z} = {a}/{b}$’, *Časopis pro pěstování matematiky*, **96** (1971), 294–299.
M. Bello-Hernández, M. Benito, E. Fernández, ‘[On egyptian fractions]{}’, preprint, [arXiv:1010.2035]{}, version 2, 30. April 2012.
L. Bernstein, ‘[Zur Lösung der diophantischen Gleichung $\frac{m}{n}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$, insbesondere im Fall $m = 4$]{}’, *J. Reine Angew. Math.* **211**, 1962, 1–10.
R. de la Bretèche, T. Browning, ‘Sums of arithmetic functions over values of binary forms’, *Acta Arith.* **125** (2006), 291–304.
T. Browning, C. Elsholtz, ‘[The number of representations of rationals as a sum of unit fractions]{}’, to appear in Illinois Journal of Mathematics.
J. Br[ü]{}dern. ‘[[Einführung in die analytische Zahlentheorie]{}]{}’. Springer, Berlin, Heidelberg, 1995.
Yong-Gao Chen, C. Elsholtz, Li-Li Jiang, ‘[Egyptian fractions with restrictions]{}’, *Acta Arith.* **154** (2012), 109–123.
J-L. Colliot-Thélène, J-J. Sansuc, ‘[Torseurs sous des groupes de type multiplicatif; applications à l’étude des points rationnels de certaines variétés algébriques]{}’, *C. R. Acad. Sci. Paris Sér. A-B* **282** (1976), no. 18, Aii, A1113–A1116.
E.S. Croot, D.E. Dobbs, J.B. Friedlander, A.J. Hetzel, F. Pappalardi, ‘[Binary Egyptian fractions]{}’, *J. Number Theory* **84** (2000), no. 1, 63–79.
S. Daniel, ‘[Uniform bounds for short sums of certain arithmetic functions of polynomial arguments]{}’, Unpublished manuscript.
P. D. T. A. Elliott, ‘Probabilistic number theory. II.’ Central limit theorems. Grundlehren der Mathematischen Wissenschaften, 240. Springer-Verlag, Berlin-New York, 1980.
C. Elsholtz, ‘Sums of $k$ unit fractions’, PhD thesis, Technische Universität Darmstadt, 1998.
C. Elsholtz, ‘[Sums of $k$ unit fractions]{}’ *Trans. Amer. Math. Soc.* **353** (2001), 3209–3227.
C. Elsholtz, C. Heuberger, H. Prodinger, ‘[The number of Huffman codes, compact trees, and sums of unit fractions]{}’, to appear in IEEE Trans. Inform. Theory.
P. Erdős, ‘[Az ${1}/{x_1} + {1}/{x_2} + \ldots + {1}/{x_n}
={a}/{b}$ egyenlet egész számú megoldásairól]{}’, *Mat. Lapok* **1** (1950), 192–210.
P. Erdős, ‘[On the sum $\sum_{k=1}^x d(f(k))$]{}’, *J. London Math. Soc.* **27** (1952), 7–15.
P. Erdős, R.L. Graham, ‘Old and new problems and results in combinatorial number theory’, *Monographies de L’Enseignement Mathématique*, 28. L’Enseignement Mathématique, Geneva, 1980. 128 pp.
É. Fouvry, ‘[Sur le problème des diviseurs de Titchmarsh]{}’, *J. Reine Angew. Math.* **357** (1985), 51–76.
É. Fouvry, H. Iwaniec, ‘[The divisor function over arithmetic progressions]{}’, With an appendix by Nicholas Katz. *Acta Arith.* **61** (1992), no. 3, 271–287.
P. X. Gallagher, ‘Primes and Powers of two’, *Inventiones Math.* **29** (1975), 125–142.
H. Gupta, ‘Selected topics in number theory.’ Abacus Press, Tunbridge Wells, 1980. 394 pp.
R. Guy, ‘[Unsolved Problems in Number Theory]{}’, 2nd ed. New York: Springer-Verlag, pp. 158-166, 1994.
H. Halberstam, H.-E. Richert, ‘On a result of R. R. Hall.’, *J. Number Theory* **11** (1979), no. 1, 76–89.
R.R. Hall, ’[Sets of Multiples]{}’, Cambridge University Press, Cambridge, 1996.
D. R. Heath-Brown, ‘[The density of rational points on Cayley’s cubic surface]{}’, Proceedings of the Session in Analytic Number Theory and Diophantine Equations, 33 pp., Bonner Math. Schriften, 360, Univ. Bonn, Bonn, 2003.
K. Henriot, ‘[Nair-Tenenbaum bounds uniform with respect to the discriminant]{}’, Math. Proc. Camb. Phil. Soc. **152** (2012), no. 3, 405–424.
C. Hooley, ‘[On the number of divisors of quadratic polynomials]{}’, *Acta Math.* **110** (1963), 97–114.
J. Huang, R. C. Vaughan, ‘[Mean value theorems for binary Egyptian fractions]{}’, *J. Number Theory* **131** (2011), 1641–1656.
M. N. Huxley, ‘[A note on polynomial congruences]{}’, Recent Progress in Analytic Number Theory, Vol. I (H. Halberstam and C. Hooley, eds.), Academic Press, London, 1981, pp. 193-196.
H. Iwaniec, E. Kowalski, ‘Analytic number theory’, American Mathematical Society Colloquium Publications, 53. American Mathematical Society, Providence, RI, 2004.
C. Jia, ‘A Note on Terence Tao’s Paper “On the Number of Solutions to $4/p=1/n_1+1/n_2+1/n_3$”’, preprint.
C. Jia, ‘ The estimate for mean values on prime numbers relative to $4/p=1/n_1+1/n_2+1/n_3$’ *Science China Mathematics* **55** (2012), no. 3, 465–474.
R.W. Jollenstein, ‘A note on the Egyptian problem’, in [*Proceedings of the Seventh Southeastern Conference on Combinatorics, Graph Theory, and Computing*]{} (Louisiana State Univ., Baton Rouge, La., 1976), 351–364. *[Congressus Numerantium, 17, Utilitas Math., Winnipeg, Man.]{}*, 1976.
I. Kotsireas, ‘The Erdős-Straus conjecture on Egyptian fractions’, Paul Erdős and his mathematics (Budapest, 1999), 140–144, János Bolyai Math. Soc., Budapest, 1999.
B. Landreau, ‘[A new proof of a theorem of van der Corput]{}’, *Bull. London Math. Soc.* **21** (1989), no. 4, 366–368.
Delang Li, ‘[On the Equation ${4}/{n}= {1}/{x} +{1}/{y}+{1}/{z}$]{}’, *Journal of Number Theory* **13** (1981), 485–494.
C. Mardjanichvili, ‘[Estimation d’une somme arithmetique.]{}’ *[Comptes Rendus (Doklady) de l’Académie des Sciences de l’URSS]{}* **22** (1939), 387–389.
J. McKee, ‘[On the average number of divisors of quadratic polynomials]{}’, *Math. Proc. Cambridge Philos. Soc.* **117** (1995), no. 3, 389–392.
J. McKee, ‘[A note on the number of divisors of quadratic polynomials. Sieve methods, exponential sums, and their applications in number theory]{}’ (Cardiff, 1995), 275–281, *London Math. Soc. Lecture Note Ser.*, 237, Cambridge Univ. Press, Cambridge, 1997.
J. McKee, ‘[The average number of divisors of an irreducible quadratic polynomial]{}’, *Math. Proc. Cambridge Philos. Soc.* **126** (1999), no. 1, 17–22.
L. J. Mordell, ‘Diophantine Equations’, volume 30 of Pure and Applied Mathematics. Academic Press, 1969.
T. Nagell, ‘[Généralisation d’un theórème de Tchebicheff]{}’, *J. Math.* **8** (1921), 343–356.
M. Nair, ‘[Multiplicative functions of polynomial values in short intervals]{}’, *Acta Arith.* **62** (1992), no. 3, 257–269.
M. Nair, G. Tenenbaum, ‘[Short sums of certain arithmetic functions]{}’, *Acta Math.* **180** (1998), 119–144.
M. Nakayama, ‘[On the decomposition of a rational number into “Stammbrüche.”]{}’, *Tôhoku Math. J.* **46**, (1939). 1–21.
M.R. Obláth, ‘[Sur l’ équation diophantienne ${4}/{n}={1}/{x_1}
+{1}/{x_2} +{1}/{x_3}$]{}’, *Mathesis* **59** (1950), 308–316.
O. Ore, ‘[Anzahl der Wurzeln höherer Kongruenzen]{}’, *Norsk Matematisk Tidsskrift*, 3 Aagang, Kristiana (1921), 343–356.
G. Palam[à]{}, ‘[Su di una congettura di [Sierpi[ń]{}ski]{} relativa alla possibilit[à]{} in numeri naturali della ${5}/{n}={1}/{x_1}+
{1}/{x_2} +{1}/{x_3}$]{}’, *Bollettino della Unione Matematica Italiana (3)*, **13** (1958), 65–72.
G. Palam[à]{}, [Su di una congettura di [Schinzel]{}]{}, *Bollettino della Unione Matematica Italiana (3)*, **14** (1959), 82–94.
C. P. Popovici, ‘[On the diophantine equation ${a}/{b}={1}/{x_1}+{1}/{x_2}+{1}/{x_3}$]{}’, *Analele Universității București, Seria Științele Naturii, Matematică-Fizică* **10** (1961), 29–44.
C. Pomerance, ‘[Analysis and Comparison of Some Integer Factoring Algorithms]{}’, in *Computational Methods in Number Theory, Part I*, H.W. Lenstra, Jr. and R. Tijdeman, eds., Math. Centre Tract 154, Amsterdam, 1982, pp 89–139.
C. Pomerance, ‘[Ruth-Aaron numbers revisited]{}’, *Paul Erdős and his Mathematics*, I (Budapest, 1999), Bolyai Soc. Math. Stud. 11, János Bolyai Math. Soc., Budapest, 2002, pp. 567–579.
S. Ramanujan, ‘[Highly composite numbers]{}’, *Proc. London Math. Soc.* **14** (1915), 347–409.
Y. Rav, ‘[On the representation of rational numbers as a sum of a fixed number of unit fractions]{}’, *J. Reine Angew. Math.* **222** (1966), 207–213.
L. Rosati, ‘[Sull’equazione diofantea $4/n=1/x_1+1/x_2+1/x_3$]{}’, *Boll. Un. Mat. Ital.* (3) **9**, (1954), 59–63.
J. Rosser, L. Schoenfeld, ‘[Approximate formulas for some functions of prime numbers]{}’, *Illinois J. Math.* **6** (1962), 64–94.
I.Z. Ruzsa, ‘[On an additive property of squares and primes]{}’, *Acta Arithmetica* **49** (1988), 281–289.
J.W. Sander, ‘On ${4}/{n}={1}/{x}+{1}/{y}+{1}/{z}$ and [Rosser’s]{} sieve’, [*Acta Arithmetica*]{} **59** (1991), 183–204.
J.W. Sander, ‘[On ${4}/{n}={1}/{x}+{1}/{y}+{1}/{z}$ and Iwaniec’ Half Dimensional Sieve]{}’, [*Journal of Number Theory*]{} **46** (1994), 123–136.
J.W. Sander, ‘[Egyptian Fractions and the Erdős-Straus Conjecture]{}.’ [*Nieuw Archief voor Wiskunde (4)*]{} **15** (1997), 43–50.
C. Sándor, ‘[On the number of solutions of the Diophantine equation $\sum_{i=1}^n \frac{1}{x_i}=1$]{}’. *Period. Math. Hungar.* **47** (2003), no. 1-2, 215–219.
G. Sándor, ‘[Über die Anzahl der Lösungen einer Kongruenz]{}’, *Acta. Math.* **87** (1952), 13–17.
A. Schinzel, ‘[Sur quelques propriétés des nombres $3/n$ et $4/n$, où $n$ est un nombre impair]{}’, *Mathesis* **65** (1956), 219–222.
A. Schinzel, ‘[On sums of three unit fractions with polynomial denominators]{}’, *Funct. Approx. Comment. Math.* **28** (2000), 187–194.
W. Schwarz, J. Spilker, ‘Arithmetical functions’, *London Mathematical Society Lecture Note Series*, 184. Cambridge University Press, Cambridge, 1994.
E.J. Scourfield, ‘[The divisors of a quadratic polynomial]{}’, *Proc. Glasgow Math. Assoc.* **5** (1961) 8–20.
A. Selberg, ‘[Harmonic analysis and discontinuous groups in weakly symmetric Riemannian spaces with applications to Dirichlet series]{}’, *J. Indian Math. Soc. (N.S.)* **20** (1956), 47–87.
Shen Zun, ‘[On the diophantine equation $\sum_{i=0}^k {1}/{x_i} = {a}/{n}$]{}’, *Chinese Ann. Math. Ser. B*, [**7**]{} (1986), 213–220.
P. Shiu, ‘[A Brun-Titchmarsh theorem for multiplicative functions]{}’, *J. Reine Angew. Math.* **313** (1980), 161–170.
W. Sierpi[ń]{}ski, ‘[Sur les d[é]{}compositions de nombres rationnels en fractions primaires]{}’, *Mathesis* [**65**]{} (1956), 16–32.
W. Sierpi[ń]{}ski. ‘[[On the Decomposition of Rational Numbers into Unit Fractions]{}]{}’, *Pánstwowe Wydawnictwo Naukowe*, Warsaw, 1957.
E. S[ó]{}s, ‘[Die diophantische Gleichung ${1}/{x}={1}/{x_1} +
{1}/{x_2} + \ldots + {1}/{x_n}$]{}’, **36** (1905), 97–102.
B.M. Stewart. [Theory of Numbers]{}. *2nd ed. New York: The Macmillan Company; London: Collier-Macmillan*, 1964.
C. L. Stewart, ‘[On the number of solutions of polynomial congruences and Thue equations]{}’, *J. Amer. Math. Soc.* **4** (1991), no. 4, 793–835.
A. Swett, [http://math.uindy.edu/swett/esc.htm]{} accessed on 27 July 2011.
G. Tenenbaum, ‘Introduction to analytic and probabilistic number theory’, Cambridge Studies in Advanced Mathematics, 46. Cambridge University Press, Cambridge, 1995.
D.G. Terzi. ‘On a conjecture by [Erdős-Straus]{}’, [*Nordisk Tidskr. Informations-Behandling (BIT)*]{} **11** (1971), 212–216.
R. Vaughan, ‘[On a problem of Erdős, Straus and Schinzel]{}’, *Mathematika* **17** (1970), 193–198.
C. Viola, ‘[On the diophantine equations $\prod_0^k x_i - \sum_0^k x_i=n$ and $\sum_0^k {1}/{x_i} = {a}/{n}$]{}’, *Acta Arith.* [**22**]{} (1973), 339–352.
W. Webb, ‘[On $4/n=1/x+1/y+1/z$]{}’, *Proc. Amer. Math. Soc.* [**25**]{} (1970), 578–584.
W. Webb, ‘[On a theorem of Rav concerning Egyptian fractions]{}’, Canad. Math. Bull. **18** (1975), no. 1, 155–156.
W. Webb, ‘[On the Diophantine equation ${k}/{n} = {a_1}/{x_1} + {a_2}/{x_2} + {a_3}/{x_3}$]{}’, *Časopis pro pěstování matematiky* **101** (1976), 360–365.
A. Wintner, ‘Eratosthenian Averages’, Waverly Press, Baltimore, Md., 1943. v+81 pp.
K. Yamamoto, ‘[On the Diophantine Equation ${4}/{n}={1}/{x}+{1}/{y}+{1}/{z}$]{}’, *Mem Fac. Sci. Kyushu Univ. Ser. A, V.* **19** (1965), 37–47.
Xun Qian Yang, ‘A note on ${4}/{n} = {1}/{x} +{1}/{y}+ {1}/{z}$’ [*Proceedings of the American Mathematical Society*]{}, **85** (1982), 496–498.
---
address:
- 'Pontifícia Universidade Católica do Rio de Janeiro (PUC–Rio)'
- 'Institut de Mathématiques de Bordeaux, Université Bordeaux 1'
author:
- Jairo Bochi
- Nicolas Gourmelon
title: Note on the dimension of certain algebraic sets of matrices
---
Preamble
========
In this short note we prove a lemma about the dimension of certain algebraic sets of matrices. This result is needed in our paper [@BG_control]. The result presented here also has applications in other situations, and so it should eventually appear as part of a larger work [@BG_transitive].
Statement of the result {#s.statement}
=======================
If $A \in {\mathrm{Mat}}_{n \times m}({\mathbb{C}})$, let $\operatorname{col}A \subset {\mathbb{C}}^n$ denote the column space of $A$. A set $X \subset {\mathrm{Mat}}_{n \times m}({\mathbb{C}})$ is called *column-invariant* if $$\left.
\begin{array}{c}
A \in X \\
B \in {\mathrm{Mat}}_{n \times m}({\mathbb{C}})\\
\operatorname{col}A = \operatorname{col}B
\end{array}
\right\} \ \Rightarrow \
B \in X.$$ So a column-invariant set $X$ is characterized by its set of column spaces. We enlarge the latter set by including also subspaces, thus defining: $$\label{e.bracket_notation}
\ldbrack X \rdbrack := \big\{ E \text{ subspace of } {\mathbb{C}}^n ; \; E \subset \operatorname{col}A \text{ for some } A \in X \big\}.$$ Then we have:
\[t.main\] Let $X \subset {\mathrm{Mat}}_{n \times m}({\mathbb{C}})$ be a nonempty algebraically closed, column-invariant set. Suppose $E$ is a vector subspace of ${\mathbb{C}}^n$ that does not belong to $\ldbrack X \rdbrack$. Then $$\operatorname{codim}X \ge m + 1 - \dim E \, .$$
The algebraicity hypothesis is clearly indispensable.
Theorem \[t.main\] follows without difficulty from the intersection theory of grassmannians (“Schubert calculus”). We have tried to keep the exposition as untechnical as possible, to make it accessible to non-experts (like ourselves).
A particular case {#s.particular}
=================
Define $$\label{e.R_k}
R_k := \big \{ A \in {\mathrm{Mat}}_{n \times m}({\mathbb{C}}) ; \; \operatorname{rank}A \le k \big\} \, .$$ We recall (see [@Harris Prop. 12.2]) that this is an irreducible algebraically closed set of codimension $$\label{e.cod_R_k}
\operatorname{codim}R_k = (m-k)(n-k) \qquad \text{if } 0 \le k \le \min(m,n).$$
If $E = {\mathbb{C}}^n$ then the hypothesis ${\mathbb{C}}^n \not\in \ldbrack X \rdbrack$ means that $X \subset R_{n-1}$. We can assume that $n-1 \le m$, since otherwise the conclusion of the theorem is vacuous. Thus $\operatorname{codim}X \ge \operatorname{codim}R_{n-1} = m + 1 - n$, as we wanted to show.
It does not seem likely that the general \[t.main\] can be reduced to this particular case.
Reduction to a property of grassmannians {#s.reduction}
========================================
We will show that to prove \[t.main\] it suffices to prove a dimension estimate (\[t.schubert\] below) for certain subvarieties of a grassmannian.
Grassmannians
-------------
Given integers $n > k \ge 1$, the *grassmannian* $G_k({\mathbb{C}}^n)$ is the set of the vector subspaces of ${\mathbb{C}}^{n}$ of dimension $k$.
The grassmannian can be interpreted as a subvariety of a higher-dimensional complex projective space as follows. The *Plücker embedding* is the map $G_k({\mathbb{C}}^n) \to P(\bigwedge^k {\mathbb{C}}^n)$ defined as follows: for each $V \in G_k({\mathbb{C}}^n)$, take a basis $\{v_1, \dots, v_k\}$ of $V$ and map $V$ to $[v_1 \wedge \cdots \wedge v_k]$. This map is clearly one-to-one. It can be shown (see e.g. [@Harris p. 61ff]) that the image is an algebraically closed subset of $P(\bigwedge^k {\mathbb{C}}^n)$. Its dimension is $$\label{e.dim_G}
\dim G_k({\mathbb{C}}^n) = k(n-k).$$
If $E \subset {\mathbb{C}}^n$ is a vector space with $\dim E = e \le k$ then we consider the following subset of $G_k({\mathbb{C}}^n)$: $$\label{e.special schubert}
S_k(E) := \big\{V \in G_k({\mathbb{C}}^n) ; \; V \supset E \big\}.$$ (This is a Schubert variety of a special type, as we will see later.) Since any $V \in S_k(E)$ can be written as $E \oplus W$ for some $V \subset W^\perp$, we see that $S_k(E)$ is homeomorphic to $G_{k-e}({\mathbb{C}}^{n-e})$.
We will show that an algebraic set that avoids $S_k(E)$ cannot be too large:
\[t.schubert\] Fix integers $1 \le e \le k < n$. Suppose that $Y$ is an algebraically closed subset of $G_k({\mathbb{C}}^n)$ that is disjoint from $S_k(E)$, for some $e$-dimensional subspace $E \subset {\mathbb{C}}^n$. Then $\operatorname{codim}Y \ge k + 1 - e$.
Proof of \[t.main\] assuming \[t.schubert\] {#ss.reduction}
-------------------------------------------
Assuming \[t.schubert\] for the moment, let us see how it yields \[t.main\].
Recalling the notation \eqref{e.R_k}, define the quasiprojective variety $$\hat{R}_k := R_k {\smallsetminus}R_{k-1} \, .$$ We define a map $\pi_k \colon \hat{R}_k \to G_k({\mathbb{C}}^n)$ by $A \mapsto \operatorname{col}A$.
\[l.projection\] If $X$ is an algebraically closed column-invariant subset of $\hat{R}_k$ then $Y = \pi_k(X)$ is an algebraically closed subset of $G_k({\mathbb{C}}^n)$, and the codimension of $Y$ inside $G_k({\mathbb{C}}^n)$ is the same as the codimension of $X$ inside $\hat{R}_k$.
First, let us see that $\pi_k \colon \hat{R}_k \to G_k({\mathbb{C}}^n)$ is a regular map. We identify $G_k({\mathbb{C}}^n)$ with the image of the Plücker embedding. In a Zariski neighborhood of each matrix $A \in \hat{R}_k$, the map $\pi_k$ can be defined as $A \mapsto [a_{j_1} \wedge \dots \wedge a_{j_k}]$ for some $j_1 < \dots < j_k$, where $a_j$ is the $j^\text{th}$ column of $A$. This shows regularity.
Next, let us see that $Y = \pi_k (X)$ is closed with respect to the classical (not Zariski) topology. Consider the subset $K$ of $X$ formed by the matrices $A \in \hat{R}_k$ whose first $k$ columns form an orthonormal set, and whose $m-k$ remaining columns are zero. Then $K$ is compact (in the classical sense), and thus so is $\pi_k(K)$. But column-invariance of $X$ implies that $\pi_k(K) = Y$, so $Y$ is closed (in the classical sense).
It follows from the regularity of $\pi_k$ (see e.g. [@Harris p. 39]) that the set $Y$ is constructible, i.e., it can be written as $$Y = \bigcup_{i=1}^{p} Z_i {\smallsetminus}W_i \, ,$$ where $Z_i \varsupsetneq W_i$ are algebraically closed subsets of $G_k({\mathbb{C}}^n)$. We can assume that each $Z_i$ is irreducible. It follows from [@Mumford Thrm. 2.33] that $\overline{Z_i {\smallsetminus}W_i} = Z_i$, where the bar denotes closure in the classical sense. In particular, $Y = \overline{Y} = \bigcup_{i=1}^{p} Z_i$, showing that $Y$ is algebraically closed.
We are left to show the equality between codimensions. Since the codimension of an algebraically closed set equals the minimum of the codimensions of its components, we can assume that $X$ is irreducible.
By column-invariance of $X$, for each $y\in Y$ the whole fiber $\pi_k^{-1}(y)$ is contained in $X$. All those fibers have the same dimension $km$. By [@Harris Thrm. 11.12], $\dim X = \dim Y + km$. By \eqref{e.cod_R_k} and \eqref{e.dim_G}, we have $\dim \hat{R}_k - \dim G_k({\mathbb{C}}^n) = km$, so the claim about codimensions follows.
Let $X \subset {\mathrm{Mat}}_{n \times m}({\mathbb{C}})$ be a nonempty algebraically closed, column-invariant set. Suppose $E$ is a vector subspace of ${\mathbb{C}}^n$ that does not belong to $\ldbrack X \rdbrack$. Let $e = \dim E$. We can assume $e > 0$ (otherwise the result is vacuously true), and $e<n$ (because the $e=n$ case was already considered in \[s.particular\]).
Notice that $X \subset R_{n-1}$. Let $$X_k := X \cap \hat{R}_k \quad \text{and} \quad
Y_k := \pi_k(X_k) , \quad \text{for } 0 \le k \le \min(m,n-1).$$ For every $k$ with $e \le k < n$, the set $Y_k$ is disjoint from the set $S_k(E)$ defined by \eqref{e.special schubert}. In view of \[l.projection\] and \[t.schubert\], we have $$\operatorname{codim}_{\hat{R}_k} X_k = \operatorname{codim}Y_k \ge k + 1 - e \, .$$ So the codimension of $X_k$ as a subset of ${\mathrm{Mat}}_{n\times m}({\mathbb{C}})$ is $$\begin{aligned}
\operatorname{codim}X_k &= \operatorname{codim}\hat{R}_k + \operatorname{codim}_{\hat{R}_k} X_k \\
&\ge (m-k)(n-k) + k + 1 - e =: f(k) \, .\end{aligned}$$ One checks that the function $f(k)$ is decreasing on the interval $0 \le k \le \min(m,n-1)$. Therefore: $$\begin{gathered}
\operatorname{codim}X
= \min_{0 \le k \le \min(m,n-1)} \operatorname{codim}X_k
\ge \min_{0 \le k \le \min(m,n-1)} f(k) \\
= f(\min(m,n-1))
= m + 1 - e,\end{gathered}$$ as claimed. This proves \[t.main\] modulo \[t.schubert\].
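The monotonicity claim for $f$ follows from $f(k+1)-f(k) = 2-(m-k)-(n-k) \le -1$ on the relevant range, and one can also confirm it by brute force. The sketch below (illustrative parameter ranges, not from the paper) checks both that $f$ is non-increasing and that its minimum equals $m+1-e$; since $e$ only shifts $f$ by a constant, we take $e=0$.

```python
# f(k) = (m - k)(n - k) + k + 1 - e on 0 <= k <= min(m, n - 1); check with e = 0
for m in range(1, 20):
    for n in range(2, 20):
        K = min(m, n - 1)
        f = [(m - k) * (n - k) + k + 1 for k in range(K + 1)]
        assert all(f[k] >= f[k + 1] for k in range(K)), (m, n)  # decreasing
        assert f[K] == m + 1, (m, n)  # min f = m + 1, i.e. m + 1 - e in general
```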
The proof of \[t.schubert\] will be given in \[s.end\], after we explain the necessary tools in \[s.schubert,s.intersection\].
Schubert calculus {#s.schubert}
=================
Here we will outline some facts about the intersection of Schubert varieties. The readable expositions [@Blasiak; @Vakil] contain more information.
A (complete) flag in ${\mathbb{C}}^{n}$ is a sequence of subspaces $F_0 \subset F_1 \subset \cdots \subset F_{n}$ with $\dim F_j = j$. We denote $F_\bullet = \{F_i\}$.
Given $V \in G_k ({\mathbb{C}}^n)$, its *rank table* (with respect to the flag $F_\bullet$) is the data $\dim (V \cap F_j)$, $j=0,\dots,n$. The *jumping numbers* are the indexes $j \in \{1,\dots,n\}$ such that $\dim (V \cap F_j) - \dim (V \cap F_{j-1})$ is positive (and thus equal to $1$). Of course, if one knows the jumping numbers, one knows the rank table and vice-versa. Let us define a third way to encode this information: Consider a rectangle of height $k$ and width $n-k$, divided into $1 \times 1$ squares. We form a path of square edges: Start at the northeast corner of the rectangle. In the $j^\text{th}$ step ($1 \le j \le n$), if $j$ is a jumping number then we move one unit in the south direction, otherwise we move one unit in the west direction. Since there are exactly $k$ jumping numbers, the path ends at the southwest corner of the rectangle. The *Young diagram* of $V$ with respect to the flag $F_\bullet$ is the set of squares in the rectangle that lie northwest of the path. We denote a Young diagram by $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_k)$, where $\lambda_i$ is the number of squares in the $i^\text{th}$ row (from north to south). Its *area* $\lambda_1+\cdots+\lambda_k$ is denoted by $|\lambda|$.
\[ex.Young\] Here is a possible rank table with $k=5$, $n=12$; the jumping numbers are underlined:
  ---------------------- --- --- --- ----------------- --- --- ----------------- --- ----------------- ----------------- ---- ------------------ ----
  $j = $                  0   1   2   $\underline{3}$   4   5   $\underline{6}$   7   $\underline{8}$   $\underline{9}$   10   $\underline{11}$   12
  $\dim (V \cap F_j)=$    0   0   0   1                 1   1   2                 2   3                 4                 4    5                  5
  ---------------------- --- --- --- ----------------- --- --- ----------------- --- ----------------- ----------------- ---- ------------------ ----
The associated path in the $5 \times 7$ rectangle goes south at steps $3,6,8,9,11$ and west at the remaining steps, and so the Young diagram is $$\lambda=\tiny{\yng(5,3,2,2,1)} = (5,3,2,2,1).$$
In general, we have:
- $\lambda = (\lambda_1, \dots, \lambda_k)$ is a possible Young diagram if and only if $n-k \ge \lambda_1 \ge \dots \ge \lambda_k \ge 0$.
- If $j_1 < \dots < j_k$ are the jumping numbers then $\lambda_i = n-k-j_i+i$.
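This dictionary between rank tables, jumping numbers and Young diagrams is easy to mechanize; the following sketch (purely illustrative, using the data of the example above with $n=12$, $k=5$) implements the formula from the second bullet point.

```python
def jumps_from_rank_table(dims):
    """dims[j] = dim(V cap F_j) for j = 0..n; return the jumping numbers."""
    return [j for j in range(1, len(dims)) if dims[j] > dims[j - 1]]

def young_from_jumps(n, k, jumps):
    """lambda_i = n - k - j_i + i, for jumping numbers j_1 < ... < j_k."""
    assert len(jumps) == k
    return tuple(n - k - j + i for i, j in enumerate(jumps, start=1))

# rank table of the example (n = 12, k = 5)
dims = [0, 0, 0, 1, 1, 1, 2, 2, 3, 4, 4, 5, 5]
jumps = jumps_from_rank_table(dims)   # the jumping numbers 3, 6, 8, 9, 11
lam = young_from_jumps(12, 5, jumps)  # the Young diagram (5, 3, 2, 2, 1)
```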
The set of $V \in G_k({\mathbb{C}}^n)$ that have a given Young diagram $\lambda$ is called a *Schubert cell*, denoted by $\Omega(\lambda)$ or $\Omega(\lambda,F_\bullet)$. Each Schubert cell is a topological disk of real codimension $2|\lambda|$. The Schubert cells (for a fixed flag) give a CW decomposition of the space $G_k({\mathbb{C}}^n)$. The closure of $\Omega(\lambda)$ (in either the classical or the Zariski topology) is the set of $V \in G_k({\mathbb{C}}^n)$ such that $\dim (V \cap F_{j_i}) \ge i$ for each $i=1,\ldots,k$ (where $j_1 < \dots < j_k$ are the jumping numbers associated to $\lambda$). These sets are closed irreducible varieties, called *Schubert varieties*. (See e.g. [@Fulton §9.4].)
\[ex.special schubert\] If $E \subset {\mathbb{C}}^n$ is a subspace with $\dim E = e \le k$ then the set $S_k(E)$ defined by is a Schubert variety $\bar\Omega(\lambda,F_\bullet)$, where $F_\bullet$ is any flag with $F_e = E$ and $$\label{e.special young}
\lambda = \big( \underbrace{n-k,\dots,n-k}_{e \text{ times}}, \underbrace{0,\dots,0}_{k-e \text{ times}} \big) =
\raisebox{-4\unitlength}{
\begin{picture}(12,8)
\thinlines
\put(0,0){\grid(12,8)(1,1)}
\multiput(0,5)(0,1){3}{\multiput(0,0)(1,0){12}{{\drawline(.5,0)(1,.5)\drawline(0,0)(1,1)\drawline(0,.5)(.5,1)}}}
\end{picture}
}$$
Let $A^*(k,n)$ denote the set of formal linear combinations with integer coefficients of Young diagrams in the $k \times (n-k)$ rectangle. This is by definition a free abelian group.
There is a second operation ${\smallsmile}$, called the *cup product*, that makes $A^*(k,n)$ a commutative ring; it is characterized by the following properties:
If $\lambda$ and $\mu$ are Young diagrams with respective areas $r$ and $s$ then their cup product is of the form $$\lambda {\smallsmile}\mu = \nu_1 + \cdots + \nu_N \, ,$$ where $\nu_1$, …, $\nu_N$ are Young diagrams with area $r+s$ (possibly with repetitions, possibly $N=0$). Moreover, there are flags $F_\bullet$, $G_\bullet$, $H^{(i)}_\bullet$ such that the varieties $\bar\Omega(\lambda,F_\bullet)$ and $\bar\Omega(\mu,G_\bullet)$ are transverse and their intersection is $\bigcup_i \bar\Omega(\nu_i,H^{(i)}_\bullet)$.
Working in $A^*(2,4)$, let us compute the products of the Young diagrams $\lambda = {\tiny \yng(2)}$ and $\mu={\tiny \yng(1,1)}$. Fix a flag $F_\bullet$. Then $\bar\Omega(\lambda, F_\bullet)$ is the set of $W \in G_2({\mathbb{C}}^4)$ that contain $F_1$, and $\bar\Omega(\mu, F_\bullet)$ is the set of $W \in G_2({\mathbb{C}}^4)$ that are contained in $F_3$. Take another flag $G_\bullet$ which is in general position with respect to $F_\bullet$, that is $F_i \cap G_{4-i} = \{0\}$. Then:
- The set $\bar\Omega(\lambda, F_\bullet) \cap \bar\Omega(\lambda, G_\bullet)$ contains a single element, namely $F_1 \oplus G_1$, and thus equals $\bar\Omega((2,2),H_\bullet) = \{H_2\}$ for an appropriate flag $H_\bullet$. This shows that $\lambda {\smallsmile}\lambda = {\tiny \yng(2,2)}$.
- The space $F_3 \cap G_3$ is $2$-dimensional and thus is the single element of $\bar\Omega(\mu, F_\bullet) \cap \bar\Omega(\mu, G_\bullet)$. So $\mu {\smallsmile}\mu = {\tiny \yng(2,2)}$.
- The set $\bar\Omega(\lambda, F_\bullet) \cap \bar\Omega(\mu, G_\bullet)$ is empty, thus $\lambda {\smallsmile}\mu = 0$.
However, if we work in $A^*(4,8)$ then it can be shown that: $${\tiny \yng(2)} {\smallsmile}{\tiny \yng(2)} = {\tiny \yng(2,2)} + {\tiny \yng(4)} + {\tiny \yng(3,1)}, \quad
{\tiny \yng(1,1)} {\smallsmile}{\tiny \yng(1,1)} = {\tiny \yng(2,2)} + {\tiny \yng(2,1,1)} + {\tiny \yng(1,1,1,1)}, \quad
{\tiny \yng(2)} {\smallsmile}{\tiny \yng(1,1)} = {\tiny \yng(3,1)} + {\tiny \yng(2,1,1)}.$$ If we drop the terms that do not fit in a $2 \times 2$ rectangle, we reobtain the results for $G_2({\mathbb{C}}^4)$.
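The products by the one-row diagram ${\tiny \yng(2)}$ above are instances of Pieri's rule (a standard fact not proved in the text): $\sigma_{(p)} {\smallsmile} \sigma_\lambda$ is the sum of $\sigma_\mu$ over all $\mu$ obtained from $\lambda$ by adding $p$ squares, no two in the same column, inside the rectangle. A Python sketch (our code, handling multiplication by a one-row diagram only):

```python
def pieri_row(p, lam, k, n):
    """Multiply sigma_lam by sigma_(p) in A*(k, n) via Pieri's rule:
    sum over mu obtained from lam by adding a horizontal strip of
    p squares (no two in the same column) inside the k x (n-k) box."""
    lam = tuple(lam) + (0,) * (k - len(lam))
    out = []

    def build(i, mu, left):
        if i == k:
            if left == 0:
                out.append(tuple(mu))
            return
        # row i may grow up to the box width (i = 0) or, by the
        # horizontal-strip condition mu_{i+1} <= lam_i, up to lam[i-1]
        hi = n - k if i == 0 else lam[i - 1]
        for m in range(lam[i], hi + 1):
            if m - lam[i] <= left:
                build(i + 1, mu + [m], left - (m - lam[i]))

    build(0, [], p)
    return sorted(out, reverse=True)

# In A*(4, 8):  (2) ⌣ (2) = (4) + (3,1) + (2,2)
print(pieri_row(2, (2,), 4, 8))
# → [(4, 0, 0, 0), (3, 1, 0, 0), (2, 2, 0, 0)]
```

Restricting to $A^*(2,4)$ (drop the terms that do not fit in the $2 \times 2$ rectangle) recovers $\lambda {\smallsmile} \lambda = {\tiny \yng(2,2)}$ from the previous example.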
The general computation of the product $\lambda {\smallsmile}\mu$ is not simple and can be done in various ways – see e.g. [@Vakil; @Fulton].[^1] For our purposes, however, it will be sufficient to know whether the product is zero or not. The answer is provided by the following simple lemma[^2]:
\[l.overlap\] Let $\lambda$ and $\mu$ be Young diagrams in the $k \times (n-k)$ rectangle. The following two conditions are equivalent:
1. \[i.nonzero\] $\lambda {\smallsmile}\mu \neq 0$.
2. \[i.nonoverlap\] If one draws inside the $k \times (n-k)$ rectangle the Young diagrams of $\lambda$ and $\mu$, the latter rotated by $180^{\circ}$ and placed in the southeast corner, then the two figures do not overlap (see \[f.nooverlap\]). Equivalently, $\lambda_i + \mu_{k+1-i} \le n-k$ for every $i=1, \ldots, k$.
(Figure \[f.nooverlap\]: the Young diagram of $\lambda$ drawn from the northwest corner and that of $\mu$, rotated by $180^{\circ}$, drawn from the southeast corner of the $k \times (n-k)$ rectangle; the two hatched regions do not overlap.)
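Condition \[i.nonoverlap\] is easy to test mechanically. Here is a Python sketch (our code) checked against the $G_2({\mathbb{C}}^4)$ computations above:

```python
def cup_nonzero(lam, mu, k, n):
    """Criterion of the lemma: sigma_lam ⌣ sigma_mu != 0 in A*(k, n)
    iff lam_i + mu_{k+1-i} <= n-k for every i = 1, ..., k."""
    lam = tuple(lam) + (0,) * (k - len(lam))   # pad with empty rows
    mu = tuple(mu) + (0,) * (k - len(mu))
    return all(lam[i] + mu[k - 1 - i] <= n - k for i in range(k))

# the computations in G_2(C^4) above:
assert cup_nonzero((2,), (2,), 2, 4)         # (2) ⌣ (2) = (2,2) != 0
assert cup_nonzero((1, 1), (1, 1), 2, 4)     # (1,1) ⌣ (1,1) = (2,2) != 0
assert not cup_nonzero((2,), (1, 1), 2, 4)   # (2) ⌣ (1,1) = 0
```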
Intersection of subvarieties of the Grassmannian {#s.intersection}
================================================
Next we explain how the Schubert calculus sketched above can be used to obtain information about intersections of general subvarieties of the Grassmannian, by means of cohomology and Poincaré duality. Our primary source is [@Fulton Appendix B]; also, [@Hutchings] is a very readable account of the geometric interpretation of the cup product in cohomology.
Any topological space $X$ has singular homology groups $H_i X$ and cohomology groups $H^i X$ (here taken always with integer coefficients). With the cup product $H^i X \times H^j X \to H^{i+j} X$, the cohomology $H^* X = \bigoplus H^i X$ has a ring structure.
If $X$ is a real compact oriented manifold of dimension $d$ then the homology group $H_d X$ is canonically isomorphic to ${\mathbb{Z}}$, with a generator $[X]$ called the *fundamental class* of $X$. In addition, there is a *Poincaré duality isomorphism* $H^i X \to H_{d-i} X$, given by $\alpha \mapsto \alpha \smallfrown [X]$ (taking the cap product with the fundamental class). Let us denote by $\omega \mapsto \omega^*$ the inverse isomorphism.
Next suppose $Y$ and $Z$ are compact oriented submanifolds of $X$, of codimensions $i$ and $j$ respectively. Also suppose that $Y$ and $Z$ have transverse intersection $Y \cap Z$, which therefore is either empty or a compact submanifold of codimension $i + j$, which is oriented in a canonical way. The images of the fundamental classes of $Y$, $Z$, and $Y\cap Z$ under the inclusions into $X$ define homology classes that we denote (with a slight abuse of notation) by $[Y] \in H_{d-i} X$, $[Z] \in H_{d-j} X$, $[Y\cap Z] \in H_{d-i-j} X$. Then their Poincaré duals $[Y]^*\in H^i X$, $[Z]^* \in H^j X$, and $[Y \cap Z]^* \in H^{i+j} X$ are related by: $$[Y]^* {\smallsmile}[Z]^* = [Y \cap Z]^* \, .$$ That is, *cup product is Poincaré dual to intersection.*
Now consider the case where $X$ is a projective nonsingular (i.e., smooth) complex variety, and $Y$ and $Z$ are irreducible subvarieties of $X$. Obviously, the fundamental class $[X]$ makes sense, because $X$ is a compact manifold with a canonical orientation induced from the complex structure. A deeper fact (see [@Fulton Appendix B]) is that fundamental classes $[Y]$ and $[Z]$ can also be canonically associated to the (possibly singular) subvarieties $Y$ and $Z$, and the Poincaré duality between cup product and intersection works in this situation. More precisely, suppose that $Y$ and $Z$ are transverse in the algebraic sense: $Y \cap Z$ is a union of subvarieties $W_1$, …, $W_\ell$ whose codimensions are the sum of the codimensions of $Y$ and $Z$, and for each $i=1,\dots,\ell$, the tangent spaces $T_w Y$ and $T_w Z$ are transverse for all $w$ in a Zariski-open subset of $W_i$. Then each $W_i$ has its canonical fundamental class, and the following duality formula holds: $$[Y]^* {\smallsmile}[Z]^* = [W_1]^* + \cdots + [W_\ell]^* \, .$$
In our application of this machinery, $X$ will be the Grassmannian $G_k({\mathbb{C}}^n)$. In this case:
- The fundamental classes of the Schubert varieties $[\bar\Omega(\lambda, F_\bullet)]$ do not depend on the flag $F_\bullet$.
- Let $\sigma_\lambda$ denote the Poincaré dual of $[\bar\Omega(\lambda, F_\bullet)]$. Then $H^{2r} G_k({\mathbb{C}}^n)$ is a free abelian group, and the elements $\sigma_\lambda$ with $|\lambda| = r$ form a basis. (The cohomology groups of odd degree are zero.)
- The cup product on cohomology agrees with the “cup” product of Young diagrams explained in the previous section.
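In particular, the Betti numbers of $G_k({\mathbb{C}}^n)$ can be read off by counting Young diagrams by area; a short Python check (our code):

```python
from itertools import combinations
from collections import Counter

def betti(k, n):
    """Rank of H^{2r} G_k(C^n) for r = 0, ..., k(n-k): the number of
    Young diagrams of area r in the k x (n-k) rectangle
    (odd-degree cohomology vanishes)."""
    areas = Counter()
    for js in combinations(range(1, n + 1), k):   # jumping numbers
        areas[sum(n - k - j + i for i, j in enumerate(js, 1))] += 1
    return [areas[r] for r in range(k * (n - k) + 1)]

# G_2(C^4): one diagram each of area 0, 1, 3, 4 and two of area 2
print(betti(2, 4))
# → [1, 1, 2, 1, 1]
```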
End of the proof {#s.end}
================
We are now able to prove \[t.schubert\].[^3]
Let $1 \le e \le k < n$. Let $E \subset {\mathbb{C}}^n$ be a subspace of dimension $e$, and consider the set $S_k(E)$ defined earlier. Recall from \[ex.special schubert\] that this is the Schubert variety for the Young diagram $\lambda$ given by \[e.special young\].
Now consider a (nonempty) subvariety $Y \subset G_k({\mathbb{C}}^n)$ that is disjoint from $S_k(E)$. We want to give a lower bound for the codimension $c$ of $Y$. We can of course assume that $Y$ is irreducible.
Let $[Y]^*$ be the Poincaré dual of the fundamental class of $Y$. This is a nonzero element of $H^{2c} G_k({\mathbb{C}}^n)$. It can be expressed as $\sum n_i \sigma_{\mu_i}$, where the $\mu_i$ are Young diagrams with area $|\mu_i|=c$, and the $n_i$ are nonzero integers. In fact we have $n_i>0$, because of the canonical orientations induced by the complex structure. Since the intersection between $S_k(E)$ and $Y$ is empty (and in particular transverse), Poincaré duality gives $[S_k(E)]^* {\smallsmile}[Y]^* = 0$. Therefore we have $\sigma_\lambda {\smallsmile}\sigma_{\mu_i} = 0$ for each $i$.
By \[l.overlap\], if we draw the Young diagram of $\mu_i$ rotated by $180^{\circ}$ and placed in the southeast corner of the $k \times (n-k)$ rectangle, then it overlaps the Young diagram $\lambda$ of \[e.special young\]. This is only possible if $c \ge k-e+1$; indeed the Young diagram $\mu$ with least area such that $\lambda {\smallsmile}\mu = 0$ is $$\mu = \big( \underbrace{1,\dots,1}_{k-e+1 \text{ times}}, \underbrace{0,\dots,0}_{e-1 \text{ times}} \big),$$ for which the overlapping picture becomes:
(Figure: the diagram $\lambda$ fills the top $e$ rows of the $k \times (n-k)$ rectangle, and the rotated $\mu$ is a single column of $k-e+1$ squares at the right edge; the two overlap in exactly one square.)
This concludes the proof of \[t.schubert\].
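The bound $c \ge k-e+1$ can also be verified numerically by brute force over all diagrams $\mu$ in the rectangle, using the overlap criterion of \[l.overlap\] (Python sketch, our code):

```python
from itertools import combinations

def min_area_overlapping(k, n, e):
    """Smallest area |mu| with sigma_lam ⌣ sigma_mu = 0, where
    lam = (n-k, ..., n-k) with e rows; by the proof this is k-e+1."""
    lam = (n - k,) * e + (0,) * (k - e)
    best = None
    for js in combinations(range(1, n + 1), k):   # all diagrams mu
        mu = tuple(n - k - j + i for i, j in enumerate(js, 1))
        # overlap (product zero) iff some lam_i + mu_{k+1-i} > n-k
        if any(lam[i] + mu[k - 1 - i] > n - k for i in range(k)):
            area = sum(mu)
            best = area if best is None else min(best, area)
    return best

assert min_area_overlapping(5, 12, 2) == 5 - 2 + 1 == 4
```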
As explained in \[ss.reduction\], \[t.main\] follows.
[BG2]{}
<span style="font-variant:small-caps;">Blasiak, J.</span> Cohomology of the complex Grassmannian. [www-personal.umich.edu/\~jblasiak/grassmannian.pdf](http://www-personal.umich.edu/~jblasiak/grassmannian.pdf)
<span style="font-variant:small-caps;">Bochi, J.; Gourmelon, N.</span> Universal regular control for generic semilinear systems. [Preprint [arXiv:[1201.1672]{}](http://arxiv.org/abs/1201.1672)]{}
<span style="font-variant:small-caps;">Bochi, J.; Gourmelon, N.</span> Transitivity of spaces of matrices. In preparation.
<span style="font-variant:small-caps;">Fulton, W.</span> *Young tableaux. With applications to representation theory and geometry.* Cambridge Univ. Press, 1997.
<span style="font-variant:small-caps;">Harris, J.</span> *Algebraic geometry: a first course.* Springer, 1992.
<span style="font-variant:small-caps;">Hutchings, M.</span> *Cup product and intersection.* Course notes. [math.berkeley.edu/\~hutching/teach/215b-2011/cup.pdf](http://math.berkeley.edu/~hutching/teach/215b-2011/cup.pdf)
<span style="font-variant:small-caps;">Mumford, D.</span> *Algebraic geometry. I. Complex projective varieties.* Springer, 1976.
<span style="font-variant:small-caps;">Vakil, R.</span> A geometric Littlewood-Richardson rule. *Ann. of Math. *164 (2006), no. 2, 371–421.
[^1]: Here is an online calculator: [young.sp2mi.univ-poitiers.fr/cgi-bin/form-prep/marc/LiE\_form.act?action=LRR](http://young.sp2mi.univ-poitiers.fr/cgi-bin/form-prep/marc/LiE_form.act?action=LRR)
[^2]: In [@Vakil] condition \[i.nonoverlap\] of the lemma is expressed as “the white checkers are happy”.
[^3]: Probably the result could also be proved using the Chow ring, but we feel more comfortable with cohomology.
---
author:
- |
Deeksha Sinha\
Operations Research Center\
Massachusetts Institute of Technology\
email: [deeksha@mit.edu](deeksha@mit.edu)
- |
Theja Tulabandhula\
Information and Decision Sciences\
University of Illinois at Chicago\
email: [tt@theja.org](tt@theja.org)
bibliography:
- 'assort\_lsh.bib'
date: 'April 30, 2018'
title: 'Optimizing Revenue over Data-driven Assortments'
---
---
abstract: 'Magnetic fluctuations with a zero mean field in a random flow with a finite correlation time and a small yet finite magnetic diffusion are studied. An equation for the second-order correlation function of a magnetic field is derived. This equation comprises spatial derivatives of high orders due to the non-local nature of magnetic field transport in a random velocity field with a finite correlation time. For a random Gaussian velocity field with a small correlation time the equation for the second-order correlation function of the magnetic field is a third-order partial differential equation. For this velocity field and a small magnetic diffusion with large magnetic Prandtl numbers the growth rate of the second moment of magnetic field is estimated. The finite correlation time of a turbulent velocity field causes an increase of the growth rate of magnetic fluctuations. It is demonstrated that the results obtained for the cases of a small yet finite magnetic diffusion and a zero magnetic diffusion are different. Astrophysical applications of the obtained results are discussed.'
author:
- Nathan Kleeorin
- Igor Rogachevskii
- Dmitry Sokoloff
date: 'Received 6 August 2001; published 12 February 2002'
title: Magnetic fluctuations with a zero mean field in a random fluid flow with a finite correlation time and a small magnetic diffusion
---
**PHYSICAL REVIEW E, v. 65, 036303 (2002)**
Introduction
============
In recent years, magnetic fluctuations have been a subject of intensive study (see, e.g., [@ZMR88; @ZRS90; @CG95; @KA92; @GD94; @KRA94; @GS96; @RK97; @KR99; @SK01]). There are two types of magnetic fluctuations: fluctuations with a zero and with a nonzero mean magnetic field. These two types of magnetic fluctuations have different mechanisms of generation and different properties. Magnetic fluctuations with a zero mean magnetic field in a random velocity field are generated by the stretch-twist-fold mechanism (see, e.g., [@ZMR88; @ZRS90]). On the other hand, magnetic fluctuations with a nonzero mean magnetic field are generated by a tangling of the mean magnetic field by a random velocity field (see, e.g., [@M78; @P79; @KR80; @ZRS83]).
In the present paper we considered only magnetic fluctuations with a zero mean magnetic field, which were observed, [*e.g.,*]{} in the ionosphere of Venus (see, e.g., [@RE79; @KRE94]), in the quiet sun (see, e.g., [@ZRS83]) and probably in galaxies (see, e.g., [@RSS88]). Although the dynamics of a mean magnetic field, at least in the kinematic (linear) stage, is well studied (see, e.g., [@M78; @P79; @KR80; @ZRS83; @RSS88]), the generation of magnetic fluctuations with a zero mean magnetic field even in the kinematic stage still remains a subject of numerous discussions. Most studies, starting with a seminal paper by Kazantsev [@K68], were performed in the delta-correlated-in-time approximation for a random velocity field (see, e.g., [@ZMR88; @ZRS90; @RK97; @KR99], and references therein). The use of the delta-correlated-in-time approximation for a random velocity field is a great mathematical convenience.
However, a real velocity field in astrophysical and geophysical applications cannot be considered as delta-correlated in time. As follows from the analysis in [@DM84; @EKRS99], a finite correlation time of the velocity field does not essentially change the form of the mean-field equations and the growth rates of the mean fields. In particular, there is a wide range of scales in which the mean-field equations are second-order partial differential equations (in spatial derivatives). However, the effect of a finite correlation time of the velocity field on magnetic fluctuations is poorly understood. It is not clear how the conditions for the generation of magnetic fluctuations are changed in a random velocity field with a finite correlation time.
In this study we took into account a finite correlation time of a random velocity field and a small yet finite magnetic diffusion caused by an electrical conductivity of fluid. We derived an equation for the second-order correlation function of the magnetic field in a random velocity field with a finite correlation time using a method described in [@DM84; @EKRS99; @EKRS00]. The derived equation comprises spatial derivatives of high orders. For a random Gaussian velocity field with a small correlation time the equation for the second-order correlation function of the magnetic field is a third-order partial differential equation. We calculated the growth rate of the second moment of magnetic field for this velocity field and a small magnetic diffusion with large magnetic Prandtl numbers. In the limit of extremely small correlation time of a random velocity field we recovered the results obtained in the delta-correlated-in-time approximation for a random velocity field.
Recently, the finite correlation time effects of a random velocity field in the kinematic dynamo in the case of a zero magnetic diffusion have been studied in [@SK01]. We will show that the results obtained for the cases of a zero magnetic diffusion and of a small yet finite magnetic diffusion are different.
Governing equations
===================
We study magnetic fluctuations with a zero mean magnetic field. A mechanism of the generation of magnetic fluctuations with a zero mean magnetic field was proposed by Zeldovich (see, e.g., [@ZMR88; @ZRS90]) and comprises stretching, twisting and folding of the original loop of a magnetic field. These non-trivial motions are three dimensional and result in an amplification of the magnetic field. The magnetic field $ {\bf b}(t,{\bf r}) $ is determined by the induction equation $$\begin{aligned}
{\partial {\bf b} \over \partial t} + ({\bf v} \cdot {\mbox{\boldmath $ \bf \nabla$}})
{\bf b} = ({\bf b} \cdot {\mbox{\boldmath $ \bf \nabla$}}) {\bf v} - {\bf b}
({\mbox{\boldmath $ \bf \nabla$}} \cdot {\bf v}) + D_{m} \Delta {\bf b} \;,
\label{T1}\end{aligned}$$ where $ D_{m} $ is the magnetic diffusion caused by an electrical conductivity of a fluid, $ {\bf v} $ is a random velocity field. The goal of the present paper is to derive equation for the second-order correlation function of the magnetic field in a random velocity field with a finite correlation time.
Now we discuss a method of derivation of the equation for the second-order correlation function of the magnetic field (for details, see Appendix A). We use an exact solution of Eq. (\[T1\]) in the form of a functional integral for an arbitrary velocity field, taking into account a small yet finite molecular magnetic diffusion. The molecular magnetic diffusion can be described by the random Brownian motion of a particle, and the functional integral implies an averaging over this Brownian motion. The form of the exact solution used in the present paper allows us to separate the averaging over both the random Brownian motion of a particle and the random velocity field. This method allows us to derive the equation for the second-order correlation function $ \Phi_{ij}(t, {\bf x}, {\bf y}) = \langle b_{i}(t,{\bf x})
b_{j}(t, {\bf y}) \rangle $ of the magnetic field: $$\begin{aligned}
&& \Phi_{ij}(t, {\bf r}) = P_{ijpl}(\tau,{\bf r}, i {\mbox{\boldmath $ \nabla$}})
\Phi_{pl}(s, {\bf r}) \;,
\label{A51}\\
&& P_{ijpl}(\tau,{\bf r}, i {\mbox{\boldmath $ \nabla$}}) = M_{{\mbox{\boldmath $ \xi$}}} \{
\langle G_{ip}({\bf x}) G_{jl}({\bf y}) \exp({\mbox{\boldmath $ \tilde \xi$}}
\cdot {\mbox{\boldmath $ \nabla$}}) \rangle \} \label{A52}\end{aligned}$$ (see Appendix A), where $ \tau = t - s ,$ $ \, G_{ij}({\bf x})
\equiv G_{ij}(t,s, {\mbox{\boldmath $ \xi$}}({\bf x})) $ is determined by equation $ d G_{ij}(t,s,{\mbox{\boldmath $ \xi$}}) / ds = N_{ik} G_{kj}(t,s,{\mbox{\boldmath $ \xi$}}) $ with the initial condition $ G_{ij}(t=s) = \delta_{ij} ,$ and the tensor $ G_{ij} $ can be considered as the Jacobian for magnetic field transport. Here $ N_{ij} = \partial v_{i} /
\partial x_{j} - \delta_{ij} ({\mbox{\boldmath $ \bf \nabla$}} \cdot {\bf v}) ,$ $ \, M_{{\mbox{\boldmath $ \xi$}}} \{ \cdot \} $ denotes the mathematical expectation over the Wiener paths $ {\mbox{\boldmath $ \xi$}}({\bf x}) = {\bf x} - \int_{0}^{t-s}
{\bf v}(t-\sigma,{\mbox{\boldmath $ \xi$}}) \,d\sigma + (2 D_{m})^{1/2} {\bf w}(t-s) ,$ and $ {\mbox{\boldmath $ \tilde \xi$}} = {\mbox{\boldmath $ \xi$}}({\bf y})
- {\mbox{\boldmath $ \xi$}}({\bf x}) - {\bf r} ,$ $ \, {\bf r} = {\bf y} - {\bf x} ,$ $ \, {\mbox{\boldmath $ \nabla$}} = \partial / \partial {\bf r} ,$ the angular brackets $ \langle \cdot \rangle $ denote the ensemble average over the random velocity field, and the molecular magnetic diffusion, $ D_{m} ,$ is described by a Wiener process $ {\bf w}(t) .$ Another equivalent approach which includes a weak molecular diffusion in a Lagrangian map, with a Green’s function was considered in [@V88; @V89].
Equation (\[A51\]) for the second moment of a magnetic field comprises spatial derivatives of high orders due to a non-local nature of turbulent transport of magnetic field in a random velocity field with a finite correlation time (for details, see Appendix A).
The random Gaussian velocity field with a small correlation time
================================================================
Now we use the model of the random Gaussian velocity field with a small yet finite correlation time. We seek a solution for the second moment of the magnetic field in the form $$\begin{aligned}
\Phi_{ij}(t,{\bf r}) &\equiv& \langle b_i(t,{\bf x}) b_j(t,{\bf
y}) \rangle = W(t,r) \delta_{ij}
\nonumber\\
& & + (r W' / 2) P_{ij}({\bf r}) \;, \label{D7}\end{aligned}$$ where $ W(t,r) = \langle \tilde b(t,{\bf x}) \tilde b(t,{\bf y})
\rangle ,$ $ \, \tilde b = {\bf b} \cdot {\bf r} ,$ $ \, {\bf r} = {\bf y} - {\bf x} ,$ $ \, P_{ij}({\bf r}) = \delta_{ij} - r_{ij} ,$ $ \, r_{ij} = r_{i} r_{j} / r^{2} $ and $ W' = \partial W(t,r) / \partial r .$ This form of the second moment corresponds to the condition $ {\mbox{\boldmath $ \nabla$}} \cdot {\bf b} = 0 $ and an assumption of the homogeneous and isotropic magnetic fluctuations. We considered a homogeneous, isotropic and incompressible random velocity field (see below). The equation for the correlation function $ W(t,r) $ is given by $$\begin{aligned}
{\partial W(t,r) \over \partial t} &=& (1/3) \, \sigma_{_{\xi}} \,
r^{3} \, W''' + m^{-1}(r) \, W''
\nonumber\\
& & + \mu(r) \, W' + \kappa \, W \;, \label{D8}\end{aligned}$$ (for details, see Appendix B), where in the leading order of asymptotic expansion $ \kappa = (20/3) (1 + \sigma_{_{\xi}}/4) ,$ and $$\begin{aligned}
1 / m(r) &=& 2 / \Pr + (2/3) r^{2}(1 + 8 \sigma_{_{\xi}}) \;,
\\
\mu(r) &=& {4 \over m(r) r} + \biggl( {1 \over m(r)} \biggr)' - 27
\, \sigma_{_{\xi}} \, r \;,\end{aligned}$$ $ \Pr = \nu / D_{m} $ is the magnetic Prandtl number, $ \nu $ is the kinematic viscosity, $ \sigma_{_{\xi}} = (2/3) {\rm St}^{2} ,$ $ \, {\rm St} = \tau u_{d} / l_{d} $ is the Strouhal number. Equation (\[D8\]) is written in dimensionless form: the distance $ r $ is measured in the units of the inner scale of turbulence $ l_{d} = l_{0} {\rm Re}^{-3/4} ,$ the time $ t $ is measured in the units $ \tau_{d} = \tau_{0} {\rm Re}^{-1/2} ,$ where $ \tau_{d} $ is the turnover time of eddies in the inner scale $ l_{d} $ and the velocity $ v $ is measured in the units $ u_{d} = l_{d} / \tau_{d} ,$ $ {\rm Re} = u_0 l_0 / \nu \gg 1 $ is the Reynolds number, $ u_{0} $ is the characteristic turbulent velocity in the maximum scale of turbulent motions $ l_{0} $ and $ \tau_{0} = l_{0} / u_{0} .$ In this study we consider the case of large magnetic Prandtl numbers. For the derivation of Eq. (\[D8\]) we used a homogeneous, isotropic and incompressible random velocity field and the correlation function $ f_{ij}(t,{\bf r}) = \langle v_{i}(t,{\bf x})
v_{j}(t,{\bf y}) \rangle $ for the velocity field is given by $$\begin{aligned}
f_{ij} = (1/3) [F(r) \delta_{ij} + (r F' / 2) P_{ij}({\bf r})] \; .
\label{D6}\end{aligned}$$ We assumed that in dissipative range $ (0 \leq r \leq 1) $ of a turbulent velocity field the function $ F(r) $ is given by $ F(r) = 1 - r^{2} .$
Now we analyze the solution of Eq. (\[D8\]). In the molecular magnetic diffusion region of scales, where $ r \ll \Pr^{-1/2}$, all terms $\propto r^2$ may be neglected. Then the solution of Eq. (\[D8\]) is given by $ W(t,r) = (1 - \alpha \, \Pr \, r^{2}) \exp(\gamma t)$, where $ \gamma $ are the eigenvalues to be found, $ \alpha
= (\kappa - \gamma) / 20 $ and $\kappa > \gamma .$ In a turbulent magnetic diffusion region of scales, $\Pr^{-1/2} \ll
r \ll 1$, the molecular magnetic diffusion term $\propto 1/\Pr$ is negligible. Thus, the solution of Eq. (\[D8\]) in this region is $ W(t,r) = A_{1} r ^{-\lambda} \exp(\gamma t)$, where $ \lambda $ is determined by an equation $$\begin{aligned}
\sigma_{_{\xi}} \lambda^{3} &-& (3 + 13 \sigma_{_{\xi}})
\lambda^{2} + (15 - 1226 \sigma_{_{\xi}}) \lambda
\nonumber\\
&+& (9/2) \gamma - 30 - 5 \sigma_{_{\xi}} = 0 \; . \label{D9}\end{aligned}$$ For a small parameter $\sigma_{_{\xi}}$ we obtain $ \lambda \approx 5/2
- 424 \, \sigma_{_{\xi}} \pm i a_{0} ,$ where $ a_{0}^{2} =
3 (5 - 2 \gamma_{0}) / 4 ,$ $ \gamma =
\gamma_{0} + \sigma_{_{\xi}} \gamma_{1} $ and $ \gamma_{1} \approx 348 .$ For $r \gg 1$ the solution for $W(t,r)$ decays rapidly with $r .$ The value $ \gamma_0$ can be calculated by matching the correlation function $ W(t,r) $ and its first and second derivatives at the boundaries of the above regions, i.e., at the points $r = \Pr^{-1/2}$ and $r = 1$. In particular, the matching yields $ a_{0} \approx 2 \pi k / \ln \Pr ,$ where the parameter $ k = 1; 2; 3; \ldots $ determines modes with different numbers of zero-points $ (W = 0) $ in the correlation function $ W(r) .$ In particular, the mode with $ k=1 $ has only one zero-point in the correlation function $ W(r) .$ Thus, the growth rate $ \gamma $ of magnetic fluctuations is given by $$\begin{aligned}
\gamma \approx {5 \over 2} - {2 \over 3} \biggl({2 \pi k \over
\ln \Pr} \biggr)^{2} + 348 \, \sigma_{_{\xi}} \; .
\label{D10}\end{aligned}$$ The correlation function $ W(t,r) $ has global maximum at $ r = 0 .$ This implies that the real part of $ \lambda $ is positive. Thus, $ \tau < 0.1 \, (l_{d} / u_{d}) .$ It follows from Eq. (\[D10\]) that the finite correlation time of a turbulent velocity field causes an increase of the growth rate of magnetic fluctuations. The latter is important in view of applications in astrophysics and planetary physics because the real velocity field has a finite correlation time. Note that the considered case corresponds to the fast dynamo because the growth rate tends to the nonzero constant at very large magnetic Reynolds numbers.
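For concreteness, Eq. (\[D10\]) is easy to evaluate numerically. The following Python sketch (our code, with illustrative parameter values) shows that the growth rate increases both with the magnetic Prandtl number and with $\sigma_{_{\xi}}$, approaching the fast-dynamo value $5/2$ for $\sigma_{_{\xi}}=0$:

```python
from math import pi, log

def growth_rate(Pr, k=1, sigma_xi=0.0):
    """Growth rate of magnetic fluctuations, Eq. (D10):
    gamma ≈ 5/2 - (2/3)(2 pi k / ln Pr)^2 + 348 sigma_xi,
    valid for large magnetic Prandtl number Pr and small sigma_xi."""
    return 2.5 - (2.0 / 3.0) * (2 * pi * k / log(Pr))**2 + 348 * sigma_xi

# fast-dynamo limit: gamma -> 5/2 as Pr -> infinity (for sigma_xi = 0),
# and a finite correlation time (sigma_xi > 0) increases gamma
print(growth_rate(1e6), growth_rate(1e12), growth_rate(1e6, sigma_xi=0.001))
```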
Discussion
==========
In the present paper we studied the effect of a finite correlation time of a turbulent velocity field on the dynamics of magnetic fluctuations with a zero mean magnetic field in the case of a small yet finite magnetic diffusion. The finite correlation time results in an increase of the growth rate of magnetic fluctuations. However, the developed theory is limited by the assumption of a small correlation time, [*i.e.,*]{} $ \tau < 0.1 \, (l_{d} / u_{d}) .$ The latter estimate is quite realistic [*e.g.,*]{} for galactic turbulence (see Ref. [@RSS88]). We showed also that for an arbitrary correlation time of a turbulent velocity field the equation for the second moment of the turbulent magnetic field comprises higher-order spatial derivatives.
In this study we took into account a small yet finite magnetic diffusion caused by an electrical conductivity of a fluid. The obtained results are different from those derived for a zero magnetic diffusion (see [@SK01]). In particular, the finite correlation time of a turbulent velocity field reduces the growth rate of magnetic fluctuations in the case of a zero magnetic diffusion (see [@SK01]). A difference between the two cases of a zero magnetic diffusion and of a small yet finite magnetic diffusion can be demonstrated even for the $\delta$-correlated in time random velocity field. For instance, for large magnetic Prandtl numbers the growth rate of the second moment of a turbulent magnetic field is given by $$\begin{aligned}
\gamma = {5 (1 + \sigma/3) \over 2(1 + 3 \sigma)} -
{2 (1 + 3 \sigma) \over 3(1 + \sigma)} \biggl({2 \pi k \over
\ln \Pr} \biggr)^{2} \;,
\label{D12}\end{aligned}$$ where $ \sigma = \langle ({\mbox{\boldmath $ \nabla$}} \cdot {\bf v})^{2} \rangle /
\langle ({\mbox{\boldmath $ \nabla$}} \times {\bf v})^{2} \rangle $ is the degree of compressibility of fluid velocity field. Equation (\[D12\]) is obtained using Eqs. (29) and (30) in Ref. [@RK97] and implies that the compressibility of fluid velocity field causes a reduction of the growth rate of the second moment of a turbulent magnetic field. On the other hand, in the case of a zero magnetic diffusion the growth rate of the second moment of magnetic fluctuations generated by the $\delta$-correlated in time random velocity field is given by $$\begin{aligned}
\gamma = {10 (1 + 2\sigma) \over 3(1 + \sigma)} \;
\label{D14}\end{aligned}$$ (see [@SK01]), and the compressibility results in an increase of the growth rate of the second moment of a turbulent magnetic field. This implies that the transition from the case of a zero magnetic diffusion to that of a small yet finite magnetic diffusion is singular. The limit of zero magnetic diffusion is singular because the growth rate $ \gamma $ of magnetic fluctuations is discontinuous at zero magnetic diffusion, [*i.e.,*]{} it is different from the limit of magnetic diffusion tending to zero. This stresses the danger of applying results obtained for a zero magnetic diffusion in astrophysics and planetary physics, where the magnetic diffusion caused by an electrical conductivity of fluid is small yet finite.
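The singular character of the zero-diffusion limit can be illustrated numerically by comparing Eqs. (\[D12\]) and (\[D14\]). The following Python sketch (our code) shows that even for an incompressible flow ($\sigma=0$) the large-$\Pr$ limit of Eq. (\[D12\]) is $5/2$, while Eq. (\[D14\]) gives $10/3$:

```python
from math import pi, log

def gamma_small_diffusion(Pr, sigma, k=1):
    """Eq. (D12): growth rate for small yet finite magnetic diffusion,
    delta-correlated-in-time velocity field, large Prandtl number Pr."""
    return (5 * (1 + sigma / 3) / (2 * (1 + 3 * sigma))
            - (2 * (1 + 3 * sigma) / (3 * (1 + sigma)))
              * (2 * pi * k / log(Pr))**2)

def gamma_zero_diffusion(sigma):
    """Eq. (D14): growth rate for exactly zero magnetic diffusion."""
    return 10 * (1 + 2 * sigma) / (3 * (1 + sigma))

# sigma = 0: the Pr -> infinity limit of (D12) is 5/2, yet (D14) gives
# 10/3 — the growth rate is discontinuous at zero magnetic diffusion
print(gamma_small_diffusion(1e30, 0.0), gamma_zero_diffusion(0.0))
```

The sketch also reproduces the opposite roles of compressibility: $\sigma$ decreases the growth rate in Eq. (\[D12\]) but increases it in Eq. (\[D14\]).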
We acknowledge support from INTAS Program Foundation (Grant No. 99-348) and RFBR grant 01-02-16158. DS is grateful to a special fund of the Faculty of Engineering of the Ben-Gurion University of the Negev for visiting senior scientists.
Derivation of Eq. (\[A51\])
===========================
When $ D_{m} \not= 0 $ the magnetic field $ {\bf b}(t, {\bf x}) $ is given by $$\begin{aligned}
b_{i}(t, {\bf x}) = M_{{\mbox{\boldmath $ \xi$}}} \{G_{ij}(t,{\mbox{\boldmath $ \xi$}}) \,
\exp({\mbox{\boldmath $ \xi$}}^{\ast} \cdot {\mbox{\boldmath $ \nabla$}}) b_{j}(s, {\bf x}) \} \;,
\label{A5}\end{aligned}$$ where $ {\mbox{\boldmath $ \xi$}}^{\ast} = {\mbox{\boldmath $ \xi$}} - {\bf x} .$ In order to derive Eq. (\[A5\]) we use an exact solution of Eq. (\[T1\]) with an initial condition $ {\bf b}(t=s,{\bf x}) = {\bf b}(s,{\bf x}) $ in the form of the Feynman-Kac formula: $$\begin{aligned}
b_{i}(t,{\bf x}) = M_{{\mbox{\boldmath $ \xi$}}} \{G_{ij}(t,s,{\mbox{\boldmath $ \xi$}}(t,s)) \,
b_{j}(s,{\mbox{\boldmath $ \xi$}}(t,s))\} \;,
\label{T5}\end{aligned}$$ where $ d G_{ij}(t,s,{\mbox{\boldmath $ \xi$}}) / ds = N_{ik}
G_{kj}(t,s,{\mbox{\boldmath $ \xi$}}) ,$ $ \, N_{ij} = \partial v_{i} / \partial
x_{j} - \delta_{ij} ({\mbox{\boldmath $ \nabla$}} \cdot {\bf v}) $ and $ \,
M_{{\mbox{\boldmath $ \xi$}}} \{ \cdot \} $ denotes the mathematical expectation over the Wiener paths $ {\mbox{\boldmath $ \xi$}}(t,s) = {\bf x} - \int_{0}^{t-s}
{\bf v}[t-\sigma,{\mbox{\boldmath $ \xi$}}(t,\sigma)] \,d\sigma + (2 D_{m})^{1/2}
{\bf w}(t-s) .$ Now we assume that $$\begin{aligned}
{\bf b}(t, {\mbox{\boldmath $ \xi$}}) = \int \exp(i {\mbox{\boldmath $ \xi$}} \cdot {\bf q})
{\bf b}(s, {\bf q}) \,d{\bf q} \; .
\label{CC8}\end{aligned}$$ Substituting Eq. (\[CC8\]) into Eq. (\[T5\]) we obtain $$\begin{aligned}
b_{i}(t, {\bf x}) &=& \int M_{{\mbox{\boldmath $ \xi$}}}
\{G_{ij}(t,s,{\mbox{\boldmath $ \xi$}}(t,s)) \, \exp[i {\mbox{\boldmath $ \xi$}}^{\ast} \cdot {\bf
q}] \, b_{j}(s, {\bf q}) \}
\nonumber\\
& & \times \exp(i {\bf q} \cdot {\bf x}) \,d{\bf q} \; .
\label{C8}\end{aligned}$$ In Eq. (\[C8\]) we expand the function $ \exp[i {\mbox{\boldmath $ \xi$}}^{\ast} \cdot {\bf q}] $ in a Taylor series at $ {\bf q} = 0 ,$ i.e., $ \exp[i {\mbox{\boldmath $ \xi$}}^{\ast} \cdot {\bf q}] = \sum_{k=0}^{\infty}
(1/k!) (i {\mbox{\boldmath $ \xi$}}^{\ast} \cdot {\bf q})^{k} .$ Using the identity $ (i {\bf q})^{k} \exp[i {\bf x} \cdot {\bf q}] =
{\mbox{\boldmath $ \nabla$}}^{k} \exp[i {\bf x} \cdot {\bf q}] $ and Eq. (\[C8\]) we get $$\begin{aligned}
b_{i}(t, {\bf x}) &=& M_{{\mbox{\boldmath $ \xi$}}} \{G_{ij}(t,s,{\mbox{\boldmath $ \xi$}})
[\sum_{k=0}^{\infty} (1/k!) ({\mbox{\boldmath $ \xi$}}^{\ast} \cdot
{\mbox{\boldmath $ \nabla$}})^{k}]
\nonumber\\
& & \times \int b_{j}(s, {\bf q}) \exp(i {\bf q} \cdot {\bf x})
\,d{\bf q} \} \; . \label{BC8}\end{aligned}$$ After the inverse Fourier transformation in Eq. (\[BC8\]) we obtain Eq. (\[A5\]). Equation (\[CC8\]) can be formally considered as an inverse Fourier transformation of the function $ b_{i}(t,
{\mbox{\boldmath $ \xi$}}) .$ However, $ {\mbox{\boldmath $ \xi$}} $ is a Wiener path, which is not a usual spatial variable. Therefore, it is desirable to derive Eq. (\[A5\]) by a more rigorous method, as is done below.
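The role of the mathematical expectation over Wiener paths can be illustrated with a scalar toy problem, a simplification introduced here for illustration only (it is not the vector problem treated above): for zero velocity the Feynman-Kac representation reduces to $b(t,x)=M_{{\mbox{\boldmath $ \xi$}}}\{b(s,\xi)\}$ with $\xi = x + (2D_m)^{1/2}\,w(t-s)$, which reproduces the heat-kernel solution of the diffusion equation.

```python
import math
import random

def feynman_kac_heat(b0, x, t, d_m, n_samples=200_000, seed=1):
    """Monte Carlo estimate of b(t, x) = E[b0(x + sqrt(2 D_m t) W)],
    the Feynman-Kac solution of db/dt = D_m d2b/dx2 with v = 0."""
    rng = random.Random(seed)
    s = math.sqrt(2.0 * d_m * t)
    total = 0.0
    for _ in range(n_samples):
        total += b0(x + s * rng.gauss(0.0, 1.0))
    return total / n_samples

# Gaussian initial condition: the exact solution stays Gaussian,
# b(t, x) = (1 + 2 D_m t)**(-1/2) * exp(-x**2 / (2 * (1 + 2 D_m t))).
b0 = lambda x: math.exp(-0.5 * x * x)
d_m, t, x = 0.5, 1.0, 0.3
exact = math.exp(-x * x / (2.0 * (1.0 + 2.0 * d_m * t))) / math.sqrt(1.0 + 2.0 * d_m * t)
print(feynman_kac_heat(b0, x, t, d_m), exact)
```

The Monte Carlo average over paths agrees with the analytic heat-kernel result to within the sampling error, which is the content of the expectation $M_{{\mbox{\boldmath $ \xi$}}}\{\cdot\}$ in the formulas above.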
To this end we use an exact solution of the Cauchy problem for Eq. (\[T1\]) with an initial condition $
{\bf b}(t=s,{\bf x}) = {\bf b}(s,{\bf x}) $ in the form $$\begin{aligned}
b_{i}(t,{\bf x}) = M_{{\mbox{\boldmath $ \zeta$}}} \{J(t,s,{\mbox{\boldmath $ \zeta$}}) \tilde
G_{ij}(t,s,{\mbox{\boldmath $ \zeta$}}) \, b_{j}(s,{\mbox{\boldmath $ \zeta$}}(t,s)) \} \;,
\label{T2}\end{aligned}$$ where the matrix $ \tilde G_{ij} $ is determined by the equation $ d \tilde G_{ij}(t,s,{\mbox{\boldmath $ \zeta$}}) / d s = N_{ik}
\tilde G_{kj}(t,s,{\mbox{\boldmath $ \zeta$}}) $ with the initial condition $ \tilde G_{ij}(t=s) =
\delta_{ij} ,$ and the function $ J(t,s,{\mbox{\boldmath $ \zeta$}}) $ is given by $$\begin{aligned}
J(t,s,{\mbox{\boldmath $ \zeta$}}) = \exp [- (2 D_{m})^{-1/2}
\nonumber\\
\times \int_{0}^{t-s} {\bf v}(t-\eta,{\mbox{\boldmath $ \zeta$}}(t,\eta)) \cdot
\,d{\bf w}(\eta)
\nonumber \\
- (4 D_{m})^{-1} \int_{0}^{t-s} {\bf
v}^{2}(t-\eta,{\mbox{\boldmath $ \zeta$}}(t,\eta)) \,d{\eta} ] \;, \label{T4}\end{aligned}$$ $ {\bf w}(t) $ is a Wiener process, and $ M_{{\mbox{\boldmath $ \zeta$}}} \{ \cdot \} $ denotes the mathematical expectation over the paths $ {\mbox{\boldmath $ \zeta$}}(t,s) = {\bf x} + (2 D_{m})^{1/2} ({\bf w}(t) -
{\bf w}(s)) .$ The solution (\[T2\]) was first found in [@DM84] for a magnetic field in an incompressible fluid flow. Equation (\[T2\]) generalizes the solution obtained in [@DM84] to a magnetic field in a compressible random velocity field. The first integral $ \int_{0}^{t-s}
{\bf v}(t-\eta,{\mbox{\boldmath $ \zeta$}}(t,\eta)) \cdot \,d{\bf w}(\eta) $ in Eq. (\[T4\]) is the Ito stochastic integral (see, e.g., [@Mc69]).
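The Ito stochastic integral appearing in Eq. (\[T4\]) can be illustrated by its defining forward-point discretization. This is a generic numerical sketch, not specific to the velocity field above; for the test integrand $f(w)=w$ the Ito rule gives $\int_0^T w\,dw = (w_T^2 - T)/2$, in contrast to the Stratonovich value $w_T^2/2$.

```python
import math
import random

def ito_integral(f, t_final, n_steps, seed=7):
    """Forward-point (Ito) discretization of int_0^T f(w_t) dw_t
    along a single simulated Wiener path."""
    rng = random.Random(seed)
    dt = t_final / n_steps
    w = 0.0
    total = 0.0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        total += f(w) * dw   # integrand evaluated at the left endpoint
        w += dw
    return total, w

total, w_final = ito_integral(lambda w: w, 1.0, 100_000)
print(total, (w_final ** 2 - 1.0) / 2.0)  # Ito: int w dw = (w_T**2 - T)/2
```

Evaluating the integrand at the left endpoint of each increment is what makes the integral nonanticipating, the defining property used in Eq. (\[T4\]).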
The difference between the solutions (\[T2\]) and (\[T5\]) is as follows. The function $ b_{j}(s,{\mbox{\boldmath $ \xi$}}(t,s)) $ in Eq. (\[T5\]) explicitly depends on the random velocity field $ {\bf v} $ via the Wiener path $ {\mbox{\boldmath $ \xi$}} ,$ while the function $ b_{j}(s,{\mbox{\boldmath $ \zeta$}}(t,s)) $ in Eq. (\[T2\]) is independent of the velocity $ {\bf v} .$ Trajectories in the Feynman-Kac formula (\[T5\]) are determined by both the random velocity field and magnetic diffusion. On the other hand, trajectories in Eq. (\[T2\]) are determined only by magnetic diffusion. Due to the Markovian property of the Wiener process the solution (\[T2\]) can be rewritten in the form $$\begin{aligned}
b_{i}(t,{\bf x}) &=& E \{S_{ij}(t,s,{\bf x},{\bf X}')
\, b_{j}(s,{\bf X}') \}
\nonumber\\
& = & \int Q_{ij}(t,s,{\bf x},{\bf x}') b_{j}(s,{\bf x}') \,d {\bf
x}' \;, \label{T8}\end{aligned}$$ where $$\begin{aligned}
Q_{ij}(t,s,{\bf x},{\bf x}') &=& [4 \pi D_{m} (t - s)]^{-3/2}
\exp \biggl(- {({\bf x}' - {\bf x})^{2} \over 4 D_{m} (t - s) } \biggr)
\nonumber\\
& & \times S_{ij}(t,s,{\bf x},{\bf x}') \;, \label{T9}\end{aligned}$$ $ S_{ij}(t,s,{\bf x},{\bf x}') = M_{{\mbox{\boldmath $ \mu$}}} \{J(t,s,{\mbox{\boldmath $ \mu$}})
\tilde G_{ij}(t,s,{\mbox{\boldmath $ \mu$}}) \} $ and $ \, M_{{\mbox{\boldmath $ \mu$}}} \{ \cdot \} $ means the path integral taken over the set of trajectories $ {\mbox{\boldmath $ \mu$}} $ which connect points $ (t,{\bf x}) $ and $ (s,{\bf x}') .$ The mathematical expectation $ E \{ \cdot \} $ in Eq. (\[T8\]) denotes the averaging over the set of random points $ {\bf X}' $ which have a Gaussian statistics (see, e.g., [@S80]). We used here the following property of the averaging over the Wiener process $ E \{ M_{{\mbox{\boldmath $ \mu$}}} \{ \cdot \} \} = M_{{\mbox{\boldmath $ \zeta$}}} \{ \cdot \} .$ We considered a random velocity field with a finite renewal time. In the intervals $ \ldots (- \tau, 0]; (0, \tau];
(\tau, 2 \tau]; \ldots $ the velocity fields are assumed to be statistically independent and have the same statistics. This implies that the velocity field loses memory at the prescribed instants $ t = n \tau ,$ where $ n = 0, \pm 1, \pm 2, \ldots .$ This velocity field cannot be considered as a stationary velocity field for small times $ \sim \tau ,$ however, it behaves like a stationary field for $ t \gg \tau .$ Note that the fields $ b_{j}(s, {\bf x}') $ and $ Q_{ij}(t, s, {\bf x}, {\bf x}') $ are statistically independent because the field $ b_{j}(s, {\bf x}') $ is determined in the time interval $ (- \infty, s] ,$ whereas the function $ Q_{ij}(t, s, {\bf x}, {\bf x}') $ is defined on the interval $ (s, t] .$ Due to the renewal, the velocity field as well as its functionals $ b_{j}(s, {\bf x}') $ and $ Q_{ij}(t, s, {\bf x}, {\bf x}') $ in these two time intervals are statistically independent. Now we make a change of variables $ ({\bf x},{\bf x}') \to ({\bf x},{\bf z}) ,$ where $ {\bf z} = {\bf x}' - {\bf x} ,$ in Eq. (\[T8\]), i.e., $ \tilde Q_{ij}(t,s,{\bf x},{\bf x}') = \tilde Q_{ij}(t,s,{\bf x},{\bf z}+{\bf x}) = Q_{ij}(t,s,{\bf x},{\bf z}) .$ The Fourier transformation in Eq. (\[T8\]) yields $$\begin{aligned}
b_{i}(t, {\bf x}) = & & \int \int Q_{ij}(t,s,{\bf x},{\bf k})
\exp(i {\bf k} \cdot {\bf z}) \,d {\bf k}
\\
& \times & \int b_{j}(s, {\bf q}) \exp[i {\bf q} \cdot ({\bf z}+{\bf x})]
\,d {\bf q} \,d {\bf z} \; .\end{aligned}$$ Since $ \delta({\bf k} + {\bf q}) = (2 \pi)^{-3} \int
\exp[i ({\bf k} + {\bf q}) \cdot {\bf z}] \,d {\bf z} ,$ we obtain $$\begin{aligned}
b_{i}(t, {\bf x}) &=& (2 \pi)^{3} \int Q_{ij}(t,s,{\bf x},-{\bf
q}) b_{j}(s, {\bf q})
\nonumber\\
& & \times \exp(i {\bf q} \cdot {\bf x}) \,d{\bf q} \; .
\label{T13}\end{aligned}$$ In Eq. (\[T13\]) the function $ Q_{ij}(t,s,{\bf x},-{\bf q}) $ is given by $$\begin{aligned}
Q_{ij}(t,s,{\bf x},-{\bf q}) &=& (2 \pi)^{-3} \int Q_{ij}(t,s,{\bf
x},{\bf z})
\nonumber\\
& & \times \exp(i {\bf q} \cdot {\bf z}) \,d{\bf z} \; .
\label{C2}\end{aligned}$$ Substituting $ \tilde Q_{ij}(t,s,{\bf x},{\bf x}')
= Q_{ij}(t,s,{\bf x},{\bf z}) $ in Eq. (\[T8\]) and taking into account that $ {\bf x}' = {\bf z} + {\bf x} $ we obtain $$\begin{aligned}
b_{i}(t,{\bf x}) = \int Q_{ij}(t,s,{\bf x},{\bf z}) b_{j}(s,{\bf z} + {\bf x})
\,d {\bf z} \; .
\label{C3}\end{aligned}$$ Equation (\[C2\]) can be rewritten in the form $$\begin{aligned}
(2 \pi)^{3} Q_{ij}(t,s,{\bf x},&-&{\bf q}) \exp(i {\bf q} \cdot
{\bf x}) = \int Q_{ij}(t,s,{\bf x},{\bf z})
\nonumber\\
& & \times \exp[i {\bf q} \cdot ({\bf z} + {\bf x})] \,d{\bf z}
\; . \label{C4}\end{aligned}$$ The right hand sides of Eqs. (\[C3\]) and (\[C4\]) coincide when $ {\bf b}(s,{\bf z} + {\bf x}) = {\bf e} \, \exp[i {\bf q} \cdot ({\bf z}
+ {\bf x})] ,$ where $ {\bf e} $ is a unit vector. Thus, a particular solution (\[C3\]) of Eq. (\[T1\]) with the initial condition $ {\bf b}(s, {\bf x}') = {\bf e} \, \exp(i {\bf q} \cdot {\bf x}') $ coincides in form with the integral (\[C4\]). On the other hand, a solution of Eq. (\[T1\]) is given by Eq. (\[T2\]). Substituting the initial condition $ {\bf b}(s,{\mbox{\boldmath $ \zeta$}}) = {\bf e}
\, \exp(i {\bf q} \cdot {\mbox{\boldmath $ \zeta$}}) = {\bf e} \, \exp[i {\bf q} \cdot ({\bf x}
+ (2 D_{m})^{1/2} {\bf w})] $ into Eq. (\[T2\]) we obtain $$\begin{aligned}
b_{i}(t,{\bf x}) &=& M_{{\mbox{\boldmath $ \zeta$}}} \{J(t,s,{\mbox{\boldmath $ \zeta$}}) \tilde
G_{ij}(t,s,{\mbox{\boldmath $ \zeta$}}) e_{j} \,
\nonumber\\
& & \times \exp[i {\bf q} \cdot ({\bf x} + (2 D_{m})^{1/2} {\bf
w})] \} \; . \label{C5}\end{aligned}$$ Comparing Eqs. (\[C3\])-(\[C5\]) we get $$\begin{aligned}
Q_{ij}(t,s,{\bf x},-{\bf q}) &=& (2 \pi)^{-3} M_{{\mbox{\boldmath $ \zeta$}}}
\{J(t,s,{\mbox{\boldmath $ \zeta$}}) \tilde G_{ij}(t,s,{\mbox{\boldmath $ \zeta$}}) \,
\nonumber\\
& & \times \exp[i (2 D_{m})^{1/2} {\bf q} \cdot {\bf w}] \} \; .
\label{C6}\end{aligned}$$ Now we rewrite Eq. (\[C6\]) using Feynman-Kac formula (\[T5\]). The result is given by $$\begin{aligned}
Q_{ij}(t,s,{\bf x},-{\bf q}) &=& (2 \pi)^{-3} M_{{\mbox{\boldmath $ \xi$}}}
\{G_{ij}(t,s,{\mbox{\boldmath $ \xi$}}(t,s)) \,
\nonumber\\
& & \times \exp[i {\bf q} \cdot {\mbox{\boldmath $ \xi$}}^{\ast}] \} \;,
\label{C7}\end{aligned}$$ where $ {\mbox{\boldmath $ \xi$}}^{\ast} = {\mbox{\boldmath $ \xi$}} - {\bf x} .$ Substituting Eq. (\[C7\]) into Eq. (\[T13\]) we obtain $$\begin{aligned}
b_{i}(t, {\bf x}) &=& \int M_{{\mbox{\boldmath $ \xi$}}} \{G_{ij}(t,s,{\mbox{\boldmath $ \xi$}})
\, \exp[i {\bf q} \cdot {\mbox{\boldmath $ \xi$}}^{\ast}] b_{j}(s, {\bf q}) \}
\nonumber\\
& & \times \exp(i {\bf q} \cdot {\bf x}) \,d{\bf q} \; .
\label{C8C}\end{aligned}$$ The Fourier transformation in Eq. (\[C8C\]) yields Eq. (\[A5\]). The above derivation proves that the assumption (\[CC8\]) is correct for a Wiener path $ {\mbox{\boldmath $ \xi$}} .$ In order to derive the equation for the second-order correlation function $ \Phi_{ij}(t, {\bf x}, {\bf y}) = \langle b_{i}(t, {\bf x})
b_{j}(t, {\bf y}) \rangle $ we use Eq. (\[C8C\]), where the angular brackets $ \langle \cdot \rangle $ denote the ensemble average over the random velocity field. After the Fourier transformation we obtain $$\begin{aligned}
\Phi_{ij}(t, {\bf x}, {\bf y}) = (2 \pi)^{-6} \int \int
P_{ijpl}(\tau, {\bf x}, {\bf y}, {\bf k}_{1}, {\bf k}_{2})
\nonumber\\
\times \exp[i ({\bf k}_{1} \cdot {\bf x} + {\bf k}_{2} \cdot {\bf
y})] \biggl[\int \int \Phi_{pl}(s, {\bf x}', {\bf y}')
\nonumber \\
\times \exp[- i ({\bf k}_{1} \cdot {\bf x}' + {\bf k}_{2} \cdot
{\bf y}')] \,d {\bf x}' \,d {\bf y}' \biggr] \,d {\bf k}_{1} \,d
{\bf k}_{2} \;, \label{A48}\end{aligned}$$ where $$\begin{aligned}
P_{ijpl}(&\tau&, {\bf x}, {\bf y}, {\bf k}_{1}, {\bf k}_{2}) =
M_{{\mbox{\boldmath $ \xi$}}} \{ \langle G_{ip}({\bf x}) G_{jl}({\bf y})
\nonumber \\
& & \times \exp[i ({\bf k}_{1} \cdot {\mbox{\boldmath $ \xi$}}^{\ast}({\bf x}) +
{\bf k}_{2} \cdot {\mbox{\boldmath $ \xi$}}^{\ast}({\bf y}))] \rangle \} \;,
\label{AA48}\end{aligned}$$ $ G_{ij}({\bf x}) \equiv G_{ij}(\tau, {\mbox{\boldmath $ \xi$}}({\bf x})) $ and $ \tau = t - s .$ For a homogeneous and isotropic random flow Eq. (\[A48\]) reads $$\begin{aligned}
\Phi_{ij}(t, {\bf r}) &=& \int \int P_{ijpl}(\tau, - {\bf q}, {\bf
r}) \exp[i {\bf q} \cdot ({\bf r} - {\bf r}')]
\nonumber \\
& & \times \Phi_{pl}(s, {\bf r}') \,d {\bf r}' \,d {\bf q} \;,
\label{A49}\end{aligned}$$ where $ {\bf r} = {\bf y} - {\bf x} ,$ $$\begin{aligned}
P_{ijpl}(\tau, - {\bf q}, {\bf r}) &=& M_{{\mbox{\boldmath $ \xi$}}} \{ \langle
G_{ip}({\bf x}) G_{jl}({\bf y})
\nonumber \\
& & \times \exp(i {\bf q} \cdot {\mbox{\boldmath $ \tilde \xi$}}) \rangle \} \;
\label{A50}\end{aligned}$$ and $ {\mbox{\boldmath $ \tilde \xi$}} = {\mbox{\boldmath $ \xi$}}^{\ast}({\bf y}) -
{\mbox{\boldmath $ \xi$}}^{\ast}({\bf x}) .$ The Fourier transformation of Eq. (\[A49\]) yields Eq. (\[A51\]).
Derivation of Eq. (\[D8\])
==========================
Now we use the model of the random velocity field with a small correlation time. We expand the functions $ {\mbox{\boldmath $ \xi$}}^{\ast} $ and $ G_{ij}(\tau,{\mbox{\boldmath $ \xi$}}) $ in a Taylor series in the small time $ \tau .$ Then the expression for the function $ P_{ijpl}(\tau,{\bf r}, i {\mbox{\boldmath $ \nabla$}}) $ reads: $$\begin{aligned}
P_{ijpl}(\tau,{\bf r}, i {\mbox{\boldmath $ \nabla$}}) &=& \delta_{ip} \delta_{jl}
+ \tau B_{ijpl} + \tau U_{ijplm} \nabla_{m}
\nonumber \\
& & + \tau D_{ijplmn} \nabla_{m} \nabla_{n} + \ldots \;,
\label{D1}\end{aligned}$$ where $$\begin{aligned}
D_{ijplmn} &=& (1 / 2 \tau) M_{{\mbox{\boldmath $ \xi$}}} \{ \langle
\tilde \xi_{m} \tilde \xi_{n} G_{ip}({\bf x}) G_{jl}({\bf y}) \rangle \} \;,
\label{D2} \\
U_{ijplm}(r) &=& \tau^{-1} [\delta_{jl} M_{{\mbox{\boldmath $ \xi$}}} \{\langle g_{ip}({\bf x})
\xi_{m}^{\ast}({\bf y}) \rangle \}
\nonumber \\
& & + \delta_{ip} M_{{\mbox{\boldmath $ \xi$}}} \{\langle g_{jl}({\bf x})
\xi_{m}^{\ast}({\bf y}) \rangle \}
\nonumber \\
&-& (1/2) M_{{\mbox{\boldmath $ \xi$}}}
\{\langle g_{ip}({\bf x}) g_{jl}({\bf y}) \tilde \xi_{m} \rangle \}] \;,
\label{D3} \\
B_{ijpl}(r) &=& \tau^{-1} M_{{\mbox{\boldmath $ \xi$}}} \{\langle g_{ip}({\bf x})
g_{jl}({\bf y}) \rangle \} \;,
\label{D4}\end{aligned}$$ and $ \quad G_{ij} = \delta_{ij} + g_{ij} $ and $ M_{{\mbox{\boldmath $ \xi$}}}
\{\langle g_{ij} \rangle \} = 0 .$ Thus, the equation for the second-order correlation function of a magnetic field in a random velocity field with a small yet finite correlation time reads: $$\begin{aligned}
{\partial \Phi_{ij} \over \partial t} &=& [B_{ijpl} + U_{ijplm}
\nabla_{m}
\nonumber \\
& & + D_{ijplmn} \nabla_{m} \nabla_{n}] \Phi_{pl}(t,{\bf r}) \; .
\label{D5}\end{aligned}$$ Now we consider a random velocity field with a Gaussian statistics. This assumption allows us to calculate the tensors $ D_{ijplmn} ,$ $ \quad U_{ijplm} $ and $ B_{ijpl} .$ We omit the lengthy algebra and present the final results: $$\begin{aligned}
D_{ijplqn} &=& D_{ijplqn}^{(1)} + D_{ijplqn}^{(2)} + D_{ijplqn}^{(3)}
\nonumber \\
& & + 2 D_{m} \delta_{qn} \delta_{ip} \delta_{jl} \;,
\label{B1} \\
D_{ijplmn}^{(1)} &=& 2 \tau \{ \tilde f_{mn} + {\rm St}^{2}
[(\nabla_{s} f_{kn}) (\nabla_{k} f_{ms})
\nonumber \\
& &- \tilde f_{sk} (\nabla_{s} \nabla_{k} f_{mn})] \} \delta_{ip}
\delta_{jl} \;,
\label{B2} \\
D_{ijplmn}^{(2)} &=& (1/2) \tau {\rm St}^{2}
[(\nabla_{k} f_{im}) (\nabla_{p} f_{nk})
\nonumber \\
& & - (\nabla_{k} f_{mn}) (\nabla_{p} f_{ik})
\nonumber \\
& &+ 2 \tilde f_{ms} (\nabla_{s} \nabla_{p} f_{in})] \delta_{jl} \;,
\label{B3} \\
D_{ijplmn}^{(3)} &=& (1/2) \tau {\rm St}^{2}
[(\nabla_{p} f_{im}) (\nabla_{l} f_{jn})
\nonumber \\
& & - \tilde f_{mn} (\nabla_{p} \nabla_{l} f_{ij})] \;,
\label{B4} \\
B_{ijpl} &=& - 2 \tau \{ (\nabla_{p} \nabla_{l} f_{ij}) - {\rm St}^{2}
[(\nabla_{k} \nabla_{s} f_{ij}) (\nabla_{p} \nabla_{l} f_{ks})
\nonumber \\
& & + 2 (\nabla_{p} \nabla_{m} f_{is}) (\nabla_{l} \nabla_{s} f_{jm})] \} \;,
\label{B5} \\
U_{ijplm} &=& 4 \tau \{ (\nabla_{p} f_{im}) \delta_{jl} + {\rm St}^{2}
[((\nabla_{k} f_{is}) (\nabla_{p} \nabla_{s} f_{km})
\nonumber \\
& & + (\nabla_{p} f_{sk}) (\nabla_{k} \nabla_{s} f_{im})
\nonumber \\
& & - (\nabla_{s} f_{km}) (\nabla_{k} \nabla_{p} f_{is}))
\delta_{jl}
\nonumber \\
& & + 2 ((\nabla_{k} f_{jm}) (\nabla_{p} \nabla_{l} f_{ik})
\nonumber \\
& & + (\nabla_{l} f_{km}) (\nabla_{k} \nabla_{p} f_{ij}))] \} \;,
\label{B6}\end{aligned}$$ where $ {\rm St} = \tau u_{d} / l_{d} $ is the Strouhal number, $ \tilde f_{mn} = f_{mn}(0) - f_{mn}({\bf r}) ,$ and we changed $ \tau \to 2 \tau $ in order to compare the results obtained with those for the $\delta$-correlated in time approximation for a random velocity field. Here small terms of the order of $ O({\rm St}^{4}) $ are neglected. In Eqs. (\[B1\])-(\[B6\]) we took into account the commutation symmetry within each of the index pairs $ (i,j) ,$ $ (p,l) $ and $ (m,n) .$ The latter is due to the symmetry of the tensors $ r_{ij} ,$ $ \Phi_{pl} $ and $ \nabla_{m} \nabla_{n} .$ In Eqs. (\[B1\])-(\[B6\]) we also assumed that the tensor $ \tilde f_{mn} $ has the form $ \tilde f_{mn} = C_{mnps} r_{p} r_{s} ,$ where $ C_{mnps} $ is an arbitrary constant tensor. This is satisfied for the model of the velocity field (\[D6\]) with $ F(r) = 1 - r^{2} .$
Now we seek a solution for the second moment of the magnetic field in the form of Eq. (\[D7\]). Multiplying Eq. (\[D5\]) by $ r_{ij} $ and using Eq. (\[D7\]) we obtain the equation for the correlation function $ W(t,r) =
\langle b_{\bf r}(t,{\bf x}) b_{\bf r}(t,{\bf y}) \rangle .$ This equation is given by Eq. (\[D8\]). For the derivation of Eq. (\[D8\]) we used the following identities $$\begin{aligned}
\hat D W & \equiv & r_{ij} D_{ijplmn} \nabla_{m} \nabla_{n}
\Phi_{pl} = {2 \tau \over 3} [r^{2} W'' + 8 r W'
\nonumber \\
& & + {\sigma_{_{\xi}} \over 4} (2 r^{3} W''' + 31 r^{2} W'' + 12
r W')
\nonumber \\
& & + {3 \over \Pr} (W'' + 4 W / r)] \;,
\label{B7} \\
\hat B W & \equiv & r_{ij} B_{ijpl} \Phi_{pl} = {4 \tau \over 3}
[2 r W' + 5 W
\nonumber \\
& & + {\sigma_{_{\xi}} \over 4} (r W' + 5 W)] \;,
\label{B8} \\
\hat U W & \equiv & r_{ij} U_{ijplm} \nabla_{m} \Phi_{pl} = {2
\tau \over 3} \{ - 6 r W'
\nonumber \\
& & + {\sigma_{_{\xi}} \over 4} [r^{2} W'' + (29 / 2) r W'] \} \;
. \label{B9}\end{aligned}$$ Equations (\[B7\])-(\[B9\]) are derived by means of Eqs. (\[D6\]), (\[B1\])-(\[B6\]) and we also used the following identities: $$\begin{aligned}
\nabla_{n} \Phi_{pl} = {1\over 2} \biggl[\biggl(W'' - {W' \over r}
\biggr) P_{pl} r_{m} + {W' \over r} (4 \delta_{pl} r_{m}
\nonumber \\
- \delta_{pm} r_{l} - \delta_{lm} r_{p})\biggr] \;,
\label{B10} \\
\nabla_{m} \nabla_{n} \Phi_{pl} = {1\over 2} \biggl[(r W''')
P_{pl} r_{mn} + \biggl(W'' - {W' \over r} \biggr)
\nonumber\\
\times (P_{mn} P_{pl} + 4 P_{pl} r_{mn} - P_{pm} r_{ln} - P_{lm}
r_{pn}
\nonumber \\
- P_{pn} r_{lm} - P_{ln} r_{pm} + 2 r_{plmn}) + {W' \over r} (4
\delta_{pl} \delta_{mn}
\nonumber \\
- \delta_{pm} \delta_{ln} - \delta_{lm} \delta_{pn}) \biggr] \; .
\label{B11}\end{aligned}$$ The corresponding derivatives for $ f_{pl} $ coincide with Eqs. (\[B10\]) and (\[B11\]) after the change $ W(r) \to (1 / 3)
F(r) .$ Note that for $ F(r) = 1 - r^{2} $ the following identities are valid: $ F'' - F' / r = 0 $ and $ F''' = 0 .$ Turbulent magnetic diffusion is determined by the function $ \hat D W
= r_{ij} D_{ijplmn} \nabla_{m} \nabla_{n} \Phi_{pl}(t,{\bf r}) .$ The latter depends on the field of Lagrangian trajectories $
{\mbox{\boldmath $ \xi$}} $ \[see Eqs. (\[D2\]) and (\[D5\])\]. Due to the finite correlation time of the random velocity field, $ \langle
({\mbox{\boldmath $ \nabla$}} \cdot {\mbox{\boldmath $ \xi$}})^{2} \rangle \not=0 $ even if the velocity field is incompressible. Indeed, $ \langle ({\mbox{\boldmath $ \nabla$}}
\cdot {\mbox{\boldmath $ \xi$}})^{2} \rangle \approx (4/9) {\rm St}^{4} =
\sigma_{_{\xi}}^{2} .$ Thus the parameter $ \sigma_{_{\xi}} $ describes the compressibility of the field of Lagrangian trajectories. The latter results in a change of the dynamics of magnetic fluctuations. Thus, the equation for the correlation function $ W(t,r) $ is given by Eq. (\[D8\]).
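The degree of compressibility used throughout, $\sigma = \langle({\mbox{\boldmath $ \nabla$}}\cdot{\bf v})^2\rangle / \langle({\mbox{\boldmath $ \nabla$}}\times{\bf v})^2\rangle$, can be checked on a simple two-dimensional example. The test field below, a mix of a solenoidal part from a streamfunction and a potential part, is my own construction for illustrating the definition (in 2D the curl reduces to a scalar):

```python
import math

N = 64                    # grid points per direction on [0, 2*pi)
h = 2.0 * math.pi / N
A_SOL, B_POT = 1.0, 0.5   # amplitudes of solenoidal / potential parts

def v(i, j):
    x, y = i * h, j * h
    # solenoidal part from streamfunction sin(x)sin(y),
    # potential part from potential sin(x)sin(y)
    vx = A_SOL * math.sin(x) * math.cos(y) + B_POT * math.cos(x) * math.sin(y)
    vy = -A_SOL * math.cos(x) * math.sin(y) + B_POT * math.sin(x) * math.cos(y)
    return vx, vy

def compressibility_degree():
    div2 = curl2 = 0.0
    for i in range(N):
        for j in range(N):
            # centered differences on the periodic grid
            dvx_dx = (v((i + 1) % N, j)[0] - v((i - 1) % N, j)[0]) / (2 * h)
            dvx_dy = (v(i, (j + 1) % N)[0] - v(i, (j - 1) % N)[0]) / (2 * h)
            dvy_dx = (v((i + 1) % N, j)[1] - v((i - 1) % N, j)[1]) / (2 * h)
            dvy_dy = (v(i, (j + 1) % N)[1] - v(i, (j - 1) % N)[1]) / (2 * h)
            div2 += (dvx_dx + dvy_dy) ** 2
            curl2 += (dvy_dx - dvx_dy) ** 2
    return div2 / curl2

print(compressibility_degree())  # ~ (B_POT / A_SOL)**2 = 0.25
```

Only the potential part contributes to the divergence and only the solenoidal part to the curl, so $\sigma$ comes out as the squared amplitude ratio of the two parts; a purely solenoidal field gives $\sigma=0$.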
Ya. B. Zeldovich, S. A. Molchanov, A. A. Ruzmaikin and D. D. Sokoloff, Sov. Sci. Rev. C. Math Phys. [**7**]{}, 1 (1988), and references therein.
Ya. B. Zeldovich, A. A. Ruzmaikin, and D. D. Sokoloff, [*The Almighty Chance*]{} (World Scientific Publ., Singapore, 1990), and references therein.
S. Childress and A. Gilbert, [*Stretch, Twist, Fold: The Fast Dynamo*]{} (Springer-Verlag, Berlin, 1995), and references therein.
R. M. Kulsrud and S. W. Anderson, Astrophys. J. [**396**]{}, 606 (1992).
A. Gruzinov, and P. H. Diamond, Phys. Rev. Lett., [**72**]{}, 1651 (1994); Phys. Plasmas [**2**]{}, 1941 (1995).
N. Kleeorin and I. Rogachevskii, Phys. Rev. E. [**50**]{}, 2716 (1994); N. Kleeorin, M. Mond, and I. Rogachevskii, Astron. Astrophys. [**307**]{}, 293 (1996).
A. Gruzinov, S. Cowley, and R. Sudan, Phys. Rev. Lett., [**77**]{}, 4342 (1996).
I. Rogachevskii and N. Kleeorin, Phys. Rev. E [**56**]{}, 417 (1997); [**59**]{}, 3008 (1999); [**61**]{}, 5202 (2000); [**64**]{}, 056307 (2001).
N. Kleeorin and I. Rogachevskii, Phys. Rev. E. [**59**]{}, 6724 (1999).
A. A. Schekochihin and R. M. Kulsrud, Phys. Plasmas, [**8**]{}, 4937 (2001).
H. K. Moffatt, [*Magnetic Field Generation in Electrically Conducting Fluids*]{} (Cambridge University Press, New York, 1978).
E. Parker, [*Cosmical Magnetic Fields*]{} (Oxford University Press, New York, 1979), and references therein.
F. Krause, and K. H. Rädler, [*Mean-Field Magnetohydrodynamics and Dynamo Theory*]{} (Pergamon, Oxford, 1980), and references therein.
Ya. B. Zeldovich, A. A. Ruzmaikin and D. D. Sokoloff, [*Magnetic Fields in Astrophysics*]{} (Gordon and Breach, New York, 1983).
C. T. Russell and R. C. Elphic, Nature, [**279**]{}, 616 (1979).
N. Kleeorin, I. Rogachevskii, and A. Eviatar, J. Geophys. Res. A. [**99**]{}, 6475 (1994); N. Kleeorin and I. Rogachevskii, Phys. Rev. E [**50**]{}, 493 (1994).
A. Ruzmaikin, A. M. Shukurov, and D. D. Sokoloff, [*Magnetic Fields of Galaxies*]{} (Kluwer Acad. Publ., Dordrecht, 1988).
A. P. Kazantsev, Sov. Phys. JETP [**26**]{}, 1031 (1968).
P. Dittrich, S. A. Molchanov, A. A. Ruzmaikin and D. D. Sokoloff, Astron. Nachr. [**305**]{}, 119 (1984).
T. Elperin, N. Kleeorin, I. Rogachevskii and D. Sokoloff, Phys. Rev. E [**61**]{}, 2617 (2000); [**64**]{}, 026304 (2001).
T. Elperin, N. Kleeorin, I. Rogachevskii and D. Sokoloff, Phys. Chem. Earth [**A 25**]{}, 797 (2000).
M. M. Vishik, Izv., Akad. Nauk S.S.S.R., Fiz. Zemli [**No. 3**]{}, 3 (1988) \[Engl. transl.: Izv., Acad. Sci. U.S.S.R., Phys. Solid Earth [**24**]{}, 173 (1988)\].
M. M. Vishik, Geophys. Astroph. Fluid Dyn. [**48**]{}, 151 (1989).
H. P. McKean, [*Stochastic Integrals*]{} (Academic Press, New York, 1969).
Z. Schuss, [*Theory and Applications of Stochastic Differential Equations*]{}, (John Wiley, New York, 1980), p. 52.
---
abstract: |
A brief reference to the two Schwarzschild solutions and what Petrov had to say about them is given. Comments on how the Schwarzschild vacuum solution describes a black hole are also provided. Then we compare the properties, differences, and similarities between black holes and quasiblack holes. Black holes are well known; the quasiblack hole is a new concept. A quasiblack hole, either nonextremal or extremal, can be broadly defined as the limiting configuration of a body when its boundary approaches the body’s own gravitational radius (the quasihorizon). They are objects that are on the verge of being black holes but actually are distinct from them in many ways. We display some of their properties: there are whole regions of infinite redshift; the curvature invariants remain perfectly regular everywhere in the quasiblack hole limit; a free-falling observer finds in his own frame infinitely large tidal forces in the whole inner region, showing some form of degeneracy; outer and inner regions become mutually impenetrable and disjoint, although, in contrast to the usual black holes, this separation is of a dynamical nature, rather than purely causal; for external faraway observers the spacetime is virtually indistinguishable from that of extremal black holes. Other important properties, such as the mass formula and the entropy, are also discussed and compared to the corresponding properties of black holes.
**Key words:** Schwarzschild solution, Petrov, Black holes, Quasiblack holes.
---
Introduction {#S:In}
============
The Schwarzschild solution. {#sch}
---------------------------
Finding vacuum solutions of Einstein’s equation $$G_{ab}=0\,,$$ where $G_{ab}$ is the Einstein tensor, is an important branch of General Relativity and is known to be a non-trivial task. On the other hand, finding solutions of the field equations with matter is a somewhat different setup. Given any metric, there is always one stress-energy tensor $T_{ab}$ for which Einstein’s equations $(G=1\,,c=1)$ $$G_{ab}=8\pi\,T_{ab}\,,$$ are trivially satisfied. However, arbitrarily chosen metrics usually give rise to unphysical stress-energy tensors, corresponding to matter which is of no interest. Therefore, the task of finding non-vacuum solutions to the field equations is, in a certain way, twice as hard as finding vacuum solutions: one has to choose physically relevant sources, and then solve for the gravitational field in the equations.
Schwarzschild, in 1916, in two strokes, initiated the field of exact solutions in General Relativity, both in vacuum [@Schw1] and in matter for an incompressible fluid [@Schw2]. These solutions are called the Schwarzschild solution and the interior Schwarzschild solution, respectively. The Schwarzschild solution [@Schw1] is perhaps the most well-known exact solution in General Relativity, and its line element can be written in appropriate spherical coordinates $(t,r,\theta,\phi)$ as, $$ds^2=-\left(1-\frac{2m}{r}\right)\,dt^2+
\frac{dr^2}{\left(1-\frac{2m}{r}\right)}+
r^2\left(d\theta^2+\sin^2\theta\,d\phi^2\right)\,.
\label{leschw}$$ Here $m$ is the mass of the object, outside which there is vacuum. It took some time to interpret the solution as a whole vacuum solution and for the notion of a black hole to emerge.
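As an elementary numerical companion to the line element (\[leschw\]) (an illustration only, in units $G=c=1$): the metric function $f(r)=1-2m/r$ vanishes at the horizon radius $r_{\rm h}=2m$, and the gravitational redshift $1/\sqrt{f(r)}-1$ of light emitted by a static source diverges as the source approaches that radius.

```python
def metric_function(r, m):
    """f(r) = 1 - 2m/r for the Schwarzschild solution (G = c = 1)."""
    return 1.0 - 2.0 * m / r

def redshift(r, m):
    """Redshift of light emitted by a static source at radius r > 2m."""
    return metric_function(r, m) ** -0.5 - 1.0

m = 1.0
for r in (10.0, 3.0, 2.25, 2.0001):
    print(r, metric_function(r, m), redshift(r, m))
```

The sample radius $r=2.25\,m=\frac98\,r_{\rm h}$ is the Buchdahl bound for perfect-fluid matter discussed later in this section.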
Petrov on the Schwarzschild solution. {#pet}
-------------------------------------
At a Petrov Symposium it is worth spending some lines on what Petrov had to say about both Schwarzschild solutions. For this we refer to his book [*Einstein spaces*]{}, published in Russian in 1961 and then translated into English in 1969 [@petrov1].
On p. 141 of the book [@petrov1] one can read the rather remarkable phrase: “It is clear that Einstein, Hilbert, and their contemporaries had a rather primitive idea of what is meant by ‘spacetime metric’ and of its scope. They possessed only a few of the simplest examples (for example Schwarzschild’s solution, the solution of Weyl and Levi-Civita with axial symmetry, and cosmological metrics). They did not realize what a powerful instrument they were forging.”
Then there are several mentions, in passing, of the Schwarzschild solution. On p. 179 it is stated that the Schwarzschild solution is a particular case of solutions included in $T_1$, i.e., solutions with Segre characteristic $(111)$, referring to his algebraic classification of 1954 of the Riemann and Weyl tensors [@petrov2], repeated in the book on page 99. On p. 196 Kottler’s solution is mentioned, stating it is a generalization of the Schwarzschild solution by including a cosmological term $\Lambda$. On p. 360, in Chapter 9, Einstein’s equations for a spherically symmetric vacuum are solved, and the Schwarzschild solution is finally displayed, as a textbook should do. On p. 362, exercises on the Schwarzschild and interior Schwarzschild solutions are given, and the Landau and Lifshitz 1948 book [*The Classical Theory of Fields*]{} (and the English translation of 1959) is cited [@landaulifsh]. On p. 386 the two 1916 Schwarzschild papers on the vacuum and the interior solutions are quoted in citations 37 and 37a, respectively.
There is an interesting contribution of Petrov to the field of exact solutions. In the paper [*Gravitational field geometry as the geometry of automorphisms*]{} [@petrov3], among a discussion of many solutions, Petrov finds a Type I (111) solution with metric $ ds^2=e^r\cos\sqrt{3}r\,(-dt^2+d\phi^2)-2e^r\sin\sqrt{3}r\,d\phi\,dt+dr^2+e^{-2r}dz^2 $. It is the only vacuum solution admitting a simply-transitive four-dimensional maximal group of motions. Bonnor [@bonnoronpetr] showed that it is the vacuum solution exterior to an infinite rotating dust, a particular case of the Lanczos-van Stockum solution. This is not a black hole, but it bears relations to the hoop conjecture, closed timelike curves, and so on.
Black holes. {#bhsfinally}
------------
It is clear that the Schwarzschild solution (\[leschw\]) presents a problem, in the coordinates used, at $r=2m$. For a long time $r=2m$ was a mysterious place. Only in the 1960s was the ultimate interpretation given and the problem solved. The radius $r_{\rm h}=2m$ defines the event horizon, a lightlike surface, of the solution. In its full form it represents a wormhole, with its two phases, the white hole and the black hole, connecting two asymptotically flat universes [@kruskal] (a work done under the supervision of Wheeler [@wheelbiog]). If, besides a mass $m$ as in the Schwarzschild solution, one includes electrical charge $q$, the Reissner-Nordström solution is obtained [@reiss; @nordst] (for the interpretation of its full form see [@gravesbrill]). The inclusion of angular momentum $J$ gives the Kerr solution [@kerr], and the inclusion of the three parameters ($m,J,q$) gives the Kerr-Newman family [@newman]. For a full account of the Kerr-Newman family within General Relativity see [@mtw].
As predicted early on by Oppenheimer and Snyder [@oppenheimer], black holes can form through gravitational collapse of a lump of matter. As the matter falls in, an event horizon develops from the center of the matter and stays put, as a null surface, in the spherically symmetric case at $r_{\rm h}=2m$, while the matter falls in towards a singularity. A later important result is that if the matter is made of perfect fluid (such as the Schwarzschild interior solution [@Schw2]) there is the Buchdahl limit [@buch], which states that when the boundary of the fluid matter approaches quasistatically the value $\frac98\,r_{\rm h}$, the system ensues in an Oppenheimer-Snyder collapse, presumably into a black hole.
The possibility of the existence of black holes came with quasars in 1963. Salpeter [@salpeter] and Zel’dovich [@zeldovich] were the first to advocate that a massive central black hole should be present in these objects in order to explain the huge amount of energy liberated by them. Lynden-Bell in 1969 then took a step forward and proposed that a central massive black hole should inhabit every galaxy [@lyndenbell], a prediction that has been essentially confirmed: almost every galaxy has a central black hole. Then, with the discovery of pulsars in 1968 and the reality of neutron stars, the possibility of small stellar mass black holes became obvious, confirmed in 1973 with the X-ray binary Cygnus X-1 and then with other X-ray binary sources (see, e.g., [@lemosbhsgalelempart]).
It is supposed that black holes can form in many ways. The traditional manner is the Oppenheimer-Snyder type collapse [@oppenheimer]. Nowadays, one also admits that black holes can form from the collision of particles, or have a cosmological primordial inbuilt origin (see, e.g., [@lemosbhsgalelempart]). The Reissner-Nordström black hole may not be very useful astrophysically, although all black holes should have a tiny, fluctuating charge. Nevertheless, it might be important in particle physics; perhaps it is an elementary soliton of gravitation, as proposed in some supergravity ideas. Nowadays there is a profusion of theoretical black holes of all types, in all theories, with all charges, in all dimensions (see, e.g., [@profusion]).
Classically, black holes are well understood from the outside: there is astrophysical evidence and theoretical consistency. Perhaps there will be phenomenological evidence in the near future from the collision of particles.
Quantum mechanically, black holes still pose problems. For the outside, these problems are related to the Hawking radiation and the Bekenstein-Hawking entropy. For the inside, the understanding of the interior of a black hole is one of the outstanding problems in gravitational theory, and it certainly is a quantum phenomenon. The interior harbors a singularity. What is a singularity? The two quantum problems, the outside and the inside, are perhaps related. There are many approaches; some try to solve part of the problems, others all of them (see, e.g., [@lemosfundamental]). These approaches are the quantum gravity approach, mass inflation, wormholes, regular black holes, holographic reasoning (see, e.g., [@lemosholographic]), and so on. Here, we advocate the quasiblack hole approach to better understand a black hole, both the outside and the inside stories. We do not claim to solve the problems; we look at them from a different angle and see where it leads us.
Quasiblack holes. {#qbhsfinally}
-----------------
0.2cm Following [@buch], for matter made of perfect fluid there is the Buchdahl limit. However, putting charge into the matter to bypass the limit opens up a new world. The charge can be electrical, or angular momentum, or many other charges. The simplest case is to have matter with electric charge alone, nothing else.
In Newtonian gravitation, i.e., for a Newton-Coulomb system, the solution is easy. Suppose one has two massive charged particles. Then the gravitational force exerted on each particle is $F_{\rm g}=\frac{Gm^2}{r^2}$, where for a moment we have restored $G$, and the electric force is $F_{\rm e}=\frac{e^2}{r^2}$. Thus, when $\sqrt{G}m=e$ one has $F_{\rm g}=F_{\rm e}$, and the system is in equilibrium. Of course, if we put in another such particle, any number of particles, a continuous distribution of matter, any symmetry, any configuration, the result still holds. For a continuous distribution the relation $\sqrt{G}\rho_{\rm m}=\rho_{\rm e}$ must hold, where $\rho_{\rm m}$ and $\rho_{\rm e}$ are the mass-energy density and the electric charge density, respectively.
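The balance is easy to check numerically. The sketch below (a hedged illustration with arbitrary values, in units where the Coulomb force is $e^2/r^2$) verifies that the condition $\sqrt{G}m=e$ gives $F_{\rm g}=F_{\rm e}$ at every separation, so the equilibrium is independent of $r$:

```python
import math

G = 6.674e-11          # Newton's constant (SI)
m = 2.0e10             # an arbitrary illustrative mass
e = math.sqrt(G) * m   # the equilibrium condition sqrt(G) m = e

for r in (1.0, 10.0, 1.0e3):
    F_g = G * m**2 / r**2   # gravitational attraction
    F_e = e**2 / r**2       # electric repulsion
    assert abs(F_g - F_e) <= 1e-12 * F_g
```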
In General Relativity, i.e., for an Einstein-Maxwell system, the history is long. Weyl in 1917 [@weyl1] started with a static solution in the form, $$ds^2=-W^2(x^i)\,dt^2+g_{ij}(x^k)\,dx^i\,dx^j.
\label{weyl1}$$ Then he sought $W$ such that $W^2=W^2(\phi)$, in vacuum with axial symmetry, where $\phi$ is the electric potential. He found $W^2= \left(\,\sqrt{G}\,\phi+b\right)^2+c$, with $b$ and $c$ constants. In 1947 Majumdar [@maj] showed that Weyl’s quadratic function works for any symmetry, not only axial symmetry. It was also shown that the (vacuum) extremal Reissner-Nordström solution obeys this quadratic relation, and that many such solutions can be put together since, remarkably, equilibrium is maintained, as in the Newton-Coulomb case. Papapetrou [@papa] also worked along the same lines. Hartle and Hawking in 1972 [@hh73] worked out the maximal extension and other properties of a number of extremal black holes dispersed in spacetime. Furthermore, for a perfect square, $W^2(\phi)= \left(\,\sqrt{G}\,\phi+b\right)^2$, if now there is matter, Majumdar and Papapetrou found that $\sqrt{G}\rho_{\rm m}=\rho_{\rm e}$ [@maj; @papa], and the matter is in an equilibrium state, bringing into General Relativity the Newtonian result. This type of matter we call extremal charged dust matter. The solutions, vacuum or matter, are generically called Majumdar-Papapetrou solutions.
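As an illustrative check (our addition, using sympy with $G=1$), the extremal Reissner-Nordström potential $B=(1-m/r)^2$ indeed fits Weyl's quadratic ansatz with the perfect-square choice $b=-1$, $c=0$, since the electric potential is $\phi=q/r=m/r$ in the extremal case:

```python
import sympy as sp

r, m = sp.symbols('r m', positive=True)
phi = m / r              # electric potential, q = m (extremal), G = 1
W2 = (1 - m/r)**2        # g_tt potential of extremal Reissner-Nordstrom
b, c = -1, 0             # perfect-square case of W^2 = (phi + b)^2 + c
assert sp.simplify(W2 - ((phi + b)**2 + c)) == 0
```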
Now, if one wants to make a star one has to put some boundary on the matter. The interior solution is then Majumdar-Papapetrou and the exterior is extremal Reissner-Nordström. This analysis was started by Bonnor, who has called attention to these configurations since 1953; see, e.g., [@bonnor1999]. Examples of Bonnor stars are: (i) A star of clouds, in which each cloud has 1 proton and $10^{18}\,$neutrons, so as to maintain the relation $\rho_{\rm m}=\rho_{\rm e}$ ($G=1$). For a spherically symmetric star with radius $R$, the star as a whole has $m=q$, and the exterior is extremal Reissner-Nordström, see Figure 1.
\[bonnorstar\] 
\(ii) A star made of supersymmetric stable particles with $m_{\rm
s}=e_{\rm s}$. Again, the star has total mass $m$ and total charge $q$ related by $m=q$.
Now comes the important point. For any star radius $R$ the star is in equilibrium. This holds even for $R=r_{\rm h}$, where $r_{\rm h}=m$ is the gravitational, or horizon, radius of the extremal Reissner-Nordström metric. What happens when $R$ shrinks to $r_{\rm h}$? Something new: a quasiblack hole forms.
Black hole and quasiblack hole solutions {#qbhsols}
========================================
Generic features of the solutions. {#bhsqbhssols}
----------------------------------
0.2cm The difference between an extremal spherically symmetric black hole and an (extremal) spherically symmetric quasiblack hole spacetime is best displayed if we write the metric as, $$ds^{2}=-B(r)\,dt^{2}+A(r)\,dr^{2}+r^{2}\,\left(d\theta
^{2}+\sin ^{2}\theta \,d\phi ^{2}\right)\,.
\label{metricgeneric}$$ When one approaches the gravitational radius of the object one finds that the solutions have the features shown in Figure 2.

For the extremal Reissner-Nordström black hole one has $B(r)=1/A(r)=(1-m/r)^2$, so that at $r=r_{\rm h}=m$ there is the usual event horizon, and at $r=0$ the potentials are singular and indeed yield a singular spacetime where the curvature invariants diverge. For the extremal quasiblack hole the function $1/A(r)$ is well behaved, touches zero at $r=r_{\rm h}$, when a quasihorizon (not an event horizon) forms, and tends to 1 at $r=0$ so that there are no conical singularities. The function $B(r)$ is well-behaved up to the quasiblack hole limit. At the quasiblack hole limit, $R=r_{\rm h}$, the function is zero in the whole interior region. This brings new features.
Black holes and quasiblack holes made of Majumdar-Papapetrou stuff. {#lemosweinberg}
-------------------------------------------------------------------
0.2cm The Majumdar-Papapetrou vacuum black hole is the extremal Reissner-Nordström black hole, a solution with well known properties.
For quasiblack holes, Majumdar-Papapetrou matter provides perhaps the simplest case, as shown by Lemos and Weinberg in 2004 [@lemosweinberg2004]. In [@lemosweinberg2004] a solution was found in which there is no need for a junction. In the solution, the Majumdar-Papapetrou matter decays sufficiently rapidly to yield at infinity, in a continuous way, the extremal Reissner-Nordström metric. In this way the existence of simple quasiblack hole solutions was shown beyond doubt. The potentials and all their derivatives are continuous. Thus one avoids the possible problems caused by Bonnor stars, where the potentials are only once differentiable. To find the solutions, one again writes the metric as in Eq. (\[metricgeneric\]). Then the Einstein-Maxwell equations give $$\frac{(AB)'}{AB}=8\pi\,r\,\rho\,A
\,,\quad \left[r\left(1-\frac{1}{A}\right)\right]'
=8\pi\,r^2\,\rho+\frac{r^2}{A\,B}\,{\varphi'}^2\,,
\label{equationforB-EM}$$ $$\frac{\sqrt{B}}{r^2\sqrt{AB}}\left[
\frac{r^2}{\sqrt{AB}}\,\varphi'\right]'
= - 4\pi\rho_{\rm e}
\,,
\label{equationfophiEM}$$ where primes denote differentiation with respect to $r$. One can then work out the various types of solutions [@lemosweinberg2004].
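As a consistency check of the field equations above (a sketch we add here, not taken from [@lemosweinberg2004]), one can verify with sympy that the vacuum extremal Reissner-Nordström solution, $B=1/A=(1-m/r)^2$, $\varphi=m/r$, $\rho=\rho_{\rm e}=0$, satisfies all three equations identically:

```python
import sympy as sp

r, m = sp.symbols('r m', positive=True)
B = (1 - m/r)**2         # extremal Reissner-Nordstrom, q = m, G = 1
A = 1 / B
varphi = m / r           # electric potential
rho = rho_e = 0          # vacuum exterior: no matter or charge density

eq1 = sp.diff(A*B, r)/(A*B) - 8*sp.pi*r*rho*A
eq2 = (sp.diff(r*(1 - 1/A), r)
       - 8*sp.pi*r**2*rho - (r**2/(A*B))*sp.diff(varphi, r)**2)
eq3 = (sp.sqrt(B)/(r**2*sp.sqrt(A*B))
       * sp.diff(r**2*sp.diff(varphi, r)/sp.sqrt(A*B), r) + 4*sp.pi*rho_e)

assert all(sp.simplify(e) == 0 for e in (eq1, eq2, eq3))
```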
Figure 3 (panels: $B$, $1/A$, $\sqrt{AB}$, the electric potential, and $\sqrt{A}\,\rho$): Plots of the potentials and matter functions as a function of $r$ for $q=1$ and for four different stars, each with compact parameter $c$ given by $c=0.5,\,0.3,\,0.1,\,0.001$. The emergence of the quasihorizon is quite evident in the $c=0.001$ curve, see [@lemosweinberg2004] for details.
These stars have no well-defined radius $R$, since there is no boundary: the solutions tend smoothly to the extremal Reissner-Nordström vacuum. Instead, there is a compact parameter $c$ which characterizes each solution. As this parameter tends to zero, $c\to0$, the star gets denser at the center and more compact. At $c=0$ a quasiblack hole appears. This is shown in Figure 3, where plots for four different stars (i.e., stars with different values of $c$) are displayed. The one with $c\to0$ clearly shows quasiblack hole behavior, with the emergence of a quasihorizon.
Other ways: Black holes and quasiblack holes made of various sorts of matter. {#otherways}
-----------------------------------------------------------------------------
0.2cm There are black hole solutions in general relativity other than the ones provided by the Kerr-Newman family. Examples are regular black holes, in which the vacuum inside the horizon, with its singularity, is replaced by a de Sitter core; these can be magnetically charged [@bardeen1968], have non-isotropic pressures [@dymn1992], or take some other form (see, e.g., [@ansoldi2008]). There are also regular black holes electrically charged in a special way [@lemoszanchin2011]. Black holes are generic.
What about quasiblack holes? Can they be built from configurations and forms of matter other than Majumdar-Papapetrou? Yes; several different quasiblack hole solutions have been found up to now.
First, there are the simple quasiblack holes of Lemos and Weinberg, already mentioned [@lemosweinberg2004].
Second, spherical Bonnor stars (charged stars with a spherical boundary surface) also yield quasiblack holes. This was shown in preliminary form by Bonnor himself (see, e.g., [@bonnor1999]) and in subsequent works [@kleberlemoszanchin2005; @lemoszanchin2008]. Moreover, recently Bonnor has shown that spheroidal stars made of extremal charged dust tend in the appropriate limit to quasiblack holes [@bonnor2010]. Generic properties of the Majumdar-Papapetrou matter in $d$ dimensions were displayed in [@lemoszanchin2005].
Third, charged matter with pressure (with a generalized Schwarzschild interior ansatz extended to include electrical charge) also yields charged stars that, when sufficiently compact, tend to quasiblack holes. These are the relativistic charged spheres, which can then be considered as frozen stars [@lemoszanchin2010; @felice1995; @felice1999]. For the properties of the solutions and the connection with the Weyl-Guilfoyle ansatz [@guilf1999] see [@lemoszanchin2009prop]. These solutions have additional interest since the pressure stabilizes the fluid against kinetic perturbations.
Fourth, the Einstein-Yang-Mills-Higgs equations yield gravitating magnetic monopoles that, when sufficiently compact, form, in certain instances, quasiblack holes, as shown by Lue and Weinberg [@lueweinberg1999; @lueweinberg2000]. In these works the name quasiblack hole was coined for the first time. A comparison between gravitating magnetic monopole and Bonnor star behavior was done in [@lemoszanchin2006].
Fifth, the Einstein-Cartan system with spin and torsion, in which the spinning matter, put in a spherically symmetric configuration, is joined to the Schwarzschild solution, also yields quasiblack holes [@lemoseinsteincartanqbh2011].
Sixth, disk matter systems, when sufficiently compact and rotating at the extremal limit, have, as exterior metric, the extremal Kerr spacetime. These solutions were found by Bardeen and Wagoner back in 1971 [@bardeenwagoner1971]. In the new language they are quasiblack holes, and their properties have been explored by Meinel and collaborators [@meinel2006; @meinel2010].
Finally, it is a simple exercise to show that a shell of matter, for which the inside is Minkowski spacetime and the outside is Schwarzschild, yields solutions with quasiblack hole properties if the shell is allowed to hover on the quasihorizon. A drawback here, which does not appear in the six cases mentioned above, is that in the quasihorizon limit the tangential pressures grow unbounded. We will comment on this when we work out the mass formula for quasiblack holes.
There are certainly many other examples in which quasiblack holes may form.
Black holes and quasiblack holes: Definition and properties {#defsprops}
===========================================================
Black holes. {#bhsdef}
------------
0.2cm The definition of a black hole can be found in [@pen72; @hawkhouches; @hawkingellis]. Some of the black hole properties were developed in, e.g., [@cart1979; @bardcarhawk73; @smarr1973; @beken1973; @hawktemp; @by1993].
Quasiblack holes. {#defiprop}
-----------------
0.2cm Since it appears that quasiblack hole solutions are more ubiquitous than one could have thought, one should study the core properties of these solutions as independently as possible of the matter they are made of, in much the same way as one does for black holes [@lzs07; @lzs08mim; @lzs10p; @lzs08m1; @lzs09m2; @lzs10e1; @lzs10e2].
### Definition.
0.2cm Write the metric as in Eq. (\[metricgeneric\]), for an interior metric with an asymptotically flat exterior region. Consider a solution satisfying the following requirements: (a) the function $1/A(r)$ attains a minimum at some $r^{\ast }\neq 0$, such that $1/A(r^{\ast })=\varepsilon$, with $\varepsilon\ll1$. (b) For such a small but nonzero $\varepsilon$ the configuration is regular everywhere, with a nonvanishing metric function $B$. (c) In the limit $\varepsilon \rightarrow 0$ the metric coefficient $B\rightarrow 0$ for all $r\leq r^{\ast}$. See Figure 2. These three features define a quasiblack hole [@lzs07]. The quasiblack hole is on the verge of forming an event horizon, but instead a quasihorizon appears, with $r^{\ast}=r_{\rm h}$. The metric is well defined everywhere and the spacetime should be regular everywhere. One can try to give an invariant definition of a quasiblack hole instead; for instance, in (a) one can replace $1/A$ by $(\nabla r)^{2}$. Note that this definition shows that the quasihorizon is related to an apparent horizon [@hawkhouches] rather than to an event horizon.
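Requirements (a)-(c) can be illustrated with a toy one-parameter family of profiles (our own sketch; the profile is purely illustrative and is not a solution of the field equations), extremal-Reissner-Nordström-like outside $r_{\rm h}$ and flat inside:

```python
import numpy as np

def profile(r, eps, rh=1.0):
    """Toy B(r) = 1/A(r): equals eps inside r_h, extremal-RN-like outside."""
    return eps + (1 - eps) * np.where(r > rh, (1 - rh/r)**2, 0.0)

r = np.linspace(0.01, 5.0, 2001)
eps = 1e-3
# (a) 1/A attains the small minimum value eps (here on the whole interior)
assert abs(profile(r, eps).min() - eps) < 1e-15
# (b) for eps > 0 the metric function B is nonvanishing everywhere
assert profile(r, eps).min() > 0
# (c) as eps -> 0, B -> 0 for all r <= r* = r_h
assert profile(r[r <= 1.0], 1e-9).max() < 1e-8
```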
### Generic properties.
0.2cm A study of the several properties that can be deduced from the above definition was initiated by Lemos and Zaslavskii [@lzs07]. Some generic properties are: (i) The quasiblack hole is on the verge of forming an event horizon; instead, a quasihorizon appears. (ii) The curvature invariants remain regular everywhere. (iii) A free-falling observer finds in his frame infinite tidal forces at the interface, showing some form of degeneracy. The inner region is, in a sense, a mimicker of a singularity. (iv) Outer and inner regions become somehow mutually impenetrable and disjoint. E.g., in the Lemos-Weinberg solution [@lemosweinberg2004], the interior is Bertotti-Robinson, the quasihorizon region is extremal Bertotti-Robinson, and the exterior is extremal Reissner-Nordström [@lzs07]. (v) There are whole 3-regions of infinite redshift. (vi) For faraway observers the spacetime is indistinguishable from that of black holes. (vii) Quasiblack holes with finite stresses must be extremal to the outside.
A comparison of quasiblack holes with other objects, such as wormholes, that can mimic black hole behavior was given in [@lzs08mim].
### Pressure properties.
0.2cm One can also work out what conditions the matter pressure should obey at the boundary when the configuration approaches the quasiblack hole regime. For these interesting properties see [@lzs10p].
### The mass formula.
0.2cm To find the mass of a quasiblack hole one develops the Tolman formula $m=\int (-T_{0}^{0}+T_{i}^{i})\sqrt{-g}\,d^{3}x$, where $i$ stands for the spacelike indices $1,2,3$. Since one uses the energy-momentum tensor $T_{ab}$ of the matter, this formula is not applicable to vacuum black holes; for black holes one has to use other methods [@bardcarhawk73; @smarr1973]. Nevertheless, in the general stationary case, we obtain in the horizon limit [@lzs08m1; @lzs09m2] $$m=\frac{\kappa A}{4\pi}+2\omega_{\mathrm{h}}J+
\varphi_{\mathrm{h}}q\,,$$ where $\kappa$ is the surface gravity, $A$ is the horizon area, $\omega_{\mathrm{h}}$ is the horizon angular velocity, $J$ the quasiblack hole angular momentum, $\varphi_{\mathrm{h}}$ the horizon electric potential, and $q$ the quasiblack hole electrical charge. This is precisely Smarr’s formula [@smarr1973], but now for quasiblack holes. The contribution of the term $\frac{\kappa A}{4\pi}$ comes from the tangential pressures, which grow unbounded in the quasiblack hole limit but are at the same time redshifted away to give precisely $\frac{\kappa A}{4\pi}$. In the extremal case, the term $\frac{\kappa A}{4\pi}$ goes to zero, since $\kappa$ is zero. See also Meinel [@meinel2006; @meinel2010] for the pure stationary solution of the Bardeen-Wagoner type disks [@bardeenwagoner1971].
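The formula can be checked explicitly on the Reissner-Nordström family (a numerical sketch we add; with $G=c=1$, horizon radii $r_\pm=m\pm\sqrt{m^2-q^2}$, $\kappa=(r_+-r_-)/2r_+^2$, $A=4\pi r_+^2$, $\varphi_{\rm h}=q/r_+$, and $J=\omega_{\rm h}=0$):

```python
import math

def smarr_mass(m, q):
    """kappa*A/(4*pi) + 2*omega_h*J + phi_h*q for a Reissner-Nordstrom hole."""
    r_p = m + math.sqrt(m**2 - q**2)              # outer horizon radius r_+
    kappa = math.sqrt(m**2 - q**2) / r_p**2       # = (r_+ - r_-)/(2 r_+^2)
    area = 4 * math.pi * r_p**2                   # horizon area
    phi_h = q / r_p                               # horizon electric potential
    J = omega_h = 0.0                             # nonrotating
    return kappa * area / (4 * math.pi) + 2 * omega_h * J + phi_h * q

assert abs(smarr_mass(1.0, 0.6) - 1.0) < 1e-12    # nonextremal case
assert abs(smarr_mass(1.0, 1.0) - 1.0) < 1e-12    # extremal: the kappa term drops
```

In the extremal case the entire mass is carried by the $\varphi_{\rm h}q$ term, in line with the statement above that $\frac{\kappa A}{4\pi}$ vanishes when $\kappa=0$.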
### The entropy.
0.2cm To find the entropy one uses the first law of thermodynamics together with the Brown-York formalism [@by1993]. The approach developed is model-independent; it solely exploits the fact that the boundary of the matter almost coincides with the quasihorizon [@lzs10e1; @lzs10e2].
For nonextremal quasiblack holes, when one carefully takes the horizon limit, one finds that the entropy $S$ is [@lzs10e1] $$S=\frac14 A\,,$$ where $A$ is the quasihorizon area, in accord with the black hole entropy [@beken1973; @hawktemp]. The contribution to this value comes again from the tangential stresses, which grow unbounded in the nonextremal case. Since these divergent stresses are at the boundary, the result suggests that the degrees of freedom are on the horizon. It is precisely when a quasihorizon is achieved, and the system has to settle to the Hawking temperature, that the entropy takes the value $A/4$. The result, together with the approach, suggests further that the degrees of freedom are ultimately gravitational modes. Since the tangential pressures grow unbounded here, all modes, presumably quantum modes, are excited. In pure vacuum, as for a simple black hole, they should be gravitational modes.
For extremal quasiblack holes the stresses are finite at the quasihorizon. So one should deduce that not all possible modes are excited. This means that the entropy of an extremal quasiblack hole, and by continuity of an extremal black hole, should be $S\leq\frac14
A$. Indeed in [@lzs10e2] we find for extremal quasiblack holes, $$0<S\leq\frac14
A\,.$$ The problem of the entropy of extremal black holes is a particularly interesting one. Arguments based on the periodicity of the Euclidean section of the black hole lead one to assign zero entropy in the extremal case. However, extremal black hole solutions in string theory typically have the conventional value given by the Bekenstein-Hawking area formula $S=A/4$. We find an interesting compromise.
Conclusions
===========
Black holes are generic and stable. Quasiblack holes perhaps are not. Any perturbation would turn them into a black hole, although the inclusion of pressure may stabilize the system.
However, stable or not, the quasiblack hole approach can elucidate many features of black holes, such as the mass formula and the entropy. The quasiblack hole approach to the understanding of black hole physics seems somehow like the membrane paradigm [@thorne]. Indeed, by taking a timelike matter surface into a null horizon, in a limiting process, we are recovering the membrane paradigm. One big difference is that our membrane is not fictitious like the membrane of the membrane paradigm; it is a real matter membrane.
[**Acknowledgments.**]{} I would like to thank Vilson Zanchin (São Paulo) and Oleg Zaslavskii (Kharkov) for the many works in collaboration related to quasiblack holes. I thank Alexander Balakin and Asya Aminova for inviting me to the Petrov Anniversary Symposium in Kazan held in November 2010. I thank the interest in my talk by Gennady Bisnovatyi-Kogan who raised the important point connected with the stability of quasiblack holes and suggested a way of accessing the problem. I also thank the interest in my talk by Mikhail Katanaev and Dieter Brill who raised the important point connected with the Penrose diagrams of quasiblack holes. One of my motivations to come to this Petrov Anniversary Symposium was a work on the Petrov classification of tensors with four indices, such as the Levi-Civita tensor, with my student André Moita. For personal reasons, he was not able to participate in the Symposium, so our contribution has not appeared in it. Nevertheless, we will publish the results elsewhere. I also thank for financial support the Fundação para a Ciência e Tecnologia (FCT) through projects CERN/FP/109276/2009 and PTDC/FIS/098962/2008 and the grant SFRH/BSAB/987/2010, and the Reitoria da Universidade Técnica de Lisboa for specific support to the presentation of my talk at the Petrov Symposium.
[99]{}
K. Schwarzschild, Sitz. Kön. Preuss. Akad. Wiss. 1, 189 (1916). K. Schwarzschild, Sitz. Kön. Preuss. Akad. Wiss. 1, 424 (1916).
A. Z. Petrov, [*Einstein spaces*]{}, (Pergamon Press, Oxford 1969), (a translation with improvements of the book published in 1961 in Russian).
A. Z. Petrov, Uch. Zapiski Kazan Gos. Univ. 144, 55 (1954), English translation in Gen. Rel. Grav. 22, 1665 (2000).
L. D. Landau and E. M. Lifshitz, [*The classical theory of fields*]{} (Pergamon Press, Oxford 1959).
A. Z. Petrov, in [*Recent developments in general relativity (a book dedicated to the 60$^{\rm th}$ birthday of L. Infeld)*]{}, eds. S. Bazanski et al (Pergamon Press, Oxford 1962), p. 383.
W. B. Bonnor, Phys. Lett. A 75, 25 (1979).
M. D. Kruskal, Phys. Rev. 119, 1743 (1960).
K. W. Ford and J. A. Wheeler, [*Geons, black holes, and quantum foam: A life in physics*]{}, (W. W. Norton & Company, New York, 2000).
H. Reissner, Ann. Phys. (Berlin) 355, 106 (1916).
G. Nordström, Proc. Kon. Ned. Akad. Wet. 20, 1238 (1918).
J. C. Graves and D. R. Brill, Phys. Rev. 120, 1507 (1960).
R. P. Kerr, Phys. Rev. Lett. 11, 237 (1963).
E. T. Newman, E. Couch, K. Chinnapared, A. Exton, A. Prakash, and R. Torrence, J. Math. Phys. 6, 918 (1965).
C. W. Misner, K. S. Thorne, and J. A. Wheeler, [*Gravitation*]{}, (Freeman, San Francisco, 1973).
J. R. Oppenheimer and H. Snyder, Phys. Rev. 56, 455 (1939).
H. A. Buchdahl, Phys. Rev. 116, 1027 (1959).
E. E. Salpeter, Astroph. Journ. 140, 796 (1964).
Ya. B. Zel’dovich, Soviet Physics Doklady 9, 195 (1964).
D. Lynden-Bell, Nature 223, 690 (1969).
J. P. S. Lemos, “Black holes: from galactic nuclei to elementary particles”, in [*Proceedings of the 21$^{th}$ annual meeting of the Brazilian Astronomical Society*]{}, eds. F. Jablonski, F. Elizalde, L. Sodré Jr., and Vera Jablonsky, (Instituto Astronomico e Geofisico–USP, S. Paulo, 1996), p. 57; arXiv:astro-ph/9612220.
J. P. S. Lemos, “A profusion of black holes from two to ten dimensions”, in [*Proceedings of the 17$^{\rm th}$ national meeting of particle physics and fields*]{}, ed. A. J. Silva, (University of São Paulo Press, 1997), p. 40; arXiv:hep-th/9701121.
J. P. S. Lemos, “Black holes and fundamental physics”, in [*Proceedings of the 5$^{\rm th}$ international workshop on new worlds in astroparticle physics*]{}, eds. A. Mourão et al (World Scientific, Singapore 2005), p. 71; arXiv:gr-qc/0507101.
J. P. S. Lemos, “Black hole entropy and the holographic principle”, in [*Advances in physical sciences*]{}, ed. Luis D. Carlos (Universidade de Aveiro Press, Aveiro 2008), p. 97; arXiv:0712.3945 \[gr-qc\].
H. Weyl, Ann. Phys. (Berlin) 359, 117 (1917).
S. D. Majumdar, Phys. Rev. 72, 390 (1947).
A. Papapetrou, Proc. Roy. Irish Acad. A 51, 191 (1947).
J. B. Hartle and S. W. Hawking, Comm. Math. Phys. 26, 87 (1972).
W. B. Bonnor, Class. Quant. Grav. 16, 4125 (1999).
J. P. S. Lemos and E. Weinberg, Phys. Rev. D 69, 104004 (2004).
J. Bardeen, in [*Proceedings of the 5$^{\rm
th}$ international conference on general relativity and gravitation - GR5*]{}, (Tbilisi, URSS, 1968), p. 174.
I. G. Dymnikova, Gen. Rel. Grav. 24, 235 (1992).
S. Ansoldi, arXiv:0802.0330 \[gr-qc\] (2008).
J. P. S. Lemos and V. T. Zanchin, “Regular black holes as elementary particle models”, in preparation.
A. Kleber, J. P. S. Lemos, and V. T. Zanchin, J. Gravit. and Cosmology 11, 269 (2005).
J. P. S. Lemos and V. T. Zanchin, Phys. Rev. D 77, 064003 (2008).
W. B. Bonnor, Gen. Rel. Grav. 42, 1825 (2010).
J. P. S. Lemos and V. T. Zanchin, Phys. Rev. D 71, 124021 (2005).
J. P. S. Lemos and V. T. Zanchin, Phys. Rev. D 81, 124016 (2010).
F. de Felice, Y. Yunqiang, and F. Jing, Mon. Not. R. Astron. Soc. 277, L17 (1995).
F. de Felice, L. Siming, and Y. Yunqiang, Class. Quant. Grav. 16, 2669 (1999).
B. S. Guilfoyle, Gen. Rel. Grav. 31, 1645 (1999).
J. P. S. Lemos and V. T. Zanchin, Phys. Rev. D 80, 024010 (2009).
A. Lue and E. J. Weinberg, Phys. Rev. D 60, 084025 (1999).
A. Lue and E. J. Weinberg, Phys. Rev. D 61, 124003 (2000).
J. P. S. Lemos and V. T. Zanchin, J. Math. Phys. 47, 042504 (2006).
J. P. S. Lemos, “Quasiblack holes in the Einstein-Cartan theory”, in preparation.
J. M. Bardeen and R. V. Wagoner, Astrophys. J. 167, 359 (1971).
R. Meinel, Class. Quantum Grav. 23, 1359 (2006).
A. Kleinwächter, H. Labranche, and R. Meinel, Gen. Rel. Grav. 43, 1469 (2011).
R. Penrose, Nature 236, 377 (1972).
S. W. Hawking, in [*Black Holes*]{}, eds. C. DeWitt and B. S. DeWitt (Gordon and Breach, New York, 1973), p. 1.
S. W. Hawking and G. F. R. Ellis, [*The Large Scale Structure of Space-Time*]{}, (Cambridge University Press, Cambridge, 1973).
B. Carter, in [*General Relativity, an Einstein Centenary Survey*]{}, eds. S. W. Hawking and W. Israel, (Cambridge University Press, Cambridge 1979), p. 294.
J. M. Bardeen, B. Carter, and S. W. Hawking, Commun. Math. Phys. 31, 161 (1973).
L. Smarr, Phys. Rev. Lett. 30, 71 (1973).
J. D. Bekenstein, Phys. Rev. D 7, 2333 (1973).
S. W. Hawking, Commun. Math. Phys. 43, 199 (1975).
J. D. Brown and J. W. York, Phys. Rev. D 47, 1407 (1993).
J. P. S. Lemos and O. B. Zaslavskii, Phys. Rev. D 76, 084030 (2007).
J. P. S. Lemos and O. B. Zaslavskii, Phys. Rev. D 78, 024040 (2008).
J. P. S. Lemos and O. B. Zaslavskii, Phys. Rev. D 82, 024029 (2010).
J. P. S. Lemos and O. B. Zaslavskii, Phys. Rev. D 78, 124013 (2008).
J. P. S. Lemos and O. B. Zaslavskii, Phys. Rev. D 79, 044020 (2009).
J. P. S. Lemos and O. B. Zaslavskii, Phys. Rev. D 81, 064012 (2010).
J. P. S. Lemos and O. B. Zaslavskii, Phys. Lett. B 695, 37 (2011).
K. S. Thorne, D. A. MacDonald, and R. H. Price, [*Black holes: The membrane paradigm*]{}, (Yale University Press, New Haven, 1986).
---
author:
- Guillermo Cortiñas
title: 'Algebraic v. topological $K$-theory: a friendly match'
---
[^1]
Introduction
============
These notes evolved from the lecture notes of a minicourse given in Swisk, the Sedano Winter School on K-theory held in Sedano, Spain, during the week January 22–27 of 2007, and from those of a longer course given at the University of Buenos Aires during the second half of 2006. They are intended as an introduction to $K$-theory, with emphasis on the comparison between its algebraic and topological variants. We have tried to keep the exposition as elementary as possible. Section \[sec:knle1\] introduces $K_n$ for $n\le 1$. Elementary properties such as matrix stability and excision are discussed. Section \[sec:topk\] is concerned with topological $K$-theory of Banach algebras; its excision property is derived from the excision sequence for algebraic $K_0$ and $K_1$. Cuntz’ proof of Bott periodicity for $C^*$-algebras, via the $C^*$-Toeplitz extension, is sketched. In the next section we review Karoubi-Villamayor $K$-theory, which is an algebraic version of $K^{{\rm top}}$ and has some formally similar properties, such as (algebraic) homotopy invariance, but does not satisfy excision in general. Section \[sec:kh\] discusses $KH$, Weibel’s homotopy $K$-theory, which is introduced in a purely algebraic, spectrum-free manner. Several of its properties, including excision, homotopy invariance and the fundamental theorem, are proved. The parallelism between Bott periodicity and the fundamental theorem for $KH$ is emphasized by the use of the algebraic Toeplitz extension in the proof of the latter. Quillen’s higher $K$-theory is introduced in Section \[sec:kq\], via the plus construction of the classifying space of the general linear group. This is the first place where some algebraic topology is needed. The “décalage” formula $K_n\Sigma R=K_{n-1}R$ via Karoubi’s suspension is proved, and some of the deep results of Suslin and Wodzicki on excision are discussed.
Then the fundamental theorem for $K$-theory is reviewed, and its formal connection to Bott periodicity via the algebraic Toeplitz extension is established. The next section is the first of three devoted to the comparison between algebraic and topological $K$-theory of topological algebras. Using Higson’s homotopy invariance theorem, and the excision results of Suslin and Wodzicki, we give proofs of the $C^*$- and Banach variants of Karoubi’s conjecture, that algebraic and topological $K$-theory become isomorphic after stabilizing with respect to the ideal of compact operators (theorems of Suslin-Wodzicki and Wodzicki, respectively). Section \[sec:topk2\] defines two variants of topological $K$-theory for locally convex algebras: $KV^{{\rm dif}}$ and $KD$, which are formally analogous to $KV$ and $KH$. Some of their basic properties are similar to, and derived with essentially the same arguments as, their algebraic counterparts. We also give a proof of Bott periodicity for $KD$ of locally convex algebras stabilized by the algebra of smooth compact operators. The proof uses the locally convex Toeplitz extension, and is modelled on Cuntz’ proof of Bott periodicity for his bivariant $K$-theory of locally convex algebras. In Section \[sec:compa2\] we review some of the results of [@cot]. Using the homotopy invariance theorem of Cuntz and Thom, we show that $KH$ and $KD$ agree on locally convex algebras stabilized by Fréchet operator ideals. The spectra for Quillen’s and Weibel’s $K$-theory, and the space for Karoubi-Villamayor $K$-theory, are introduced in Section \[sec:spectra\], where the primary and secondary characters going from $K$-theory to cyclic homology are also reviewed. The technical results of this section are used in the next, where we again deal with the comparison between algebraic and topological $K$-theory of locally convex algebras.
We give proofs of the Fréchet variant of Karoubi’s conjecture (due to Wodzicki), and of the $6$-term exact sequence of [@cot], which relates algebraic $K$-theory and cyclic homology to topological $K$-theory of a stable locally convex algebra.
The groups $K_n$ for $n\le 1$. {#sec:knle1}
==============================
Throughout these notes, $A, B, C$ will be rings and $R, S, T$ will be rings with unit.
Definition and basic properties of $K_j$ for $j=0,1$. {#sec:basick01}
-----------------------------------------------------
Let $R$ be a ring with unit. Write $M_nR$ for the ring of $n\times n$ matrices with entries in $R$. Regard $M_nR\subset M_{n+1}R$ via $$\label{inclumat}
a\mapsto \left[\begin{array}{cc} a & 0\\ 0 & 0\end{array}\right]$$ Put $$M_\infty R=\bigcup_{n=1}^\infty M_nR$$ Note $M_\infty R$ is a ring (without unit). We write $\idem_n R$ and $\idem_\infty R$ for the set of idempotent elements of $M_nR$ and $M_\infty R$. Thus $$M_\infty R\supset\idem_\infty R=\bigcup_{n=1}^\infty \idem_nR.$$ We write $\GL_nR=(M_nR)^*$ for the group of invertible matrices. Regard $\GL_nR\subset \GL_{n+1}R$ via $$g\mapsto\left[\begin{array}{cc}g&0\\0&1\end{array}\right]$$ Put $$\GL R:=\bigcup_{n=1}^\infty\GL_nR.$$ Note $\GL R$ acts by conjugation on $M_\infty R$, $\idem_\infty R$ and, of course, $\GL R$.
For $a,b\in M_\infty R$ there is defined a [*direct sum*]{} operation $$\label{directsum}
a\oplus b:=\left[\begin{array}{ccccccc}a_{1,1}&0&a_{1,2}&0&a_{1,3}&0&\dots\\
0 & b_{1,1}&0&b_{1,2}&0&b_{1,3}&\dots\\
a_{2,1}&0&a_{2,2}&0&a_{2,3}&0&\dots\\
\vdots &\vdots &\vdots &\vdots &\vdots & \vdots& \ddots\end{array}\right].$$ We remark that if $a, b\in M_pR$ then $a\oplus b\in M_{2p}R$ and is conjugate, by a permutation matrix, to the usual direct sum $$\left[\begin{array}{cc}a&0\\0&b\end{array}\right].$$ One checks that $\oplus$ is associative and commutative up to conjugation. Thus the coinvariants under the conjugation action $$I(R):=((\idem_\infty R)_{\GL R},\oplus)$$ form an abelian monoid.
\[exe:othersums\] The operation can be described as follows. Consider the decomposition ${\mathbb{N}}={\mathbb{N}}_0\coprod {\mathbb{N}}_1$ into even and odd positive integers; write $\phi_i$ for the bijection $\phi_i:{\mathbb{N}}\to {\mathbb{N}}_i$, $\phi_i(n)=2n-i$ $i=0,1$. The map $\phi_i$ induces an $R$-module monomorphism $$\phi_i:R^{({\mathbb{N}})}:=\bigoplus_{n=1}^\infty R\to R^{({\mathbb{N}}_i)}\subset R^{({\mathbb{N}})},\qquad e_n\mapsto e_{\phi_i(n)}.$$ We abuse notation and also write $\phi_i$ for the matrix of this homomorphism with respect to the canonical basis and $\phi_i^t$ for its transpose. Check the formula $$a\oplus b=\phi_0a\phi^t_0+\phi_1a\phi^t_1.$$ Observe that the same procedure can be applied to any decomposition ${\mathbb{N}}={\mathbb{N}}'_0\coprod {\mathbb{N}}'_1$ into two infinite disjoint subsets and any choice of bijections $\phi'_i:{\mathbb{N}}\to {\mathbb{N}}'_i$, to obtain an operation $\oplus_{\phi'}:M_\infty R\times M_\infty R\to M_\infty R$. Verify that the operation so obtained defines the same monoid structure on the coinvariants $(M_\infty R)_{\GL R}$, and thus also on $I(R)$.
Let $M$ be an abelian monoid. Then there exist an abelian group $M^+$ and a monoid homomorphism $M\to M^+$ such that if $M\to G$ is any other such homomorphism, then there exists a unique group homomorphism $M^+\to G$ such that $$\xymatrix{M\ar[r]\ar[dr]&M^+\ar@{.>}[d]\\&G}$$ commutes.
Let $F={\mathbb{Z}}^{(M)}$ be the free abelian group on one generator $e_m$ for each $m\in M$, and let $S\subset F$ be the subgroup generated by all elements of the form $e_{m_1}+e_{m_2}-e_{m_1+m_2}$. One checks that $M^+=F/S$ satisfies the desired properties.
\[defi:k01\] $$\begin{gathered}
K_0(R):=I(R)^+\\
K_1(R):=\frac{\GL R}{[\GL R,\GL R]}=(\GL R)_{ab}.\end{gathered}$$ Here $[,]$ denotes the commutator subgroup, and the subscript ${}_{ab}$ indicates abelianization.
(see [@rosen Section 2.1])
- $[\GL R,\GL R]=ER:=\langle 1+ae_{i,j}:a\in R,\ i\ne j\rangle$, the subgroup of $\GL R$ generated by elementary matrices.
- If $\alpha\in \GL_n R$ then $$\left[\begin{array}{cc}\alpha &0\\0&\alpha^{-1}\end{array}\right]\in E_{2n}R\text{ \qquad\qquad (Whitehead's Lemma).}$$ (Here $E_{2n}R=ER\cap\GL_{2n}R$).
As a consequence of Whitehead’s lemma above, if $\alpha,\beta\in \GL_n R$, then $$\begin{aligned}
\label{whitehead}
\alpha\beta=&\left[\begin{array}{cc}\alpha\beta&0\\0&1_{n\times n}\end{array}\right]\nonumber\\
=&\left[\begin{array}{cc}\alpha&0\\0&\beta\end{array}\right]\left[\begin{array}{cc}\beta &0\\0&\beta^{-1}\end{array}\right]\\
\equiv&\alpha\oplus\beta\quad\mod ER.\nonumber\end{aligned}$$
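The congruence above rests on Whitehead's lemma, whose underlying matrix factorization can be verified numerically. The sketch below is ours; the choice of $\alpha$ is an arbitrary matrix in $\GL_2\mathbb{Z}$, and the four block factors are the ones appearing in the standard proof:

```python
import numpy as np

# An invertible integer matrix and its (integer) inverse.
alpha = np.array([[2, 1], [1, 1]])
alpha_inv = np.array([[1, -1], [-1, 2]])
I2 = np.eye(2, dtype=int)
Z2 = np.zeros((2, 2), dtype=int)

# diag(alpha, alpha^{-1}) as a product of elementary-type block matrices:
# [[1, a], [0, 1]] · [[1, 0], [-a^{-1}, 1]] · [[1, a], [0, 1]] · [[0, -1], [1, 0]]
E1 = np.block([[I2, alpha], [Z2, I2]])
E2 = np.block([[I2, Z2], [-alpha_inv, I2]])
R = np.block([[Z2, -I2], [I2, Z2]])

product = E1 @ E2 @ E1 @ R
diag = np.block([[alpha, Z2], [Z2, alpha_inv]])
```

Each factor is a product of elementary matrices, so the computation exhibits $\mathrm{diag}(\alpha,\alpha^{-1})\in E_4\mathbb{Z}$ for this $\alpha$.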
\[exe:othersums2\] Let $R$ be a unital ring, and let $\phi'$ and $\oplus_{\phi'}$ be as in Exercise \[exe:othersums\]. Prove that $\oplus_{\phi'}$ and $\oplus$ define the same operation on $K_1(R)$, namely the operation induced by the product of matrices.
Let $r\ge 1$. Then $$p_r=1_{r\times r}\in\idem_\infty R.$$ Because $p_r\oplus p_s$ and $p_{r+s}$ define the same class in $I(R)$, the assignment $r\mapsto p_r$ defines a monoid homomorphism ${\mathbb{N}}\to I(R)$. Applying the group completion functor we obtain a group homomorphism $$\label{ztok0}
{\mathbb{Z}}={\mathbb{N}}^+\to I(R)^+=K_0R.$$ Similarly, the inclusion $R^*=\GL_1R\subset \GL R$ induces a homomorphism $$\label{antidet}
R^*_{ab}\to K_1R.$$
\[exa:conmut\] If $F$ is a field, and $e\in \idem_\infty F$ is of rank $r$, then $e$ is conjugate to $p_r$; moreover $p_r$ and $p_s$ are conjugate $\iff$ $r=s$. Thus the map ${\mathbb{Z}}\to K_0F$ is an isomorphism in this case. Assume more generally that $R$ is commutative. Then ${\mathbb{Z}}\to K_0R$ is a split monomorphism. Indeed, there exists a surjective unital homomorphism $R{\twoheadrightarrow}F$ onto a field $F$; the induced map $K_0(R)\to K_0(F)={\mathbb{Z}}$ is a left inverse of ${\mathbb{Z}}\to K_0R$. Similarly, for commutative $R$, the homomorphism $R^*_{ab}\to K_1R$ is injective, since it is split by the map $\det:K_1R\to R^*$ induced by the determinant.
\[dualn\] The following are examples of rings for which the maps ${\mathbb{Z}}\to K_0R$ and $R^*_{ab}\to K_1R$ above are isomorphisms (see [@rosen Ch.1§3, Ch.2§2,§3]): fields, division rings, principal ideal domains and local rings. Recall that a ring $R$ is a [*local ring*]{} if the subset $R\backslash R^*$ of noninvertible elements is an ideal of $R$. For instance if $k$ is a field, then the $k$-algebra $k[\epsilon]:=k\oplus k\epsilon$ with $\epsilon^2=0$ is a local ring. Indeed $k[\epsilon]^*=k^*+k\epsilon$ and $k[\epsilon]\backslash k[\epsilon]^*=k\epsilon{\triangleleft}k[\epsilon]$.
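A quick machine check of the asserted unit group of $k[\epsilon]$ (our illustration, over $k=\mathbb{Q}$): for $a\ne0$ the inverse of $a+b\epsilon$ is $a^{-1}-a^{-2}b\epsilon$, while the elements with $a=0$ are exactly the non-units and form the ideal $k\epsilon$.

```python
from fractions import Fraction as F

def mul(x, y):
    """Product in k[ε] = k ⊕ kε with ε² = 0; the pair (a, b) stands for a + bε."""
    (a, b), (c, d) = x, y
    return (a * c, a * d + b * c)

def inverse(x):
    """(a + bε)^{-1} = a^{-1} - a^{-2}·b·ε, defined whenever a ≠ 0."""
    a, b = x
    return (F(1) / a, -b / (a * a))
```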
\[exa:rsubi\] Here is an example of a local ring involving operator theory. Let $H$ be a separable Hilbert space over ${\mathbb{C}}$; put ${{\mathcal{B}}}={{\mathcal{B}}}(H)$ for the algebra of bounded operators. Write ${\mathcal{K}}\subset {\mathcal{B}}$ for the ideal of compact operators, and ${{\mathcal{F}}}$ for that of finite rank operators. The Riesz-Schauder theorem from elementary operator theory implies that if $\lambda\in{\mathbb{C}}^*$ and $T\in {\mathcal{K}}$ then there exists an $f\in{{\mathcal{F}}}$ such that $\lambda+T+f$ is invertible in ${\mathcal{B}}$. In fact one checks that if ${{\mathcal{F}}}\subset I\subset{\mathcal{K}}$ is an ideal of ${{\mathcal{B}}}$ such that $T\in I$ then the inverse of $\lambda+T+f$ is again in ${\mathbb{C}}\oplus I$. Hence the ring $$R_I:={\mathbb{C}}\oplus I/{{\mathcal{F}}}$$ is local, and thus $K_0(R_I)={\mathbb{Z}}$.
\[rem:k0projmod\]([*$K_0$ from projective modules*]{}) In the literature, $K_0$ of a unital ring is often defined in terms of finitely generated projective modules. This approach is equivalent to ours, as we shall see presently. If $R$ is a unital ring and $e\in \idem_nR$, then left multiplication by $e$ defines a right module homomorphism $R^n=R^{n\times 1}\to R^n$ with image $eR^n$. Similarly $(1-e)R^n\subset R^n$ is a submodule, and we have a direct sum decomposition $$R^n=eR^n\oplus (1-e)R^n.$$ Hence $eR^n$ is a finitely generated projective module, as it is a direct summand of a finitely generated free $R$-module. Note every finitely generated projective right $R$-module arises in this way for some $n$ and some $e\in\idem_nR$. Moreover, one checks that if $e\in \idem_nR$ and $f\in\idem_mR$, then the modules $eR^n$ and $fR^m$ are isomorphic if and only if the images of $e$ and $f$ in $\idem_\infty R$ define the same class in $I(R)$ (see [@rosen Lemma 1.2.1]). Thus we have a natural bijection from the monoid $I(R)$ to the set $P(R)$ of isomorphism classes of finitely generated projective modules; further, one checks that the direct sum of idempotents corresponds to the direct sum of modules. Hence the monoids $I(R)$ and $P(R)$ are isomorphic, and therefore they have the same group completion: $$K_0(R)=I(R)^+=P(R)^+.$$
#### *Additivity.*
If $R_1$ and $R_2$ are unital rings, then $M_\infty(R_1\times R_2)\to M_\infty R_1\times M_\infty R_2$ is an isomorphism. It follows from this that the natural map induced by the projections $R_1\times R_2\to R_i$ is an isomorphism: $$K_j(R_1\times R_2)\to K_jR_1\oplus K_jR_2\qquad (j=0,1).$$
#### *Application: extension to nonunital rings.*
If $A$ is any (not necessarily unital) ring, then the abelian group $\tilde{A}=A\oplus{\mathbb{Z}}$ equipped with the multiplication $$\label{unital_prod}
(a+n)(b+m):=ab+nb+ma+nm\qquad (a,b\in A, \ \ n,m\in{\mathbb{Z}})$$ is a unital ring, with unit element $1\in{\mathbb{Z}}$, and $\tilde{A}\to{\mathbb{Z}}$, $a+n\mapsto n$, is a unital homomorphism. Put $$K_j(A):=\ker(K_j\tilde{A}\to K_j{\mathbb{Z}}) \qquad (j=0,1).$$ If $A$ happens to have a unit, we have two definitions for $K_jA$. To check that they are the same, one observes that the map $$\label{unitalize}
\tilde{A}\to A\times{\mathbb{Z}}, \ \ a+n\mapsto (a+n\cdot 1,n)$$ is a unital isomorphism. One verifies that, under this isomorphism, $\tilde{A}\to {\mathbb{Z}}$ identifies with the projection $A\times {\mathbb{Z}}\to {\mathbb{Z}}$, and $\ker(K_j(\tilde{A})\to K_j{\mathbb{Z}})$ with $\ker(K_jA\oplus K_j{\mathbb{Z}}\to K_j{\mathbb{Z}})=K_jA$. Note that the same procedure works to extend any additive functor of unital rings unambiguously to all rings.
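A minimal sketch of the unitalization (ours, with $A={\mathbb{Z}}$ playing the role of a ring that happens to be unital, so that the comparison map $\tilde{A}\to A\times{\mathbb{Z}}$ can also be tested):

```python
def mul_tilde(x, y):
    """Product in Ã = A ⊕ Z: (a + n)(b + m) = ab + nb + ma + nm."""
    (a, n), (b, m) = x, y
    return (a * b + n * b + m * a, n * m)

def to_product(x):
    """The isomorphism Ã → A × Z, a + n ↦ (a + n·1, n), valid when A is unital.
    Here A = Z, so n·1 = n."""
    a, n = x
    return (a + n, n)
```

The checks below confirm that $(0,1)$ is a two-sided unit and that `to_product` is multiplicative, as used in the comparison of the two definitions of $K_jA$.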
We write ${\mathfrak{Ass}}$ for the category of rings and ring homomorphisms, and ${\mathfrak{Ass}}_1$ for the subcategory of unital rings and unit preserving ring homomorphisms.
\[rem:nunitalk1\] The functor $\GL:{\mathfrak{Ass}}_1\to{\mathfrak{Grp}}$ preserves products. Hence it extends to all rings by $$\GL(A):=\ker(\GL(\tilde{A})\to \GL{\mathbb{Z}})$$ It is a straightforward exercise to show that, with this definition, $\GL$ becomes a left exact functor in ${\mathfrak{Ass}}$; thus if $A{\triangleleft}B$ is an ideal embedding, then $\GL(A)=\ker(\GL(B)\to\GL(B/A))$. It is straightforward from this that the group $K_1A$ defined above can be described as $$\label{form=k1nunital}
K_1A=\GL(A)/\left(E(\tilde{A})\cap \GL(A)\right)$$ A little more work shows that $E(\tilde{A})\cap \GL(A)$ is the smallest normal subgroup of $E(\tilde{A})$ which contains the elementary matrices $1+a e_{i,j}$ with $a\in A$ (see [@rosen 2.5]).
#### *Matrix stability.*
Let $R$ be a unital ring and $n\ge 2$. A choice of bijection $\phi:{\mathbb{N}}\times {\mathbb{N}}_{\le n}\cong{\mathbb{N}}$ gives a ring isomorphism $\phi:M_\infty(M_nR)\cong M_\infty(R)$ which induces, for $j=0,1$, a group isomorphism $\phi_j:K_j(M_nR)\cong K_jR$. Next, consider the decomposition ${\mathbb{N}}={\mathbb{N}}'_0\coprod{\mathbb{N}}'_1$, ${\mathbb{N}}'_0=\phi({\mathbb{N}}\times\{1\})$, ${\mathbb{N}}'_1=\phi({\mathbb{N}}\times {\mathbb{N}}_{\le n}\backslash {\mathbb{N}}\times\{1\})$. Setting $\psi_0:{\mathbb{N}}\to {\mathbb{N}}'_0$, $\psi_0(m)=\phi(m,1)$ and choosing any bijection $\psi_1:{\mathbb{N}}\to {\mathbb{N}}'_1$, we obtain, as in Exercise \[exe:othersums\], a direct sum operation $\oplus_\psi:M_\infty R\times M_\infty R\to M_\infty R$. Set $\iota: R\to M_nR$, $r\mapsto re_{11}$. The composite of $M_\infty\iota$ followed by the isomorphism induced by $\phi$ is the map sending $$\label{map:+0}
e_{i,j}(r)\mapsto e_{\phi(i,1),\phi(j,1)}(r)=e_{i,j}(r)\oplus_\psi 0.$$ By Exercise \[exe:othersums\] the latter map induces the identity in $K_0$. Moreover, one checks that this map induces the homomorphism $\GL(M_nR)\to \GL(M_nR)$, $g\mapsto g\oplus_\psi 1$, whence it also gives the identity in $K_1$, by Exercise \[exe:othersums2\]. It follows that, for $j=0,1$, the map $$K_j(\iota):K_j(R)\to K_j(M_nR)$$ is an isomorphism, inverse to $\phi_j$. Starting with a bijection $\phi:{\mathbb{N}}\times {\mathbb{N}}\to {\mathbb{N}}$ and using the same argument as above, one shows that also $$K_j(\iota):K_j(R)\to K_j(M_\infty R)$$ is an isomorphism.
#### *Nilinvariance for $K_0$.*
If $I{\triangleleft}R$ is a nilpotent ideal, then $K_0(R)\to K_0(R/I)$ is an isomorphism. This property is a consequence of the well-known fact that nilpotent extensions admit idempotent liftings, and that any two liftings of the same idempotent are conjugate (see for example [@ben 1.7.3]). Note that $K_1$ does not have the same property, as the following example shows.
\[exa:dualn2\] Let $k$ be a field. Then by \[dualn\], $K_1(k[\epsilon])=k^*+k\epsilon$ and $K_1(k)=k^*$. Thus $k[\epsilon]\to k[\epsilon]/\epsilon k[\epsilon]=k$ does not become an isomorphism under $K_1$.
\[exa:square-zero\] Let $A$ be an abelian group; make it into a ring with the trivial product: $ab=0$ $\forall a,b\in A$. The map $A\to\GL_1A$, $a\mapsto 1+a$ is an isomorphism of groups, and thus induces a group homomorphism $A\to K_1A$. We are going to show that the latter map is an isomorphism. First of all, it is injective, since $\GL_1(\tilde{A})\to K_1(\tilde{A})$ is injective (by \[exa:conmut\], as $\tilde{A}$ is commutative) and since, by definition, $K_1A\subset K_1(\tilde{A})$. Second, note that if $\epsilon=1+ae_{ij}$ is an elementary matrix with $a\in A$ and $g\in\GL A$, then $(\epsilon g)_{ij}=g_{ij}+a$, and $(\epsilon g)_{p,q}=g_{p,q}$ for $(p,q)\ne (i,j)$. Thus $g$ is congruent to its diagonal in $K_1 A$. But by Whitehead’s lemma, any diagonal matrix in $\GL(\tilde{A})$ is $K_1$-equivalent to its determinant (see the congruence following Whitehead’s lemma in \[sec:basick01\]). This shows that $A\to K_1A$ is surjective, whence an isomorphism.
The example above shows that $K_1$ is no longer matrix stable when extended to general nonunital rings. In addition, it gives another example of the failure of nilinvariance for $K_1$ of unital rings. It follows from \[exa:dualn2\] and \[exa:square-zero\] that if $k$ and $\epsilon$ are as in Example \[exa:dualn2\], then $K_1(k\epsilon)=\ker(K_1(k[\epsilon])\to K_1(k))$. In \[exa:swanex\] below, we give an example of a unital ring $T$ such that $k\epsilon$ is an ideal in $T$ and such that $\ker(K_1T\to K_1(T/k\epsilon))=0$.
Prove that $K_0$ and $K_1$ commute with filtering colimits; that is, show that if $I$ is a small filtering category and $A:I\to {\mathfrak{Ass}}$ is a functor, then for $j=0,1$, the map ${\mathop{\mathrm{colim}}}_I K_jA_i\to K_j({\mathop{\mathrm{colim}}}_IA_i)$ is an isomorphism.
Matrix-stable functors.
-----------------------
\[defi:stability\] Let ${\mathfrak{C}}\subset{\mathfrak{Ass}}$ be a subcategory of the category of rings, $S:{\mathfrak{C}}\to {\mathfrak{C}}$ a functor, and $\gamma:1_{\mathfrak{C}}\to S$ a natural transformation. If ${\mathfrak{D}}$ is any category, $F:{\mathfrak{C}}\to{\mathfrak{D}}$ a functor and $A\in{\mathfrak{C}}$, then we say that $F$ is [*stable on $A$*]{} with respect to $(S,\gamma)$ (or $S$-stable on $A$, for short) if the map $F(\gamma_A):F(A)\to F(S(A))$ is an isomorphism. We say that $F$ is [*$S$-stable*]{} if it is stable on every $A\in{\mathfrak{C}}$.
We showed in Section \[sec:basick01\] that $K_j$ is $M_n$ and even $M_\infty$-stable on unital rings; in both cases, the natural transformation of the definition above is $r\mapsto re_{11}$.
\[exe:matrix\] Let $F:{\mathfrak{Ass}}\to {\mathfrak{Ab}}$ be a functor and $A$ a ring. Prove:
[i)]{} The following are equivalent:
- For all $n,p\in {\mathbb{N}}$, $F$ is $M_p$-stable on $M_nA$.
- For all $n\in {\mathbb{N}}$, $F$ is $M_2$-stable on $M_nA$.
In particular, an $M_2$-stable functor is $M_n$-stable, for all $n$.
[ii)]{} If $F$ is $M_\infty$-stable on both $A$ and $M_nA$, then $F$ is $M_n$-stable on $A$. In particular, if $F$ is $M_\infty$-stable, then it is $M_n$-stable for all $n$.
\[lem:i\_0=i\_1\] Let $F:{\mathfrak{Ass}}\to {\mathfrak{D}}$ be a functor, and $A\in{\mathfrak{Ass}}$. Assume $F$ is $M_2$-stable on both $A$ and $M_2A$. Then the inclusions $\iota_0,\iota_1:A\to M_2A$ $$\iota_0(a)=ae_{11},\qquad \iota_1(a)=ae_{22}$$ induce the same isomorphism $FA\to FM_2A$.
Consider the composites $j_0=\iota_0M_2\circ\iota_0$ and $j_1=\iota_0M_2\circ\iota_1$, and the matrices $$J_2=\left[\begin{array}{cccc} 0&1&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 0&0&0&1\end{array}\right],\quad
J_3=\left[\begin{array}{cccc} 0&0&1&0\\ 1&0&0&0\\ 0&1&0&0\\ 0&0&0&1 \end{array}\right]\in \GL_4{\mathbb{Z}}.$$ Conjugation by $J_i$ induces an automorphism $\sigma_i$ of $M_4A=M_2M_2A$ of order $i$ such that $$\sigma_ij_0=j_1\qquad (i=2,3).$$ Since $F(j_0)$ is an isomorphism, and the orders of $\sigma_2$ and $\sigma_3$ are relatively prime, it follows that $F(\sigma_2)=F(\sigma_3)=1_{F(M_4A)}$ and hence that $F(j_0)=F(j_1)$ and $F(\iota_0)=F(\iota_1)$.
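The properties of $J_2$ and $J_3$ used in this proof are easy to confirm by machine; the snippet below (ours) checks their orders and that conjugation by either carries $e_{11}$, the image of $a$ under $j_0$, to $e_{22}$, its image under $j_1$:

```python
import numpy as np

J2 = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
J3 = np.array([[0, 0, 1, 0], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
I4 = np.eye(4, dtype=int)

e11 = np.zeros((4, 4), dtype=int); e11[0, 0] = 1  # j0 sends a to a·e11 in M_4
e22 = np.zeros((4, 4), dtype=int); e22[1, 1] = 1  # j1 sends a to a·e22 in M_4
```

Since $J_2$ and $J_3$ are permutation matrices, their inverses are their transposes, which is what the conjugation checks below use.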
\[exe:add\] Let $F$ and $A$ be as in Lemma \[lem:i\_0=i\_1\]. Assume in addition that ${\mathfrak{D}}$ and $F$ are additive. Consider the map $${\mathrm{diag}}:A\times A\to M_2A,\ \ {\mathrm{diag}}(a,b)=\left[\begin{array}{cc}a&0\\ 0&b\end{array}\right].$$ Prove that the composite $$F(A)\oplus F(A)=F(A\times A)\overset{F({\mathrm{diag}})}\longrightarrow F(M_2A)\overset{F(\iota_0)^{-1}}\longrightarrow F(A)$$ is the codiagonal map (i.e. it restricts to the identity on each copy of $F(A)$).
\[prop:vw\] Let $F$ and $A$ be as in Lemma \[lem:i\_0=i\_1\], $A\subset B$ an overring, and $V,W\in B$ elements such that $$WA, AV\subset A,\ \ aVWa'=aa'\ \ (a,a'\in A).$$ Then $$\phi^{V,W}:A\to A,\quad a \mapsto WaV$$ is a ring homomorphism, and $$F(\phi^{V,W}) = 1_{F(A)}.$$
We may assume that $B$ is unital. Consider the elements $V\oplus 1$ and $W\oplus 1\in M_2B$. The hypotheses guarantee that both $\phi:=\phi^{V,W}$ and $\phi':=\phi^{V \oplus 1,W \oplus 1}:M_2A\to M_2A$ are well-defined ring homomorphisms. Moreover, $\phi'\iota_1=\iota_1$ and $\phi'\iota_0=\iota_0\phi$. It follows that $F(\phi')$ and $F(\phi)$ are the identity maps, by Lemma \[lem:i\_0=i\_1\].
\[exe:finrank\]
[i)]{} Let $R$ be a unital ring and $L$ a free, finitely generated $R$-module of rank $n$. A choice of basis ${\mathfrak{B}}$ of $L$ gives an isomorphism $\phi=\phi_{\mathfrak{B}}:M_nR\to {\mathrm{End}}_RL$. Use \[prop:vw\] to show that $K_j(\phi)$ is independent of the choice of ${\mathfrak{B}}$ $(j=0,1)$.
[ii)]{} Assume $R$ is a field. If $e\in {\mathrm{End}}_RL$ is idempotent, then $\iota_e:R\to{\mathrm{End}}_RL$, $x\mapsto xe$ is a ring monomorphism. Show that if $e\in {\mathrm{End}}_RL$ is of rank $1$, then $K_j(\iota_e)=K_j(\phi\iota)$. In particular, $K_j(\iota_e)$ is independent of the choice of the rank-one idempotent $e$.
[iii)]{} Let $H$ and ${{\mathcal{F}}}$ be as in Example \[exa:rsubi\]. If $V\subset W\subset H$ are finite dimensional subspaces and $U=V^{\perp}\cap W$ then the decomposition $W=V\oplus U$ induces an inclusion ${\mathrm{End}}_{\mathbb{C}}(V)\subset {\mathrm{End}}_{\mathbb{C}}(W)$. Show that $${{\mathcal{F}}}=\bigcup_{\dim V<\infty}{\mathrm{End}}_{\mathbb{C}}(V)$$
[iv)]{} Prove that if $e\in{{\mathcal{F}}}$ is any self-adjoint, rank-one idempotent, then the inclusion ${\mathbb{C}}\to{{\mathcal{F}}}$, $x\mapsto xe$, induces an isomorphism $K_j({\mathbb{C}}){\overset{\cong}{\to}}K_j({{\mathcal{F}}})$. Show moreover that this isomorphism is independent of the choice of $e$.
Sum rings and infinite sum rings.
---------------------------------
Recall from [@wa] that a [*sum ring*]{} is a unital ring $R$ together with elements $\alpha_i,\beta_i$, $i=0,1$ such that the following identities hold $$\begin{gathered}
\alpha_0\beta_0=\alpha_1\beta_1=1\nonumber\\
\beta_0\alpha_0+\beta_1\alpha_1=1\label{sumring}\end{gathered}$$ If $R$ is a sum ring, then $$\begin{gathered}
\label{boxplus}
\boxplus:R\times R\to R,\\
(a,b)\mapsto a\boxplus b=\beta_0a\alpha_0+\beta_1b\alpha_1\nonumber\end{gathered}$$ is a unital ring homomorphism. An [*infinite sum ring*]{} is a sum ring $R$ together with a unit preserving ring homomorphism $\infty:R\to R$, $a\mapsto a^\infty$ such that $$\label{ainfi}
a\boxplus a^\infty=a^\infty\qquad (a\in R).$$
\[prop:sumring\] Let ${\mathfrak{D}}$ be an additive category, $F:{\mathfrak{Ass}}\to{\mathfrak{D}}$ a functor, and $R$ a sum ring. Assume that the sum of projection maps $\gamma=F\pi_0+F\pi_1:F(R\times R)\to FR\oplus FR$ is an isomorphism, and that $F$ is $M_2$-stable on both $R$ and $M_2R$. Then the composite $$\xymatrix{F(R)\oplus F(R)\ar[r]^(0.6){\gamma^{-1}} &F(R\times R)\ar[rr]^{F(\boxplus)}&&F(R)}$$ is the codiagonal map; that is, it restricts to the identity on each copy of $F(R)$. If moreover $R$ is an infinite sum ring, then $F(R)=0$.
Let $j_0,j_1:R\to R\times R$, $j_0(x)=(x,0)$, $j_1(x)=(0,x)$. Note that $\gamma^{-1}=Fj_0+Fj_1$. Because $F$ is $M_2$-stable on both $R$ and $M_2R$, $F(\boxplus)F(j_i)=1_{F(R)}$, by Proposition \[prop:vw\]. Thus $F(\boxplus)\circ\gamma^{-1}$ is the codiagonal map, as claimed. It follows that if $\alpha,\beta:R\to R$ are homomorphisms, then $F\alpha+F\beta=F(\boxplus(\alpha,\beta))$. In particular, if $R$ is an infinite sum ring, then $$F(\infty)+1_{F(R)}=F(\infty)+F(1_R)=F(\boxplus(\infty,1_R))=F(\infty).$$ Thus $1_{F(R)}=0$, whence $F(R)=0$.
Let $A$ be a ring. Write $\Gamma A$ for the ring of all ${\mathbb{N}}\times {\mathbb{N}}$ matrices $a=(a_{i,j})_{i,j\ge 1}$ which satisfy the following two conditions:
- The set $\{a_{ij}, i,j \in {\mathbb{N}}\}$ is finite.
- There exists a natural number $N \in {\mathbb{N}}$ such that each row and each column has at most $N$ nonzero entries.
It is an exercise to show that $\Gamma A$ is indeed a ring and that $M_\infty A\subset\Gamma A$ is an ideal. The ring $\Gamma A$ is called (Karoubi’s) [*cone ring*]{}; the quotient $\Sigma A:=\Gamma A/M_\infty A$ is the [*suspension*]{} of $A$. A useful fact about $\Gamma$ and $\Sigma$ is that the well-known isomorphism $M_\infty{\mathbb{Z}}\otimes A\cong M_\infty A$ extends to $\Gamma$, so that there are isomorphisms (see [@biva 4.7.1]) $$\label{Gammatenso}
\Gamma {\mathbb{Z}}\otimes A{\overset{\cong}{\to}}\Gamma A \text{ and } \Sigma{\mathbb{Z}}\otimes A{\overset{\cong}{\to}}\Sigma A.$$
Let $R$ be a unital ring. One checks that the following elements of $\Gamma R$ satisfy the identities : $$\alpha_0 =\sum_{i=1}^\infty e_{i,2i}, \quad \beta_0 =\sum_{i=1}^\infty e_{2i,i}, \quad
\alpha_1 =\sum_{i=1}^\infty e_{i,2i-1}, \quad \mbox{and} \quad \beta_1 =\sum_{i=1}^\infty e_{2i-1,i}.$$ Let $a\in\Gamma R$. Because the map ${\mathbb{N}}\times{\mathbb{N}}\to {\mathbb{N}}$, $(k,i)\mapsto 2^{k+1}i-2^k+1=2^k(2i-1)+1$, is injective, the following assignment gives a well-defined ${\mathbb{N}}\times{\mathbb{N}}$ matrix $$\label{finfty}
\phi^\infty(a) = \sum_{k=0}^\infty \beta_1^k \beta_0 a \alpha_0 \alpha_1^k=\sum_{k,i,j}e_{2^{k+1}i-2^k+1,\,2^{k+1}j-2^k+1}\otimes a_{i,j}.$$ One checks that $\alpha_1\beta_0=\alpha_0\beta_1=0$ and $\alpha_0\alpha_1^i\beta_1^j\beta_0=\delta_{ij}$. It follows from this that $\phi^\infty$ is a ring endomorphism of $\Gamma R$; it is straightforward that the identity $a\boxplus\phi^\infty(a)=\phi^\infty(a)$ is satisfied too. In particular $K_n\Gamma R=0$ for $n=0,1$.
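As a sanity check on the indexing (the snippet is ours): on indices, $\beta_0$ acts by $n\mapsto 2n$ and $\beta_1$ by $n\mapsto 2n-1$, so $\beta_1^k\beta_0$ sends $i$ to $2^k(2i-1)+1$, and distinct pairs $(k,i)$ give distinct indices by uniqueness of the dyadic factorization of $n-1$:

```python
def index_map(k, i):
    """Index to which β_1^k β_0 sends i: first i ↦ 2i, then k times n ↦ 2n − 1."""
    n = 2 * i
    for _ in range(k):
        n = 2 * n - 1
    return n
```

Disjointness of the images for distinct $k$ is what makes the infinite sum defining $\phi^\infty(a)$ a well-defined matrix.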
Let $A$ be a ring. If $m=(m_{i,j})$ is an ${\mathbb{N}}\times{\mathbb{N}}$-matrix with coefficients in $A$, and $x\in M_\infty \tilde{A}$, then both $m\cdot x$ and $x\cdot m$ are well-defined ${\mathbb{N}}\times {\mathbb{N}}$-matrices. Put $$\label{waggamma}
\Gamma^\ell A:=\{m\in M_{{\mathbb{N}}\times{\mathbb{N}}}A:m\cdot M_\infty \tilde{A}\subset M_\infty A\supset M_\infty \tilde{A}\cdot m\}.$$ Prove:
[i)]{} $\Gamma^\ell A$ consists of those matrices in $M_{{\mathbb{N}}\times{\mathbb{N}}}A$ having finitely many nonzero elements in each row and column. In particular, $\Gamma^\ell A\supset\Gamma A$.
[ii)]{} The usual matrix sum and product operations make $\Gamma^\ell A$ into a ring.
[iii)]{} If $R$ is a unital ring then $\Gamma^\ell R$ is an infinite sum ring.
The ring $\Gamma^\ell A$ is the cone ring considered by Wagoner in [@wa], where it was denoted $\ell A$. The notion of infinite sum ring was introduced in [*loc. cit.*]{}, where it was also shown that if $R$ is unital, then $\Gamma^\ell R$ is an example of such a ring.
Let $F:{\mathfrak{Ass}}\to{\mathfrak{Ab}}$ be a functor. Assume $F$ is both additive and $M_2$-stable for unital rings and for rings of the form $M_\infty R$, with $R$ unital. Show that if $R$ is a unital ring, then the direct sum operation $\oplus$ induces the group operation in $F(M_\infty R)$, and that the same is true of any of the other direct sum operations of \[exe:othersums\].
Let ${\mathcal{B}}$ and $H$ be as in Example \[exa:rsubi\]. Choose a Hilbert basis $\{e_i\}_{i\ge 1}$ of $H$, and regard ${\mathcal{B}}$ as a ring of ${\mathbb{N}}\times{\mathbb{N}}$ matrices. With these identifications, show that ${\mathcal{B}}\supset\Gamma{\mathbb{C}}$. Deduce from this that ${\mathcal{B}}$ is a sum ring. Further show that $\phi^\infty$ extends to ${\mathcal{B}}$, so that the latter is in fact an infinite sum ring.
The excision sequence for $K_0$ and $K_1$.
------------------------------------------
A reason for considering $K_0$ and $K_1$ as part of the same theory is that they are connected by a long exact sequence, as shown in Theorem \[thm:exci01\] below. We need some notation. Let $$\label{abc}
0\to A\to B\to C\to 0$$ be an exact sequence of rings. If $\hat{g}\in M_nB$ maps to an invertible matrix $g\in \GL_nC$ and $\hat{g}^*$ maps to $g^{-1}$, then $$\begin{aligned}
\label{elh}
h=h(\hat{g},\hat{g}^*):=&\left[\begin{matrix}2\hat{g}-\hat{g}\hat{g}^*\hat{g}&\hat{g}\hat{g}^*-1\\1-\hat{g}^*\hat{g}&\hat{g}^*\end{matrix}\right]\\
=& \left[\begin{array}{cc}1&\hat{g}\\ 0&1\end{array}\right]\cdot \left[\begin{array}{cc}1&0\\
-\hat{g}^*&1\end{array}\right]\cdot \left[\begin{array}{cc}1&\hat{g}\\ 0&1\end{array}\right]\cdot
\left[\begin{array}{cc}0&-1\\ 1&0\end{array}\right]\in \mathrm{E}_{2n}(\tilde{B})\subset\GL_{2n}(\tilde{B})\nonumber\end{aligned}$$ Note that $h$ maps to ${\mathrm{diag}}(g,g^{-1})\in\GL_{2n}(C)$. Thus $hp_nh^{-1}$ maps to $p_n$, whence $hp_nh^{-1}-p_n\in M_{2n}A$ and $hp_nh^{-1}\in M_{2n}\tilde{A}$. Put $$\label{map:partial}
\partial(\hat{g},\hat{g}^*):=[hp_nh^{-1}]-[p_n]\in\ker(K_0(\tilde{A})\to K_0{\mathbb{Z}})=K_0A$$
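The factorization of $h(\hat g,\hat g^*)$ can be verified mechanically. The snippet below (ours) checks the identity for scalar lifts; note that $\hat g$ and $\hat g^*$ need not be inverse to each other for the identity to hold:

```python
import numpy as np

def h_closed(g, gs):
    """h(ĝ, ĝ*) in closed form."""
    return np.array([[2 * g - g * gs * g, g * gs - 1],
                     [1 - gs * g, gs]])

def h_factored(g, gs):
    """The same matrix as a product of elementary blocks (1×1 blocks here)."""
    E1 = np.array([[1, g], [0, 1]])
    E2 = np.array([[1, 0], [-gs, 1]])
    R = np.array([[0, -1], [1, 0]])
    return E1 @ E2 @ E1 @ R
```

Since both sides are polynomial in $\hat g$ and $\hat g^*$, checking integer samples is a reasonable sanity test of the displayed identity.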
\[thm:exci01\] If $0\to A\to B\to C\to 0$ is an exact sequence of rings, then there is a long exact sequence $$\xymatrix{K_1A\ar[r] &K_1B\ar[r]& K_1C\ar[d]^{\partial}\\K_0C&K_0B\ar[l]&K_0A\ar[l]}$$ The map $\partial$ sends the class of an element $g\in\GL_nC$ to the element $\partial(\hat{g},\hat{g}^*)$ defined above; in particular the latter depends only on the $K_1$-class of $g$.
(Sketch) The exactness of the top row of the sequence of the theorem is straightforward. Putting together [@rosen Thms. 1.5.5, 1.5.9 and 2.5.4] we obtain the theorem for those sequences in which $B\to C$ is a unital homomorphism. It follows that we have a map of exact sequences $$\xymatrix{K_1\tilde{B}\ar[r]\ar[d]&K_1\tilde{C}\ar[d]\ar[r]&
K_0A\ar[d]\ar[r]&K_0\tilde{B}\ar[d]\ar[r]&K_0\tilde{C}\ar[d]\\
K_1{\mathbb{Z}}\ar@{=}[r]&K_1{\mathbb{Z}}\ar[r]&0\ar[r]&K_0{\mathbb{Z}}\ar@{=}[r]&K_0{\mathbb{Z}}}$$ Taking kernels of the vertical maps, we obtain an exact sequence $$\xymatrix{K_1B\ar[r]&K_1C\ar[r]& K_0A\ar[r]&K_0B\ar[r]&K_0C}$$ It remains to show that the map $K_1C\to K_0A$ of this sequence is given by the formula of the theorem. This is done by tracking down the maps and identifications of the proofs of [@rosen Thms. 1.5.5, 1.5.9 and 2.5.4] (see also [@mil §3,§4]), and computing the idempotent matrices to which the projective modules appearing there correspond, taking into account that $B\to C$ sends the matrix $h\in\GL_{2n}\tilde{B}$ constructed above to the diagonal matrix ${\mathrm{diag}}(g,g^{-1})\in
\GL_{2n}C$.
In [@rosen 2.5.4], a sequence similar to that of the theorem above is obtained, in which $K_1A$ is replaced by a relative $K_1$-group $K_1(B:A)$, depending on both $A$ and $B$. For example if $B\to B/A$ is a split surjection, then ([@rosen Exer. 2.5.19]) $$K_1(B:A)=\ker(K_1B\to K_1(B/A))$$ The groups $K_1(B:A)$ and $K_1A$ are not isomorphic in general (see Example \[exa:swanex\] below); however their images in $K_1B$ coincide. We point out also that the theorem above can be deduced directly from Milnor’s Mayer-Vietoris sequence for a Milnor square ([@mil §4]).
The following corollary is immediate from the theorem.
\[coro:k0split\] Assume the sequence $0\to A\to B\to C\to 0$ is split by a ring homomorphism $C\to B$. Then $K_0A\to K_0B$ is injective, and induces an isomorphism $$K_0A=\ker(K_0B\to K_0C)$$ Because of this we say that $K_0$ is [*split exact*]{}.
\[exa:swanex\](Swan’s example [@swan]) We shall give an example which shows that $K_1$ is not split exact. Let $k$ be a field with at least $3$ elements (i.e. $k\ne\mathbb{F}_2$). Consider the ring of upper triangular matrices $$T:=\left[\begin{array}{cc}k&k\\0&k\end{array}\right]$$ with coefficients in $k$. The set $I$ of strictly upper triangular matrices forms an ideal of $T$, isomorphic, as a ring, to the ideal $k\epsilon{\triangleleft}k[\epsilon]$, via the identification $\epsilon=e_{12}$. By Examples \[exa:dualn2\] and \[exa:square-zero\], $\ker(K_1(k[\epsilon])\to K_1(k))=K_1(k\epsilon)\cong k\epsilon$, the additive group underlying $k$. If $K_1$ were split exact, then also $$\label{relak1}
K_1(T:I)=\ker(K_1T\to K_1(k\times k))$$ should be isomorphic to $k\epsilon$. However we shall see presently that $K_1(T:I)=0$. Note that $T\to k\times k$ is split by the natural inclusion ${\mathrm{diag}}:k\times k\to T$. Thus any element of $K_1(T:I)$ is the class of an element in $\GL(k\epsilon)$, and by \[exa:dualn2\] it is congruent to the class of an element in $\GL_1(k\epsilon)=1+k\epsilon$. We shall show that if $\lambda\in k$, then $1+\lambda\epsilon\in[\GL_1T,\GL_1T]$. Because we are assuming that $k\neq\mathbb{F}_2$, there exists $\mu\in k-\{0,1\}$; one checks that $$1+\lambda \epsilon=\left[\begin{matrix} 1&\lambda\\ 0&1\end{matrix}\right]=\left[\left
[\begin{matrix}\mu&0\\ 0&1\end{matrix}\right],
\left[\begin{matrix}1&\frac{\lambda}{\mu-1}\\ 0&1\end{matrix}\right]\right]\in [\GL_1T,\GL_1T].$$
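The commutator identity closing Swan's example can be checked over $\mathbb{Q}$. In the snippet below (ours), `mul` is ordinary $2\times 2$ matrix multiplication, and $\lambda$, $\mu$ are sample values with $\mu\notin\{0,1\}$:

```python
from fractions import Fraction as F

def mul(x, y):
    """Ordinary 2×2 matrix product over the rationals."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

lam, mu = F(5), F(2)
g = [[mu, F(0)], [F(0), F(1)]]                    # diag(μ, 1)
h = [[F(1), lam / (mu - 1)], [F(0), F(1)]]
g_inv = [[1 / mu, F(0)], [F(0), F(1)]]
h_inv = [[F(1), -lam / (mu - 1)], [F(0), F(1)]]

comm = mul(mul(g, h), mul(g_inv, h_inv))          # [g, h] = g h g⁻¹ h⁻¹
```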
Let $R$ be a unital ring. Applying the theorem above to the cone sequence $$\label{coneseq}
0\to M_\infty R\to \Gamma R\to \Sigma R\to 0$$ we obtain an isomorphism $$\label{decal}
K_1\Sigma R=K_0R.$$
\[exe:k0valid\] Use Corollary \[coro:k0split\] to prove that all the properties of $K_0$ stated in \[sec:basick01\] for unital rings, remain valid for all rings. Further, show that $K_0(\Gamma A)=0$ for all rings $A$, and thus that for any ring $A$, the boundary map gives a surjection $$K_1\Sigma A{\twoheadrightarrow}K_0 A.$$
Negative $K$-theory.
--------------------
\[defi:kneg\] Let $A$ be a ring and $n\ge 0$. Put $$K_{-n}A:=K_0\Sigma^nA.$$
\[prop:kneg\]
[i)]{} For $n\le 0$, the functors $K_n:{\mathfrak{Ass}}\to {\mathfrak{Ab}}$ are additive, nilinvariant and $M_\infty$-stable.
[ii)]{} The exact sequence of \[thm:exci01\] extends to negative $K$-theory. Thus if $$0\to A\to B\to C\to 0$$ is a short exact sequence of rings, then for $n\le 0$ we have a long exact sequence $$\xymatrix{K_nA\ar[r] &K_nB\ar[r]& K_nC\ar[d]^{\partial}\\K_{n-1}C&K_{n-1}B\ar[l]&K_{n-1}A\ar[l]}$$
[i)]{} By , we have $\Sigma A=\Sigma{\mathbb{Z}}\otimes A$. Thus $\Sigma$ commutes with finite products and with $M_\infty$, and sends nilpotent rings to nilpotent rings. Moreover, $\Sigma$ is exact, because both $M_\infty$ and $\Gamma$ are. Hence the general case of i) follows from the case $n=0$, which is proved in Section \[sec:basick01\].
[ii)]{} Consider the sequence $$0\to A\to \tilde{B}\to\tilde{C}\to 0$$ Applying $\Sigma$, we obtain $$0\to \Sigma A\to \Sigma \tilde{B}\to\Sigma \tilde{C}\to 0$$ By the isomorphism $K_1\Sigma R=K_0R$ established above, if $D$ is any ring, then $K_0\tilde{D}=K_1\Sigma\tilde{D}$. Thus by \[thm:exci01\] and \[coro:k0split\], we get an exact sequence $$\xymatrix{K_0A\ar[r] &K_0B\oplus K_0{\mathbb{Z}}\ar[r]& K_0C\oplus K_0{\mathbb{Z}}\ar[d]^{\partial}\\K_{-1}C\oplus K_{-1}{\mathbb{Z}}&K_{-1}B\oplus K_{-1}{\mathbb{Z}}\ar[l]&K_{-1}A\ar[l]}$$ Splitting off the $K_j{\mathbb{Z}}$ summands, we obtain $$\xymatrix{K_0A\ar[r] &K_0B\ar[r]& K_0C\ar[d]^{\partial}\\
K_{-1}C &K_{-1}B\ar[l]&K_{-1}A\ar[l]}$$ This proves the case $n=0$ of the proposition. The general case follows from this.
\[k0j\] Let ${\mathcal{B}}$ be as in Example \[exa:rsubi\] and $I{\triangleleft}{\mathcal{B}}$ a proper ideal. It is classical that ${{\mathcal{F}}}\subset I\subset {\mathcal{K}}$ for any such ideal. We shall show that the map $$\label{map:k0fk0i}
K_0{\mathcal{F}}\to K_0I$$ is an isomorphism; thus $K_0I=K_0{\mathcal{F}}={\mathbb{Z}}$, by Exercise \[exe:finrank\]. As in \[exa:rsubi\] we consider the local ring $R_I={\mathbb{C}}\oplus I/{\mathcal{F}}$. We have a commutative diagram with exact rows and split exact columns $$\xymatrix{0\ar[r]&{{\mathcal{F}}}\ar@{=}[d]\ar[r]&I\ar[d]\ar[r]&I/{{\mathcal{F}}}\ar[r]\ar[d]&0\\
0\ar[r]&{{\mathcal{F}}}\ar[r]&{\mathbb{C}}\oplus I\ar[d]\ar[r]&R_I\ar[r]\ar[d]&0\\
&&{\mathbb{C}}\ar@{=}[r]&{\mathbb{C}}&}$$ By \[exa:rsubi\] and split exactness, $K_0(I/{\mathcal{F}})=0$. Thus the map \eqref{map:k0fk0i} is onto. From the diagram above, $K_1(I/{\mathcal{F}})\to K_0{\mathcal{F}}$ factors through $K_1(R_I)\to K_0{\mathcal{F}}$. But it follows from the discussion of Example \[exa:rsubi\] that the map $K_1({\mathbb{C}}\oplus I)\to K_1(R_I)$ is onto, whence $K_1(R_I)\to K_0{\mathcal{F}}$, and thus also $K_1(I/{\mathcal{F}})\to K_0{\mathcal{F}}$, are zero. Thus \eqref{map:k0fk0i} is an isomorphism.
\[k-1j\] A theorem of Karoubi asserts that $K_{-1}({\mathcal{K}})=0$ [@karcomp]. Use this and excision to show that $K_{-1}(I)=0$ for any operator ideal $I$.
\[exe:colineg\] Prove that if $n<0$, then the functor $K_n$ commutes with filtering colimits.
\[rem:basskneg\] The definition of negative $K$-theory used here is taken from Karoubi-Villamayor’s paper [@kv], where cohomological notation is used. Thus what we call $K_nA$ here is denoted $K^{-n}A$ in [*loc. cit.*]{} $(n\le 0)$. There is also another definition, due to Bass [@bass]. A proof that the two definitions agree is given in [@kv 7.9].
Topological $K$-theory {#sec:topk}
======================
We saw in the previous section (Example \[exa:swanex\]) that $K_1$ is not split exact. It follows that there is no way of defining higher $K$-groups such that the long exact sequence of Theorem \[thm:exci01\] can be extended to higher $K$-theory. This motivates the question of whether the problem could be fixed by replacing $K_1$ with some other functor. This is successfully done in the topological $K$-theory of Banach algebras.
Topological $K$-theory of Banach algebras.
------------------------------------------
A [*Banach*]{} (${\mathbb{C}}$-) algebra is a ${\mathbb{C}}$-algebra $A$ together with a norm $||\ \ ||$ which makes it into a Banach space and satisfies $||xy||\le ||x||\cdot ||y||$ for all $x,y\in A$. If $A$ is a Banach algebra then its ${\mathbb{C}}$-unitalization is the unital Banach algebra $$\tilde{A}_{\mathbb{C}}=A\oplus{\mathbb{C}}$$ equipped with the unitalization product and the norm $||a+\lambda||:=||a||+|\lambda|$. An algebra homomorphism is a morphism of Banach algebras if it is continuous. If $X$ is a compact Hausdorff space and $V$ is any topological vector space, we write ${\mathcal{C}}(X,V)$ for the topological vector space of all continuous maps $X\to V$. If $A$ is a Banach algebra, then ${\mathcal{C}}(X,A)$ is again a Banach algebra with norm $||f||_\infty:=\sup_x||f(x)||$. If $X$ is locally compact, $X^+$ its one-point compactification, and $V$ a topological vector space, we write $$V(X)={\mathcal{C}}_0(X,V)=\{f\in{\mathcal{C}}(X^+,V):f(\infty)=0\}.$$ Note that if $X$ is compact, then $V(X)={\mathcal{C}}(X,V)$. If $A$ is a Banach algebra then $A(X)$ is again a Banach algebra, as it is the kernel of the homomorphism ${\mathcal{C}}(X^+,A)\to A$, $f\mapsto f(\infty)$. For example, $A[0,1]$ is the algebra of continuous functions $[0,1]\to A$, and $A(0,1]$ and $A(0,1)$ are identified with the ideals of $A[0,1]$ consisting of those functions which vanish respectively at $0$ and at both endpoints. Two homomorphisms $f_0,f_1:A\to B$ of Banach algebras are called [*homotopic*]{} if there exists a homomorphism $H:A\to B[0,1]$ such that the following diagram commutes. $$\xymatrix{&B[0,1]\ar[d]^{({{\rm ev}}_0,{{\rm ev}}_1)}\\ A\ar[ur]^H\ar[r]_{(f_0,f_1)}&B\times B}$$ A functor $G$ from Banach algebras to a category ${\mathfrak{D}}$ is called [*homotopy invariant*]{} if it maps homotopic maps to equal maps.
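As a sanity check on these definitions, the following sketch (plain Python; modeling $A={\mathcal{C}}[0,1]$ with the sup norm by sampled functions, so the sampling grid and the random test elements are assumptions of the illustration) verifies numerically that the unitalization norm $||a+\lambda||=||a||+|\lambda|$ is submultiplicative.

```python
import random

# Model A = C[0,1] with the sup norm by sampling on a grid; an element
# of the unitalization A~ is a pair (f, lam) representing f + lam, with
# product (f, lam)(g, mu) = (f*g + lam*g + mu*f, lam*mu)
# and norm ||(f, lam)|| = ||f||_inf + |lam|.

GRID = 101  # number of sample points (illustration only)

def mul(x, y):
    (f, lam), (g, mu) = x, y
    h = [f[i] * g[i] + lam * g[i] + mu * f[i] for i in range(GRID)]
    return (h, lam * mu)

def norm(x):
    f, lam = x
    return max(abs(v) for v in f) + abs(lam)

random.seed(0)
def rand_elt():
    return ([random.uniform(-1, 1) for _ in range(GRID)],
            random.uniform(-1, 1))

for _ in range(1000):
    x, y = rand_elt(), rand_elt()
    assert norm(mul(x, y)) <= norm(x) * norm(y) + 1e-12
```

The inequality holds because $||fg+\lambda g+\mu f||+|\lambda\mu|\le(||f||+|\lambda|)(||g||+|\mu|)$ after expanding the right-hand side.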
Prove that $G$ is homotopy invariant if and only if for every Banach algebra $A$ the map $G(A)\to G(A[0,1])$ induced by the natural inclusion $A\subset A[0,1]$ is an isomorphism.
\[htpyk0\] ([@rosen 1.6.11]) The functor $K_0:((\mathrm{Banach\hspace{1.5pt} Algebras}))\to {\mathfrak{Ab}}$ is homotopy invariant.
\[nhtpyk1\] [*$K_1$ is not homotopy invariant*]{}. The algebra $A:={\mathbb{C}}[\epsilon]$ is a Banach algebra with norm $||a+b\epsilon||=|a|+|b|$. Both the inclusion $\iota:{\mathbb{C}}\to A$ and the projection $\pi:A\to {\mathbb{C}}$ are homomorphisms of Banach algebras; they satisfy $\pi\iota=1$. Moreover the map $H:A\to A[0,1]$, $H(a+b\epsilon)(t)=a+tb\epsilon$ is also a Banach algebra homomorphism, and satisfies ${{{\rm ev}}}_0H=\iota\pi$, ${{{\rm ev}}}_1H=1$. Thus any homotopy invariant functor $G$ sends $\iota$ and $\pi$ to inverse homomorphisms; since $K_1$ does not do so by \[exa:dualn2\], it is not homotopy invariant.
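The homotopy $H$ above can be made concrete. In the sketch below (plain Python; the pair representation $(a,b)\leftrightarrow a+b\epsilon$ is a convention of the illustration), dual numbers are multiplied using $\epsilon^2=0$, and one checks that each evaluation $H(-)(t)$ is multiplicative, with $H(-)(0)=\iota\pi$ and $H(-)(1)={\mathrm{id}}$.

```python
# Dual numbers C[eps]: (a, b) represents a + b*eps, with eps^2 = 0.
def dmul(x, y):
    (a, b), (c, d) = x, y
    return (a * c, a * d + b * c)

def H(x, t):
    """The homotopy H(a + b*eps)(t) = a + t*b*eps."""
    a, b = x
    return (a, t * b)

vals = [(1.0, 2.0), (-3.0, 0.5), (2.0, -1.0)]
for x in vals:
    for y in vals:
        for t in (0.0, 0.3, 1.0):
            # each evaluation H(-)(t) is multiplicative (and clearly additive)
            assert H(dmul(x, y), t) == dmul(H(x, t), H(y, t))
for x in vals:
    assert H(x, 0.0) == (x[0], 0.0)   # ev_0 H = iota . pi
    assert H(x, 1.0) == x             # ev_1 H = id
```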
Next we consider a variant of $K_1$ which is homotopy invariant.
\[defi:k1top\] Let $R$ be a unital Banach algebra. Put $$\GL R_0:=\{g\in \GL R:\exists h\in \GL(R[0,1]): h(0)=1,h(1)=g\}.$$ Note $\GL(R)_0{\triangleleft}\GL R$. The [topological]{} $K_1$ of $R$ is $$K_1^{{\rm top}}R=\GL R /\GL(R)_0.$$
Show that if we regard $\GL R={\mathop{\mathrm{colim}}}_n\GL_n R$ with the weak topology inherited from the topology of $R$, then $K_1^{{\rm top}}R=\pi_0(\GL R)={\mathop{\mathrm{colim}}}_n\pi_0(\GL_n R)$. Then show that $K_1^{{\rm top}}$ is homotopy invariant.
Note that if $R$ is a unital Banach algebra, $a\in R$ and $i\ne j$, then $1+tae_{i,j}\in E(R[0,1])$ is a path connecting $1$ to the elementary matrix $1+ae_{i,j}$. Thus $ER=[\GL(R),\GL(R)]\subset \GL(R)_0$, whence $\GL(R)_0$ is normal, and we have a surjection $$\label{k1ontok1top}
K_1R{\twoheadrightarrow}K_1^{{\rm top}}R.$$ In particular, $K_1^{{\rm top}}R$ is an abelian group.
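The path of elementary matrices used above can be checked numerically. The sketch below (numpy; the size and sample values are assumptions of the illustration) verifies that $t\mapsto 1+tae_{i,j}$ stays in $\GL_n$ for $i\ne j$, with explicit inverse $1-tae_{i,j}$, so each elementary generator lies in $\GL(R)_0$.

```python
import numpy as np

n, i, j, a = 4, 0, 2, 2.5  # i != j (0-based indices; illustration only)
I = np.eye(n)
E = np.zeros((n, n)); E[i, j] = 1.0  # matrix unit e_{i,j}

for t in np.linspace(0.0, 1.0, 11):
    g = I + t * a * E            # the path: g(0) = 1, g(1) = 1 + a*e_{ij}
    ginv = I - t * a * E         # inverse, since (e_{ij})^2 = 0 for i != j
    assert np.allclose(g @ ginv, I)
    assert np.isclose(np.linalg.det(g), 1.0)
```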
Because ${\mathbb{C}}$ is a field, $K_1{\mathbb{C}}={\mathbb{C}}^*$. Since on the other hand ${\mathbb{C}}^*$ is path connected, we have $K_1^{{\rm top}}{\mathbb{C}}=0$.
Note that $K_1^{{\rm top}}$ is additive. Thus we can extend $K_1^{{\rm top}}$ to nonunital Banach algebras in the usual way, i.e. $$K_1^{{\rm top}}A:=\ker(K_1^{{\rm top}}(\tilde{A}_{\mathbb{C}})\to K_1^{{\rm top}}{\mathbb{C}})=K_1^{{\rm top}}(\tilde{A}_{\mathbb{C}})$$ where the last equality holds because $K_1^{{\rm top}}{\mathbb{C}}=0$.
\[exe:k1tononuni\] Show that if $A$ is a (not necessarily unital) Banach algebra, then $$K_1^{{\rm top}} A=\GL A/\GL(A)_0.$$
\[prop:gl0onto\]([@black 3.4.4]) If $R{\twoheadrightarrow}S$ is a surjective unital homomorphism of unital Banach algebras, then $\GL (R)_0\to\GL(S)_0$ is surjective.
Let $$\label{babc}
0\to A\to B\to C\to 0$$ be an exact sequence of Banach algebras. Then $$0\to A\to \tilde{B}_{\mathbb{C}}\to \tilde{C}_{\mathbb{C}}\to 0$$ is again exact. By \eqref{k1ontok1top} and \[prop:gl0onto\], the connecting map $\partial:K_1(\tilde{C}_{\mathbb{C}})\to K_0A$ of Theorem \[thm:exci01\] sends $\ker(K_1(\tilde{C}_{\mathbb{C}})\to K_1^{{\rm top}}C)$ to zero, and thus induces a homomorphism $$\partial:K_1^{{\rm top}}C\to K_0A.$$
\[thm:excitop01\] The sequence $$\xymatrix{K^{{\rm top}}_1A\ar[r] &K^{{\rm top}}_1B\ar[r]& K_1^{{\rm top}}C\ar[d]^{\partial}\\K_0C&K_0B\ar[l]&K_0A\ar[l]}$$ is exact.
Straightforward from \[thm:exci01\] and \[prop:gl0onto\].
Consider the exact sequences $$\begin{gathered}
0\to A(0,1]\to A[0,1]\to A\to 0\\
0\to A(0,1)\to A(0,1]\to A\to 0\end{gathered}$$ Note moreover that the first of these sequences is split exact. Because $K_0$ is homotopy invariant and split exact, and because $A(0,1]$ is contractible, we get an isomorphism $$\label{decato1}
K_1^{{\rm top}}A=K_0(A(0,1))$$ Since also $K_1^{{\rm top}}$ is homotopy invariant, we put $$\label{decato2}
K_2^{{\rm top}} A=K_1^{{\rm top}}(A(0,1)).$$
If \eqref{babc} is exact, then $$0\to A(0,1)\to B(0,1)\to C(0,1)\to 0$$ is exact too.
See [@ror 10.1.2] for a proof in the $C^*$-algebra case; a similar argument works for Banach algebras.
Taking into account the lemma above, as well as \eqref{decato1} and \eqref{decato2}, we obtain the following corollary of Theorem \[thm:excitop01\].
\[excitop12\] There is an exact sequence $$\xymatrix{K^{{\rm top}}_2A\ar[r] &K^{{\rm top}}_2B\ar[r]& K^{{\rm top}}_2C\ar[d]^{\partial}\\K_1^{{\rm top}}C&K^{{\rm top}}_1B\ar[l]&K^{{\rm top}}_1A\ar[l]}$$
The sequence above can be extended further by defining inductively $$K_{n+1}^{{\rm top}}A:=K_n^{{\rm top}}(A(0,1)).$$
Bott periodicity.
-----------------
Let $R$ be a unital Banach algebra. Consider the map $\beta:\idem_n R\to \GL_n {\mathcal{C}}_0(S^1,R)$, $$\label{formubeta}
\beta(e)(z)=ze+1-e$$ This map induces a group homomorphism $K_0R\to K_1^{{\rm top}}{\mathcal{C}}_0(S^1,R)$ (see [@black 9.1]). If $A$ is any Banach algebra, we write $\beta$ for the composite $$\begin{gathered}
\label{betato}
K_0 A\to K_0(\tilde{A}_{\mathbb{C}})\overset\beta\to K_1^{{\rm top}}({\mathcal{C}}_0(S^1,\tilde{A}_{\mathbb{C}}))\\
=K_1^{{\rm top}}{\mathbb{C}}(0,1)\oplus K_1^{{\rm top}} A(0,1){\twoheadrightarrow}K_1^{{\rm top}} A(0,1)=K_2^{{\rm top}}A\end{gathered}$$ One checks that for unital $A$ this definition agrees with that given above.
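That formula \eqref{formubeta} defines a loop of invertibles can be tested numerically: for an idempotent $e$ and $|z|=1$, the inverse of $ze+1-e$ is $z^{-1}e+1-e$, because $e(1-e)=(1-e)e=0$. A numpy sketch (the particular non-self-adjoint idempotent is an assumption of the illustration):

```python
import numpy as np

# A (non-orthogonal) idempotent in M_2(C): e^2 = e.
e = np.array([[1.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(e @ e, e)
I = np.eye(2)

for theta in np.linspace(0.0, 2 * np.pi, 25):
    z = np.exp(1j * theta)
    beta = z * e + I - e               # the loop beta(e)(z) = z*e + 1 - e
    beta_inv = (1 / z) * e + I - e     # inverse, using e(1-e) = (1-e)e = 0
    assert np.allclose(beta @ beta_inv, I)

assert np.allclose(1.0 * e + I - e, I)  # basepoint: beta(e)(1) = 1
```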
\[thm:bott\](Bott periodicity) ([@black 9.2.1]) The composite map \eqref{betato} is an isomorphism.
Let \eqref{babc} be an exact sequence of Banach algebras. By \[excitop12\] we have a map $\partial: K_2^{{\rm top}}C\to K_1^{{\rm top}}A$. Composing with the Bott map, we obtain a homomorphism $$\partial\beta:K_0 C\to K_1^{{\rm top}}A.$$
\[thm:exci\_top\_round\] If \eqref{babc} is an exact sequence of Banach algebras, then the sequence $$\xymatrix{K^{{\rm top}}_1A\ar[r] &K^{{\rm top}}_1B\ar[r]& K_1^{{\rm top}}C\ar[d]^{\partial}\\K_0C\ar[u]^{\partial\beta}&K_0B\ar[l]&K_0A\ar[l]}$$ is exact.
Immediate from Theorem \[thm:excitop01\], Corollary \[excitop12\] and Theorem \[thm:bott\].
### **Sketch of Cuntz’ proof of Bott periodicity for $C^*$-algebras.** {#skecuntz}
([@cu1046 Sec. 2]) A $C^*$-algebra is a Banach algebra $A$ equipped with an additive map $*:A\to A$ such that $(a^*)^*=a$, $(\lambda a)^*=\bar{\lambda}a^*$, $(ab)^*=b^*a^*$ and $||aa^*||=||a||^2$ ($\lambda\in{\mathbb{C}}$, $a,b\in A$). The [*Toeplitz*]{} $C^*$-algebra is the free unital $C^*$-algebra ${\mathcal{T}}^{{\rm top}}$ on a generator $\alpha$ subject to $\alpha\alpha^*=1$. Since the shift $s:\ell^2({\mathbb{N}})\to\ell^2({\mathbb{N}})$, $s(e_1)=0$, $s(e_{i+1})=e_i$ satisfies $ss^*=1$, there is a homomorphism ${\mathcal{T}}^{{\rm top}}\to {\mathcal{B}}={\mathcal{B}}(\ell^2({\mathbb{N}}))$ sending $\alpha$ to $s$. It turns out that this is a monomorphism, that its image contains the ideal ${\mathcal{K}}$, and that the latter is the kernel of the $*$-homomorphism ${\mathcal{T}}^{{\rm top}}\to {\mathbb{C}}(S^1)$ which sends $\alpha$ to the identity function $S^1\to S^1$. We have a commutative diagram with exact rows and split exact columns: $$\label{diag:cuntz}
\xymatrix{0\ar[r]&{\mathcal{K}}\ar@{=}[d]\ar[r]&{\mathcal{T}}^{{\rm top}}_0\ar[r]\ar[d]&{\mathbb{C}}(0,1)\ar[r]\ar[d]&0\\
0\ar[r]&{\mathcal{K}}\ar[r]&{\mathcal{T}}^{{\rm top}}\ar[r]\ar[d]&{\mathbb{C}}(S^1)\ar[r]\ar[d]_{{{\rm ev}}_1}&0\\
&&{\mathbb{C}}\ar@{=}[r]&{\mathbb{C}}&}$$ Here we have used the identification ${\mathcal{C}}_0(S^1,{\mathbb{C}})={\mathbb{C}}(0,1)$, via the exponential map; ${\mathcal{T}}_0^{{\rm top}}$ is defined so that the middle column be exact. Write $\overset\sim\otimes=\otimes_{\min}$ for the $C^*$-algebra tensor product. If now $A$ is any $C^*$-algebra, and we apply the functor $A\overset{\sim}\otimes$ to the diagram , we obtain a commutative diagram whose columns are split exact and whose rows are still exact (by nuclearity, see [@wegge Appendix T]). $$\xymatrix{0\ar[r]&A{\overset{\sim}\otimes}{\mathcal{K}}\ar@{=}[d]\ar[r]&A{\overset{\sim}\otimes}{\mathcal{T}}^{{\rm top}}_0\ar[r]\ar[d]&A(0,1)\ar[r]\ar[d]&0\\
0\ar[r]&A{\overset{\sim}\otimes}{\mathcal{K}}\ar[r]&A{\overset{\sim}\otimes}{\mathcal{T}}^{{\rm top}}\ar[r]\ar[d]&A(S^1)\ar[r]\ar[d]&0\\
&&A\ar@{=}[r]&A&}$$ The inclusion ${\mathbb{C}}\subset M_\infty{\mathbb{C}}\subset {\mathcal{K}}={\mathcal{K}}(\ell^2({\mathbb{N}}))$, $\lambda\mapsto \lambda e_{1,1}$ induces a natural transformation $1\to {\mathcal{K}}{\overset{\sim}\otimes}-$; a functor $G$ from $C^*$-algebras to abelian groups is [*${\mathcal{K}}$-stable*]{} if it is stable with respect to this natural transformation in the sense of Definition \[defi:stability\]. We say that $G$ is [*half exact*]{} if for every exact sequence \eqref{babc}, the sequence $$GA\to GB\to GC$$ is exact.
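The shift relations used above admit an exact finite model: acting on finitely supported sequences (Python lists with implicit trailing zeros, a representation chosen for the illustration), $s$ and $s^*$ satisfy $ss^*=1$ and $s^*s=1-e_{1,1}$ on the nose, with no truncation error.

```python
# Finitely supported sequences as lists; trailing zeros are implicit.
def s(x):        # backward shift: s(e_1) = 0, s(e_{i+1}) = e_i
    return x[1:]

def s_star(x):   # adjoint, the forward shift: s*(e_i) = e_{i+1}
    return [0] + x

def trim(x):     # normalize away trailing zeros before comparing
    while x and x[-1] == 0:
        x = x[:-1]
    return x

x = [3, -1, 4, 1, 5]
assert trim(s(s_star(x))) == trim(x)             # s s* = 1, exactly
assert trim(s_star(s(x))) == trim([0] + x[1:])   # s* s = 1 - e_{11}
```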
In general, neither of the notions of split exactness and half exactness implies the other. However a functor of $C^*$-algebras which is homotopy invariant, additive and half exact is automatically split exact (see [@black §21.4]).
The following theorem of Cuntz is stated in the literature for half exact rather than split exact functors. However the proof uses only split exactness.
\[thm:cu\]([@cu1046 4.4]) Let $G$ be a functor from $C^*$-algebras to abelian groups. Assume that
- $G$ is homotopy invariant.
- $G$ is ${\mathcal{K}}$-stable.
- $G$ is split exact.
Then for every $C^*$-algebra $A$, $$G(A{\overset{\sim}\otimes}{\mathcal{T}}^{{\rm top}}_0)=0.$$
\[fact:k0stable\]([@ror 6.4.1]) $K_0$ is ${\mathcal{K}}$-stable.
It follows from the proposition above, Cuntz’ theorem and excision, that the connecting map $\partial:K_1^{{\rm top}}(A(0,1))\to K_0(A{\overset{\sim}\otimes}{\mathcal{K}})$ is an isomorphism. Further, one checks, using the explicit formula \eqref{formubeta} for $\beta$ and the explicit formula for $\partial$, that the following diagram commutes $$\xymatrix{K_1^{{\rm top}}A(0,1)\ar[r]^\partial&K_0(A{\overset{\sim}\otimes}{\mathcal{K}})\\&K_0 A\ar[ul]_\beta\ar[u]^\wr}$$ This proves that $\beta$ is an isomorphism.
Polynomial homotopy and Karoubi-Villamayor $K$-theory {#sec:polikv}
=====================================================
In this section we analyze to what extent the results of the previous section on topological $K$-theory of Banach algebras have algebraic analogues valid for all rings. We shall not consider continuous homotopies for general rings, among other reasons, because in general they do not carry any interesting topologies. Instead, we shall consider polynomial homotopies. Two ring homomorphisms $f_0,f_1:A\to B$ are called [*elementary homotopic*]{} if there exists a ring homomorphism $H:A\to B[t]$ such that the following diagram commutes $$\xymatrix{& B[t]\ar[d]^{({{\rm ev}}_0,{{\rm ev}}_1)}\\
A\ar[ur]^H\ar[r]_{(f_0,f_1)}& B}$$ Two homomorphisms $f,g:A\to B$ are [*homotopic*]{} if there is a finite sequence $(f_i)_{0\le i\le n}$ of homomorphisms such that $f=f_0$, $f_n=g$, and such that for all $i$, $f_i$ is elementary homotopic to $f_{i+1}$. We write $f\sim
g$ to indicate that $f$ is homotopic to $g$. We say that a functor $G$ from rings to a category ${\mathfrak{D}}$ is [*homotopy invariant*]{} if it maps the inclusion $A\to A[t]$ ($A\in {\mathfrak{Ass}}$) to an isomorphism. In other words, $G$ is homotopy invariant if it is stable (in the sense of \[defi:stability\]) with respect to the natural inclusion $A\to A[t]$. One checks that $G$ is homotopy invariant if and only if it preserves the homotopy relation between homomorphisms. If $G$ is any functor, we call a ring $A$ [*$G$-regular*]{} if $G A\to
G(A[t_1,\dots,t_n])$ is an isomorphism for all $n$.
\[exa:k0reg\] Noetherian regular rings are $K_0$-regular [@rosen 3.2.13] (the same is true for all Quillen’s $K$-groups, by a result of Quillen [@q341]; see Schlichting’s lecture notes [@sch]) and moreover for $n<0$, $K_n$ vanishes on such rings (by [@rosen 3.3.1] and Remark \[rem:basskneg\]). If $k$ is any field, then the ring $R=k[x,y]/\langle y^2-x^3\rangle$ is not $K_0$-regular (this follows from [@chupro I.3.11 and II.2.3.2]). By \[exa:conmut\] and \[exa:dualn2\], the ring $k[\epsilon]$ is not $K_1$-regular; indeed the $K_1$-class of the element $1+\epsilon t\in k[\epsilon][t]^*$ is a nontrivial element of $\ker(K_1(k[\epsilon][t])\to K_1(k[\epsilon]))={\mathrm{coker}}(K_1(k[\epsilon])\to K_1(k[\epsilon][t]))$.
The Banach algebras of paths and loops have the following algebraic analogues. Let $A$ be a ring; let ${{\rm ev}}_i:A[t]\to A$ be the evaluation homomorphism ($i=0,1$). Put $$\begin{gathered}
\label{lupez}
PA:=\ker(A[t]\overset{{{\rm ev}}_0}\to A)\\
\Omega A:=\ker( PA\overset{{{\rm ev}}_1}\to A)\label{omega}\end{gathered}$$ The groups $GL(\ \ )_0$ and $K_1^{{\rm top}}$ have the following algebraic analogues. Let $A$ be a unital ring. Put $$\begin{aligned}
\GL(A)'_0=&{\mathrm{Im}}(\GL PA\to \GL A)\\
=&\{g\in \GL A:\exists h\in \GL(A[t]): h(0)=1,h(1)=g\}.\end{aligned}$$ Set $$KV_1A:=\GL A/\GL(A)'_0.$$ The group $KV_1$ is the $K_1$-group of Karoubi-Villamayor [@kv]. It is abelian, since as we shall see in Proposition \[prop:kv1ppties\] below, there is a natural surjection $K_1A{\twoheadrightarrow}KV_1A$. Unlike what happens with its topological analogue, the functor $GL(\ \ )'_0$ does not preserve surjections (see Exercise \[exe:gl0notepi\] below). As a consequence, the $KV$-analogue of \[thm:exci01\] does not hold for general short exact sequences of rings, but only for those sequences such that $\GL(B)'_0\to \GL(C)'_0$ is onto, such as, for example, split exact sequences. Next we list some of the basic properties of $KV_1$; all except nilinvariance (due independently to Weibel [@wn] and Pirashvili [@pira]) were proved by Karoubi and Villamayor in [@kv].
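Membership in $\GL(A)'_0$ can be illustrated with a unipotent matrix: if $N$ is nilpotent, then $h(t)=1+tN$ lies in $\GL(A[t])$ with polynomial inverse $1-tN+t^2N^2-\cdots$, and connects $1$ to $1+N$. A numpy sketch (the particular $N$ is an assumption of the illustration):

```python
import numpy as np

# A strictly upper triangular (hence nilpotent, N^3 = 0) matrix.
N = np.array([[0.0, 2.0, -1.0],
              [0.0, 0.0,  3.0],
              [0.0, 0.0,  0.0]])
I = np.eye(3)
assert np.allclose(np.linalg.matrix_power(N, 3), 0)

for t in np.linspace(0.0, 1.0, 11):
    h = I + t * N                            # h(0) = 1, h(1) = 1 + N
    h_inv = I - t * N + t**2 * (N @ N)       # polynomial inverse in t
    assert np.allclose(h @ h_inv, I)
```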
\[prop:kv1ppties\]
[i)]{} There is a natural surjective map $K_1 A{\twoheadrightarrow}KV_1A$ $(A\in{\mathfrak{Ass}})$.
[ii)]{} The rule $A\mapsto KV_1A$ defines a split-exact functor ${\mathfrak{Ass}}\to{\mathfrak{Ab}}$.
[iii)]{} If \eqref{abc} is an exact sequence such that the map $\GL(B)'_0\to \GL(C)'_0$ is onto, then the map $K_1C\to K_0A$ of Theorem \[thm:exci01\] factors through $KV_1 C$, and the resulting sequence $$\xymatrix{KV_1A\ar[r] &KV_1B\ar[r]& KV_1C\ar[d]^{\partial}\\K_0C&K_0B\ar[l]&K_0A\ar[l]}$$ is exact.
[iv)]{}$KV_1$ is additive, homotopy invariant, nilinvariant and $M_\infty$-stable.
If \eqref{abc} is exact and $\GL(B)'_0\to \GL(C)'_0$ is onto, then it is clear that $$\label{seq:kv1}
KV_1A\to KV_1 B\to KV_1C$$ is exact, and we have a commutative diagram with exact rows and columns $$\label{diag:kv}
\xymatrix{&1\ar[d]&1\ar[d]&1\ar[d]\\
1\ar[r]&\GL\Omega A\ar[d]\ar[r]&\GL\Omega B\ar[d]\ar[r]&\GL\Omega C\ar[d]\\
1\ar[r]&\GL PA\ar[d]\ar[r]&\GL PB\ar[d]\ar[r]&\GL PC\ar[d]\\
1\ar[r]&\GL A\ar[r]&\GL B\ar [r]&\GL C}$$ If moreover \eqref{abc} is split exact, then so is each row in the diagram above, and one checks, by looking at this diagram, that $\GL(A)'_0=\GL(B)'_0\cap \GL(A)$, whence $KV_1 A\to KV_1B$ is injective. Thus $$\label{seq:kv1rightsplit}
0\to KV_1A\to KV_1B\to KV_1C\to 0$$ is exact. In particular $$\label{kv1nunit}
KV_1 A=\ker(KV_1\tilde{A}\to KV_1{\mathbb{Z}}).$$ If $R$ is unital, then $\GL(R)'_0\supset E(R)$, by the same argument as in the Banach algebra case \eqref{k1ontok1top}. In particular $KV_1 R$ is abelian. This proves the unital case of i); the general case follows from the unital one using \eqref{kv1nunit}. Thus the right split exact sequence \eqref{seq:kv1rightsplit} is in fact a split exact sequence of abelian groups. It follows that $KV_1$ is split exact, proving ii). Part iii) follows from part i) and \eqref{seq:kv1}. The proof that $KV_1$ is $M_\infty$-stable on unital rings is the same as the proof that $K_1$ is. By split exactness, it follows that $KV_1$ is $M_\infty$-stable on all rings. To prove that $KV_1$ is homotopy invariant, we must show that the split surjection ${{\rm ev}}_0:KV_1(A[t])\to KV_1 A$ is injective. By split exactness, its kernel is $KV_1PA=\GL PA/\GL(PA)'_0$, so we must prove that $\GL PA\subset \GL(PA)'_0$. But if $\alpha(s)\in \GL PA$, then $\beta(s,t):=\alpha(st)\in \GL PPA$ and ${{\rm ev}}_{t=1}(\beta)=\alpha$. Thus homotopy invariance is proved. If \eqref{abc} is exact and $A$ is nilpotent, then $PA$ and $\Omega A$ are nilpotent too, whence all those maps displayed in diagram \eqref{diag:kv} which are induced by $B\to C$ are surjective. Diagram chasing shows that $\GL(B)_0'\to \GL(C)'_0$ is surjective, whence by iii) we have an exact sequence $$KV_1A\to KV_1B\to KV_1C\to 0$$ Thus to prove $KV_1$ is nilinvariant, it suffices to show that if $A^2=0$ then $KV_1A=0$. But if $A^2=0$, then the map $H:A\to A[t]$, $H(a)=at$ is a ring homomorphism, and satisfies ${{\rm ev}}_0H=0$, ${{\rm ev}}_1H=1_A$. Hence $KV_1A=0$, by homotopy invariance.
Consider the exact sequence $$\label{lopez}
0\to \Omega A\to PA\overset{{{\rm ev}}_1}\to A\to 0$$ By definition, $\GL(A)'_0={\mathrm{Im}}(\GL(PA)\to \GL(A))$. But in the course of the proof of Proposition \[prop:kv1ppties\] above, we have shown that $\GL(PA)=\GL(PA)'_0$, so by \[prop:kv1ppties\] iii), we have a natural map $$\label{kv1k0}
KV_1(A)\overset\partial\to K_0(\Omega A).$$ Moreover, \eqref{kv1k0} is injective, by \[prop:kv1ppties\] iii) and iv). This map will be of use in the next section. Higher $KV$-groups are defined by iterating the loop functor $\Omega$: $$KV_{n+1}(A)=KV_1(\Omega^nA).$$ Higher $KV$-theory satisfies excision for those exact sequences $\eqref{abc}$ such that for every $n$, the map $$\{\alpha\in \GL(B[t_1,\dots,t_n]):\alpha(0,\dots,0)=1\}\to \{\alpha\in \GL(C[t_1,\dots,t_n]):\alpha(0,\dots,0)=1\}$$ is onto. Such sequences are usually called $\GL$-fibration sequences (a different terminology was used in [@kv]). Note that if \eqref{abc} is a $\GL$-fibration, then $\GL(B)'_0\to\GL(C)'_0$ is surjective, and thus \[prop:kv1ppties\] iii) applies. Moreover it is proved in [@kv] that if \eqref{abc} is a $\GL$-fibration, then there is a long exact sequence $(n\ge 1)$ $$\xymatrix{KV_{n+1}B\ar[r]& KV_{n+1}C\ar[r]&KV_n(A)\ar[r]&KV_n(B)\ar[r]&KV_n(C).}$$
Homotopy $K$-theory {#sec:kh}
===================
Definition and basic properties of $KH$.
----------------------------------------
Let $A$ be a ring. Consider the natural map $$\label{map:delukh}
\partial:K_0A\to K_{-1}\Omega A$$ associated with the exact sequence \eqref{lopez}. Since $K_{-p}=K_0\Sigma^p$, we may iterate the construction and form the colimit $$\label{kh0}
KH_0A:={\mathop{\mathrm{colim}}}_p K_{-p}\Omega^pA.$$ Put $$\label{khn}
KH_nA:={\mathop{\mathrm{colim}}}_pK_{-p}\Omega^{n+p}A=\left\{\begin{array}{cc}KH_0\Omega^nA& (n\ge 0)\\ KH_0\Sigma^nA&(n\le 0)\end{array}\right.$$ The groups $KH_*A$ are Weibel’s [*homotopy $K$-theory*]{} groups of $A$ ([@wh],[@biva 8.1.1]). One can also express $KH$ in terms of $KV$, as we shall see presently. We need some preliminaries first. We know that $K_1(S)=0$ for every infinite sum ring $S$; in particular $KV_1(\Gamma R)=0$ for unital $R$, by \[prop:kv1ppties\] i). Using split exactness of $KV_1$, it follows that $KV_1\Gamma A=0$ for every ring $A$. Thus the dotted arrow in the commutative diagram with exact row below exists $$\xymatrix{K_1(\Gamma A)\ar[r]&K_1(\Sigma A)\ar@{>>}[d]\ar[r]&K_0(A)\ar@{.>}[dl]\ar[r]& 0\\
& KV_1(\Sigma A)&&}$$ The map $K_1(\Sigma A)\to KV_1(\Sigma A)$ is surjective by Proposition \[prop:kv1ppties\] i). Thus the dotted arrow above is a surjective map $$\label{map:k0kv1sigma}
K_0(A){\twoheadrightarrow}KV_1(\Sigma A).$$ On the other hand, the map \eqref{kv1k0} applied to $\Sigma A$ gives $$\label{map:kv1sigmak-1omega}
KV_1(\Sigma A)\overset\partial\to K_0(\Omega\Sigma A)=K_0(\Sigma \Omega A)=K_{-1}(\Omega A)$$ One checks, by tracking down boundary maps (see the proof of [@biva 8.1.1]), that the composite of \eqref{map:k0kv1sigma} with \eqref{map:kv1sigmak-1omega} is the map \eqref{map:delukh}: $$\label{diag:khkh}
\xymatrix{K_0(A)\ar[rr]^{\eqref{map:delukh}}\ar[dr]_{\eqref{map:k0kv1sigma}}&& K_{-1}(\Omega A)\\
& KV_1(\Sigma A)\ar[ur]_{\eqref{map:kv1sigmak-1omega}}&}$$ On the other hand, \eqref{map:kv1sigmak-1omega} followed by \eqref{map:k0kv1sigma} applied to $\Sigma\Omega A$ yields a map $$KV_1(\Sigma A)\to KV_1(\Sigma^2\Omega A)=KV_2(\Sigma^2A).$$ Iterating this map one obtains an inductive system; by \eqref{diag:khkh}, we get $$KH_0(A)={\mathop{\mathrm{colim}}}_rKV_1(\Sigma^{r+1}\Omega^r A)={\mathop{\mathrm{colim}}}_rKV_r(\Sigma^{r}A)$$ and in general, $$\label{khkv}
KH_n(A)={\mathop{\mathrm{colim}}}_r KV_1(\Sigma^{r+1}\Omega^{n+r}A)={\mathop{\mathrm{colim}}}_rKV_{n+r}(\Sigma^{r}A).$$ Next we list some of the basic properties of $KH$, proved by Weibel in [@wh].
\[thm:khppties\] ([@wh]) Homotopy $K$-theory has the following properties.
- It is homotopy invariant, nilinvariant and $M_\infty$-stable.
- It satisfies excision: to the sequence there corresponds a long exact sequence ($n\in{\mathbb{Z}}$) $$KH_{n+1}C\to KH_nA\to KH_nB\to KH_nC\to KH_{n-1}A.$$
From \eqref{khkv} and the fact that, by \[prop:kv1ppties\], $KV$ is homotopy invariant, it follows that $KH$ is homotopy invariant. Nilinvariance, $M_\infty$-stability and excision for $KH$ follow from the fact that (by \[prop:kneg\]) these hold for nonpositive $K$-theory, using the formulas \eqref{kh0} and \eqref{khn}.
\[exe:htpinv\] Note that in the proof of \[thm:khppties\], the formula \eqref{khkv} is used only for homotopy invariance. Prove that $KH$ is homotopy invariant without using \eqref{khkv}, but using excision instead. Hint: show that the excision map $KH_*(A)\to KH_{*-1}(\Omega A)$ coming from the sequence \eqref{lopez} is an isomorphism.
Show that $KH$ commutes with filtering colimits.
$KH$ for $K_0$-regular rings.
-----------------------------
\[lem:pareg\] Let $A$ be a ring. Assume that $A$ is $K_n$-regular for all $n\le 0$. Then $KV_1(A)\to K_0(\Omega A)$ is an isomorphism, and for $n\le 0$, $K_n(PA)=0$, $PA$ and $\Omega A$ are $K_n$-regular, and $K_nA\to K_{n-1}\Omega A$ an isomorphism.
Consider the split exact sequence $$\xymatrix{0\ar[r]&PA[t_1,\dots,t_r]\ar[r]&A[s,t_1,\dots,t_r]\ar[r]&A[t_1,\dots,t_r]\ar[r]&0}$$ Applying $K_n$ ($n\le 0$) and using that $K_n$ is split exact and that, by hypothesis, $A$ is $K_n$-regular, we get that $K_n(PA[t_1,\dots,t_r])=0$. As this happens for all $r\ge 0$, $PA$ is $K_n$-regular. Hence the map of exact sequences $$\xymatrix{0\ar[r]&\Omega A\ar[d]\ar[r]&PA\ar[d]\ar[r]&A\ar[d]\ar[r]&0\\
0\ar[r]&\Omega A[t_1,\dots,t_r]\ar[r]&PA[t_1,\dots,t_r]\ar[r]&A[t_1,\dots,t_r]\ar[r]&0}$$ induces commutative squares with exact rows $$\xymatrix{0\ar[r]&KV_1(A)\ar[d]\ar[r]&K_0(\Omega A)\ar[d]\ar[r]&0\\
0\ar[r]&KV_1(A[t_1,\dots,t_r])\ar[r]&K_0(\Omega A[t_1,\dots,t_r])\ar[r]&0}$$ and $$\xymatrix{0\ar[r]&K_n(A)\ar[d]\ar[r]&K_{n-1}(\Omega A)\ar[d]\ar[r]&0\\
0\ar[r]&K_n(A[t_1,\dots,t_r])\ar[r]&K_{n-1}(\Omega A[t_1,\dots,t_r])\ar[r]&0}\qquad (n\le 0)$$ By Proposition \[prop:kv1ppties\] and our hypothesis, the first vertical map in each diagram is an isomorphism; it follows that the second is also an isomorphism.
\[rem:vorst\] A theorem of Vorst [@vorst] implies that if $A$ is $K_0$-regular then it is $K_n$-regular for all $n\le 0$. Thus the lemma above holds whenever $A$ is $K_0$-regular. The statement of Vorst’s theorem is that, for Quillen’s $K$-theory and $n\in{\mathbb{Z}}$, a $K_n$-regular unital ring is also $K_{n-1}$-regular. (In his paper, Vorst states this only for commutative rings, but his proof works in general.) For $n\le 0$, Vorst’s theorem extends to all, not necessarily unital, rings. To see this, one shows first, using the fact that ${\mathbb{Z}}$ is $K_n$-regular (since it is Noetherian regular), and split exactness, that $A$ is $K_n$-regular if and only if $\tilde{A}$ is. Now Vorst’s theorem applied to $\tilde{A}$ implies that if $A$ is $K_n$-regular then it is $K_{n-1}$-regular $(n\le 0)$.
\[prop:khk0reg\] If $A$ satisfies the hypothesis of Lemma \[lem:pareg\], then $$KH_n(A)=\begin{cases} KV_n(A) & n\ge 1\\
K_n(A) & n\le 0\end{cases}$$
By the lemma, $KV_{n+1}(A)=KV_1(\Omega^nA)\to K_0(\Omega^{n+1}A)$ and $K_{-n}(\Omega^pA)\to K_{-n-1}(\Omega^{p+1}A)$ are isomorphisms for all $n,p\ge 0$.
Toeplitz ring and the fundamental theorem for $KH$.
---------------------------------------------------
Write ${\mathcal{T}}$ for the free unital ring on two generators $\alpha$, $\alpha^*$ subject to $\alpha\alpha^*=1$. Mapping $\alpha$ to $\sum_ie_{i,i+1}$ and $\alpha^*$ to $\sum_ie_{i+1,i}$ yields a homomorphism ${\mathcal{T}}\to\Gamma:=\Gamma{\mathbb{Z}}$ which is injective ([@biva Proof of 4.10.1]); we identify ${\mathcal{T}}$ with its image in $\Gamma$. Note $$\label{eijintoep}
{\alpha^*}^{p-1}\alpha^{q-1}-{\alpha^*}^{p}\alpha^{q} = e_{p,q}\qquad (p,q\ge 1).$$ Thus ${\mathcal{T}}$ contains the ideal $M_\infty:=M_\infty{\mathbb{Z}}$. There is a commutative diagram with exact rows and split exact columns: $$\xymatrix{0\ar[r]&M_\infty\ar@{=}[d]\ar[r]&{\mathcal{T}}_0\ar[d]\ar[r]& \sigma\ar[d]\ar[r]&0\\
0\ar[r]&M_\infty\ar[r]&{\mathcal{T}}\ar[d]\ar[r]& {\mathbb{Z}}[t,t^{-1}]\ar[d]^{{{\rm ev}}_1}\ar[r]&0\\
&&{\mathbb{Z}}\ar@{=}[r]&{\mathbb{Z}}&}$$ Here the rings ${\mathcal{T}}_0$ and $\sigma$ of the top row are defined so that the columns are exact. Note moreover that the rows are split as sequences of abelian groups. Thus tensoring with any ring $A$ yields an exact diagram $$\label{toepalg}
\xymatrix{0\ar[r]&M_\infty A\ar@{=}[d]\ar[r]&{\mathcal{T}}_0A\ar[d]\ar[r]& \sigma A\ar[d]\ar[r]&0\\
0\ar[r]&M_\infty A\ar[r]&{\mathcal{T}}A\ar[d]\ar[r]& A[t,t^{-1}]\ar[d]\ar[r]&0\\
& &A\ar@{=}[r]&A&}$$ Here we have omitted tensor products from our notation; thus ${\mathcal{T}}A={\mathcal{T}}\otimes A$, $\sigma A=\sigma\otimes A$, and ${\mathcal{T}}_0A={\mathcal{T}}_0\otimes A$. We have the following algebraic analogue of Cuntz’ theorem.
\[thm:algcuntz\] ([@biva 7.3.2]) Let $G$ be a functor from rings to abelian groups. Assume that:
- $G$ is homotopy invariant.
- $G$ is split exact.
- $G$ is $M_\infty$-stable.
Then for any ring $A$, $G({\mathcal{T}}_0A)=0$.
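Relation \eqref{eijintoep}, which exhibits the matrix units $e_{p,q}$ inside ${\mathcal{T}}$, can be verified in finite matrix truncations; with $\alpha$ truncated to the $n\times n$ superdiagonal shift, the identity in fact holds exactly at every size, since the boundary terms of the two products cancel. A numpy sketch (the size $n=8$ and the range of exponents are choices of the illustration):

```python
import numpy as np

n = 8
alpha = np.diag(np.ones(n - 1), 1)   # sum_i e_{i,i+1}, truncated to n x n
alpha_star = alpha.T                 # sum_i e_{i+1,i}

def mpow(m, k):
    return np.linalg.matrix_power(m, k)

def e(p, q):                         # matrix unit e_{p,q} (1-based indices)
    m = np.zeros((n, n)); m[p - 1, q - 1] = 1.0
    return m

# alpha*^(p-1) alpha^(q-1) - alpha*^p alpha^q = e_{p,q}
for p in range(1, 5):
    for q in range(1, 5):
        lhs = mpow(alpha_star, p - 1) @ mpow(alpha, q - 1) \
            - mpow(alpha_star, p) @ mpow(alpha, q)
        assert np.allclose(lhs, e(p, q))
```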
\[thm:decalkh\] Let $A$ be a ring and $n\in{\mathbb{Z}}$. Then $$KH_n(\sigma A)=KH_{n-1}(A).$$
By Theorem \[thm:khppties\], $KH$ satisfies excision. Apply this to the top row of diagram \eqref{toepalg} and use Theorem \[thm:algcuntz\].
The following result of Weibel [@wh 1.2 iii)] is immediate from the theorem above.
\[coro:khfund\](Fundamental theorem for $KH$, [@wh 1.2 iii)]). $KH_n(A[t,t^{-1}])=KH_n(A)\oplus KH_{n-1}(A)$.
\[rem:khfund\] We regard Theorem \[thm:decalkh\] as an algebraic analogue of Bott periodicity for $KH$. What is missing in the algebraic case is an analogue of the exponential map; there is no isomorphism $\Omega A\to \sigma A$.
\[rem:kvnotfund\] We have shown in Proposition \[prop:kv1ppties\] that $KV_1$ satisfies the hypothesis of Theorem \[thm:algcuntz\]. Thus $KV_1({\mathcal{T}}_0A)=0$ for every ring $A$. However, there is no natural isomorphism $KV_1(\sigma A)=K_0(A)$. Indeed, since $KV_1(\sigma PA)=KV_1(P\sigma A)=0$, the existence of such an isomorphism would imply that $K_0(PA)=0$, which in turn, given the fact that $K_0$ is split exact, would imply that $K_0(A[t])=K_0(A)$, a formula which does not hold for general $A$ (see Example \[exa:k0reg\]).
We finish the section with a technical result which will be used in Subsection \[subsec:fundatoep\].
\[prop:atotau=0\] Let ${\mathfrak{D}}$ be an additive category, $F:{\mathfrak{Ass}}\to {\mathfrak{D}}$ an additive functor, and $A$ a ring. Assume $F$ is $M_\infty$-stable on $A$ and $M_2$-stable on both ${\mathcal{T}}A$ and $M_2({\mathcal{T}}A)$. Then $F(M_\infty A\to {\mathcal{T}}A)$ is the zero map.
Because $F$ is $M_\infty$-stable on $A$, it suffices to show that $F$ sends the inclusion $\j:A\to {\mathcal{T}}A$, $a\mapsto ae_{11}$, to the zero map. Note that $e_{11}=1-\alpha^*\alpha$, by \eqref{eijintoep}. Consider the inclusion $\j^{\infty}:A\to {\mathcal{T}}A$, $\j^{\infty}(a)=a\cdot 1={\mathrm{diag}}(a,a,a,\dots)$. One checks that the following matrix $$Q=\left[\begin{matrix}1-\alpha^*\alpha &\alpha^*\\\alpha & 0\end{matrix}\right]\in \GL_2{\mathcal{T}}(\tilde{A})$$ satisfies $Q^2=1$ and $$Q\left[\begin{array}{cc}\j(a)&0\\0&\j^\infty(a)\end{array}\right]Q=\left[\begin{array}{cc}\j^\infty(a)&0\\0&0\end{array}\right].$$ Since we are assuming that $F$ is additive and $M_2$-stable on both ${\mathcal{T}}A$ and $M_2({\mathcal{T}}A)$, we may now apply Exercise \[exe:add\] and Proposition \[prop:vw\] to deduce the following identity between elements of the group $\hom_{\mathfrak{D}}(F(A),F({\mathcal{T}}A))$: $$F(\j)+F(\j^{\infty})=F(\j^{\infty}).$$ It follows that $F(\j)=0$, as we had to prove.
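The matrix algebra behind this proof can be verified in the exact shift model on finitely supported sequences (Python lists with implicit trailing zeros; the scalar $a$ and the test vectors are assumptions of the illustration). With $\alpha\alpha^*=1$ holding on the nose, one checks $Q^2=1$ and the conjugation identity numerically.

```python
# Exact model: alpha = backward shift, alpha* = forward shift, acting on
# finitely supported sequences (lists with implicit trailing zeros).
def al(x):       return x[1:]          # alpha
def al_star(x):  return [0] + x        # alpha*

def add(u, v):
    m = max(len(u), len(v))
    u, v = u + [0] * (m - len(u)), v + [0] * (m - len(v))
    return [s + t for s, t in zip(u, v)]

def scal(c, x):  return [c * t for t in x]
def proj(x):     return [x[0]] if x else []    # 1 - alpha* alpha = e_{11}
def trim(x):
    while x and x[-1] == 0: x = x[:-1]
    return x

def Q(u, v):     # Q = [[1 - a*a, a*], [a, 0]] acting on the column (u, v)
    return (add(proj(u), al_star(v)), al(u))

a = 3            # a scalar ring element (illustration only)
u, v = [1, 2, 0, 3], [5, -1, 2]

# Q^2 = 1
w1, w2 = Q(*Q(u, v))
assert (trim(w1), trim(w2)) == (trim(u), trim(v))

# Q diag(j(a), j_inf(a)) Q = diag(j_inf(a), 0)
def D(u, v):     # diag(j(a), j_inf(a)): j(a) = a*e_{11}, j_inf(a) = a*1
    return (scal(a, proj(u)), scal(a, v))

r1, r2 = Q(*D(*Q(u, v)))
assert trim(r1) == trim(scal(a, u)) and trim(r2) == []
```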
\[exe:gl0notepi\] Deduce from Remark \[rem:kvnotfund\] and propositions \[prop:atotau=0\] and \[prop:kv1ppties\] iii) that the canonical maps $\GL({\mathcal{T}}_0 A)'_0\to \GL(\sigma A)'_0$ and $\GL({\mathcal{T}}A)'_0\to \GL(A[t,t^{-1}])'_0$ are not surjective in general.
Quillen’s Higher $K$-theory {#sec:kq}
===========================
Quillen’s higher $K$-groups of a unital ring $R$ are defined as the homotopy groups of a certain $CW$-complex; the plus construction of the classifying space of the group $\GL(R)$ [@qplus]. The latter construction is defined more generally for $CW$-complexes, but we shall not go into this general version; for this and other matters connected with the plus construction approach to $K$-theory, the interested reader should consult standard references such as Jon Berrick’s book [@berr], or the papers of Loday [@lorep], and Wagoner [@rosen]. We shall need a number of basic facts from algebraic topology, which we shall presently review. First of all we recall that if $X$ and $Y$ are $CW$-complexes, then the cartesian product $X\times Y$, equipped with the product topology, is not a $CW$-complex in general. That is, the category of $CW$-complexes is not closed under finite products in ${\mathrm{Top}}$. On the other hand, any $CW$-complex is a compactly generated –or Kelley– space, and the categorical product of two $CW$-complexes in the category $Ke$ of compactly generated spaces is again $CW$, and also has the cartesian product $X\times Y$ as underlying set. Moreover, in case the product topology in $X\times Y$ happens to be compactly generated, then it agrees with that of the product in $Ke$. In these notes, we write $X\times Y$ for the cartesian product equipped with its compactly generated topology. (For a more detailed treatment of the categorical properties of $Ke$ see [@gs]).
Classifying spaces.
-------------------
The [*classifying space*]{} of a (discrete) group $G$ is a pointed connected $CW$-complex $BG$ such that $$\pi_nBG=\left\{\begin{array}{cc}G& n=1\\ 0 &n\ne 1\end{array}\right.$$ This property characterizes $BG$ and makes it functorial up to homotopy. Further there are various strictly functorial models for $BG$ ([@rosen Ch. 5§1], [@goja 1.5]). We choose the model coming from the realization of the simplicial nerve of $G$ ([@goja]), and write $BG$ for that model. Here are some basic properties of $BG$ which we shall use.
\[fact:bg\]
[i)]{} If $$1\to G_1\to G_2\to G_3\to 1$$ is an exact sequence of groups, then $BG_2\to BG_3$ is a fibration with fiber $BG_1$.
[ii)]{} If $G_1$ and $G_2$ are groups, then the map $B(G_1\times G_2)\to BG_1\times BG_2$ is a homeomorphism.
[iii)]{} The homology of $BG$ is the same as the group homology of $G$; if $M$ is a $\pi_1BG=G$-module, then $$H_n(BG,M)=H_n(G,M):={\mathrm{Tor}}^{{\mathbb{Z}}G}_n({\mathbb{Z}},M)$$
Perfect groups and the plus construction for $BG$.
--------------------------------------------------
A group $P$ is called [*perfect*]{} if its abelianization is trivial, or equivalently, if $P=[P,P]$. Note that a group $P$ is perfect if and only if the functor $\hom_{{\mathfrak{Grp}}}(P,-):{\mathfrak{Ab}}\to {\mathfrak{Ab}}$ is zero. Thus the full subcategory of ${\mathfrak{Grp}}$ consisting of all perfect groups is closed both under colimits and under homomorphic images. In particular, if $G$ is a group, then the directed set of all perfect subgroups of $G$ is filtering, and its union is again a perfect subgroup $N$, the maximal perfect subgroup of $G$. Since the conjugate of a perfect subgroup is again perfect, it follows that $N$ is normal in $G$. Note that $N\subset [G,G]$; if moreover equality holds, then we say that $G$ is [*quasi-perfect*]{}. For example, if $R$ is a unital ring, then $\GL R$ is quasi-perfect, and $ER$ is its maximal perfect subgroup ([@rosen 2.1.4]). Quillen’s [*plus construction*]{} applied to the group $G$ yields a cellular map of $CW$-complexes $\iota:BG\to (BG)^+$ with the following properties (see [@lorep 1.1.1, 1.1.2]).
- At the level of $\pi_1$, $\iota$ induces the projection $G{\twoheadrightarrow}G/N$.
- At the level of homology, $\iota$ induces an isomorphism $H_*(G,M)\to H_*((BG)^+,M)$ for each $G/N$-module $M$.
- If $BG\to X$ is any continuous function which at the level of $\pi_1$ maps $N\to 1$, then the dotted arrow in the following diagram exists and is unique up to homotopy $$\xymatrix{BG\ar[r]^\iota\ar[d]&(BG)^+\ar@{.>}[dl]\\X&}$$
- Properties i) and iii) above characterize $\iota:BG\to BG^+$ up to homotopy.
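As a side illustration of the group-theoretic notions above (ours, not needed in the sequel): a short computation shows that $A_5=[A_5,A_5]$ is perfect, while $S_3$ has $[S_3,S_3]=A_3$ but trivial maximal perfect subgroup, so $S_3$ is not quasi-perfect. For a finite group the derived series stabilizes, and its stable term is the maximal perfect subgroup $N$.

```python
# Small sanity checks (ours) for "perfect" and "quasi-perfect" groups,
# with permutations of {0, ..., n-1} represented as tuples.

def compose(p, q):          # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    q = [0] * len(p)
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

def closure(gens, n):       # subgroup generated by gens inside S_n
    e = tuple(range(n))
    group, frontier = {e}, {e}
    while frontier:
        new = {compose(g, h) for g in frontier for h in gens} - group
        group |= new
        frontier = new
    return group

def derived(group, n):      # commutator subgroup [G, G]
    comms = {compose(compose(p, q), compose(inverse(p), inverse(q)))
             for p in group for q in group}
    return closure(comms, n)

def max_perfect(group, n):  # stable term of the derived series
    while True:
        d = derived(group, n)
        if d == group:
            return d
        group = d

s5 = closure({(1, 0, 2, 3, 4), (1, 2, 3, 4, 0)}, 5)  # S_5
a5 = derived(s5, 5)                                   # A_5 = [S_5, S_5]
assert len(a5) == 60 and derived(a5, 5) == a5         # A_5 is perfect

s3 = closure({(1, 0, 2), (1, 2, 0)}, 3)
assert len(derived(s3, 3)) == 3                       # [S_3, S_3] = A_3
assert len(max_perfect(s3, 3)) == 1                   # N(S_3) = 1: not quasi-perfect
```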
From the universal property, it follows that if $f:BG_1\to BG_2$ is a continuous map, then there is a (continuous) map $BG_1^+\to BG_2^+$, unique up to homotopy, which makes the following diagram commute $$\xymatrix{BG_1\ar[d]\ar[r]^f&BG_2\ar[d]\\ BG_1^+\ar[r]^{f^+}&BG_2^+}$$
\[facts:bg+\]([@lorep 1.1.4])
[i)]{} If $G_1$ and $G_2$ are groups, and $\pi_i:B(G_1\times G_2)\to BG_i$ is the projection, then the map $(\pi_1^+,\pi_2^+):B(G_1\times G_2)^+\to BG_1^+\times BG_2^+$ is a homotopy equivalence.
[ii)]{} The map $BN^+\to BG^+$ is the universal covering space of $BG^+$.
If $$\label{seq:g123}
1\to G_1\to G_2\overset{\pi}\to G_3\to 1$$ is an exact sequence of groups, then we can always choose $\pi^+$ to be a fibration; write $F$ for its fiber. If the induced map $G_1\to \pi_1F$ kills the maximal perfect subgroup $N_1$ of $G_1$, then $BG_1\to F$ factors through a map $$\label{map:fibplus}
BG_1^+\to F$$
\[prop:fibplus\] Let be an exact sequence of groups. Assume that
[i)]{} $G_1$ is quasi-perfect and $G_2$ is perfect.
[ii)]{}$G_3$ acts trivially on $H_*(G_1,{\mathbb{Z}})$.
[iii)]{}$\pi_1F$ acts trivially on $H_*(F,{\mathbb{Z}})$.
Then the map is a homotopy equivalence.
Consider the map of fibration sequences $$\xymatrix{BG_1\ar[d]\ar[r]&BG_2\ar[d]\ar[r]&BG_3\ar[d]\\
F\ar[r]&BG_2^+\ar[r]& BG_3^+}$$ By the second property of the plus construction listed above, the maps $BG_i\to BG_i^+$ are homology equivalences. For $i\ge 2$, we have, in addition, that $G_i$ is perfect, so $BG_i^+$ is simply connected and $F$ is connected with abelian $\pi_1$, isomorphic to ${\mathrm{coker}}(\pi_2BG_2^+\to\pi_2BG_3^+)$. Hence $\pi_1F\to H_1F$ is an isomorphism, by Poincaré’s theorem. All this, together with the Comparison Theorem ([@zee]), implies that $BG_1\to F$, and thus also , are homology equivalences. Moreover, because $G_1$ is quasi-perfect by hypothesis, the Hurewicz map $\pi_1BG_1^+\to H_1BG_1^+$ is an isomorphism, again by Poincaré’s theorem. Summing up, $BG_1^+\to F$ is a homology isomorphism which induces an isomorphism of fundamental groups; since $\pi_1F$ acts trivially on $H_*F$ by hypothesis, this implies that is a weak equivalence ([@brow 4.6.2]).
\[lem:fibplus\] Let be an exact sequence of groups. Assume that for every $g\in G_2$ and every finite set $h_1,\dots,h_k$ of elements of $G_1$, there exists an $h\in G_1$ such that for all $i$, $gh_ig^{-1}=hh_ih^{-1}$. Then $G_3$ acts trivially on $H_*(G_1,{\mathbb{Z}})$.
If $g\in G_2$ maps to $\bar{g}\in G_3$, then the action of $\bar{g}$ on $H_*(G_1,{\mathbb{Z}})$ is that induced by conjugation by $g$. The hypothesis implies that the action of $g$ on any fixed cycle of the standard bar complex which computes $H_*(BG_1,{\mathbb{Z}})$ ([@chubu 6.5.4]) coincides with the conjugation action by an element of $G_1$, whence it is trivial ([@chubu 6.7.8]).
Quillen’s higher $K$-groups of a unital ring $R$ are defined as the homotopy groups of $(B\GL R)^+$; we put $$\begin{aligned}
K(R):&=(B\GL R)^+\\
K_nR:&=\pi_nK (R)\qquad (n\ge 1).\end{aligned}$$ In general, for a not necessarily unital ring $A$, we put $$K(A):=\mathrm{fiber}(K(\tilde{A})\to K({\mathbb{Z}})), \qquad K_n(A)=\pi_nK(A)\qquad (n\ge 1)$$ One checks, using \[facts:bg+\] i), that when $A$ is unital, these definitions agree with the previous ones.
As defined, $K$ is a functor from ${\mathfrak{Ass}}$ to the homotopy category of topological spaces ${{\mathrm{Ho}}{\mathrm{Top}}}$. Further note that for $n=1$ we recover the definition of $K_1$ given in \[defi:k01\].
We shall see below that the main basic properties of Section \[sec:basick01\] which hold for $K_1$ hold also for higher $K_n$ ([@lorep]). First we need some preliminaries. If $W:{\mathbb{N}}\to {\mathbb{N}}$ is an injection, we shall identify $W$ with the endomorphism ${\mathbb{Z}}^{({\mathbb{N}})}\to {\mathbb{Z}}^{({\mathbb{N}})}$, $W(e_i)=e_{W(i)}$, and also with the matrix of the latter in the canonical basis, given by $W_{ij}=\delta_{i,W(j)}$. Let $V=W^t$ be the transpose matrix; then $VW=1$. If now $R$ is a unital ring, then the endomorphism $\psi^{V,W}:M_\infty R\to M_\infty R$ of \[prop:vw\] induces a group endomorphism $\GL(R)\to\GL(R)$, which in turn yields homotopy classes of maps $$\label{map:psi+}
\psi:K(R)\to K(R),\qquad \psi':BE(R)^+\to BE(R)^+.$$
\[lem:psitriv\] ([@lorep 1.2.9]) The maps are homotopic to the identity.
A proof of the previous lemma for the case of $\psi$ can be found in [*loc. cit.*]{}; a similar argument works for $\psi'$.
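On finite truncations the relations $VW=1$ and “$WV=$ projection onto the image of $W$” can be verified directly; a small sketch (ours), for the sample injection $n\mapsto 2n$:

```python
# Illustration (ours): the matrix of an injection w, W_{ij} = delta_{i, w(j)},
# and its transpose V = W^t satisfy VW = 1, while WV is only the projection
# onto the image of w. We truncate to finite matrices, where VW = 1 is exact.

def matrix_of(w, rows, cols):                # W e_j = e_{w(j)}
    return [[1 if i == w(j) else 0 for j in range(cols)] for i in range(rows)]

def transpose(m):
    return [list(col) for col in zip(*m)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

w = lambda n: 2 * n                          # sample injection n -> 2n
W = matrix_of(w, rows=8, cols=4)             # columns indexed by the domain
V = transpose(W)

I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
assert matmul(V, W) == I4                    # VW = 1

P = matmul(W, V)                             # WV = projection onto im(w)
assert matmul(P, P) == P
assert [P[i][i] for i in range(8)] == [1, 0, 1, 0, 1, 0, 1, 0]
```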
\[prop:hikay\] Let $n\ge 1$ and let $R$ be a unital ring.
[i)]{} The functor $K_n:{\mathfrak{Ass}}_1\to {\mathfrak{Ab}}$ is additive.
[ii)]{} The direct sum $\oplus:\GL R\times \GL R\to \GL R$ of induces a map $K(R)\times K(R)\to K(R)$ which makes $K(R)$ into an $H$-group, that is, into a group up to homotopy. Similarly, $BE(R)^+$ also has an $H$-group structure induced by $\oplus$.
[iii)]{} The functors $K:{\mathfrak{Ass}}_1\to{{\mathrm{Ho}}{\mathrm{Top}}}$ and $K_n:{\mathfrak{Ass}}_1\to{\mathfrak{Ab}}$ are $M_\infty$-stable.
Part i) is immediate from \[facts:bg+\] i). The map of ii) is the composite of the homotopy inverse of the map of \[facts:bg+\] i) and the map $B\GL(\oplus)^+$. One checks that, up to endomorphisms of the form $\psi^{V,W}$ induced by injections ${\mathbb{N}}\to {\mathbb{N}}$, the map $\oplus:\GL(R)\times\GL(R)\to\GL(R)$ is associative and commutative, and the identity matrix is a neutral element for $\oplus$. Hence by \[lem:psitriv\] it follows that $B\GL(R)^+$ is a commutative and associative $H$-space. Since it is connected, this implies that it is an $H$-group, by [@white X.4.17]. The same argument shows that $BE(R)^+$ is also an $H$-group. Thus ii) is proved. Let $\iota:R\to M_\infty R$ be the canonical inclusion. To prove iii), one observes that a choice of bijection ${\mathbb{N}}\times {\mathbb{N}}\to {\mathbb{N}}$ gives an isomorphism $\phi:M_\infty M_\infty R{\overset{\cong}{\to}}M_\infty R$ such that the composite with $M_\infty\iota$ is a homomorphism of the form $\psi^{V,W}$ for some injection $W:{\mathbb{N}}\to{\mathbb{N}}$, whence the induced map $K(R)\to K(R)$ is homotopic to the identity, by Lemma \[lem:psitriv\]. This proves that $K(\iota)$ is a homotopy equivalence.
\[coro:ksumtriv\] If $S$ is an infinite sum ring, then $K(S)$ is contractible.
It follows from the theorem above, using Exercise \[exe:matrix\] ii) and Proposition \[prop:sumring\].
\[prop:conseq\] Let $R$ be a unital ring, $\Sigma R$ the suspension, $\Omega K(\Sigma R)$ the loopspace, and $\Omega_0K(\Sigma R)\subset \Omega K(\Sigma R)$ the connected component of the trivial loop. There is a homotopy equivalence $$K(R){\overset{\sim}{\to}}\Omega_0K(\Sigma R).$$
Consider the exact sequence of rings $$0\to M_\infty R\to \Gamma R\to \Sigma R\to 0$$ Since $K_1(\Gamma R)=0$, we have $$\GL(\Gamma R)=E(\Gamma R),$$ which maps onto $E\Sigma R$. Thus we have an exact sequence of groups $$1\to \GL M_\infty R\to \GL\Gamma R\to E\Sigma R\to 1$$ One checks that the inclusion $\GL(M_\infty R)\to \GL(\Gamma R)$ satisfies the hypothesis of \[lem:fibplus\] (see [@wa bottom of page 357] for details). Thus the perfect group $E(\Sigma R)$ acts trivially on $H_*(\GL M_\infty R,{\mathbb{Z}})$. On the other hand, by \[prop:hikay\] ii), both $K(\Gamma R)$ and $BE(\Sigma R)^+$ are $H$-groups, and moreover since $\pi:\GL(\Gamma R)\to \GL(\Sigma R)$ is compatible with $\oplus$, the map $\pi^+:K(\Gamma R)\to BE(\Sigma R)^+$ can be chosen to be compatible with the induced operation. This implies that the fiber of $\pi^+$ is a connected $H$-space (whence an $H$-group) and so its fundamental group acts trivially on its homology. Hence by Propositions \[prop:hikay\] iii) and \[prop:fibplus\], we have a homotopy fibration $$K(R)\to K(\Gamma R)\to BE(\Sigma R)^+$$ By \[coro:ksumtriv\], the map $$\label{map:alverre}
\Omega BE(\Sigma R)^+\to K(R)$$ is a homotopy equivalence. Finally, by \[facts:bg+\] ii), $$\label{map:aldere}
\Omega BE(\Sigma R)^+{\overset{\sim}{\to}}\Omega_0K(\Sigma R).$$ Now compose with a homotopy inverse of to obtain the theorem.
For all $n\in {\mathbb{Z}}$, $K_n(\Sigma R)=K_{n-1}(R)$.
For $n\le 0$, the statement of the proposition is immediate from the definition of $K_n$. For $n=1$, it is . If $n\ge 2$, then $$K_n(\Sigma R)=\pi_n(K(\Sigma R))=\pi_{n-1}(\Omega K(\Sigma R))=\pi_{n-1}(\Omega_0 K(\Sigma R))=\pi_{n-1}K(R)=K_{n-1}R.\ \ \qed$$
The homotopy equivalence of Proposition \[prop:conseq\] is the basis for the construction of the nonconnective $K$-theory spectrum; we will come back to this in Section \[sec:spectra\].
Functoriality issues. {#subsec:functissues}
---------------------
As defined, the rule $K:R\mapsto B\GL(R)^+$ is only functorial up to homotopy. Actually it is possible to choose a functorial model for $K R$; this can be done in different ways (see for example [@lod 11.2.4,11.2.11]). However, in the constructions and arguments we have made (notably in the proof of \[prop:conseq\]) we have often used Whitehead’s theorem that a map of $CW$-complexes which induces an isomorphism at the level of homotopy groups (a [*weak equivalence*]{}) always has a homotopy inverse. Now, there is in principle no reason why a natural weak equivalence between functorial $CW$-complexes should admit a homotopy inverse which is also natural; thus for example, the weak equivalence of Proposition \[prop:conseq\] need not be natural for an arbitrarily chosen functorial version of $K R$. What we need is to be able to choose functorial models so that any natural weak equivalence between them automatically has a natural homotopy inverse. In fact we can actually do this, as we shall now see. First of all, as a technical restriction we have to choose a small full subcategory $I$ of the category ${\mathfrak{Ass}}$, and look at $K$-theory as a functor on $I$. This is no real restriction, as in practice we always start with a set of rings (often with only one element) and then all the arguments and constructions we perform take place in a set (possibly larger than the one we started with, but still a set). Next we invoke the fact that the category ${\mathrm{Top}}_*^I$ of functors from $I$ to pointed spaces is a closed model category where fibrations and weak equivalences are defined objectwise (by [@hir 11.6.1] this is true of the category of functors to any cofibrantly generated model category; by [@hir 11.1.9], ${\mathrm{Top}}_*$ is such a category).
This implies, among other things, that there is a full subcategory $({\mathrm{Top}}_*^I)_c\to {\mathrm{Top}}_*^I$, the subcategory of cofibrant objects (among which any natural weak equivalence has a natural homotopy inverse), a functor ${\mathrm{Top}}_*^I\to ({\mathrm{Top}}_*^I)_c$, $X\mapsto \hat{X}$, and a natural transformation $\hat{X}\to X$ such that $\hat{X}(R)\to X(R)$ is a fibration and a weak equivalence for all $R$. Thus we can replace our given functorial model for $B\GL(R)^+$ by $\widehat{B\GL(R)^+}$, and redefine $K(R)=\widehat{B\GL(R)^+}$.
Relative $K$-groups and excision.
---------------------------------
Let $R$ be a unital ring, $I{\triangleleft}R$ an ideal, and $S=R/I$. Put $${\overline}{\GL}S:={\mathrm{Im}}(\GL R\to \GL S)$$ The inclusion ${\overline}{\GL}S\subset\GL S$ induces a map $$\label{map:glbar}
(B{\overline}{\GL}S)^+\to K(S)$$ By \[facts:bg+\] ii), induces an isomorphism $$\pi_n(B{\overline}{\GL}S)^+=K_nS \qquad (n\ge 2).$$ On the other hand, $$\pi_1(B{\overline}{\GL}S^+)={\overline}{\GL}S/ES={\mathrm{Im}}(K_1R\to K_1S).$$ Consider the homotopy fiber $$K(R:I):=\mathrm{fiber}((B\GL R)^+\to (B{\overline}{\GL}S)^+).$$ The [*relative $K$-groups*]{} of $I$ with respect to the ideal embedding $I{\triangleleft}R$ are defined by $$K_n(R:I):=\begin{cases}\pi_n K(R:I) &n\ge 1\\ K_n(I) & n\le 0\end{cases}$$ The long exact sequence of homotopy groups of the fibration which defines $K(R:I)$, spliced together with the exact sequences of Theorem \[thm:exci01\] and Proposition \[prop:kneg\], yields a long exact sequence $$\label{relseq}
K_{n+1}R\to K_{n+1}S\to K_n(R:I)\to K_nR\to K_n(S)\qquad (n\in {\mathbb{Z}})$$ The canonical map $\tilde{I}\to R$ induces a map $$\label{abstorel}
K_n(I)\to K_n(R:I)$$ This map is an isomorphism for $n\le 0$, but not in general (see Remark \[exa:swanex\]). The rings $I$ such that this map is an isomorphism for all $n$ and all $R$ are called [$K$-excisive]{}. Suslin and Wodzicki have completely characterized $K$-excisive rings ([@wodex],[@qs],[@sus]). We have
\[thm:exito\]([@sus]) The map is an isomorphism for all $n$ and $R$ $\iff {\mathrm{Tor}}^{\tilde{I}}_n({\mathbb{Z}},I)=0$ $\forall n$.
Note that $$\begin{gathered}
{\mathrm{Tor}}_0^{\tilde{I}}({\mathbb{Z}},I)=I/I^2\\
{\mathrm{Tor}}_n^{\tilde{I}}({\mathbb{Z}},I)={\mathrm{Tor}}_{n+1}^{\tilde{I}}({\mathbb{Z}},{\mathbb{Z}})\qquad (n\ge 1).\end{gathered}$$
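These identities come from the long exact $\mathrm{Tor}$ sequence of the unitalization extension; a short derivation (ours):

```latex
% Apply \mathbb{Z}\otimes_{\tilde I}(-) to the extension
%   0 \to I \to \tilde{I} \to \mathbb{Z} \to 0.
% Since \tilde{I} is free as a module over itself,
% \mathrm{Tor}_n^{\tilde I}(\mathbb{Z},\tilde{I}) = 0 for n \ge 1, and the
% long exact sequence degenerates to connecting isomorphisms
\mathrm{Tor}_{n+1}^{\tilde I}(\mathbb{Z},\mathbb{Z})
  \overset{\cong}{\longrightarrow} \mathrm{Tor}_n^{\tilde I}(\mathbb{Z},I)
  \qquad (n\ge 1).
% In degree zero, \mathbb{Z} = \tilde{I}/I gives
\mathrm{Tor}_0^{\tilde I}(\mathbb{Z},I)
  = \mathbb{Z}\otimes_{\tilde I} I = I/I\cdot I = I/I^2 .
```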
\[exa:sigmanotexci\] Let $G$ be a group, $IG{\triangleleft}{\mathbb{Z}}G$ the augmentation ideal. Then ${\mathbb{Z}}G=\tilde{IG}$ is the unitalization of $IG$. Hence $${\mathrm{Tor}}_n^{\tilde{IG}}({\mathbb{Z}},IG)=H_{n+1}(G,{\mathbb{Z}}).$$ In particular $${\mathrm{Tor}}_0^{\tilde{IG}}({\mathbb{Z}},IG)=G_{ab}$$ So if $IG$ is $K$-excisive, then $G$ must be a perfect group. Thus, for example, $IG$ is not $K$-excisive if $G$ is a nontrivial abelian group. In particular, the ring $\sigma$ is not $K$-excisive, as it coincides with the augmentation ideal of ${\mathbb{Z}}[{\mathbb{Z}}]={\mathbb{Z}}[t,t^{-1}]$. As another example, if $S$ is an infinite sum ring, then $$H_n(\GL(S),{\mathbb{Z}})=H_n(K(S),{\mathbb{Z}})=H_n(pt,{\mathbb{Z}})=0\qquad (n\ge 1).$$ Thus the ring $I\GL(S)$ is $K$-excisive.
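For the smallest nontrivial group this can be made completely explicit; a hand computation (ours) for $G={\mathbb{Z}}/2$:

```latex
% G = \mathbb{Z}/2 = \langle g \rangle, so \mathbb{Z}G = \mathbb{Z}[g]/(g^2-1)
% and IG = (g-1) \cong \mathbb{Z} as an abelian group.  From
%   (g-1)^2 = g^2 - 2g + 1 = 2 - 2g = -2(g-1)
% we get IG^2 = 2\,IG, whence
\mathrm{Tor}_0^{\mathbb{Z}G}(\mathbb{Z}, IG) = IG/IG^2
  \cong \mathbb{Z}/2 = G_{ab} \neq 0,
% so IG fails the excision criterion, as it must for this abelian G.
```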
We shall introduce a functorial complex $\bar{L}(A)$ which computes ${\mathrm{Tor}}_*^{\tilde{A}}({\mathbb{Z}},A)$ and use it to show that the functor ${\mathrm{Tor}}_*^{\widetilde{(-)}}({\mathbb{Z}},-)$ commutes with filtering colimits. Consider the functor $\perp:\tilde{A}-mod\to\tilde{A}-mod$, $$\perp M=\bigoplus_{m\in M}\tilde{A}.$$ The functor $\perp$ is the free $\tilde{A}$-module cotriple [@chubu 8.6.6]. Let $L(A)\to A$ be the canonical free resolution associated to $\perp$ [@chubu 8.7.2]; by definition, its $n$-th term is $L_n(A)=\perp^n A$. Put $\bar{L}(A)={\mathbb{Z}}\otimes_{\tilde{A}}L(A)$. Then $\bar{L}(A)$ is a functorial chain complex which satisfies $H_*(\bar{L}(A))={\mathrm{Tor}}_*^{\tilde{A}}({\mathbb{Z}},A)$. Because $\perp$ commutes with filtering colimits, it follows that the same is true of $L$ and $\bar{L}$, and therefore also of ${\mathrm{Tor}}_*^{\widetilde{(-)}}({\mathbb{Z}},-)=H_*\bar{L}(-)$.
\[exe:unitalexci\]
[i)]{} Prove that any unital ring is $K$-excisive.
[ii)]{} Prove that if $R$ is a unital ring, then $M_\infty R$ is $K$-excisive. (Hint: $M_\infty R={\mathop{\mathrm{colim}}}_nM_nR$).
If $A$ is flat over $k$ (e.g. if $k$ is a field) then the canonical resolution $L^k(A){\overset{\sim}{\to}}A$ associated with the induced module cotriple $\tilde{A}_k\otimes_k(-)$, is flat. Thus $\bar{L}^k(A):=L^k(A)/AL^k(A)$ computes ${\mathrm{Tor}}_*^{\tilde{A}_k}(k,A)$. Modding out by degeneracies, we obtain a homotopy equivalent complex ([@chubu]) $C^{{\mathrm{bar}}}(A/k)$, with $C_n^{{\mathrm{bar}}}(A/k)=A^{\otimes_k n+1}$. The complex $C^{{\mathrm{bar}}}$ is the bar complex considered by Wodzicki in [@wodex]; its homology is the bar homology of $A$ relative to $k$, $H^{{\mathrm{bar}}}_*(A/k)$. If $A$ is a ${\mathbb{Q}}$-algebra, then $A$ is flat as a ${\mathbb{Z}}$-module, and thus $H^{{\mathrm{bar}}}_*(A/{\mathbb{Z}})={\mathrm{Tor}}^{\tilde{A}}_*({\mathbb{Z}},A)$. Moreover, as $A^{\otimes_{\mathbb{Z}}n}=A^{\otimes_{\mathbb{Q}}n}$, we have $C^{{\mathrm{bar}}}(A/{\mathbb{Z}})=C^{{\mathrm{bar}}}(A/{\mathbb{Q}})$, whence $${\mathrm{Tor}}^{\tilde{A}}_*({\mathbb{Z}},A)={\mathrm{Tor}}^{\tilde{A}_{\mathbb{Q}}}_*({\mathbb{Q}},A)=H^{{\mathrm{bar}}}_*(A/{\mathbb{Q}})$$
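To get a feel for $C^{{\mathrm{bar}}}$, here is a toy computation (ours, with ad hoc helper names) over $k={\mathbb{F}}_2$, where the signs in $b'$ disappear: for the unital algebra $A={\mathbb{F}}_2[x]/(x^2)$ the bar homology vanishes in the degrees checked, while for the square-zero ideal $I=(x)$ the differential $b'$ is identically zero, so $H^{{\mathrm{bar}}}_n(I)=I^{\otimes n+1}\neq 0$; in particular $I/I^2=I\neq 0$, so $I$ is not $K$-excisive.

```python
from itertools import product

# Toy check (ours) of the bar complex C_n = A^{tensor (n+1)} over k = F_2.

def bar_matrix(mult, dim, n):
    """Matrix of b': A^{(n+1)} -> A^{(n)}; basis = index tuples, over F_2."""
    src = list(product(range(dim), repeat=n + 1))
    tgt = list(product(range(dim), repeat=n))
    col = {t: i for i, t in enumerate(tgt)}
    rows = [[0] * len(src) for _ in tgt]
    for j, t in enumerate(src):
        for i in range(n):                   # multiply slots i, i+1
            for b, c in enumerate(mult[t[i]][t[i + 1]]):
                if c:
                    u = t[:i] + (b,) + t[i + 2:]
                    rows[col[u]][j] ^= 1     # characteristic 2: no signs
    return rows

def rank2(m):                                # Gaussian elimination over F_2
    m, r = [row[:] for row in m], 0
    for j in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][j]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][j]:
                m[i] = [a ^ b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# A = F_2[x]/(x^2), basis (1, x); mult[i][j] = coefficients of e_i * e_j
A_mult = [[(1, 0), (0, 1)], [(0, 1), (0, 0)]]
ranks = [rank2(bar_matrix(A_mult, 2, n)) for n in (1, 2, 3)]
# H_0 = A / im(b'_1) and H_n = ker(b'_n) / im(b'_{n+1})
assert ranks[0] == 2                         # H_0 = 0: A = A^2 since A is unital
assert 4 - ranks[0] == ranks[1]              # H_1 = 0
assert 8 - ranks[1] == ranks[2]              # H_2 = 0

# I = (x): one-dimensional with zero multiplication, so b' = 0 identically
I_mult = [[(0,)]]
assert all(rank2(bar_matrix(I_mult, 1, n)) == 0 for n in (1, 2, 3))
```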
Locally convex algebras.
------------------------
A [*locally convex*]{} algebra is a complete topological ${\mathbb{C}}$-algebra $L$ with a locally convex topology. Such a topology is defined by a family of seminorms $\{\rho_\alpha\}$; continuity of the product means that for every $\alpha$ there exists a $\beta$ such that $$\label{multcont}
\rho_\alpha(xy)\le \rho_\beta(x)\rho_\beta(y)\qquad (x,y\in L).$$ If in addition the topology is determined by a countable family of seminorms, we say that $L$ is a [*Fréchet*]{} algebra.
Let $L$ be a locally convex algebra. Consider the following two factorization properties:
- Cohen-Hewitt factorization. $$\begin{gathered}
\label{ppty:F}
\qquad \forall n\ge 1,a=(a_1,\dots,a_n)\in L^{\oplus n}=\bigoplus_{i=1}^nL\quad\exists z\in L,\quad x\in L^{\oplus n}\text{ such
that}\\
z\cdot x=a\text{ and } x\in {\overline}{L\cdot a}\nonumber\end{gathered}$$ Here the bar denotes topological closure in $L^{\oplus n}$.
- Triple factorization. $$\begin{gathered}
\label{ppty:T}
\forall a\in L^{\oplus n}, \exists b\in L^{\oplus n}, \quad c,d \in L,\\ \text{such that}
a=cdb \text{ and } (0:d)_l:=\{v\in L:dv=0\}= (0:cd)_l\end{gathered}$$
The right ideal $(0:d)_l$ is called the [*left annihilator*]{} of $d$. Note that property (b) makes sense for an arbitrary ring $L$.
\[lem:equifacto\] Cohen-Hewitt factorization implies triple factorization. That is, if $L$ is a locally convex algebra which satisfies property (a) above, then it also satisfies property (b).
Let $a\in L^{\oplus n}$. By (a), there exist $b\in L^{\oplus n}$ and $z\in L$ such that $a=zb$. Applying (a) again, we get that $z=cd$ with $d\in {\overline}{L\cdot z}$; this implies that $(0:d)_l=(0:z)_l$.
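A numerical sketch (ours) of the simplest instance $n=1$ of property (a) in the Banach algebra $L=C_0(0,1]$: every $f$ factors as $f=z\cdot x$ with $z={\mathrm{sign}}(f)\,|f|^{1/2}$ and $x=|f|^{1/2}$, both vanishing at $0$ and hence lying in $L$ (we do not verify numerically that $x\in{\overline}{L\cdot f}$, though it holds).

```python
import math

# Sketch (ours): factorization f = z * x in C_0(0,1] for a sample f.

def f(t):
    return t * math.sin(1.0 / t)          # a sample element of C_0(0,1]

def z(t):                                 # sign(f) |f|^{1/2}
    return math.copysign(math.sqrt(abs(f(t))), f(t))

def x(t):                                 # |f|^{1/2}
    return math.sqrt(abs(f(t)))

grid = [i / 1000.0 for i in range(1, 1001)]
assert all(abs(z(t) * x(t) - f(t)) < 1e-12 for t in grid)   # f = z * x
assert x(grid[0]) < 0.05 and abs(z(grid[0])) < 0.05         # both tend to 0 at 0
```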
\[prop:exciloca\]([@qs 3.12, 3.13(a)]) Let $L$ be a ring. Assume that either $L$ or $L^{op}$ satisfies . Then $L$ is $K$-excisive.
Fréchet $m$-algebras with approximate units.
--------------------------------------------
A [*uniformly bounded left approximate unit*]{} (ublau) in a locally convex algebra $L$ is a net $\{e_\lambda\}$ of elements of $L$ such that $e_\lambda a\to a$ for all $a\in L$ and $\sup_\lambda\rho_\alpha(e_\lambda)<\infty$ for every $\alpha$. Right ubau’s are defined analogously. If $L$ is a locally convex algebra such that a defining family of seminorms can be chosen so that condition is satisfied with $\alpha=\beta$ (i.e. the seminorms are [*submultiplicative*]{}) we say that $L$ is an $m$-algebra. An $m$-algebra which is also Fréchet will be called a [*Fréchet $m$-algebra*]{}.
\[exa:ubau\] Every $C^*$-algebra has a two-sided ubau ([@david I.4.8]). If $G$ is a locally compact group, then the group algebra $L^1(G)$ is a Banach algebra with two sided ubau [@wodex 8.4]. If $L_1$ and $L_2$ are locally convex algebras with ublaus $\{e_\lambda\}$ and $\{f_\mu\}$, then $\{e_\lambda\otimes f_\mu\}$ is a ublau for the projective tensor product $L_1{\hat{\otimes}}L_2$, which is a (Fréchet) $m$-algebra if both $L_1$ and $L_2$ are.
In a Banach algebra, any bounded approximate unit is uniformly bounded. Thus for example, the unit of a unital Banach algebra is an ublau. However, the unit of a general unital locally convex algebra (or even of a Fréchet $m$-algebra) need not be uniformly bounded.
Let $L$ be an $m$-Fréchet algebra. A left [*Fréchet $L$-module*]{} is a Fréchet space $V$ equipped with a left $L$-module structure such that the multiplication map $L\times V\to V$ is continuous. If $L$ is equipped with an ublau $e_\lambda$ such that $e_\lambda\cdot v\to v$ for all $v\in V$, then we say that $V$ is [*essential*]{}.
\[exa:summ\] If $L$ is an $m$-Fréchet algebra with ublau $e_\lambda$ and $x\in L^{\oplus n}$ $(n\ge 1)$ then $e_\lambda x\to x$. Thus $L^{\oplus n}$ is an essential Fréchet $L$-module. The next exercise generalizes this example.
\[exe:summ\] Let $L$ be an $m$-Fréchet algebra with ublau $e_\lambda$, $M$ a unital $m$-Fréchet algebra, and $n\ge 1$. Prove that for every $x\in (L{\hat{\otimes}}M)^{\oplus n}$, $(e_\lambda\otimes 1)x\to x$. Conclude that $(L{\hat{\otimes}}M)^{\oplus n}$ is an essential $L{\hat{\otimes}}M$-module.
The following Fréchet version of Cohen-Hewitt’s factorization theorem (originally proved in the Banach setting) is due to M. Summers.
\[thm:summ\]([@summ 2.1]) Let $L$ be an $m$-Fréchet algebra with ublau, and $V$ an essential Fréchet left $L$-module. Then for each $v\in V$ and for each neighbourhood $U$ of the origin in $V$ there is an $a\in L$ and a $w\in V$ such that $v=aw$, $w\in\overline{Lv}$, and $w-v\in U$.
\[thm:ubau\]([@wodex 8.1]) Let $L$ be a Fréchet $m$-algebra. Assume $L$ has a right or left ubau. Then $L$ is $K$-excisive.
In view of Lemma , it suffices to show that $L$ satisfies property . This follows by applying Theorem \[thm:summ\] to the essential $L$-module $L^{\oplus n}$.
\[exe:summ2\] Prove that if $L$ and $M$ are as in Exercise , then $L{\hat{\otimes}}M$ is $K$-excisive.
In [@cot 8.1.1] it is asserted that if $k\supset {\mathbb{Q}}$ is a field, and $A$ is a $k$-algebra, then ${\mathrm{Tor}}^{\tilde{A}_{\mathbb{Q}}}_*({\mathbb{Q}},A)={\mathrm{Tor}}^{\tilde{A}_k}_*(k,A)$, but the proof uses the identity $\tilde{A}_k\otimes_{\tilde{A}}{\ ? \ }=k\otimes{\ ? \ }$, which is wrong. In [*loc. cit.*]{}, the lemma is used in combination with Wodzicki’s theorem ([@wodex 8.1]) that a Fréchet algebra $L$ with ublau is $H$-unital as a ${\mathbb{C}}$-algebra, to conclude that such $L$ is $K$-excisive. In Theorem \[thm:ubau\] we gave a different proof of the latter fact.
Fundamental theorem and the Toeplitz ring. {#subsec:fundatoep}
------------------------------------------
If $G:{\mathfrak{Ass}}\to{\mathfrak{Ab}}$ is a functor, and $A$ is a ring, we put $$NG(A):={\mathrm{coker}}(GA\to G(A[t])).$$
Let $R$ be a unital ring. We have a commutative diagram $$\xymatrix{R\ar[d]\ar[r]&R[t]\ar[d]\\ R[t^{-1}]\ar[r]&R[t,t^{-1}]}$$ Thus applying the functor $K_n$ we obtain a map $$\label{map:fund1}
K_nR\oplus NK_nR\oplus NK_nR\to K_n R[t,t^{-1}]$$ which sends $NK_nR\oplus NK_nR$ inside $\ker{{\rm ev}}_1$. Thus $K_nR\to
K_nR[t,t^{-1}]$ is a split mono, and the intersection of its image with that of $NK_nR\oplus NK_nR$ is $0$. On the other hand, the inclusion ${\mathcal{T}}R\to \Gamma R$ induces a map of exact sequences $$\xymatrix{0\ar[r]&M_\infty R\ar@{=}[d]\ar[r]&{\mathcal{T}}R\ar[d]\ar[r]&R[t,t^{-1}]\ar[d]\ar[r]& 0\\
0\ar[r]&M_\infty R\ar[r]&\Gamma R\ar[r]&\Sigma R\ar[r]& 0}$$ In particular, we have a homomorphism $R[t,t^{-1}]\to\Sigma R$, and thus a homomorphism $$\eta:K_nR[t,t^{-1}]\to K_{n-1} R.$$ Note that the maps $R[t]\to {\mathcal{T}}R$, $t\mapsto \alpha$ and $t\mapsto \alpha^*$, lift the homomorphisms $R[t]\to R[t,t^{-1}]$, $t\mapsto t$ and $t\mapsto t^{-1}$. It follows that $\ker\eta$ contains the image of . In [@lorep], Loday introduced a product operation in $K$-theory of unital rings $$K_p(R)\otimes K_q(S)\to K_{p+q}(R\otimes S).$$ In particular, multiplying by the class of $t\in K_1({\mathbb{Z}}[t,t^{-1}])$ induces a map $$\label{map:fund2}
\cup t:K_{n-1}R\to K_nR[t,t^{-1}].$$ Loday proves in [@lorep 2.3.5] that $\eta\circ (-\cup t)$ is the identity map. Thus the images of and have zero intersection. Moreover, we have the following result, due to Quillen [@gray], which is known as the fundamental theorem of $K$-theory.
\[thm:fundak\]([@gray], see also [@sch]) Let $R$ be a unital ring. The maps and induce an isomorphism $$K_nR\oplus NK_nR\oplus NK_nR\oplus K_{n-1}R{\overset{\cong}{\to}}K_nR[t,t^{-1}]\qquad (n\in{\mathbb{Z}}).$$
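A standard sanity check (ours; it uses the vanishing of $NK_n$ for regular noetherian rings, which we do not prove here): for $R={\mathbb{Z}}$ the theorem recovers the unit group of the Laurent polynomial ring.

```latex
% For a regular noetherian ring R one has NK_nR = 0, so the theorem reads
K_nR[t,t^{-1}] \cong K_nR \oplus K_{n-1}R .
% For R = \mathbb{Z} and n = 1 this gives
K_1(\mathbb{Z}[t,t^{-1}]) \cong K_1(\mathbb{Z}) \oplus K_0(\mathbb{Z})
  = \mathbb{Z}/2 \oplus \mathbb{Z},
% matching the units \pm t^m of \mathbb{Z}[t,t^{-1}]: the two summands are
% generated by the classes of -1 and of t (the image of 1 under \cup t).
```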
\[cor:fundak\] (cf. Theorem \[thm:decalkh\]) $$K_n(R[t,t^{-1}]:\sigma R)=K_{n-1}R\oplus NK_nR\oplus
NK_nR\qquad(n\in{\mathbb{Z}}).$$
\[prop:ktoep\](cf. Theorem \[thm:algcuntz\]) Let $R$ be a unital ring, and $n\in{\mathbb{Z}}$. Then $$\begin{gathered}
K_n{\mathcal{T}}R=K_nR\oplus NK_nR\oplus NK_nR,\\
K_n({\mathcal{T}}R:{\mathcal{T}}_0R)=NK_nR\oplus NK_nR.\end{gathered}$$
Consider the exact sequence $$0\to M_\infty R\to {\mathcal{T}}R\to R[t,t^{-1}]\to 0$$ By Proposition \[prop:exciexci\], Exercise \[exe:unitalexci\] and matrix stability we have a long exact sequence $$K_n R\to K_n{\mathcal{T}}R\to K_n R[t,t^{-1}]\to K_{n-1} R\to K_{n-1}{\mathcal{T}}R\qquad (n\in{\mathbb{Z}}).$$ By \[prop:atotau=0\], the first and the last map are zero. The proposition is immediate from this, from Corollary \[cor:fundak\], and from the discussion above.
Comparison between algebraic and topological $K$-theory I {#sec:compa1}
=========================================================
Stable $C^*$-algebras.
----------------------
The following is Higson’s homotopy invariance theorem.
\[thm:hit1\][@hig 3.2.2] Let $G$ be a functor from $C^*$-algebras to abelian groups. Assume that $G$ is split exact and ${\mathcal{K}}$-stable. Then $G$ is homotopy invariant.
\[lem:stab\] Let $G$ be a functor from $C^*$-algebras to abelian groups. Assume that $G$ is $M_2$-stable. Then the functor $F(A):=G(A{\overset{\sim}\otimes}{\mathcal{K}})$ is ${\mathcal{K}}$-stable.
Let $H$ be an infinite dimensional separable Hilbert space. The canonical isomorphism ${\mathbb{C}}^2\otimes_2 H\cong H\oplus H$ induces an isomorphism ${\mathcal{K}}{\overset{\sim}\otimes}{\mathcal{K}}\to M_2{\mathcal{K}}$ which makes the following diagram commute $$\xymatrix{{\mathcal{K}}{\overset{\sim}\otimes}{\mathcal{K}}\ar[rr]^{\cong}&& M_2{\mathcal{K}}\\
&{\mathcal{K}}\ar[ul]^{e_{11}\otimes 1}\ar[ur]^{\iota_1}&}$$ Since $G$ is $M_2$-stable by hypothesis, it follows that $F(1_A{\overset{\sim}\otimes}e_{1,1}{\overset{\sim}\otimes}1_{\mathcal{K}})$ is an isomorphism for all $A$.
The following result, due to Suslin and Wodzicki, is (one of the variants of) what is known as Karoubi’s conjecture [@karcomp].
\[thm:karconc\*\][@qs 10.9] Let $A$ be a $C^*$-algebra. Then there is a natural isomorphism $K_n(A{\overset{\sim}\otimes}{\mathcal{K}})=K^{{\rm top}}_n(A{\overset{\sim}\otimes}{\mathcal{K}})$.
By definition $K_0=K_0^{{\rm top}}$ on all $C^*$-algebras. By Example \[exa:ubau\] and Theorem \[thm:ubau\], $C^*$-algebras are $K$-excisive. In particular $K_*$ is split exact when regarded as a functor of $C^*$-algebras. By \[prop:hikay\] iii), \[prop:kneg\] i), and split exactness, $K_*$ is $M_\infty$-stable on $C^*$-algebras; this implies it is also $M_2$-stable (Exercise \[exe:matrix\]). Thus $K_*(-{\overset{\sim}\otimes}{\mathcal{K}})$ is ${\mathcal{K}}$-stable, by \[lem:stab\]. Hence $K_n(A(0,1]{\overset{\sim}\otimes}{\mathcal{K}})=0$, by split exactness and homotopy invariance (Theorem \[thm:hit1\]). It follows that $$\label{downshift}
K_{n+1}(A{\overset{\sim}\otimes}{\mathcal{K}})=K_n(A(0,1){\overset{\sim}\otimes}{\mathcal{K}})$$ by excision. In particular, for $n\ge 0$, $$\label{agreepos}
K_n(A{\overset{\sim}\otimes}{\mathcal{K}})=K_0(A{\overset{\sim}\otimes}\left({\overset{\sim}\otimes}_{i=1}^n{\mathbb{C}}(0,1)\right){\overset{\sim}\otimes}{\mathcal{K}})=K_n^{{\rm top}}(A{\overset{\sim}\otimes}{\mathcal{K}}).$$ On the other hand, by Cuntz’ theorem \[thm:cu\], excision applied to the $C^*$-Toeplitz extension and \[lem:stab\], $K_{n+1}(A(0,1){\overset{\sim}\otimes}{\mathcal{K}})=K_n(A{\overset{\sim}\otimes}{\mathcal{K}}{\overset{\sim}\otimes}{\mathcal{K}})=K_n(A{\overset{\sim}\otimes}{\mathcal{K}})$. Putting this together with , we get that $K_*({\mathcal{K}}{\overset{\sim}\otimes}A)$ is Bott periodic. It follows that the identity holds for all $n\in {\mathbb{Z}}$.
Stable Banach algebras.
-----------------------
The following result is a particular case of a theorem of Wodzicki.
\[thm:karconbau\]([@wod Thm. 2], [@cot 8.3.3, 8.3.4]) Let $L$ be a Banach algebra with right or left ubau. Then there is an isomorphism $K_*(L{\hat{\otimes}}{\mathcal{K}})=K_*^{{\rm top}}(L{\hat{\otimes}}{\mathcal{K}})$.
Consider the functor $G_L:C^*\to{\mathfrak{Ab}}$, $A\mapsto K_*(L{\hat{\otimes}}(A{\overset{\sim}\otimes}{\mathcal{K}}))$. By the same argument as in the proof of \[thm:karconc\*\], $G_L$ is homotopy invariant. Hence ${\mathbb{C}}\to {\mathbb{C}}[0,1]$ induces an isomorphism $$\begin{aligned}
G_L({\mathbb{C}})=&K_*(L{\hat{\otimes}}{\mathcal{K}}){\overset{\cong}{\to}}G_L({\mathbb{C}}[0,1])=K_*(L{\hat{\otimes}}({\mathbb{C}}[0,1]{\overset{\sim}\otimes}{\mathcal{K}}))\\
=&K_*(L{\hat{\otimes}}{\mathcal{K}}[0,1])=K_*((L{\hat{\otimes}}{\mathcal{K}})[0,1]).\end{aligned}$$ Hence $K_{n+1}(L{\hat{\otimes}}{\mathcal{K}})=K_n(L{\hat{\otimes}}{\mathcal{K}}(0,1))$, by \[thm:ubau\] and \[exa:ubau\]. Thus $K_n(L{\hat{\otimes}}{\mathcal{K}})=K_n^{{\rm top}}(L{\hat{\otimes}}{\mathcal{K}})$ for $n\ge 0$. Consider the punctured Toeplitz sequence $$0\to {\mathcal{K}}\to {\mathcal{T}}^{{\rm top}}_0\to {\mathbb{C}}(0,1)\to 0$$ By [@david V.1.5], this sequence admits a continuous linear splitting. Hence it remains exact after applying the functor $L{\hat{\otimes}}-$. By \[thm:cu\], we have $$K_{-n}(L{\hat{\otimes}}({\mathcal{K}}{\overset{\sim}\otimes}{\mathcal{T}}_0))=0\qquad (n\ge 0).$$ Thus $$K_{-n}(L{\hat{\otimes}}{\mathcal{K}})=K_{-n}(L{\hat{\otimes}}({\mathcal{K}}{\overset{\sim}\otimes}{\mathcal{K}}))=K_0((L{\hat{\otimes}}{\mathcal{K}}){\hat{\otimes}}{\hat{\otimes}}_{i=1}^n{\mathbb{C}}(0,1))=K^{{\rm top}}_{-n}(L{\hat{\otimes}}{\mathcal{K}}).\ \ \qed$$
The theorem above holds more generally for $m$-Fréchet algebras ([@wod Thm. 2], [@cot 8.3.4]), with the appropriate definition of topological $K$-theory (see Section \[sec:karconfre\] below).
Let $A$ be a Banach algebra. Consider the map $K_0(A)\to K_{-1}(A(0,1))$ coming from the exact sequence $$0\to A(0,1)\to A(0,1]\to A\to 0$$ Put $$A(0,1)^m=A{\hat{\otimes}}\left({\hat{\otimes}}_{i=1}^{m}{\mathbb{C}}(0,1)\right)$$ and define $$KC_n(A)={\mathop{\mathrm{colim}}}_p K_{-p}(A(0,1)^{n+p})\qquad (n\in {\mathbb{Z}})$$
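Concretely, the colimit defining $KC_n$ is taken along the boundary maps just described; for $n\ge 0$ it unwinds as follows (a sketch of the indexing; for general $n\in{\mathbb{Z}}$ the colimit starts at $p\ge -n$):

```latex
KC_n(A) = \mathrm{colim}\Big( K_0(A(0,1)^{n})
   \to K_{-1}(A(0,1)^{n+1})
   \to K_{-2}(A(0,1)^{n+2}) \to \cdots \Big)
% Each arrow is the boundary map K_{-p}(B) -> K_{-p-1}(B(0,1)) associated to
% 0 -> B(0,1) -> B(0,1] -> B -> 0, applied with B = A(0,1)^{n+p}.
```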
[i)]{} Prove that $KC_*$ satisfies excision, $M_\infty$-stability, continuous homotopy invariance, and nilinvariance.
[ii)]{} Prove that $KC_*(A{\hat{\otimes}}{\mathcal{K}})=K_*^{{\rm top}}(A)$.
[iii)]{} Prove that the composite $$K_n^{{\rm top}}A=K_0(A(0,1)^n)\to KC_n(A)\to KC_n(A{\hat{\otimes}}{\mathcal{K}})=K^{{\rm top}}_n(A)\qquad( n\ge 0)$$
is the identity map. In particular $KC_n(A)\to K^{{\rm top}}_n(A)$ is surjective for $n\ge 0$.
J. Rosenberg has conjectured (see [@rosurvey 3.7]) that, for $n\le -1$, the restriction of $K_n$ to commutative $C^*$-algebras is homotopy invariant. Note that if $A$ is a Banach algebra (commutative or not) such that $K_{-q}(A(0,1)^p)\to K_{-q}(A(0,1)^p[0,1])$ is an isomorphism for all $p,q\ge 0$, then $KC_n(A)\to K^{{\rm top}}_nA$ is an isomorphism for all $n$. In particular, if Rosenberg’s conjecture holds, this will happen for all commutative $C^*$-algebras $A$.
Topological $K$-theory for locally convex algebras {#sec:topk2}
==================================================
Diffeotopy $KV$.
----------------
We begin by recalling the notion of $C^\infty$-homotopies or diffeotopies (from [@cd], [@cw]). Let $L$ be a locally convex algebra. Write ${\mathcal{C}}^\infty([0,1],L)$ for the algebra of those functions $[0,1]\to L$ which are restrictions of ${\mathcal{C}}^\infty$-functions ${\mathbb{R}}\to L$. The algebra ${\mathcal{C}}^\infty([0,1],L)$ is equipped with a locally convex topology which makes it into a locally convex algebra, and there is a canonical isomorphism $${\mathcal{C}}^\infty([0,1],L)={\mathcal{C}}^\infty([0,1],{\mathbb{C}}){\hat{\otimes}}L$$ Two homomorphisms $f_0,f_1:L\to M$ of locally convex algebras are called [*diffeotopic*]{} if there is a homomorphism $H:L\to {\mathcal{C}}^\infty([0,1],M)$ such that the following diagram commutes $$\xymatrix{&{\mathcal{C}}^\infty([0,1],M)\ar[d]^{({{\rm ev}}_0,{{\rm ev}}_1)}\\ L\ar[ur]^H\ar[r]_{(f_0,f_1)}&M\times M}$$ Consider the exact sequences $$\begin{gathered}
\label{lopecito}
0\to P^{{\rm dif}}L\to {\mathcal{C}}^\infty([0,1],L)\overset{{{\rm ev}}_0}\to L\to 0\\
0\to \Omega^{{\rm dif}}L\to P^{{\rm dif}}L\overset{{{\rm ev}}_1}\to L\to 0\label{gonzalez}\end{gathered}$$ Here $P^{{\rm dif}}L$ and $\Omega^{{\rm dif}}L$ are the kernels of the evaluation maps. The first of these sequences is split by the natural inclusion $L\to {\mathcal{C}}^\infty([0,1],L)$, and the second is split by the continuous linear map sending $l\mapsto (t\mapsto tl)$. We have $$\Omega^{{\rm dif}}L=\Omega^{{\rm dif}}{\mathbb{C}}{\hat{\otimes}}L,\qquad P^{{\rm dif}}L=P^{{\rm dif}}{\mathbb{C}}{\hat{\otimes}}L.$$
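The splitting of the second sequence can be verified directly:

```latex
% s : L -> C^infty([0,1],L),  s(l)(t) = t l,  is continuous and linear
% (though not an algebra homomorphism), and:
(\mathrm{ev}_0 \circ s)(l) = 0 \cdot l = 0,
   % so s takes values in P^{dif}L = ker(ev_0);
(\mathrm{ev}_1 \circ s)(l) = 1 \cdot l = l,
   % so s splits ev_1 : P^{dif}L -> L.
```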
Put $$\begin{gathered}
\GL(L)_0^{\prime\prime}={\mathrm{Im}}(\GL P^{{\rm dif}} L\to \GL (L))\\
KV^{{\rm dif}}_1(L)=\GL(L)/\GL(L)_0^{\prime\prime}.\end{gathered}$$ The following is the analogue of Proposition \[prop:kv1ppties\] for $KV_1^{{\rm dif}}$ (except for nilinvariance, treated separately in Exercise \[exe:nilkvdif\]).
\[prop:kvdppties\]
[i)]{} The functor $KV^{{\rm dif}}_1$ is split exact.
[ii)]{}For each locally convex algebra $L$, there is a natural surjective map $K_1L\to KV^{{\rm dif}}_1L$.
[iii)]{} If $$\label{seq:lmn}
0\to L\to M\to N\to 0$$ is an exact sequence such that the map $\GL(M)^{\prime\prime}_0\to \GL(N)^{\prime\prime}_0$ is onto, then the map $K_1N\to K_0L$ of Theorem \[thm:exci01\] factors through $KV^{{\rm dif}}_1N$, and the resulting sequence $$\xymatrix{KV^{{\rm dif}}_1L\ar[r] &KV^{{\rm dif}}_1M\ar[r]& KV^{{\rm dif}}_1N\ar[d]^{\partial}\\K_0N&K_0M\ar[l]&K_0L\ar[l]}$$ is exact.
[iv)]{}$KV^{{\rm dif}}_1$ is additive, diffeotopy invariant and $M_\infty$-stable.
One checks that, mutatis mutandis, the argument of the proof of \[prop:kv1ppties\] carries over to prove this.
By the same argument as in the algebraic case, we obtain a natural injection $$KV_1^{{\rm dif}}L\hookrightarrow K_0(\Omega^{{\rm dif}}L)$$ Higher $KV^{{\rm dif}}$-groups are defined by $$KV^{{\rm dif}}_n(L)=KV_1^{{\rm dif}}((\Omega^{{\rm dif}})^{n-1}L)\qquad (n\ge 2)$$
\[exe:nilkvdif\]
[i)]{} Show that if $L$ is a locally convex algebra such that $L^n=0$ and such that $L\to L/L^i$ admits a continuous linear splitting for all $i\le n-1$, then $KV^{{\rm dif}}_1L=0$.
[ii)]{} Show that if $L$ is as in i) then the map $KV^{{\rm dif}}_1M\to KV^{{\rm dif}}_1N$ induced by the sequence of \[prop:kvdppties\] iii) is an isomorphism.
Diffeotopy $K$-theory.
----------------------
Consider the excision map $$K_nL\to K_{n-1}(\Omega^{{\rm dif}}L)\qquad(n\le 0)$$ associated to the sequence $0\to \Omega^{{\rm dif}}L\to P^{{\rm dif}}L\to L\to 0$. The [*diffeotopy $K$-theory*]{} of the algebra $L$ is defined by the formula $$KD_nL={\mathop{\mathrm{colim}}}_pK_{-p}((\Omega^{{\rm dif}})^{n+p}L)\qquad (n\in{\mathbb{Z}})$$ It is also possible to express $KD$ in terms of $KV^{{\rm dif}}$. First we observe that, since $\Sigma {\mathbb{C}}$ is a countably dimensional algebra, equipping it with the fine topology makes it into a locally convex algebra [@cw 2.1], and if $L$ is any locally convex algebra then we have $$\Sigma L=\Sigma{\mathbb{C}}\otimes_{\mathbb{C}}L=\Sigma{\mathbb{C}}{\hat{\otimes}}L.$$ Thus $$\Omega^{{\rm dif}}\Sigma L=\Sigma\Omega^{{\rm dif}}L.$$ Taking this into account, and using the same argument as in the algebraic case, one obtains $$KD_nL={\mathop{\mathrm{colim}}}_rKV^{{\rm dif}}_1(\Sigma^{r+1}(\Omega^{{\rm dif}})^{n+r}L)={\mathop{\mathrm{colim}}}_rKV^{{\rm dif}}_{n+r+1}(\Sigma^{r+1}L).$$
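For example, in degree zero the defining colimit reads:

```latex
KD_0(L) = \mathrm{colim}\Big( K_0(L)
   \to K_{-1}(\Omega^{\mathrm{dif}}L)
   \to K_{-2}((\Omega^{\mathrm{dif}})^{2}L) \to \cdots \Big)
% Each arrow is the excision map K_n(M) -> K_{n-1}(Omega^{dif}M) coming from
% the sequence 0 -> Omega^{dif}M -> P^{dif}M -> M -> 0, which is linearly split.
```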
\[prop:kdppties\] Diffeotopy $K$-theory has the following properties.
It is diffeotopy invariant, nilinvariant and $M_\infty$-stable.
It satisfies excision for those exact sequences which admit a continuous linear splitting. That is, if $$\label{seq:lsplit}
0\to L\to M\overset{\pi}\to N\to 0$$ is an exact sequence of locally convex algebras and there exists a continuous linear map $s:N\to M$ such that $\pi s=1_N$, then there is a long exact sequence $$KD_{n+1}M\to KD_{n+1}N\to KD_nL\to KD_nM\to KD_nN\qquad (n\in{\mathbb{Z}}).$$
The proof is essentially the same as that of the corresponding result for $KH$. The splitting hypothesis in ii) guarantees that the functor $L\mapsto \Omega^{{\rm dif}}L=\Omega^{{\rm dif}}{\mathbb{C}}{\hat{\otimes}}L$ and its iterations send the sequence to an exact sequence.
#### Comparing $KV^{{\rm dif}}$ and $KD$.
The analogue of Proposition \[prop:khk0reg\] is \[prop:kdk0reg\] below. It is immediate from Lemma \[lem:plreg\], which is the analogue of Lemma \[lem:pareg\]; the proof of \[lem:plreg\] is essentially the same as that of \[lem:pareg\].
\[lem:plreg\] Let $L$ be a locally convex algebra. Assume that for all $n\le 0$ and all $p\ge 1$, the natural inclusion $\iota_p:L\to L{\hat{\otimes}}\left({\hat{\otimes}}_{i=1}^p{\mathcal{C}}^\infty([0,1])\right)={\mathcal{C}}^\infty([0,1]^p,L)$ induces an isomorphism $K_n(L){\overset{\cong}{\to}}K_n{\mathcal{C}}^\infty([0,1]^p,L)$. Then $KV_1^{{\rm dif}}L\to K_0\Omega^{{\rm dif}}L$ is an isomorphism, and for every $n\le 0$ and every $p\ge 0$, $K_n({\mathcal{C}}^\infty([0,1]^p,P^{{\rm dif}}L))=0$ and $K_n(\Omega^{{\rm dif}}L)\to K_n({\mathcal{C}}^\infty([0,1]^p,\Omega^{{\rm dif}}L))$ is an isomorphism.
\[prop:kdk0reg\] Let $L$ be a locally convex algebra. Assume $L$ satisfies the hypothesis of Lemma \[lem:plreg\]. Then $$KD_nL=\begin{cases} KV^{{\rm dif}}_nL&n\ge 1\\ K_nL&n\le 0\end{cases}$$
Bott periodicity.
-----------------
Next we are going to prove a version of Bott periodicity for $KD$. The proof is analogous to Cuntz’ proof of Bott periodicity for $K^{{\rm top}}$ of $C^*$-algebras, with the algebra of smooth compact operators and the smooth Toeplitz algebra substituted for the $C^*$-algebra of compact operators and the Toeplitz $C^*$-algebra.
#### Smooth compact operators.
The algebra ${{\mathfrak{K}}}$ of [*smooth compact operators*]{} ([@ncp §2],[@cd 1.4]) consists of all those ${\mathbb{N}}\times {\mathbb{N}}$-matrices $(z_{i,j})$ with complex coefficients such that for all $n$, $$\rho_n(z):=\sum_{p,q}p^nq^n|z_{p,q}|<\infty$$ The seminorms $\rho_n$ are submultiplicative, and define a locally convex topology on ${{\mathfrak{K}}}$. Since the topology is defined by submultiplicative seminorms, ${\mathfrak{K}}$ is an $m$-algebra. Further, since there are countably many seminorms, it is Fréchet; summing up, ${\mathfrak{K}}$ is an $m$-Fréchet algebra. We have a map $$e_{11}:{\mathbb{C}}\to {\mathfrak{K}}, z\mapsto e_{11}z$$ Whenever we refer to ${\mathfrak{K}}$-stability below, we shall mean stability with respect to the functor ${\mathfrak{K}}{\hat{\otimes}}-$ and the map $e_{11}$.
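Submultiplicativity of the $\rho_n$ can be checked directly (assuming, as is implicit above, that the indices run over $p,q\ge 1$, so that $r^{2n}\ge 1$ for every summation index $r$):

```latex
\rho_n(zw) = \sum_{p,q} p^n q^n \Big| \sum_r z_{p,r} w_{r,q} \Big|
  \le \sum_{p,r,q} p^n q^n \, |z_{p,r}| \, |w_{r,q}|
  \le \sum_{p,r,q} \big(p^n r^n |z_{p,r}|\big)\big(r^n q^n |w_{r,q}|\big)
  = \rho_n(z)\,\rho_n(w).
% The middle inequality inserts the factor r^{2n} >= 1.
```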
#### Smooth Toeplitz algebra.
The [*smooth Toeplitz algebra*]{} ([@cd 1.5]), is the free $m$-algebra ${\mathcal{T}}^{{\rm sm}}$ on two generators $\alpha$, $\alpha^*$ subject to $\alpha\alpha^*=1$. As in the $C^*$-algebra case, there is a commutative diagram with exact rows and split exact columns $$\xymatrix{0\ar[r]&{\mathfrak{K}}\ar@{=}[d]\ar[r]&{\mathcal{T}}^{{\rm sm}}_0\ar[r]\ar[d]&\Omega^{{\rm dif}}{\mathbb{C}}\ar[r]\ar[d]&0\\
0\ar[r]&{\mathfrak{K}}\ar[r]&{\mathcal{T}}^{{\rm sm}}\ar[r]\ar[d]&{\mathcal{C}}^\infty(S^1,{\mathbb{C}})\ar[r]\ar[d]_{{{\rm ev}}_1}&0\\
&&{\mathbb{C}}\ar@{=}[r]&{\mathbb{C}}&}$$ Here ${\mathcal{T}}^{{\rm sm}}_0$ is defined so that the middle column be exact, and we use the exponential map to identify $\Omega^{{\rm dif}}{\mathbb{C}}$ with the kernel of the evaluation map. Moreover the construction of ${\mathcal{T}}^{{\rm sm}}$ given in [@cd] makes it clear that the rows are exact with a continuous linear splitting, and thus they remain exact after applying $L{\hat{\otimes}}$, where $L$ is any locally convex algebra.
#### Bott periodicity.
The following theorem, due to J. Cuntz, appears in [@cd Satz 6.4], where it is stated for functors on locally convex $m$-algebras. The same proof works for functors of all locally convex algebras.
\[thm:cusm\](Cuntz, [@cd Satz 6.4]) Let $G$ be a functor from locally convex algebras to abelian groups. Assume that
- $G$ is diffeotopy invariant.
- $G$ is ${\mathfrak{K}}$-stable.
- $G$ is split exact.
Then for every locally convex algebra $L$, we have: $$G(L{\hat{\otimes}}{\mathcal{T}}_0^{{\rm sm}})=0$$
\[thm:kdbott\] For every locally convex algebra $L$, there is a natural isomorphism $KD_*(L{\hat{\otimes}}{\mathfrak{K}})\cong KD_{*+2}(L{\hat{\otimes}}{\mathfrak{K}})$.
Consider the exact sequence $$\xymatrix{0\ar[r]&L{\hat{\otimes}}{\mathfrak{K}}\ar[r]&L{\hat{\otimes}}{\mathcal{T}}_0^{{\rm sm}}\ar[r]&\Omega^{{\rm dif}}L\ar[r]&0}$$ This sequence is linearly split by construction (see [@cd 1.5]). This splitting property is clearly preserved if we apply the functor ${\mathfrak{K}}{\hat{\otimes}}$. Hence by Proposition \[prop:kdppties\] ii), we have a natural map $$\begin{gathered}
\label{edgeloop}
KD_{*+1}(L{\hat{\otimes}}{\mathfrak{K}})=KD_*(\Omega^{{\rm dif}}L{\hat{\otimes}}{\mathfrak{K}})\to KD_{*-1}(L{\hat{\otimes}}{\mathfrak{K}}{\hat{\otimes}}{\mathfrak{K}})
\end{gathered}$$ By [@cd Lemma 1.4.1], the map $1{\hat{\otimes}}e_{11}:{\mathfrak{K}}\to {\mathfrak{K}}{\hat{\otimes}}{\mathfrak{K}}$ is diffeotopic to an isomorphism. Since $KD$ is diffeotopy invariant, this shows that $KD_*({\mathfrak{K}}{\hat{\otimes}}-)$ is ${\mathfrak{K}}$-stable. Hence $KD_{*-1}(L{\hat{\otimes}}{\mathfrak{K}}{\hat{\otimes}}{\mathfrak{K}})=KD_{*-1}(L{\hat{\otimes}}{\mathfrak{K}})$, and by Cuntz’ theorem \[thm:cusm\], the map above is an isomorphism.
Cuntz has defined a bivariant topological $K$-theory for locally convex algebras ([@cw]). This theory associates groups $kk_*^{{{\rm lc}}}(L,M)$ to any pair $(L,M)$ of locally convex algebras, and is contravariant in the first variable and covariant in the second. Roughly speaking, $kk_n^{{{\rm lc}}}(L,M)$ is defined as a certain colimit of diffeotopy classes of $m$-fold extensions of $L$ by $M$ $(m\ge n)$. There is also an algebraic version of Cuntz’ theory, $kk_*(A,B)$, which is defined for all pairs of rings $(A,B)$ ([@biva]). We point out that $$\label{kkkd}
kk^{lc}_*({\mathbb{C}},M)=KD_*(M{\hat{\otimes}}{\mathfrak{K}}).$$ Indeed the proof given in [@biva 8.1.2] that for algebraic $kk$, $KH_*(A)=kk_*({\mathbb{Z}},A)$ for all rings $A$, can be adapted to prove this identity; one just needs to observe that, for the algebraic suspension, $kk^{{{\rm lc}}}_*(L,\Sigma M)=kk^{{{\rm lc}}}_{*-1}(L,M)$. Note that, in view of the definition of $KD$, this implies the following “algebraic” formula for $kk^{{{\rm lc}}}$: $$kk^{{{\rm lc}}}_n({\mathbb{C}},L)={\mathop{\mathrm{colim}}}_pK_{-p}(({\Omega^{{\rm dif}}})^{n+p}(L{\hat{\otimes}}{\mathfrak{K}})).$$
Comparison between algebraic and topological $K$-theory II {#sec:compa2}
==========================================================
The diffeotopy invariance theorem.
----------------------------------
Let $H$ be an infinite dimensional separable Hilbert space; write $H\otimes_2H$ for the completed tensor product of Hilbert spaces. Note that any two infinite dimensional separable Hilbert spaces are isomorphic; hence we may regard any operator ideal ${\mathcal{J}}{\triangleleft}{{\mathcal{B}}}(H)$ as a functor from Hilbert spaces to ${\mathbb{C}}$-algebras (see [@hus 3.3]). Let ${\mathcal{J}}{\triangleleft}{{\mathcal{B}}}$ be an ideal.
- ${\mathcal{J}}$ is [*multiplicative*]{} if ${{\mathcal{B}}}{\hat{\otimes}}{{\mathcal{B}}}\to {{\mathcal{B}}}(H\otimes_2 H)$ maps ${\mathcal{J}}{\hat{\otimes}}{\mathcal{J}}$ to ${\mathcal{J}}$.
- ${\mathcal{J}}$ is [*Fréchet*]{} if it is a Fréchet algebra and the inclusion ${\mathcal{J}}\to {{\mathcal{B}}}$ is continuous. A Fréchet ideal is a [*Banach*]{} ideal if it is a Banach algebra.
Write $\omega=(1/n)_n$ for the harmonic sequence.
- ${\mathcal{J}}$ is [*harmonic*]{} if it is a multiplicative Banach ideal such that ${\mathcal{J}}(\ell^2({\mathbb{N}}))$ contains ${\mathrm{diag}}(\omega)$.
Let $p\in {\mathbb{R}}_{>0}$. Write ${\mathcal{L}}_p$ for the ideal of those compact operators whose sequence of singular values is $p$-summable; ${\mathcal{L}}_p$ is called the $p$-[*Schatten ideal*]{}. It is Banach $\iff$ $p\ge 1$, and is harmonic $\iff$ $p>1$. There is no interesting locally convex topology on ${\mathcal{L}}_p$ for $p<1$.
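To illustrate the threshold $p>1$: the singular values of ${\mathrm{diag}}(\omega)$ are exactly the terms of the harmonic sequence, so

```latex
\mathrm{diag}(\omega)\in\mathcal{L}_p
  \iff \sum_{n\ge 1} n^{-p} < \infty
  \iff p>1.
% Hence L_p is harmonic exactly for p>1; L_1 is still a multiplicative
% Banach ideal, but fails to contain diag(omega).
```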
The following theorem, due to J. Cuntz and A. Thom, is the analogue of Higson’s homotopy invariance theorem \[thm:hit1\] in the locally convex algebra context. The formulation we use here is a consequence of [@ct 5.1.2] and [@ct 4.2.1].
\[thm:hit2\]([@ct]) Let ${\mathcal{J}}$ be a harmonic operator ideal, and $G$ a functor from locally convex algebras to abelian groups. Assume that
- $G$ is $M_2$-stable.
- $G$ is split exact.
Then $L\mapsto G(L{\hat{\otimes}}{\mathcal{J}})$ is diffeotopy invariant.
We shall need a variant of \[thm:hit2\] which is valid for all Fréchet ideals ${\mathcal{J}}$. In order to state it, we introduce some notation. Let $\alpha:L\to M$ be a homomorphism of locally convex algebras. We say that $\alpha$ is an [*isomorphism up to square zero*]{} if there exists a continuous linear map $\beta:M{\hat{\otimes}}M\to L$ such that the compositions $\beta\circ(\alpha{\hat{\otimes}}\alpha)$ and $\alpha\circ\beta$ are the multiplication maps of $L$ and $M$, respectively. Note that if $\alpha$ is an isomorphism up to square zero, then its image is an ideal of $M$, and both its kernel and its cokernel are square-zero algebras.
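A basic example, assuming $L\subset M$ carries the induced topology: if $L{\triangleleft}M$ is an ideal such that $M/L$ is a square-zero algebra, then the inclusion $\alpha:L\to M$ is an isomorphism up to square zero.

```latex
% Since (M/L)^2 = 0, every product of two elements of M lies in L; hence
% multiplication defines a continuous linear map
\beta : M \hat{\otimes} M \to L, \qquad \beta(m\otimes m') = m m'.
% Then
\beta\circ(\alpha\hat{\otimes}\alpha)(l\otimes l') = l l',  % multiplication of L
\alpha\circ\beta(m\otimes m') = m m',                       % multiplication of M
% so the pair (alpha, beta) satisfies the definition; here ker(alpha) = 0 and
% coker(alpha) = M/L is square-zero, as the definition predicts.
```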
\[defi:nilinv\] Let $G$ be a functor from locally convex algebras to abelian groups. We call $G$ [*continuously nilinvariant*]{} if it sends isomorphisms up to square zero to isomorphisms.
\[exa:khkosher\] For any $n\in{\mathbb{Z}}$, $KH_n$ is a continuously nilinvariant functor of locally convex algebras. If $n\le 0$, the same is true of $K_n$. In general, if $H_*$ is the restriction to locally convex algebras of any excisive, nilinvariant homology theory of rings, then $H_*$ is continuously nilinvariant.
\[thm:hit3\][@cot 6.1.6] Let ${\mathcal{J}}$ be a Fréchet operator ideal, and $G$ a functor from locally convex algebras to abelian groups. Assume that
- $G$ is $M_2$-stable.
- $G$ is split exact.
- $G$ is continuously nilinvariant.
Then $L\mapsto G(L{\hat{\otimes}}{\mathcal{J}})$ is diffeotopy invariant.
\[exe:diforeg\] Prove:
[i)]{} If $L$ is a locally convex algebra, then $L[t]$ is a locally convex algebra, and there is an isomorphism $L[t]\cong L{\hat{\otimes}}{\mathbb{C}}[t]$ where ${\mathbb{C}}[t]$ is equipped with the fine topology.
[ii)]{} Let $G$ be a diffeotopy invariant functor from locally convex algebras to abelian groups. Prove that $G$ is polynomial homotopy invariant.
The following fact shall be needed below.
\[fact:jcomstable\] Let $G$ be a functor from locally convex algebras to abelian groups, and ${\mathcal{J}}$ a Fréchet ideal. Assume that $G$ is $M_2$-stable and that $F(-):=G(-{\hat{\otimes}}{\mathcal{J}})$ is diffeotopy invariant. Then $F$ is ${\mathfrak{K}}$-stable.
Let $\iota:{\mathbb{C}}\to {\mathfrak{K}}$ be the inclusion; put $\alpha=1_{\mathcal{J}}{\hat{\otimes}}\iota$. We have to show that if $L$ is a locally convex algebra, then $G$ maps $1_L{\hat{\otimes}}\alpha$ to an isomorphism. To do this one constructs a map $\beta:{\mathfrak{K}}{\hat{\otimes}}{\mathcal{J}}\to {\mathcal{J}}$, and shows that $G(1_L{\hat{\otimes}}\beta)$ is inverse to $G(1_L{\hat{\otimes}}\alpha)$. To define $\beta$, proceed as follows. By [@cot 5.1.3], ${\mathcal{J}}\supset{\mathcal{L}}_1$, and the tensor product of operators defines a map $\theta:{\mathcal{L}}_1{\hat{\otimes}}{\mathcal{J}}\to {\mathcal{J}}$. Write $\phi:{\mathfrak{K}}\to{\mathcal{L}}_1$ for the inclusion. Put $\beta=\theta\circ(\phi{\hat{\otimes}}1_{\mathcal{J}})$. The argument of the proof of [@ct 6.1.2] now shows that $G$ sends both $1_L{\hat{\otimes}}\alpha\beta$ and $1_L{\hat{\otimes}}\beta\alpha$ to identity maps.
$KH$ of stable locally convex algebras.
---------------------------------------
Let $L$ be a locally convex algebra. Restriction of functions defines a homomorphism of locally convex algebras $L[t]\to {\mathcal{C}}^{\infty}([0,1],L)$, which sends $\Omega L\to \Omega^{{\rm dif}}L$. Thus we have a natural map $$\label{map:compakhkd}
KH_n(L)={\mathop{\mathrm{colim}}}_pK_{-p}(\Omega^{p+n}L)\to {\mathop{\mathrm{colim}}}_pK_{-p}((\Omega^{{\rm dif}})^{p+n}L)=KD_n(L)$$
\[thm:compakh\][@cot 6.2.1] Let $L$ be a locally convex algebra, ${\mathcal{J}}$ a Fréchet ideal, and $A$ a ${\mathbb{C}}$-algebra. Then
[i)]{} The functors $KH_n(A\otimes_{\mathbb{C}}(-{\hat{\otimes}}{\mathcal{J}}))$ $(n\in{\mathbb{Z}})$ and $K_m(A\otimes_{\mathbb{C}}(-{\hat{\otimes}}{\mathcal{J}}))$ $(m\le 0)$ are diffeotopy invariant.
[ii)]{} $A\otimes_{\mathbb{C}}(L{\hat{\otimes}}{\mathcal{J}})$ is $K_n$-regular $(n\le 0)$.
[iii)]{} The natural map $KH_n(L{\hat{\otimes}}{\mathcal{J}})\to KD_n(L{\hat{\otimes}}{\mathcal{J}})$ is an isomorphism for all $n$. Moreover we have $$KH_n(L{\hat{\otimes}}{\mathcal{J}})=KV_n(L{\hat{\otimes}}{\mathcal{J}})=KV^{{\rm dif}}_n(L{\hat{\otimes}}{\mathcal{J}})\qquad (n\ge 1)$$
Part i) is immediate from \[thm:khppties\], \[exa:khkosher\] and \[thm:hit3\]. It follows from part i) and Exercise \[exe:diforeg\] that $A\otimes_{\mathbb{C}}(L{\hat{\otimes}}{\mathcal{J}})$ is $K_n$-regular for all $n\le 0$, proving ii). From part i) and excision, we get that the two vertical maps in the commutative diagram below are isomorphisms $(n\le 0)$: $$\xymatrix{K_n(L{\hat{\otimes}}{\mathcal{J}})\ar[d]\ar[r]^{1}&K_n(L{\hat{\otimes}}{\mathcal{J}})\ar[d]\\
K_{n-1}(\Omega L{\hat{\otimes}}{\mathcal{J}})\ar[r]&K_{n-1}(\Omega^{{\rm dif}}L{\hat{\otimes}}{\mathcal{J}})}$$ It follows that the map at the bottom is an isomorphism. This proves the first assertion of iii). The identity $KH_n(L{\hat{\otimes}}{\mathcal{J}})=KV_n(L{\hat{\otimes}}{\mathcal{J}})$ $(n\ge 1)$ follows from part i), using Proposition \[prop:khk0reg\] and Remark \[rem:vorst\]. Similarly, part i) together with Proposition \[prop:kdk0reg\] imply that $KD_n(L{\hat{\otimes}}{\mathcal{J}})=KV^{{\rm dif}}_n(L{\hat{\otimes}}{\mathcal{J}})$ $(n\ge 1)$.
\[cor:kdk0k-1\] $$KH_n(A\otimes_{\mathbb{C}}(L{\hat{\otimes}}{\mathcal{J}}))=\begin{cases}K_0(A\otimes_{\mathbb{C}}(L{\hat{\otimes}}{\mathcal{J}}))& n\text{ even. }\\ K_{-1}(A\otimes_{\mathbb{C}}(L{\hat{\otimes}}{\mathcal{J}}))& n \text{ odd.}\end{cases}$$
Put $B=A\otimes_{\mathbb{C}}(L{\hat{\otimes}}{\mathcal{J}})$. By part ii) of Theorem \[thm:compakh\] above and Remark \[rem:vorst\], or directly by the proof of the theorem, we have that $B$ is $K_n$-regular for all $n\le 0$. Thus $KH_n(B)=K_n(B)$ for $n\le 0$, by Proposition \[prop:khk0reg\]. To finish, we must show that $KH_n(B)$ is $2$-periodic. By \[fact:jcomstable\], $KH_*(A\otimes_{\mathbb{C}}(-{\hat{\otimes}}{\mathcal{J}}))$ is ${\mathfrak{K}}$-stable. Thus $KH_*(A\otimes_{\mathbb{C}}(L{\hat{\otimes}}{\mathcal{T}}_0^{{\rm sm}}{\hat{\otimes}}{\mathcal{J}}))=0$, by Theorem \[thm:cusm\]. Whence $KH_{*+1}(B)=KH_{*-1}(B)$, by excision and diffeotopy invariance.
If ${\mathcal{J}}$ is a Fréchet operator ideal, then by \[k0j\], \[k-1j\] and Corollary \[cor:kdk0k-1\], we get: $$KH_n({\mathcal{J}})=\left\{\begin{array}{cc}{\mathbb{Z}}&n\text{ even.}\\ 0&n\text{ odd.} \end{array}\right.$$ This formula is valid more generally for “subharmonic” ideals (see [@cot 6.5.1] for the definition of this term, and [@cot 7.2.1] for the statement). For example, the Schatten ideals ${\mathcal{L}}_p$ are subharmonic for all $p>0$, but are Fréchet only for $p\ge 1$.
$K$-theory spectra {#sec:spectra}
==================
In this section we introduce spectra for Quillen’s and other $K$-theories. For a quick introduction to spectra, see [@chubu 10.9].
Quillen’s $K$-theory spectrum.
------------------------------
Let $R$ be a unital ring. Since the loopspace depends only on the connected component of the base point, applying the equivalence of Proposition \[prop:conseq\] to $\Sigma R$ induces an equivalence $$\label{map:omegequi}
\Omega K(\Sigma R){\overset{\sim}{\to}}\Omega^2K(\Sigma ^2R)$$ Moreover, by \[subsec:functissues\], this map is natural. Put $${}_n{\mathbb{K}}R:=\Omega K(\Sigma^{n+1}R).$$ The equivalence applied to $\Sigma^nR$ yields an equivalence $${}_n{\mathbb{K}}R{\overset{\sim}{\to}}\Omega{}(_{n+1}{\mathbb{K}}R).$$ The sequence ${\mathbb{K}}R=\{{}_n{\mathbb{K}}R\}$ together with the homotopy equivalences above constitutes a spectrum (in the notation of [@chubu 10.9], [*$\Omega$-spectrum*]{} in that of [@swi Ch. 8]), the $K$-theory spectrum; the equivalences are the [*bonding maps*]{} of the spectrum. The $n$-th (stable) homotopy group of ${\mathbb{K}}R$ is $$\pi_n{\mathbb{K}}R={\mathop{\mathrm{colim}}}_p\pi_{n+p}({}_p{\mathbb{K}}R)=K_nR\qquad (n\in{\mathbb{Z}}).$$ Because its negative homotopy groups are in general nonzero, we say that the spectrum ${\mathbb{K}}R$ is [*nonconnective*]{}. Recall that the homotopy category of spectra ${\mathrm{Ho}}Spt$ is triangulated, and, in particular, additive. In the first part of the proposition below, we show that ${\mathfrak{Ass}}_1\to{\mathrm{Ho}}Spt$, $R\mapsto {\mathbb{K}}R$ is an additive functor. Thus we can extend the functor ${\mathbb{K}}$ to all (not necessarily unital) rings, by $$\label{ksnunital}
{\mathbb{K}}A:={\mathop{\mathrm{hofiber}}}({\mathbb{K}}(\tilde{A})\to {\mathbb{K}}({\mathbb{Z}}))$$
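The identification $\pi_n{\mathbb{K}}R=K_nR$ stated above unwinds as follows (using $\pi_{m}\Omega X=\pi_{m+1}X$ and the suspension identity $K_{m+1}(\Sigma R)=K_m(R)$):

```latex
\pi_n \mathbb{K}R
  = \mathrm{colim}_p\, \pi_{n+p}\,\Omega K(\Sigma^{p+1}R)
  = \mathrm{colim}_p\, \pi_{n+p+1} K(\Sigma^{p+1}R)
  = \mathrm{colim}_p\, K_{n+p+1}(\Sigma^{p+1}R)
  = K_n(R).
% For n <= 0 this recovers negative K-theory: K_n(R) = K_0(Sigma^{-n}R).
```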
\[prop:ksppties\]
[i)]{} The functor ${\mathbb{K}}:{\mathfrak{Ass}}_1\to {\mathrm{Ho}}Spt$ is additive.
[ii)]{} The functor ${\mathbb{K}}:{\mathfrak{Ass}}\to {\mathrm{Ho}}Spt$ defined above is $M_\infty$-stable on unital rings.
It follows from \[prop:hikay\] i) and iii).
If $A{\triangleleft}B$ is an ideal, we define the relative $K$-theory spectrum by $${\mathbb{K}}(B:A)={\mathop{\mathrm{hofiber}}}({\mathbb{K}}(B)\to {\mathbb{K}}(B/A)).$$
\[prop:exciexci\] Every short exact sequence of rings with $A$ $K$-excisive, gives rise to a distinguished triangle $${\mathbb{K}}A\to {\mathbb{K}}B\to {\mathbb{K}}C\to \Omega^{-1}{\mathbb{K}}A$$
Immediate from Theorem \[thm:exito\].
$KV$-theory spaces.
-------------------
Let $A$ be a ring. Consider the simplicial ring $${\Delta}A:[n]\mapsto A\otimes{\mathbb{Z}}[t_0,\dots,t_n]/\langle1-(t_0+\dots+t_n)\rangle.$$ It is useful to think of elements of ${\Delta}_nA$ as formal polynomial functions on the algebraic $n$-simplex $\{(x_0,\dots,x_n)\in {\mathbb{Z}}^{n+1}:\sum x_i=1\}$ with values in $A$. Face and degeneracy maps are given by $$\begin{gathered}
\label{formucaradege}
d_i(f)(t_0,\dots,t_{n-1})=f(t_0,\dots,t_{i-1},0,t_i,\dots,t_{n})\\
s_j(f)(t_0,\dots,t_{n+1})=f(t_0,\dots,t_{j-1},t_j+t_{j+1},\dots,t_{n+1}).\nonumber\end{gathered}$$ Here $f\in {\Delta}_nA$, $0\le i\le n$, and $0\le j\le n$. In the next proposition and below, we shall use the geometric realization of a simplicial space; see [@GM I.3.2 (b)] for its definition. We shall also be concerned with simplicial groups; see [@chubu Ch.8] for a brief introduction to the latter. The following proposition and the next are taken from D.W. Anderson’s paper [@anderson].
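As an illustration, in degree one we may identify $\Delta_1A\cong A[t]$ via $t=t_1$, $t_0=1-t$; the two face maps $\Delta_1A\rightrightarrows\Delta_0A=A$ are then the evaluations at the endpoints:

```latex
% For f(t_0,t_1) = g(t), with g in A[t] and t = t_1:
d_0(f) = f(0,t_0)\big|_{t_0=1} = g(1) = \mathrm{ev}_1(g),
d_1(f) = f(t_0,0)\big|_{t_0=1} = g(0) = \mathrm{ev}_0(g).
% These are the evaluation maps whose coequalizer computes pi_0 GL(Delta A).
```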
([@anderson 1.7]) Let $A$ be a ring and $n\ge 1$. Then $KV_nA=\pi_{n-1}\GL\Delta A=\pi_n|B\GL{\Delta}A|$.
The second identity follows from the fact that if $G$ is a simplicial group, then $\Omega B G{\overset{\sim}{\to}}G$ [@bf] and the fact that, for a levelwise connected simplicial space $X$, one has $\Omega |X|{\overset{\sim}{\to}}|\Omega X|$. To prove the first identity, proceed by induction on $n$. Write $\sim$ for the polynomial homotopy relation in $\GL A$ and ${\mathrm{coeq}}$ for the coequalizer of two maps. The case $n=1$ is $$\begin{aligned}
\pi_0\GL(\Delta A)=&{\mathrm{coeq}}(\GL\Delta_1A\overset{{{\rm ev}}_0}{\underset{{{\rm ev}}_1}{\rightrightarrows}}\GL A)\\
=&\GL A/\sim\\
=&\GL A/\GL(A)'_0=KV_1A.\end{aligned}$$ For the inductive step, proceed as follows. Consider the exact sequence of rings $$0\to \Omega A\to PA\to A\to 0$$ Using that $\GL(-)'_0={\mathrm{Im}}(\GL P(-)\to \GL(-))$ and that $P{\Delta}={\Delta}P$ and $\Omega{\Delta}={\Delta}\Omega$, we obtain exact sequences of simplicial groups
$$\begin{gathered}
\xymatrix{1\ar[r]&\GL{\Delta}\Omega A\ar[r]&\GL{\Delta}PA\ar[r]&\GL({\Delta}A)'_0\ar[r]&1}\label{seq:decapi1}\\
\xymatrix{1\ar[r]&\GL({\Delta}A)'_0\ar[r]&\GL{\Delta}A\ar[r]&KV_1({\Delta}A)\ar[r]&1}\label{seq:decapi2}\end{gathered}$$
Since $KV_1$ is homotopy invariant (by \[prop:kv1ppties\]), we have $\pi_0KV_1{\Delta}A=KV_1A$ and $\pi_nKV_1{\Delta}A=0$ for $n>0$. It follows from that $$\label{pigl0}
\pi_n\GL({\Delta}A)'_0=\begin{cases} 0&n=0\\ \pi_n\GL({\Delta}A)& n\ge 1\end{cases}$$ Next, observe that there is a split exact sequence $$1\to \GL {\Delta}PA\to \GL {\Delta}A[x]\to \GL{\Delta}A\to 1$$ Here, the surjective map and its splitting are respectively $\GL d_0$ and $\GL s_0$. One checks that the maps $$\begin{gathered}
h_i:{\Delta}_n{\Delta}_1 A\to {\Delta}_{n+1}A,\\ h_i(f)(t_0,\dots,t_n,x)=f(t_0,\dots,t_i+t_{i+1},\dots, t_n,(t_{i+1}+\dots +t_n)x)\end{gathered}$$ $0\le i\le n$ form a simplicial homotopy between the identity and the map ${\Delta}_n(s_0d_0)$. Thus $\GL d_0$ is a homotopy equivalence, whence $\pi_*\GL{\Delta}PA=0$. Putting this together with and using the homotopy exact sequence of , we get $$\pi_n\GL{\Delta}\Omega A=\pi_{n+1}\GL{\Delta}A \qquad(n\ge 0).$$ The inductive step is immediate from this.
Let $L$ be a locally convex algebra. Consider the [*geometric $n$-simplex*]{} $${\mathbb{A}}^n:=\{(x_0,\dots,x_n)\in{\mathbb{R}}^{n+1}:\sum x_i =1 \}\supset
\Delta^n:=\{x\in {\mathbb{A}}^n:x_i\ge 0 \ (0\le i\le n)\}.$$ We write $${\Delta^{{\rm dif}}}_n L:={\mathcal{C}}^\infty(\Delta^n,L).$$ Here, ${\mathcal{C}}^\infty(\Delta^n, -)$ denotes the locally convex vector space of all those functions on $\Delta^n$ which are restrictions of ${\mathcal{C}}^\infty$-functions on ${\mathbb{A}}^n$. The cosimplicial structure on $[n]\mapsto\Delta^n$ induces a simplicial one on ${\Delta^{{\rm dif}}}L$. In particular, ${\Delta^{{\rm dif}}}L$ is a simplicial locally convex algebra, and $\GL({\Delta^{{\rm dif}}}L)$ is a simplicial group.
[i)]{} Prove that $KV_n^{{\rm dif}}L=\pi_{n-1}\GL({\Delta^{{\rm dif}}}L)$ $(n\ge
1)$.
[ii)]{} Let $A$ be a Banach algebra. Consider the simplicial Banach algebra $\Delta_*^{{\rm top}}A={\mathcal{C}}(\Delta^*,A)$ and the simplicial group $\GL(\Delta^{{\rm top}}A)$. Prove that $K^{{\rm top}}_nA=\pi_{n-1}\GL(\Delta^{{\rm top}}A)$ $(n\ge 1)$.
\[prop:ander\]([@anderson 2.3]) Let $R$ be a unital ring. Then the map $|B\GL \Delta R|\to |K\Delta R|$ is an equivalence.
If $A$ is a ring and $n\ge 1$, then $$KV_nA=\pi_n|K(\Delta \tilde{A}:\Delta A)|\qed$$
The argument of the proof of Proposition \[prop:ander\] in [@anderson] applies verbatim to the ${\mathcal{C}}^\infty$ case, showing that if $T$ is a unital locally convex algebra, then $$|B\GL
{\Delta^{{\rm dif}}}T|{\overset{\sim}{\to}}|K{\Delta^{{\rm dif}}}T|.$$ It follows that if $L$ is any, not necessarily unital locally convex algebra and $\tilde{L}_{\mathbb{C}}=L\oplus {\mathbb{C}}$ is its unitalization, then $$KV^{{\rm dif}}_nL=\pi_n|K({\Delta^{{\rm dif}}}\tilde{L}_{\mathbb{C}}:{\Delta^{{\rm dif}}}L)|$$ The analogous formulas for the topological $K$-theory of Banach algebras are also true and can be derived in the same manner.
The homotopy $K$-theory spectrum.
---------------------------------
Let $R$ be a unital ring. Consider the simplicial spectrum ${\mathbb{K}}{\Delta}R$. Put $${\mathbb{KH}}(R)=|{\mathbb{K}}\Delta R|$$ One checks that ${\mathbb{KH}}:{\mathfrak{Ass}}_1\to{\mathrm{Ho}}Spt$ is additive. Thus ${\mathbb{KH}}$ extends to arbitrary rings by $${\mathbb{KH}}(A)={\mathop{\mathrm{hofiber}}}({\mathbb{KH}}\tilde{A}\to{\mathbb{KH}}{\mathbb{Z}})=|{\mathbb{K}}(\Delta \tilde{A}:\Delta A)|$$
\[rem:kh\_berreta\] If $A$ is any, not necessarily unital ring, one can also consider the spectrum $|{\mathbb{K}}{\Delta}A|$; the map $$\label{map:berrekh1}
{\mathbb{K}}{\Delta}A={\mathbb{K}}(\widetilde{{\Delta}A}:{\Delta}A)\to {\mathbb{K}}({\Delta}\tilde{A}:{\Delta}A)$$ induces $$\label{map:berrekh2}
|{\mathbb{K}}{\Delta}A|\to {\mathbb{KH}}A.$$ If $A$ happens to be unital, then the first of these maps is an equivalence, whence so is the second. Further, we shall show below that the latter is in fact an equivalence for all ${\mathbb{Q}}$-algebras $A$.
Let $A$ be a ring, and $n\in {\mathbb{Z}}$. Then $KH_n(A)=\pi_n{\mathbb{KH}}(A)$.
It is immediate from the definition of the spectrum ${\mathbb{KH}}A$ given above that $$\pi_*{\mathbb{KH}}(A)=\ker(\pi_*{\mathbb{KH}}\tilde{A}\to\pi_*{\mathbb{KH}}{\mathbb{Z}})$$ Since a similar formula holds for $KH_*$, it suffices to prove the proposition for unital rings. Let $R$ be a unital ring. By definition, the spectrum ${\mathbb{KH}}(R)$ is the spectrification of the pre-spectrum whose $p$-th space is $|\Omega K\Delta
\Sigma^{p+1}R|$. Thus $$\begin{aligned}
\pi_n{\mathbb{KH}}(R)=&{\mathop{\mathrm{colim}}}_p\pi_{n+p}|\Omega K\Delta \Sigma^{p+1}
R|={\mathop{\mathrm{colim}}}_p\pi_{n+p}\Omega| K\Delta \Sigma^{p+1} R|\\
=&{\mathop{\mathrm{colim}}}_p\pi_{n+p+1}|K\Delta \Sigma^{p+1}
R|={\mathop{\mathrm{colim}}}_pKV_{n+p}\Sigma^pR=KH_nR.\ \ \qed\end{aligned}$$
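An instructive aside (not in the original text): by Weibel's homotopy invariance theorem, $K$-theory and homotopy $K$-theory agree on regular rings. Indeed, if $R$ is unital regular noetherian, the fundamental theorem of algebraic $K$-theory gives isomorphisms $$K_n(R){\overset{\sim}{\to}}K_n(R[t_1,\dots,t_p])\cong K_n({\Delta}_pR)\qquad (n\in{\mathbb{Z}},\ p\ge 0),$$ so ${\mathbb{K}}R\to|{\mathbb{K}}{\Delta}R|={\mathbb{KH}}R$ is an equivalence and $KH_n(R)=K_n(R)$ for all $n$.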
\[exer:kd\] Let $L$ be a locally convex algebra. Put $${\mathbb{KD}}L=|{\mathbb{K}}({\Delta^{{\rm dif}}}\tilde{L}_{\mathbb{C}}: {\Delta^{{\rm dif}}}L)|.$$
[i)]{} Show that $\pi_n{\mathbb{KD}}L=KD_n L$ ($n\in{\mathbb{Z}}$).
[ii)]{} Construct a natural map $${\mathbb{K}}{\Delta^{{\rm dif}}}L\to {\mathbb{KD}}L$$ and show it is an equivalence for unital $L$.
Primary and secondary Chern characters {#sec:chern}
======================================
In this section, and for the rest of the paper, all rings considered will be ${\mathbb{Q}}$-algebras.
Cyclic homology.
----------------
The different variants of cyclic homology of an algebra $A$ are related by an exact sequence, Connes’ $SBI$ sequence $$\label{seq:sbi}
\xymatrix{HP_{n+1}A\ar[r]^S& HC_{n-1}A\ar[r]^B& HN_nA\ar[r]^I&HP_nA\ar[r]^S& HC_{n-2}A}$$ Here $HC$, $HN$ and $HP$ are respectively cyclic, negative cyclic and periodic cyclic homology. The sequence comes from an exact sequence of complexes of ${\mathbb{Q}}$-vectorspaces. The complex for cyclic homology is Connes’ complex $C^\lambda A$, whose definition we shall recall presently; see [@lod 5.1] for the negative cyclic and periodic cyclic complexes. The complex $C^\lambda A$ is a nonnegatively graded chain complex, given in dimension $n$ by the coinvariants $$\label{clambda}
C^\lambda_nA:=(A^{\otimes n+1})_{{\mathbb{Z}}/(n+1){\mathbb{Z}}}$$ of the tensor power –taken over ${\mathbb{Z}}$, or, what is the same, over ${\mathbb{Q}}$– under the action of ${\mathbb{Z}}/(n+1){\mathbb{Z}}$ defined by the signed cyclic permutation $$\lambda(a_0\otimes\dots\otimes a_n)=(-1)^na_n\otimes a_0\otimes\dots\otimes a_{n-1}.$$ The boundary map $b:C_n^\lambda A\to C_{n-1}^\lambda A$ is induced by $$\begin{gathered}
b:A^{\otimes n+1}\to A^{\otimes n},\quad b(a_0\otimes\dots\otimes a_n)=\sum_{i=0}^{n-1}(-1)^ia_0\otimes\dots\otimes a_ia_{i+1}
\otimes\dots\otimes a_n\\+(-1)^na_na_0\otimes\dots\otimes a_{n-1}\end{gathered}$$
\[exa:hc0\] The map $C^\lambda_1(A)\to C_0^\lambda(A)$ sends the class of $a\otimes b$ to $[a,b]:=ab-ba$. Hence $$HC_0A=A/[A,A].$$
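A quick consistency check, spelled out here for convenience: the boundary $b$ is indeed well defined on the coinvariants in degree one, since $$b(\lambda(a_0\otimes a_1))=b(-a_1\otimes a_0)=-(a_1a_0-a_0a_1)=b(a_0\otimes a_1),$$ and its image in $C^\lambda_0A=A$ is spanned by the commutators, which gives the displayed computation of $HC_0A$.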
By definition, $HC_nA=0$ if $n<0$. Also by definition, $HP$ is periodic of period $2$.
The following theorem subsumes the main properties of $HP$.
\[thm:hp\_ppties\]
[i)]{} (Goodwillie, [@goo]; see also [@cq2]) The functor $HP_*:{\mathbb{Q}}-{\mathfrak{Ass}}\to {\mathfrak{Ab}}$ is homotopy invariant and nilinvariant.
[ii)]{}(Cuntz-Quillen, [@cq]) $HP$ satisfies excision for ${\mathbb{Q}}$-algebras; to each exact sequence of ${\mathbb{Q}}$-algebras, there corresponds a $6$-term exact sequence $$\label{seq:exciper}
\xymatrix{HP_0A\ar[r]&HP_0B\ar[r]&HP_0C\ar[d]\\ HP_1C\ar[u]&HP_1B\ar[l]&HP_1A\ar[l]}$$
The sequence \[seq:sbi\] comes from an exact sequence of complexes, and thus, via the Dold-Kan correspondence, it corresponds to a homotopy fibration of spectra $$\label{seq:spt_sbi}
\Omega^{-1}{\mathbb{HC}}A\to{\mathbb{HN}}A\to{\mathbb{HP}}A$$ Similarly, the excision sequence \[seq:exciper\] comes from a cofibration sequence in the category of pro-supercomplexes [@cv]; applying the Dold-Kan functor and taking homotopy limits yields a homotopy fibration of Bott-periodic spectra $$\label{seq:sptexciper}
{\mathbb{HP}}A\to{\mathbb{HP}}B\to{\mathbb{HP}}C$$ The sequence \[seq:exciper\] is recovered from \[seq:sptexciper\] after taking homotopy groups.
Primary Chern character and infinitesimal $K$-theory.
-----------------------------------------------------
The main or primary character is a map going from $K$-theory to negative cyclic homology $$c_n:K_nA\to HN_nA\qquad (n\in{\mathbb{Z}}).$$ (See [@lod Ch. 8, Ch. 11] for its definition). This group homomorphism is induced by a map of spectra $${\mathbb{K}}A\to {\mathbb{HN}}A$$ Put ${\mathbb{K}}^{{\rm inf}}A:={\mathop{\mathrm{hofiber}}}({\mathbb{K}}A\to {\mathbb{HN}}A)$ for its fiber; we call $K_*^{{\rm inf}}A$ the [*infinitesimal $K$-theory*]{} of $A$. Thus, by definition, $$\label{basicfib}
{\mathbb{K}}^{{\rm inf}}A\to {\mathbb{K}}A\to {\mathbb{HN}}A$$ is a homotopy fibration. The main properties of $K^{{\rm inf}}$ are subsumed in the following theorem.
\[thm:ppties\_kinf\]
[i)]{} (Goodwillie, [@goo1]) The functor $K_n^{{\rm inf}}:{\mathbb{Q}}-{\mathfrak{Ass}}\to {\mathfrak{Ab}}$ is nilinvariant $(n\in{\mathbb{Z}})$.
[ii)]{}([@kabi]) $K^{{\rm inf}}$ satisfies excision for ${\mathbb{Q}}$-algebras. Thus to every exact sequence of ${\mathbb{Q}}$-algebras there corresponds a triangle $${\mathbb{K}}^{{\rm inf}}A\to {\mathbb{K}}^{{\rm inf}}B\to {\mathbb{K}}^{{\rm inf}}C\to\Omega^{-1}{\mathbb{K}}^{{\rm inf}}A$$ in ${\mathrm{Ho}}(Spt)$ and therefore an exact sequence $$K^{{\rm inf}}_{n+1}C\to K^{{\rm inf}}_nA\to K^{{\rm inf}}_nB\to K^{{\rm inf}}_nC\to K^{{\rm inf}}_{n-1}A\ \ \qed$$
Secondary Chern characters.
---------------------------
Starting with the fibration sequence \[basicfib\], one builds up a commutative diagram with homotopy fibration rows and columns $$\label{fundechar}
\xymatrix{{\mathbb{K}}^{{\rm inf},{\rm nil}}A\ar[d]\ar[r]&{\mathbb{K}}^{{\rm inf}}A\ar[r]\ar[d]&|{\mathbb{K}}^{{\rm inf}}{\Delta}A|\ar[d]\\
{\mathbb{K}}^{{\rm nil}}A\ar[r]\ar[d] & {\mathbb{K}}A\ar[d]_{c}\ar[r]&|{\mathbb{K}}{\Delta}A|\ar[d]_{c{\Delta}}\\
{\mathbb{HN}}^{{\rm nil}}A\ar[r]&{\mathbb{HN}}A\ar[r]&|{\mathbb{HN}}{\Delta}A|.}$$ The middle column is \[basicfib\]; that on the right is \[basicfib\] applied to ${\Delta}A$; the horizontal map of homotopy fibrations from middle to right is induced by the inclusion $A\to {\Delta}A$, and its fiber is the column on the left.
\[lem:vanisimpli\]([@cot 2.1.1]) Let $A$ be a simplicial algebra; write $\pi_*A$ for its homotopy groups. Assume $\pi_nA=0$ for all $n$. Then ${\mathbb{HC}}A{\overset{\sim}{\to}}0$ and ${\mathbb{HN}}A{\overset{\sim}{\to}}{\mathbb{HP}}A$.
\[prop:identiboto\] Let $A$ be a ${\mathbb{Q}}$-algebra. Then there is a weak equivalence of fibration sequences $$\xymatrix{{\mathbb{HN}}^{{\rm nil}}A\ar[r]\ar[d]^\wr &{\mathbb{HN}}A\ar[r]\ar[d]^\wr &{\mathbb{HN}}{\Delta}A\ar[d]^\wr \\ \Omega^{-1}{\mathbb{HC}}A\ar[r]&{\mathbb{HN}}A\ar[r]& {\mathbb{HP}}A}$$
By Lemma \[lem:vanisimpli\] and Theorem \[thm:hp\_ppties\], we have equivalences $$\xymatrix{{\mathbb{HN}}{\Delta}A\ar[r]^{\sim}& {\mathbb{HP}}{\Delta}A & {\mathbb{HP}}A\ar[l]_\sim}$$ The proposition is immediate from this.
\[prop:berrebuena\] If $A$ is a ${\mathbb{Q}}$-algebra, then the natural map $|{\mathbb{K}}{\Delta}A|\to {\mathbb{KH}}A$ of \[map:berrekh2\] above is an equivalence.
We already know that the map \[map:berrekh2\] is an equivalence for unital algebras. Thus since ${\mathbb{KH}}$ is excisive, it suffices to show that ${\mathbb{K}}{\Delta}(-)$ is excisive. Using Proposition \[prop:identiboto\] and diagram \[fundechar\], we obtain a homotopy fibration $${\mathbb{K}}^{{\rm inf}}{\Delta}A\to {\mathbb{K}}{\Delta}A\to {\mathbb{HP}}A$$ Note that ${\mathbb{HP}}$ is excisive by Cuntz-Quillen’s theorem \[thm:hp\_ppties\] ii). Moreover, ${\mathbb{K}}^{{\rm inf}}{\Delta}(-)$ is also excisive, because $K^{{\rm inf}}$ is excisive (\[thm:ppties\_kinf\] ii)), and because ${\Delta}(-)$ preserves exact sequences and $|-|$ preserves fibration sequences. It follows that ${\mathbb{K}}{\Delta}(-)$ is excisive; this completes the proof.
In view of Propositions \[prop:identiboto\] and \[prop:berrebuena\], we may replace diagram \[fundechar\] by a homotopy equivalent diagram $$\label{fundechar2}
\xymatrix{{\mathbb{K}}^{{\rm inf},{\rm nil}}A\ar[d]\ar[r]&{\mathbb{K}}^{{\rm inf}}A\ar[r]\ar[d]&|{\mathbb{K}}^{{\rm inf}}{\Delta}A|\ar[d]\\
{\mathbb{K}}^{{\rm nil}}A\ar[r]\ar[d]_\nu & {\mathbb{K}}A\ar[d]_{c}\ar[r]&{\mathbb{KH}}A\ar[d]_{ch}\\
\Omega^{-1}{\mathbb{HC}}A\ar[r]&{\mathbb{HN}}A\ar[r]&{\mathbb{HP}}A.}$$ The induced maps $\nu_*:K^{{\rm nil}}A\to HC_{*-1}A$ and $ch_*:KH_*A\to HP_*A$ are the [*secondary*]{} and the [*homotopy*]{} Chern characters. By definition, they fit together with the primary character $c_*$ into a commutative diagram with exact rows $$\label{htpyfundechar}
\xymatrix{ KH_{n+1}A\ar[r]\ar[d]_{ch_{n+1}}& K^{{\rm nil}}_nA\ar[r]\ar[d]_{\nu_n}&K_n A\ar[r]\ar[d]_{c_n}& KH_nA
\ar[d]_{ch_n}\ar[r]& K^{{\rm nil}}_{n-1}A\ar[d]_{\nu_{n-1}}\\
HP_{n+1}A\ar[r]_S&HC_{n-1}A\ar[r]_B&HN_nA\ar[r]_I&HP_nA\ar[r]_S&HC_{n-2}A.}$$
The construction of secondary characters given above goes back to Weibel’s paper [@wenil], where a diagram similar to \[htpyfundechar\], involving Karoubi-Villamayor $K$-theory $KV$ instead of $KH$ (which had not yet been invented), appeared (see also [@karmult]). For $K_0$-regular algebras and $n\ge 1$, the latter diagram is equivalent to \[htpyfundechar\].
Recall that, according to the notation of Section \[sec:polikv\], an algebra is $K^{{\rm inf}}_n$-regular if $K^{{\rm inf}}_nA\to K^{{\rm inf}}_n{\Delta}_p A$ is an isomorphism for all $p\ge 0$. We say that $A$ is [*$K^{{\rm inf}}$-regular*]{} if it is $K_n^{{\rm inf}}$-regular for all $n$.
\[prop:kinfreg\] Let $A$ be a ${\mathbb{Q}}$-algebra. If $A$ is $K^{{\rm inf}}$-regular, then the secondary character $\nu_*:K^{{\rm nil}}_*A\to HC_{*-1}A$ is an isomorphism.
The hypothesis implies that the map ${\mathbb{K}}^{{\rm inf}} A\to {\mathbb{K}}^{{\rm inf}}{\Delta}_n A$ is a weak equivalence $(n\ge 0)$. Thus, viewing ${\mathbb{K}}^{{\rm inf}} A$ as a constant simplicial spectrum and taking realizations, we obtain an equivalence ${\mathbb{K}}^{{\rm inf}} A{\overset{\sim}{\to}}|{\mathbb{K}}^{{\rm inf}}{\Delta}A|$. Hence ${\mathbb{K}}^{{\rm inf},{\rm nil}}A{\overset{\sim}{\to}}0$ and therefore $\nu$ is an equivalence.
The notion of $K^{{\rm inf}}$-regularity of ${\mathbb{Q}}$-algebras was introduced in [@cot §3], where some examples are given and some basic properties are proved; we recall some of them. First of all, for $n\le -1$, $K_n^{{\rm inf}}$-regularity is the same as $K_n$-regularity. A $K_0^{{\rm inf}}$-regular algebra is $K_0$-regular, but not conversely. If $R$ is unital and $K^{{\rm inf}}_1$-regular, then the two-sided ideal $\langle[R,R]\rangle$ generated by the additive commutators $[r,s]=rs-sr$ is the whole ring $R$. In particular, no nonzero unital commutative ring is $K_1^{{\rm inf}}$-regular. Both infinite sum and nilpotent algebras are $K^{{\rm inf}}$-regular. If $0\to A\to B\to C\to 0$ is an exact sequence of ${\mathbb{Q}}$-algebras such that any two of $A$, $B$, $C$ are $K^{{\rm inf}}$-regular, then so is the third.
We shall see in \[thm:laseq\] that any stable locally convex algebra is $K^{{\rm inf}}$-regular.
Application to $KD$.
--------------------
Let $L$ be a locally convex algebra. Then the natural map ${\mathbb{K}}{\Delta^{{\rm dif}}}L\to {\mathbb{KD}}L$ of \[exer:kd\] ii) is an equivalence.
By Exercise \[exer:kd\] ii), the proposition is true for unital $L$. Thus it suffices to show that ${\mathbb{K}}{\Delta^{{\rm dif}}}(-)$ satisfies excision for those exact sequences which admit a continuous linear splitting. Applying the sequence \[basicfib\] to ${\Delta^{{\rm dif}}}L$ and taking realizations yields a fibration sequence $$|{\mathbb{K}}^{{\rm inf}}{\Delta^{{\rm dif}}}L|\to |{\mathbb{K}}{\Delta^{{\rm dif}}}L|\to |{\mathbb{HN}}{\Delta^{{\rm dif}}}L|$$ One checks that $\pi_*{\Delta^{{\rm dif}}}L=0$ (see [@cot 4.1.1]). Hence the map $I:{\mathbb{HN}}{\Delta^{{\rm dif}}}L\to {\mathbb{HP}}{\Delta^{{\rm dif}}}L$ is an equivalence, by Lemma \[lem:vanisimpli\]. Now proceed as in the proof of Proposition \[prop:berrebuena\], taking into account that ${\Delta^{{\rm dif}}}(-)$ preserves exact sequences with continuous linear splitting.
\[cor:regdif\] Assume that the map $K_nL\to K_n{\Delta^{{\rm dif}}}_pL$ is an isomorphism for all $n\in {\mathbb{Z}}$ and all $p\ge 0$. Then ${\mathbb{K}}L\to {\mathbb{KD}}L$ is an equivalence.
Analogous to the first part of the proof of Proposition \[prop:kinfreg\].
Comparison between algebraic and topological $K$-theory III {#sec:karconfre}
===========================================================
Stable Fréchet algebras.
------------------------
The following is the general version of theorem \[thm:karconbau\], also due to Wodzicki.
\[thm:karconfre\]([@wod Thm. 2],[@cot 8.3.3, 8.3.4]) Let $L$ be an $m$-Fréchet algebra with uniformly bounded left or right approximate unit. Then there is a natural isomorphism: $$K_n(L {\hat{\otimes}}{\mathcal{K}}) \stackrel{\sim}{\to} KD_n(L {\hat{\otimes}}{\mathcal{K}}), \quad \forall n \in {\mathbb{Z}}.$$
Write ${\mathfrak{C}}$ for the full subcategory of those locally convex algebras which are $m$-Fréchet algebras with left ubau. In view of Corollary \[cor:regdif\], it suffices to show that for all $n\in{\mathbb{Z}}$ and $p\ge 0$, the map $$\label{toprove}
K_n(L{\hat{\otimes}}{\mathcal{K}})\to K_n({\Delta^{{\rm dif}}}_p L{\hat{\otimes}}{\mathcal{K}})$$ is an isomorphism for each $L\in{\mathfrak{C}}$. Note that, since ${\Delta^{{\rm dif}}}_p{\mathbb{C}}$ is a unital $m$-Fréchet algebra and its unit is uniformly bounded, the functor ${\Delta^{{\rm dif}}}_p(-)=-{\hat{\otimes}}{\Delta^{{\rm dif}}}_p{\mathbb{C}}$ maps ${\mathfrak{C}}$ into itself. Since $L\to {\Delta^{{\rm dif}}}_pL$ is a diffeotopy equivalence, this means that to prove \[toprove\] is to prove that $K_n(-{\hat{\otimes}}{\mathcal{K}}):{\mathfrak{C}}\to {\mathfrak{Ab}}$ is diffeotopy invariant. Applying the same argument as in the proof of Theorem \[thm:karconbau\], we get that the natural map $$K_*(L{\hat{\otimes}}{\mathcal{K}})\to K_*((L{\hat{\otimes}}{\mathcal{K}})[0,1])$$ is an isomorphism. It follows that $K_*(-{\hat{\otimes}}{\mathcal{K}})$ is invariant under continuous homotopies, and thus also under diffeotopies.
Prove that if $L$ is as in Theorem \[thm:karconfre\] and $M=L{\hat{\otimes}}{\mathcal{K}}$, then $KD_*(M(0,1))=KD_{*+1}M$.
\[exe:summ3\] Prove that the map $K_n(L {\hat{\otimes}}{\mathcal{K}}) {\to} KD_n(L {\hat{\otimes}}{\mathcal{K}})$ is an isomorphism for every unital Fréchet algebra $L$, even if the unit of $L$ is not uniformly bounded. (Hint: use Exercise \[exe:summ2\]).
\[rem:ncp\] N.C. Phillips has defined a $K^{{\rm top}}$ for $m$-Fréchet algebras ([@ncp]) which extends that of Banach algebras discussed in Section \[sec:topk\] above. We shall see presently that, for $L$ as in Theorem \[thm:karconfre\], $$K_*^{{\rm top}}(L{\hat{\otimes}}{\mathcal{K}})=KD_*(L{\hat{\otimes}}{\mathcal{K}})=K_*(L{\hat{\otimes}}{\mathcal{K}}).$$ Phillips’ theory is Bott periodic and satisfies $K^{{\rm top}}_0(M)=K_0(M{\hat{\otimes}}{\mathfrak{K}})$ and $K_1^{{\rm top}}(M)=K_0((M{\hat{\otimes}}{\mathfrak{K}})(0,1))$ for every Fréchet algebra $M$. On the other hand, for $L$ as in the theorem, we have $KD_0(L{\hat{\otimes}}{\mathcal{K}})=K_0(L{\hat{\otimes}}{\mathcal{K}})$ and $KD_1(L{\hat{\otimes}}{\mathcal{K}})=K_0((L{\hat{\otimes}}{\mathcal{K}})(0,1))$. But by \[fact:jcomstable\], $K_0(M{\hat{\otimes}}{\mathcal{K}})=K_0(M{\hat{\otimes}}{\mathcal{K}}{\hat{\otimes}}{\mathfrak{K}})$ for every locally convex algebra $M$. This proves that $KD_n(L{\hat{\otimes}}{\mathcal{K}})=K^{{\rm top}}_n(L{\hat{\otimes}}{\mathcal{K}})$ for $n=0,1$; by Bott periodicity, we get the equality for all $n$.
Stable locally convex algebras: the comparison sequence.
--------------------------------------------------------
\[thm:laseq\](see [@cot 6.3.1]) Let $A$ be a ${\mathbb{C}}$-algebra, $L$ be a locally convex algebra, and ${\mathcal{J}}$ a Fréchet operator ideal. Then
[i)]{} $A\otimes_{\mathbb{C}}(L{\hat{\otimes}}{\mathcal{J}})$ is $K^{{\rm inf}}$-regular.
[ii)]{} For each $n\in{\mathbb{Z}}$, there is a $6$-term exact sequence $$\label{seq:seqtop}
\xymatrix{ K_{-1}(A\otimes_{\mathbb{C}}(L{\hat{\otimes}}{\mathcal{J}}))\ar[r]&HC_{2n-1}(A\otimes_{\mathbb{C}}(L{\hat{\otimes}}{\mathcal{J}}))\ar[r]&K_{2n}(A\otimes_{\mathbb{C}}(L{\hat{\otimes}}{\mathcal{J}}))\ar[d] \\
K_{2n-1}(A\otimes_{\mathbb{C}}(L {\hat{\otimes}}{\mathcal{J}})) \ar[u]& HC_{2n-2}(A\otimes_{\mathbb{C}}(L {\hat{\otimes}}{\mathcal{J}})) \ar[l] & K_0(A\otimes_{\mathbb{C}}(L{\hat{\otimes}}{\mathcal{J}})). \ar[l]}$$
According to Theorem \[thm:ppties\_kinf\], $K^{{\rm inf}}$ is nilinvariant and satisfies excision. Hence, by Theorem \[thm:hit3\], $L\mapsto K_*^{{\rm inf}}(A\otimes_{\mathbb{C}}(L{\hat{\otimes}}{\mathcal{J}}))$ is diffeotopy invariant, whence it is invariant under polynomial homotopies. This proves (i). Put $B=A\otimes_{\mathbb{C}}(L{\hat{\otimes}}{\mathcal{J}})$. By (i) and \[prop:kinfreg\], $\nu_*:K^{{\rm nil}}_*B\to HC_{*-1}B$ is an isomorphism. Hence from \[htpyfundechar\] we get a long exact sequence $$\label{seq:seqtop2}
\xymatrix{KH_{m+1}B\ar[r]& HC_{m-1}B\ar[r]&K_mB\ar[r]& KH_mB
\ar[r]^{S ch_m}&HC_{m-2}B}$$ By Corollary \[cor:kdk0k-1\], $KH_{2n}B=K_0B$ and $KH_{2n-1}B=K_{-1}B$; the sequence \[seq:seqtop\] of the theorem follows from this, using the sequence \[seq:seqtop2\].
\[cor:seqtop\] For each $n\in{\mathbb{Z}}$, there is a $6$-term exact sequence $$\label{seq:seqtopkd}
\xymatrix{ KD_{1}(L{\hat{\otimes}}{\mathcal{J}})\ar[r]&HC_{2n-1}(L{\hat{\otimes}}{\mathcal{J}})\ar[r]&K_{2n}(L{\hat{\otimes}}{\mathcal{J}})\ar[d] \\
K_{2n-1}(L {\hat{\otimes}}{\mathcal{J}}) \ar[u]& HC_{2n-2}(L {\hat{\otimes}}{\mathcal{J}}) \ar[l] & KD_0(L{\hat{\otimes}}{\mathcal{J}}). \ar[l]}$$
By Theorem \[thm:compakh\] iii), $KD_*(L{\hat{\otimes}}{\mathcal{J}})=KH_*(L{\hat{\otimes}}{\mathcal{J}})$. Now use Corollary \[cor:kdk0k-1\].
We saw in Theorem \[thm:karconfre\] that the comparison map $K_*(L{\hat{\otimes}}{\mathcal{K}})\to KD_*(L{\hat{\otimes}}{\mathcal{K}})$ is an isomorphism whenever $L$ is an $m$-Fréchet algebra with left ubau. Thus $$\label{hcvanifre}
HC_*(L{\hat{\otimes}}{\mathcal{K}})=0$$ by Corollary \[cor:seqtop\]. It is also possible to prove \[hcvanifre\] directly and deduce Theorem \[thm:karconfre\] from the corollary above; see [@cot 8.3.3].
If we set $L={\mathbb{C}}$ in Theorem \[thm:laseq\] above, we obtain an exact sequence $$\label{seq:seqalg}
\xymatrix{ K_{-1}(A\otimes_{\mathbb{C}}{\mathcal{J}})\ar[r]&HC_{2n-1}(A\otimes_{\mathbb{C}}{\mathcal{J}})\ar[r]&K_{2n}(A\otimes_{\mathbb{C}}{\mathcal{J}})\ar[d] \\
K_{2n-1}(A\otimes_{\mathbb{C}}{\mathcal{J}}) \ar[u]& HC_{2n-2}(A\otimes_{\mathbb{C}}{\mathcal{J}}) \ar[l] & K_0(A\otimes_{\mathbb{C}}{\mathcal{J}}). \ar[l]}$$ Further specializing to $A={\mathbb{C}}$ and using \[k0j\] and \[k-1j\] yields $$0\to HC_{2n-1}{\mathcal{J}}\to K_{2n}{\mathcal{J}}\to {\mathbb{Z}}\overset{\alpha_n}\to HC_{2n-2}{\mathcal{J}}\to K_{2n-1}{\mathcal{J}}\to 0.$$ Here we have written $\alpha_n$ for the composite of $S\circ ch_{2n}$ with the isomorphism ${\mathbb{Z}}\cong K_0{\mathcal{J}}$. If for example ${\mathcal{J}}\subset{\mathcal{L}}_p$ ($p\ge 1$) then $\alpha_n$ is injective for $n\ge (p+1)/2$, by a result of Connes and Karoubi [@ck 4.13] (see also [@cot 7.2.1]). Setting $p=1$ we obtain, for each $n\ge 1$, an isomorphism $$K_{2n}{\mathcal{L}}_1=HC_{2n-1}{\mathcal{L}}_1$$ and an exact sequence $$0\to {\mathbb{Z}}\overset{\alpha_n}\to HC_{2n-2}{\mathcal{L}}_1\to K_{2n-1}{\mathcal{L}}_1\to 0.$$ Note that since $HC_{2n-2}{\mathcal{L}}_1$ is a ${\mathbb{Q}}$-vectorspace by definition, the sequence above implies that $K_{2n-1}{\mathcal{L}}_1$ is isomorphic to the sum of a copy of ${\mathbb{Q}}/{\mathbb{Z}}$ plus a ${\mathbb{Q}}$-vectorspace.
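The last assertion is elementary linear algebra, which we spell out for convenience: if $V$ is a ${\mathbb{Q}}$-vectorspace and $\alpha:{\mathbb{Z}}\to V$ is injective, choose a decomposition $V={\mathbb{Q}}\cdot\alpha(1)\oplus W$; then $$V/\alpha({\mathbb{Z}})\cong({\mathbb{Q}}/{\mathbb{Z}})\oplus W.$$ Applied to $V=HC_{2n-2}{\mathcal{L}}_1$ and $\alpha=\alpha_n$, this gives the asserted structure of $K_{2n-1}{\mathcal{L}}_1$.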
The exact sequence \[seq:seqalg\] is valid more generally for subharmonic ideals (see [@cot 6.5.1] for the definition of this term, and [@cot 7.1.1] for the statement). In particular, \[seq:seqalg\] is valid for all the Schatten ideals ${\mathcal{L}}_p$, $p>0$. In [@cot 7.1.1] there is also a variant of \[seq:seqalg\] involving relative $K$-theory and relative cyclic homology; the particular case of the latter when $A$ is $K$-excisive is due to Wodzicki ([@wod Theorem 5]).
[111]{}
D. Anderson. [*Relationship among $K$-theories*]{}. In Higher $K$-theories, H. Bass (Ed.). Springer Lecture Notes in Math. [**341**]{}, (1972) 57–72.

H. Bass. [*Algebraic $K$-theory*]{}. W.A. Benjamin, New York, 1968.
D. J. Benson. [*Representations and cohomology.*]{} Second edition. Cambridge Studies in Advanced Mathematics, 30. Cambridge University Press, Cambridge, 1998.
A. J. Berrick. [*An approach to algebraic $K$-theory.*]{} Research Notes in Mathematics 56. Pitman Books, London, 1982.

B. Blackadar. [*$K$-theory for operator algebras*]{}. Second Edition, Cambridge Univ. Press, Cambridge, 1998.
A. Bousfield, D. Kan. [*Homotopy limits, completions and localizations.*]{} Lecture Notes in Math. 304. Springer-Verlag, Berlin, 1972.

A. Bousfield, E. Friedlander. [*Homotopy theory of $\Gamma$-spaces, spectra, and bisimplicial sets.*]{} In Geometric applications of homotopy theory II. Lecture Notes in Math. 658. Springer-Verlag, Berlin, 1978, 80–130.
W. Browder. [*Higher torsion in $H$-spaces*]{}. Trans. Amer. Math. Soc. [**108**]{} (1963), 353–375.
A. Connes, [*Noncommutative geometry*]{}. Academic Press, Inc., San Diego, CA, 1994.
A. Connes, M. Karoubi. [*Caractère multiplicatif d’un module de Fredholm*]{}. $K$-theory [**2**]{} (1988) 431–463.
G. Cortiñas. [*The obstruction to excision in $K$-theory and in cyclic homology*]{}, Invent. Math. [**164**]{} (2006) 143–173.
G. Cortiñas, A. Thom. [*Bivariant algebraic $K$-theory*]{}, to appear in J. reine angew. Math.

G. Cortiñas, A. Thom. [*Comparison between algebraic and topological $K$-theory of locally convex algebras.*]{} math.KT/067222

J. Cuntz. [*$K$-theory and $C^*$-algebras*]{}. In Algebraic $K$-theory, number theory and analysis. Springer Lecture Notes in Math. [**1046**]{}, 55–79.
J. Cuntz. [*Bivariante $K$-theorie für lokalkonvexe Algebren und der Chern-Connes-Charakter.*]{} Documenta Mathematica [**2**]{} (1997) 139–182.
J. Cuntz. [*Bivariant $K$-theory and the Weyl algebra.*]{} $K$-Theory [**35**]{}(2005), 93–137.
J. Cuntz. [*Cyclic theory and the bivariant Chern-Connes character.*]{} In Noncommutative geometry. Lectures given at the C.I.M.E. Summer School held in Martina Franca, September 3–9, 2000. Edited by S. Doplicher and R. Longo. Lecture Notes in Mathematics, 1831. Springer-Verlag, Berlin. 2004, 73–135.
J. Cuntz, D. Quillen, [*Cyclic homology and nonsingularity*]{}, J. Amer. Math. Soc. [**8**]{} (1995), 373–441.
J. Cuntz, D. Quillen. [*Excision in periodic bivariant cyclic cohomology*]{}, Invent. Math. [**127**]{} (1997), 67–98.
J. Cuntz, A. Thom. [*Algebraic $K$-theory and locally convex algebras.*]{} Math. Ann. [**334**]{} (2006) 339–371.
K. R. Davidson. [*$C^*$-algebras by example.*]{} Fields Institute Monographs 6. Amer. Math. Soc., Providence, 1996.
B. Dayton, C. Weibel. [*$K$-theory of hyperplanes*]{}. Trans. AMS [**257**]{} (1980) 119–141.
K. Dykema, T. Figiel, G. Weiss, M. Wodzicki. [*The commutator structure of operator ideals.*]{} Adv. Math. [**185**]{} (2004), 1–78.
S. Geller, C. Weibel. [*Hochschild and cyclic homology are far from being homotopy functors.*]{} Proc. Amer. Math. Soc. [**106**]{} (1989), no. 1, 49–57.
P. Gabriel, G. Zisman. [*Calculus of fractions and homotopy theory.*]{} Ergebnisse der Mathematik und ihrer Grenzgebiete, Band [**35**]{}. Springer-Verlag New York, Inc., New York 1967.
I. Gelfand, Y. Manin. [*Methods of homological algebra (second edition)*]{}. Springer Monographs in Mathematics. Springer, New York, 2002.

P. Goerss, J. Jardine. [*Simplicial homotopy theory.*]{} Progress in Mathematics, 174. Birkhäuser Verlag, Basel, 1999.
T. Goodwillie. [*Cyclic homology, derivations, and the free loopspace.*]{} Topology [**24**]{} (1985) 187-215.
T. Goodwillie. [*Relative algebraic $K$-theory and cyclic homology*]{}, Ann. Math. [**124**]{} (1986), 347-402.
A. Hatcher. [*Spectral sequences in algebraic topology.*]{}

N. Higson. [*Algebraic $K$-theory of $C^*$-algebras.*]{} Adv. in Math. [**67**]{}, (1988) 1–40.
P. Hirschorn. [*Model categories and their localizations*]{}. Mathematical Surveys and Monographs 99. Amer. Math. Soc., Providence, 2003.
D. Husemöller. [*Algebraic $K$-theory of operator ideals (after Mariusz Wodzicki)*]{}. In $K$-theory, Strasbourg 1992. Astérisque [**226**]{} (1994) 193–209.
M. Karoubi. [*$K$-théorie algébrique de certaines algèbres d’opérateurs*]{}. Algèbres d’opérateurs (Sém., Les Plans-sur-Bex, 1978), pp. 254–290, Lecture Notes in Math., 725, Springer, Berlin, 1979.
M. Karoubi. [*Homologie cyclique et K-théorie*]{}. Astérisque [**149**]{}.
M. Karoubi. [*Sur la $K$-théorie Multiplicative*]{}. In Cyclic homology and noncommutative geometry. Fields Institute Communications [**17**]{} (1997) 59–77.
M. Karoubi, O. Villamayor. [*$K$-théorie algébrique et $K$-théorie topologique*]{}. , (1971) 265–307.
J. L. Loday, [*Cyclic homology*]{}, 1st ed. Grund. math. Wiss. 301. Springer-Verlag Berlin, Heidelberg 1998.
J. L. Loday, [*$K$-théorie algébrique et représentations de groupes.*]{} Annales Scientifiques de l’ÉNS. 4 Série, tome 9, (3) (1976) 309–377.
J. Milnor, [*Introduction to algebraic $K$-theory*]{}. Annals of Mathematics Studies, 72. Princeton University Press, Princeton, 1971.
N. C. Phillips. [*$K$-theory for Fréchet algebras*]{}. International Journal of Mathematics, [**2**]{} (1991) 77–129.

D. Quillen. Higher algebraic $K$-theory. I. Algebraic $K$-theory, I: Higher $K$-theories (Proc. Conf., Battelle Memorial Inst., Seattle, Wash., 1972), pp. 85–147. Lecture Notes in Math., Vol. 341, Springer, Berlin 1973.
M. Rordam, F. Larsen, N.J. Laustsen. [*An introduction to $K$-theory for $C^*$-algebras.*]{} London Math. Society Student Texts [**49**]{}. Cambridge University Press, Cambridge UK, 2000.
J. Rosenberg. [*Algebraic $K$-theory and its applications.*]{} Graduate Texts in Mathematics, 147. Springer-Verlag, New York, 1994.
J. Rosenberg. [*Comparison between algebraic and topological $K$-theory for Banach algebras and $C^*$-algebras.*]{} In Handbook of K-Theory, Friedlander, Eric M.; Grayson, Daniel R. (Eds.). Springer-Verlag, New York, 2005.

M. Schlichting. [*Algebraic K-theory of schemes.*]{}
M. Summers. [*Factorization in Fréchet modules.*]{} J. London Math. Soc. [**5**]{} (1972), 243–248.
A. Suslin. [*Excision in the integral algebraic $K$-theory.*]{} Proc. Steklov Inst. Math. [**208**]{} (1995) 255–279.
A. Suslin, M. Wodzicki. [*Excision in algebraic $K$-theory*]{}, Ann. of Math. [**136**]{} (1992) 51-122.
R. Swan, [*Excision in algebraic $K$-theory*]{}, J. Pure Appl. Algebra [**1**]{} (1971) 221–252.
R. Switzer. [*Algebraic topology-homotopy and homology*]{}. Grundlehren der math. Wiss. 212. Springer-Verlag, Berlin 1975.

T. Vorst. [*Localization of the $K$-theory of polynomial extensions*]{}. Math. Annalen [**244**]{} (1979) 33–54.
J. B. Wagoner. [*Delooping classifying spaces in algebraic $K$-theory.*]{} Topology [**11**]{} (1972) 349-370.
N. E. Wegge-Olsen. [*$K$-theory and $C^*$-algebras*]{}. Oxford University Press, Oxford, 1993.
C. Weibel. [*The K-book: An introduction to algebraic K-theory*]{}. Book-in-progress, available at its author’s webpage: http://www.math.rutgers.edu/~weibel/Kbook.html
C. Weibel. [*An introduction to homological algebra*]{}. Cambridge Univ. Press, 1994.
C. Weibel. [*Homotopy Algebraic $K$-theory*]{}. Contemporary Math. [**83**]{} (1989) 461–488.
C. Weibel. [*Nil $K$-theory maps to cyclic homology*]{}. Trans. Amer. Math. Soc. [**303**]{} (1987) 541–558.
C. Weibel. [*Mayer-Vietoris sequences and mod $p$ $K$-theory*]{}, Springer Lect. Notes Math. [**966**]{} (1982) 390-406.
G. W. Whitehead, [*Elements of Homotopy Theory*]{}, Springer, 1978.
M. Wodzicki. [*Excision in cyclic homology and in rational algebraic $K$-theory*]{}. Ann. of Math. (2) [**129**]{} (1989), 591–639.
M. Wodzicki. [*Algebraic $K$-theory and functional analysis*]{}. First European Congress of Mathematics, Vol. II (Paris, 1992), 485–496, Progr. Math., 120, Birkhäuser, Basel, 1994.
M. Wodzicki. [*Homological properties of rings of functional analytic type.*]{} Proc. Natl. Acad. Sci. USA, [**87**]{} (1990) 4910–4911.
E.C. Zeeman. [*A proof of the comparison theorem for spectral sequences.*]{} Proc. Camb. Philos. Soc. [**53**]{}, (1957) 57–62.
[^1]: Work for these notes was partly supported by FSE and by grants PICT03-12330, UBACyT-X294, VA091A05, and MTM00958.
---
abstract: 'We examine the regenerative cutting process by using a single degree of freedom non-smooth model with a friction component and a time delay term. Instead of the standard Lyapunov exponent calculation, we propose a statistical 0-1 test analysis for chaos detection. This approach reveals the nature of the cutting process, signaling regular or chaotic dynamics. For the investigated deterministic model we are able to show a transition from chaotic to regular motion with increasing cutting speed. For two values of the time delay showing different responses, the results have been confirmed by means of the spectral density and the multiscale entropy.'
author:
- Grzegorz Litak
- Sven Schubert
- Günter Radons
date: 'Received: date / Accepted: date'
title: Nonlinear dynamics of a regenerative cutting process
---
Introduction
============
A cutting process is a basic machining technology used to obtain a surface with the assumed parameters. In certain working conditions it can be disturbed by chatter appearing as unexpected waves on the machined surface of a workpiece. The appearance of chatter was noticed and described by Taylor at the beginning of the 20th century [@Taylor1907]. But the first approaches towards explanations of this phenomenon came about 50 years later through the analysis of self-sustained vibrations [@Arnold1946], regenerative effects [@Tobias1958], structural dynamics [@Tlusty1963; @Merit1965], and, finally, the dry friction phenomenon [@Wu1985a; @Wu1985b]. Consequently, elimination and stabilization of the associated oscillations have become of high interest in science and technology [@Altintas2000; @Warminski2003; @Insperger2006]. An adaptive control concept based on relatively short time series has also been studied to gain deeper understanding [@Ganguli2007].
Recently, apart from the widely studied chatter vibrations, chaotic oscillations caused by various system nonlinearities have been predicted and detected [@Grabec1988; @Tansel1992; @Gradisek1998a; @Gradisek1998b; @Marghitu2001; @Litak2002; @Fofana2003; @Gradisek2002; @Litak2004]. The current technological demand is to improve the final surface properties of the workpiece and to minimize the production time with higher cutting speeds [@Stepan2003]. Thus a better understanding of the physical phenomena associated with a cutting process becomes necessary [@Martinez2009]. In this paper we continue the work on chaotic instabilities in cutting processes, proposing the 0-1 test [@Gottwald2004; @Gottwald2005; @Falconer2007; @Gottwald2009a; @Gottwald2009b] as a tool for identifying a possible chaotic solution [@Litak2009].
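The idea behind the 0-1 test of Gottwald and Melbourne [@Gottwald2004] is easy to prototype. The following sketch is our illustration, not the authors' code: a single fixed angle `c` is used, whereas in practice one takes a median over many random angles. It computes the growth rate $K$ of the mean-square displacement of the translation variables $p$ and $q$: $K\approx 1$ indicates chaos, $K\approx 0$ regular dynamics.

```python
import numpy as np

def zero_one_test(phi, c=1.7, ncut=None):
    """0-1 test for chaos on a scalar time series phi.

    Returns K: close to 1 for chaotic, close to 0 for regular dynamics.
    Single-angle sketch; production use averages K over many angles c.
    """
    n = len(phi)
    if ncut is None:
        ncut = n // 10          # M(k) is reliable only for lags k << n
    j = np.arange(1, n + 1)
    # translation variables driven by the observable
    p = np.cumsum(phi * np.cos(j * c))
    q = np.cumsum(phi * np.sin(j * c))
    # mean-square displacement as a function of the lag k
    M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2)
                  for k in range(1, ncut)])
    # K = correlation of M(k) with k: near 1 for linear growth (chaos)
    return np.corrcoef(np.arange(1, ncut), M)[0, 1]

# chaotic example: the fully developed logistic map
x, traj = 0.3, []
for _ in range(3000):
    x = 4.0 * x * (1.0 - x)
    traj.append(x)
K_chaotic = zero_one_test(np.array(traj))

# regular example: a periodic signal
K_regular = zero_one_test(np.cos(0.2 * np.pi * np.arange(3000)))
```

Typically $K$ comes out close to one for the logistic map and stays near zero for the periodic signal, mirroring the chaotic/regular distinction drawn for the cutting model in this paper.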
This paper is organized as follows. After the present introduction (Sec.1) we describe the model in Sec.2. In Sec.3 we provide the results of the simulations and the corresponding power spectral densities (PSD), while in Sec.4 the 0-1 test is applied and, subsequently, the findings are confirmed by means of the multiscale entropy (Sec.5). The paper ends with conclusions (Sec.6).
The model
=========
![Physical model of a regenerative cutting process [@Litak2002].](f1.eps "fig:"){width="5.0cm"} \[fig:model\]
A regenerative cutting process may exhibit a wide range of complex behavior due to frictional effects [@Warminski2003; @Grabec1988], structural nonlinearities [@Pratt2001] and delay dynamics [@Litak2002; @Fofana2003; @Stepan2001; @Wang2006; @Litak2008]. Moreover it may also involve loss of contact between the tool and the workpiece. The following equations model the regenerative cutting process and the mentioned properties.
After the first pass of the tool, the cutting depth can be expressed as $$\begin{aligned}
\label{eq1}
h(t)=h_0-y(t)+y(t\!-\!\tau),\end{aligned}$$
where $y(t\!-\!\tau)$ corresponds to the position of the workpiece during the previous pass, and $\tau$ is the time delay scaled by the period of revolution of the workpiece $2\pi/\Omega_0$ (Fig.1). The motion of the workpiece can be determined from the model proposed by Stépán [@Stepan2001] $$\begin{aligned}
\label{eq2}
&& \ddot y + 2\gamma \dot y + \omega_0^2 y =
\frac{1}{m}{\rm sgn} (v_0\!-\! \dot y)\big(F_y(h) - F_y(h_0)\big), \nonumber \\
&&
F_y(h)= \Theta(h) c_1 w
h^{3/4}, \\
&&
\dot y (t^+)= -\beta \dot y (t^-), \nonumber\end{aligned}$$ where $\omega_0=\sqrt{k/m}$ is the frequency of free vibration, $v_0$ is the feed velocity, and $2\gamma=c/m$ is the damping coefficient. $F_y(h)$ is the thrust force, which is the horizontal component of the cutting force, and $m$ is the effective mass of the workpiece. The thrust force $F_y$ is based on dry friction between the tool and the chip. It is assumed to have a power-law dependence on the actual cutting depth $h$ and to be proportional to the chip width $w$ and a friction coefficient $c_1$. $\Theta(\cdot)$ denotes the Heaviside step function. The restitution parameter $\beta=0.75$ is associated with the impact after contact loss, while $t^-$ and $t^+$ denote the time instants before and after the impact. Substituting Eq. (\[eq1\]) into Eq. (\[eq2\]) we derive a delay differential equation (DDE) for the workpiece motion $y(t)$. Plugging its solution into Eq. (\[eq1\]) yields the history of the cutting depth $h(t)$.
Simulation results
==================
The non-smooth model equations are solved by a simple Euler integration scheme. The parameters used [@Litak2002; @Litak2008] are presented in Table \[tab:1\].
[ll]{} Parameter& Value\
initial cutting depth $h_0$ & $10^{-3}$ m\
frequency of free vibration $\omega_0$ & 816 rad/s\
damping coefficient $c$ & $86$ Ns/m\
effective mass of the workpiece $m$ & 17.2 kg\
friction coefficient $c_1$ & $1.25\times10^9$ N/m$^2$\
chip width $w$ & $3.0 \times 10^{-3}$ m\
Furthermore, the feed velocity $v_0$ has been assumed to be large enough that $v_0\!>\! \dot y$ at all times. Note that, in this case, the system nonlinearities are limited to the power-law dependence of the cutting force on the chip thickness and to the contact loss between the tool and the workpiece.
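A minimal sketch of such an Euler integration is given below, using the parameter values from Table \[tab:1\]. This is an illustration, not the authors' code: the flat history before the first pass, the small initial velocity perturbation (needed to leave the trivial equilibrium), and the bookkeeping that reverses the velocity when the tool regains contact are all assumptions about details the text leaves open.

```python
import numpy as np

def simulate_cutting(tau, t_end=0.1, dt=1e-6,
                     h0=1e-3, omega0=816.0, c=86.0, m=17.2,
                     c1=1.25e9, w=3.0e-3, beta=0.75, v_init=1e-4):
    """Explicit-Euler integration of the regenerative cutting model,
    assuming v0 > dy/dt throughout so that sgn(v0 - dy/dt) = 1."""
    gamma = c / (2.0 * m)                       # 2*gamma = c/m
    n_delay = int(round(tau / dt))
    n_steps = int(round(t_end / dt))
    y = np.zeros(n_steps + 1)
    v = np.zeros(n_steps + 1)
    h = np.full(n_steps + 1, h0)
    v[0] = v_init                               # small perturbation (assumption)

    def F(depth):                               # thrust force F_y = Theta(h) c1 w h^(3/4)
        return c1 * w * depth**0.75 if depth > 0.0 else 0.0

    in_contact = True
    for i in range(n_steps):
        y_del = y[i - n_delay] if i >= n_delay else 0.0   # flat history (assumption)
        h[i] = h0 - y[i] + y_del                          # cutting depth h(t)
        if h[i] <= 0.0:
            in_contact = False                            # tool loses contact
        elif not in_contact:
            v[i] = -beta * v[i]                           # impact on regained contact
            in_contact = True
        acc = (F(h[i]) - F(h0)) / m - 2.0 * gamma * v[i] - omega0**2 * y[i]
        y[i + 1] = y[i] + dt * v[i]
        v[i + 1] = v[i] + dt * acc
    h[n_steps] = h0 - y[n_steps] + (y[n_steps - n_delay] if n_steps >= n_delay else 0.0)
    return y, h
```

With the integration step $\Delta t = 1\,\mu$s quoted in Fig. \[fig:data\], a simulated second of cutting takes $10^6$ Euler steps; the returned $h$ array can then be subsampled at the $1$ ms sampling time before further analysis.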
![Time series of cutting depth $h(t)$ for **(a)** time delay $\tau = 1.8$ms and **(b)** $\tau = 2.1$ms (sampling time $\Delta t = 1$ms with an integration step $\Delta t$/$10^3$=1$\mu$s) indicating regular and chaotic motion, respectively. The time series are plotted by sampling points.[]{data-label="fig:data"}](f2a.eps "fig:"){width="45.00000%"}\
![Time series of cutting depth $h(t)$ for **(a)** time delay $\tau = 1.8$ms and **(b)** $\tau = 2.1$ms (sampling time $\Delta t = 1$ms with an integration step $\Delta t$/$10^3$=1$\mu$s) indicating regular and chaotic motion, respectively. The time series are plotted by sampling points.[]{data-label="fig:data"}](f2b.eps "fig:"){width="45.00000%"}
The corresponding time series for the two choices of the time delay parameter, $\tau = 1.8$ and 2.1ms, are presented in Fig.\[fig:data\]; the series are plotted as points. At first sight one can notice that both solutions are complex, but Fig.\[fig:data\]a shows points grouped along selected lines, while the distribution of points in Fig.\[fig:data\]b looks more random. In Fig.\[fig:data\]b, $h$ reaches negative values, signaling that the contact between the tool and the workpiece is lost.
![Power spectral density $S(\omega)$ for two chosen delay times. A broad band spectral density indicates chaotic / stochastic dynamics whereas sharp peaks imply regular motion.[]{data-label="fig:PSD"}](f3.eps){width="45.00000%"}
The power spectral densities (PSD) of the cutting depth, $S(\omega)={2\pi}\!/{T}\,|{\cal F}\{h(t)\}|^2$, for the two chosen delay times ($\tau = 1.8$ms and $\tau = 2.1$ms)[^1] indicate a transition from regular to chaotic motion. The sharp peaks in Fig.\[fig:PSD\] belong to a high-periodic orbit (regular motion), whereas the broad spectrum indicates chaotic dynamics.
Both power spectra are dominated by a main peak. In the case of regular motion its position corresponds to the delay time $\tau=1.8$ms, while in the case of chaotic dynamics the time scale of the peak ($t_{p}\approx 2.0$ms) is smaller than the delay time $\tau=2.1$ms. This smaller value could be a consequence of tool-workpiece contact loss. Based on that, we take a closer look at other measures characterizing the model’s dynamics and use the 0-1 test for chaos to display a possible transition from regular to chaotic motion with increasing delay time $\tau$.
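The PSD definition above can be estimated with a discrete Fourier transform; a minimal sketch (the mean subtraction is a detrending step assumed here, not stated in the text):

```python
import numpy as np

def psd(h, dt):
    """One-sided PSD S(omega) = (2*pi/T) |F{h}|^2, estimated via a DFT."""
    h = np.asarray(h, float)
    h = h - h.mean()                       # remove the DC offset
    N = len(h)
    T = N * dt
    H = np.fft.rfft(h) * dt                # approximate the continuous transform
    S = (2.0 * np.pi / T) * np.abs(H) ** 2
    omega = 2.0 * np.pi * np.fft.rfftfreq(N, dt)
    return omega, S
```

A sinusoidal (regular) input then produces a single sharp peak at its angular frequency, while broadband input spreads power across all bins.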
Application of the 0-1 test
======================
![Dynamics in the $(p,q)$-plane and the convergence of $K$ with $n$ for the two chosen delay times (cf. text).[]{data-label="fig:2"}](f4.eps "fig:"){width="41.50000%"}
Based on the time series $\{\tilde h_j\}$, a discretization of the solution $h(t)$ of the DDE normalized by its standard deviation, we define dimensionless displacements in the $(p,q)$-plane in the following way [@Gottwald2004; @Gottwald2005; @Litak2009] $$\label{eq5}
p_n = \sum_{j=0}^{n} \tilde h_j \cos (j c_0), \hspace{0.6cm}
q_n = \sum_{j=0}^{n} \tilde h_j \sin (j c_0),$$ where $c_0$ is a constant. In this way regular dynamics is related to bounded motion, while chaotic dynamics leads to unbounded, diffusive motion in the $(p,q)$-plane [@Gottwald2004]; see Fig.\[fig:2\]a.
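The translation variables above are cumulative sums and can be sketched in a few lines; normalizing by the standard deviation follows the text, while everything else (array layout, function name) is illustrative:

```python
import numpy as np

def pq_trajectory(h, c0):
    """Translation variables p_n, q_n built from the series h,
    normalized by its standard deviation."""
    h_t = np.asarray(h, float)
    h_t = h_t / h_t.std()
    j = np.arange(len(h_t))
    p = np.cumsum(h_t * np.cos(j * c0))
    q = np.cumsum(h_t * np.sin(j * c0))
    return p, q
```

For a regular (e.g., sinusoidal) series the trajectory stays bounded, whereas an irregular series performs a random-walk-like excursion whose extent grows with the series length.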
To obtain a quantitative description of the examined system we calculate the asymptotic properties defined by the total mean square displacement (MSD) $M(n)$, Eq. (\[eq6\]), and finally we obtain the growth rate $K$ in the limit of large times $$\begin{aligned}
\label{eq6}
&& M(n)= \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{j=1}^N
\big[\left(p_{j+ n}\! -\!p_j\right)^2 + \left(q_{j+ n}\! -\!q_j\right)^2\big]
, \\
\label{eq7}
&& K= \lim_{n \rightarrow \infty} \frac{\ln (M(n)+1)}{\ln n}.\end{aligned}$$ For almost all values of the constant $c_0$, the parameter $K$ asymptotically approaches 0 for regular motion or 1 for chaotic motion.
In practice, one has to truncate the sums in Eqs. (\[eq5\]) and (\[eq6\]). Doing so, we obtained $K\approx 0.21$ for $\tau = 1.8$ms and $K\approx 1.09$ for $\tau = 2.1$ms, which supports the first impression gained from the time series themselves, Fig.\[fig:data\]a and b. Note further that for delay time $\tau = 1.8$ms, $K$ decays to much smaller values with increasing $n$, Fig.\[fig:2\]c, which corroborates the result pointing towards regular motion.
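The truncated computation of $M(n)$ and $K$ can be sketched as follows; the cutoff $n_{max} = N/10$, keeping $n \ll N$ so that the average over $j$ converges, is a common rule of thumb rather than a value taken from the text:

```python
import numpy as np

def growth_rate_K(h, c0, n_max=None):
    """Truncated mean square displacement M(n) of the translation
    variables (p, q) and the growth rate K = ln(M(n_max)+1)/ln(n_max)."""
    h_t = np.asarray(h, float)
    h_t = h_t / h_t.std()                   # normalize by the standard deviation
    j = np.arange(len(h_t))
    p = np.cumsum(h_t * np.cos(j * c0))
    q = np.cumsum(h_t * np.sin(j * c0))
    N = len(p)
    if n_max is None:
        n_max = N // 10                     # keep n << N (rule of thumb)
    M = np.empty(n_max)
    for n in range(1, n_max + 1):           # average over all available pairs
        M[n - 1] = np.mean((p[n:] - p[:-n])**2 + (q[n:] - q[:-n])**2)
    K = np.log(M[-1] + 1.0) / np.log(n_max)
    return M, K
```

For bounded $(p,q)$ motion $M(n)$ saturates and $K$ stays small; for diffusive motion $M(n) \sim n$ and $K$ approaches 1.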
![$K$-values for different delay times $\tau$ indicate a transition from regular to chaotic dynamics in the region of 1.9ms. []{data-label="fig:Kdelay"}](f5.eps){width="45.00000%"}
Note that the parameter $c_0$ acts like a frequency in a spectral calculation, cf. Eqs. (\[eq5\]). If it is badly chosen, $c_0/{\Delta t}$ resonates with a frequency of the process dynamics $\tilde h(t)$; such a frequency corresponds to a peak in the PSD, Fig.\[fig:PSD\]. In the 0-1 test, regular motion would then yield ballistic behavior in the $(p,q)$-plane, and the corresponding quadratic growth of the MSD results in a nonzero asymptotic growth rate even for regular dynamics. This disadvantage of the test, its strong dependence on the chosen parameter $c_0$, can be overcome by a modification proposed by Gottwald and Melbourne [@Gottwald2005; @Gottwald2009a]: take several randomly chosen values of $c_0$ and compute the median of the corresponding $K$-values. In particular, in Ref. [@Gottwald2009a] the problems of averaging over $c_0$ as well as of sampling the data points are discussed extensively. We followed this approach [@Gottwald2009a; @Krese2012], which improves the convergence of the test (Fig.\[fig:2\]c) without requiring longer time series, to find the time delay $\tau$ leading to chaos (see Fig. \[fig:Kdelay\]). We defined a modified mean square displacement $D(n)$ which exhibits the same asymptotic growth $$D(n,c_0)=M(n,c_0)-V_{osc}(n,c_0),$$ where the oscillatory term $V_{osc}(n,c_0)$ can be expressed as $$V_{osc}(n,c_0) =E[\tilde{h}]^2 \frac{1- \cos(nc_0)}{1-\cos(c_0)},$$ and $E[\tilde{h}]$ denotes the average of the examined time series $\tilde{h}_i$, $$E[\tilde{h}]= \frac{1}{N_{max}} \sum_{i=1}^{N_{max}} \tilde{h}_i,$$ where $N_{max}$ is the number of elements $\tilde{h}_i$. Consequently, the oscillatory behavior is subtracted from the MSD $M(n,c_0)$, and the linear growth of $D(n,c_0)$ with increasing $n$ is assessed by regression analysis using the linear correlation coefficient, which determines the value of $K_{c_0}$:
$$K_{c_0}= \frac{{\rm cov}({\bf X},{\bf D}(c_0))}{\sqrt{{\rm var}({\bf X})\, {\rm var}({\bf D}(c_0))}},$$ where ${\bf X}=[1, 2, \ldots, n_{max}]$ and ${\bf D}(c_0)= [D(1,c_0), D(2,c_0), \ldots, D(n_{max},c_0)]$.
The covariance ${\rm cov}({\bf x}, {\bf y})$ and variance ${\rm var}({\bf x})$, for arbitrary vectors ${\bf x}$ and ${\bf y}$ of $n_{max}$ elements with averages $E[x]$ and $E[y]$, respectively, are defined as $$\begin{aligned}
{\rm cov}({\bf x},{\bf y}) &=& \frac{1}{n_{max}} \sum_{n=1}^{n_{max}} (x(n)-E[x])(y(n)-E[y]), \nonumber \\
{\rm var}({\bf x}) &=& {\rm cov}({\bf x}, {\bf x}).\end{aligned}$$
Finally, the median of the $K_{c_0}$-values corresponding to 100 different values of $c_0 \in (0,\pi)$ is taken. The resulting $K$ for different delay times $\tau$, Fig.\[fig:Kdelay\], in the window between 1.75ms and 2.3ms, indicates a transition from regular to chaotic dynamics with increasing delay time in the region of 1.9ms.
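The complete modified procedure can be sketched as follows. Restricting the random $c_0$ values to a band away from 0 and $\pi$ (rather than drawing from the full interval $(0,\pi)$) is an assumption made here to avoid the resonance problem discussed above; the truncation $n_{max}=N/10$ is likewise an implementation choice:

```python
import numpy as np

def zero_one_test(h, n_c0=100, seed=0):
    """Modified 0-1 test: subtract the oscillatory term V_osc from M(n),
    score the linear growth of D(n) by the correlation coefficient K_{c0},
    and return the median over randomly chosen c0."""
    h_t = np.asarray(h, float)
    h_t = h_t / h_t.std()
    N = len(h_t)
    j = np.arange(N)
    n_vals = np.arange(1, N // 10 + 1)       # truncation (implementation choice)
    Eh = h_t.mean()
    rng = np.random.default_rng(seed)
    Ks = []
    for c0 in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c0):  # avoid resonances
        p = np.cumsum(h_t * np.cos(j * c0))
        q = np.cumsum(h_t * np.sin(j * c0))
        M = np.array([np.mean((p[n:] - p[:-n])**2 + (q[n:] - q[:-n])**2)
                      for n in n_vals])
        V_osc = Eh**2 * (1.0 - np.cos(n_vals * c0)) / (1.0 - np.cos(c0))
        D = M - V_osc                        # modified mean square displacement
        Ks.append(np.corrcoef(n_vals, D)[0, 1])  # correlation coefficient K_{c0}
    return float(np.median(Ks))
```

Applied to a subsampled $h(t)$ series, the returned median plays the role of the $K$-values plotted against $\tau$ in Fig. \[fig:Kdelay\].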
As a consequence, we conclude that in the investigated window increasing cutting speed (i.e., decreasing delay time) leads to a transition from chaotic chatter dynamics to regular motion with improved surface quality.
Multiscale entropy {#sec:MSE}
==================
To characterize the solutions of the DDE, Eqs. (\[eq1\]) and (\[eq2\]), with regard to information production rate and complexity, we calculate the multiscale entropy (MSE) [@Costa2002]. This method has been successfully applied to analyze the complexity of biological signals [@Costa2002; @Costa2005] and is suitable for short and noisy time series; consequently, the chosen procedure would be applicable to experimental data as well. We use an algorithm provided by PhysioNet [@PhysioNet]. First we compute coarse-grained time series $\{x^{(N)}\}$ using non-overlapping intervals containing $N$ equidistant data points $h_i$, $$\label{eq:coarsegraining}
x_j^{(N)}=\frac{1}{N}\sum\limits_{i=(j-1)N+1}^{jN} h_i .$$ In the next step we calculate the sample entropy $S_E^{(N)}$ [@Richman2000] for these coarse-grained time series. Sample entropy is the negative logarithm of the conditional probability that sequences of $m$ consecutive data points $\textbf{x}^{(N)}_i\!=\!(x^{(N)}_i,\ldots,x^{(N)}_{i+m-1})$ and $\textbf{x}^{(N)}_j$ which are close to each other will also be close to each other when one more point is added to them. Hence it is estimated as follows $$\label{eq:MSE}
S^{(N)}_E(m,r) = -\ln\frac{U^{(N)}_{m+1}(r)}{U^{(N)}_{m}(r)},$$ where $U^{(N)}_{m}(r)$ represents the relative frequency with which a vector $\textbf{x}^{(N)}_i$ is close to a vector $\textbf{x}^{(N)}_j$ ($i\neq j$), in the sense that their infinity-norm distance is less than $\varepsilon=r\sigma$, with $\sigma$ the standard deviation of the data. In the limit $m\!\rightarrow\!\infty$ and $r\!\rightarrow\! 0$, sample entropy is equivalent to the order-2 Rényi entropy $K_2$ and is suitable for characterizing the system’s dynamics [@Grassberger1983]. For independent variables $\{\xi\}$ the entropy follows from $S^{(N)}_E(m,r)=-\ln P\big(|\xi^{(N)}_i\!-\!\xi^{(N)}_j|<\varepsilon\big)$ and is independent of the word length $m$. For Gaussian white noise (GWN) the coarse-grained time series is known to be Gaussian distributed as well. For small $\varepsilon$ this yields $S^{(N)}_E(m,r) \approx -\ln [\varepsilon/(\sqrt{\pi}\sigma^{(N)})]$. Using that the standard deviation of the coarse-grained time series $\sigma^{(N)}$ decreases as $1/\sqrt{N}$ leads to the following expression $$\label{eq:MSE_GWN}
S^{(N)}_E(m,r) \approx -\ln\! \Big(\!r\sqrt{\!\frac{N}{\pi}}\Big)\;(r\rightarrow 0).$$ To clarify the characteristics of the cutting process, we look at the MSE as a function of box size $r$ for the two chosen delay times, Fig.\[fig:MSE2d\]. For regular motion we expect the entropy to approach zero with decreasing $r$; this is observed for the time series with delay time $\tau = 1.8$ms. For chaotic dynamics the entropy should stay finite, as observed for $\tau = 2.1$ms. For the sake of completeness, it should be mentioned that in the case of stochastic dynamics the entropy would diverge with decreasing spatial resolution $r$, cf. Eq. (\[eq:MSE\_GWN\]).
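The coarse-graining and sample-entropy steps can be sketched as follows. This is a simplified stand-in for the PhysioNet algorithm, not a reimplementation of it; the optional `sigma` argument, which fixes the tolerance to the standard deviation of the original (uncoarsened) series as the $1/\sqrt{N}$ scaling argument above requires, is an implementation choice:

```python
import numpy as np

def coarse_grain(x, N):
    """Non-overlapping averages of N consecutive points."""
    x = np.asarray(x, float)
    n = (len(x) // N) * N
    return x[:n].reshape(-1, N).mean(axis=1)

def sample_entropy(x, m=2, r=0.15, sigma=None):
    """Sample entropy S_E(m, r): -ln of the ratio of (m+1)- to m-point
    template matches within eps = r * sigma (infinity norm, i != j)."""
    x = np.asarray(x, float)
    eps = r * (np.std(x) if sigma is None else sigma)

    def matches(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        cnt = 0
        for i in range(len(t) - 1):   # count each (i, j) pair once
            cnt += np.count_nonzero(np.max(np.abs(t[i + 1:] - t[i]), axis=1) < eps)
        return cnt

    U_m, U_m1 = matches(m), matches(m + 1)
    return -np.log(U_m1 / U_m) if U_m1 > 0 else np.inf
```

For GWN the estimate at scale $N=1$ lies close to $-\ln[r/\sqrt{\pi}]$, and coarse-graining with a fixed tolerance lowers the entropy, reproducing Eq. (\[eq:MSE\_GWN\]) qualitatively.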
![Multiscale entropy $S_E$ depending on scale factor $N$ and box size $r$. For regular motion we expect the entropy to approach zero with decreasing $r$. For chaotic dynamics the entropy should stay finite. Since $S_E$ is not decreasing with scale factor significantly, it seems there is no characteristic time scale present in the data.[]{data-label="fig:MSE2d"}](f6.eps){width="50.00000%"}
In Figs.\[fig:MSE2d\] and \[fig:MSE1d\] we further analyze the scale-factor dependence of the MSE. The entropic measure is always larger for the chaotic time series, since it is the more complex one. The MSE for small scale factors, Fig.\[fig:MSE1d\]a, indicates that there is no characteristic time scale, comparable to [@Costa2002]. But for larger scale factors the MSE decays in a manner comparable to Gaussian white noise, Fig.\[fig:MSE1d\]b. Thus, even in the chaotic case, there exists a characteristic time scale which is close to the delay time.
![Multiscale entropy $S_E$ for fixed $m=2$ and $r=0.05$ depending on scale factor $N$. The time series with delay time $\tau = 2.1$ms seems to be more complex than the time series with delay time $\tau = 1.8$ms since its entropy is larger. **a)** To gain a higher scale-factor resolution we recorded also time series with smaller sampling time $\Delta t =0.1$ms. **b)** The existence of a characteristic time scale of the order of the delay time is indicated by the decay of entropy with increasing scale factor $N$. The gray line represents GWN with $r=0.25$. []{data-label="fig:MSE1d"}](f7a.eps){width="43.00000%"}
![Multiscale entropy $S_E$ for fixed $m=2$ and $r=0.05$ depending on scale factor $N$. The time series with delay time $\tau = 2.1$ms seems to be more complex than the time series with delay time $\tau = 1.8$ms since its entropy is larger. **a)** To gain a higher scale-factor resolution we recorded also time series with smaller sampling time $\Delta t =0.1$ms. **b)** The existence of a characteristic time scale of the order of the delay time is indicated by the decay of entropy with increasing scale factor $N$. The gray line represents GWN with $r=0.25$. []{data-label="fig:MSE1d"}](f7b.eps){width="43.00000%"}
The frequencies dominating $S(\omega)$, Fig.\[fig:PSD\], are also present in $S^{(N)}_E(m,r)$; they correspond to minima in Fig.\[fig:MSE1d\]. We learn that coarse-graining the data over multiples of the time scale $t_p$, which belongs to structures in the cutting-process dynamics, leads to less complex time series containing less information.
Conclusions and last remarks
============================
In conclusion, the 0-1 test differentiates between the two types of motion. Depending on the chosen delay time for the investigated DDE, Eqs. (\[eq1\]) and (\[eq2\]), regular or chaotic motion is observed, and a transition from chaotic to regular motion is detected with increasing cutting speed. The nature of the solutions has also been confirmed by the corresponding power spectral densities and multiscale entropies. The latter reveals more insight into the process dynamics but comes at a much higher computational cost than the 0-1 test and the spectral calculations.
The 0-1 test proved to be relatively simple and, consequently, useful for systems with delay and discontinuities. A major advantage of the test is its low computational effort and the possibility to compute it “on the fly” while the data is still being acquired. Another useful aspect of the 0-1 test is that the result can be plotted against the parameter $\tau$.
The presented method gives a quantitative criterion for chaos similar to the maximum Lyapunov exponent.
As demonstrated by Falconer [*et al.*]{} [@Falconer2007] and Krese and Govekar [@Krese2012], the method can be used on experimental data as well. Unfortunately, in the case of the cutting process, experimental data are often characterized by a relatively high level of noise [@Litak2004]. In the examined system we did not include additive noise. It has been shown that the 0-1 test can be applied to dynamical systems with additive noise, provided the signal-to-noise ratio is good [@Gottwald2005].
Acknowledgements {#acknowledgements .unnumbered}
================
This work is partially supported by the European Union within the framework of the Integrated Regional Development Operational Program as project POIG.0101.02-00-015/08 and by the 7th Framework Programme FP7-REGPOT-2009-1, under Grant Agreement No. 245479.
Taylor, F.: On the art of cutting metals. Trans. ASME [**28**]{}, 31–350 (1907)

Arnold, R.N.: The mechanism of tool vibration in the cutting of steel. Proc. Inst. Mech. Eng. [**154**]{}, 261–284 (1946)

Tobias, S.A., Fishwick, W.: A Theory of Regenerative Chatter. The Engineer, London (1958)

Tlusty, J., Polacek, M.: The stability of machine tool against self-excited vibrations in machining. ASME Int. Res. Prod. Eng. 465–474 (1963)

Merrit, H.E.: Theory of self-excited machine-tool chatter. ASME J. Eng. Ind. [**87**]{}, 447–454 (1965)

Wu, D.W., Liu, C.R.: An analytical model of cutting dynamics. Part 1: Model building. ASME J. Eng. Ind. [**107**]{}, 107–111 (1985)

Wu, D.W., Liu, C.R.: An analytical model of cutting dynamics. Part 2: Verification. ASME J. Eng. Ind. [**107**]{}, 112–118 (1985)

Altintas, Y.: Manufacturing Automation: Metal Cutting Mechanics, Machine Tool Vibrations, and CNC Design. Cambridge University Press, Cambridge (2000)

Warminski, J., Litak, G., Cartmell, M.P., Khanin, R., Wiercigroch, M.: Approximate analytical solutions for primary chatter in the nonlinear metal cutting model. J. Sound Vibr. [**259**]{}, 917–933 (2003)

Insperger, T., Gradisek, J., Kalveram, M., Stépán, G., Weinert, K., Govekar, E.: Machine tool chatter and surface location error in milling processes. J. Manufac. Sci. Eng. [**128**]{}, 913–920 (2006)

Ganguli, A., Deraemaeker, A., Preumont, A.: Regenerative chatter reduction by active damping control. J. Sound Vibr. [**300**]{}, 847–862 (2007)

Grabec, I.: Chaotic dynamics of the cutting process. Int. J. Mach. Tools Manuf. [**28**]{}, 19–32 (1988)

Tansel, I.N., Erkal, C., Keramidas, T.: The chaotic characteristics of three-dimensional cutting. Int. J. Mach. Tools Manufact. [**32**]{}, 811–827 (1992)

Gradisek, J., Govekar, E., Grabec, I.: Time series analysis in metal cutting: chatter versus chatter-free cutting. Mech. Syst. Signal Process. [**12**]{}, 839–854 (1998)

Gradisek, J., Govekar, E., Grabec, I.: Using coarse-grained entropy rate to detect chatter in cutting. J. Sound Vibr. [**214**]{}, 941–952 (1998)

Marghitu, D.B., Ciocirlan, B.O., Craciunoiu, N.: Dynamics in orthogonal turning process. Chaos, Solitons & Fractals [**12**]{}, 2343–2352 (2001)

Litak, G.: Chaotic vibrations in a regenerative cutting process. Chaos, Solitons & Fractals [**13**]{}, 1531–1535 (2002)

Fofana, M.S.: Delay dynamical systems and applications to nonlinear machine-tool chatter. Chaos, Solitons & Fractals [**12**]{}, 731–747 (2003)

Gradisek, J., Grabec, I., Siegert, S., Friedrich, R.: Stochastic dynamics of metal cutting: bifurcation phenomena in turning. Mech. Syst. Signal Process. [**16**]{}, 831–840 (2002)

Litak, G., Rusinek, R., Teter, A.: Nonlinear analysis of experimental time series of a straight turning process. Meccanica [**39**]{}, 105–112 (2004)

Stépán, G., Szalai, R., Insperger, T.: Nonlinear dynamics of high-speed milling subjected to regenerative effect. In: Radons, G. (ed.) Nonlinear Dynamics of Production Systems. Wiley, New York (2003)

Vela-Martínez, L., Jáuregui-Correa, J.C., González-Brambila, O.M., Herrera-Ruiz, G., Lozano-Guzmán, A.: Instability conditions due to structural nonlinearities in regenerative chatter. Nonlinear Dynamics [**56**]{}, 415–427 (2009)

Gottwald, G.A., Melbourne, I.: A new test for chaos in deterministic systems. Proc. R. Soc. Lond. A [**460**]{}, 603–611 (2004)

Gottwald, G.A., Melbourne, I.: Testing for chaos in deterministic systems with noise. Physica D [**212**]{}, 100–110 (2005)

Falconer, I., Gottwald, G.A., Melbourne, I., Wormnes, K.: Application of the 0-1 test for chaos to experimental data. SIAM J. Appl. Dyn. Syst. [**6**]{}, 395–402 (2007)

Gottwald, G.A., Melbourne, I.: On the implementation of the 0-1 test for chaos. SIAM J. Appl. Dyn. Syst. [**8**]{}, 129–145 (2009)

Gottwald, G.A., Melbourne, I.: On the validity of the 0-1 test for chaos. Nonlinearity [**22**]{}, 1367–1382 (2009)

Litak, G., Syta, A., Wiercigroch, M.: Identification of chaos in a cutting process by the 0-1 test. Chaos, Solitons & Fractals [**40**]{}, 2095–2101 (2009)

Pratt, J.R., Nayfeh, A.H.: Chatter control and stability analysis of a cantilever boring bar under regenerative cutting conditions. Philos. Trans. R. Soc. Lond. A [**359**]{}, 759–792 (2001)

Stépán, G.: Modelling nonlinear regenerative effects in metal cutting. Philos. Trans. R. Soc. Lond. A [**359**]{}, 739–757 (2001)

Wang, X.S., Hu, J., Gao, J.B.: Nonlinear dynamics of regenerative cutting processes – Comparison of two models. Chaos, Solitons & Fractals [**29**]{}, 1219–1228 (2006)

Litak, G., Sen, A.K., Syta, A.: Intermittent and chaotic vibrations in a regenerative cutting process. Chaos, Solitons & Fractals [**41**]{}, 2115–2122 (2009)

Krese, B., Govekar, E.: Nonlinear analysis of laser droplet generation by means of 0-1 test for chaos. Nonlinear Dynamics [**67**]{}, 2101–2109 (2012)

Costa, M., Goldberger, A.L., Peng, C.-K.: Multiscale entropy analysis of complex physiologic time series. Phys. Rev. Lett. [**89**]{}, 068102 (2002)

Costa, M., Goldberger, A.L., Peng, C.-K.: Multiscale entropy analysis of biological signals. Phys. Rev. E [**71**]{}, 021906 (2005)

Goldberger, A.L., Amaral, L.A.N., Glass, L., Hausdorff, J.M., Ivanov, P.Ch., Mark, R.G., Mietus, J.E., Moody, G.B., Peng, C.-K., Stanley, H.E.: PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation [**101**]{}, 215–220 (2000)

Richman, J.S., Moorman, J.R.: Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol. [**278**]{}, H2039–H2049 (2000)

Grassberger, P., Procaccia, I.: Estimation of the Kolmogorov entropy from a chaotic signal. Phys. Rev. A [**28**]{}, 2591–2593 (1983)
[^1]: ${\cal F}\{\cdot\}$ denotes the Fourier transform.
---
abstract: 'We present a catalog of emissive point sources detected in the SPT-SZ survey, a contiguous 2530-square-degree area surveyed with the South Pole Telescope (SPT) from 2008 to 2011 in three bands centered at 95, 150, and 220GHz. The catalog contains 4845 sources measured at a significance of 4.5$\sigma$ or greater in at least one band, corresponding to detections above approximately 9.8, 5.8, and 20.4mJy at 95, 150, and 220GHz, respectively. Spectral behavior in the SPT bands is used for source classification into two populations based on the underlying physical mechanisms of compact, emissive sources that are bright at millimeter wavelengths: synchrotron radiation from active galactic nuclei and thermal emission from dust. The latter population includes a component of high-redshift sources often referred to as submillimeter galaxies (SMGs). In the relatively bright flux ranges probed by the survey, these sources are expected to be magnified by strong gravitational lensing. The survey also contains sources consistent with protoclusters, groups of dusty galaxies at high redshift undergoing collapse. We cross-match the SPT-SZ catalog with external catalogs at radio, infrared, and X-ray wavelengths and identify available redshift information. The catalog splits into 3980 synchrotron-dominated and 865 dust-dominated sources, and we determine a list of 506 SMGs. Ten sources in the catalog are identified as stars. We calculate number counts for the full catalog, and for the synchrotron and dusty components, using a bootstrap method and compare our measured counts with models. This paper represents the third and final catalog of point sources in the SPT-SZ survey.'
author:
- 'W. B. Everett'
- 'L. Zhang'
- 'T. M. Crawford'
- 'J. D. Vieira'
- 'M. Aravena'
- 'M. A. Archipley'
- 'J. E. Austermann'
- 'B. A. Benson'
- 'L. E. Bleem'
- 'J. E. Carlstrom'
- 'C. L. Chang'
- 'S. Chapman'
- 'A. T. Crites'
- 'T. de Haan'
- 'M. A. Dobbs'
- 'E. M. George'
- 'N. W. Halverson'
- 'N. Harrington'
- 'G. P. Holder'
- 'W. L. Holzapfel'
- 'J. D. Hrubes'
- 'L. Knox'
- 'A. T. Lee'
- 'D. Luong-Van'
- 'A. C. Mangian'
- 'D. P. Marrone'
- 'J. J. McMahon'
- 'S. S. Meyer'
- 'L. M. Mocanu'
- 'J. J. Mohr'
- 'T. Natoli'
- 'S. Padin'
- 'C. Pryke'
- 'C. L. Reichardt'
- 'C. A. Reuter'
- 'J. E. Ruhl'
- 'J. T. Sayre'
- 'K. K. Schaffer'
- 'E. Shirokoff'
- 'J. S. Spilker'
- 'B. Stalder'
- 'Z. Staniszewski'
- 'A. A. Stark'
- 'K. T. Story'
- 'E. R. Switzer'
- 'K. Vanderlinde'
- 'A. Wei[ß]{}'
- 'R. Williamson'
bibliography:
- '../../../BIBTEX/spt.bib'
title: 'Millimeter-wave Point Sources from the 2500-square-degree SPT-SZ Survey: Catalog and Population Statistics'
---
Introduction {#sec:intro}
============
The South Pole Telescope (SPT, @carlstrom11) is a 10-m millimeter-wavelength telescope which has provided an immensely rich set of survey data. From 2008 to 2011, the SPT was used to conduct a 2500-square-degree survey of the southern sky in three bands centered at 95, 150, and 220GHz with arcminute resolution. While the primary science goal of this survey, the South Pole Telescope Sunyaev-Zel’dovich (SPT-SZ) Survey, was a search for galaxy clusters using the Sunyaev-Zel’dovich effect [@bleem15b], the dataset is also ideal for finding compact, extragalactic sources of emission [@vieira10]. The large area, high resolution, and comparatively low noise of the full SPT-SZ survey provide an extensive catalog of new sources selected at millimeter (mm) wavelengths, spanning flux densities from a few mJy to many Jy. The multi-frequency nature of the dataset further provides the opportunity for population separation based on the spectral characteristics of different types of sources.
Broadly speaking, extragalactic sources that are bright at mm wavelengths fall into two categories: sources whose flux increases with frequency, and sources whose flux is either nearly constant or decreasing with frequency. Flat- or falling-spectrum sources are generally associated with active galactic nuclei (AGN), where the source of the mm flux is from acceleration of relativistic charged particles producing synchrotron radiation. Rising-spectrum sources are predominantly dusty star-forming galaxies (DSFGs). The high dust content of these sources makes them difficult to detect at optical wavelengths, but the mm and submillimeter (submm) flux from these sources is thermal emission from the dust itself, making the mm/submm wavebands particularly useful for identifying and observing this population.
Historically, the synchrotron population has been well-studied at radio wavelengths (a review of the current understanding of radio source populations from millimeter and radio surveys can be found in @dezotti10). The spectra of radio sources are generally characterized by a power law relating source flux density, $S$, to frequency, $\nu$: $S \propto \nu^\alpha$. AGN-fueled radio sources can be roughly separated into two populations: flat-spectrum sources, generally defined to have $\alpha > -0.5$, and steep-spectrum sources with $\alpha < -0.5$. In the currently accepted “unified model” [e.g., @urry95; @netzer15], these two populations are actually the same type of physical object, whose spectral appearance depends on the orientation of the observer relative to the axis of the characteristic jets emerging from the central black hole. In side-on observations relative to the typically extended jets, the optically thin lobes create a steep component of the spectrum at radio frequencies, and the central black hole engine is obscured by the dusty accretion torus. For sight lines along the axis of the jet, the object appears as a compact flat-spectrum source, also referred to as a blazar.
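As a small illustration of the power-law classification above, the spectral index $\alpha$ follows from flux densities measured in any two bands via $\alpha = \ln(S_1/S_2)/\ln(\nu_1/\nu_2)$; a trivial sketch (function name and example values are illustrative):

```python
import math

def spectral_index(S1, nu1, S2, nu2):
    """Spectral index alpha in S proportional to nu**alpha, from flux
    densities S1, S2 measured at frequencies nu1, nu2 (same units)."""
    return math.log(S1 / S2) / math.log(nu1 / nu2)
```

By the convention above, a source with $\alpha > -0.5$ between, e.g., 95 and 150GHz would be labeled flat-spectrum, and one with $\alpha < -0.5$ steep-spectrum.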
The characterization of dusty sources has progressed significantly as mm- and submm-wave surveys have grown in size and resolving power over the last several decades. In the 1980s, the all-sky infrared satellite IRAS discovered a population of 18,351 extra-galactic sources [@saunders00b]. Most of these were at relatively low redshifts, $z \lesssim 0.3$, with emission dominated by dust, and were classified as luminous infrared (IR) galaxies (LIRGs) (10$^{11} < L_{\textrm{IR}} < 10^{12}$ L$_\odot$) and ultraluminous IR galaxies (ULIRGs) (10$^{12} < L_{\textrm{IR}} < 10^{13}$ L$_\odot$), compared with typical spiral galaxies with luminosities around 10$^{10}$ L$_\odot$ [@blain02]. Beginning in the late 1990s, observations at 450 and 850$\mu$m with the Submillimeter Common-User Bolometer Array (SCUBA) instrument on the James Clerk Maxwell Telescope (JCMT) [@holland99] discovered a high-redshift component of the DSFG population, termed submillimeter galaxies (SMGs) after the wavelength at which they were identified. These early surveys of SMGs covered relatively small areas, only a few square degrees at most, and as a result traced out populations of relatively dim sources [e.g., @smail97; @hughes98c; @barger98; @eales00; @cowie02; @scott02b; @bertoldi07; @weiss09].
The advent of large-area and multi-band surveys allowed detections probing the brightest and rarest SMGs. This included surveys conducted using the SPT at 1.4, 2.0, and 3.2mm [e.g., @vieira10], the Spectral and Photometric Imaging Receiver (SPIRE) at 250, 350, and 500$\mu$m on the [*Herschel Space Observatory*]{}[e.g., @eales10], the Planck satellite [e.g., @planck15-26], and the Atacama Cosmology Telescope [e.g., @gralla19].
The first released compact-source sample from the SPT, @vieira10, included a population of extremely bright ($\sim$30mJy at 1.4mm), rising-spectrum sources that did not have counterparts in IRAS catalogues (indicating they were most likely at high redshift). Follow-up observations of these sources and a similarly bright population of sources detected in early [*Herschel*]{} surveys using telescopes such as the Atacama Large Millimeter/Submillimeter Array (ALMA) and the Submillimeter Array (SMA) have demonstrated that these objects are indeed at high redshift and most of them are magnified by strong gravitational lensing by a massive object along the line of sight [@negrello10; @vieira13; @spilker16; @hezaveh16].
Thermal dust emission at high redshift is probed almost uniquely by moderate-to-high-resolution, mm/submm observatories, including the SPT and [*Herschel*]{}. Where high-redshift observations of other emission mechanisms at other wavelengths suffer from cosmological dimming, mm/submm observations benefit from a strong negative K-correction [@blain96] that results in nearly constant observed flux density for a source with a dust-like spectrum, out to approximately $z=10$ for mm wavelengths. The combination of this effect and the phenomenon of gravitational lensing makes large-area mm/submm surveys uniquely powerful in studying the nature of star formation at the highest redshifts possible.
In this work, we present results from the full 2500 square degrees of the SPT-SZ survey; this analysis is an extension of the work of two previous papers, @vieira10 (hereafter V10) and @mocanu13 (hereafter M13), and builds on the same analysis pipeline. V10 developed the source-finding pipeline and applied it to a single field covering 87 square degrees observed in 2008 in two frequencies. M13 expanded that analysis to 5 fields, two observed in 2008 and three in 2009 (771 square degrees in total), and added a third frequency. In this current paper, we add 1759 square degrees of previously unanalyzed data and include additional data for two fields which were re-observed in 2010 and 2011. We adjust the previous pipeline to be compatible with the goals of the full survey (full area coverage) and work to optimize elements in the pipeline chain. Sections \[sec:observ\] and \[sec:reduction\] present an overview description of the data and analysis pipeline. Section \[sec:catalog\] provides a description and characterization of the catalog, including source population separation, and Section \[sec:number\_counts\] presents the number counts. Section \[sec:discussion\] presents a discussion of the results, and conclusions can be found in Section \[sec:conclusion\]. Throughout this work, we assume a standard $\Lambda$CDM cosmology with $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.3$, and $\Omega_\Lambda = 0.7$.
Observations {#sec:observ}
============
The South Pole Telescope (SPT, @carlstrom11), a 10-meter telescope designed for observations in millimeter wavelengths, is located at the geographic South Pole and was designed to measure low-contrast sources such as CMB anisotropies with high sensitivity. The first camera for the SPT, SPT-SZ, contained a 960-pixel array of transition-edge-sensor bolometers, with sensitivity in three bands centered at 95, 150, and 220GHz (3.2, 2.0, and 1.4mm, respectively). The SPT-SZ receiver had an angular resolution of roughly 1.7, 1.2, and 1.0arcmin at 95, 150, and 220GHz, respectively, with a 1-degree diffraction-limited field-of-view. The pixels on the focal plane were arranged into 6 triangular wedges forming a hexagon, with each wedge sensitive in a single band.
The SPT-SZ survey represents the culmination of four years of observations, 2008–2011, of roughly 2500 square degrees on the sky. The sky area covered spans the region in the southern hemisphere from roughly declination (decl.) -65 to -40 degrees and from right ascension (R.A.) $20^\text{h}$ to $7^\text{h}$, avoiding sky area contaminated by emission from the Galaxy. Over the duration of the survey, the composition of the receiver changed slightly. In 2008, the focal plane was composed of three 150GHz wedges, two 220GHz wedges, and a single 95GHz wedge, but the 95GHz wedge failed to produce science-quality data. For 2009, one 220GHz wedge was swapped for another 150 GHz wedge, and the 95GHz wedge was upgraded to an improved-quality wedge, resulting in four 150GHz wedges, and one each of 220GHz and 95GHz. The composition of the focal plane then remained the same for 2009, 2010, and 2011.
The full 2500-square-degree area was split into 19 contiguous fields which were observed independently. The characteristics of each field are presented in Table \[tab:fieldstable\], and Figure \[fig:surveyfields\] shows the location of each field on the sky. In observing a given field, the telescope started in one corner, scanned back and forth across the sky at constant elevation, then took a step in elevation and repeated until it had covered the desired area in that field. Scan speeds varied between 0.25 and 0.42deg/sec. Between observations, the telescope's initial starting position was dithered to achieve uniform coverage of each field. Only data from the constant-speed portion of each scan are used in the map for that particular observation. The three 2009 fields, [ra21hdec-50]{}, [ra3h30dec-60]{}, and [ra21hdec-60]{}, and one 2008 field, [ra23h30dec-55]{}, were observed using a lead-trail scan strategy, in which the field is split into two halves, left and right. The two halves were observed independently, with the second observation delayed such that, due to sky rotation, the two halves were observed over the same azimuth range. This allows for the possibility of removing ground-synchronous contamination. However, ground contamination in those fields was measured to be negligible, so the lead and trail portions are simply coadded in this analysis. The remaining fields were observed using a simple scan in azimuth, except for a portion of the observations of the [ra21hdec-50]{} field, which used an elevation scan, where the telescope scans up and down in elevation while allowing the field to drift through the field-of-view in azimuth. Techniques for analyzing this field are discussed in detail in M13.
The observation strategy for each field was designed to produce as uniform a survey depth as possible across the full area, except for two fields, [ra5h30dec-55]{} and [ra23h30dec-55]{}, both of which were observed originally in 2008 and then re-observed in 2010 or 2011 to add data at 95GHz, which was unavailable in 2008, and to reach nominally twice the depth of the 2008 survey at 150GHz.
[lccccccc]{} Field & Year(s) & R.A. (deg) & Decl. (deg) & $\Delta$R.A. (deg) & $\Delta$Decl. (deg) & Area (deg$^2$) & Sectors\
[ra5h30dec-55]{} & 2008/2011 & 82.5 & -55.0 & 15 & 10 & 89 & $3\times3$\
[ra23h30dec-55]{} & 2008/2010 & 352.5 & -55.0 & 15 & 10 & 108 & $3\times3$\
[ra21hdec-60]{} & 2009 & 315.0 & -60.0 & 30 & 10 & 150 & $6\times3$\
[ra3h30dec-60]{} & 2009 & 52.5 & -60.0 & 45 & 10 & 225 & $8\times3$\
[ra21hdec-50]{} & 2009 & 315.0 & -50.0 & 30 & 10 & 193 & $6\times3$\
[ra4h10dec-50]{} & 2010 & 62.5 & -50.0 & 25 & 10 & 166 & $5\times3$\
[ra0h50dec-50]{} & 2010 & 12.5 & -50.0 & 25 & 10 & 152 & $5\times3$\
[ra2h30dec-50]{} & 2010 & 37.5 & -50.0 & 25 & 10 & 155 & $5\times3$\
[ra1hdec-60]{} & 2010 & 15.0 & -60.0 & 30 & 10 & 140 & $6\times3$\
[ra5h30dec-45]{} & 2010 & 82.5 & -45.0 & 15 & 10 & 105 & $3\times3$\
[ra6h30dec-55]{} & 2011 & 97.5 & -55.0 & 15 & 10 & 82 & $3\times3$\
[ra23hdec-62.5]{} & 2011 & 345.0 & -62.5 & 30 & 5 & 65 & $6\times2$\
[ra21hdec-42.5]{} & 2011 & 315.0 & -42.5 & 30 & 5 & 118 & $6\times2$\
[ra22h30dec-55]{} & 2011 & 337.5 & -55.0 & 15 & 10 & 73 & $3\times3$\
[ra23hdec-45]{} & 2011 & 345.0 & -45.0 & 30 & 10 & 221 & $6\times3$\
[ra6hdec-62.5]{} & 2011 & 90.0 & -62.5 & 30 & 5 & 65 & $6\times2$\
[ra3h30dec-42.5]{} & 2011 & 52.5 & -42.5 & 45 & 5 & 185 & $8\times2$\
[ra1hdec-42.5]{} & 2011 & 15.0 & -42.5 & 30 & 5 & 126 & $6\times2$\
[ra6h30dec-45]{} & 2011 & 97.5 & -45.0 & 15 & 10 & 112 & $3\times3$\
Total: & & & & & & 2530 & \[tab:fieldstable\]
Data reduction and analysis {#sec:reduction}
===========================
The following section describes the steps in the analysis pipeline from timestream data for individual bolometers to source catalogs. These steps include: filtering of each bolometer’s timestream data for each scan; forming a single-observation map by coadding each bolometer’s contribution to map pixels, and then forming a single map for each field by coadding all single-observation maps; constructing masks to define the high-weight regions of the fields for source-finding; developing an optimal filter to amplify the signal-to-noise for detecting compact sources; extracting sources separately for each band using a CLEAN algorithm; and finally, forming a single multi-band catalog, taking into account the effect of flux biases and overlap regions between fields.
Timestream filtering and mapmaking {#sec:filtering}
----------------------------------
The response of each detector is recorded at 100Hz as time-ordered data (TOD) as the telescope scans across the sky. We apply a set of filters to the TOD to suppress noise above the temporal frequency corresponding to the map pixel size and low-frequency noise due to atmosphere. The filtering we apply in this work is very similar to that in M13. The data are low-pass filtered above a temporal frequency corresponding to $\ell = 37500$ in the scan direction to remove noise on scales smaller than the chosen map pixel size, 0.25arcmin. To mitigate atmospheric noise, we apply a first-order polynomial subtraction and a high-pass filter below $\ell = 246$. Since atmospheric noise will be spatially coherent on the size scale of the detector wedges, we also remove a mean across each wedge of the receiver from all well-performing bolometers at each time step.
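As a rough illustration of the per-scan filtering described above, the sketch below applies a first-order polynomial subtraction and a band-pass between the quoted multipole cutoffs. The 100 Hz sample rate and the $\ell$ cutoffs come from the text; the function name, the example scan speed, and the simple FFT-based implementation are illustrative assumptions, not the actual SPT pipeline code.

```python
import numpy as np

def filter_timestream(tod, scan_speed_dps, fsamp=100.0,
                      ell_low=246, ell_high=37500, poly_order=1):
    """Sketch: polynomial subtraction plus a band-pass filter between
    the temporal frequencies corresponding to ell_low and ell_high."""
    n = len(tod)
    x = np.arange(n)
    # First-order polynomial (offset + slope) subtraction per scan.
    resid = tod - np.polyval(np.polyfit(x, tod, poly_order), x)
    # A multipole ell maps to temporal frequency f ~ ell * v_scan / 360.
    f = np.fft.rfftfreq(n, d=1.0 / fsamp)
    keep = (f >= ell_low * scan_speed_dps / 360.0) & \
           (f <= ell_high * scan_speed_dps / 360.0)
    return np.fft.irfft(np.fft.rfft(resid) * keep, n=n)

# A slow linear drift (atmosphere-like) plus faint white noise is
# strongly suppressed, while the timestream length is preserved.
rng = np.random.default_rng(0)
tod = 0.5 * np.arange(1000) / 100.0 + rng.normal(0.0, 1e-3, 1000)
filtered = filter_timestream(tod, scan_speed_dps=0.3)
```

The wedge common-mode subtraction mentioned above would act across detectors at each time sample and is omitted here for brevity.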
The filtered TOD for each bolometer are then coadded into 0.25 by 0.25arcmin pixels by inverse-variance weighting, adding contributions from bolometers to each pixel to form a single map per observation. The weights for each bolometer are calculated from the power spectral density of each detector’s TOD in the range from 1 to 3Hz. We pixelize each field using an oblique Lambert equal-area projection. This choice of projection is important for source-finding because it preserves the source shape across the full area of the map. However, it also produces complications in the analysis, since the scan direction rotates with pixel location in the map. The ramifications of this are discussed in Section \[sec:optfiltsection\].
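The inverse-variance weighting can be sketched as a per-pixel weighted mean; the toy maps and weights below are hypothetical stand-ins for the single-observation maps and bolometer weight maps.

```python
import numpy as np

def coadd_maps(maps, weights):
    """Inverse-variance-weighted coadd: per-pixel weighted mean of the
    single-observation maps, plus the summed weight map."""
    maps = np.asarray(maps, dtype=float)
    weights = np.asarray(weights, dtype=float)
    wsum = weights.sum(axis=0)
    num = (maps * weights).sum(axis=0)
    coadd = np.divide(num, wsum, out=np.zeros_like(num), where=wsum > 0)
    return coadd, wsum

# Two toy observations of the same source with different noise levels;
# weights are 1/sigma^2, so the deeper map dominates the coadd.
rng = np.random.default_rng(1)
sky = np.zeros((4, 4))
sky[2, 2] = 10.0
obs = [sky + rng.normal(0.0, 1.0, sky.shape),
       sky + rng.normal(0.0, 2.0, sky.shape)]
wts = [np.full(sky.shape, 1.0), np.full(sky.shape, 0.25)]
coadd, wtot = coadd_maps(obs, wts)
```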
To make final coadded maps from all the observations, we apply several cuts (which have previously been shown to be useful for SPT data; see, e.g., @schaffer11) based on the mean weights and mean RMS of the uniform-weight region of each single-observation map. We cut observations with excessively high median weights, which occur when a bolometer’s TOD has anomalously low noise. In the past, this has been shown to correspond to poor bolometer behavior, such as when a detector changes operating point due to shifts in the amount of loading [@schaffer11]. For maps made in reasonably good weather conditions, the weights scale well with the RMS of the map; however, on poor-weather days, $1/f$ noise dominates the RMS and the 1–3Hz range is no longer a good estimate of the weight that should be assigned to that bolometer. Therefore, we also cut observations where the map RMS does not scale properly with the median weight in the map. The single-observation maps that survive the cuts are coadded by inverse-variance weighting each pixel to form a single coadded map per field and per band.
To calibrate the maps, we use both a relative and absolute calibration. The relative calibration of the TOD from one observation to the next is done through repeated observations of the galactic HII region RCW38 and reference from a thermal calibrator source installed in the bolometer optical path [@schaffer11]. The absolute calibration is determined from comparisons of the SPT power spectrum to that from @planck15-11 in the $\ell$ range from 682 to 1178. This results in fractional errors in temperature of 1.05%, 1.15%, and 2.24% in 95, 150, and 220GHz, respectively.
The pointing model used for constructing maps is based on regular measurements of galactic HII regions in addition to data recorded by sensors on the telescope measuring temperature, linear displacement, and tilt. To check the absolute astrometry, we correct the global pointing of each field by cross-matching the positions of the brightest 40 SPT-SZ sources in each of the three bands per field to source locations in the Australia Telescope 20 GHz (AT20G) Survey catalog [@murphy10], which has an RMS positional accuracy of 1 arcsec. We then fit for a global pointing correction using the cross-matched locations. We iterate on this process of cross-matching and fitting for a correction until the calculated offset is smaller than the residual scatter, and we find the RMS residual pointing scatter for the on-average 26 brightest sources with cross-matches in each field to be 4.3arcsec in declination and 4.6arcsec in R.A.$\cdot \cos$(Decl.).
Mask construction {#masks}
-----------------
Each field is analyzed separately in our pipeline for extracting sources. We then cross-match the single-field catalogs at the end, accounting for places where fields overlap, to form a single catalog for the survey. Field masking is needed to exclude low signal-to-noise edges due to turn-around regions of the scan strategy and non-uniform array coverage between bands on the focal plane. However, we also want to define masks such that we have continuous coverage of the full 2500-square-degree survey area. This requires that we define separate masks per band for each field, because different bands occupy physically offset locations in different wedges on the focal plane and therefore observe slightly offset regions on the sky. In principle, this choice only adds slightly more complicated bookkeeping for cataloging sources, since now a source detected on the edge of one field in one band could be detected in an adjacent field in a different band. To achieve continuous coverage, we also need to extend the field masks to lower signal-to-noise regions compared with M13, making the noise level within each field slightly less uniform.
Optimal filtering for source extraction {#sec:optfiltsection}
---------------------------------------
As the sources we detect are expected to be unresolved by the telescope (except for nearby sources), a source should manifest in our maps as an SPT beam with the time-stream filtering applied. We can improve our signal-to-noise for detecting objects with an expected source shape by using an appropriate optimal filter. The filter takes advantage of knowledge of the source shape and of the noise in the region of the map where the source is located, which includes residual atmosphere, instrument noise, and the primary anisotropies of the CMB, which act as a source of noise for the detection of compact, extragalactic sources. The first component needed in constructing the source profile is the beam, which is measured using a combination of observations of Jupiter and Venus, as well as the brightest point sources in the fields. The main lobes of the beams are measured to be well described by Gaussian functions with FWHM of 1.7, 1.2, and 1.0arcmin at 95, 150, and 220GHz, respectively. The sidelobes of the beams are downweighted by the filter and are therefore unimportant for the point-source analysis pipeline. To model the source profile, we insert a beam into a noiseless map and then reobserve the source once for each single-observation map, using the characteristics of the telescope’s performance for that particular observation and the time-stream filtering. The Fourier-domain version of this source profile is used as the transfer function for our maps. All the single-observation transfer functions are then coadded into a single transfer function for the coadded map.
Following the formalism set out in @tegmark98 and @haehnelt96, to maximize the signal-to-noise of sources in the map, we filter the map with an appropriately normalized version of the signal-to-noise of the source. We apply the optimal filter, $\psi$, in the Fourier domain, given by $$\psi = \frac{\tau^TN^{-1}}{\tau^TN^{-1}\tau},$$ where $\tau$ is the transfer function and $N$ is the 2D noise power spectral density (PSD), resulting in a filtered map still in units of temperature. In addition to the source profile, we also need to characterize the noise around each source. To do this, we estimate the noise PSD of the coadded map by averaging 100 difference maps. Each difference map is constructed by multiplying a randomly chosen half of the individual-observation maps by $-1$ and adding them all. The 2D power spectra of the difference maps are then averaged to generate a single 2D noise PSD for the coadded map. Because differencing individual-observation maps cancels out the contribution to the noise from the CMB anisotropies, we add back in a Gaussian realization of the best-fit CMB power spectrum from @keisler11. Smaller contributions to the noise, such as those from secondary CMB anisotropies (the thermal and kinetic SZ effects), are neglected in the filter construction.
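Concretely, the filter above can be sketched in a few lines. The Gaussian source profile, the flat (white) noise PSD, and the source amplitude below are toy stand-ins for the measured transfer function and the difference-map PSD; only the structure of the calculation follows the text.

```python
import numpy as np

def matched_filter(tau_ft, noise_psd):
    """psi = N^-1 tau / (tau^T N^-1 tau): optimal filter for a known
    source profile tau and a diagonal Fourier-space noise covariance."""
    return (tau_ft / noise_psd) / np.sum(np.abs(tau_ft) ** 2 / noise_psd)

# Toy setup: Gaussian "beam plus filtering" profile, white-noise PSD.
n = 64
yy, xx = np.indices((n, n))
beam = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2.0 * 2.0 ** 2))
tau_ft = np.fft.fft2(np.fft.ifftshift(beam))  # profile, peak at pixel (0, 0)
noise_psd = np.ones((n, n))                   # flat PSD (white noise)

psi = matched_filter(tau_ft, noise_psd)

# A faint source at pixel (20, 40) buried in white noise is recovered
# at the correct location in the optimally filtered map.
rng = np.random.default_rng(3)
delta = np.zeros((n, n))
delta[20, 40] = 3.0
sky = np.real(np.fft.ifft2(np.fft.fft2(delta) * tau_ft))
m = sky + rng.normal(0.0, 0.5, (n, n))
filtered = np.real(np.fft.ifft2(np.fft.fft2(m) * psi))
peak = np.unravel_index(np.argmax(filtered), filtered.shape)
```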
The construction of the optimal filter is complicated by two characteristics of the SPT data. First, because of the telescope’s location at the South Pole, the scan direction is always along constant declination. This means that the effect of time-stream filtering is anisotropic in the maps, and we effectively have an anisotropic beam. We account for the smearing of the beam in the scan direction by calculating the transfer function and applying it during source extraction. However, for point-source work we use an area-preserving projection, which causes the scan direction to rotate with respect to the axes of the pixel orientation of the maps; our anisotropic beam therefore rotates with respect to the pixel $x$-$y$ location. Second, the noise in the maps varies with declination. Because the telescope scans the same distance in azimuth in the same amount of time regardless of elevation, but this distance corresponds to less physical distance on the sky farther from the equator, the result is a noise level with a gradient in declination through our maps, with slightly less noise at higher declination. To account for these two position-dependent complications, we divide each field into a number of sectors small enough that assuming a single beam rotation angle and uniform noise within each sector is reasonable. We then calculate separate transfer functions and noise PSDs for each sector independently, construct a single optimal filter for each sector, and extract sources separately per sector. Further description of the process for creating these data products and their salient features can be found in the SPT 2008 data release paper [@schaffer11]. The number of sectors per field is shown in Table \[tab:fieldstable\].
Essentially, splitting each field into sectors is a compromise between computation time and accuracy. We test that the sizes of our sectors are appropriate, i.e., that the measured flux density of sources is unaffected by the size of the sector we choose, by applying the transfer function and noise PSD from adjacent sectors to a sector where the effects of noise variation and scan rotation angle are the most severe, and checking that the resultant change in the flux densities of the sources in that sector is below the noise level of the sector. We also verify that the noise in different sectors does not differ by more than 5%.
We found in creating and testing our optimal filter that there was residual noise due to incomplete averaging in the creation of the PSD. This introduced excess structure into the source-extraction template, and hence excess noise into the optimally filtered maps. To mitigate this effect, we apply a smoothing kernel to the optimal filter in the Fourier domain. To choose the strength of the smoothing, we sweep through a range of kernel sizes while monitoring the noise. As we apply stronger and stronger smoothing to the filter, the noise level in the optimally filtered map is at first reduced, indicating that the excess noise introduced by the filter is being diminished. Past a certain point, however, stronger smoothing causes the noise level to rise again, as real noise information in the PSD begins to be cut and the filter becomes a less realistic description of the actual signal-to-noise in the map, and therefore less optimal. We take the kernel size at the minimum noise level as our optimized smoothing kernel size.
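The sweep described above amounts to a one-dimensional optimization over kernel width. The sketch below (with a hypothetical Gaussian smoothing kernel, a toy noisy filter, and an illustrative width grid) shows the pattern: smooth the Fourier-domain filter at each candidate width, measure the filtered-map RMS, and keep the width that minimizes it.

```python
import numpy as np

def gaussian_smooth_2d(arr, sigma):
    """Circular convolution of a 2D array with a unit-area Gaussian,
    implemented by multiplication in the Fourier domain."""
    ky = np.fft.fftfreq(arr.shape[0])[:, None]
    kx = np.fft.fftfreq(arr.shape[1])[None, :]
    g_ft = np.exp(-2.0 * np.pi ** 2 * sigma ** 2 * (ky ** 2 + kx ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(arr) * g_ft))

def sweep_kernel(noise_map_ft, psi, widths):
    """Smooth the Fourier-domain filter psi with each candidate kernel
    width and keep the one minimizing the filtered-map RMS."""
    rms = []
    for w in widths:
        psi_s = gaussian_smooth_2d(psi, w)
        filtered = np.real(np.fft.ifft2(noise_map_ft * psi_s))
        rms.append(filtered.std())
    return widths[int(np.argmin(rms))], rms

# Toy inputs: a noise-only map and a filter with pixel-scale estimation noise.
n = 64
rng = np.random.default_rng(4)
noise_map_ft = np.fft.fft2(rng.normal(0.0, 1.0, (n, n)))
ky = np.fft.fftfreq(n)[:, None]
kx = np.fft.fftfreq(n)[None, :]
psi = np.exp(-200.0 * (ky ** 2 + kx ** 2)) + 0.1 * rng.normal(0.0, 1.0, (n, n))
best_width, rms_levels = sweep_kernel(noise_map_ft, psi, [0.5, 1.0, 2.0, 4.0, 8.0])
```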
Source extraction algorithm {#sec:clean}
---------------------------
After optimally filtering the map, we locate and extract source flux densities using a CLEAN algorithm [@hogbom74]. CLEANing was developed originally for radio interferometry, where uneven baseline sampling and a finite number of antennae produce incomplete sampling of the Fourier domain. In turn, this effect produces sidelobes on the beam (a so-called “dirty beam"), which is analogous to the wings on the SPT beam due to the total applied optimal filter. The CLEAN algorithm detects and removes sources iteratively using a template source profile, which allows for the detection of fainter sources hidden underneath the dirty-beam wings of brighter sources. The source template we employ for CLEANing takes into account that we have optimally filtered the map; strictly speaking, however, the filter is only optimal for a source located at the center of a sector (which is where the simulated beam was placed when calculating the transfer functions for each sector). For sources off center in the sector, this optimal filter is at a slightly incorrect rotation angle. In order to form a template for each source, $\tau'$, we rotate the source profile (which is the map-space version of the transfer function, $\tau$) to the correct rotation angle for the $x$-$y$ pixel location in the sector, and then convolve it with the optimal filter for that sector (which is not rotated). Effectively, our source template (in the Fourier domain) is given by $$\tau' = \psi \tau,$$ where $\psi$ is the optimal filter function. Each sector of a field has been filtered separately, so we also perform the CLEANing separately per sector and then unite the catalogs of detected sources from all sectors. Since sources have long wings in the scan direction due to time-stream filtering, we need to account for the possibility of false detections from the wings of sources bleeding into a sector from just outside the sector.
We do this by defining a sector pixel mask to outline the source-finding area for each sector and a second mask which covers a larger area than this sector mask. We define the larger masks such that the extra space on the left and right sides relative to the sector mask edges will be wider than the wings on all but the very obviously brightest sources, which we check by hand if they occur at the edge of a sector. The CLEAN algorithm is applied to the area of the larger mask for each sector, but only the sources that are within the smaller sector pixel mask are saved into the catalog.
To better account for non-uniformity in noise level across each sector, we construct a scaled noise map using the weight map for each field’s coadded map. We apply the optimal filter for each sector to the inverse of the weight map, and then scale each sector’s RMS noise by the square-root of the ratio of each sector’s median weight to its filtered weight map. In essence, we construct a local scaled-noise map, which can be used to construct a local signal-to-noise map when combined with the optimally filtered map. Thus, rather than assume a single noise value per sector when CLEANing, we take into account any local noise non-uniformity and CLEAN down to a locally-determined signal-to-noise threshold. The most noticeable differences resulting from the implementation of this method arise along the edges of the map, which are noisier than the RMS noise of the sectors which include these regions, and fields which were observed with a lead-trail observing strategy and have low-noise strips where the lead and trail observations overlap.
The steps of the CLEANing are as follows:
1. Find the location of the brightest pixel in a given sector in the optimally filtered map.
2. Rotate the source profile for that sector by the appropriate rotation angle for that $x$-$y$ pixel location, and convolve it with the optimal filter for that sector. This is the source template.
3. Subtract the source template, scaled to the flux of the pixel and multiplied by a loop gain coefficient. The loop gain is a multiplicative factor between 0 and 1 to account for non-ideal characteristics of the CLEANing pipeline, such as imperfections in the source model, extended sources, and finite pixelization in the map. We choose a loop gain of 0.1.
4. Find the next brightest pixel in the map and repeat the process until all pixels in the map have significance below the chosen signal-to-noise detection threshold, in this case 4.5 times the scaled RMS noise of that pixel location.
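The loop above can be sketched as follows. This is a deliberate simplification of the pipeline described in the text: it uses one fixed template with no per-pixel rotation, a single global threshold rather than the locally scaled noise, and a noiseless toy map; the loop gain and 4.5 threshold are the values quoted above.

```python
import numpy as np

def clean(snr_map, template, loop_gain=0.1, threshold=4.5):
    """Minimal CLEAN loop: repeatedly find the brightest pixel and
    subtract the template scaled by (loop_gain * peak) at that spot,
    until no pixel exceeds the detection threshold."""
    residual = snr_map.copy()
    components = []  # (y, x, subtracted amplitude)
    while residual.max() > threshold:
        y, x = np.unravel_index(np.argmax(residual), residual.shape)
        amp = loop_gain * residual[y, x]
        # Shift the template (peak at pixel (0, 0)) to the source pixel.
        shifted = np.roll(np.roll(template, y, axis=0), x, axis=1)
        residual -= amp * shifted
        components.append((y, x, amp))
    return components, residual

# Toy map: one 20-sigma source at pixel (10, 12).
n = 32
yy, xx = np.indices((n, n))
beam = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2.0 * 1.5 ** 2))
template = np.fft.ifftshift(beam)  # move the peak to pixel (0, 0)
snr_map = 20.0 * np.roll(np.roll(template, 10, axis=0), 12, axis=1)
comps, resid = clean(snr_map, template)
```

With a loop gain of 0.1, the 20-sigma source is removed as a string of components at the same pixel, 10% of the remaining peak at a time, mirroring the statement below that bright sources are broken up across many iterations.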
We extract negative sources as well as positive sources during the CLEANing process. Because the CLEANing is performed with a loop gain, bright sources are broken up into multiple CLEAN components during the CLEANing. Once the CLEANing is finished (i.e., no pixels in the map remain above the chosen significance threshold), the pixels found by the CLEAN are associated into sources using a radius of association that is brightness-dependent, scaling from roughly 38arcsec for detections of 4.5$\sigma$ up to 2arcmin for detections of 200$\sigma$ or larger. All of the pixels associated with a single source are used in a centroiding process to find the source’s position. The post-CLEAN map, with all sources removed, is called a residual map, and is used in later steps of the analysis.
After locating sources in the map, we convert from units of CMB temperature fluctuations to units of flux density. Optimally filtering the map is equivalent to fitting the map with a source shape, and the value of the brightest pixel of each source can be used to calculate the integrated source flux. Specifically, we calculate the flux of each source by stacking all of the CLEAN components removed for a given source onto the residual map and taking the maximum in a cutout region. The maps are calibrated in units of CMB temperature fluctuations, so we convert to flux density units using $$S[\text{Jy}] = T_\text{peak}\cdot \Delta\Omega_\text{f} \cdot 10^{26} \cdot \frac{2k_\text{B}}{c^2}\left(\frac{k_\text{B}T_\text{CMB}}{h}\right)^2\frac{x^4e^x}{(e^x-1)^2},$$ where $x = h\nu/(k_\text{B}T_\text{CMB})$, and $\Delta \Omega_\text{f}$ is the effective solid angle under a filtered source template, given by $$\Delta \Omega_\text{f} = \left[\int d^2 k\ \psi(k_x,k_y)\ \tau(k_x,k_y)\right]^{-1}.$$
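The temperature-to-flux conversion above depends only on physical constants, the effective band center, and the filtered solid angle, so it can be transcribed directly. In the sketch below the constants are standard SI values, and a unit solid angle is used purely for illustration; in practice $\Delta\Omega_\text{f}$ comes from the integral over the filtered source template.

```python
import numpy as np

# Physical constants (SI units)
h = 6.62607015e-34      # Planck constant [J s]
k_B = 1.380649e-23      # Boltzmann constant [J/K]
c = 2.99792458e8        # speed of light [m/s]
T_CMB = 2.7255          # mean CMB temperature [K]

def cmb_temp_to_jy(t_peak_K, nu_GHz, omega_f_sr):
    """Convert a peak CMB temperature fluctuation [K] into a flux
    density [Jy], given the effective solid angle omega_f_sr [sr] of
    the filtered source template (the Planck-law dB/dT factor)."""
    x = h * nu_GHz * 1e9 / (k_B * T_CMB)
    dB_dT = (2.0 * k_B / c ** 2) * (k_B * T_CMB / h) ** 2 \
            * x ** 4 * np.exp(x) / np.expm1(x) ** 2
    return t_peak_K * omega_f_sr * 1e26 * dB_dT

# dB/dT at 150 GHz is roughly 4e8 Jy/sr per K of CMB temperature.
conv_150 = cmb_temp_to_jy(1.0, 150.0, 1.0)
```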
We inspect detected sources for obvious spurious detections, such as false sources created by the effect of the timestream filtering on bright galaxy clusters and spurious detections very close to extremely bright sources. We also inspect for extended sources, discussed in more detail in Section \[sec:extsrcsection\]. Obvious false detections are trimmed; for the sake of completeness in the catalog, information on extendedness is not used to remove any sources, but is retained as a flag in the catalog.
Flux biases and three-band flux deboosting
------------------------------------------
The raw fluxes in our catalogs are subject to several biases, which must be carefully considered before the fluxes can be used for population statistics. The first is due to the fact that the underlying source number count populations are steep functions of flux. We expect the noise in the map to be Gaussian, but since there are many more dim sources than bright sources, it is much more likely that a detection at a given significance is a dim source on top of a positive noise fluctuation than a bright source on top of a negative noise fluctuation. Therefore, more sources below a significance threshold will be bumped above the threshold and detected as sources due to noise than will be bumped below, resulting in a positive flux bias which most strongly affects low signal-to-noise sources. This bias is closely related to Eddington bias, although that term is generally applied to counts as a function of flux rather than the fluxes of individual objects [@mocanu13]. When applied to individual sources, we refer to this bias as “flux boosting" and its correction as “flux de-boosting."
A second bias is due to the fact that we estimate source flux based on peak pixel brightness. A positive noise fluctuation near a source will pull the detected peak position away from the true position and also return a higher flux, whereas a nearby negative noise fluctuation will not have nearly as strong a corresponding opposite effect on either the returned position or flux [@austermann10]. For a significance threshold of $S/N_\text{meas} = 4.5$, this is a roughly 5% effect and will be less important for all higher-significance detections (see, e.g., @vanderlinde10). We therefore neglect this bias in this work.
Finally, a third bias arises from the fact that for sources that we detect only in one or two bands but not all three, the flux(es) for the source in the non-detected band(s) will be subject to a slight negative bias. This is due to the fact that we measure source flux in the non-detection band(s) using a source position determined from a band where the source is detected, and positional uncertainty biases the flux low. This bias is expected to be small given the small positional uncertainty for a 4.5-$\sigma$ detection. We calculate that a 1-$\sigma$ positional offset would result in a flux underestimate of 5%, and therefore neglect this bias.
### Bayesian flux deboosting
One standard method for dealing with flux deboosting in mm and submm surveys is the application of a Bayesian approach, where a posterior probability distribution is calculated given prior knowledge about the underlying source populations [@coppin05]. The usual Bayesian posterior distribution can be expressed as $$P(S_\text{true} | S_{\text{meas}}) \propto P(S_{\text{meas}} | S_\text{true})\ P(S_\text{true}),$$ where $P(S_\text{true} | S_{\text{meas}})$ is the posterior probability, expressing the probability of the true source flux $S_\text{true}$ given the measured value $S_{\text{meas}}$. $P(S_\text{meas} | S_\text{true})$ is the likelihood, expressing the probability of measuring a flux $S_{\text{meas}}$ given that the true flux of the source is $S_\text{true}$. Most simply, the likelihood is taken to be a Gaussian with width given by the map noise. $P(S_\text{true})$ is the prior, which expresses previous knowledge about the population of sources being detected, which in our case is proportional to the differential number counts as a function of flux, $dN/dS$.
@crawford10 present an argument for slightly altering the expressions above to account for the fact that we expect the number of sources to rise steeply with decreasing flux and the reality that the telescope observes the sky with some finite resolution (which we further pixelate when creating a map). Therefore, there is a confusion limit due to faint sources coexistent in a single pixel which contributes to the noise of each detection. The standard Bayesian approach can be modified slightly to account for this:
$$P(S_{\text{max}} | S_{\text{meas}}) \propto P(S_{\text{meas}} | S_{\text{max}})\ P(S_{\text{max}}),$$
where now the posterior, $P(S_{\text{max}} | S_{\text{meas}})$, gives the probability that the highest flux source contributing to the pixel brightness is $S_{\text{max}}$ given the measured flux of $S_{\text{meas}}$ in that pixel. Similarly, $P(S_{\text{meas}} | S_{\text{max}})$ expresses the likelihood that $S_{\text{meas}}$ will be measured given that the brightest source contributing to that pixel brightness has flux $S_{\text{max}}$. The likelihood includes the uncertainty in the flux due to the presence of fainter sources. The prior, $P(S_{\text{max}})$, is still expressed by the differential number counts, $dN/dS$, but now multiplied by an exponential suppression at low flux representing the probability that no other sources brighter than $S_{\text{max}}$ exist in that pixel.
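As a toy illustration of this single-band posterior, the sketch below combines a Gaussian likelihood with a power-law counts prior and an exponential low-flux suppression in the style described above. The power-law slope and counts normalization are purely illustrative, not the measured SPT counts.

```python
import numpy as np

def deboost_posterior(s_meas, sigma, gamma=3.0, counts_norm=1.0, n_grid=4000):
    """Sketch of single-band flux deboosting: posterior ~ likelihood x prior.
    Prior: power-law counts dN/dS ~ S^-gamma, multiplied by an exponential
    suppression exp(-N(>S)) so the posterior stays finite at low flux.
    gamma and counts_norm are illustrative placeholders."""
    s = np.linspace(0.01 * sigma, s_meas + 6.0 * sigma, n_grid)
    likelihood = np.exp(-0.5 * ((s_meas - s) / sigma) ** 2)
    n_above = counts_norm * s ** (1.0 - gamma) / (gamma - 1.0)  # N(>S)
    prior = s ** (-gamma) * np.exp(-n_above)
    post = likelihood * prior
    post /= post.sum()
    return s, post

# A 4.5-sigma detection in map-noise units: the posterior peaks below
# the measured flux, i.e., the flux is de-boosted.
s, post = deboost_posterior(s_meas=4.5, sigma=1.0)
s_peak = s[np.argmax(post)]
```

The exponential factor plays the role of the suppression term described above; without it, the pure power-law prior diverges at low flux and the posterior maximum runs to the bottom of the grid.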
### Simultaneous three-band deboosting {#dbdb}
Also presented in @crawford10 is the framework for expanding the single-band deboosting presented above to a simultaneous deboosting of fluxes for sources detected in multiple bands. @crawford10 expands the analysis from one to two bands, and M13 presents the extension to three bands. We use the same method for deboosting as in M13 and present an overview of the methodology below; see M13 for further details.
The goal of multi-band deboosting is to estimate the posterior probability for the flux of the source in multiple bands using its measured flux in one or more bands and any prior information known. The simplest way to write this would be $$\begin{gathered}
P\left(S^{\text{max}}_{95}, S^{\text{max}}_{150}, S^{\text{max}}_{220} | S^{\text{meas}}_{95}, S^{\text{meas}}_{150}, S^{\text{meas}}_{220}\right) \propto\\ P \left(S^{\text{meas}}_{95}, S^{\text{meas}}_{150}, S^{\text{meas}}_{220} | S^{\text{max}}_{95}, S^{\text{max}}_{150}, S^{\text{max}}_{220}\right) \centerdot \\ P\left(S^{\text{max}}_{95}, S^{\text{max}}_{150}, S^{\text{max}}_{220}\right),\end{gathered}$$ which expresses the 3-dimensional posterior probability distribution for the true fluxes of the detected source in the three bands, given the measured fluxes for that source in the three bands. For the multi-band prior, one could assume that the priors for each band are independent and could therefore be separated as $$P\left(S^{\text{max}}_{95}, S^{\text{max}}_{150}, S^{\text{max}}_{220}\right) = P\left(S^{\text{max}}_{95}\right)\ P\left(S^{\text{max}}_{150}\right)\ P\left(S^{\text{max}}_{220}\right).$$ However, this assumption would only be accurate if the three bands probed completely separate populations of sources with no overlap. In practice, while more synchrotron sources are detected at 95 GHz and 220 GHz is a stronger probe of dusty sources, there is certainly population overlap between the bands.
To accommodate this issue, we can express the prior as the combination of a prior on flux for one band (for example 150 GHz), and two priors describing the power law behavior connecting two fluxes as a function of frequency: $$\begin{split}
S_{95} = S_{150}\left(\frac{\nu_{95}}{\nu_{150}}\right)^{\alpha_{95-150}} \\
S_{220} = S_{150}\left(\frac{\nu_{220}}{\nu_{150}}\right)^{\alpha_{150-220}}.
\label{alphadef}
\end{split}$$ Note that the effective band centers for SPT depend slightly on the assumed spectral index of the source. We assume a flat spectral index of zero, which gives effective band centers of 97.6, 152.9, and 218.1GHz. M13 found that source fluxes are not affected significantly by making this assumption. Through a change of variables, then, we can express the three-flux prior in terms of one flux and two spectral indices ($\alpha$): $$\begin{gathered}
P\left(S^{\text{max}}_{95}, S^{\text{max}}_{150}, S^{\text{max}}_{220}\right) =\\ P\left( S^{\text{max}}_{150}, \alpha^{\text{max}}_{95-150},\alpha^{\text{max}}_{150-220}\right)\ \frac{d \alpha^{\text{max}}_{95-150}}{d S^{\text{max}}_{95}}\ \frac{d \alpha^{\text{max}}_{150-220}}{d S^{\text{max}}_{220}}\end{gathered}$$ where the $\frac{d\alpha}{dS^{\text{max}}}$ can be found from Eqn. \[alphadef\].
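Under the power-law model of Eqn. \[alphadef\], a spectral index can be recovered directly from any pair of fluxes. A minimal sketch in Python, using the flat-spectrum effective band centers quoted above (function and variable names are illustrative, not from our pipeline):

```python
import math

# Effective SPT band centers for an assumed flat (alpha = 0) spectrum, in GHz.
NU_95, NU_150, NU_220 = 97.6, 152.9, 218.1

def spectral_index(s_a, s_b, nu_a, nu_b):
    """Solve S_b = S_a * (nu_b / nu_a)**alpha for alpha, given two fluxes."""
    return math.log(s_b / s_a) / math.log(nu_b / nu_a)

# A flat-spectrum source has equal fluxes in both bands, giving alpha = 0;
# flux falling with frequency gives a negative (synchrotron-like) index.
alpha_flat = spectral_index(10.0, 10.0, NU_150, NU_95)
alpha_sync = spectral_index(10.0, 8.0, NU_150, NU_220)
```

Inverting the same relation gives the $d\alpha/dS$ Jacobian factors appearing in the change of variables above.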
We then make the assumption that the prior written in this way is made up of three independent components: $$\begin{gathered}
P\left( S^{\text{max}}_{150}, \alpha^{\text{max}}_{95-150},\alpha^{\text{max}}_{150-220}\right) = \\P\left(S^{\text{max}}_{150}\right) P\left(\alpha^{\text{max}}_{95-150}\right) P\left(\alpha^{\text{max}}_{150-220}\right).
\label{priorindependence}\end{gathered}$$ By separating them, we are assuming that the spectral indices are independent of flux and that the two spectral indices are not correlated with each other. Strictly speaking, this assumption of independence is incorrect: fainter sources tend to have more dust-like spectral indices. More fundamentally, simply changing variables does not remove correlations among the priors, since the amount of information contained in the priors is unchanged. However, since we are interested in measuring $\alpha$ in this analysis and allow for the possibility of sources with non-typical spectral indices, expressing the priors in this way allows us to place weak flat priors on both spectral indices over the physically motivated range $-3 \le \alpha \le 5$, allowing the intrinsic population characteristics to emerge.
We now have for the 3-dimensional posterior on fluxes: $$\begin{gathered}
P\left(S^{\text{max}}_{95}, S^{\text{max}}_{150}, S^{\text{max}}_{220} | S^{\text{meas}}_{95}, S^{\text{meas}}_{150}, S^{\text{meas}}_{220}\right) \propto \\ P \left(S^{\text{meas}}_{95}, S^{\text{meas}}_{150}, S^{\text{meas}}_{220} | S^{\text{max}}_{95}, S^{\text{max}}_{150}, S^{\text{max}}_{220}\right) \centerdot \\P\left( S^{\text{max}}_{150}\right)P\left(\alpha^{\text{max}}_{95-150}\right)P\left(\alpha^{\text{max}}_{150-220}\right)\ \frac{d \alpha^{\text{max}}_{95-150}}{d S^{\text{max}}_{95}}\ \frac{d \alpha^{\text{max}}_{150-220}}{d S^{\text{max}}_{220}}\end{gathered}$$ The likelihood, $P \left(S^{\text{meas}}_{95}, S^{\text{meas}}_{150}, S^{\text{meas}}_{220} | S^{\text{max}}_{95}, S^{\text{max}}_{150}, S^{\text{max}}_{220}\right)$, is given by a multivariate Gaussian $$\begin{gathered}
P\left(S^{\text{meas}}_{95}, S^{\text{meas}}_{150}, S^{\text{meas}}_{220} | S^{\text{max}}_{95}, S^{\text{max}}_{150}, S^{\text{max}}_{220} \right) =\\ \frac{1}{\sqrt{(2\pi)^3 \det(\textbf{C})}}\exp\left(-\frac{1}{2}\textbf{r}^T\textbf{C}^{-1}\textbf{r}\right).\end{gathered}$$ The noise covariance $\textbf{C}$ represents the flux uncertainty due to instrument noise, atmosphere, and uncertainties in beam and absolute calibration.
The residual vector, $\textbf{r}$, is given by: $$\textbf{r} = \left[ S^{\text{meas}}_{95} - S^{\text{max}}_{95}, S^{\text{meas}}_{150} - S^{\text{max}}_{150}, S^{\text{meas}}_{220} - S^{\text{max}}_{220}\right]$$
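To make the likelihood concrete, the sketch below evaluates the log-likelihood under the simplifying, purely illustrative assumption that $\textbf{C}$ is diagonal; the full covariance also carries correlated beam and calibration terms.

```python
import math

def log_likelihood(s_meas, s_max, sigma):
    """Gaussian log-likelihood assuming a diagonal noise covariance
    C = diag(sigma_i^2) -- an illustrative simplification; the full C
    also includes correlated beam and calibration uncertainty."""
    logL = 0.0
    for m, t, s in zip(s_meas, s_max, sigma):
        r = m - t                                   # per-band residual
        logL += -0.5 * (r / s) ** 2 - 0.5 * math.log(2.0 * math.pi * s * s)
    return logL
```

The likelihood peaks when the trial fluxes equal the measured fluxes; the deboosting shift comes entirely from multiplying in the priors.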
For the flux prior, we estimate the number counts $dN/dS$ based on a sum of synchrotron and dusty population models. Synchrotron populations are calculated using the @dezotti05 model at 150GHz and extrapolated to the other two bands. Dusty populations at 150 and 220GHz are estimated by use of updated @negrello07 models (M. Negrello, private communication). The population at 95GHz is estimated using an extrapolation of the @negrello07 prediction at 850$\mu$m using spectral indices of 3.1 for high-redshift sources (calculated from the spectral energy distribution of the ULIRG Arp 220 shifted to $z \sim 3$) and 2.0 for low-redshift sources (from IRAS observations). This is the same method as employed in M13.
There is an asymmetry introduced in our current deboosting algorithm, namely, that one band is chosen to have much stricter prior information applied to it through the flux prior, and the other two bands have much less restrictive priors applied through loose $\alpha$ priors. Therefore, for any given source, with flux information in three bands, the amount of deboosting each band’s flux receives depends on the choice made in selecting which band the flux prior is applied to. In @crawford10 and M13, this band is termed the “detection band" but this is slightly confusing terminology, since a given source could in fact be detected simultaneously in all three bands or some combination of bands. To avoid this confusion, here we employ the term “flux-prior band" to refer to the band which has the flux prior applied as opposed to a prior on $\alpha$. In practice, the deboosted fluxes reported in the catalog are calculated using the band with the highest significance detection in raw flux as the flux-prior band. For number counts, we use the band for which we are calculating number counts as the flux-prior band and then restrict to only sources with a detection in that band.
Since we are interested in calculating posterior distributions for spectral indices in addition to fluxes, we calculate in parallel the posteriors for one flux and two $\alpha$’s: $$\begin{gathered}
P\left(S^{\text{max}}_{150}, \alpha^{\text{max}}_{95-150},\alpha^{\text{max}}_{150-220} | S^{\text{meas}}_{95}, S^{\text{meas}}_{150}, S^{\text{meas}}_{220}\right) \propto \\P\left(S^{\text{meas}}_{95}, S^{\text{meas}}_{150}, S^{\text{meas}}_{220} | S^{\text{max}}_{150}, \alpha^{\text{max}}_{95-150},\alpha^{\text{max}}_{150-220}\right) \centerdot \\ P\left( S^{\text{max}}_{150}, \alpha^{\text{max}}_{95-150},\alpha^{\text{max}}_{150-220}\right).\end{gathered}$$
The prior is identical to that used for three fluxes, and the likelihood is very similar: $$\begin{gathered}
P\left(S^{\text{meas}}_{95}, S^{\text{meas}}_{150}, S^{\text{meas}}_{220} | S^{\text{max}}_{150}, \alpha^{\text{max}}_{95-150},\alpha^{\text{max}}_{150-220}\right) = \\\frac{1}{\sqrt{(2\pi)^3 \det(\textbf{C})}}\exp\left(-\frac{1}{2}\textbf{r}^T\textbf{C}^{-1}\textbf{r}\right),\end{gathered}$$ the same as before, but where the residual vector is now: $$\begin{gathered}
\textbf{r} = \left[ S^{\text{meas}}_{95} - S^{\text{max}}_{150}\left(\frac{\nu_{95}}{\nu_{150}}\right)^{\alpha^{\text{max}}_{95-150}},\ S^{\text{meas}}_{150} - S^{\text{max}}_{150}\right.,\\ \left. S^{\text{meas}}_{220} - S^{\text{max}}_{150}\left(\frac{\nu_{220}}{\nu_{150}}\right)^{\alpha^{\text{max}}_{150-220}}\right].\end{gathered}$$ The likelihood values are identical for the corresponding locations in the different parameter spaces.
From our 3-dimensional posterior probability distributions, we marginalize over two of the three parameters in the posterior to find the corresponding 1-dimensional posteriors for a parameter of interest. We then integrate the PDFs to the 16%, 50%, and 84% levels in the cumulative distribution to calculate the best-fit values and 1-$\sigma$ error bars.
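As a sketch of this last step, the helper below (hypothetical, not our pipeline code) extracts the 16/50/84% points from a 1-dimensional posterior sampled on a grid:

```python
def posterior_percentiles(grid, pdf, levels=(0.16, 0.50, 0.84)):
    """Return parameter values at the given cumulative-probability levels of
    a 1-D posterior sampled on `grid` (trapezoid CDF + linear interpolation)."""
    # Build the (unnormalized) cumulative distribution by the trapezoid rule.
    cdf = [0.0]
    for i in range(1, len(grid)):
        cdf.append(cdf[-1] + 0.5 * (pdf[i] + pdf[i - 1]) * (grid[i] - grid[i - 1]))
    total = cdf[-1]
    out = []
    for lev in levels:
        target = lev * total
        for i in range(1, len(cdf)):
            if cdf[i] >= target:
                # Linear interpolation between grid[i-1] and grid[i].
                f = (target - cdf[i - 1]) / (cdf[i] - cdf[i - 1])
                out.append(grid[i - 1] + f * (grid[i] - grid[i - 1]))
                break
    return out
```

The 50% point serves as the best-fit value and the 16% and 84% points as the 1-$\sigma$ error bars.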
Radial cross-match method {#sec:radxmatchmethod}
-------------------------
There are several instances in the analysis pipeline where a cross-match method is employed: cross-matching between the 19 SPT fields within a given band, cross-matching between SPT bands, and cross-matching between the SPT catalog and external catalogs. The same general principle is applied in each case; the details that differ are discussed topically in the following sections. A cross-match criterion involving only a radial offset is appropriate when the source densities of the two groups of sources under comparison are comparable (or, in the case of cross-matching with external information, when the source density of the external catalog is similar to or lower than that of the SPT catalog) and the positional uncertainty is small relative to the typical distance between sources. An appropriate cross-matching radius can then be chosen either analytically or empirically using the measured source density. Depending on the application, either all of the sources within the radial distance are considered associated (in the case of cross-matching between SPT fields for the same band), or the closest candidate within the radial criterion, if one exists, is considered associated (in the case of cross-matching between SPT bands and between the SPT catalog and external catalogs). Selecting a radial threshold that is excessively large will result in falsely associating physically unrelated objects, whereas a radial threshold that is too small risks missing true associations that are shifted in position due to map noise or residual pointing error. Further details of the cross-matching between SPT detections to form a single three-band full-survey-area catalog can be found in the following subsection; cross-matches with external catalogs and redshift information are detailed in Sections \[sec:xchecksection\] and \[sec:zassoc\].
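The closest-candidate variant of this procedure can be sketched as follows (a simplified flat-sky version; names are illustrative):

```python
import math

def radial_match(src, candidates, radius_arcsec):
    """Return the closest candidate within `radius_arcsec` of `src`, or None.
    Positions are (ra, dec) in degrees; flat-sky separation with a cos(dec)
    correction, adequate for arcsecond-scale offsets."""
    best, best_sep = None, radius_arcsec
    for cand in candidates:
        dra = (src[0] - cand[0]) * math.cos(math.radians(src[1]))
        ddec = src[1] - cand[1]
        sep = math.hypot(dra, ddec) * 3600.0     # degrees -> arcsec
        if sep <= best_sep:
            best, best_sep = cand, sep
    return best
```

The all-candidates variant used for same-band field overlap simply keeps every candidate passing the radius cut instead of only the closest one.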
Catalog generation
------------------
To be included in the source catalog, a source must exceed the detection threshold of 4.5$\sigma$ in raw flux signal-to-noise in at least one band. The threshold of 4.5$\sigma$ was chosen to align with V10 and M13, which calculated purity levels of roughly 90% at 150GHz for a 4.5$\sigma$ threshold. We note that the purity level in V10 for 220GHz was also roughly 90%, whereas it is somewhat lower in the current work (see Section \[sec:purity\] below) because the 2008-observed fields have deeper 220GHz data than the rest of the survey. To form a unified SPT catalog including all fields and bands, we first cross-match across all 19 fields for detections in a single band. About 10% of the full survey area falls in overlap regions covered by multiple fields, and sources that lie in overlap regions will have repeat detections in different fields. We remove repeat detections by concatenating all fields’ detections in each band and employing a radial cross-match as discussed in Section \[sec:radxmatchmethod\]. We then keep the detection that comes from the map with the lowest noise at that location and discard the others. To determine the cross-match radius, we use the analytic formalism in Appendix B of @ivison07, which takes into account the measured beam FWHM for that band and the source signal-to-noise to yield an analytic positional uncertainty. Assuming equal and uncorrelated errors in both positional directions, we use a 3-$\sigma$ positional error for a 4.5-$\sigma$ detection to cross-match, corresponding to 57.8, 40.3, and 34.0arcsec for 95, 150, and 220GHz, respectively. (In reality, the positional errors in the two orthogonal map directions are correlated when cross-matching source positions, so the assumption of uncorrelated errors is not strictly correct.)
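As a rough check of these radii, the @ivison07 scaling gives a per-coordinate positional error of approximately $0.6\,\mathrm{FWHM}/(S/N)$; combining the two coordinates in quadrature and taking 3$\sigma$ approximately reproduces the quoted values. The FWHM values below are the rounded beam sizes quoted in this paper, so the results differ slightly from the radii computed with the measured beams:

```python
import math

# Approximate beam FWHM per band in arcsec (1.7, 1.2, 1.0 arcmin).
FWHM_ARCSEC = {95: 102.0, 150: 72.0, 220: 60.0}

def crossmatch_radius(band, snr=4.5, n_sigma=3.0):
    """n_sigma radial cross-match radius in arcsec, using the approximate
    per-coordinate positional error 0.6 * FWHM / (S/N) and assuming equal,
    uncorrelated errors in the two coordinates (added in quadrature)."""
    sigma_1d = 0.6 * FWHM_ARCSEC[band] / snr
    return n_sigma * math.sqrt(2.0) * sigma_1d
```

This yields roughly 58, 41, and 34arcsec at 95, 150, and 220GHz, close to the 57.8, 40.3, and 34.0arcsec adopted above.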
We test that this is an appropriate radius by comparing to the density of source detections within each single field, and we also check that we do not associate (and therefore remove) sources detected within the same field. We note that the radial criterion used to associate CLEAN components into sources (as discussed in Section \[sec:clean\]) corresponds to just under 2$\sigma$ for a 4.5$\sigma$ detection at 95GHz, the band with the largest beam; therefore, sources within this radius of another source in the same field and band would very likely have been considered a component detection of that source. We remove six sources flagged within the cross-match radius of a source in the same field: five of these are detected at 95GHz, one at 150GHz, and none at 220GHz. These six appear to be either multiple detections of the same source or component detections of extended sources.
We additionally remove all sources that lie in regions with overlapping coverage from multiple fields where the source is detected in a field with a higher noise level but not detected in an overlapping field with lower noise, as such detections are expected to be false. This step removes 51 sources at 95GHz, 44 sources at 150GHz, and 47 sources at 220GHz, which is roughly 3% of sources at 220GHz and a smaller percentage at 95 and 150GHz. We check that the distribution in signal-to-noise of the trimmed sources is sensible, i.e., that almost all trimmed sources are near the detection threshold of 4.5$\sigma$ and therefore likely to be false detections due to map noise. The one notable exception is a 12.3-$\sigma$ detection at 220GHz that is removed because an overlapping field with lower noise at that source location yields a non-detection of the source in that band. This source appears to be a flaring radio source that became brighter over the course of observing the 2500-square-degree area of the SPT-SZ survey, such that it was brighter in the field observed in 2010 than in the overlapping field observed in 2009. We note that due to detections of this source above 4.5$\sigma$ at 95 and 150GHz, this source does survive to the final catalog as SPT-S J015917-6055.9, but with recorded fluxes in the three bands that were not measured contemporaneously.
The next step in creating a multi-band catalog is to cross-match across the SPT bands. We employ a radial cross-match method and use a 30arcsec radius of association, which is chosen similarly to above using the analytical positional uncertainty of SPT sources calculated from the formalism in @ivison07. For the band with the widest beam (1.7arcmin at 95GHz), 30arcsec is roughly a 1.5-$\sigma$ positional error for a 4.5-$\sigma$ detection. Since the source densities in all three SPT bands are quite low relative to the 30arcsec association radius, the expected rate of random association of two unrelated sources between bands is also very low. Using just the full-survey average source density, the probability of random association is $0.024\%$, $0.034\%$, and $0.012\%$ for 95, 150, and 220GHz, respectively.
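These probabilities follow directly from the low source densities: for a Poisson-distributed catalog of density $n$, the chance of an unrelated source falling within radius $r$ of a given position is approximately $n\pi r^2$. A quick check using the full-survey totals reported in Section \[sec:catalog\]:

```python
import math

AREA_DEG2 = 2530.0                              # full survey area
N_DETECTED = {95: 2774, 150: 3909, 220: 1435}   # full-survey detections

def false_match_prob(band, radius_arcsec=30.0):
    """Approximate chance (n * pi * r^2) that a random position lands
    within radius_arcsec of an unrelated source in the given band."""
    density = N_DETECTED[band] / AREA_DEG2       # sources per deg^2
    r_deg = radius_arcsec / 3600.0
    return density * math.pi * r_deg ** 2
```

This reproduces the $0.024\%$, $0.034\%$, and $0.012\%$ figures quoted above.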
At this step, we also remove any sources with incomplete coverage across the three bands. Because the masks of usable area for each field cover physically offset regions of the sky for the three bands, some area of the survey at the edges will be covered only by one or two bands, and for the sake of consistency, we trim any sources detected in these areas. This removes 115 sources from the final catalog, or about 2% of the catalog.
Catalog: description and characterization {#sec:catalog}
=========================================
Single-band and multi-band catalogs
-----------------------------------
Our 3-band integrated catalog for the full 2530 square degrees of survey area contains 2774 sources detected above 4.5$\sigma$ at 95GHz, 3909 at 150GHz, and 1435 at 220GHz. The median purity at this detection threshold across all fields in the survey is 94.4%, 94.8%, and 83.4% at 95, 150, and 220GHz, respectively; see Section \[sec:purity\] for further detail. Cross-matching across SPT bands then yields a multi-band catalog with 4845 total sources detected at a minimum of 4.5$\sigma$ in at least one band. The noise levels for individual matched-filtered maps are shown in Table \[tab:completeness\]; taking the median noise level across all fields, 4.5$\sigma$ corresponds to detections above 9.8, 5.8, and 20.4mJy at 95, 150, and 220GHz, respectively. Of the 4845 sources in the catalog, 722 sources are detected at $\ge 4.5\,\sigma$ in all three bands. 1662 are detected only at 95 and 150GHz, and 167 are detected only at 150 and 220GHz. 390 are detected only at 95GHz, 1358 only at 150GHz, and 546 only at 220GHz. Of all the detections in the catalog, roughly 8% have fluxes in different bands drawn from multiple different fields, consistent with about 10% of the area of the survey falling in overlap regions covered by multiple fields. Similarly, of all the sources detected above 4.5$\sigma$ in all three bands, about 9% have fluxes drawn from multiple fields. We compare raw fluxes and deboosted fluxes in the combined catalog in Figure \[fig:rawf\]. Overplotted are expected values for spectral indices between the bands, and we see that for the most part, sources follow the characteristic lines for dusty and synchrotron sources. Similarly, we plot $\alpha_{95-150}$ vs. $\alpha_{150-220}$ for both raw spectral indices and deboosted values in Figure \[fig:alpha1v2\]. We note in these plots that spectral index does appear to correlate with source brightness, as expected: the brightest sources are synchrotron-dominated.
We also note that while there are sources where $\alpha_{95-150}$ correlates with $\alpha_{150-220}$, there are also numerous sources whose spectral indices are not correlated, indicating sources with a spectral break; these will be discussed further in Section \[sec:discussion\]. To show the effect of the deboosting, Figure \[fig:dbVsrawFlux\] plots deboosted flux as a function of raw flux for each of the three SPT-SZ bands. An overview of the number of sources above 4.5 and $5.0\,\sigma$ is given in Table \[tab:dettable\].
Population separation {#sec:popsep}
---------------------
To explore the distributions in spectral indices that we find from deboosting and to separate sources into populations based on spectral index, we normalize each source’s posterior probability distribution for $\alpha$, such that the integral of the marginalized posterior over all possible values of $\alpha$ is unity, and then sum all the posteriors from different sources. In Figure \[fig:alphaposts\], we show these distributions for sources with signal-to-noise greater than or equal to 5.0 in both of the bands that a particular spectral index spans. We restrict to higher signal-to-noise sources for this part of the analysis to provide a cleaner population separation.
From Figure \[fig:alphaposts\], we see that the posteriors for $\alpha^{\text{max}}_{95-150}$ show only the presence of a synchrotron population peaking at $\alpha^{\text{max}}_{95-150} \sim -0.7\pm0.6$. As shown in Figure \[fig:alpha1v2\], synchrotron sources do dominate the high signal-to-noise sources in general, and dusty sources, with a positive spectral index, are much more likely to be below the detection threshold at 95GHz. In contrast, the posteriors for $\alpha^{\text{max}}_{150-220}$ show two peaks in the distribution, representing contributions from both synchrotron and dusty populations, peaking at $\alpha^{\text{max}}_{150-220} \sim -0.6\pm0.6$ and $\alpha^{\text{max}}_{150-220} \sim 3.4\pm0.8$, respectively. Once again, the synchrotron peak is stronger since we are restricting to relatively high signal-to-noise detections, which are synchrotron-dominated.
We take the minimum of our summed posterior distribution on $\alpha^{\text{max}}_{150-220}$ as the dividing criterion to produce separate catalogs of synchrotron and dusty sources. From Figure \[fig:alphaposts\], this produces a population separation at $\alpha^{\text{max}}_{150-220} = 1.51$. To classify each source as either dusty or synchrotron, we find the probability for each source that $\alpha^{\text{max}}_{150-220} > 1.51$ from each source’s marginalized posterior. If the probability that a source has $\alpha^{\text{max}}_{150-220} > 1.51$ is less than 50%, we classify the source as synchrotron; conversely, if the probability is greater than or equal to 50%, the source is classified as dusty.
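The classification rule amounts to integrating each source’s marginalized posterior above the split point; a minimal sketch (trapezoid integration on a grid; names illustrative):

```python
ALPHA_SPLIT = 1.51   # minimum of the summed alpha_150-220 posterior

def classify(alpha_grid, alpha_pdf):
    """'dusty' if >= 50% of the alpha_150-220 posterior mass lies above
    ALPHA_SPLIT, else 'synchrotron' (trapezoid-rule integration)."""
    above = total = 0.0
    for i in range(1, len(alpha_grid)):
        seg = 0.5 * (alpha_pdf[i] + alpha_pdf[i - 1]) * (alpha_grid[i] - alpha_grid[i - 1])
        total += seg
        mid = 0.5 * (alpha_grid[i] + alpha_grid[i - 1])
        if mid > ALPHA_SPLIT:
            above += seg
    return "dusty" if above / total >= 0.5 else "synchrotron"
```

Since the posterior is normalized per source, no additional weighting is needed before taking the 50% cut.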
Catalog description
-------------------
Sources in the catalog are listed in order of detection significance, using the highest-significance detection across all bands. The catalog columns are described as follows; the catalog for the full survey will be available online[^1].
1. Source I.D.: Source IAU identification
2. RA: Right ascension (J2000) in degrees
3. DEC: Declination (J2000) in degrees
4. $S^{\text{meas}}_{95}/N_{95}$: Raw signal-to-noise in 95GHz
5. $S^{\text{meas}}_{95}$: Raw flux in 95GHz, \[mJy\]
6. $S^{\text{max}}_{95}$: Deboosted flux in 95GHz taken from integrating 50% of the posterior PDF, with 16% and 84% taken as 1-$\sigma$ error bars, \[mJy\]
7. $S^{\text{meas}}_{150}/N_{150}$: Raw signal-to-noise in 150GHz
8. $S^{\text{meas}}_{150}$: Raw flux in 150GHz, \[mJy\]
9. $S^{\text{max}}_{150}$: Deboosted flux in 150GHz taken from integrating 50% of the posterior PDF, with 16% and 84% taken as 1-$\sigma$ error bars, \[mJy\]
10. $S^{\text{meas}}_{220}/N_{220}$: Raw signal-to-noise in 220GHz
11. $S^{\text{meas}}_{220}$: Raw flux in 220GHz, \[mJy\]
12. $S^{\text{max}}_{220}$: Deboosted flux in 220GHz taken from integrating 50% of the posterior PDF, with 16% and 84% taken as 1-$\sigma$ error bars, \[mJy\]
13. $\alpha^{\text{meas}}_{95-150}$: Spectral index between 95GHz and 150GHz as calculated from the raw 95 and 150GHz fluxes.
14. $\alpha^{\text{max}}_{95-150}$: Spectral index between 95 and 150GHz taken from integrating 50% of the posterior PDF from the deboosting algorithm. 1-$\sigma$ error bars from integrating 16% and 84% of the posterior PDF.
15. $\alpha^{\text{meas}}_{150-220}$: Spectral index between 150GHz and 220GHz as calculated from the raw 150 and 220GHz fluxes.
16. $\alpha^{\text{max}}_{150-220}$: Spectral index between 150 and 220GHz taken from integrating 50% of the posterior PDF from the deboosting algorithm. 1-$\sigma$ error bars from integrating 16% and 84% of the posterior PDF.
17. Type: Classification of a source as either synchrotron or dusty depending on the fraction of the integrated 150-220GHz spectral index posterior above the threshold of $\alpha^{\text{max}}_{150-220} > 1.51$. For $P\left(\alpha^{\text{max}}_{150-220} > 1.51\right) \ge 0.5$ the source is classified as dusty; for $P\left(\alpha^{\text{max}}_{150-220} > 1.51\right) < 0.5$ the source is classified as synchrotron.
18. External counterparts: Flag on sources with an associated detection in one of the external catalogs we cross-match. See Section \[sec:xchecksection\].
19. Extendedness: Flag on sources that appear to be extended or are multiple members of the same source at physically offset locations due to being extended. See Section \[sec:extsrcsection\].
20. Redshift information: Measured redshift, if available.
21. Cut classification: Flag indicating a source is a member of the “ext cut" (1), “$z$ cut" (2), or SMG list (3). See Section \[sec:cutsec\].
[l c c]{} Source class & $N(\geq 4.5\,\sigma)$ & $N(\geq 5.0\,\sigma)$\
95GHz detections & 2774 & 2416\
150GHz detections & 3909 & 3617\
220GHz detections & 1435 & 991\
Three-band detections & 722 & 645\
Sources classified as synchrotron-dominated & 3980 & 3506\
Sources classified as dust-dominated & 865 & 530\
Sources classified as SPT SMGs & 506 & 258\
Sources classified as low-$z$ LIRGs & 302 & 224\
Sources identified as stars & 10 & 10\
\[tab:dettable\]
Completeness {#sec:completeness}
------------
The completeness of the catalog for a given band is defined as the ratio of the number of sources we detect using the source-finding algorithm to the true number of sources in the map at a given flux. Due to the presence of noise in the maps, sources near the detection threshold may be missed by the source-finder if they happen to be coincident with a negative noise fluctuation which pulls their flux below the detection threshold. Completeness is important not only for the robustness of the catalog, but also for calculating number counts, discussed in the following section. The completeness is calculated in practice by performing the source-finding on a known population of sources at fixed flux values. We add a set of 100 simulated sources at a chosen flux level to random locations in the residual map (the optimally filtered map post-CLEANing, which is a good approximation to noise plus a background of sources below the detection threshold of the CLEANing). The source profile used is the real-space version of the transfer function (i.e. a beam with the timestream filtering applied) for the sector which contains the coordinates randomly chosen for the source, rotated to the proper angle. We then run the source-finder and cross-match the returned detections with the known inputs. We repeat this process for a broad range of flux levels. The completeness as a function of flux is then given by $f_\text{compl} (S) = N_\text{recovered}/N_\text{input}$. Since the noise in our maps is to a good approximation Gaussian and sources are rare enough that the noise dominates the distribution of flux in the map, we would expect the completeness to follow an error function of the form $$f_\text{compl} (S) = \frac{1}{\sqrt{2\pi\sigma^2}}\int_{S_0}^\infty e^{-(S' - S)^2/2\sigma^2}dS'$$ where $S_0$ is the detection threshold, in this case 4.5 times the mean RMS noise in the map for each band.
Since this process is computationally expensive, we evaluate the completeness at a few discrete flux levels, fit the error function to those results, and use the fit as a model of our completeness; we estimate the errors on our completeness estimate using binomial statistics. We repeat this process for each band separately.
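Under the Gaussian-noise assumption, the fitted model reduces to a simple error function. A sketch using the median 150GHz noise from Table \[tab:completeness\] (the specific numbers are illustrative medians, not per-field values):

```python
import math

def completeness(S, S0, sigma):
    """Expected completeness at true flux S (mJy) for detection threshold
    S0 and Gaussian map noise sigma: P(S + noise > S0)."""
    return 0.5 * math.erfc((S0 - S) / (sigma * math.sqrt(2.0)))

# Median 150GHz noise is ~1.28 mJy, so S0 = 4.5 * 1.28 ~ 5.8 mJy;
# completeness is 50% right at the threshold and ~95% about 1.645 sigma above.
sigma_150 = 1.28
S0_150 = 4.5 * sigma_150
```

Fitting this one-parameter family ($\sigma$, with $S_0$ fixed by the threshold) to the recovered fractions at the simulated flux levels gives the completeness model used below.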
Galaxy clusters appear as compact negative signals at 95 and 150GHz via the thermal Sunyaev-Zel’dovich (SZ) effect (see @bleem15b for a recent review and catalog release of clusters detected in 2500 square-degrees of the SPT-SZ survey; and @sunyaev72 for background on the SZ effect). Compact clusters with high significance can overlap and therefore cancel out emissive sources, which we do not account for in the completeness calculation. Using an assumed cosmological model and cluster mass function as well as SPT cluster selection functions, M13 calculated an expectation of one cluster large enough to cancel a 4.5-$\sigma$ emissive source per ten square degrees of SPT-SZ survey, which corresponds to roughly 10-20 clusters per field or roughly 250 total in the full SPT-SZ survey. Given the relatively low point source density in the SPT maps above the detection threshold, the likelihood of purely random overlap and cancellation is less than 1% per field for 150 GHz, the band with the highest source density, and even though it is known that point sources and clusters have some preference for clustering, the effect on the completeness due to cluster overlap is expected to be relatively small.
Flux levels averaged over all sectors for each field and band at 50% and 95% completeness are shown in Table \[tab:completeness\]. The median 95% completeness across all fields is 12.89, 7.60, and 26.83mJy at 95, 150, and 220GHz, respectively.
[l | c c c c | c c c c | c c c c]{} Field & $\sigma_{95}$ \[mJy\] & $S_{50\%}$ \[mJy\] & $S_{95\%}$ \[mJy\] & Purity \[%\] & $\sigma_{150}$ \[mJy\] & $S_{50\%}$ \[mJy\] & $S_{95\%}$ \[mJy\] & Purity \[%\] & $\sigma_{220}$ \[mJy\] & $S_{50\%}$ \[mJy\] & $S_{95\%}$ \[mJy\] & Purity \[%\]\
& 2.25 & 9.76 & 13.33 & 96.2 & 1.01 & 4.39 & 6.00 & 97.2 & 2.97 & 13.25 & 18.09 & 89.8\
& 2.17 & 9.43 & 12.87 & 93.1 & 0.939 & 4.18 & 5.71 & 95.0 & 2.73 & 11.86 & 16.19 & 83.9\
& 1.89 & 8.20 & 11.20 & 96.7 & 1.11 & 4.75 & 6.49 & 95.9 & 3.83 & 16.42 & 22.42 & 85.1\
& 1.93 & 8.30 & 11.34 & 93.8 & 1.13 & 4.90 & 6.69 & 92.3 & 3.89 & 16.58 & 22.64 & 82.0\
& 2.18 & 9.44 & 12.89 & 94.1 & 1.28 & 5.46 & 7.46 & 95.3 & 4.36 & 19.12 & 26.12 & 83.4\
& 1.87 & 8.15 & 11.13 & 96.8 & 1.17 & 5.22 & 7.13 & 97.3 & 4.14 & 17.65 & 24.11 & 86.6\
& 2.24 & 9.72 & 13.27 & 96.2 & 1.30 & 5.71 & 7.80 & 95.3 & 4.49 & 19.65 & 26.83 & 80.8\
& 2.15 & 9.17 & 12.52 & 95.5 & 1.24 & 5.44 & 7.43 & 93.9 & 4.16 & 17.89 & 24.43 & 78.2\
& 2.14 & 9.25 & 12.64 & 94.3 & 1.27 & 5.50 & 7.52 & 95.2 & 4.29 & 18.77 & 25.64 & 79.0\
& 2.35 & 10.35 & 14.13 & 92.5 & 1.34 & 5.79 & 7.90 & 88.4 & 4.83 & 20.96 & 28.62 & 80.8\
& 2.22 & 9.55 & 13.04 & 93.0 & 1.30 & 5.59 & 7.64 & 95.0 & 4.77 & 20.50 & 28.00 & 87.8\
& 2.20 & 9.54 & 13.03 & 94.7 & 1.29 & 5.72 & 7.82 & 94.6 & 4.62 & 20.47 & 27.96 & 83.8\
& 2.25 & 9.80 & 13.29 & 94.4 & 1.33 & 5.90 & 8.05 & 93.0 & 4.84 & 21.03 & 28.71 & 84.9\
& 2.29 & 10.16 & 13.88 & 94.7 & 1.33 & 5.87 & 8.02 & 90.9 & 4.93 & 21.75 & 29.70 & 68.4\
& 2.18 & 9.56 & 13.05 & 94.2 & 1.29 & 5.52 & 7.54 & 94.4 & 4.71 & 20.65 & 28.19 & 78.9\
& 2.14 & 9.08 & 12.40 & 94.9 & 1.30 & 5.61 & 7.66 & 93.1 & 4.91 & 21.55 & 29.43 & 88.1\
& 2.11 & 9.11 & 12.44 & 93.1 & 1.27 & 5.57 & 7.60 & 95.3 & 4.54 & 19.45 & 26.56 & 87.1\
& 2.21 & 9.44 & 12.89 & 96.3 & 1.28 & 5.70 & 7.79 & 92.8 & 4.62 & 20.01 & 27.33 & 81.1\
& 2.17 & 9.20 & 12.56 & 94.2 & 1.30 & 5.63 & 7.69 & 94.8 & 4.85 & 21.13 & 28.86 & 78.0
Purity {#sec:purity}
------
The purity of the catalog as a function of source signal-to-noise is defined as one minus the fraction of sources at that signal-to-noise or higher that are expected to be false detections due to noise in the map. To quantify the purity of the catalog, we estimate the number of detections above a given threshold in a simulated noise-only map and compare those with the number detected above the same significance in the real maps. We generate simulated noise maps from difference maps, which contain instrument noise and residual atmosphere. The method for generating difference maps is discussed in Section \[sec:optfiltsection\]. To the noise realizations, we add contributions from the power spectrum of primary anisotropies in the CMB, which is also a source of noise for our source detections. These noise fluctuations have a power spectrum determined from the best fit $\Lambda$CDM model to combined WMAP7 and SPT data [@keisler11]. We also include an estimate of the thermal SZ effect, as well as contributions from the CIB in terms of a Poisson and clustered component. The component of the noise that we add to our simulations to account for the SZ effect is a Gaussian random field with power spectrum given by fitting measurements in @shirokoff11.
Running the source-finder on these simulated maps, we calculate the purity as a function of signal-to-noise as $$f_\text{pure} = 1 - N_\text{false}/N_\text{total},$$ where $N_\text{false}$ is the number of detections above a given significance in the noise-only simulations and $N_\text{total}$ is the number detected above the same significance in the real maps.
Massive clusters in the real maps will contribute to impurity in the source-finding because the timestream filtering causes these objects to have positive wings, which can be detected as false sources. However, these false detections are easy to identify and quite rare in the real maps. We remove them from the catalog by hand, and a total of six sources are removed. Thus there is no need to include them in the purity simulations.
Table \[tab:completeness\] shows purity values averaged over all sectors per field and per band for detections $\geq 4.5\,\sigma$. The median purity for sources detected at $\geq 4.5\,\sigma$ across all fields for the full survey is 94.4%, 94.8%, and 83.4% at 95, 150, and 220GHz, respectively. For sources detected at $\geq 5.0\,\sigma$, the median purity across all fields is 98.9%, 97.6%, and 95.1% at 95, 150, and 220GHz, respectively.
[l c c c c c c]{} Survey & Band & Beam & Density \[deg$^{-2}$\] & Match radius \[arcmin\] & SPT matches & $P_\text{false}$ \[%\]\
SPT & 95GHz (3.2mm) & 1.7arcmin & 1.10 & & &\
& 150GHz (2.0mm) & 1.2arcmin & 1.54 & & &\
& 220GHz (1.4mm) & 1.0arcmin & 0.57 & & &\
SUMSS & 843MHz (36cm) & 45arcsec & 26.75 & 0.8 & 3427 & 1.49\
PMN & 4850MHz (6cm) & 4.2arcmin & 1.75 & 2.5 & 1834 & 0.95\
AT20G & 20GHz (1.5cm) & 4.6arcsec & 0.32 & 1.0 & 820 & 0.03\
IRAS & 12, 25, 60, 100$\mu$m & 11-88arcsec & 4.72 & 1.5 & 318 & 0.92\
AKARI-FIS & 65, 90, 140, 160$\mu$m & 24-59arcsec & 0.87 & 1.5 & 217 & 0.17\
AKARI-IRC & 9, 18$\mu$m & 3.3-6.6arcsec & 5.18 & 0.5 & 56 & 0.11\
WISE & 3.4, 4.6, 12, 22$\mu$m & 6.1-12arcsec & 45.32 & 0.7 & 734 & 1.94\
RASS & 0.1-2.4keV & & 3.53 & 1.5 & 447 & 0.69 \[extcatstable\]
External associations {#sec:xchecksection}
---------------------
To further characterize the nature of sources in the SPT catalog, we cross-match with seven external catalogs, ranging in wavelength from radio to X-ray. These include:
- The Sydney University Molonglo Sky Survey (SUMSS, @mauch03) at 843MHz
- The Parkes-MIT-NRAO (PMN) Southern Survey [@wright94] at 4850MHz
- The Australia Telescope 20-GHz Survey (AT20G, @murphy10)
- The Infrared Astronomical Satellite Faint Source Catalog (IRAS-FSC, @moshir92) at 12, 25, 60, and 100$\mu$m
- The Infrared Astronomical Satellite AKARI, IRC Point Source Catalog [@yamamura10] at 9 and 18$\mu$m, and the FIS Bright Source Catalog [@ishihara10] at 65, 90, 140, and 160$\mu$m
- The Wide-field Infrared Survey Explorer (WISE) AllWISE Source Catalog at 3.4, 4.6, 12, and 22$\mu$m
- The ROSAT All-Sky Survey (RASS) Bright Source Catalog [@voges99] and Faint Source Catalog [@voges00] at X-ray energies 0.1-2.4keV
Each external catalog is cross-matched against the positions of SPT point sources using a radial association criterion, as described in Section \[sec:radxmatchmethod\]. An appropriate association radius is determined for each external catalog from the distributions of source separations, choosing a radius such that the probability of a random, false association is approximately 1% and no greater than 2%. The chosen radius for each catalog is given in Table \[extcatstable\]. For most of the external catalogs, the density of sources is low enough that confusion within the SPT beam is not an issue. For WISE, which has the highest source density, confusion becomes a problem when cross-matching against detections in the shorter-wavelength WISE bands. We therefore reduce the density of WISE sources used for cross-matching by requiring a W4 (22$\mu$m) flux greater than 5mJy. We experimented with a more complex cross-matching scheme incorporating source flux and number density, but found that a simple radial cross-match achieved comparable results.
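The radial association step described above can be sketched in a few lines. The function names and toy coordinates below are illustrative, not part of the catalog pipeline, and a production implementation would use a spatial index (e.g. a k-d tree on unit vectors) rather than a brute-force loop:

```python
import numpy as np

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation (deg) via the Vincenty formula; inputs in degrees."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    dra = ra2 - ra1
    num = np.hypot(np.cos(dec2) * np.sin(dra),
                   np.cos(dec1) * np.sin(dec2)
                   - np.sin(dec1) * np.cos(dec2) * np.cos(dra))
    den = np.sin(dec1) * np.sin(dec2) + np.cos(dec1) * np.cos(dec2) * np.cos(dra)
    return np.degrees(np.arctan2(num, den))

def radial_crossmatch(spt_ra, spt_dec, ext_ra, ext_dec, radius_arcmin):
    """For each SPT source, return the index of the nearest external-catalog
    source within radius_arcmin, or -1 if none lies within the radius."""
    matches = np.full(len(spt_ra), -1, dtype=int)
    for i, (r, d) in enumerate(zip(spt_ra, spt_dec)):
        sep = angular_sep_deg(r, d, ext_ra, ext_dec)
        j = np.argmin(sep)
        if sep[j] * 60.0 <= radius_arcmin:
            matches[i] = j
    return matches
```

With a 0.8arcmin radius (the SUMSS value from Table \[extcatstable\]), a counterpart 0.6arcmin away is matched while distant sources are left unmatched.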
![A comparison of survey depths for SPT and the wide-field surveys used to cross-match with the SPT source catalog. Blue curves show example spectral energy distribution (SED) curves for dusty star-forming galaxies, modeled as an Arp 220 SED shifted in redshift, including their high-redshift counterparts, SMGs. Red and orange curves show two example synchrotron SEDs for different types of flat-spectrum sources. \[fig:survdepth\]](survery_depth_sed_plot_v11.pdf){width="8.5cm"}
Figure \[fig:survdepth\] shows an overview of the wavelengths and detection thresholds of the external catalogs with which we cross-match the SPT-SZ catalog. The figure also shows reference spectral energy distribution (SED) curves for DSFGs, modeled by an Arp 220 profile, and reference SEDs for two examples of flat-spectrum synchrotron sources. Shifting the reference DSFG SED in redshift demonstrates how the negative K-correction enables detection of high-redshift dusty sources at mm wavelengths. Surveys in the infrared observe dusty sources on the Wien side of their SED and therefore shift to a dimmer portion of the spectrum with increasing redshift, in addition to dimming from increasing source distance. In contrast, mm/sub-mm wavelength surveys observe dusty sources on the Rayleigh-Jeans side of the spectrum and therefore shift to an intrinsically brighter part of the spectrum with increasing redshift, canceling the effect of dimming from increased distance. Table \[extcatstable\] gives an overview of each survey and the number of cross-matches with the SPT catalog. A comparison of cross-matches per catalog, including cross-match overlap between surveys for the total SPT catalog as well as for the dusty and synchrotron sub-populations, is illustrated in Figure \[fig:venndiag\]. The most common cross-match for the SPT catalog is with the SUMSS survey in the radio: 71% of SPT sources have counterparts in SUMSS. SUMSS is especially useful for cross-matching synchrotron-dominated sources in the SPT-SZ survey, since this wide-field radio survey has full coverage of the SPT-SZ area and is complete to a depth of 6mJy/beam at 5$\sigma$. The SUMSS beam is relatively large, making it unsuitable for cross-matching with high-resolution optical and infrared catalogs, but confusion is not a significant issue when comparing with SPT, which has a similarly large beam and low source density.
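The negative K-correction can be made quantitative with a short derivation. The dust emissivity index $\beta$ below is an illustrative assumption (typical values $\beta \approx 1.5$-$2$), not a value fitted in this work:

```latex
S_{\nu_{\rm obs}}(z) = \frac{(1+z)\,L_{(1+z)\nu_{\rm obs}}}{4\pi D_L^2(z)},
\qquad
L_\nu \propto \nu^{\,2+\beta}
\ \text{(Rayleigh-Jeans side)}
\;\Longrightarrow\;
S_{\nu_{\rm obs}} \propto \frac{(1+z)^{\,3+\beta}}{D_L^2(z)} .
```

The $(1+z)^{3+\beta}$ factor largely offsets the growth of $D_L^2(z)$, keeping the observed mm-wave flux of a fixed-luminosity dusty source roughly constant over a wide redshift range; on the Wien side, by contrast, $L_\nu$ falls steeply with $\nu$, so redshifting dims the source in addition to the distance dimming.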
For dusty sources, IRAS in the infrared is particularly useful for identifying low-redshift dusty galaxies, but both WISE and AKARI overlap IRAS cross-matches considerably, as shown in Figure \[fig:venndiag\].
Of the 4845 sources in the catalog, 1109 have no cross-matches with external catalogs. 84% of these are detected in only one band, mostly at 150GHz only or 220GHz only; 10 sources with no external cross-matches are detected in all three bands.
![Venn diagrams showing fractional cross-match overlap between the SPT-SZ catalog and external catalogs. The top panel shows cross-matches with the full SPT-SZ catalog; the middle and bottom panels show cross-matches for synchrotron and dusty sources, respectively. In each panel, colored regions indicate the proportion of SPT-SZ sources with cross-matches in each catalog, showing overlapping cross-matches between various catalogs, and white space indicates the fraction of SPT-SZ sources that have no cross-matches with catalogs displayed in that panel. We find that of the 4845 total sources in the catalog, 1109 (23%) have no cross-matches in external catalogs; 597 of these are classified as synchrotron (15% of synchrotron sources), and 512 are classified as dusty (59% of dusty sources). \[fig:venndiag\]](venn_rectangles_newcat.pdf){width="8.5cm"}
Extended sources {#sec:extsrcsection}
----------------
We expect that all extragalactic sources with redshifts greater than $z \sim 0.05$ will be unresolved in the maps, given the instrumental beam size of roughly 1arcmin; only very nearby sources or bright AGN with extended radio lobes may be resolved. We take a two-pronged approach to flagging extended sources. First, we fit a cutout around each detected source to a model constructed from the beam profile convolved with a non-symmetric 2D Gaussian and compare the $\Delta \chi^2$ of the fit to that of a model containing only the beam. Based on inspection of the fields with the most obvious extended sources, we use a threshold of $\Delta \chi^2\geq7$ to flag sources as extended in the catalog. Second, to catch all objects that appear as multiple detections of a single source in the CLEANing, we run a by-eye check of all sources in close proximity to other detections and flag those that appear to be multiple, physically offset detections of the same extended source. Each source flagged as a possible multiple detection of an extended object is cross-checked against external catalogs to determine whether the detections are indeed from the same object or from distinct objects that appear close together in our maps. For the sake of completeness, we leave all detections in the catalog, but indicate the likelihood that a source is extended. In calculating the number counts, we calculate multiple versions of the counts, including versions using the extendedness information from both flagging methods. Fluxes for extended sources will be lower limits on the true flux, since the CLEANing cannot accurately recover flux for sources that do not match our chosen source profile. Using the two methods discussed above, a total of 131 sources from the catalog are flagged as extended.
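The first flagging step can be illustrated with a simplified, amplitude-only version of the template comparison. This is a sketch, not the catalog pipeline: the real fit uses the measured beam convolved with a non-symmetric 2D Gaussian with free widths, whereas here the two templates, pixel scale, and noise level are illustrative assumptions:

```python
import numpy as np

def delta_chi2(cutout, beam, extended, noise_rms):
    """Best-fit amplitude (analytic linear least squares) for each template,
    then Delta chi^2 = chi2(beam-only model) - chi2(extended model)."""
    def chi2(model):
        amp = np.sum(cutout * model) / np.sum(model**2)  # closed-form LSQ amplitude
        return np.sum((cutout - amp * model) ** 2) / noise_rms**2
    return chi2(beam) - chi2(extended)

# Toy templates on a grid with ~1 pixel per arcmin (illustrative widths).
y, x = np.mgrid[-10:11, -10:11]
r2 = x**2 + y**2
beam = np.exp(-r2 / (2 * 1.0**2))          # ~1 arcmin beam profile
extended = np.exp(-r2 / (2 * 2.0**2))      # beam convolved with a wider Gaussian
source = 5.0 * np.exp(-r2 / (2 * 2.0**2))  # a genuinely extended source
flag = delta_chi2(source, beam, extended, noise_rms=0.1) >= 7.0  # True: flag as extended
```

A point-like cutout (proportional to `beam`) yields a non-positive $\Delta \chi^2$ and is not flagged, while the extended toy source far exceeds the threshold.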
Redshift associations {#sec:zassoc}
---------------------
Redshifts for SPT catalog sources are obtained from a combination of follow-up observations (e.g., @weiss13 [@strandet16]) and the literature. We obtain literature redshifts by querying the NASA/IPAC Extragalactic Database (NED) and using an association radius of 0.6arcmin. 743 sources in the catalog have identified redshifts; available redshift information is listed per source in the catalog and shown in Figure \[fig:alphavsredshift\].
Star identification {#sec:starident}
-------------------
A small but interesting sub-population in the catalog is a set of ten stars, identified primarily by their cross-matched IRAS flux at 12$\mu$m. In this section, we overview the method for separating stars from other objects in the SPT-SZ catalog; Section \[sec:stardiscuss\] discusses characteristics of the stars we see in the SPT-SZ catalog. As shown in Figures \[fig:alpha1v2\] and \[fig:starid\], the stars are not clearly identifiable using SPT data alone: their flux in SPT bands does not set them apart from other SPT sources, and their spectral indices in SPT wavelengths span both the synchrotron and dusty populations. However, looking at cross-matched flux in IRAS at 12$\mu$m, these sources have considerably higher flux than other sources in the SPT catalog.
The primary selection effect for detecting stars at millimeter wavelengths is a bias toward stars that are either large and highly luminous or have excess emission from dust, such that they have sufficient flux at mm wavelengths to be detectable. Nine of the ten stars identified in the SPT catalog are red giants on the Asymptotic Giant Branch (AGB), most of which are late-type M stars. The remaining star is $\beta$ Pictoris, which has a well-known dusty circumstellar disk [@sheret04; @riviere-marichalar14]. Red giants are luminous, large, and relatively cool, with surface temperatures of order a few thousand Kelvin. Their stellar flux therefore follows a blackbody distribution peaking around 1 to a few $\mu$m [@bedding97; @whitelock97], with a spectral index of 2 at longer wavelengths, and they are large and bright enough to be detectable at millimeter wavelengths, far into the Rayleigh-Jeans tail of their blackbody spectrum. Dusty galaxies as well as flat- and steep-spectrum synchrotron sources have spectra that rise as a function of wavelength for wavelengths shorter than $\sim$100$\mu$m, as shown in Figure \[fig:survdepth\], whereas stellar spectra fall between a few $\mu$m and $\sim$100$\mu$m. Therefore, the ratio of IRAS 60$\mu$m to 100$\mu$m flux should be less than one for non-stellar objects and greater than one for stars. This ratio identifies stars with relative success, as shown in the right panel of Figure \[fig:starid\], but we find that high IRAS 12$\mu$m flux on its own is a more effective criterion. In principle, cross-matches with WISE at wavelengths shorter than the IRAS bands could identify stars even more clearly, since stars should be brighter at shorter IR wavelengths, closer to the peak of the stellar SED; however, most of the stars in the SPT sample are so bright that the WISE flux measurements are saturated and unreliable.
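The two criteria above reduce to simple tests on the cross-matched IRAS fluxes. The 10 Jy threshold below is illustrative only; the paper does not state a numerical 12$\mu$m cut:

```python
def star_flags(s12, s60, s100, s12_thresh_jy=10.0):
    """Star indicators from cross-matched IRAS fluxes (Jy).

    primary:   anomalously high 12-micron flux (the more effective criterion).
    secondary: S60/S100 > 1, i.e. a spectrum falling with wavelength,
               as expected on the Wien side of a stellar blackbody.
    """
    primary = s12 > s12_thresh_jy
    secondary = (s100 > 0) and (s60 / s100 > 1.0)
    return primary, secondary
```

An AGB star with a bright, falling IR spectrum triggers both flags; a dusty galaxy, whose spectrum rises toward 100$\mu$m, triggers neither.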
{width="18.1cm"}
Cut selection criteria {#sec:cutsec}
----------------------
To assist with comparing number counts with models and to further characterize source populations within the catalog, we develop three source cuts using extendedness and external cross-match information. Because source fluxes are measured in maps that have been optimally filtered assuming sources are unresolved by the SPT beam, sources that are flagged as extended or measured in the SPT maps as multiple detections will have fluxes that are systematically underestimated and therefore may bias the number counts. We therefore develop two cuts to flag them for removal when calculating the number counts.
First, in the extended cut, or “ext cut," we flag all objects identified as extended, or detected multiple times but confirmed to be a single object, using the methods described in Section \[sec:extsrcsection\]. Sources identified as stars are also removed in the counts for this cut, since they are not included in the models against which we compare the counts. The extended cut removes 131 sources from the catalog; 36 of these are classified as synchrotron-dominated and 95 as dust-dominated.
Second, we develop a cut to flag all low-redshift objects, using the redshift cross-match information discussed in Section \[sec:zassoc\]. Because the extended source flag used in the “ext cut" relies in part on by-eye inspection of individual sources, we sought a more systematic way to remove extended objects. All extended sources appear large enough in the SPT maps to be resolved by the SPT beam, and therefore should all be relatively local and removable by cutting all objects with low measured redshift; such a threshold will, however, remove some additional sources as well. The “$z$ cut" trims all sources flagged as stars and all sources with cross-matched redshifts $z \le 0.1$, flagging 461 sources from the full catalog, of which 248 have a synchrotron classification and 213 a dusty classification. From the distributions of source angular sizes for SPT sources with NED identifications, we expect the $z \le 0.1$ threshold to correspond roughly to cutting objects with angular sizes $\gtrsim 1$arcmin, roughly the size of the SPT beam. We verify that all sources flagged by the “ext cut" are included among those flagged by the “$z$ cut."
To more cleanly select sources in the catalog that are likely to be high-redshift SMGs, which in the relatively high flux range probed by the catalog are likely to be gravitationally lensed, we develop a list of “SPT SMGs" using stricter criteria than the “$z$ cut" and “ext cut" source lists. For this cut, we include only dust-dominated sources, apply the same redshift criterion as the “$z$ cut," and also exclude any remaining detections with IRAS cross-matches. Although a few IRAS detections have been confirmed to be at relatively high redshift (e.g. APM 0827, @irwin98b), such sources are rare in the literature. Furthermore, IRAS detections are unlikely to be high-redshift objects since, as shown in Figure \[fig:survdepth\], dusty objects are observed on the Wien side of the spectrum in the IRAS bands, which shifts to an intrinsically dimmer portion of the spectrum with increasing redshift, in addition to reduced flux from greater distance. In contrast, the negative K-correction of dusty sources in sub-/millimeter bands enables the detection of sources of the same luminosity out to high redshifts, as shown in Figure \[fig:iras\_select\]. Additionally, a cut on IRAS objects has been used successfully in previous SPT analyses as a proxy for trimming low-redshift objects [@vieira10; @mocanu13]. We also trim SPT detections that are measured as “dipping" in the three SPT bands, meaning sources with a dusty spectral index between 150 and 220GHz but a synchrotron spectral index between 95 and 150GHz. In a reanalysis of SPT number counts from M13, @mancuso15 identified a set of sources in the SPT-SZ survey that are relatively bright at 95GHz and were classified as dusty galaxies by the SPT pipeline in M13.
When considering fluxes for each source across a wider range of frequencies than just the SPT data, @mancuso15 note that these sources do appear to have significant emission in radio bands, indicating synchrotron emission, and do not have spectra that would clearly identify them as DSFGs. The presence of synchrotron emission causes the spectrum in SPT bands to appear “dipping," and although the exact classification of these sources remains somewhat unclear, this provides evidence that “dipping" sources in SPT data are not clearly DSFGs. Furthermore, a set of SPT “dipping" sources has been observed in preliminary follow-up observations with LABOCA at 870$\mu$m. While a few sources had measured fluxes at 870$\mu$m consistent with the presence of dust, most had fluxes too low to be consistent with a DSFG spectrum, indicating that they may be synchrotron sources with complicated spectra or blends of unrelated objects. Follow-up spectroscopy with the VLT to obtain redshifts for a set of SPT “dipping" sources has measured redshifts in the range $z = 0.85 - 2.32$, also indicating that these sources are less likely to be high-redshift SMGs. We note that a couple of “dipping" sources from V10 and M13 have follow-up observations confirming them as high-redshift lensed objects. However, these objects appear to have “dipping" spectral behavior in the SPT bands due either to superposition of the high-redshift object with its foreground lens or to a superposition of unrelated objects along the line of sight.
Therefore, although high-redshift dusty galaxies may possess significant synchrotron emission and thus appear in the SPT-SZ catalog with a “dipping" spectrum, follow-up information on known SPT “dipping" sources so far indicates that they are unlikely to be high-redshift dusty galaxies; to be conservative, we exclude them from the SMG list. We find a total of 506 sources in the SMG list, of which 73 are detected above 4.5$\sigma$ at both 150 and 220GHz.
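The SMG selection above combines several boolean criteria per source. The sketch below is a plain restatement of those criteria, not pipeline code; the function name and argument layout are assumptions, and the spectral-index thresholds are the $\alpha^{\text{max}}_{95-150} = 0.5$ and $\alpha^{\text{max}}_{150-220} = 1.51$ values used elsewhere in the text:

```python
def in_smg_list(is_dusty, redshift, has_iras_match,
                alpha_95_150, alpha_150_220, is_star=False):
    """Sketch of the 'SPT SMG' selection: dust-dominated, not a star,
    no low-redshift cross-match (z <= 0.1), no IRAS cross-match, and
    not 'dipping' (synchrotron-like 95-150 index with a dusty 150-220
    index). `redshift` is None when no cross-matched redshift exists."""
    if is_star or not is_dusty or has_iras_match:
        return False
    if redshift is not None and redshift <= 0.1:  # the "z cut"
        return False
    dipping = alpha_95_150 < 0.5 and alpha_150_220 >= 1.51
    return not dipping
```

For example, a dusty source with no cross-matches and rising indices in all bands passes, while the same source with a cross-matched $z = 0.05$, an IRAS counterpart, or a “dipping" spectrum is excluded.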
![Observed infrared luminosity versus redshift for low-redshift infrared galaxies [@brauher08] and SPT SMGs with detection thresholds for an Arp-220-like dusty galaxy spectrum shown in solid lines. \[fig:iras\_select\]](IRAS_SPT_arp220_2.pdf){width="8.5cm"}
{width="18.1cm"}
Number counts {#sec:number_counts}
=============
In addition to supplying a catalog of detected sources, we seek to calculate the expected number counts in each of our bands as a function of flux. The number counts provide a characterization of mm-wave source populations at different wavelengths, and can be used to constrain models of galaxy evolution.
To characterize the number counts at each of our three frequencies, we employ a bootstrap method developed in @austermann09. For each band, we select only the sources in the catalog that are detected above 4.5$\sigma$ in that particular band. For each source, using our chosen band as the flux prior band for deboosting, we select 50,000 triplets of source fluxes from the 3-dimensional flux posterior probability distribution for that source, effectively creating 50,000 mock catalogs. We resample each catalog by drawing fluxes with replacement for a number of sources that is a Poisson deviate of the true catalog size. We then calculate for each catalog the number of sources in each flux bin to find the differential number counts, and determine the 16th, 50th, and 84th percentiles of the distribution of $dN/dS$ within each flux bin. The number counts are corrected for completeness in each bin using the simulations in Section \[sec:completeness\]. We plot the resulting number counts in Figure \[fig:numcounts\]. We do not explicitly correct the number counts for purity, since purity is accounted for by the deboosting that generated the posteriors we draw from: the posteriors include fluxes below the detection threshold, and when such fluxes are drawn, we remove them from the number counts calculation.
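The bootstrap loop above can be sketched compactly. The function and argument names are illustrative, the completeness correction is omitted, and the per-source posteriors are represented as a plain array of draws rather than the full 3-dimensional deboosting posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_counts(flux_posteriors, bins, area_deg2, n_boot=1000, s_min=0.0):
    """flux_posteriors: (n_sources, n_samples) array of posterior flux draws (Jy).
    For each bootstrap realization: draw one posterior sample per source,
    resample the catalog with a Poisson-fluctuated size, drop draws below the
    detection threshold s_min, and histogram into flux bins. Returns the
    16/50/84th percentiles of dN/dS per bin (Jy^-1 deg^-2)."""
    n_src, n_samp = flux_posteriors.shape
    widths = np.diff(bins)
    realizations = np.empty((n_boot, len(widths)))
    for b in range(n_boot):
        n_draw = rng.poisson(n_src)                  # Poisson-deviate catalog size
        src = rng.integers(0, n_src, n_draw)         # resample with replacement
        flux = flux_posteriors[src, rng.integers(0, n_samp, n_draw)]
        flux = flux[flux >= s_min]                   # discard sub-threshold draws
        counts, _ = np.histogram(flux, bins=bins)
        realizations[b] = counts / widths / area_deg2
    return np.percentile(realizations, [16, 50, 84], axis=0)
```

For a toy catalog of 100 sources with identical posteriors, the recovered median $dN/dS$ sits near $n_{\rm src}/(\Delta S \cdot A)$ with Poisson scatter, as expected.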
Figure \[fig:numcounts\] shows differential source counts per band for the full catalog population (excluding stars), as well as for the synchrotron and dusty populations. The counts for the synchrotron and dusty populations are generated using a probabilistic classification, in which we calculate the corresponding $\alpha^{\text{max}}_{150-220}$ for each of the 50,000 flux resamplings of each source. We then classify each resampling as dusty or synchrotron using the same cut as for the catalog sources and associate it with the counts for its assigned population. Thus, a source with probability $p$ of having $\alpha^{\text{max}}_{150-220} \ge 1.51$ falls into the dusty source counts in a fraction $p$ of resamplings and into the synchrotron source counts in the remaining fraction $1-p$. As we might expect from Figure \[fig:numcounts\], the synchrotron counts dominate at all frequencies, but dusty sources are much more prominent at 220GHz than in the other two bands, and exceed the synchrotron counts at the very lowest flux levels. The total counts are shown with two cut versions: no cuts applied (other than removing stars), and the “$z$ cut" applied, removing all sources with measured redshifts $z \le 0.1$. Synchrotron-dominated and dust-dominated counts are also shown with the “$z$ cut" applied. Tables \[tab:diffcountstable90\], \[tab:diffcountstable150\], and \[tab:diffcountstable220\] give the calculated $dN/dS$ number counts for 95, 150, and 220GHz, respectively, for the total catalog population, for the synchrotron and dusty populations with the “$z$ cut" applied, and for SMGs.
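The probabilistic split can be illustrated with a simplified per-source calculation. This sketch computes a spectral index directly from each pair of flux draws, a stand-in for the $\alpha^{\text{max}}_{150-220}$ obtained from the deboosting posteriors; the function name is an assumption:

```python
import numpy as np

def dusty_probability(s150_samples, s220_samples, alpha_cut=1.51):
    """Fraction of joint posterior flux resamplings classified as dusty,
    i.e. with alpha = ln(S220/S150) / ln(220/150) >= alpha_cut. A source
    then contributes this fraction to the dusty counts and (1 - fraction)
    to the synchrotron counts."""
    alpha = np.log(s220_samples / s150_samples) / np.log(220.0 / 150.0)
    return np.mean(alpha >= alpha_cut)
```

A source whose draws all imply $\alpha = 3$ contributes entirely to the dusty counts ($p = 1$), while one with $\alpha = -1$ contributes entirely to the synchrotron counts ($p = 0$).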
[llllc]{} $8.7\times10^{-3}-1.1\times10^{-2}$ & $(9.71_{-1.2}^{+1.3})\times 10^{1}$ & $(8.00_{-1.1}^{+1.3})\times 10^{1}$ & $9.00_{-4.0}^{+5.0}$ & $0.80$\
$1.1\times10^{-2}-1.4\times10^{-2}$ & $(6.28_{-0.4}^{+0.4})\times 10^{1}$ & $(5.37_{-0.4}^{+0.4})\times 10^{1}$ & $5.44_{-1.4}^{+1.6}$ & $0.92$\
$1.4\times10^{-2}-1.7\times10^{-2}$ & $(3.95_{-0.3}^{+0.3})\times 10^{1}$ & $(3.48_{-0.3}^{+0.3})\times 10^{1}$ & $2.17_{-0.7}^{+0.9}$ & $1.00$\
$1.7\times10^{-2}-2.2\times10^{-2}$ & $(2.52_{-0.2}^{+0.2})\times 10^{1}$ & $(2.29_{-0.2}^{+0.2})\times 10^{1}$ & $(7.28_{-3.6}^{+4.6})\times 10^{-1}$ & $1.00$\
$2.2\times10^{-2}-2.7\times10^{-2}$ & $(1.63_{-0.1}^{+0.1})\times 10^{1}$ & $(1.48_{-0.1}^{+0.1})\times 10^{1}$ & $(2.18_{-1.5}^{+2.2})\times 10^{-1}$ & $1.00$\
$2.7\times10^{-2}-3.4\times10^{-2}$ & $(1.03_{-8.7\times10^{-2}}^{+9.3\times10^{-2}})\times 10^{1}$ & $9.33_{-0.9}^{+0.9}$ & $ 0 _{- 0 }^{+0.1}$ & $1.00$\
$3.4\times10^{-2}-4.2\times10^{-2}$ & $5.97_{-0.6}^{+0.7}$ & $5.37_{-0.6}^{+0.6}$ & & $1.00$\
$4.2\times10^{-2}-5.3\times10^{-2}$ & $4.32_{-0.4}^{+0.4}$ & $3.95_{-0.4}^{+0.4}$ & & $1.00$\
$5.3\times10^{-2}-6.7\times10^{-2}$ & $2.36_{-0.3}^{+0.3}$ & $2.15_{-0.3}^{+0.3}$ & & $1.00$\
$6.7\times10^{-2}-8.3\times10^{-2}$ & $1.39_{-0.2}^{+0.2}$ & $1.32_{-0.2}^{+0.2}$ & & $1.00$\
$8.3\times10^{-2}-1.0\times10^{-1}$ & $(7.88_{-1.3}^{+1.3})\times 10^{-1}$ & $(7.50_{-1.3}^{+1.3})\times 10^{-1}$ & & $1.00$\
$1.0\times10^{-1}-1.3\times10^{-1}$ & $(7.03_{-1.0}^{+1.0})\times 10^{-1}$ & $(5.84_{-0.9}^{+1.2})\times 10^{-1}$ & & $1.00$\
$1.3\times10^{-1}-1.6\times10^{-1}$ & $(5.25_{-0.8}^{+0.8})\times 10^{-1}$ & $(5.01_{-0.7}^{+0.8})\times 10^{-1}$ & & $1.00$\
$1.6\times10^{-1}-2.1\times10^{-1}$ & $(1.33_{-0.4}^{+0.4})\times 10^{-1}$ & $(1.33_{-0.4}^{+0.4})\times 10^{-1}$ & & $1.00$\
$2.1\times10^{-1}-2.6\times10^{-1}$ & $(1.22_{-0.3}^{+0.4})\times 10^{-1}$ & $(1.14_{-0.3}^{+0.4})\times 10^{-1}$ & & $1.00$\
$2.6\times10^{-1}-3.2\times10^{-1}$ & $(6.07_{-1.8}^{+2.4})\times 10^{-2}$ & $(5.46_{-1.8}^{+2.4})\times 10^{-2}$ & & $1.00$\
$3.2\times10^{-1}-4.1\times10^{-1}$ & $(4.84_{-1.5}^{+1.9})\times 10^{-2}$ & $(4.84_{-1.5}^{+1.9})\times 10^{-2}$ & & $1.00$\
$4.1\times10^{-1}-5.1\times10^{-1}$ & $(4.25_{-1.2}^{+1.5})\times 10^{-2}$ & $(4.25_{-1.2}^{+1.5})\times 10^{-2}$ & & $1.00$\
$5.1\times10^{-1}-6.4\times10^{-1}$ & $(2.16_{-0.6}^{+0.9})\times 10^{-2}$ & $(1.54_{-0.6}^{+0.9})\times 10^{-2}$ & & $1.00$\
$6.4\times10^{-1}-8.0\times10^{-1}$ & $(1.72_{-0.5}^{+0.7})\times 10^{-2}$ & $(1.72_{-0.5}^{+0.7})\times 10^{-2}$ & & $1.00$\
$8.0\times10^{-1}-1.0$ & $(1.96_{-2.0}^{+3.9})\times 10^{-3}$ & $ 0 _{- 0 }^{+3.9\times10^{-3}}$ & & $1.00$\
$1.0-1.3$ & $(6.26_{-3.1}^{+4.7})\times 10^{-3}$ & $(6.26_{-3.1}^{+4.7})\times 10^{-3}$ & & $1.00$\
[lllllc]{} $4.4\times10^{-3}-5.6\times10^{-3}$ & $(4.04_{-0.8}^{+0.8})\times 10^{2}$ & $(3.06_{-0.7}^{+0.8})\times 10^{2}$ & $(6.90_{-3.0}^{+3.9})\times 10^{1}$ & $(5.91_{-3.0}^{+3.0})\times 10^{1}$ & $0.80$\
$5.6\times10^{-3}-7.0\times10^{-3}$ & $(2.23_{-0.2}^{+0.2})\times 10^{2}$ & $(1.74_{-0.2}^{+0.2})\times 10^{2}$ & $(2.40_{-0.6}^{+0.8})\times 10^{1}$ & $(1.36_{-0.4}^{+0.5})\times 10^{1}$ & $0.90$\
$7.0\times10^{-3}-8.7\times10^{-3}$ & $(1.31_{-8.8\times10^{-2}}^{+9.0\times10^{-2}})\times 10^{2}$ & $(1.04_{-7.9\times10^{-2}}^{+7.9\times10^{-2}})\times 10^{2}$ & $(1.16_{-0.3}^{+0.3})\times 10^{1}$ & $5.10_{-1.4}^{+1.6}$ & $0.97$\
$8.7\times10^{-3}-1.1\times10^{-2}$ & $(7.69_{-0.5}^{+0.5})\times 10^{1}$ & $(6.38_{-0.5}^{+0.5})\times 10^{1}$ & $4.30_{-1.3}^{+1.6}$ & $1.79_{-0.5}^{+0.9}$ & $1.00$\
$1.1\times10^{-2}-1.4\times10^{-2}$ & $(4.76_{-0.5}^{+0.5})\times 10^{1}$ & $(4.12_{-0.4}^{+0.4})\times 10^{1}$ & $1.72_{-0.6}^{+0.9}$ & $(8.58_{-4.3}^{+5.7})\times 10^{-1}$ & $1.00$\
$1.4\times10^{-2}-1.7\times10^{-2}$ & $(2.89_{-0.3}^{+0.3})\times 10^{1}$ & $(2.54_{-0.2}^{+0.3})\times 10^{1}$ & $(5.71_{-3.4}^{+3.4})\times 10^{-1}$ & $(2.28_{-2.3}^{+2.3})\times 10^{-1}$ & $1.00$\
$1.7\times10^{-2}-2.2\times10^{-2}$ & $(1.68_{-0.1}^{+0.1})\times 10^{1}$ & $(1.52_{-0.1}^{+0.1})\times 10^{1}$ & $ 0 _{- 0 }^{+0.2}$ & & $1.00$\
$2.2\times10^{-2}-2.7\times10^{-2}$ & $(1.23_{-0.1}^{+0.1})\times 10^{1}$ & $(1.08_{-0.1}^{+0.1})\times 10^{1}$ & $(7.27_{-7.3}^{+7.3})\times 10^{-2}$ & $(7.27_{-7.3}^{+7.3})\times 10^{-2}$ & $1.00$\
$2.7\times10^{-2}-3.4\times10^{-2}$ & $7.13_{-0.7}^{+0.8}$ & $6.20_{-0.6}^{+0.7}$ & & & $1.00$\
$3.4\times10^{-2}-4.2\times10^{-2}$ & $4.39_{-0.5}^{+0.6}$ & $3.93_{-0.5}^{+0.5}$ & & & $1.00$\
$4.2\times10^{-2}-5.3\times10^{-2}$ & $2.73_{-0.3}^{+0.4}$ & $2.51_{-0.3}^{+0.3}$ & & & $1.00$\
$5.3\times10^{-2}-6.7\times10^{-2}$ & $1.41_{-0.2}^{+0.2}$ & $1.38_{-0.2}^{+0.2}$ & & & $1.00$\
$6.7\times10^{-2}-8.3\times10^{-2}$ & $1.06_{-0.2}^{+0.2}$ & $(9.40_{-1.6}^{+1.6})\times 10^{-1}$ & & & $1.00$\
$8.3\times10^{-2}-1.0\times10^{-1}$ & $(8.06_{-1.3}^{+1.3})\times 10^{-1}$ & $(6.94_{-1.3}^{+1.1})\times 10^{-1}$ & & & $1.00$\
$1.0\times10^{-1}-1.3\times10^{-1}$ & $(5.39_{-0.9}^{+1.0})\times 10^{-1}$ & $(5.24_{-0.9}^{+1.0})\times 10^{-1}$ & & & $1.00$\
$1.3\times10^{-1}-1.6\times10^{-1}$ & $(1.79_{-0.5}^{+0.6})\times 10^{-1}$ & $(1.79_{-0.5}^{+0.6})\times 10^{-1}$ & & & $1.00$\
$1.6\times10^{-1}-2.1\times10^{-1}$ & $(1.24_{-0.3}^{+0.5})\times 10^{-1}$ & $(1.24_{-0.4}^{+0.4})\times 10^{-1}$ & & & $1.00$\
$2.1\times10^{-1}-2.6\times10^{-1}$ & $(9.12_{-2.3}^{+3.0})\times 10^{-2}$ & $(9.12_{-3.0}^{+2.3})\times 10^{-2}$ & & & $1.00$\
$2.6\times10^{-1}-3.2\times10^{-1}$ & $(4.85_{-1.8}^{+1.8})\times 10^{-2}$ & $(4.85_{-1.8}^{+1.8})\times 10^{-2}$ & & & $1.00$\
$3.2\times10^{-1}-4.1\times10^{-1}$ & $(6.78_{-1.9}^{+1.9})\times 10^{-2}$ & $(6.29_{-1.9}^{+1.5})\times 10^{-2}$ & & & $1.00$\
$4.1\times10^{-1}-5.1\times10^{-1}$ & $(2.32_{-0.8}^{+1.2})\times 10^{-2}$ & $(1.93_{-0.8}^{+1.2})\times 10^{-2}$ & & & $1.00$\
$5.1\times10^{-1}-6.4\times10^{-1}$ & $(2.16_{-0.9}^{+0.9})\times 10^{-2}$ & $(2.16_{-0.9}^{+0.9})\times 10^{-2}$ & & & $1.00$\
$6.4\times10^{-1}-8.0\times10^{-1}$ & $(4.92_{-2.5}^{+2.5})\times 10^{-3}$ & $(2.46_{-2.5}^{+2.5})\times 10^{-3}$ & & & $1.00$\
$8.0\times10^{-1}-1.0$ & $(3.92_{-3.9}^{+2.0})\times 10^{-3}$ & $(3.92_{-3.9}^{+3.9})\times 10^{-3}$ & & & $1.00$\
$1.0-1.3$ & $(9.39_{-4.7}^{+3.1})\times 10^{-3}$ & $(7.83_{-3.1}^{+3.1})\times 10^{-3}$ & & & $1.00$\
[lllllc]{} $1.4\times10^{-2}-1.7\times10^{-2}$ & $(6.14_{-1.4}^{+1.5})\times 10^{1}$ & $(2.05_{-0.9}^{+0.7})\times 10^{1}$ & $(3.07_{-1.0}^{+1.4})\times 10^{1}$ & $(2.73_{-1.0}^{+1.0})\times 10^{1}$ & $0.83$\
$1.7\times10^{-2}-2.2\times10^{-2}$ & $(2.52_{-0.7}^{+0.8})\times 10^{1}$ & $(1.37_{-0.5}^{+0.6})\times 10^{1}$ & $8.00_{-4.6}^{+4.6}$ & $6.86_{-3.4}^{+3.4}$ & $0.99$\
$2.2\times10^{-2}-2.7\times10^{-2}$ & $(1.29_{-0.1}^{+0.2})\times 10^{1}$ & $7.87_{-1.1}^{+1.2}$ & $1.92_{-0.5}^{+0.7}$ & $1.51_{-0.5}^{+0.5}$ & $0.90$\
$2.7\times10^{-2}-3.4\times10^{-2}$ & $7.15_{-0.9}^{+0.9}$ & $4.80_{-0.8}^{+0.8}$ & $(7.61_{-2.3}^{+3.5})\times 10^{-1}$ & $(5.86_{-2.3}^{+2.9})\times 10^{-1}$ & $0.99$\
$3.4\times10^{-2}-4.2\times10^{-2}$ & $3.93_{-0.6}^{+0.6}$ & $2.82_{-0.5}^{+0.5}$ & $(3.70_{-1.9}^{+1.9})\times 10^{-1}$ & $(3.24_{-1.4}^{+1.9})\times 10^{-1}$ & $1.00$\
$4.2\times10^{-2}-5.3\times10^{-2}$ & $2.33_{-0.4}^{+0.4}$ & $1.81_{-0.4}^{+0.4}$ & $(1.11_{-0.7}^{+1.1})\times 10^{-1}$ & $(1.11_{-0.7}^{+1.1})\times 10^{-1}$ & $1.00$\
$5.3\times10^{-2}-6.7\times10^{-2}$ & $1.41_{-0.2}^{+0.3}$ & $1.09_{-0.2}^{+0.2}$ & $ 0 _{- 0 }^{+5.9\times10^{-2}}$ & $ 0 _{- 0 }^{+5.9\times10^{-2}}$ & $1.00$\
$6.7\times10^{-2}-8.3\times10^{-2}$ & $1.13_{-0.2}^{+0.2}$ & $(8.93_{-1.6}^{+1.9})\times 10^{-1}$ & $ 0 _{- 0 }^{+2.4\times10^{-2}}$ & $ 0 _{- 0 }^{+2.4\times10^{-2}}$ & $1.00$\
$8.3\times10^{-2}-1.0\times10^{-1}$ & $(6.19_{-1.5}^{+1.7})\times 10^{-1}$ & $(5.44_{-1.3}^{+1.5})\times 10^{-1}$ & & & $1.00$\
$1.0\times10^{-1}-1.3\times10^{-1}$ & $(2.84_{-0.7}^{+0.9})\times 10^{-1}$ & $(2.54_{-0.7}^{+0.9})\times 10^{-1}$ & & & $1.00$\
$1.3\times10^{-1}-1.6\times10^{-1}$ & $(1.79_{-0.5}^{+0.6})\times 10^{-1}$ & $(1.55_{-0.5}^{+0.6})\times 10^{-1}$ & & & $1.00$\
$1.6\times10^{-1}-2.1\times10^{-1}$ & $(1.24_{-0.4}^{+0.4})\times 10^{-1}$ & $(1.24_{-0.4}^{+0.4})\times 10^{-1}$ & & & $1.00$\
$2.1\times10^{-1}-2.6\times10^{-1}$ & $(6.84_{-2.3}^{+3.0})\times 10^{-2}$ & $(6.84_{-2.3}^{+3.0})\times 10^{-2}$ & & & $1.00$\
$2.6\times10^{-1}-3.2\times10^{-1}$ & $(6.07_{-2.4}^{+2.4})\times 10^{-2}$ & $(5.46_{-1.8}^{+2.4})\times 10^{-2}$ & & & $1.00$\
$3.2\times10^{-1}-4.1\times10^{-1}$ & $(3.39_{-1.5}^{+1.5})\times 10^{-2}$ & $(2.90_{-1.0}^{+1.9})\times 10^{-2}$ & & & $1.00$\
$4.1\times10^{-1}-5.1\times10^{-1}$ & $(2.32_{-1.2}^{+1.2})\times 10^{-2}$ & $(1.93_{-0.8}^{+1.5})\times 10^{-2}$ & & & $1.00$\
$5.1\times10^{-1}-6.4\times10^{-1}$ & $(6.16_{-6.2}^{+6.2})\times 10^{-3}$ & $(3.08_{-3.1}^{+6.2})\times 10^{-3}$ & & & $1.00$\
$6.4\times10^{-1}-8.0\times10^{-1}$ & $(7.38_{-4.9}^{+4.9})\times 10^{-3}$ & $(7.38_{-4.9}^{+4.9})\times 10^{-3}$ & & & $1.00$\
$8.0\times10^{-1}-1.0$ & $(5.89_{-3.9}^{+5.9})\times 10^{-3}$ & $(5.89_{-3.9}^{+3.9})\times 10^{-3}$ & & & $1.00$\
$1.0-1.3$ & $(3.13_{-1.6}^{+3.1})\times 10^{-3}$ & $(3.13_{-3.1}^{+3.1})\times 10^{-3}$ & & & $1.00$\
Discussion of results {#sec:discussion}
=====================
Source catalog characteristics
------------------------------
Of the 4845 sources in the catalog, 3980 (82.1%) are classified as synchrotron sources and 865 (17.9%) as dusty sources, based on the probability that their $\alpha^{\text{max}}_{150-220}$ from deboosting is, respectively, less than or greater than 1.51, the minimum of the summed posterior distribution of $\alpha^{\text{max}}_{150-220}$, as discussed in Section \[sec:popsep\]. 1109 sources in the catalog, or about 23%, have no cross-matches in external catalogs; of those, 597 are classified as synchrotron and 512 as dusty. Of the sources with no external cross-matches, 937 (84%) are detected in only one SPT band, and 172 (16%) are detected in at least two bands.
Looking at Figure \[fig:alpha1v2\], we see that while the majority of sources in the catalog fit the paradigm of two populations, dusty and synchrotron, with similar spectral indices between $95-150$GHz and $150-220$GHz, we also see some sources with a spectral break. To categorize the different types of behavior, we examine the distributions of $\alpha^{\text{max}}_{95-150}$ for dusty and synchrotron sources and find that $\alpha^{\text{max}}_{95-150} = 0.5$ forms a relatively natural population separation, although this is a somewhat soft threshold. Using $\alpha^{\text{max}}_{95-150} = 0.5$ and $\alpha^{\text{max}}_{150-220} = 1.51$ as population thresholds, we divide the plots in Figure \[fig:alpha1v2\] into four quadrants: “rising," “falling," “dipping," and “peaking," though we stress that since the population break lines do not fall along $\alpha = 0$, the behavior of a source in a given quadrant may not be as simple as the name suggests. For example, a source in the “peaking" quadrant may have flux that rises with frequency across all three bands, but with a spectral index shallow enough that the source is characterized as synchrotron.
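The quadrant assignment reduces to two threshold comparisons; the function name below is an assumption, while the thresholds are those stated in the text:

```python
def quadrant(alpha_95_150, alpha_150_220, cut_low=0.5, cut_high=1.51):
    """Place a source in one of the four spectral-behavior quadrants using
    the alpha_95-150 = 0.5 and alpha_150-220 = 1.51 population thresholds.
    'Dipping' means a synchrotron-like 95-150 index paired with a dusty
    150-220 index; 'peaking' is the reverse."""
    if alpha_95_150 >= cut_low:
        return "rising" if alpha_150_220 >= cut_high else "peaking"
    return "dipping" if alpha_150_220 >= cut_high else "falling"
```

A characteristic dusty source ($\alpha \approx 2.5$, $3.0$) lands in “rising," and a characteristic synchrotron source ($\alpha \approx -0.7$, $-0.6$) lands in “falling."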
### Synchrotron sources
Using this categorization, we find that of the 3980 sources categorized as synchrotron, 3266 fall into the “falling" category. We expect the flux of these sources to be dominated by synchrotron emission, and they are likely characteristic synchrotron sources: steep-spectrum sources and flat-spectrum sources (blazars), the latter including Flat-Spectrum Radio Quasars (FSRQs) and BL Lacs, which can be further divided into low-frequency-peaked (LBL) and high-frequency-peaked (HBL) BL Lacs [@dezotti10; @urry98].
Considering all sources classified as synchrotron, we find $\alpha^{\text{max}}_{95-150}$ (as shown in Figure \[fig:alpha1v2\]) has a median value of $-0.6$ with a wide standard deviation of $1.2$, and $\alpha^{\text{max}}_{150-220}$ has a median of $-0.7$ with standard deviation of $0.9$. Restricting to synchrotron sources detected at greater than 5.0$\sigma$ at 150 and 220GHz, these median spectral indices flatten and tighten slightly to median $\alpha^{\text{max}}_{95-150} = -0.6$ with a standard deviation of $0.4$ and median $\alpha^{\text{max}}_{150-220} = -0.6$ with a standard deviation of $0.5$. These numbers are the same if we restrict to only synchrotron sources in the “falling" quadrant.
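The spectral indices quoted here follow the usual power-law convention $S_\nu \propto \nu^\alpha$. As a minimal illustration of the index between two bands (the deboosted $\alpha^{\text{max}}$ estimator uses the full flux posteriors and is not reproduced here):

```python
import math

def spectral_index(s1, s2, nu1, nu2):
    """Two-point spectral index alpha, assuming S_nu ∝ nu^alpha.
    s1, s2: flux densities at frequencies nu1, nu2 (same units each)."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# Example: a source falling from 20 mJy at 95 GHz to 15 mJy at 150 GHz
alpha = spectral_index(20.0, 15.0, 95.0, 150.0)   # ≈ -0.63, a typical synchrotron index
```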
From models of synchrotron number counts, we expect that in our observing bands, synchrotron sources in the flux ranges spanned by the SPT catalog should be dominated by flat-spectrum sources, either FSRQs for sources with fluxes $\gtrsim$15mJy, or BL Lacs for sources with fluxes $\lesssim$15mJy, although steep-spectrum sources are expected to make up a larger portion of the synchrotron population at lower flux ranges as well [@tucci11]. Flat-spectrum sources are expected to have spectral indices $\alpha > -0.5$, but according to [@tucci11], the spectra of FSRQs will feature a spectral break which becomes more prominent at higher observing frequencies. For the “C2Ex" model version from @tucci11, which is expected to be the model version in @tucci11 that best predicts synchrotron number counts at our observing frequencies, the frequency at which the spectral break is predicted to occur is below our observing bands for all but the few very highest-flux sources in our catalog. Therefore, in the SPT bands, it is likely that FSRQs will appear as steep-spectrum sources, post-spectral break. In contrast, according to the @tucci11 model, BL Lacs are expected to feature a spectral break at observing frequencies higher than the SPT bands, and therefore should appear as flat-spectrum sources, though their population will be balanced somewhat in the lower flux ranges by steep-spectrum sources. We might therefore expect that relatively high-flux synchrotron sources in the SPT catalog will appear with moderately steep spectral indices in our bands, while lower-flux sources are likely to have a wider distribution of spectral indices, which may peak between flat and steep depending on the balance between FSRQs, BL Lacs, and steep-spectrum sources.
Looking at the SPT catalog, we find this to be generally true: synchrotron sources with fluxes greater than 50mJy in at least two bands have a moderately steep median spectral index $\alpha^{\text{max}}_{95-150} = -0.6$, which is the same regardless of whether we restrict to sources in the “falling" quadrant or include all synchrotron-classified sources. Looking at synchrotron sources in the lower range of flux probed by the SPT-SZ catalog, $S_{150} < 20\,$mJy, but still detected above 4.5$\sigma$ at 150 and 220GHz such that they have well-measured spectral indices, we find the same median $\alpha^{\text{max}}_{95-150}$ but with a wider distribution. We note, however, that the distribution in $\alpha^{\text{max}}_{95-150}$ will necessarily be wider for lower-flux sources simply due to larger scatter from noise.
Sources in the “peaking" quadrant are classified as synchrotron and have a flat or falling spectral index between 150GHz and 220GHz, but a rising spectral index between 95GHz and 150GHz. In the SPT-SZ catalog, there are 714 sources in this quadrant. 88% of the sources in the “peaking" quadrant are single-band detections at 150GHz only, indicating that many have relatively low flux, given that the noise threshold is lowest at 150GHz and sources just barely detected at 150GHz may be below the noise threshold at 95 and 220GHz. Because they are low-significance detections and we apply relatively unrestrictive priors on spectral index in the deboosting, the flux deboosting for these sources is quite uncertain. We expect that the visible clustering of sources in the “peaking" quadrant shown in the upper right panel of Figure \[fig:alpha1v2\] is therefore likely due to the edges of the spectral index priors applied in the deboosting, particularly because this clustering is not visible in the distributions of the raw spectral indices, shown in the upper left panel of Figure \[fig:alpha1v2\].
44% of the “peaking" sources have cross-matches in SUMSS; only about 6% have cross-matches in IRAS. As a check on the expected nature of the sources “peaking" in the SPT bands, we consider all sources with cross-matches in SUMSS, and find that a large majority show flat or falling spectral behavior between the measured SUMSS flux at 843MHz and the flux at 150GHz in SPT, indicating that they likely have spectra consistent with being flat- or steep-spectrum synchrotron sources. A total of 20 sources have detected fluxes above $4.5\,\sigma$ in both 95 and 150GHz; of these, 19 have cross-matches in radio bands, including SUMSS. Of the sources with radio cross-matches, all but one have a spectral index relative to SUMSS consistent with being a flat-spectrum source, despite having $\alpha^{\text{max}}_{95-150} \gtrsim 0.5$. The remaining source with a radio cross-match has a spectral index relative to SUMSS that would categorize it as a steep-spectrum source. For these sources “peaking" in the SPT data that have relatively well-measured spectral indices in the SPT bands and cross-matches in radio catalogs, we expect they are likely AGN with significant self-absorption; disagreements in spectral behavior between the SPT bands and the fluxes from radio cross-matches may be due to source variability over time. There is a population of sources, known as Gigahertz peaked-spectrum (GPS) sources, that peak generally in the range 500MHz – 10GHz [@odea98], due either to self-absorption of synchrotron or to free-free absorption in the ionized outskirts of the source, with a subpopulation peaking at frequencies above 5GHz, known as High Frequency Peakers (HFPs) [@dallacasa00].
However, multi-frequency follow-up observations of both the original “bright sample" of HFPs from @dallacasa00 and “faint sample" from @stanghellini09 indicated that a large fraction of each sample, including all faint sources with the highest turn-over frequencies, were identified as flat-spectrum blazars, often with large variability between epochs [@tinti05; @orienti10]. Combining this information with spectral index information relative to radio cross-matches for the SPT sources with relatively well-measured “peaking" spectral indices, we expect these sources are not HFPs, and instead are likely flat- or steep-spectrum synchrotron sources.
We note, from the proxy SED profiles for redshifted dusty galaxies shown in Figure \[fig:survdepth\] and in Figure \[fig:alphavsredshift\] in the following section, that very high-redshift dusty galaxies may also appear in the SPT data with a “peaking" profile: a rising spectral index between 95 and 150GHz, consistent with dust, but a spectral index between 150 and 220GHz that begins to flatten as the peak of the blackbody SED is redshifted into the SPT bands. In the currently employed classification scheme, these sources would be categorized as synchrotron-dominated. A source with an Arp 220 spectrum would have a spectral index between 150 and 220GHz that flattens to below 1.51, the population separation threshold used in the current catalog, at roughly a redshift of $z\sim10$. We note that the dusty source in the SPT-SZ catalog with the highest measured redshift, $z = 6.9$ [@strandet17], has spectral indices in the SPT-SZ bands that place it in the “rising" quadrant, correctly categorizing it as a dusty source.
### Dusty sources
Looking at all 865 dusty-classified sources, we find median spectral indices with relatively wide distributions of $\alpha^{\text{max}}_{95-150} = 1.7$ with a standard deviation of $1.5$ and $\alpha^{\text{max}}_{150-220} = 2.7$ with a standard deviation of $0.8$, which steepen to $\alpha^{\text{max}}_{95-150} = 2.3$ with a standard deviation of $1.3$ and $\alpha^{\text{max}}_{150-220} = 3.3$ with a standard deviation of $0.5$ when considering only sources detected above 5.0$\sigma$ at both 150 and 220GHz. They also steepen to $\alpha^{\text{max}}_{95-150} = 2.1$ and $\alpha^{\text{max}}_{150-220} = 2.8$, with standard deviations of $0.9$ and $0.8$, respectively, when considering only dusty sources in the “rising" quadrant (695 sources), and to $\alpha^{\text{max}}_{95-150} = 2.5$ and $\alpha^{\text{max}}_{150-220} = 3.3$, with standard deviations of 1.0 and 0.5, respectively, for sources in the “rising" quadrant detected above 5.0$\sigma$ at both 150 and 220GHz.
We expect dusty galaxies observed in the frequency bands of SPT-SZ, where we are probing the Rayleigh-Jeans side of the spectrum, to follow a modified blackbody spectrum, $S_\nu \propto \nu^\beta B_{\nu}(\nu,T_d) \propto \nu^{\beta+2}$, where $\beta$, the dust emissivity spectral index, is often assumed to be 1.5 and is measured to be in the range 1 – 2 for starburst galaxies [@dunne01; @magnelli12]. Thus, we expect to find measured spectral indices for dusty sources in the range $\alpha = 3-4$, and we find the SPT catalog to be relatively consistent with this, especially for dusty sources with well-measured spectral indices (detected at both 150 and 220 GHz).
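As a rough check that these values are sensible, one can evaluate the effective index of a modified blackbody between the 150 and 220 GHz bands; the helper below is illustrative, with assumed values $\beta = 1.5$ and $T_d = 40$ K:

```python
import math

H = 6.62607015e-34   # Planck constant [J s]
K_B = 1.380649e-23   # Boltzmann constant [J/K]

def planck(nu_hz, t_d):
    """Planck function B_nu, up to constant prefactors that cancel
    when computing a spectral index."""
    return nu_hz**3 / math.expm1(H * nu_hz / (K_B * t_d))

def modified_bb_index(nu1_ghz, nu2_ghz, beta=1.5, t_d=40.0):
    """Effective two-point index of S_nu ∝ nu^beta * B_nu(nu, T_d)
    (beta and T_d are assumed, illustrative values)."""
    nu1, nu2 = nu1_ghz * 1e9, nu2_ghz * 1e9
    s1 = nu1**beta * planck(nu1, t_d)
    s2 = nu2**beta * planck(nu2, t_d)
    return math.log(s2 / s1) / math.log(nu2 / nu1)

alpha = modified_bb_index(150.0, 220.0)   # ≈ 3.4, close to the RJ limit beta + 2 = 3.5
```

The result falls in the $\alpha = 3-4$ range expected above, and the small departure from $\beta + 2$ reflects the mild curvature of the Planck function even on the Rayleigh-Jeans side.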
We find 170 sources in the “dipping" quadrant, which are dusty-classified sources with typically greater flux at 95 and 220GHz relative to 150GHz. We expect some sources in this category to be nearby spiral galaxies or ULIRGs, sources with spectra containing both dust and synchrotron components. For example, from Figure \[fig:survdepth\], an Arp 220 SED with slightly more synchrotron emission would appear as a dipping source in the SPT-SZ bands. In addition, multi-wavelength follow-up observations of powerful radio galaxies and steep radio-spectrum quasars have shown these sources can have spectra that dip at roughly 1mm [@haas06], with contributions to the emission at far-infrared and submm wavelengths expected to be caused by dust heating from the AGN and star formation, and observations with [*Herschel*]{} have shown that a substantial fraction of these sources may be radio-loud ULIRGs [@barthel18; @podigachoski16]. As mentioned in Section \[sec:cutsec\], follow-up observations at 870$\mu$m with LABOCA of a set of 21 sources with dipping spectra in SPT bands have indicated that while a few have fluxes at 870$\mu$m consistent with the presence of dust, most have fluxes at 870$\mu$m too low to be consistent with a DSFG spectrum. Five SPT “dipping" sources with LABOCA follow-up have fluxes at 870$\mu$m that indicate, along with cross-matches in radio catalogs, that their spectra are likely a combination of dust and synchrotron: SPT0420-55, SPT0427-47, SPT2117-58, SPT2147-55, and SPT2014-56. The rest of the sources have non-detections or measured fluxes at 870$\mu$m too low to be consistent with the significant presence of dust. Follow-up spectroscopy with the VLT for SPT “dipping" sources has yielded measured redshifts in the range $z = 0.85 - 2.32$, indicating that they are unlikely to be high-redshift SMGs, though none of the five sources mentioned above with bright 870$\mu$m fluxes have measured redshifts.
Therefore, it is likely that this group of sources contains a combination of different types of objects, including low-redshift galaxies, moderate-redshift synchrotron sources or sources with spectra that include both dust and synchrotron components, and possibly sources that are blends of unrelated objects along the line of sight. 68% of the sources in the “dipping" quadrant have cross-matches in external catalogs, especially SUMSS, and most of the brightest “dipping" sources have cross-matches in SUMSS, IRAS, and WISE, as expected for nearby galaxies.
Most sources in the “dipping" quadrant do not have a strong preference for falling in that quadrant: many are detected only at 95GHz or only at 220GHz, meaning they have high uncertainties on their deboosted spectral indices, or they lie relatively close to the threshold of a different quadrant. Using the posterior distributions for $\alpha^{\text{max}}_{95-150}$ and $\alpha^{\text{max}}_{150-220}$ to calculate a probability for each source to be deboosted into the dipping quadrant, we find only four sources with greater than 90% likelihood of being in that quadrant. Two of these are parts of objects detected as multiple components in the source-finding, indicating that they are extended, and therefore likely low-redshift; their measured fluxes may also carry greater uncertainty because the profile used to extract them does not optimally match the source.
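The per-source quadrant probability described above can be sketched as a Monte Carlo over posterior samples (a toy stand-in: the samples here are drawn independently from assumed Gaussians, whereas the real joint posteriors may be correlated):

```python
import numpy as np

rng = np.random.default_rng(0)

def dipping_probability(alpha_lo_samples, alpha_hi_samples,
                        thresh_lo=0.5, thresh_hi=1.51):
    """Fraction of joint posterior samples landing in the 'dipping'
    quadrant: alpha^max_{95-150} below 0.5, alpha^max_{150-220} above
    1.51 (thresholds from the text)."""
    lo = np.asarray(alpha_lo_samples)
    hi = np.asarray(alpha_hi_samples)
    return np.mean((lo < thresh_lo) & (hi > thresh_hi))

# Toy posteriors: broad Gaussians centered inside the dipping quadrant
p = dipping_probability(rng.normal(-0.5, 1.0, 100_000),
                        rng.normal(2.5, 0.5, 100_000))   # ≈ 0.82
```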
We would also expect Galactic HII regions to fall in the “dipping" quadrant. These sources are expected to be quite extended in the SPT-SZ maps, often appearing in the catalog as multiple source detections; their measured fluxes are therefore quite inaccurate. Extended sources in the catalog that cross-match with Galactic HII regions are mostly divided between the “rising" and “dipping" quadrants. These sources are flagged as extended in the catalog and are cut by both the “ext cut" and “$z$ cut."
For the list of 506 SMGs, we find median spectral indices of $\alpha^{\text{max}}_{95-150} = 2.1$ with a standard deviation of $0.9$ and $\alpha^{\text{max}}_{150-220} = 2.6$ with a standard deviation of $0.8$, which steepen to $\alpha^{\text{max}}_{95-150} = \alpha^{\text{max}}_{150-220} = 3.3$, with standard deviations of 0.7 and 0.4, respectively, for sources in the SPT SMG list detected above 5.0$\sigma$ at 150 and 220GHz (52 sources).
Follow-up observations of SMG candidates from the 2500-square-degree SPT-SZ survey using ALMA have measured redshifts for 39 sources, yielding a median redshift $z \sim 4$ [@weiss13; @strandet16], and all available measured redshifts for SMGs in the survey are shown in Figure \[fig:alphavsredshift\]. The source with the highest measured redshift, SPT0311–58, at $z = 6.9$ [@strandet17], has been observed to have massive rates of star formation likely triggered by the merging of two component galaxies, making it a very rare object observed well into the epoch of reionization [@marrone18]. While follow-up observations of SPT SMGs have indicated that most are subject to strong gravitational lensing, a number of sources show no evidence of gravitational lensing, indicating that they may be intrinsically extremely luminous or may be groups of dusty star-forming galaxies potentially in the early stages of forming a galaxy cluster, referred to as a ‘protocluster’ [@miller18]. SPT2349-56, a protocluster discovered in the 2500-square-degree SPT-SZ survey area, has been shown using deep ALMA spectral imaging to consist of at least 14 galaxies, all at a redshift of 4.31 and undergoing massive star formation in a relatively compact region [@miller18]. From a follow-up sample of roughly 90 SPT sources, a total of about 9 protocluster candidates have been discovered so far using detailed ALMA and LABOCA observations. These sources show similar characteristics to SPT2349-56, with typical measured redshifts $z \gtrsim 4$, demonstrating that protoclusters discovered in the SPT-SZ area can inform the study of structure formation in the very early universe.
### Redshift distribution of SPT-detected sources
A total of 743 sources in the SPT-SZ catalog have measured redshifts from cross-matches in NED or follow-up observations. Of these, 531 are synchrotron-dominated sources and 212 are dust-dominated sources. 234 (163) synchrotron (dusty) sources with measured redshifts are at $z < 0.1$. Figure \[fig:alphavsredshift\] shows the distribution of spectral index, $\alpha^{\text{max}}_{150-220}$, for all sources in the SPT-SZ catalog with measured redshifts, showing both dusty and synchrotron populations. Because the redshifts are drawn from a variety of different sources, we do not attempt to quantify the completeness function of the redshift cross-matching with the SPT catalog; rather, this figure illustrates the known redshift information for the catalog. The measured redshifts for high-redshift SPT SMGs are drawn primarily from follow-up observations with ALMA of sources discovered in the SPT-SZ survey area [@weiss13; @strandet16].
A few individual sources in Figure \[fig:alphavsredshift\] warrant additional comment. Two sources with measured redshifts $> 3$ and no cross-matches in external catalogs are not included in the SPT SMG list because both appear with a dipping spectrum in the SPT bands, due to blending of the lensed object with either its foreground lens or an unrelated source along the line of sight. An additional source with measured redshift $>3$ appears in the plot with a cross-match to an IRAS detection; this cross-match is most likely a false association of two unrelated objects that fall just within the association radius. Finally, one source with a measured high redshift in the SPT SMG list has a cross-match with an X-ray detection in RASS; the lens for this high-redshift source is a galaxy cluster, and the X-ray detection is of the cluster, which falls within the RASS association radius of the background source.
### Millimeter-wavelength star characterization {#sec:stardiscuss}
Millimeter-wavelength observations of cool stars can provide interesting insight into the nature of these objects. The baseline expected mm-wavelength flux comes simply from the Rayleigh-Jeans tail of the stellar blackbody; the stars detectable by SPT are those large and bright enough that, even in a wavelength range where the flux is many orders of magnitude below the star's peak output, this tail is still detectable above the SPT noise level. As mentioned in Section \[sec:starident\], nine of the ten stars in the SPT-SZ catalog are AGB stars, most of which are M type, and many of which are Mira variables or closely related to Miras, which are known to be very large and luminous, bright enough for the baseline flux in the Rayleigh-Jeans tail to be detectable. However, excess mm-wavelength emission above the stellar SED, or a spectral break from the expected $\nu^2$ of the stellar blackbody, may indicate the presence of dust, including spinning dust, or potentially stellar winds, though the effects of winds are likely to be subdominant to dust because the stellar atmosphere will be optically thin at millimeter wavelengths [@ogorman17; @tram19].
Gathering flux measurements from the literature, Figure \[fig:starsed\] shows SEDs for each of the ten stars detected in the SPT-SZ catalog. Other than \* $\beta$ Pic, which is an A-type star and therefore much hotter than the other stars detected in the catalog (but which has a well-known dusty debris disc causing excess emission at longer wavelengths), the stars detected in the SPT-SZ catalog are expected to have effective temperatures of roughly a few thousand Kelvin, and therefore should have blackbody spectra that peak at roughly 1 to a few microns. Therefore, to explore the possibility of excess flux at millimeter wavelengths, we fit a simple blackbody model where the model is constrained using only data in the wavelength range of the blackbody peak, $1.25 - 5\,\mu$m, to fit for the blackbody effective temperature and the star’s angular diameter.
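A minimal version of such a fit is sketched below, using synthetic photometry rather than real stellar fluxes (the temperature, solid angle, and wavelength sampling are illustrative assumptions, not values from the catalog):

```python
import numpy as np
from scipy.optimize import curve_fit

H, K_B, C = 6.62607015e-34, 1.380649e-23, 2.99792458e8  # SI constants

def bb_flux(nu, t_eff, omega_14):
    """Blackbody flux density S_nu = Omega * B_nu(nu, T). The solid
    angle is parameterized in units of 1e-14 sr to keep the fit
    well scaled."""
    return omega_14 * 1e-14 * (2 * H * nu**3 / C**2) / np.expm1(H * nu / (K_B * t_eff))

# Synthetic photometry in the 1.25-5 micron fitting window
wavelengths_um = np.array([1.25, 1.65, 2.2, 3.4, 4.6])
nu = C / (wavelengths_um * 1e-6)
data = bb_flux(nu, 3000.0, 1.0)   # a ~3000 K star with Omega = 1e-14 sr

popt, _ = curve_fit(bb_flux, nu, data, p0=[2500.0, 0.5])
t_fit, omega_fit = popt           # recovers ~3000 K and the input solid angle
# For a uniform disc, Omega = pi * (theta/2)^2 gives the angular diameter
theta_rad = 2.0 * np.sqrt(omega_fit * 1e-14 / np.pi)
```

Constraining the fit to the $1.25 - 5\,\mu$m window, as in the text, anchors the model near the blackbody peak so that any mm-wavelength excess shows up as a departure from the extrapolated Rayleigh-Jeans tail.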
Six stars in the catalog show flux in the SPT bands relatively consistent with blackbody fits to just the baseline stellar SED: \* P Dor, \* bet Gru, \* pi.01 Gru, V\* R Hor, V\* X Pav, and V\* NU Pav. The other four stars identified in the SPT data have somewhat different spectral behavior from following the tail of the stellar blackbody. Two stars show excess emission but similar spectral indices: \* $\beta$ Pic and V\* RZ Sgr. As mentioned above, \* $\beta$ Pic has a well-documented debris disc with a median dust temperature of 79K [@riviere-marichalar14]. This star shows strong excess flux in the SPT bands relative to the expected stellar SED and a spectral index slightly steeper than the expected $\nu^2$ of the stellar blackbody, likely due to dust modification of the blackbody spectrum. V\* RZ Sgr shows an excess of flux in mm-wavelengths but with a typical blackbody spectral index. It is known to have an optical nebula [@whitelock94] and a circumstellar shell large enough to be resolved by IRAS at 60$\mu$m with a measured radius of 4.3arcmin [@young93], and therefore the extra emission observed in the SPT bands is consistent with the significant presence of dust. Correspondingly, V\* RZ Sgr is observed to be extended in the SPT-SZ catalog, and is flagged accordingly. Therefore, the measured SPT fluxes are likely to underestimate the true flux, and this is the likely cause of the disagreement between the measured SPT fluxes and those from Planck [@planck18-54], as shown in Figure \[fig:starsed\], given the larger Planck beam.
Two stars show spectral indices distinctly different from the stellar blackbody: V\* RR Tel and del02 Gru. V\* RR Tel shows quite flat spectral indices as well as excess flux in SPT bands, although we also note that the simple blackbody model is a poor fit to the data. V\* RR Tel is known to be a symbiotic nova, with a red giant in mutual orbit with a white dwarf [@ivison95], and its distinct spectrum may indicate the presence of significant stellar winds [@gudel02] or the effect of ionization from the white dwarf. del02 Gru, a red giant, also has a relatively flat measured spectral index in SPT bands. We note that both V\* RR Tel and del02 Gru have spectral indices measured between the SPT bands that are relatively consistent with measured fluxes in radio catalogs, where the measured flux at radio frequencies has clearly departed from the blackbody spectrum. Similarly, V\* R Hor, V\* NU Pav, and \* bet Gru have detections in radio catalogs that also show a break from the blackbody spectrum. The flux measurements in SPT bands for these three are relatively consistent with a blackbody spectrum, but show some departure, potentially consistent with their radio fluxes.
As can be seen in Figure \[fig:starid\], three sources in the SPT catalog have cross-matches with sources in IRAS with fluxes at 12$\mu$m comparable to the stars but are not identified as stars. Examining each of these objects individually by hand and comparing with data from external surveys, we find that two appear to be false cross-matches due to blends of multiple objects superimposed along the line of sight in the SPT maps, making accurate cross-match identification difficult. A third SPT object appears in the catalog as a repeat cross-match with the IRAS source identified as V\* RZ Sgr, due to either being a blend of unrelated objects along the line of sight near V\* RZ Sgr or possibly a multiple detection of V\* RZ Sgr itself, which is known to be extended, as noted above.
Number counts characterization
------------------------------
### Synchrotron source population
Differential number counts per band for synchrotron-dominated sources are shown in Figure \[fig:sync\_counts\] along with comparison to two models: @dezotti05 and @tucci11, neither of which has been fit to the SPT counts. The SPT counts shown are calculated using the method described in Section \[sec:number\_counts\] on the SPT source population with the “$z$ cut" flagged sources removed. Plots comparing non-cut and various cut versions of the synchrotron counts are shown in Figure \[fig:sync\_counts\_appendix\] in the Appendix.
The @dezotti05 cosmological evolution model includes separate components for multiple synchrotron populations, including primarily steep-spectrum radio sources and two populations of flat-spectrum sources (blazars): flat-spectrum radio quasars (FSRQs) and BL Lacs. It describes each population with a comoving luminosity function extrapolated to higher frequencies using a simple power law ($\alpha =-0.1$) for flat-spectrum sources and some spectral steepening for steep-spectrum sources.
Similar to @dezotti05, the @tucci11 model extrapolates source counts using spectral behavior measured at low radio frequencies (5 GHz). The extrapolation, however, is developed using characteristics of the physical mechanisms of emission for different populations, focusing specifically on flat-spectrum sources, which dominate the number counts at cm- to mm-wavelengths for fluxes brighter than $\sim$ 10mJy. The spectrum of a flat-spectrum source is expected to break at some frequency in the range 10-1000GHz and steepen at higher frequencies, due both to electron cooling as electrons are injected from the AGN core into the jets and to a reduction of the apparent size of the optically thick core with increasing observing frequency, such that high-frequency observations become dominated by the optically thin jets, which have a steeper spectrum [@tucci11]. This effect is most prominent for higher-flux sources, because these are more likely to be flat-spectrum sources. The SPT-SZ counts are compared with the “C2Ex" version of the @tucci11 model, the version that best fits data at frequencies $\gtrsim$ 100GHz, as confirmed mainly through comparison with Planck ERCSC counts, which have strong constraining power at the highest flux ranges due to their full-sky coverage.
While historically the @dezotti05 model has been broadly successful when extrapolated to higher frequencies [@dezotti10], because it does not include a spectral break for flat-spectrum sources, we expect it to fit the counts less well than the @tucci11 model with increasing observing frequency and increasing source flux. Looking at Figure \[fig:sync\_counts\], while the @dezotti05 model is in moderate agreement with the SPT-SZ counts at 95GHz, it becomes an increasingly poor fit at higher frequencies, particularly at high fluxes, where FSRQs dominate. The @tucci11 model is a reasonably good fit to the data in all three SPT-SZ frequency bands across the flux ranges probed by the SPT-SZ catalog.
### Dusty source population
Differential number counts per band for sources in the SMG list are shown in Figure \[fig:dust\_counts\] with comparisons to the lensed components of three representative models: @bethermin12, @cai13, and @negrello07. Because measured fluxes in the SPT-SZ catalog for low-redshift sources that are extended will underestimate the true source flux, we restrict our comparisons of the number counts for dusty populations with models to the SPT SMG list, for which comparison with models is the most straightforward. The SPT SMG list comprises dusty sources where all low-redshift sources, IRAS cross-matches, and dipping sources, which may be contaminated by blending with source lenses or unrelated objects along the line of sight, have been removed. We expect this population to be dominated by sources that are magnified by gravitational lensing. None of the models considered here have been fit to the SPT number counts. Figure \[fig:dust\_counts\_appendix\] in the Appendix shows a comparison of different cut versions for the dusty number counts, including no cuts (only stars), “ext cut," “$z$ cut," and the SMG list. All three models we compare with combine forward-physical and backward-phenomenological components to describe different populations of the observable galaxy population, including late-type warm (starburst) and cold (normal) galaxies, as well as lensed and unlensed spheroidals and protospheroidals.
The @negrello07 model includes counts for protospheroidals as modeled using the physical model by @granato04. The @granato04 counts have been rescaled at 850$\mu$m to agree with counts from the SCUBA SHADES survey [@coppin06]. Protospheroidals virialized generally at $z \gtrsim 1.5$, and the $z \lesssim 1.5$ contribution is considered to be dominantly from starburst and disc galaxies, which are modeled from local luminosity functions at 60$\mu$m. The strongly lensed component of the @negrello07 model was calculated similarly to @perrotta02 [@perrotta03] but using the @granato04 model for protospheroidals. When comparing with the SMG list, as in Figure \[fig:dust\_counts\], we have trimmed sources with fluxes greater than 200mJy at 60$\mu$m from the @negrello07 model. Of the dusty source counts models considered in this work, the @negrello07 lensed model is the closest to reproducing the SPT SMG counts, but we note that it is the model that is the most fine-tuned to reproduce counts at (sub-)mm-wavelengths.
The @bethermin12 model includes main sequence and starburst galaxies as its two main components, using one SED per component from [*Herschel*]{} libraries. Because phenomenological or hybrid models are limited by the lack of a physical underpinning for the evolution of the luminosity function, the @bethermin12 model is instead based on two distinct star-formation mechanisms and their evolution, one for each galaxy component, following the work in @sargent12. The contribution from strong gravitational lensing is accounted for by applying a magnification factor to the luminosity function. The lensed component of the @bethermin12 model overestimates the SPT SMG number counts for all but the very lowest fluxes at both 150 and 220GHz.
The @cai13 model is a hybrid model, combining a physical, forward model for spheroidal galaxies with a backward-evolution model for late-type galaxies, based on observations that early-type galaxies are dominated by older stellar populations, while late-type galaxies have younger stellar populations. It improves on previous models by considering the flux contributions for protospheroidal galaxies from star formation and the central AGN in a unified way rather than separately. Protospheroidal galaxies are modeled using @granato04, and low-$z$ galaxies are considered in two populations: “warm" starburst galaxies and “cold" late-type galaxies. A magnification factor is applied to account for strong lensing of high-redshift protospheroidals. While the @cai13 model counts are lower than those of @bethermin12, this model also over-predicts the SPT SMG counts for generally all but the lowest flux bins probed by the SPT data at both 150 and 220GHz.
Disagreement of the data with models can help place constraints on the maximum magnification factor for the strong gravitational lensing of protospheroidals. Both the [@cai13] and [@bethermin12] models assume no upper bound on the strong-lensing magnification factor (i.e., sources are treated as point-like). According to [@bonato14] (see Figure 2), applying a maximum magnification factor of $20-30$ (corresponding to a physical source size of slightly less than $\simeq3\,$kpc [@lapi12]) provides good agreement with SPT source counts from M13 for dusty sources with no IRAS cross-matches. The SMG list reported in the current work is more conservative in defining SMG candidates relative to M13, restricting not only by IRAS cross-match but also by measured redshift and by removing “dipping" sources. The number counts for the SMG list in the current work are consistent with the dusty counts with no IRAS cross-matches in M13 at 220GHz and are slightly lower at 150GHz. A maximum magnification factor of $20-30$ is in good agreement with follow-up observations with ALMA of a set of roughly 50 lensed SMGs from the 2500-square-degree SPT-SZ survey, where the measured median magnification factor is $\sim 6$, with a maximum of 30 [@spilker16]. The source physical sizes implied by the @lapi12 model for a maximum magnification factor of 30 are also consistent with follow-up observations, where the measured size distribution of strongly lensed sources is found to be consistent with that of unlensed sources [@spilker16].
Comparison with previous SPT-SZ point source results
----------------------------------------------------
As the third and final compact source data release from SPT-SZ, the full 2530 square-degree analysis covers 3.3 times the area of the previous release, M13, including 1759 square degrees of previously unanalyzed data. Due to alterations of the source-finding pipeline and differences in mask areas to aid full survey coverage, the five sky fields covered by the previous two analyses, V10 and M13, have been reanalyzed in the current analysis. For the fields originally observed in 2008 and then reobserved in 2010 and 2011 to add 95GHz coverage and greater depth at 150 and 220GHz, we have incorporated the previously unanalyzed 2010 and 2011 data. Although the large majority of the sources extracted in the five reanalyzed fields are consistent with past reported catalogs, there are slight differences in the sources extracted between M13, V10, and the current analysis. These differences are due mainly to the lower 150GHz noise in the reobserved fields, slight differences in the masks used for each field, and the slightly different treatment of map noise used for source detection, as discussed in Section \[sec:clean\]. We confirm that the sources that differ generally have signal-to-noise values very close to the detection threshold or are located at the edges of the survey area, which are affected by slight differences in masking.
Figure \[fig:SPT\_compare\_total\_counts\] shows a comparison of total number counts between the three generations of compact source catalog releases with data from SPT-SZ: V10, M13, and the current work. As expected, the increase in sky area with generally comparable noise level also reduces the error bars on the calculated number counts and adds a few flux bins of counts that were either upper limits or missing from M13. The error bars on the uncut version of the counts, which are most directly comparable to the M13 counts, reduce by roughly 50%, consistent with the amount of area increase.
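The quoted reduction follows from simple Poisson scaling (an illustrative arithmetic check, not part of the pipeline): the fractional error on the counts scales as the inverse square root of the number of sources, and hence of the survey area at fixed depth.

```python
import math

# Illustrative check: Poisson errors on number counts scale as 1/sqrt(N),
# and N scales with survey area at roughly fixed noise level.
area_ratio = 3.3                        # current analysis area vs. M13
error_ratio = 1.0 / math.sqrt(area_ratio)
reduction = 1.0 - error_ratio           # fractional shrinkage of error bars

print(f"errors shrink to {error_ratio:.2f}x M13, a {100 * reduction:.0f}% reduction")
```

A factor of 3.3 in area thus predicts error bars at roughly 55% of their M13 size, i.e. a reduction of order 50%, consistent with the text above.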
The smaller error bars allow the SPT number counts to place tighter constraints on the parameters of galaxy evolution models. For the synchrotron counts, the @tucci11 model agrees with the number counts more clearly than the older @dezotti05 model, as shown in Figure \[fig:sync\_counts\], whereas the M13 counts showed a weaker preference between models, particularly at 95 and 150GHz. Since the main difference between the two models is the inclusion of a spectral break for FSRQs, the tighter constraint shows a clear preference for the presence of a spectral break, although the counts are not constraining enough to provide much further information on the models, such as the break frequency, which might further constrain the AGN core size. Similarly, the smaller error bars for the dusty counts also provide clearer constraints, particularly on parameters governing lensed sources, such as the expected lensing magnification factor, as discussed in the previous section.
Comparison of SPT-SZ with other mm-wavelength surveys
-----------------------------------------------------
The upper panels of Figure \[fig:SPT\_compare\_Planck\_ACT\] provide a comparison between the total counts from SPT-SZ from the current work (with only stars removed) with number counts from the all-sky survey from Planck at 100, 143, and 217GHz [@planck12-7], and number counts from the Atacama Cosmology Telescope (ACT, @gralla19). The total counts from ACT presented here have been drawn from the sum of synchrotron- and dust-dominated sub-populations, as presented in Table 5 of @gralla19. The lower panels of Figure \[fig:SPT\_compare\_Planck\_ACT\] compare synchrotron-dominated counts from SPT-SZ (again with only stars removed) with number counts from the Planck Multi-frequency Catalog of Non-thermal Sources [@planck18-54], excluding Galactic-plane sources using the GAL070 mask, and with counts for AGN sources from ACT [@gralla19]. The total and synchrotron-dominated number counts from SPT-SZ are broadly consistent with both Planck and ACT across the full range of fluxes overlapping between the different surveys.
As an overview of the different source populations present in the SPT-SZ catalog, Figure \[fig:smg\_comparison\_counts\] shows cumulative number counts at 220GHz, including total, synchrotron-, and dust-dominated source populations. Contributions to dusty counts are considered in three components: low-$z$ LIRGs, SPT SMGs, and unlensed high-$z$ sources, with empirical counts from the SCUBA-2 instrument [@geach17], scaled from 850$\mu$m to 220GHz using an SED for Arp 220 shifted to $z\,\sim\,2.5$. SPT number counts for low-$z$ dusty sources have been calculated using sources that are trimmed by the “$z$ cut." We know the SPT flux measurements for these sources will be biased low, and this is one of the causes of the slight discrepancy with the @bethermin11 model counts for these sources. The counts for this population are shown here primarily for illustration. The @negrello07 lensed-only model agrees well with cumulative number counts calculated from our SPT SMG list.
Conclusion {#sec:conclusion}
==========
We have presented a catalog of 4845 compact sources extracted from 2530 square degrees of the SPT-SZ survey with fluxes measured in three bands centered at 95, 150, and 220GHz. Sources in the catalog are detected in at least one band with a significance of 4.5$\sigma$ or higher. Because the raw source fluxes will be subject to a positive bias due to the underlying source number counts being a steep function of flux, we apply a Bayesian deboosting method to report corrected fluxes and spectral indices. The deboosting method is also used to separate sources into synchrotron-dominated and dust-dominated populations using their deboosted spectral index between 150 and 220GHz. Synchrotron sources (with flat or falling spectral indices between 150 and 220GHz) are consistent with AGN, and dust-dominated sources (with rising spectral indices between 150 and 220GHz) are consistent with dusty star-forming galaxies, including a population of high-redshift dusty galaxies, which we refer to as SMGs. In the relatively bright flux ranges and moderate field depths probed by the SPT-SZ survey, we expect the high-redshift dusty sources we observe will be dominated by sources subject to strong gravitational lensing. We further categorize this population by developing an “SPT SMG list," which contains 506 sources. With the largest currently available sky area with arcmin resolution, the SPT-SZ survey provides a powerful lever arm for finding the brightest and rarest high-redshift dusty sources. Sources in the SMG list with previous follow-up observations have measured redshifts up to $z = 6.9$, demonstrating that the survey has detected sources as far back as the epoch of reionization [@strandet17]. Similarly, protoclusters discovered in the 2500-square-degree area with redshifts $\gtrsim 4$ demonstrate that this dataset can probe high-redshift structure formation [@miller18].
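The deboosting idea can be illustrated with a minimal numerical sketch (this is not the catalog's actual pipeline; the count slope $\gamma$, noise level, and fluxes below are all assumed for illustration). Because the counts fall steeply with flux, a source observed at flux $S_{\rm obs}$ is more likely a fainter source scattered upward by noise than a brighter one scattered down, so the posterior over intrinsic flux peaks below the observed value.

```python
import numpy as np

# Illustrative sketch of flux deboosting: a power-law count prior
# dN/dS ~ S^-gamma combined with a Gaussian measurement likelihood.
# gamma, sigma, and s_obs are assumed values, not from the catalog.
gamma, sigma = 3.0, 1.0            # count slope; noise (arbitrary flux units)
s_obs = 10.0                       # observed (boosted) flux
s = np.linspace(5.0, 15.0, 2001)   # grid of candidate intrinsic fluxes

prior = s ** (-gamma)
likelihood = np.exp(-0.5 * ((s_obs - s) / sigma) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum()       # normalize on the (uniform) grid

s_deboosted = (s * posterior).sum()  # posterior-mean intrinsic flux
print(f"deboosted flux {s_deboosted:.2f} < observed {s_obs}")
```

The posterior mean lands below the observed flux, which is the sense of the correction applied to the reported catalog fluxes.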
Number counts for total, synchrotron-, and dust-dominated populations have also been calculated. The number counts probe flux ranges from $8.7\times 10^{-3} - 1.3$Jy at 95GHz, $4.4\times 10^{-3} - 1.3$Jy at 150GHz, and $1.4\times 10^{-2} - 1.3$Jy at 220GHz. We find that our synchrotron population number counts, as well as catalog spectral indices, are consistent with models from [@tucci11], in which FSRQs dominate the brighter fluxes probed by the catalog but exhibit a spectral break at higher observing frequencies, resulting in moderately steep spectral indices for FSRQs measured in the SPT-SZ bands.
As expected, number counts for the dusty source population are subdominant to those for the synchrotron population at all fluxes we probe, except in the lowest flux range at 220GHz. Focusing on number counts for the SMG list, we find that, of all the models we compare with, the @negrello07 lensed model agrees best with our measured counts at 150 and 220GHz; the lensed components of the @cai13 and @bethermin12 models generally overestimate our measured counts.
As the third and final compact source catalog release from the SPT-SZ survey, our catalog and number counts are consistent with prior-released results from the SPT-SZ survey: @vieira10 and @mocanu13. The current work features a roughly 50% reduction in the error bars on our measured number counts, consistent with the increase in sky area utilized. We find our measured number counts are also consistent with all-sky results from Planck [@planck12-7; @planck18-54] and recently-released results from ACT [@gralla19].
Looking to the future, SPT-3G, the camera most recently installed on the South Pole Telescope, is currently undertaking observations that will push the flux detection threshold for compact sources down by roughly an order of magnitude relative to SPT-SZ over a 1500 square-degree area, probing populations of both lensed and unlensed dusty sources and overlapping with the flux ranges probed by the original deep but narrow surveys from instruments such as SCUBA. Anticipated noise levels in the completed SPT-3G survey are roughly 140, 130, and 760$\mu$Jy at 95, 150, and 220GHz. This survey will allow unprecedented study of the highest-redshift dusty, star-forming galaxies and provide powerful constraints on the development and evolution of extragalactic radio source populations. By pushing the source detection threshold into the population of unlensed dusty sources, SPT-3G will enable consistency checks with source counts measured in small field areas observed with ALMA (e.g. @hatsukade18 [@zavala18]), while its large area will further enable the discovery of extremely rare sources, such as protoclusters, advancing our understanding of structure formation in the early universe.
The SPT is supported by the NSF through grant PLR-1248097, with partial support through PHY-1125897, the Kavli Foundation and the Gordon and Betty Moore Foundation grant GBMF 947. D.P.M. and J.D.V. acknowledge support from the US NSF under grants AST-1715213 and AST-1716127. J.D.V. acknowledges support from an A. P. Sloan Foundation Fellowship. C. Reichardt acknowledges the support from an Australian Research Council's Future Fellowship (FT150100074). B.B. is supported by the Fermi Research Alliance LLC under contract no. De-AC02-07CH11359 with the U.S. Department of Energy. Work at Argonne National Lab is supported by UChicago Argonne LLC, Operator of Argonne National Laboratory (Argonne). Argonne, a U.S. Department of Energy Office of Science Laboratory, is operated under contract no. DE-AC02-06CH11357. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC-0015640. Thanks to Dr. Graham Harper for conversations regarding cool stars and their emission mechanisms, and thanks to Diego Herranz Muñoz from the Planck Collaboration and Dr. Megan Gralla from the ACT collaboration for supplying number counts.
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This research uses observations from the AKARI mission, a JAXA project with the participation of ESA. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This publication makes use of data products from the [*Herschel Observatory*]{}; [*Herschel*]{} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. This research makes use of data products from the Planck Observatory (http://www.esa.int/Planck), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada.
[^1]: https://pole.uchicago.edu/public/data/everett20/
---
abstract: 'Asteroseismology is a powerful method for determining fundamental properties of stars. We report the first application to a metal-poor object, namely the subgiant star . We measured precise velocities from two sites, allowing us to detect oscillations and infer a large frequency separation of $\Delta \nu = 24.25 \pm 0.25 \mu$Hz. Combining this value with the location of the star in the H-R diagram and comparing with standard evolutionary models, we were able to place constraints on the stellar parameters. In particular, our results indicate that has a low mass ($0.85\pm0.04\,M_\sun$) and is at least 9Gyr old.'
author:
- 'Timothy R. Bedding, R. Paul Butler, Fabien Carrier, Francois Bouchy, Brendon J. Brewer, Patrick Eggenberger, Frank Grundahl, Hans Kjeldsen, Chris McCarthy, Tine Bj[ø]{}rn Nielsen, Alon Retter, Christopher G. Tinney'
title: 'Solar-like oscillations in the metal-poor subgiant $\nu$ Indi: constraining the mass and age using asteroseismology'
---
Introduction
============
Asteroseismology is a powerful method for determining fundamental properties of stars. This is because oscillation frequencies give strong constraints on the internal structure that are independent of classical observations. Observations of solar-like oscillations are accumulating rapidly, and measurements have recently been reported for several main-sequence and subgiant stars, including [@B+C2002; @BKB2004], [@C+B2003; @KBB2005], [@MLA2004b; @CEDAl2005], [@BBS2005], HD 49933 [@MBC2005] and [@KBB2003; @CEB2005; @GKR2005]. All of these stars have solar metallicity or greater. Asteroseismology is particularly useful for constraining the evolutionary status of stars with low metallicity [e.g. @DACDiM2005]. Here, we report the first oscillation measurements of a metal-poor star.
We have observed the subgiant (HR 8515; HD 211998; HIP 110618), whose iron abundance is only 3% of solar. We adopt the following stellar parameters: $V=5.28$, ${\mbox{$T_{\rm eff}$}}= 5300 \pm 100$K, $L=6.2\,L_\sun$ and $\mbox{[Fe/H]}=-1.4\pm0.1$ [@NHS97; @GSC2000]. Note that the Bright Star Catalogue [@Hoffleit] incorrectly lists this star as a binary with spectral types A3V:+F9V. In fact, is a single star with spectral type G0 [@L+McW86].
Velocity observations and power spectra
=======================================
We observed in August 2002 from two sites. At Siding Spring Observatory in Australia we used UCLES (University College London Echelle Spectrograph) with the 3.9-m Anglo-Australian Telescope (AAT). An iodine absorption cell was used to provide a stable wavelength reference, with the same setup that we have previously used with this spectrograph [@BBK2004]. At the European Southern Observatory on La Silla in Chile we used the CORALIE spectrograph with the 1.2-m Swiss telescope. A thorium emission lamp was used to provide a stable wavelength reference, and the velocities were processed using the method described by @BPQ2001.
With UCLES we obtained 680 spectra of , with typical exposure times of 300s (but sometimes as short as 200s in the best conditions) and a dead time between exposures of 61s. With CORALIE we obtained 521 spectra, with typical exposure times of 360s and a dead time between exposures of 128s.
The resulting velocities, with nightly means subtracted, are shown in Fig. \[fig.series\]. As can be seen, the weather was good in Australia but poor in Chile (we were allocated seven nights with UCLES and 14 with CORALIE).
Most of the scatter in the velocities, especially for UCLES, is due to oscillations. Figure \[fig.best\] shows a close-up of the first night. Oscillations with a period of about 50min and variable amplitude (due to beating between modes) are visible here and throughout the time series. We also see good agreement between the two instruments, within measurement uncertainties, during the overlap (although we should note that the difference in absolute velocities is not known and has been adjusted to give the best fit).
Our analysis of these velocities follows the method that we developed for [@BBK2004] and [@KBB2005]. We have used the measurement uncertainties, $\sigma_i$, as weights in calculating the power spectrum (according to $w_i = 1/\sigma_i^2$), but modified some of the weights to account for a small fraction of bad data points. In this case, 4 data points from UCLES and 24 from CORALIE needed to be down-weighted. The power spectra of the individual time series and of the combination are shown in Fig. \[fig.power\]. The differences between the three panels can be attributed to the effects of beating between modes.
The low metallicity of means that the lines in its spectrum are fewer and weaker than for stars of solar metallicity. This, together with its relative faintness ($V=5.3$), means that the Doppler precision for is poorer than for other stars observed with the same instruments, such as and B. We measured the average noise in the amplitude spectrum of at frequencies above the stellar signal (0.6–1mHz) to be 17.3 for UCLES, 27.2 for CORALIE and 14.9 for the combined data. Using these values, we calculated the noise per minute of observing time to be 5.9 for UCLES and 9.5 for CORALIE, where the difference is due to a combination of factors, primarily the telescope aperture but also including spectrograph design, sky conditions and observing duty cycle.
The inset in each panel of Fig. \[fig.power\] shows the spectral window (the response to a single pure sinusoid). For the single-site data we see sidelobes at $\pm 11.6$ that are very strong (51% in power for UCLES and 57% for CORALIE). These are due to daytime gaps in the observing window. When the data are combined, the sidelobes are drastically reduced (to 16% in power) and are also slightly shifted, occurring at $\pm 10.8$. In the cases of and B, we generated power spectra in which the weights were adjusted on a night-by-night basis in order to minimize the sidelobes. We have not done that for because the sidelobes for the two-site data are already quite low.
Large frequency separation {#sec.Dnu}
==========================
Mode frequencies for low-degree p-mode oscillations are approximated reasonably well by a regular series of peaks: $$\nu_{n,l} = {\mbox{$\Delta \nu$}}{} (n + {{\textstyle\frac{1}{2}}}l + \epsilon) - l(l+1) D_0.
\label{eq.asymptotic}$$ Here $n$ (the radial order) and $l$ (the angular degree) are integers, ${\mbox{$\Delta \nu$}}{}$ (the large separation) depends on the sound travel time across the whole star, $D_0$ is sensitive to the sound speed near the core and $\epsilon$ is sensitive to the surface layers. See @ChD2004 for a recent review of the theory of solar-like oscillations.
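A worked instance of the asymptotic relation, using the large separation measured below; the values of $\epsilon$ and $D_0$ here are purely illustrative, as they are not determined in this paper.

```python
# Mode frequencies from the asymptotic relation
# nu_{n,l} = dnu*(n + l/2 + eps) - l*(l+1)*D0.
dnu, eps, D0 = 24.25, 1.3, 0.35   # µHz; eps and D0 are assumed, not measured

def nu(n, l):
    """Frequency of the mode with radial order n and degree l."""
    return dnu * (n + 0.5 * l + eps) - l * (l + 1) * D0

print(nu(13, 0) - nu(12, 0))  # consecutive radial orders: exactly dnu
print(nu(13, 1) - nu(13, 0))  # l=1 modes interleave, offset by dnu/2 - 2*D0
```

The first difference illustrates why a comb of peaks spaced by $\Delta\nu$ (and by $\Delta\nu/2$ when degrees interleave) appears in the power spectrum.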
The large separation, ${\mbox{$\Delta \nu$}}{}$, is proportional to the square root of the mean density of the star. Scaling from the Sun, for which ${\mbox{$\Delta \nu$}}{} =
135\,{\mbox{$\mu$Hz}}$, we expect to have a large separation of about 25. In order to search for a regular series of peaks from which to measure the large separation, we calculated the autocorrelation function of the power spectrum. The result is shown in Fig. \[fig.acorr\] and the peak at 10.8 (dashed line) is the main sidelobe caused by the daily gaps (see above). We interpret the peak at 24.5 as due to the large separation, with smaller peaks at $\pm 10.8\,{\mbox{$\mu$Hz}}$ from this being due to daily aliases (the peak at 13 also coincides with half the large separation). The peak at 5 is not easily explained by the regular p-mode structure and may reflect departures of a few modes from the asymptotic relation in equation \[eq.asymptotic\].
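The autocorrelation approach can be sketched on a synthetic spectrum: a comb of peaks spaced by $\Delta\nu$ produces an autocorrelation peak at the comb spacing (all numbers below are illustrative; real spectra also contain noise and daily-alias sidelobes, which are absent here).

```python
import numpy as np

# Recover the large separation by autocorrelating a synthetic power
# spectrum: a comb of Gaussian peaks spaced by dnu (illustrative only).
dnu = 24.25                        # µHz
f = np.arange(0.0, 600.0, 0.25)    # frequency grid, µHz
power = np.zeros_like(f)
for k in range(8, 18):             # a handful of radial modes
    power += np.exp(-0.5 * ((f - k * dnu) / 0.5) ** 2)

# non-negative lags of the autocorrelation
ac = np.correlate(power, power, mode="full")[len(f) - 1:]
lags = f - f[0]
# exclude the zero-lag peak; search a window around the expected spacing
mask = (lags > 15) & (lags < 35)
best = lags[mask][np.argmax(ac[mask])]
print(f"autocorrelation peak at {best:.2f} µHz")
```

The strongest non-zero-lag peak falls at the mode spacing, which is how the 24.5 peak in Fig. \[fig.acorr\] is identified with $\Delta\nu$.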
As a check of this result, we also measured the large separation directly from the velocity time series using Bayesian methods (see @Gre2005 for a good introduction to Bayesian analysis). We modelled the time series as a sum of sinusoids, where the number of sinusoids, together with their amplitudes, phases and frequencies, were treated as unknowns. An advantage of this approach is that the amplitudes and phases can be integrated out of the problem at the beginning [@Bre88], so that we only need to fit the frequencies and their number. To determine the large separation, we chose the prior distribution for the frequencies to be periodic, with period $\frac{1}{2} \Delta \nu$ (considered unknown). We fixed the central frequency of this periodic comb to be the highest peak in the power spectrum (313.14). We then calculated the posterior distribution for $\Delta \nu$, which is shown in the lower panel of Fig. \[fig.bayes\]. There is a single strong peak with a value of $\Delta \nu = 24.25 \pm 0.25 \mu$Hz, in agreement with the peak of the autocorrelation. More details of this method, which is similar to the approach used by @Bre2003 and appears to be very promising for this type of analysis, will be presented separately (B. Brewer et al., in prep.).
Constraints on the stellar parameters
=====================================
What can we learn about from our measurement of ? Figure \[fig.hr\] shows the location of the star in the H-R diagram, together with some theoretical evolutionary tracks. The box, which is the same in each panel, shows the observed position of from classical measurements. The value for ($5300 \pm 100$K) is the mean of published photometric estimates [@NHS97; @GSC2000], where we note the large uncertainty in the effective temperature scale for metal-poor stars. The luminosity is based on the Hipparcos parallax ($34.6 \pm 0.6$mas), with bolometric corrections from @AAMR99. Note that the bolometric correction is a function of effective temperature, hence the slope of the box. The diagonal dashed lines, which are also the same in each panel, are loci of constant radius, calculated from $L\propto R^2{\mbox{$T_{\rm eff}$}}^4$. We can immediately see that a measurement of the radius from interferometry would be valuable in constraining the location of the star in the H-R diagram, as has already been shown for other stars [@KTS2003; @PTG2003; @KTM2004; @TKP2005].
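The loci of constant radius can be checked with a one-line calculation (illustrative arithmetic; the solar effective temperature of 5772 K is an assumed reference value, and the stellar values are those derived later in Table \[tab.params\]):

```python
# Radius from L ∝ R^2 * Teff^4, scaled to the Sun.
L, Teff = 6.21, 5291.0             # luminosity (L_sun) and Teff (K)
Tsun = 5772.0                      # assumed solar Teff
R = L ** 0.5 * (Tsun / Teff) ** 2  # radius in R_sun
print(f"R = {R:.2f} R_sun")
```

This reproduces the tabulated radius of $2.97\,R_\sun$, and shows why an interferometric radius would pin down the position in the H-R diagram so effectively.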
The curved lines in Fig. \[fig.hr\] are evolutionary tracks for a range of masses, using model calculations similar to those by @ChD82. We used a metallicity of $Z=0.001$ (with hydrogen and helium mass fractions of $X=0.75$ and $Y=0.249$, respectively) and the three panels differ in the adopted value of the mixing-length parameter ($\alpha = 1.7,$ 1.8 and 1.9), where the solar value is $\alpha_\sun=1.83$. The relatively rapid evolution in this subgiant phase means each track can be described by a single age, as shown in the figure. Finally, the diagonal lines are contours of constant ${\mbox{$\Delta \nu$}}{}$. We calculated these from the evolutionary models by scaling from the Sun (since ${\mbox{$\Delta \nu$}}{}$ is proportional to the square root of the mean density).
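The density scaling behind the ${\mbox{$\Delta \nu$}}$ contours is easily made concrete (a sanity check using the solar value of 135\,$\mu$Hz quoted earlier and the mass and radius derived in Table \[tab.params\]):

```python
import math

# dnu scales as sqrt(mean density): dnu = dnu_sun * sqrt((M/M_sun)/(R/R_sun)^3).
M, R = 0.847, 2.97                    # best-fit mass (M_sun) and radius (R_sun)
dnu = 135.0 * math.sqrt(M / R ** 3)   # predicted large separation, µHz
print(f"predicted dnu = {dnu:.1f} µHz")
```

The predicted value of $\simeq24.3\,\mu$Hz matches the measured $24.25 \pm 0.25\,\mu$Hz, illustrating how the contours tie the seismic observable to the model tracks.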
We see that our measurement of ${\mbox{$\Delta \nu$}}{}$ significantly constrains the parameters of . This is quantified in Fig. \[fig.params\], in which the thin error bars show the range of each parameter based on classical measurements alone ($L$ and ${\mbox{$T_{\rm eff}$}}$), while the thick bars show the situation after we have added the constraint provided by our measurement of ${\mbox{$\Delta \nu$}}$. Including this constraint reduces the uncertainty in both effective temperature and radius, for a given value of $\alpha$, by a factor of four.
What can we say about the mass of ? Even with no constraints from seismology, the requirement that be younger than the universe (13.7Gyr; @SVP2003) sets a lower limit of $0.81\,M_\sun$. A star of lower mass would not have had time to evolve this far. Note that this limit, which we derived from the tracks in Fig. \[fig.params\], is essentially independent of the mixing length. This is because the value of $\alpha$ has little effect on the fusion rate in the core, and hence on the time for a star of given mass to leave the main sequence and enter the subgiant phase. Meanwhile, an upper limit on the mass is obtained from our measurement of ${\mbox{$\Delta \nu$}}$, provided we also set a lower limit on $\alpha$ (third panel of Fig. \[fig.params\]). Adopting a plausible lower limit of $\alpha\ge1.7$ gives an upper limit for the mass of $0.89\,M_\sun$.
We can also set interesting limits on the age of . From the bottom panel of Fig. \[fig.params\], and again setting $\alpha>1.7$ as a plausible limit, we see that the age must be at least 9Gyr. This confirms that is, indeed, very old and must have been formed very early in the history of the Galaxy. The final results of our analysis are summarized in Table \[tab.params\], where we list our best estimates for the parameters of , assuming that $Z=0.001$ and $\alpha =
1.8\pm0.1$.
With the data currently available, what constraints can we set on the mixing length? The dashed line at an age of 13.7Gyr in Fig. \[fig.params\] indicates the upper limit set by age of the universe. This indicates that the mixing length cannot be greater than 2.1. We expect much stronger constraints on $\alpha$, and also on the other stellar parameters, to come from the individual oscillation frequencies. The extraction of these frequencies and a comparison with theoretical models is deferred to a future paper (F. Carrier et al., in prep.). An accurate measurement of the radius using interferometry would also be extremely valuable.
Oscillation amplitude {#sec.amp}
=====================
The amplitudes of individual modes are affected by the stochastic nature of the excitation and by the (unknown) value of the mode lifetime. To measure the oscillation amplitude of in a way that is independent of these effects, we have followed the method introduced by @KBB2005. In brief, this involves the following steps: (i) smoothing the power spectrum heavily to produce a single hump of excess power that is insensitive to the fact that the oscillation spectrum has discrete peaks; (ii) converting to power density by multiplying by the effective length of the observing run (4.42d, which we calculated from the area under the spectral window in power); (iii) fitting and subtracting the background noise; and (iv) multiplying by $\Delta\nu/3.0$ and taking the square root, in order to convert to amplitude per oscillation mode. For more details, see @KBB2005.
The result is shown in Fig. \[fig.ampsmooth\]. The peak amplitude per mode is 0.95, which occurs at a frequency of $\nu_{\rm max}=320$ (period 52min). This value of $\nu_{\rm max}$ is consistent with that expected from scaling the acoustic cutoff frequency of the Sun [@BGN91; @K+B95]. The observed peak amplitude is 4.6 times the solar value, when the latter is measured using stellar techniques [@KBB2005], which is substantially less than the value of 7.3 expected from the $L/M$ scaling proposed by @K+B95, but is in good agreement with the $(L/M)^{0.7}$ scaling suggested for main-sequence stars by @SGA2005. A measurement of the mode lifetimes in would be particularly useful.
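The two amplitude scalings can be compared directly using the luminosity and mass from Table \[tab.params\] (a simple arithmetic check, not part of the analysis itself):

```python
# Amplitude scalings relative to the Sun: v_osc ∝ L/M (K+B95)
# versus v_osc ∝ (L/M)^0.7 (@SGA2005). Values from Table 1.
L, M = 6.21, 0.847                  # L_sun, M_sun
lm = L / M
print(f"L/M scaling: {lm:.1f}x solar")          # the 7.3 quoted above
print(f"(L/M)^0.7 scaling: {lm ** 0.7:.1f}x")   # closer to the observed 4.6
```

The shallower exponent predicts an enhancement of about 4, much nearer the observed factor of 4.6 than the linear $L/M$ prediction of 7.3.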
Conclusions
===========
We have observed solar-like oscillations in the metal-poor subgiant star and measured the large frequency separation. We used this, together with the location of the star in the H-R diagram and standard evolutionary models, to place constraints on the stellar parameters. Our results, summarized in Table \[tab.params\], confirm that has a low mass and a large age and represent the first application of asteroseismology to a metal-poor star. Further constraints on the parameters, particularly the mixing length, should come from comparing individual oscillation frequencies with theoretical models.
We are extremely grateful to Conny Aerts for agreeing a time swap on CORALIE that allowed us to observe at the optimum time of year. We also thank Geoff Marcy for useful advice and enthusiastic support. This work was supported financially by the Australian Research Council, the Swiss National Science Foundation, the Danish Natural Science Research Council, the Danish National Research Foundation through its establishment of the Theoretical Astrophysics Center, and by a research associate fellowship from Penn State University. We further acknowledge support by NSF grant AST-9988087 (RPB) and by SUN Microsystems.
, A., Arribas, S., & Martínez-Roger, C., 1999, A&AS, 140, 261.
Bedding, T. R., Kjeldsen, H., Butler, R. P., McCarthy, C., Marcy, G. W., O’Toole, S. J., Tinney, C. G., & Wright, J. T., 2004, ApJ, 614, 380.
, F., Bazot, M., Santos, N. C., Vauclair, S., & Sosnowska, D., 2005, A&A, 440, 609.
Bouchy, F., & Carrier, F., 2002, A&A, 390, 205.
, F., Pepe, F., & Queloz, D., 2001, A&A, 374, 733.
Bretthorst, G. L. , volume 48 of [*Lecture Notes in Statistics*]{}. Springer-Verlag: New York, 1988.
, G. L., 2003, In Williams, C. J., editor, [*AIP Conf. Proc. 659: Bayesian Inference and Maximum Entropy Methods in Science and Engineering*]{}, page 3. Available from http://bayes.wustl.edu/.
Brown, T. M., Gilliland, R. L., Noyes, R. W., & Ramsey, L. W., 1991, ApJ, 368, 599.
Butler, R. P., Bedding, T. R., Kjeldsen, H., McCarthy, C., O’Toole, S. J., Tinney, C. G., Marcy, G. W., & Wright, J. T., 2004, ApJ, 600, L75.
Carrier, F., & Bourban, G., 2003, A&A, 406, L23.
, F., Eggenberger, P., & Bouchy, F., 2005, A&A, 434, 1085.
, F., Eggenberger, P., D’Alessandro, A., & Weber, L., 2005, NewA, 10, 315.
Christensen-Dalsgaard, J., 1982, MNRAS, 199, 735.
Christensen-Dalsgaard, J., 2004, Sol. Phys., 220, 137.
D’Antona, F., Cardini, D., Di Mauro, M. P., Maceroni, C., Mazzitelli, I., & Montalbán, J., 2005, MNRAS, 363, 847.
, R. G., Sneden, C., Carretta, E., & Bragaglia, A., 2000, A&A, 354, 169.
Gregory, P. C. . Cambridge University Press, 2005.
, D. B., Kallinger, T., Reegen, P., Weiss, W. W., Matthews, J. M., Kuschnig, R., Marchenko, S., Moffat, A. F. J., Rucinski, S. M., Sasselov, D., & Walker, G. A. H., 2005, ApJ, in press (arXiv:astro-ph/0503695).
Hoffleit, D. . Yale University Observatory, New Haven, 1982.
Kervella, P., Thévenin, F., Morel, P., Berthomieu, G., Bordé, P., & Provost, J., 2004, A&A, 413, 251.
Kervella, P., Thévenin, F., Ségransan, D., Berthomieu, G., Lopez, B., Morel, P., & Provost, J., 2003, A&A, 404, 1087.
Kjeldsen, H., & Bedding, T. R., 1995, A&A, 293, 87.
Kjeldsen, H., Bedding, T. R., Baldry, I. K., Bruntt, H., Butler, R. P., Fischer, D. A., Frandsen, S., Gates, E. L., Grundahl, F., Lang, K., Marcy, G. W., Misch, A., & Vogt, S. S., 2003, AJ, 126, 1483.
Kjeldsen, H., Bedding, T. R., Butler, R. P., Christensen-Dalsgaard, J., Kiss, L., McCarthy, C., Marcy, G. W., Tinney, C. G., & Wright, J. T., 2005, ApJ, 635, 1281.
, D. L., & McWilliam, A., 1986, ApJ, 304, 436.
, M., Lebrun, J. C., Appourchaux, T., & Schmitt, J., 2004, In [*SOHO 14/GONG 2004 Workshop, Helio- and Asteroseismology: Towards a Golden Future*]{}, ESA SP-559, page 563. arXiv:astro-ph/0409126.
, B., Bouchy, F., Catala, C., Michel, E., Samadi, R., Thévenin, F., Eggenberger, P., Sosnowska, D., Moutou, C., & Baglin, A., 2005, A&A, 431, L13.
, P. E., Hoeg, E., & Schuster, W. J., 1997, In Battrick, B., editor, [*Hipparcos Venice ’97 Symposium*]{}, page 225. ESA SP-402.
Pijpers, F. P., Teixeira, T. C., Garcia, P. J., Cunha, M. S., Monteiro, M. J. P. F. G., & Christensen-Dalsgaard, J., 2003, A&A, 406, L15.
, R., Goupil, M.-J., Alecian, E., Baudin, F., Georgobiani, D., Trampedach, R., Stein, R., & Nordlund, Å., 2005, JA&A, 26, 171.
, D. N., Verde, L., Peiris, H. V., Komatsu, E., Nolta, M. R., Bennett, C. L., Halpern, M., Hinshaw, G., Jarosik, N., Kogut, A., Limon, M., Meyer, S. S., Page, L., Tucker, G. S., Weiland, J. L., Wollack, E., & Wright, E. L., 2003, ApJS, 148, 175.
, F., Kervella, P., Pichon, B., Morel, P., di Folco, E., & Lebreton, Y., 2005, A&A, 436, 253.
| Quantity | Value | Relative uncertainty |
|---|---|---|
| $\Delta\nu~({\mbox{$\mu$Hz}})$ | $24.25 \pm 0.25$ | 1.0% |
| $T_\mathrm{eff}$ (K) | $5291 \pm 34$ | 0.64% |
| $M~(M_\sun)$ | $0.847 \pm 0.043$ | 5.1% |
| Age (Gyr) | $11.4 \pm 2.4$ | 21% |
| $L~(L_\sun)$ | $6.21 \pm 0.23$ | 3.7% |
| $R~(R_\sun)$ | $2.97 \pm 0.05$ | 1.7% |
| $\log (g/\mbox{cm\,s}^{-2})$ | $3.421 \pm 0.016$ | 3.8% in $g$ |
| angular diameter (mas) | $0.956 \pm 0.023$ | 2.4% |
---
abstract: |
We describe an approach based on topology optimization that enables automatic discovery of wavelength-scale photonic structures for achieving high-efficiency second-harmonic generation (SHG). A key distinction from previous formulations and designs, which seek to maximize Purcell factors at individual frequencies, is that our method not only achieves frequency matching (across an entire octave) and large radiative lifetimes, but also optimizes the equally important nonlinear-coupling figure of merit $\bar{\beta}$, which involves a complicated spatial overlap integral between the modes. We apply this method to the particular problem of optimizing micropost and grating-slab cavities (one-dimensional multilayered structures) and demonstrate that a variety of material platforms can support modes with the requisite frequencies, large lifetimes $Q > 10^4$, small modal volumes $\sim (\lambda/n)^3$, and extremely large $\bar{\beta} \gtrsim 10^{-2}$, leading to orders-of-magnitude enhancements in SHG efficiency compared to state-of-the-art photonic designs. Such a giant $\bar{\beta}$ alleviates the need for ultra-narrow linewidths and thus paves the way for wavelength-scale SHG devices with faster operating timescales and higher tolerance to fabrication imperfections.
author:
- Zin Lin$^1$
- Xiangdong Liang$^2$
- Marko Loncar$^1$
- 'Steven G. Johnson$^2$'
- 'Alejandro W. Rodriguez$^{3}$'
bibliography:
- 'opt.bib'
title: 'Cavity-enhanced second harmonic generation via nonlinear-overlap optimization'
---
[*Introduction.—*]{} Nonlinear optical processes mediated by second-order ($\chi^{(2)}$) nonlinearities play a crucial role in many photonic applications, including ultra-short pulse shaping [@DeLong94; @Arbore97], spectroscopy [@Heinz82], generation of novel frequencies and states of light [@Kuo06; @Vodopyanov06; @Krischek10], and quantum information processing [@Vaziri02; @Tanzilli05; @Zaske12]. Because nonlinearities are generally weak in bulk media, a well-known approach for lowering the power requirements of devices is to enhance nonlinear interactions by employing optical resonators that confine light for long times (dimensionless lifetimes $Q$) in small volumes $V$ [@JoannopoulosJo08-book; @Soljacic02:bistable; @Soljacic03:OL; @Yanik03; @Yanik04; @Bravo-AbadRo10; @Rivoire09; @Pernice12; @Bi12; @Buckley14]. Microcavity resonators designed for on-chip, infrared applications offer some of the smallest confinement factors available, but their implementation in practical devices has been largely hampered by the difficult task of identifying wavelength-scale ($V \sim \lambda^3$) structures supporting long-lived, resonant modes at widely separated wavelengths and satisfying rigid frequency-matching and mode-overlap constraints [@Rodriguez07:OE; @Bravo-AbadRo10].
In this letter, we extend a recently proposed formulation for the scalable topology optimization of microcavities, in which every pixel of the geometry is a degree of freedom, to the problem of designing wavelength-scale photonic structures for second-harmonic generation (SHG). We apply this approach to obtain novel micropost and grating microcavity designs supporting strongly coupled fundamental and harmonic modes at infrared and visible wavelengths with relatively large lifetimes $Q_{1},Q_2 > 10^4$. In contrast to recently proposed designs based on known, linear cavity structures hand-tailored to maximize the Purcell factors or minimize the mode volumes of individual resonances, e.g. ring resonators [@Almeida04; @Xu05; @Levy11; @Pernice12] and nanobeam cavities [@Deotare09; @Buckley14], our designs ensure frequency matching and small mode volumes while also simultaneously maximizing the SHG enhancement factor $Q_1^2 Q_2 |\bar{\beta}|^2$, yielding orders-of-magnitude improvements in the nonlinear coupling $\bar{\beta}$ described by (\[eq:beta\]) and determined by a special overlap integral between the modes. These particular optimizations of multilayer stacks illustrate the benefits of our formalism in an approachable and experimentally feasible setting, laying the groundwork for future topology optimization of 2D/3D slab structures that are sure to yield even further improvements.
Most experimental demonstrations of SHG in chip-based photonic systems [@Rivoire09; @Furst10; @Levy11; @Pernice12; @Diziain13; @Wang14; @Kuo14] operate in the so-called small-signal regime of weak nonlinearities, where the lack of pump depletion leads to the well-known quadratic scaling of harmonic output power with incident power [@Boyd92]. In situations involving all-resonant conversion, where confinement and long interaction times lead to strong nonlinearities and non-negligible down-conversion [@Soljacic03:OL; @Rodriguez07:OE], the maximum achievable conversion efficiency $\left( \eta \equiv {P_2^\text{out} \over P_1^\text{in}} \right)$, $$\eta^\text{max} = \left(1-\frac{Q_1}{Q_1^\text{rad}} \right) \left(1-\frac{Q_2}{Q_2^\text{rad}}\right),$$ occurs at a critical input power [@Rodriguez07:OE], $$P_1^\text{crit} = \frac{2 \omega_1 \epsilon_0 \lambda_1^3}{\big(\chi^{(2)}_\text{eff}\big)^{2} |\bar{\beta}|^2 Q_1^2 Q_2} \left(1-\frac{Q_1}{Q_1^\text{rad}} \right)^{-1},$$ where $\chi^{(2)}_\text{eff}$ is the effective nonlinear susceptibility of the medium \[SM\] and $Q = \left(\frac{1}{Q^\text{rad}} + \frac{1}{Q^\text{c}}\right)^{-1}$ is the dimensionless quality factor (ignoring material absorption), incorporating radiative decay $\frac{1}{Q^\text{rad}}$ and coupling to an input/output channel $\frac{1}{Q^\text{c}}$. The dimensionless coupling coefficient $\bar{\beta}$ is given by a complicated spatial overlap integral involving the fundamental and harmonic modes \[SM\], $$\begin{aligned}
\bar{\beta} = \frac{\int d\mathbf{r}~ \bar{\epsilon}(\mathbf{r})\, E_2^* E_1^2}{\left( \int d\mathbf{r}~ \epsilon_1 |\mathbf{E}_1|^2 \right) \sqrt{\int d\mathbf{r}~ \epsilon_2 |\mathbf{E}_2|^2}} \,\sqrt{\lambda_1^3},
\label{eq:beta}\end{aligned}$$ where $\bar{\epsilon}(\mathbf{r}) = 1$ inside the nonlinear medium and zero elsewhere. Based on the above expressions, one can define the following dimensionless figures of merit: $$\begin{aligned}
\label{eq:F1}
\mathrm{FOM}_1 &= Q_1^2 Q_2 |\bar{\beta}|^2 \left( 1 - {Q_1 \over Q_1^\text{rad}} \right)^2 \left( 1 - {Q_2 \over Q_2^\text{rad}} \right), \\
\mathrm{FOM}_2 &= \left(Q^\text{rad}_1\right)^2 Q^\text{rad}_2 |\bar{\beta}|^2,
\label{eq:F2}\end{aligned}$$ where $\mathrm{FOM}_1$ represents the efficiency per power, often quoted in the so-called undepleted regime of low-power conversion [@Boyd92], and $\mathrm{FOM}_2$ represents the limits to power enhancement. Note that for a given radiative loss rate, $\mathrm{FOM}_1$ is maximized when the modes are critically coupled, $Q=\frac{Q^\text{rad}}{2}$, with the absolute maximum occurring in the absence of radiative losses, $Q^\text{rad} \to \infty$, or equivalently, when $\mathrm{FOM}_2$ is maximized. From either FOM, it is clear that, apart from frequency matching and lifetime engineering, the design of optimal SHG cavities rests on achieving a large nonlinear coupling $\bar{\beta}$.
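These quantities are straightforward to evaluate numerically. The following Python sketch implements a discretized scalar 1D stand-in for the overlap integral (\[eq:beta\]), together with $\eta^\text{max}$ and the two FOMs; the Gaussian mode profiles, permittivities, and grid below are illustrative assumptions, not the paper's optimized designs.

```python
import numpy as np

def beta_bar(E1, E2, eps1, eps2, eps_bar, dx, lam1):
    # Discrete scalar 1D stand-in for the overlap integral defining beta-bar:
    # the numerator couples E2* with E1^2 inside the nonlinear medium
    # (eps_bar = 1 there, 0 elsewhere), the denominator carries the mode
    # normalizations, and sqrt(lam1^3) renders the result dimensionless.
    num = np.sum(eps_bar * np.conj(E2) * E1**2) * dx
    den = (np.sum(eps1 * np.abs(E1)**2) * dx) \
        * np.sqrt(np.sum(eps2 * np.abs(E2)**2) * dx)
    return num / den * np.sqrt(lam1**3)

def eta_max(Q1, Q2, Q1rad, Q2rad):
    # Maximum all-resonant conversion efficiency.
    return (1 - Q1 / Q1rad) * (1 - Q2 / Q2rad)

def fom1(Q1, Q2, Q1rad, Q2rad, beta):
    # Efficiency-per-power figure of merit, Eq. (F1).
    return Q1**2 * Q2 * abs(beta)**2 * (1 - Q1/Q1rad)**2 * (1 - Q2/Q2rad)

def fom2(Q1rad, Q2rad, beta):
    # Power-enhancement figure of merit, Eq. (F2).
    return Q1rad**2 * Q2rad * abs(beta)**2

# Illustrative Gaussian mode profiles on a 1D grid (assumed, not optimized):
x = np.linspace(-3.0, 3.0, 601)
dx = x[1] - x[0]
E1 = np.exp(-x**2)
E2 = np.exp(-2 * x**2)
eps_bar = (np.abs(x) < 1.0).astype(float)  # nonlinear medium occupies |x| < 1
b = beta_bar(E1, E2, 12.0 * np.ones_like(x), 12.0 * np.ones_like(x),
             eps_bar, dx, lam1=1.5)
```

At critical coupling, $Q = Q^\text{rad}/2$, `eta_max` returns 25% and `fom1` reduces to $Q_1^2 Q_2 |\bar{\beta}|^2/8$, consistent with the discussion above.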
[*Optimal designs.—*]{} Table I characterizes the FOMs of some of our newly discovered microcavity designs, involving simple micropost and grating structures of various $\chi^{(2)}$ materials, including GaAs, AlGaAs and LiNbO$_3$. The low-index material layers of the microposts consist of alumina (Al$_2$O$_3$), while the gratings are embedded in either silica or air (see the supplement for detailed specifications). Note that in addition to their performance characteristics, these structures also differ significantly from those obtained by conventional methods: traditional designs often involve rings [@Pernice12; @Bi12], periodic structures, or tapered defects [@Deotare09], which tend to ignore or sacrifice $\bar{\beta}$ in favor of increased lifetimes, and for which it is also difficult to obtain widely separated modes [@Buckley14]. [Figure \[fig:fig1\]]{} illustrates one of the optimized structures—a doubly resonant rectangular micropost cavity with alternating AlGaAs/Al$_2$O$_3$ layers—along with the spatial profiles of the fundamental and harmonic modes. It differs from conventional microposts in that it does not consist of periodic bi-layers, yet it supports two localized modes at precisely $\lambda_1= 1.5~\mu$m and $\lambda_2=\lambda_1/2$. In addition to having large $Q^\text{rad} \gtrsim 10^5$ and small $V \sim (\lambda_1/n)^3$, the structure exhibits an ultra-large nonlinear coupling $\bar{\beta} \approx 0.018$, almost an order of magnitude larger than the best overlap found in the literature (see [Fig. \[fig:fig2\]]{}). From an experimental point of view, the micropost system is of particular interest because it can be realized by a combination of existing fabrication techniques such as molecular beam epitaxy, atomic layer deposition, selective oxidation and electron-beam lithography [@Vahala04]. Additionally, the micropost cavity can be naturally integrated with quantum dots and quantum wells for cavity QED applications [@Lermer12].
Similar to other wavelength-scale structures, the operational bandwidths of these structures are limited by radiative losses in the lateral direction [@JoannopoulosJo08-book; @Vahala04; @YZhang09], but their ultra-large overlap factors more than compensate for the increased bandwidth, which ultimately may prove beneficial in experiments subject to fabrication imperfections and for large-bandwidth applications [@DeLong94; @Arbore97; @Scalora97; @Krischek10].
![(a) Schematic illustration of a topology-optimized micropost cavity with alternating AlGaAs/Al$_2$O$_3$ layers and dimensions $h_x \times h_y \times h_z = 8.4 \times 3.5 \times 0.84~(\lambda_1^3)$. For detailed structural specifications, please refer to the supplement. (b) $x$–$z$ cross-section of the $E_y$ components of two localized modes at wavelengths $\lambda_1 = 1.5~\mu$m and $\lambda_2 = \lambda_1/2$, exhibiting a large spatial overlap $\sim E_2^* E^2_1$.[]{data-label="fig:fig1"}](./fig1.eps){width="0.5\columnwidth"}
To understand the mechanism of improvement in $\bar{\beta}$, it is instructive to consider the spatial profiles of interacting modes. [Figure \[fig:fig1\]]{}b plots the $y$-components of the electric fields in the $xz$-plane against the background structure. Since $\bar{\beta}$ is a *net* total of positive and negative contributions coming from the local overlap factor $E_1^2 E_2$ in the presence of nonlinearity, not all local contributions are useful for SHG conversion. Most notably, one observes that the positions of negative anti-nodes of $E_2$ (light red regions) coincide with either the nodes of $E_1$ or alumina layers (where $\chi^{(2)}=0$), minimizing negative contributions to the integrated overlap. In other words, improvements in $\bar{\beta}$ do not arise purely due to tight modal confinement but also from the constructive overlap of the modes enabled by the strategic positioning of field extrema along the structure.
Based on the tabulated FOMs (Table I), the efficiencies and power requirements of realistic devices can be directly calculated. For example, assuming $\chi^{(2)}_\text{eff}\left(\text{AlGaAs}\right) \sim
100~\mathrm{pm/V}$ [@Bi12], the AlGaAs/Al$_2$O$_3$ micropost cavity ([Fig. \[fig:fig1\]]{}) yields an efficiency of ${P_{2,\mathrm{out}} \over P_1^2}=2.7\times 10^4/\mathrm{W}$ in the undepleted regime when the modes are critically coupled, $Q=\frac{Q^\text{rad}}{2}$. For larger operational bandwidths, e.g. $Q_1=5000$ and $Q_2=1000$, we find that ${P_{2,\mathrm{out}}
\over P_1^2}=16/\mathrm{W}$. When the system is in the depleted regime and critically coupled, we find that a maximum efficiency of 25% can be achieved at $P^\mathrm{crit}_1 \approx 0.15~\mathrm{mW}$ whereas assuming smaller $Q_1=5000$ and $Q_2=1000$, a maximum efficiency of $96\%$ can be achieved at $P^\mathrm{crit}_1 \approx
0.96~\mathrm{W}$.
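The critical-power expression above can be checked with a quick back-of-the-envelope computation. In the Python sketch below, $\bar{\beta}$, $\lambda_1$ and $\chi^{(2)}_\text{eff}$ follow the text, while the radiative lifetimes are assumptions (the text only quotes $Q^\text{rad} \gtrsim 10^5$ for the micropost):

```python
import math

# Critical input power P1^crit for maximum-efficiency SHG, from the
# expression given earlier. The Q values below are illustrative assumptions.
c = 2.998e8            # speed of light, m/s
eps0 = 8.854e-12       # vacuum permittivity, F/m
lam1 = 1.5e-6          # fundamental wavelength, m
omega1 = 2 * math.pi * c / lam1
chi2_eff = 100e-12     # effective chi^(2) of AlGaAs, m/V
beta = 0.018           # nonlinear overlap of the optimized micropost
Q1, Q2 = 5000.0, 1000.0
Q1rad = Q2rad = 1e5    # assumed radiative lifetimes

P1_crit = (2 * omega1 * eps0 * lam1**3
           / (chi2_eff**2 * beta**2 * Q1**2 * Q2)
           / (1 - Q1 / Q1rad))
eta_max = (1 - Q1 / Q1rad) * (1 - Q2 / Q2rad)
print(f"P1_crit = {P1_crit:.2f} W, eta_max = {eta_max:.1%}")
```

With these assumed $Q^\text{rad}$ values the result lands at roughly 1 W and $\approx 94\%$, the same order as the 0.96 W and 96% quoted above; the exact numbers depend on the true radiative lifetimes of the two modes.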
![Scatter plot of $ \left( Q_1^\text{rad} \right)^2 Q_2^\text{rad}$ versus nonlinear overlap $|\bar{\beta}|^2$ for representative geometries, including WGMRs [@Furst10], micro- and nano-ring resonators [@Pernice12; @Bi12], photonic crystal slab and nanobeam cavities [@Rivoire09; @Buckley14]. A trend towards decreasing lifetimes and increasing overlaps is readily observed as devices become increasingly smaller. Meanwhile, it remains an open problem to discover structures with high $Q$s, small $V$s and large $|\bar{\beta}|$ (shaded region). \[fig:fig2\]](./fig2.eps){width="0.5\columnwidth"}
[*Comparison against previous designs.—*]{} Table II summarizes various performance characteristics, including the aforementioned FOMs, for a handful of previously studied geometries with length-scales spanning from $\mathrm{mm}$ to a few wavelengths (microns). [Fig. \[fig:fig2\]]{} demonstrates a trend among these geometries towards increasing $\bar{\beta}$ and decreasing $Q^\text{rad}$ as device sizes decrease. Maximizing $\bar{\beta}$ in millimeter-to-centimeter scale bulky media translates to the well-known problem of phase-matching the momenta or propagation constants of the modes [@Boyd92]. In this category, traditional WGMRs offer a viable platform for achieving high-efficiency conversion [@Furst10]; however, their ultra-large lifetimes (critically dependent upon material-specific polishing techniques), large sizes (millimeter length-scales), and extremely weak nonlinear coupling (large mode volumes) render them far from optimal as chip-scale devices. Although miniature WGMRs such as microdisk and microring resonators [@Pernice12; @Diziain13; @Kuo14] show increased promise due to their smaller mode volumes, improvements in $\bar{\beta}$ are still hardly sufficient for achieving high efficiencies at low powers. Ultra-compact nanophotonic resonators such as the recently proposed nanorings [@Bi12], 2D photonic crystal defects [@Rivoire09], and nanobeam cavities [@Buckley14], possess even smaller mode volumes but prove challenging to design due to the difficulty of finding well-confined modes at both the fundamental and second harmonic frequencies [@Rivoire09]. Even when two such resonances can be found by fine-tuning a *limited* set of geometric parameters [@Bi12; @Buckley14], the frequency-matching constraint invariably leads to sub-optimal spatial overlaps, which severely limit the maximal achievable $\bar{\beta}$.
![Work flow of the design process. The degrees of freedom in our problem consist of all the pixels along the $x$-direction in a 2D computational domain. Starting from vacuum or a uniform slab, the optimization seeks to develop an optimal pattern of material layers (with a fixed thickness in the $z$-direction) that can tightly confine light at the desired frequencies while ensuring maximal spatial overlap between the confined modes. The resulting 2D cross-sectional pattern is then truncated at a finite width in the $y$-direction to produce a fully three-dimensional micropost or grating cavity, which is then simulated by FDTD methods to extract the resonant frequencies, quality factors, eigenmodes and corresponding modal overlaps. We emphasize that we performed only a one-dimensional optimization (within a 2D computational domain) because of limited computational resources; consequently, our design space is severely constrained. \[fig:fig3\] ](./fig3.eps){width="0.5\columnwidth"}
Comparing Tables I and II, one observes that for a comparable $Q$, the topology-optimized structures perform significantly better in both $\mathrm{FOM}_1$ and $\mathrm{FOM}_2$ than any conventional geometry, with the exception of the LN gratings, whose low $Q^\mathrm{rad}$ leads to a slightly lower $\mathrm{FOM}_2$. Generally, the optimized microposts and gratings perform better by virtue of a large and robust $\bar{\beta}$ which, notably, is significantly larger than that of existing designs. Here, we have not included in our comparison those structures which achieve non-negligible SHG by special poling techniques and/or quasi-phase-matching methods [@Boyd92; @Miller97; @Kuo14], though their performance is still sub-optimal compared to the topology-optimized designs. Such methods are highly material-dependent and are thus not readily applicable to other material platforms; instead, ours is a purely geometrical *topology optimization* technique applicable to any material system.
| Structure | $\lambda~(\mathrm{\mu m})$ | $\bar{\beta}$ | $\mathrm{FOM}_1$ | $\mathrm{FOM}_2$ |
|---|---|---|---|---|
| LN WGM resonator [@Furst10] | $1.064 - 0.532$ | - | $\sim 10^{10}$ | - |
| AlN microring [@Pernice12] | $1.55 - 0.775$ | - | $2.6 \times 10^5$ | - |
| GaP PhC slab [@Rivoire09] $^*$ | $1.485 - 0.742$ | - | $\approx 2 \times 10^5$ | - |
| | $1.7 - 0.91^\dagger$ | 0.00021 | 820 | $1.8\times 10^8$ |
| | $1.8 - 0.91$ | 0.00012 | 227 | $2.1 \times 10^5$ |
| AlGaAs nanoring [@Bi12] | $1.55 - 0.775$ | 0.004 | $10^5$ | $1.6 \times 10^9$ |
[*Optimization formulation.—*]{} Optimization techniques have been regularly employed by the photonic device community, primarily for fine-tuning the characteristics of a pre-determined geometry; the majority of these techniques involve probabilistic Monte-Carlo algorithms such as particle swarms, simulated annealing and genetic algorithms [@Kim04; @Behnam10; @Minkov14]. While some of these *gradient-free* methods have been used to uncover a few unexpected results out of a limited number of degrees of freedom (DOF) [@Gondarenko06], *gradient-based* topology optimization methods efficiently handle a far larger design space, typically considering every pixel or voxel as a DOF in an extensive 2D or 3D computational domain, giving rise to novel topologies and geometries that might have been difficult to conceive from conventional intuition alone. The early applications of topology optimization were primarily focused on mechanical problems [@Bendose04], and only recently have they been expanded to consider photonic systems, though largely limited to *linear* device designs [@Gondarenko06; @Jensen11; @Liang13; @Liu13; @Piggott14; @MenLee14]. In what follows, we describe a technique for gradient-based topology optimization of nonlinear wavelength-scale frequency converters.
Recent work [@Liang13] considered topology optimization of the cavity Purcell factor by exploiting the concept of local density of states (LDOS). In particular, this formulation exploited the equivalence between the LDOS and the power radiated by a *point* dipole in order to reduce Purcell-factor maximization to a series of small scattering calculations. Defining the objective $\mathrm{max}_{\bar{\epsilon}}~f\left(\bar{\epsilon}(\mathbf{r});\omega\right) = -\operatorname{Re}[\int d\mathbf{r}\,\mathbf{J}^* \cdot \mathbf{E}]$, it follows that $\mathbf{E}$ can be found by solving the frequency-domain Maxwell equations ${\cal M} \mathbf{E} = i \omega \mathbf{J}$, where ${\cal M}$ is the Maxwell operator \[SM\] and $\mathbf{J}=\delta\left(\mathbf{r}- \mathbf{r}_0\right)\mathbf{\hat{e}}_j$. The maximization is then performed over a finely discretized space defined by the *normalized* dielectric function, $\{ \bar{\epsilon}_\alpha = \bar{\epsilon}(\mathbf{r}_\alpha),~\alpha \leftrightarrow (i \Delta x, j \Delta y, k \Delta z) \}$. A key realization in [@Liang13] is that instead of maximizing the LDOS at a single discrete frequency $\omega$, a better-posed problem is to maximize the frequency-averaged $f$ in the vicinity of $\omega$, denoted by $\langle f \rangle = \int d\omega'~{\cal W}(\omega';\omega,\Gamma) f(\omega')$, where ${\cal W}$ is a weight function defined over some specified bandwidth $\Gamma$. Using contour-integration techniques, the frequency integral can be conveniently replaced by a single evaluation of $f$ at the complex frequency $\omega + i\Gamma$ [@Liang13]. For a fixed $\Gamma$, the frequency average effectively shifts the algorithm in favor of minimizing $V$ over maximizing $Q$; the latter can be enhanced over the course of the optimization by gradually winding down the averaging bandwidth $\Gamma$ [@Liang13]. A major merit of the frequency-averaged LDOS formulation is that it features a mathematically well-posed objective, as opposed to a direct maximization of the cavity Purcell factor $Q/V$, allowing for rapid convergence of the optimization algorithm to an extremal solution.
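The complex-frequency trick is easy to demonstrate on a toy problem. The Python sketch below evaluates the frequency-averaged LDOS objective of a 1D scalar Helmholtz model with a single linear solve at $\omega + i\Gamma$; the grid, slab permittivity, bandwidth, and source position are arbitrary illustrative assumptions, not the structures considered in the paper.

```python
import numpy as np

def ldos_avg(eps, dx, omega, Gamma, src):
    """Frequency-averaged LDOS objective f = -Re[J* . E] for a unit point
    dipole, obtained from a single solve at the complex frequency
    omega + 1j*Gamma (1D scalar Helmholtz toy model with Dirichlet walls;
    the imaginary frequency shift doubles as effective absorption, so no
    PML is needed for this illustration)."""
    n = len(eps)
    w = omega + 1j * Gamma
    # Finite-difference operator M = -d^2/dx^2 - eps(x) * w^2
    M = np.diag(2.0 / dx**2 - eps * w**2).astype(complex)
    M -= np.diag(np.ones(n - 1) / dx**2, 1)
    M -= np.diag(np.ones(n - 1) / dx**2, -1)
    J = np.zeros(n, dtype=complex)
    J[src] = 1.0 / dx                    # discrete delta-function dipole
    E = np.linalg.solve(M, 1j * w * J)   # solve M E = i w J
    return -np.real(np.vdot(J, E)) * dx  # f = -Re[ sum J* E dx ]

# Illustration (all values assumed): an eps = 12 slab in vacuum,
# with the dipole placed at the slab center.
n, dx = 400, 0.01
eps = np.ones(n)
eps[150:250] = 12.0
f = ldos_avg(eps, dx, omega=2 * np.pi, Gamma=0.5, src=200)
```

Because the $i\Gamma$ shift makes the discrete operator lossy, the radiated power $f$ is positive by energy conservation; in a full optimization this scalar (and its adjoint gradient) would be the quantity fed to the update algorithm.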
As suggested in [@Liang13], a simple extension of the optimization problem from single- to multi-mode cavities maximizes the minimum of a collection of LDOS at different frequencies. Applying such an approach to the problem of SHG, the optimization objective becomes $\mathrm{max}_{\bar{\epsilon}_\alpha} \mathrm{min}\big[\mathrm{LDOS}(\omega_1),\mathrm{LDOS}(2\omega_1)\big]$, which would require solving two *separate* scattering problems, ${\cal M}_1\mathbf{E}_1=\mathbf{J}_1$ and ${\cal M}_2\mathbf{E}_2=\mathbf{J}_2$, for the two distinct *point* sources $\mathbf{J}_1$, $\mathbf{J}_2$ at $\omega_1$ and $\omega_2 = 2 \omega_1$, respectively. However, as discussed before, rather than maximizing the Purcell factor at individual resonances, the key to realizing optimal SHG is to maximize the overlap integral $\bar{\beta}$ between $\mathbf{E}_1$ and $\mathbf{E}_2$, described by (\[eq:beta\]). Here, we suggest an elegant way to incorporate $\bar{\beta}$ by *coupling* the two scattering problems. In particular, we consider not a point dipole but an extended source $\mathbf{J}_2 \sim \mathbf{E}_1^2$ at $\omega_2$, and optimize a single *combined* radiated power $f = -\operatorname{Re}\big[ \int d\mathbf{r}~\mathbf{J}_2^* \cdot \mathbf{E}_2 \big]$ instead of two otherwise *unrelated* LDOS. The advantage of this approach is that $f$ yields precisely the $\bar{\beta}$ parameter along with any resonant enhancement factors $(\sim Q/V)$ in $\mathbf{E}_1$ and $\mathbf{E}_2$. Intuitively, $\mathbf{J}_2$ can be thought of as a nonlinear polarization current induced by $\mathbf{E}_1$ in the presence of the second-order susceptibility tensor $\pmb{\chi}^{(2)}$, and in particular is given by $J_{2i} = \bar{\epsilon}(\mathbf{r}) \sum_{jk} \chi^{(2)}_{ijk} E_{1j} E_{1k}$, where the indices $i,j,k$ run over the Cartesian coordinates. In general, $\chi^{(2)}_{ijk}$ mixes polarizations, and hence $f$ is a sum of contributions from various polarization combinations. In what follows, for simplicity, we focus on the simplest case in which $\mathbf{E}_1$ and $\mathbf{E}_2$ have the same polarization, corresponding to a diagonal $\bm{\chi}^{(2)}$ tensor determined by a scalar $\chi^{(2)}_\text{eff}$. Such an arrangement can be obtained, for example, by proper alignment of the crystal orientation axes \[SM\]. With this simplification, the generalization of the linear topology-optimization problem to the case of SHG becomes: $$\begin{aligned}
\text{max}_{\bar{\epsilon}_\alpha} ~ \langle f(\bar{\epsilon}_\alpha;\omega_1) \rangle &= - \mathrm{Re}\Big[ \Big\langle \int \mathbf{J}_2^* \cdot \mathbf{E}_2 ~d\mathbf{r} \Big\rangle \Big], \label{eq:obj} \\
{\cal M}_1 \mathbf{E}_1 &= i \omega_1 \mathbf{J}_1, \notag \\
{\cal M}_2 \mathbf{E}_2 &= i \omega_2 \mathbf{J}_2,~\omega_2 = 2 \omega_1, \notag\end{aligned}$$ where $$\begin{aligned}
\mathbf{J}_1 &= \delta(\mathbf{r}_\alpha-\mathbf{r}_0)\, \hat{\mathbf{e}}_j,~j \in \{x,y,z\}, \notag \\
\mathbf{J}_2 &= \bar{\epsilon}(\mathbf{r}_\alpha)\, E_{1j}^2\, \hat{\mathbf{e}}_j, \notag \\
{\cal M}_l &= \nabla \times \frac{1}{\mu}\,\nabla \times -~\epsilon_l(\mathbf{r}_\alpha)\, \omega_l^2,~l=1,2, \notag \\
\epsilon_l(\mathbf{r}_\alpha) &= \epsilon_\text{m} + \bar{\epsilon}_\alpha \left( \epsilon_{\text{d}l} - \epsilon_\text{m} \right), ~\bar{\epsilon}_\alpha \in [0,1], \notag\end{aligned}$$ and where $\epsilon_{\text{d}l}$ denotes the dielectric contrast of the nonlinear medium and $\epsilon_\text{m}$ that of the surrounding linear medium. Note that $\bar{\epsilon}_\alpha$ is allowed to vary continuously between 0 and 1, with intermediate values penalized by so-called threshold projection filters [@Wang11]. The scattering framework makes it straightforward to calculate the derivatives of $f$ (and of possible functional constraints) with respect to $\bar{\epsilon}_\alpha$ via the adjoint variable method [@Bendose04; @Jensen11; @Liang13]. The optimization problem can then be solved by any of the many powerful algorithms for convex, conservative, separable approximations, such as the well-known method of moving asymptotes [@Svanberg02].
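For reference, the threshold projection of [@Wang11] mentioned above has a simple closed form. A minimal sketch in Python follows; the sharpness and threshold values are illustrative defaults, not the values used in the paper.

```python
import numpy as np

def threshold_project(rho, sharpness=8.0, eta=0.5):
    """Smoothed Heaviside projection (Wang et al. 2011): pushes intermediate
    densities rho in [0, 1] toward 0 or 1 while staying differentiable.
    'sharpness' controls the steepness and 'eta' the threshold level."""
    num = np.tanh(sharpness * eta) + np.tanh(sharpness * (rho - eta))
    den = np.tanh(sharpness * eta) + np.tanh(sharpness * (1.0 - eta))
    return num / den

rho = np.linspace(0.0, 1.0, 11)
proj = threshold_project(rho)  # maps 0 -> 0, 1 -> 1, and steepens the middle
```

The map fixes the endpoints 0 and 1, is monotone, and its derivative is available analytically, so it composes cleanly with the adjoint gradients used in the optimization; ramping `sharpness` upward over the course of the optimization progressively eliminates intermediate densities.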
For computational convenience, the optimization is carried out using a 2D computational cell (in the $xz$-plane), though the resulting optimized structures are given a finite transverse extension $h_y$ (along the $y$ direction) to make realistic 3D devices (see [Fig. \[fig:fig3\]]{}). In principle, the wider the transverse dimension, the better the cavity quality factors since they are closer to their 2D limit which only consists of radiation loss in the $z$ direction; however, as $h_y$ increases, $\bar{\beta}$ decreases due to increasing mode volumes. In practice, we chose $h_y$ on the order of a few vacuum wavelengths so as not to greatly compromise either $Q$ or $\bar{\beta}$. We then analyze the 3D structures via rigorous FDTD simulations to determine the resonant lifetimes and modal overlaps. By virtue of our optimization scheme, we invariably find that frequency matching is satisfied to within the mode linewidths. We note that our optimization method seeks to maximize the *intrinsic* geometric parameters such as $Q^\text{rad}$ and $\bar{\beta}$ of an *un-loaded* cavity whereas the *loaded* cavity lifetime $Q$ depends on the choice of coupling mechanism, e.g. free-space, fiber, or waveguide coupling, and is therefore an external parameter that can be considered independently of the optimization. When evaluating the performance characteristics such as $ \mathrm{FOM}_1 $, we assume total operational lifetimes $Q_1=5000,~Q_2=1000$. In the optimized structures, it is interesting to note the appearance of deeply sub-wavelength features $\sim 1-5\%$ of ${\lambda_1 \over n}$, creating a kind of *metamaterial* in the optimization direction; these arise during the optimization process regardless of starting conditions due to the low-dimensionality of the problem. We find that these features are not easily removable as their absence greatly perturbs the quality factors and frequency matching.
[*Concluding remarks.—*]{} We have presented a formulation that allows for large-scale optimization of SHG. Applied to simple micropost and grating structures, our approach yields new classes of microcavities with performance metrics substantially stronger than those of existing designs. One potentially challenging aspect for fabrication in the case of gratings is the presence of deeply sub-wavelength features, which would require difficult high-aspect-ratio etching or growth techniques. This is not an issue for the micropost cavities, since each material layer can be grown/deposited to a nearly arbitrary thickness [@Vahala04; @Lermer12]. Another caveat about wavelength-scale cavities is that they are sensitive to structural perturbations near the cavity center, where most of the field resides. In our optimized structures, the figures of merit are robust to within $\sim \pm 20~\mathrm{nm}$ variations (approximately one computational pixel). One possible way to constrain the optimization to ensure some minimum spatial feature size and robustness is to exploit so-called regularization filters and worst-case optimization techniques [@Wang11], which we will consider in future work.
Our results provide just a glimpse of the kinds of designs that could be realized in structures with 2D and 3D variations, where we expect even better performance metrics due to the larger design space. In fact, preliminary application of our formulation to 2D systems reveals overlap factors and lifetimes at least one order of magnitude larger than those attained here. Apart from SHG optimization, our approach can be generalized to consider other nonlinear processes, even those involving more than two frequencies \[SM\]. Preliminary investigations reveal orders-of-magnitude improvements in the efficiency of third harmonic and sum-frequency generation processes. These findings, together with higher-dimensional applications, will be presented in future work.
|
Particle transport in a random velocity
field with Lagrangian statistics
Piero Olla
ISAC-CNR
Sezione di Lecce
73100 Lecce Italy
**Abstract**
The transport properties of a random velocity field with Kolmogorov spectrum and time correlations defined along Lagrangian trajectories are analyzed. The analysis is carried out in the limit of short correlation times, as a perturbation theory in the ratio, scale by scale, of the eddy decay time to the turn-over time. Various quantities, such as the Batchelor constant and the dimensionless constants entering the expressions for particle relative and self-diffusion, are given in terms of this ratio and of the Kolmogorov constant. Particular attention is paid to particles with finite inertia. The self-diffusion properties of a particle with Stokes time longer than the Kolmogorov time are determined, verifying on an analytical example the dimensional results of $[$nlin.CD/0103018$]$. Expressions for the fluid velocity Lagrangian correlations and correlation times along a solid particle trajectory are provided in several parameter regimes, including the infinite Stokes time limit corresponding to Eulerian correlations. The concentration fluctuation spectrum and the non-ergodic properties of a suspension of heavy particles in a turbulent flow, in the same regime, are also analyzed. The concentration spectrum is predicted to obey, above the scale of eddies with lifetime equal to the Stokes time, a power law with a universal $-4/3$ exponent, and to be otherwise independent of the nature of the turbulent flow. A preference of the solid particles to lie in the less energetic regions of the flow is observed.
PACS numbers: 47.27.Qb 47.55.Kf 02.50.Ey
**I. Introduction**
One of the differences between high Reynolds number turbulence and other examples of random fields with power law scaling is the Lagrangian nature of time correlations [@kraichnan64]. From the theoretical point of view, the need for a Lagrangian treatment of time correlations has been one of the main difficulties in the realization of statistical turbulent closures [@kraichnan65]. Because of this, many such theories assume from the start that the turbulence dynamics is equivalent to that of a random velocity field with identical energy spectrum but Eulerian time statistics, i.e. that the fluctuations decay without being transported by the larger vortices [@kraichnan71; @orszag77; @yakhot86]. Such an assumption does not work in the case of particle transport: both relative and self-diffusion are affected by the way in which time correlations are defined.
Concerning self-diffusion: in Kolmogorov turbulence, fluctuations at a scale $l$ within the inertial range have characteristic velocity $\sim l^\frac{1}{3}$ and decay time $\sim l^\frac{2}{3}$ along fluid trajectories. Hence, in a time $t$ the velocity of a fluid parcel will change by an amount of the order of that of a fluctuation with that lifetime, i.e. by $\sim t^\frac{1}{2}$. If the fluctuations were not advected by the flow, the fluid parcel would see a fluctuation at scale $l$ only for the time $\sim l$ it takes to cross it. The variation of the fluid parcel velocity in a time $t$ would therefore be $\sim t^\frac{1}{3}$.
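The two exponents can be recovered with a few lines of arithmetic; the following sketch (an illustration, not part of the original analysis) extracts them from the inertial range relations $u(l)\sim l^{1/3}$ and $\tau(l)\sim l^{2/3}$:

```python
import numpy as np

# Scale-l eddies: velocity u(l) ~ l**(1/3), lifetime tau(l) ~ l**(2/3).
t = np.logspace(-6.0, -2.0, 50)

# Lagrangian picture: in time t the parcel decorrelates over the eddy whose
# lifetime equals t, i.e. l = t**(3/2), so delta_u ~ l**(1/3) = t**(1/2).
du_lagrangian = (t ** 1.5) ** (1.0 / 3.0)

# Frozen (Eulerian) picture: the parcel crosses the eddy of size l in a time
# ~ l, so l = t and delta_u ~ t**(1/3).
du_frozen = t ** (1.0 / 3.0)

slopes = [np.polyfit(np.log(t), np.log(du), 1)[0]
          for du in (du_lagrangian, du_frozen)]
print([round(s, 3) for s in slopes])  # [0.5, 0.333]
```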
Concerning relative diffusion, this process is determined by vortices with the size of the fluid parcel separation at the given time. If these vortices were fixed in space, their effect on relative diffusion would be proportional to the crossing time by the fluid parcels, which is determined by the large scale properties of the flow. In other words, if time correlations were given in an Eulerian reference frame, the process of relative diffusion would not depend solely on the inter-particle distance and on the velocity difference, but also on the total velocity.
Given the difficulty in defining a velocity field with Lagrangian statistics, a successful strategy for the treatment of transport has been to neglect time correlations altogether, i.e. to consider a velocity field $\u$ such that $\langle u_\alpha (\x,t)u_\beta (0,0)\rangle = U_{\alpha\beta}(\x)\delta(t)$: the so called Kraichnan model [@kraichnan94]. In this model, Eulerian and Lagrangian time statistics trivially coincide, in what is the zero order of a perturbation theory in powers of the correlation time of the turbulence. It has been possible, in particular, to determine the anomalous scaling exponents of a passive scalar injected at large scales in the velocity field [@chertkov95; @gawedzki95; @frisch98; @gat98]. The origin of this success is that, although the time structure of the velocity correlation is lost, that of the relative displacement, whose geometrical properties determine the passive scalar correlations, is preserved [@gat98a; @pumir00; @celani01]. (For instance, particle pair separation still obeys Richardson diffusion.)
The question, at this point, is how to introduce finite correlation times in a perturbative manner, but preserving the Lagrangian nature of correlations. There are practical reasons to do this. One motivation, of course, is to be able to determine the time correlations of the particle velocities. Lagrangian dispersion models [@thomson87; @thomson90; @borgas94] are based on the adoption of prescriptions on the form of these time correlations; to be able to determine them directly from the statistical properties of the velocity field would be therefore of some interest.
It must be said that most of the prescriptions entering a Lagrangian dispersion model could be obtained, in practice, by dimensional reasoning or by experiments. In some cases, as in the presence of particles endowed with inertia, this turns out, however, to be a difficult task [@csanady63; @sawford91a]. It is very difficult, for instance, to make assumptions on the preference of solid particles to lie in certain regions of the flow rather than others [@wang92; @elgobashi92; @wang93]. Solid particle transport by a turbulent flow is an example of a situation in which careful treatment of the time dependent statistics of the velocity field is essential. It is precisely the interplay between the response time of the solid particle to the fluid, i.e. the Stokes time $\tau_S$, and the characteristic times of the turbulent flow [@olla01] which determines the dynamics, and this is clearly lost when all the turbulent times are sent to zero.
Recently, there has been strong theoretical interest in the problem of turbulence induced concentration fluctuations in a heavy particle suspension. In [@elperin00], the role of a finite correlation time of the turbulent field was recognized. In [@balkovsky01], the case of particles with Stokes time in the turbulent viscous range was analyzed, exploiting the fact that, in this case, the fluid velocity is spatially smooth on the scale of interest for the solid particle. In both [@elperin00] and [@balkovsky01], however, the inertial range structure of the turbulent flow was disregarded altogether. The approach developed here instead allows one to analyze the production of concentration fluctuations in any regime of Stokes time, in particular in the inertial range, where qualitatively different behaviors for the concentration fluctuation build-up are observed.
The purpose of this paper is to extend the Kraichnan model to short but finite correlation times, preserving, in a controlled perturbation theory, the Lagrangian structure of correlations, and providing several applications to the transport of particles with and without inertia. The analysis will be confined to a situation of two-dimensional, stationary, homogeneous and isotropic turbulence.
This paper is organized as follows. In section II, the equations determining the extension of the Kraichnan model will be illustrated and their main properties discussed. Section III will be devoted to the dynamics of passive tracers; the self-diffusion and relative diffusion of fluid parcels, including the expression for the constants involved, will be determined; the effect of finite diffusivity will be discussed and the Batchelor constant for a passive scalar injected at large scale in the flow will be calculated. In section IV, the transport properties of a heavy particle with Stokes time longer than the Kolmogorov time will be studied, focusing on the relation between the correlation time for the fluid velocity sampled by the particle, and its Lagrangian and Eulerian counterparts. Section V will be devoted to calculation of the concentration fluctuations arising from compressibility of the heavy particle flow. In section VI, the bias introduced by inertia in the sampling of fluid velocity by solid particles (non-ergodic effects) will be analyzed. Section VII will be devoted to conclusions.
**II. Finite correlation time extension of the Kraichnan model**
A two-dimensional random velocity field with Eulerian correlation times scaling like the eddy turn-over time of a real turbulent flow can be obtained very simply, writing appropriate Langevin equations for the Fourier components of the vorticity field: $$\partial_tq_\k(t)+\gamma_kq_\k(t)=h_k\xi_\k(t)
\eqno(2.1)$$ where $q_\k(t)$ is the (space) Fourier transform of the vorticity: $$q(\x,t)=\nabla_\perp\cdot\u(\x,t),
\eqno(2.2)$$ $[$for the generic vector ${\bf v}$, we indicate ${\bf v}_\perp=(-v_2,v_1)]$, and $\xi_\k(t)$ is the Fourier transform of a zero mean fully uncorrelated noise term of unitary amplitude: $$\langle\xi(\x,t)\xi(0,0)\rangle=\delta(\x)\delta(t)
\eqno(2.3)$$ The damping and forcing Kernels $\gamma_k$ and $h_k$ are chosen, for $k\ll\eta^{-1}$, with $\eta$ the Kolmogorov length of the flow, as: $$\gamma_k=\rho\Ckol^\frac{1}{2}\flux^\frac{1}{3}(k^2+k_0^2)^\frac{1}{3}
\eqno(2.4)$$ $$H_k=|h_k|^2=\frac{8\pi\rho\Ckol^\frac{3}{2}\flux k^2}{k^2+k_0^2}
\eqno(2.5)$$ while, for $k\eta>1$, some cut-off is imposed on the forcing amplitude $H_k$. In this way, the velocity spectrum $\U_\k(t)$, defined by $\langle\u_\k(t)\u_\p(0)\rangle=\U_\k(t)(2\pi)^2$ $\delta(\k+\p)$, will read, for $k\eta\ll 1$: $$\U_\k(t) =4\pi\Ckol\flux^\frac{2}{3}\frac{\k_\perp\k_\perp}{k^2}
\frac{\exp(-\gamma_k|t|)}{(k^2+k_0^2)^\frac{4}{3}}
\eqno(2.6)$$ where $\Ckol$ and $\flux$ play the role, respectively, of the Kolmogorov constant and the inertial range energy flux in a real turbulent field having this correlation spectrum. For $k_0\ll k\ll\eta^{-1}$, we thus have the energy spectrum: $E_k=\Ckol\flux^\frac{2}{3}k^{-\frac{5}{3}}$. Identifying $\gamma_k^{-1}$ with the decay time and $k^{-2}U_k^{-\frac{1}{2}}(0)$ with the turn-over time of an eddy at scale $k^{-1}$, we see that $\rho$ gives the ratio of the eddy turn-over and eddy decay time in the inertial range. The effect of sweep by the large scales, however, is not accounted for in this way.
The most natural way to impose Lagrangian correlations in the random velocity field is to include an advection term in Eqn. (2.1), which will take the form in real space: $$(\partial_t+\u(\x,t)\cdot\nabla)q(\x,t)+\int\d^2y\gamma(\x-\y)q(\y,t)=\int\d^2yh(\x-\y)\xi(\y,t)
\eqno(2.7)$$ This has the form of a vorticity equation in which the forcing and dissipation term, instead of being localized respectively at large and small scales, act over the whole of the inertial range, and this is reflected in their being nonlocal operators in real space. This is opposite to what happens in a real turbulent field, where energy balance is established between large scale forcing and small scale viscous dissipation, by means of the nonlinear cascade. A nonlinear cascade is still present because of the convection term, but it acts on the timescale of the eddy turn-over time, and, for large $\rho$, its effect is only a correction to that of the forcing and damping terms. Choosing $\rho$ large has therefore the consequence that convection acts merely as a large scale sweep.
Actually, Eqn. (2.7) looks a lot like the typical starting point of many turbulent closures [@kraichnan71; @orszag77; @yakhot86], in which $\gamma_k$ gives the turbulent response function (eddy viscosity of small scales) and $h_k$ the nonlinear forcing by the cascade. For instance, $\rho^{-2}$ coincides with the renormalized dimensionless coupling constant of the Renormalization Group (RNG) closure [@yakhot86; @olla91], and its smallness is there the basis for the establishment of a perturbation theory. Here, the philosophy is rather different: no parametrization of the turbulence cascade is sought, $\rho$ is chosen arbitrarily large, and the similarity with real turbulence is expected to be only kinematic. (Also, the separation of a Kolmogorov constant out of the energy flux $\bar\epsilon$ is arbitrary).
Things can be made a little more quantitative by introducing, scale by scale, the sweep time: $$T_k=k^{-1}\langle u^2\rangle^{-\frac{1}{2}}
\sim \Ckol^{-\frac{1}{2}}\flux^{-\frac{1}{3}}k_0^\frac{1}{3}k^{-1},
\eqno(2.8)$$ i.e. the time needed for a vortex of size $k^{-1}$ to pass in front of a fixed probe. We see that sweep is important for all scales for which $\gamma_kT_k<1$, i.e., from Eqns. (2.4) and (2.8), for $k>k_0\rho^3$. The Kraichnan model is recovered when sweep can be neglected in all of the inertial range, i.e. for $\rho>(\eta k_0)^{-\frac{1}{3}}$. This means basically that the zero correlation time limit is taken before the infinite Reynolds number limit $\eta k_0\to 0$. In this regime we have: $$\U_\k(t)
\simeq \frac{\k_\perp\k_\perp}{k^2}\frac{2\pi}{\rho}\Ckol^\frac{1}{2}\flux^\frac{1}{3}
k^{-\frac{10}{3}} \delta(t)
\eqno(2.9)$$ To understand what happens in the regime of dominant sweep, it is convenient to shift to Lagrangian coordinates. Introduce then the coordinate $\z(t|\x,t_0)$ of a fluid parcel which at time $t_0$ is at $\x$, and define the Lagrangian velocity: $$\u^\smalL(\x,t)=\u(\z(t|\x,0),t)
\eqno(2.10)$$ and analogous expressions for $q^\smalL(\x,t)$ and the other fields. After introducing the increase of trajectory separation in a time $t$: $\delta\z(t|\x,\y)=\z(t|\x,0)-\z(t|\y,0)-(\x-\y)$ Eqn. (2.7) becomes, in the new variables: $$\partial_tq^\smalL(\x,t)+\int\d^2y\gamma(\x-\y+\delta\z(t|\x,\y))q^\smalL(\y,t)=
\int\d^2y h(\x-\y+\delta\z(t|\x,\y))\xi(\y,t)
\eqno(2.11)$$ which must be coupled with the equation for $\delta\z$; inverting Eqn. (2.2): $$\partial_t\delta\z(t|\x,\y)=\frac{1}{2\pi}\int\d^2r[{\bf G}(\x,\r)-{\bf G}(\y,\r)]q^\smalL(\r,t)
\eqno(2.12)$$ with $${\bf G}(\x,\r)=
\frac{(\x-\r+\delta\z(t|\x,\r))_\perp}
{|\x-\r+\delta\z(t|\x,\r)|^2}
\eqno(2.13)$$ We see then that the natural expansion parameter of the theory is: $$\frac{\delta z(t|\x,\y)}{|\x-\y|}
\sim\frac{|\u(\x,0)-\u(\y,0)|}{|\x-\y|\gamma_{|\x-\y|^{-1}}}
\sim\rho^{-1}
\eqno(2.14)$$ i.e. the relative amount of particle separation increase in an eddy lifetime. The zero order of the theory, which is Gaussian and is described by Eqn. (2.6) after substituting $\u\to\u^\smalL$, corresponds to neglecting trajectory separation in an eddy lifetime, while keeping the uniform large scale sweep, implicit in the Lagrangian field $q^\smalL$.
Although the results which follow in the present paper are all obtained to the lowest order in the $\rho$ expansion [@note1], associated with neglecting all non-Gaussian effects in $\u$, a diagrammatic expansion of Eqns. (2.11-13) in terms of the fields $q^\smalL$, $\delta\z$ and their conjugates could be obtained by means of the Martin-Siggia-Rose formalism [@martin73]. This expansion would only be valid locally around $t=0$, since, at long times, trajectory separation becomes dominant. $[$To be consistent, this perturbation expansion should not receive contributions from correlations involving pairs of points in space-time such that $\gamma_{|\x-\x'|^{-1}}|t-t'|>\rho$, but this is expected to be true thanks to the exponential decay of the time correlations$]$.
The interaction terms in the perturbation expansion are obtained Taylor expanding the kernels $\gamma$, ${\bf G}$ and $h$ ($H$ working with the field action). The result for $\gamma$ is, for instance: $$\gamma(\x-\y+\delta\z(t|\x,\y))=\gamma(\x-\y)+\sum_{n=1}^\infty\lambda_{\gamma_n}
\gamma_n^{i_1...i_n}(\x-\y)\delta z_{i_1}(t|\x,\y)...\delta z_{i_n}(t|\x,\y)
\eqno(2.15)$$ with $\lambda_{\gamma_n}=1$ a coefficient which may scale when carrying out power counting. Similar coefficients $\lambda_{G_n}$ and $\lambda_{H_n}$ are introduced in the Taylor expansions for $G$ and $H$. The theory is thus characterized by an infinite number of interactions involving vertices which, to $\O(\rho^{-n})$, have up to $2+n$ legs.
To check for divergences at large $k$ in the perturbation expansion, we use power counting directly in Eqns. (2.11-13) [@zinn-justin]. Rescaling coordinates and times as: $$x\to\Lambda x\quad{\rm and}\quad t\to\Lambda^\frac{2}{3}t,
\eqno(2.16)$$ Eqns. (2.11-13) remain invariant in form provided we rescale the various fields and interactions $A=q^\smalL,\delta\z$, $\lambda_{\gamma_n},\lambda_{G_n},\lambda_{H_n}$ as $A\to \Lambda^{[A]}A$, with $$[q^\smalL]=-\frac{2}{3}\qquad [\delta\z]=1\qquad$$ $$[\lambda_{\gamma_n}]=[\lambda_{H_n}] =[\lambda_{G_n}] =0.
\eqno(2.17)$$ This leads us to expect logarithmic divergences at large $k$, meaning renormalizability of the field theory and the possibility of logarithmic corrections to scaling, produced by renormalization of the parameters in Eqns. (2.11-13).
It must be mentioned that marginal interactions and renormalizability are consequences of the dimensional relation implicit in Kolmogorov scaling: $[q^\smalL]=-[t]$. In general, had we set: $$\gamma_k\sim k^r\qquad H_k\sim k^s
\eqno(2.18)$$ we would have obtained: $$[q^\smalL]=\frac{r-s-2}{2}\qquad [\delta\z]=\frac{3r-s}{2}$$ $$[\lambda_{\gamma_n}]=[\lambda_{H_n}]
=[\lambda_{G_n}] =-n([t]+[q^\smalL])
=\frac{n}{2}(s+2-3r)
\eqno(2.19)$$ We thus see that super-renormalizability $[\lambda]<0$ and non-renormalizability $[\lambda]>0$ of the theory occur, respectively, for positive and negative $[q^\smalL]+[t]$ [@zinn-justin]. This corresponds to the two regimes of eddy decay time becoming asymptotically longer (shorter) than the eddy turn-over time, and hence the nonlinearity becoming dominant (negligible) at large scales.
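The dimension counting of Eqns. (2.17) and (2.19) can be reproduced mechanically; the sketch below (exact rational arithmetic, same conventions as the text) confirms that the Kolmogorov choice $r=2/3$, $s=0$ makes all couplings marginal:

```python
from fractions import Fraction as F

# Exact power counting for Eqns. (2.17) and (2.19): with gamma_k ~ k^r and
# H_k ~ k^s, rescaling x -> L x forces t -> L^r t (t ~ 1/gamma_k), and the
# field/coupling dimensions quoted in the text follow.
def dims(r, s):
    t = r                    # [t] = r
    q = (r - s - 2) / 2      # [q^L]
    dz = (3 * r - s) / 2     # [delta z]
    lam = -(t + q)           # [lambda_n] per leg, i.e. (s + 2 - 3r)/2
    return t, q, dz, lam

# Kolmogorov case: gamma_k ~ k^(2/3) (r = 2/3); from Eqn. (2.5), H_k -> const
# for k >> k0 (s = 0). All couplings are then marginal ([lambda] = 0):
print(dims(F(2, 3), F(0)))
# (Fraction(2, 3), Fraction(-2, 3), Fraction(1, 1), Fraction(0, 1))
```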
Marginality of the interactions means that logarithmic divergences may arise both at large and at small $k$. At small $k$, however, such divergences are not expected, due to the subtraction in the definition of $\delta\z$. The reason is sketched below (more details will appear in a separate publication; it must be said, anyway, that this is not a surprise: Lagrangian closures [@kraichnan65] were introduced precisely to cure the infrared divergences arising in the original Eulerian theories). As it appears from Eqn. (2.17), small $k$ divergence is due to internal lines in loop diagrams involving the field $\delta\z$. The scaling of Eqn. (2.17) is associated with large, not with small $k$ behaviors. In fact, the divergences occurring for large $k$ in a loop diagram will not change if we exchange $\delta\z(t|\x,\y)\to\z(t|\x,0)+\z(t|\y,0)-(\x+\y)$; this is because each small eddy contributes to the separation $\x-\y$ an amount which is of the same order as its contribution to the sweep. Now, the logarithmic divergence predicted at small $k$ in a loop diagram comes, indeed, from equating the scaling of the sweep $\z(t|\x,0)+\z(t|\y,0)-(\x+\y)$ with that of the trajectory separation $\delta\z(t|\x,\y)$ also at small $k$, which is incorrect. For small $k$, this scaling should be corrected by a factor $k$ per field $\delta\z$ involved in the lines of the loop, and this is enough to eliminate the divergence.
**III. Passive tracer transport**
**1. Self-diffusion of a fluid parcel**

Lagrangian correlation functions in the form $\langle\u^\smalL(\x,t)\u^\smalL(\x,0)\rangle=
\langle\u(\z(t|\x,0),t)\u(\x,0)\rangle$ are the simplest objects one may try to calculate from the random velocity field introduced in section II. The starting point, to lowest order in $\rho^{-1}$, and after sending the Kolmogorov scale $\eta$ to zero, is the following modification of Eqn. (2.6): $$\U^\smalL_\k(t)=
4\pi\Ckol\flux^\frac{2}{3}\frac{\k_\perp\k_\perp}{k^2}
\frac{\exp(-\gamma_k|t|)}{(k^2+k_0^2)^\frac{4}{3}}
\eqno(3.1)$$ The Lagrangian correlation time $\tau_L$ is then readily calculated: $$\tau_L^{-1}=
\langle|\u^\smalL|^2\rangle
\Big[\int\d t\langle\u^\smalL(\x,t)\cdot\u^\smalL(\x,0)\rangle\Big]^{-1}=
2\rho\Ckol^\frac{1}{2}\flux^\frac{1}{3}k_0^\frac{2}{3}
\eqno(3.2)$$ and we have the following relation between the turbulence level $u_T^2=\langle u^2\rangle=\langle |\u^\smalL|^2\rangle$ and the integral scales of the flow $k_0$ and $\tau_L$: $$u_T^2
=3\Ckol\flux^\frac{2}{3}k_0^{-\frac{2}{3}}=6\rho\Ckol^\frac{3}{2}\flux\tau_L
\eqno(3.3)$$ The correlation time $\tau_L$ is determined by the particular form of $\U^\smalL_\k$ we have chosen at small $k$, which is non-universal. It is more interesting, and relevant from the point of view of Lagrangian dispersion modeling [@thomson87; @sawford91], to calculate the Lagrangian time structure function: $$\langle[u_\alpha^\smalL(\x,t)-u_\alpha^\smalL(\x,0)]
[u_\beta^\smalL(\x,t)-u_\beta^\smalL(\x,0)]\rangle
=\frac{1}{2}\langle|\u^\smalL(\x,t)-\u^\smalL(\x,0)|^2\rangle\delta_{\alpha\beta}
\eqno(3.4)$$ We discover immediately that, in order to have a self-similar behavior in the inertial range, the time correlations should have a continuous time derivative at $t=0$, a property not satisfied by Eqn. (3.1).
This self-similarity violation can be illustrated in a simple way by imagining the turbulence field in the neighborhood of the fluid parcel as a superposition of nested eddies with scale $l_n$, velocity $u_n$ and eddy turn-over time $\tau_n$: $$l_n=l_02^{-n},\qquad u_n=u_02^{-\frac{n}{3}},\qquad\tau_n=\tau_02^{-\frac{2n}{3}}
\eqno(3.5)$$ If the time correlation decayed linearly for $t\to 0$, we would have: $$\langle |\u^\smalL(\x,t)-\u^\smalL(\x,0)|^2\rangle
\sim\sum_{\tau_n>t}u_n^2\frac{t}{\tau_n}+\sum_{\tau_n<t}u_n^2
\sim u_0^2\log(\tau_0/t)\frac{t}{\tau_0};
\eqno(3.6)$$ Thus, identical scaling of $u_n^2$ and $\tau_n$, together with linear decay of correlations, causes the largest space scales to contribute to the structure function at arbitrarily short time separations $t$, in the same way as a vortex with eddy turn-over time $\tau_n\sim t$, whence the logarithmic correction involving $\tau_0$.
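The estimate (3.6) is easy to check numerically; in the sketch below (an illustration, with $u_0=\tau_0=1$) each eddy contributes $u_n^2\min(t/\tau_n,1)$ to the structure function, and the compensated structure function grows linearly in $\log_2(\tau_0/t)$ instead of saturating:

```python
import numpy as np

# Nested-eddy estimate behind Eqn. (3.6): u_n^2 and tau_n scale identically
# (both ~ 2^(-2n/3)), so with a linear decay in time every larger eddy
# contributes u0^2 * t / tau0 to the structure function at time t.
u0, tau0 = 1.0, 1.0
n = np.arange(0, 60)
u2 = u0**2 * 2.0 ** (-2.0 * n / 3.0)
tau = tau0 * 2.0 ** (-2.0 * n / 3.0)

def S(t):
    return np.sum(u2 * np.minimum(t / tau, 1.0))

ts = tau0 * 2.0 ** (-np.arange(2, 12, dtype=float))
compensated = np.array([S(t) for t in ts]) * tau0 / ts
# 'compensated' grows by a roughly constant amount each time t is halved,
# i.e. linearly in log2(tau0/t), instead of saturating:
print(np.round(np.diff(compensated), 3))
```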
In order to have a quadratic behavior of the time correlation at $t=0$, it is necessary that the noise $\xi$ in Eqn. (2.7) be correlated in time, and the correlation must again be given along the trajectories. The appropriate modification to Eqn. (2.7) is therefore: $$\begin{cases}
(\partial_t+\u(\x,t)\cdot\nabla)q(\x,t)+\int\d^2y\gamma(\x-\y)q(\y,t)=r(\x,t)
\\
(\partial_t+\u(\x,t)\cdot\nabla)r(\x,t)+\int\d^2y\hat\gamma(\x-\y)r(\y,t)=\int\d^2yh(\x-\y)\xi(\y,t)
\end{cases}
\eqno(3.7)$$ where, for $k\ll\eta^{-1}$: $$\hat\gamma_k=\hat\rho\Ckol^\frac{1}{2}\flux^\frac{1}{3}k^\frac{2}{3}
\quad{\rm and}\quad
H_k=|h_k|^2=8\pi\rho\hat\rho(\rho+\hat\rho)\Ckol^\frac{5}{2}\flux^\frac{5}{3}
(k^2+k_0^2)^\frac{2}{3}
\eqno(3.8)$$ It is easy to show that the field theory associated with Eqn. (3.7) is also characterized by marginal interactions: $[\lambda_{\gamma_n}]=[\lambda_{\hat\gamma_n}]=[\lambda_{H_n}]=[\lambda_{G_n}]=0$, and the considerations in section II extend to the present case.
The zero order of the theory leads to the following correlation function: $$\U^\smalL_\k(t)
=\frac{\k_\perp\k_\perp}{k^2}
\frac{4\pi\Ckol\flux^\frac{2}{3}}{(k^2+k_0^2)^\frac{4}{3}}
\frac{\rho\ex^{-\hat\gamma_k|t|}-\hat\rho\ex^{-\gamma_k|t|}}{\rho-\hat\rho}
\eqno(3.9)$$ and the time correlation has a quadratic maximum at $t=0$. Calculation of the Lagrangian correlation time leads to the same result of Eqn. (3.2), with the substitution $\rho\to\frac{\rho\hat\rho}{\rho+\hat\rho}$, while smoothness of the time correlation eliminates the logarithmic correction to the scaling of the Lagrangian time structure function. This structure function obeys in fact, after sending $k_0\to 0$, the expected normal diffusion behavior: $$\langle|\u^\smalL(\x,t)-\u^\smalL(\x,0)|^2\rangle=2C_0\flux|t|
\qquad
{\rm with}
\qquad
C_0=\frac{\Ckol(\rho\hat\rho)^\frac{1}{2}}{2}
\int_0^\infty x^{-\frac{5}{3}}\d x\Big[1-\frac{1}{\rho-\hat\rho}
\Big(\rho\exp(-(\hat\rho/\rho)^\frac{1}{2}x^\frac{2}{3})
-\hat\rho\exp(-(\rho/\hat\rho)^\frac{1}{2}x^\frac{2}{3})\Big)\Big]
\eqno(3.10)$$ the constant $C_0$ is $\O(\rho)$ and, as expected from the discussion leading to Eqn. (3.6), diverges logarithmically for $\hat\rho/\rho\to\infty$.
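A numerical evaluation of the integral defining $C_0$ in Eqn. (3.10) (a sketch in arbitrary units, with the Kolmogorov constant and the flux set to one) confirms that $C_0$ is finite for finite $\hat\rho/\rho$ and grows as $\hat\rho/\rho\to\infty$:

```python
import numpy as np

# C_0 of Eqn. (3.10), with Ckol = eps = 1 (arbitrary units); the integral is
# done by trapezoid rule on a log grid (the integrand behaves like x^(-1/3)
# near 0 and like x^(-5/3) at infinity, so truncation errors are negligible).
def C0(rho, rho_hat, xmin=1e-9, xmax=1e9, n=20001):
    a, b = (rho_hat / rho) ** 0.5, (rho / rho_hat) ** 0.5
    x = np.logspace(np.log10(xmin), np.log10(xmax), n)
    bracket = 1.0 - (rho * np.exp(-a * x ** (2.0 / 3.0))
                     - rho_hat * np.exp(-b * x ** (2.0 / 3.0))) / (rho - rho_hat)
    f = x ** (-5.0 / 3.0) * bracket
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
    return 0.5 * (rho * rho_hat) ** 0.5 * integral

# Finite for finite rho_hat/rho, and growing as the ratio increases:
print(C0(1.0, 2.0) < C0(1.0, 20.0) < C0(1.0, 2000.0))  # True
```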
**2. Relative diffusion**

Analyzing the transport of a cluster of particles requires consideration of time intervals during which the space separations involved cannot be approximated as constant. Over these timescales, the short correlation time limit leads to a perturbation scheme which treats the velocity field, to zero order, as a white noise.
We focus on the case of a pair of particles. We have to study an equation in the form: $$\partial_t[z_\alpha(t|\r_0,0)-z_\alpha(t|0,0)]
=u_\alpha^\smalL(\r_0,t)-u_\alpha^\smalL(0,t)=U_{\alpha\beta}(\z(t|\r_0,0)-\z(t|0,0))
\xi_\beta(t)
\eqno(3.11)$$ with $\langle\xi_\alpha(t)\xi_\beta(0)\rangle=\delta_{\alpha\beta}\delta(t)$ and $U_{\alpha\beta}$ to be determined. Due to the multiplicative noise nature of this equation, attention must be paid to the possible presence of drift terms arising from the Stratonovich prescription implicit in its definition [@gardiner]. It is easy to show that this drift is identically zero, either by direct calculation of the increment $\delta\z(t|\x,0)$ for $t$ in the inertial range, or noticing that: $$\langle\delta\z(t|\r_0,0)\rangle=\int_0^t\d\tau[
\langle\u^\smalL(\r_0,\tau)\rangle-\langle\u^\smalL(0,\tau)\rangle]=0;
\eqno(3.12)$$ this, because of homogeneity of turbulence. For this reason, the separation process is described simply by: $$\partial_t\langle[z_\alpha(t|\r_0,0)-z_\alpha(t|0,0)][z_\beta(t|\r_0,0)-z_\beta(t|0,0)]\rangle=
D_{\alpha\beta}(\z(t|\r_0,0)-\z(t|0,0))
\eqno(3.13)$$ with $$D_{\alpha\beta}(\r)=\int\d t\langle[u_\alpha^\smalL(\r,t)-u_\alpha^\smalL(0,t))]
[u_\beta^\smalL(\r,0)-u_\beta^\smalL(0,0))]\rangle
\eqno(3.14)$$ This tensor is easily calculated from $D_{11}(\r)$ for $\r=(r,0)$, exploiting incompressibility. Using $\int_0^{2\pi}\d\theta\sin^2\theta\sin^2(x\cos\theta)=
\frac{\pi}{2}[1-\J_0(2x)-\J_2(2x)]$, we find, in the limit $k_0\to 0$: $$D_{11}(\r)=\frac{4\alpha_\smalsev\Ckol^\frac{1}{2}\flux^\frac{1}{3}}{\rho}r^\frac{4}{3}
\eqno(3.15)$$ where $$\alpha_\smalsev=\int_0^\infty\d x\ x^{-\frac{7}{3}}[1-\J_0(x)-\J_2(x)]
\simeq 0.265,
\eqno(3.16)$$ with $\J_n$ the Bessel function of the first kind, is evaluated in terms of Gamma functions [@gradshteyn] using the formula $\int_0^\infty\d x\ x^\mu J_\nu(x)=2^\mu
\frac{\Gamma(\frac{1}{2}(1+\nu+\mu))}{\Gamma(\frac{1}{2}(1+\nu-\mu))}$. From incompressibility we find therefore: $$D_{\alpha\beta}(\r)=\Big[\frac{r_\alpha r_\beta}{r^2}+\frac{7}{3}\Big(\delta_{\alpha\beta}
-\frac{r_\alpha r_\beta}{r^2}\Big)\Big]
\frac{4\alpha_\smalsev\Ckol^\frac{1}{2}\flux^\frac{1}{3}}{\rho}r^\frac{4}{3}
\eqno(3.17)$$ We want to study the asymptotics of the separation process of two particles in the inertial range. The procedure is standard (see e.g. [@borgas94]); we introduce the distribution $P$ for the separation $\r$ at time $t$, which will obey the diffusion equation (the summation over repeated indices convention is adopted throughout the paper): $$\partial_tP=\frac{1}{2}\partial_\alpha\partial_\beta D_{\alpha\beta}P
\eqno(3.18)$$ and look for an isotropic similarity solution in the form $$P(\r,t)=t^{-3}f(t^{-\frac{3}{2}}r)=t^{-3}f(R)
\eqno(3.19)$$ Equation (3.18) takes then the form $$\frac{3}{2}\partial_\alpha(R_\alpha f)
+\frac{4\alpha_\smalsev\Ckol^\frac{1}{2}\flux^\frac{1}{3}}{2\rho}\partial_\alpha R^\frac{1}{3}
R_\alpha\partial_Rf=0
\eqno(3.20)$$ This equation has an unphysical solution, which is divergent in $R=0$, and a finite one: $$f(R)=\exp\Big(-\frac{9\rho R^\frac{2}{3}}{8\alpha_\smalsev\Ckol^\frac{1}{2}\flux^\frac{1}{3}}\Big)
\eqno(3.21)$$ whose moments are: $$\langle R^n\rangle=\int_0^\infty R^{1+n}\d Rf(R)=\frac{3}{2}
\Big(\frac{8\alpha_\smalsev\Ckol^\frac{1}{2}\flux^\frac{1}{3}}{9\rho}\Big)^{3+\frac{3n}{2}}
\Gamma(3+\frac{3n}{2})
\eqno(3.22)$$ with $\Gamma$ the standard gamma function.
From here, the expression for the particle space separation is obtained in a straightforward manner; for $\gamma_{x^{-1}}t\gg 1$, indicating $\r(t)=\z(t|\r_0,0)-\z(t|0,0)$: $$\langle r^2(t)\rangle=c\flux t^3,
\qquad
c=\frac{10240\alpha_\smalsev^3\Ckol^\frac{3}{2}}{243\rho^3}
\eqno(3.23)$$ i.e. the space separation obeys Richardson diffusion. For the relative velocity, we have, from Eqn. (2.6): $$\langle [u^\smalL_r(\r_0,0)-u^\smalL_r(0,0)]^2\rangle=2\alpha_\smalfi\Ckol\flux^\frac{2}{3}
\langle r^\frac{2}{3}(t)\rangle
\eqno(3.24)$$ where $$\alpha_\smalfi=\int_0^\infty\d x\ x^{-\frac{5}{3}}[1-\J_0(x)-\J_2(x)]
\simeq 2.149
\eqno(3.25)$$ and, using Eqn. (3.22), for $\gamma_{x^{-1}}t\gg 1$, we find the normal diffusion behavior: $$\langle [u^\smalL_r(\r_0,0)-u^\smalL_r(0,0)]^2\rangle=\tilde c\flux t,
\qquad
\tilde c=\frac{16\alpha_\smalfi\alpha_\smalsev\Ckol^\frac{3}{2}}{3\rho}
\eqno(3.26)$$ Passing to the version of the velocity field smoothed out in time, provided by Eqn. (3.7), is accomplished, as in the case of $\tau_L$, by exchanging $\rho\to\frac{\rho\hat\rho}{\rho+\hat\rho}$. In [@boffetta00], both a sub-exponential behavior for the function $f(R)$ and Richardson diffusion were observed in a direct numerical simulation (DNS) of two-dimensional turbulence in the inverse cascade regime. Extrapolating the applicability of our leading order expressions in $\rho$ to the results of that paper would then give (taking also $\hat\rho\to\infty$): $\rho\simeq 2$.
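The moments (3.22) and the Richardson constant (3.23) can be cross-checked by direct quadrature of $f(R)$; the sketch below (arbitrary values for $\rho$, the Kolmogorov constant and the flux set to one) recovers the coefficient $10240/243$:

```python
import numpy as np

# Cross-check of Eqns. (3.21)-(3.23), with C = eps = 1 (arbitrary units).
rho, alpha7 = 2.0, 0.265
a = 9.0 * rho / (8.0 * alpha7)          # f(R) = exp(-a R^(2/3)), Eqn (3.21)

R = np.linspace(0.0, 50.0, 200001)
f = np.exp(-a * R ** (2.0 / 3.0))

def integral(g):
    # trapezoid rule on the uniform grid R
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(R))

# <R^2> with the 2D radial measure R dR (normalization included):
R2_mean = integral(R**3 * f) / integral(R * f)

# Eqn. (3.23): <r^2(t)> = c eps t^3 with c = 10240 alpha7^3 C^(3/2)/(243 rho^3)
c = 10240.0 * alpha7**3 / (243.0 * rho**3)
print(abs(R2_mean / c - 1.0) < 1e-4)  # True
```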
**3. The role of diffusivity and the Batchelor constant**

The dynamics of passive tracers, contrary to that of fluid elements, feels the effect of molecular diffusivity. Due to the finiteness of the turbulent correlation times, this effect does not consist purely of an additive noise contribution to the tracer velocity. Indicating by $\sigma$ the molecular diffusivity, the passive tracer velocity will have the form: $$\v(\x,t)+(2\sigma)^\frac{1}{2}{\boldsymbol\xi}(\x,t)
\eqno(3.27)$$ with $\langle\xi_\alpha(\x,t)\xi_\beta(0,0)\rangle=\delta_{\alpha\beta}\delta(\x)\delta(t)$ and $\v$ obeying an equation in the form: $$(\partial_t+\v(\x,t)\cdot\nabla)q_v(\x,t)+\int\d^2y\gamma(\x-\y)q_v(\y,t)
-\int\d^2yh(\x-\y)\xi(\y,t)$$ $$=-(2\sigma)^\frac{1}{2}\langle{\boldsymbol\xi}(\x,t)\cdot\nabla q_v(\x,t)\rangle_\xi
\simeq\sigma\nabla^2q_v(\x,t)
\eqno(3.28)$$ where $q_v=\nabla_\perp\cdot\v$, $\langle .\rangle_\xi$ is an average limited to the noise ${\boldsymbol\xi}$ and use has been made, in converting the advection by molecular noise into a diffusion term, of Itô’s lemma [@gardiner]. We see (it is assumed that the limit $\eta\to 0$ is already taken) that there is a renormalization of the damping kernel $\gamma$: $$\gamma_k\to\gamma_k+\sigma k^2
\eqno(3.29)$$ which leads to a cut-off for the velocity at the inverse diffusive scale $$\eta^{-1}_\sigma=(\rho\Ckol^\frac{1}{2})^\frac{3}{4}\flux^\frac{1}{4}\sigma^{-\frac{3}{4}}
\eqno(3.30)$$ We have then: $$\langle \v_\k(t)\v_{-\k}(0)\rangle
=\frac{\exp(-\sigma k^2|t|)}{1+(k\eta_\sigma)^\frac{4}{3}}\langle \u_\k(t)\u_{-\k}(0)\rangle,
\eqno(3.31)$$ and for small space separations $r/\eta_\sigma\to 0$, we have a quadratic behavior for the velocity structure function: $$\langle[v_r(\x+\r,t)-v_r(\x,t)]^2\rangle
=2\Ckol\flux^\frac{2}{3}r^2\eta_\sigma^{-\frac{4}{3}}
\int_0^\infty\frac{[1-\J_0(x)-\J_2(x)]\d x}{x^\frac{5}{3}(x^\frac{4}{3}+(r/\eta_\sigma)^\frac{4}{3})}$$ $$\simeq \frac{1}{4}\Ckol\flux^\frac{2}{3}\eta_\sigma^{-\frac{4}{3}}r^2|\log r/\eta_\sigma |
\eqno(3.32)$$ The transport of a passive scalar $\theta(\x,t)$ will be described by the equation $$(\partial_t+\v(\x,t)\cdot\nabla)\theta(\x,t)=\sigma\nabla^2\theta(\x,t)+f(\x,t)
\eqno(3.33)$$ with $f(\x,t)$ a source term. An interesting quantity to calculate is the fluctuation spectrum for $\theta$ in the case $f$ is random in time and concentrated at large scale: $$\langle f(\x+\r,t)f(\x,0)\rangle=F(r)\delta(t),\qquad
F(r)=
\begin{cases}
2\flux_\theta, & k_0r<1
\\
0, & k_0r>1
\end{cases}
\eqno(3.34)$$ We can thus consider $\langle\theta\rangle=0$. The equation for the steady state passive scalar correlation $\Theta(r)=\langle\theta(\x+\r,t)\theta(\x,t)\rangle$ will then be, for $k_0r\ll 1$: $$\langle\theta(\x,t)[\v(\x+\r,t)-\v(\x,t)]\cdot\nabla\theta(\x+\r,t)\rangle
=2\sigma\nabla^2\Theta(r)+4\flux_\theta
\eqno(3.35)$$ For $r\to 0$, the left hand side of this equation is zero; we thus obtain $\flux_\theta=\frac{\sigma}{2}\langle|\nabla\theta|^2\rangle$, i.e. $\flux_\theta$ is the dissipation of passive scalar fluctuations. Following the same approach as in the previous section, the velocity difference $\v(\x+\r,t)-\v(\x,t)$ is approximated by a white noise. From Itô’s Lemma, its contribution in Eqn. (3.35) will be an eddy diffusivity $D_{\alpha\beta}^v(\r)$, whose expression will coincide, for $r\gg\eta_\sigma$, with the one for $D_{\alpha\beta}$ provided by Eqns. (3.15-17). Drift terms coming from the Stratonovich prescription are ruled out with the same arguments used in the previous section. The resulting diffusion equation will then read: $$\partial_\alpha\partial_\beta(\frac{1}{2}D_{\alpha\beta}^v(\r)+2\sigma\delta_{\alpha\beta})
\Theta(r)+4\flux_\theta=0
\eqno(3.36)$$ For $r\ll\eta_\sigma$, $D^v_{\alpha\beta}$ is essentially a correction to the molecular diffusivity, and will read, from Eqn. (3.31): $$D_{\alpha\beta}^v(\r)=
\int\d t\langle[v_\alpha(\r,t)-v_\alpha(0,t))][v_\beta(\r,0)-v_\beta(0,0))]\rangle$$ $$\simeq
\Big[\frac{r_\alpha r_\beta}{r^2}+\frac{13}{3}\Big(\delta_{\alpha\beta}-\frac{r_\alpha
r_\beta}{r^2}\Big)\Big]
\frac{3\pi\sigma}{8\rho^2}(r/\eta_\sigma)^\frac{10}{3}
\eqno(3.37)$$ For $\eta_\sigma\ll r$, $D^v_{\alpha\beta}$ is approximated by Eqn. (3.17), the molecular diffusivity $\sigma$ can be neglected and Eqn. (3.36) takes the form: $$\Theta''+\frac{7}{3r}\Theta'=-\frac{8\rho\flux_\theta}
{4\alpha_\smalsev\Ckol^\frac{1}{2}\flux^\frac{1}{3}r^\frac{4}{3}}
\eqno(3.38)$$ Solution of this equation gives automatically the passive scalar structure function $\langle[\theta(\x+\r,t)-\theta(\x,t)]^2\rangle=2[\Theta(0)-\Theta(r)]$ in the inertial range for $\theta$: $\eta_\sigma\ll r\ll k_0^{-1}$. This structure function scales like $r^\frac{2}{3}$ and can be written in the form: $$\langle[\theta(\x+\r,t)-\theta(\x,t)]^2\rangle
=\frac{B\flux_\theta r^\frac{2}{3}}{\Ckol^\frac{1}{2}\flux^\frac{1}{3}}
\eqno(3.39)$$ with the parameter $B=\frac{3\rho}{\alpha_\smalsev}$ the so called Batchelor constant of the flow. As with relative diffusion, the case of a velocity field with smooth time correlation described by Eqn. (3.7) is recovered substituting $\rho$ with $\frac{\rho\hat\rho}{\rho+\hat\rho}$.
**IV. Solid tracers: 1-particle statistics**
We consider the simplest case of a linear drag. In the presence of gravity (or of a constant external force) and of the turbulent velocity field $\u(\x,t)$, the solid particle coordinate $\z^\smalP(t|\x,0)$ will obey the equation of motion: $$\dot\z^\smalP(t|\x,0)=\v^\smalP(\x,t)+\u_G,\qquad \z^\smalP(0|\x,0)=\x
\eqno(4.1)$$ where $\u_G$ is the gravitational drift, that we suppose constant and uniform and $\v^\smalP$ is the fluctuation in the Lagrangian solid particle velocity, which obeys the linear relaxation equation: $$\dot\v^\smalP(\x,t)=\tau_S^{-1}(\u(\z^\smalP(t|\x,0),t)-\v^\smalP(\x,t))
=\tau_S^{-1}(\u^\smalP(\x,t)-\v^\smalP(\x,t)),
\eqno(4.2)$$ with $\tau_S$ the Stokes time. (For a spherical particle of radius $a$ and density $\rho_\smalP$, in a fluid of density $\rho_{\scriptscriptstyle{0}}$ and kinematic viscosity $\nu$, we would have: $\tau_S=\frac{2a^2}{9\nu}|1-\rho_\smalP/\rho_{\scriptscriptstyle{0}}|$; we are disregarding any effect from finite particle Reynolds number [@maxey83]). From now on we shall identify Lagrangian quantities calculated on solid particle trajectories by the superscript $P$.
In general, the non-coincidence of fluid and solid particle trajectories makes the analysis of Eqns. (4.1-2) a very difficult task. The short correlation time limit $\rho\to\infty$, however, allows one to proceed perturbatively in the fluctuating part of the trajectory separation $\u_Gt+\z(t|\x,0)-\z^\smalP(t|\x,0)$. The physical motivation for this is that, from Eqn. (4.2), $\u_Gt+\z(t|\x,0)-\z^\smalP(t|\x,0)$ fluctuates on timescale $\tau_S$ with velocity scale fixed by those eddies which have decay time $\tau_S$. Hence, for $\rho$ large, the fluctuating part of the trajectory separation remains small on the scale of these eddies. Furthermore, when either $u_Gt>\delta z(t|\x+\u_Gt,\x)$ or $\gamma_{|u_Gt|^{-1}}t> 1$, in other words, when either $\frac{\Ckol^\frac{3}{2}\flux t}{u_G^2} < 1 $ or $\frac{\Ckol^\frac{3}{2}\flux t}{u_G^2}>\rho^{-3}$ (provided $\rho> 1$, one of the two conditions is always satisfied), it is possible to approximate $\z(t|\x,0)+\u_Gt\simeq \z(t|\x+\u_Gt,0)$.
To lowest order we have therefore: $$\u(\z^\smalP(t|\x,0),t)=\u(\u_Gt+\z(t|\x,0),t)=\u^\smalL(\x+\u_Gt,t)
\eqno(4.3)$$ We obtain immediately the fluctuation amplitude of the velocity difference between solid and fluid particle at a given position. From Eqns. (4.2-3) we can write: $$\v^\smalP(\x,t)=\int\frac{\d^2k}{(2\pi)^2}\int_{-\infty}^t\frac{\d\tau}{\tau_S}\u^\smalL_\k(\tau)
\exp(-\frac{t-\tau}{\tau_S}+\i\k\cdot\x)
\eqno(4.4)$$ and from here we obtain, using Eqn. (3.2): $$\langle(v_\alpha-u_\alpha)(v_\beta-u_\beta)\rangle=\delta_{\alpha\beta}u^2_S\int_1^\infty\frac{\d x}
{x(1+\frac{2\tau_S}{\tau_L}x^\frac{1}{3})}
\underset{\scriptscriptstyle \tau_S\ll\tau_L}\longrightarrow
3\delta_{\alpha\beta}u^2_S\log(\tau_L/\tau_S)
\eqno(4.5)$$ where $$u_S=\Big(\frac{\tau_S}{3\tau_L}\Big)^\frac{1}{2}u_T
\eqno(4.6)$$ for $\tau_S<\tau_L$, is the velocity scale of eddies with lifetime $\tau_S$ and $u_T$ is the turbulent velocity defined in Eqn. (3.3). In order to proceed to next order, it is necessary to calculate the trajectory separation: $$\z^\smalP(t|\x,0)-\z(t|\x,0)=\u_Gt+(1-\ex^{-t/\tau_S})
\int_{-\infty}^0\d\tau\ \ex^{\tau/\tau_S}\u^\smalL(\x,\tau)$$ $$-\int_0^t\d\tau\exp(-\frac{t-\tau}{\tau_S})\u^\smalL(\x,\tau)
\eqno(4.7)$$ We notice from this equation that the inertia produced part of trajectory separation does not grow indefinitely. In other words, if $\u_G=0$ and to lowest order in $\rho^{-1}$, there will be localization of solid particle trajectories around the fluid parcel trajectories they cross at any given time; from Eqn. (4.7): $\langle |\z^\smalP(t|\x,-\infty)-\z(t|\x,-\infty)|^2\rangle
\sim (u_T\tau_S)^2\sim \Ckol\flux^\frac{2}{3}k_0^{-\frac{2}{3}}\tau_S^2$. We thus introduce the localization length $S_l$: $$S_l=\Ckol^\frac{1}{2}\flux^\frac{1}{3}k_0^{-\frac{1}{3}}\tau_S
\eqno(4.8)$$ What happens is that the velocity difference $\v^\smalP-\u^\smalP$ obeys a relaxation equation with a forcing which is a time derivative; from Eqn. (4.2): $\frac{\d}{\d t}(\v^\smalP-\u^\smalP)+\tau_S^{-1}(\v^\smalP-\u^\smalP)=-\dot\u^\smalP$. The frequency spectrum of $\v^\smalP-\u^\smalP$ therefore does not have the small-frequency singularity necessary for long-time divergence. The localization length $S_l$ will turn out to play a fundamental role in the production both of concentration fluctuations and of corrections to the velocity correlation time. (Of course, to higher order in $\rho^{-1}$, the relative separation of fluid parcels sets in and localization is destroyed; $S_l$ then becomes the part of the trajectory separation which remains after the Richardson diffusion contribution is subtracted out).
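The logarithm in Eqn. (4.5) can be checked directly: with the substitution $x=\ex^{3v}$ the integrand becomes smooth, the integral evaluates in closed form to $3\log[(1+\epsilon)/\epsilon]$ with $\epsilon=2\tau_S/\tau_L$, and this reduces to $3\log(\tau_L/\tau_S)$ to leading logarithmic order. A stdlib-only numerical sketch (function names are ours):

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule on [a, b] with n (even) panels."""
    h = (b - a) / n
    s = f(a) + f(b) + 4.0*sum(f(a + (2*i - 1)*h) for i in range(1, n//2 + 1)) \
        + 2.0*sum(f(a + 2*i*h) for i in range(1, n//2))
    return s*h/3.0

def variance_integral(tau_ratio):
    """int_1^inf dx / (x (1 + 2 (tau_S/tau_L) x^(1/3))), as in Eqn (4.5).
    Substituting x = exp(3v) gives the smooth integrand 3/(1 + eps e^v)."""
    eps = 2.0*tau_ratio
    vmax = math.log(1.0/eps) + 20.0   # beyond this the integrand is ~ 3 e^(-20)
    return simpson(lambda v: 3.0/(1.0 + eps*math.exp(v)), 0.0, vmax)
```

The leading-log estimate $3\log(\tau_L/\tau_S)$ is approached only slowly, as expected of a logarithmic asymptotics.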
In the absence of gravity, besides the integral-scale-dependent localization length $S_l$, three more scales, which, if $\tau_S\ll\tau_L$, are purely inertial, can be obtained by combining $\tau_S$ with the crossing time of an eddy by a solid particle, the eddy lifetime and the eddy turn-over time. We have the size $S$ of an eddy whose lifetime equals $\tau_S$: $\gamma_{S^{-1}}\tau_S\sim 1$; the size $S_c$ of an eddy that is crossed by a solid particle in a time $\tau_S$: $u_S\sim S_c/\tau_S$; the size $S_i$ of an eddy whose lifetime equals the crossing time by a solid particle: $S_i\gamma_{S_i^{-1}}\sim u_S$. Summarizing: $$S=\rho^\frac{3}{2}\Ckol^\frac{3}{4}\flux^\frac{1}{2}\tau_S^\frac{3}{2},
\qquad
S_c=\rho^\frac{1}{2}\Ckol^\frac{3}{4}\flux^\frac{1}{2}\tau_S^\frac{3}{2}
\quad
{\rm and}
\quad
S_i=\rho^{-\frac{3}{2}}\Ckol^\frac{3}{4}\flux^\frac{1}{2}\tau_S^\frac{3}{2}
\eqno(4.9)$$ From Eqn. (4.9), we identify the following sequence of ranges:
- A large separation range $r>S$, in which the fluid velocity $\u^\smalP$ varies slowly on the scale of the relaxation time $\tau_S$.
- A first intermediate range $S<r<S_c$ in which the fluid velocity $\u^\smalP$ is a fast variable, but still, $\tau_S$ is short compared with the crossing time of an eddy of size $r$; hence, Eqn. (4.2) has the form of a Langevin equation with a noise $\tau_S^{-1}\u^\smalP$ of constant amplitude on the scale of this crossing time. The cross-over scale $S$ will play an important role in the determination of the degree of non-ergodicity of the solid particle flow (see section VI).
- A second intermediate range $S_c<r<S_i$, in which the crossing time is shorter than both the Stokes time and the eddy turn-over time, but is longer than the lifetime of an eddy of that size; hence, the solid particle moves ballistically with respect to the fluid.
- A small separation range $r<S_i$, in which the trajectory separation accumulated in the lifetime of an eddy is no longer a perturbation.
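The hierarchy in Eqn. (4.9) shares the common factor $\Ckol^{3/4}\flux^{1/2}\tau_S^{3/2}$, so that $S/S_c=\rho$ and $S_c/S_i=\rho^2$, and the ordering $S_i<S_c<S$ underlying the ranges above holds whenever $\rho>1$. A minimal sketch (function name and sample values are ours):

```python
def tracer_scales(rho, tau_S, C_K=1.6, eps=1.0):
    """S, S_c, S_i of Eqn (4.9): rho^(3/2), rho^(1/2), rho^(-3/2) times the
    common factor C_K^(3/4) eps^(1/2) tau_S^(3/2). Sample values are ours."""
    base = C_K**0.75*eps**0.5*tau_S**1.5
    return rho**1.5*base, rho**0.5*base, rho**-1.5*base
```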
From Eqns. (4.3), (4.4) and (4.7), we can establish a perturbative calculation scheme for $\u^\smalP$ and $\v^\smalP$. Notice that, within perturbation theory, $\v^\smalP$ is a single-valued function of $\x$ and $t$, and $\v(\x,t)$ automatically defines a velocity field for the solid particles. The separation between $S_i$ and all the other scales of the problem has the consequence that, in the present case, the Weinstock approximation is exact [@weinstock76]. What happens is that trajectory separation is produced mainly by eddies of size $r\gtrsim S$, for which trajectory separation is a perturbation. As a consequence, the Weinstock approximation applies also at scales $r<S_i$, for which trajectory separation is not a perturbation at all. For dominant gravity, i.e. when $u_G>u_S$, trajectory separation is produced mainly by the gravitational drift $u_G$ and the Weinstock approximation is automatically satisfied.
We can calculate at this point the time correlation for the solid particle velocity and adopt the approach followed in [@pismen78; @nir79]; we can thus write, using Eqn. (4.7): $$\langle u_1^\smalP(0,0)u_1^\smalP(0,t)\rangle
=\int\frac{\d^2k}{(2\pi)^2}\frac{\d^2p}{(2\pi)^2}
\langle u^\smalL_{1\k}(0)u^\smalL_{1\p}(t)
\exp[\i\p\cdot(\z^\smalP(t|0,0)-\z(t|0,0))]\rangle$$ $$=-\int\frac{\d^2k}{(2\pi)^2}\frac{\d^2p}{(2\pi)^2}
\exp(\i\p\cdot\u_Gt)\frac{\delta^2Z[{\bf J}]}{\delta J_{\k 1}(0)\delta J_{\p 1}(t)}
\Big|_{{\bf J}=\p\bar J_t}
\eqno(4.10)$$ where $$Z[{\bf J}]=\Big\langle\exp\Big(\i\int\frac{\d^2s}{(2\pi)^2}
\int\d t\u^\smalL_\s(t)\cdot{\bf J}_\s(t)\Big)\Big\rangle$$ $$=\N\exp\Big(-\frac{1}{2}\int\d\tau\d\tau'\int\frac{\d^2s}{(2\pi)^2}
{\bf J}_\s(\tau)\cdot\U_\s^\smalL(\tau-\tau')\cdot{\bf J}_{-\s}(\tau')\Big)
\eqno(4.11)$$ is the generating functional for the field $\u^\smalL$ and $$\bar J_t(\tau)=
\begin{cases}
0, &\tau>t
\\
-\exp(-\frac{t-\tau}{\tau_S}), & 0<\tau<t
\\
(1-\exp(-t/\tau_S))\exp(\tau/\tau_S), &\tau<0
\end{cases}
\eqno(4.12)$$ Substituting back into Eqn. (4.10), we obtain, after introducing dimensionless variables $\bar t=t/\tau_S$, $\bar u_G=k_0\tau_Su_G$ and $\bar\gamma=\tau_S\gamma_{k_0}=
\tau_S/(2\tau_L)$:
$$\langle u_1^\smalP(0,0)u_1^\smalP(0,t)\rangle
=\frac{u_T^2}{6}
\int_1^\infty\d x\ x^{-\frac{4}{3}}[\J_0(\bar u_G(x-1)^\frac{1}{2}\bar t)+
\J_2(\bar u_G(x-1)^\frac{1}{2}\bar t)]$$ $$\times\exp\Big[-\bar\gamma \bar t x^\frac{1}{3}
-\frac{\bar\gamma^3(x-1)}{2\rho^2}
\int_1^\infty\frac{\d y}{y^\frac{4}{3}(1+\bar\gamma y^\frac{1}{3})}
\Big(1-\ex^{-\bar t}-\frac{\ex^{-\bar\gamma\bar ty^\frac{1}{3}}-\ex^{-\bar t}}
{1-\bar\gamma y^\frac{1}{3}}\Big)\Big]
\eqno(4.13)$$ We see from this equation that decorrelation of the fluid velocity sampled by a solid particle receives three contributions: one from the gravitational drift $u_G$, one from the eddy decay $\bar\gamma x^\frac{1}{3}\bar t$, and one from the integral term in the exponential, which comes from inertia produced trajectory separation. This last term is peculiar, in that it saturates to a constant for long $t$ instead of continuing to increase indefinitely. This term is the argument in the exponential expression for $Z[{\bf J}]$ $[$see Eqn. (4.11)$]$, which is essentially: $$\p\p : \langle[\z^\smalP(t|0,0)-\z(t|0,0)][\z^\smalP(t|0,0)-\z(t|0,0)]\rangle
\eqno(4.14)$$ with the drift $\u_G$ subtracted out, and with $\p$ the wavevector entering the integral of Eqn. (4.10). But, from Eqns. (4.7-8), we saw that this expression saturates at $t\to\infty$. In consequence of this, for long enough times, the large $x$ behavior of the integrand in Eqn. (4.13) will be dominated by the value at saturation of the inertia produced term.
[**1. Velocity self-diffusion**]{} Inertia causes two ranges of time separations in the correlation $\langle u_1^\smalP(\x,0)u_1^\smalP(\x,t)\rangle$: one at short times dominated by sweep from the velocity difference $\u_G+\v-\u$ and one at long times associated with eddy decay, where Eqn. (3.10) holds [@olla01]. The transition between the two ranges occurs at $$t\sim \frac{\max(u_G^2,u_S^2)}{\rho^3\Ckol^\frac{3}{2}\flux}
\eqno(4.15)$$ From Eqns. (4.5-6), for dominant inertia, i.e. $u_S>u_G$, this cross-over time is much shorter than $\tau_L$, while, for dominant gravity, i.e. for $u_G\gg u_S$, it is possible that sweep dominates for all inertial timescales; for this to occur, it is necessary that the crossing time of a large eddy by the particle be less than $\tau_L$, i.e. $k_0u_G\tau_L>1$. For dominant inertia the cross-over time $u_S^2/(\rho^3\Ckol^\frac{3}{2}\flux)\sim\rho^{-2}\tau_S$ is just the lifetime of an eddy of size $S_i$ $[$see Eqn. (4.9)$]$.
For dominant gravity, the exponential term in Eqn. (4.13) can be neglected. For $t\ll\min(\tau_G,\tau_L)$ with $\tau_G=\frac{6}{\rho^2}(\frac{u_G}{u_T})^2\tau_L\sim\frac{u_G^2}{\rho^3\Ckol^{3/2}\flux}$, we find: $$\langle[u_1^\smalP(0,t)-u_1^\smalP(0,0)]^2\rangle=
\frac{u_T^2}{3}
\int_1^\infty\d x\ x^{-\frac{4}{3}}[1-\J_0(\bar u_G\bar t(x-1)^\frac{1}{2})-
\J_2(\bar u_G\bar t(x-1)^\frac{1}{2})]$$ $$\simeq
\frac{2}{3}\alpha_\smalfi\Ckol\flux^\frac{2}{3}(u_Gt)^\frac{2}{3}
\eqno(4.16)$$ where $\alpha_\smalfi\simeq 2.149$ $[$see Eqn. (3.25)$]$. The time $\tau_G$, for $u_G<u_L$, is the lifetime of those vortices whose decay time equals their crossing time by a falling particle; for $\rho=O(1)$, $\tau_G$ coincides with the eddy turn-over time of vortices with characteristic velocity $u_G$.
For dominant inertia $u_G<u_S$ and short enough times $t\ll\rho^{-2}\tau_S$, only the last piece in Eqn. (4.13) will contribute and will be quadratic in $\bar t$; if $\tau_S\ll\tau_L$: $$1-\ex^{-\bar t}-\frac{\ex^{-\bar\gamma\bar ty^\frac{1}{3}}-\ex^{-\bar t}}
{1-\bar\gamma y^\frac{1}{3}}
\simeq
\frac{1}{2}\bar\gamma y^\frac{1}{3}\bar t^2
\eqno(4.17)$$ Substituting into Eqn. (4.13), we are left with the following expression: $$\langle[u_1^\smalP(0,t)-u_1^\smalP(0,0)]^2\rangle
=\frac{u_T^2}{3}
\int_1^\infty\d x\ x^{-\frac{4}{3}}
\Big\{1-\exp\Big[-\frac{\bar\gamma^3\bar t^2(x-1)}{4\rho^2}
\int_1^\infty\frac{\d y}{y(1+\bar\gamma y^\frac{1}{3})}
\Big]\Big\}$$ $$\simeq \frac{u_T^2}{3}
\int_0^\infty\d x\ x^{-\frac{4}{3}}
\Big\{1-\exp\Big[\frac{3\bar\gamma^3\bar t^2x\log\bar\gamma}{4\rho^2}
\Big]\Big\}
\eqno(4.18)$$ where use has been made again, in passing from the first to the second line, of the condition $t\ll\rho^{-2}\tau_S$. Using $\int_0^\infty\d x\ x^{-\frac{4}{3}}(1-\exp(-Ax))=3\Gamma(2/3)A^\frac{1}{3}$, we therefore obtain: $$\langle[u_1^\smalP(0,t)-u_1^\smalP(0,0)]^2\rangle
\simeq
\frac{3}{2}\Gamma(2/3)\Big(3\log(\tau_L/\tau_S)\Big)^\frac{1}{3}
\Ckol\flux^\frac{2}{3}
(u_St)^\frac{2}{3}
\eqno(4.19)$$ As predicted in [@olla01], at short times, the time structure function for $\u^\smalP$ has a sub-diffusive behavior with exponent $\frac{2}{3}$ both for dominant $u_G$ and dominant $u_S$. What happens is that at such short time scales, the particle crosses at constant speed (remember also, in the inertia dominated case, that $S_c\gg S_i$) vortices whose velocity field is, in the limit, basically frozen; hence a Taylor hypothesis applies, and time correlations coincide with their spatial counterparts.
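The identity used to pass from Eqn. (4.18) to Eqn. (4.19), $\int_0^\infty x^{-4/3}(1-\ex^{-Ax})\,\d x = 3\Gamma(2/3)A^{1/3}$, follows from an integration by parts; it can be spot-checked numerically with the substitution $x=w^3$, which removes the integrable singularity at the origin (stdlib only; function names are ours):

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule on [a, b] with n (even) panels."""
    h = (b - a) / n
    s = f(a) + f(b) + 4.0*sum(f(a + (2*i - 1)*h) for i in range(1, n//2 + 1)) \
        + 2.0*sum(f(a + 2*i*h) for i in range(1, n//2))
    return s*h/3.0

def gamma_integral(A, W=10.0):
    """int_0^inf x^(-4/3)(1 - exp(-A x)) dx via x = w^3;
    the tail beyond w = W is the analytic 3/W (the integrand tends to 3/w^2)."""
    def f(w):
        return 0.0 if w == 0.0 else 3.0*(1.0 - math.exp(-A*w**3))/w**2
    return simpson(f, 0.0, W) + 3.0/W

def gamma_closed(A):
    return 3.0*math.gamma(2.0/3.0)*A**(1.0/3.0)
```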
[**2. Velocity correlation times**]{} Starting from Eqn. (4.13), we can calculate the correlation time $\tau_P$ for the fluid velocity sampled by a solid particle: $$\tau_P=\langle [u_1^\smalP]^2\rangle^{-1}\int_0^\infty\d t
\langle u_1^\smalP(0,0)u_1^\smalP(0,t)\rangle
\eqno(4.20)$$ To lowest order, any discrepancy between the PDFs (probability distribution functions) for $u^\smalL$ and $u^\smalP$ can be neglected and we have $\langle [u_1^\smalP]^2\rangle
=\langle [u_1^\smalL]^2\rangle=\frac{1}{2}u_T^2$. We begin by analyzing the case of dominant inertia: $u_G=0$. Taylor expanding in $\rho^{-1}$ the integrand in Eqn. (4.13) and substituting into Eqn. (4.20) leads to terms which diverge when integrated in $x$. This indicates that the time-independent part of the inertia term in Eqn. (4.13) dominates the integral. We thus Taylor expand in $\rho^{-1}$ only the time-dependent piece of the integrand in Eqn. (4.13), i.e.: $$\exp\Big[-\bar\gamma\bar tx^\frac{1}{3}+\frac{\bar\gamma^3(x-1)}{2\rho^2}
\int_1^\infty\frac{\d y}{y^\frac{4}{3}(1+\bar\gamma y^\frac{1}{3})}
\Big(\ex^{-\bar t}+\frac{\ex^{-\bar\gamma\bar ty^\frac{1}{3}}-\ex^{-\bar t}}
{1-\bar\gamma y^\frac{1}{3}}\Big)\Big]
\eqno(4.21)$$ to obtain: $$\int_0^\infty\d t\langle u_1^\smalP(\x,0)u_1^\smalP(\x,t)\rangle
=\frac{u_T^2}{6}
\int_1^\infty\d x\ x^{-\frac{4}{3}}
\exp\Big(-\frac{\bar\gamma^2(x-1)}{2\rho^2}
\int_1^\infty\frac{\d y}{y^\frac{4}{3}(1+\bar\gamma y^\frac{1}{3})}\Big)$$ $$\times\Big[\frac{1}{\bar\gamma x^\frac{1}{3}}+
\frac{\bar\gamma^2(x-1)}{2\rho^2}
\int_1^\infty\d y
\frac{1+\bar\gamma x^\frac{1}{3}-\bar\gamma^2 y^\frac{1}{3}(x^\frac{1}{3}+y^\frac{1}{3})}
{y^\frac{4}{3}(1-\bar\gamma^2y^\frac{2}{3})(1+\bar\gamma x^\frac{1}{3})
(x^\frac{1}{3}+y^\frac{1}{3})}\Big]
\eqno(4.22)$$ and we see that the integral in $x$ of the $\O(\rho^{-2})$ term on the second line of Eqn. (4.22) is in fact dominated by a saddle point at $x=(k/k_0)^2\sim (\rho/\bar\gamma)^2$, i.e. at $k\sim S_l^{-1}$. Combining this result with the fact that the integrands are peaked at $y\sim 1$, Eqn. (4.22) will take the form: $$\int_0^\infty\d t\langle u_1^\smalP(0,0)u_1^\smalP(0,t)\rangle
=\frac{u_T^2}{6}
\int_1^\infty\d x\ x^{-\frac{4}{3}}
\exp\Big(-\frac{\bar\gamma^2x}{2\rho^2}
\int_1^\infty\frac{\d y}{y^\frac{4}{3}(1+\bar\gamma y^\frac{1}{3})}\Big)$$ $$\times\Big[\frac{1}{\bar\gamma x^\frac{1}{3}}+
\frac{\bar\gamma^2x^\frac{2}{3}}{2\rho^2}
\int_1^\infty\frac{\d y}{y^\frac{4}{3}(1+\bar\gamma y^\frac{1}{3})}\Big].
\eqno(4.23)$$ We thus obtain, for the deviation $\tau_P-\tau_L$: $$\frac{\tau_P}{\tau_L}=1+B(\bar\gamma)\bar\gamma^\frac{4}{3}\rho^{-\frac{4}{3}}+\O(\rho^{-2})
\eqno(4.24)$$ where $$B(\bar\gamma)=\Big(\frac{2}{3}\Big)^\frac{1}{3}\Gamma(1/3)
\Big[\frac{1}{3}-\frac{\bar\gamma}{2}+\bar\gamma^2+\bar\gamma^3\log\frac{\bar\gamma}{1+\bar\gamma}\Big]^\frac{2}{3}
\eqno(4.25)$$ It is to be noticed that the factor $B(\bar\gamma)$ is always positive, i.e. the correlation time for the fluid velocity seen by the solid particle is longer than $\tau_L$. Following the argument in [@kraichnan64a], this would be expected in the case of a velocity field with statistics defined in an Eulerian frame, and is exactly the result obtained in [@reeks77]. In the case of Lagrangian statistics, it is not clear whether the deviation between solid and fluid particle trajectories should have led to a faster, rather than a slower, decorrelation rate.
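The positivity of $B(\bar\gamma)$ claimed above can be checked directly: the bracket in Eqn. (4.25) tends to $1/3$ for $\bar\gamma\to 0$, behaves as $1/(4\bar\gamma)$ for $\bar\gamma\to\infty$, and stays positive in between. A quick numerical scan (function names are ours):

```python
import math

def B_bracket(g):
    """Bracket of Eqn (4.25): 1/3 - g/2 + g^2 + g^3 log(g/(1+g))."""
    return 1.0/3.0 - g/2.0 + g*g + g**3*math.log(g/(1.0 + g))

def B_factor(g):
    """B(gamma_bar) of Eqn (4.25); real (and positive) iff the bracket is positive."""
    return (2.0/3.0)**(1.0/3.0)*math.gamma(1.0/3.0)*B_bracket(g)**(2.0/3.0)
```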
In the case of dominant gravity, as expected [@csanady63; @nir79], there is always a decrease of the correlation time. In place of Eqn. (4.22), we have: $$\int_0^\infty\d t\langle u_1^\smalP(0,0)u_1^\smalP(0,t)\rangle$$ $$=\frac{u_T^2}{6}
\int_0^\infty\d t\int_1^\infty\d x\ x^{-\frac{4}{3}}
[\J_0(\bar u_G(x-1)^\frac{1}{2}\bar t)+\J_2(\bar u_G(x-1)^\frac{1}{2}\bar t)]
\exp(-\bar\gamma \bar t x^\frac{1}{3})
\eqno(4.26)$$ which, using $\int_0^\infty\d x\J_\nu(\beta x)\ex^{-\alpha x}=\beta^{-\nu}
(\alpha^2+\beta^2)^{-\frac{1}{2}}[(\alpha^2+\beta^2)^\frac{1}{2}-\alpha]^\nu$ [@gradshteyn], leads to the expression for the correlation time: $$\frac{\tau_P}{\tau_L}=
\frac{4}{3}\int_1^\infty\frac{\d x}{x^\frac{5}{3}}\,\frac{\bar\gamma x^\frac{1}{3}}
{\bar u_G(x-1)^\frac{1}{2}}\Big[\Big(\Big(\frac{\bar\gamma x^\frac{1}{3}}
{\bar u_G(x-1)^\frac{1}{2}}\Big)^2+1\Big)^\frac{1}{2}-\frac{\bar\gamma x^\frac{1}{3}}
{\bar u_G(x-1)^\frac{1}{2}}\Big]
\eqno(4.27)$$ We can obtain limiting expressions for this ratio, when the crossing time $(k_0u_G)^{-1}$ is much longer or much shorter than the integral time $\tau_L$: $$\frac{\tau_P}{\tau_L}=
\begin{cases}
1+\frac{2}{3}(u_Gk_0\tau_L)^2
\log u_Gk_0\tau_L
\quad
&k_0u_G\tau_L\ll 1
\\
2^\frac{3}{2}(u_Gk_0\tau_L)^{-1}
&k_0u_G\tau_L\gg 1
\end{cases}
\eqno(4.28)$$
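The Laplace transform of the Bessel functions quoted before Eqn. (4.27), $\int_0^\infty\J_\nu(\beta x)\ex^{-\alpha x}\d x=\beta^{-\nu}(\alpha^2+\beta^2)^{-1/2}[(\alpha^2+\beta^2)^{1/2}-\alpha]^\nu$, can be spot-checked numerically using the integral representation of integer-order Bessel functions (stdlib only; function names are ours):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson rule on [a, b] with n (even) panels."""
    h = (b - a) / n
    s = f(a) + f(b) + 4.0*sum(f(a + (2*i - 1)*h) for i in range(1, n//2 + 1)) \
        + 2.0*sum(f(a + 2*i*h) for i in range(1, n//2))
    return s*h/3.0

def bessel_J(nu, x):
    """Integer-order Bessel function, J_nu(x) = (1/pi) int_0^pi cos(nu t - x sin t) dt."""
    return simpson(lambda t: math.cos(nu*t - x*math.sin(t)), 0.0, math.pi, 400)/math.pi

def laplace_bessel(nu, alpha, beta, xmax=30.0):
    """Numerical int_0^inf J_nu(beta x) exp(-alpha x) dx, truncated where the
    exponential has died out."""
    return simpson(lambda x: bessel_J(nu, beta*x)*math.exp(-alpha*x), 0.0, xmax, 1200)

def laplace_bessel_closed(nu, alpha, beta):
    """Right-hand side of the quoted Gradshteyn-Ryzhik formula."""
    s = math.sqrt(alpha*alpha + beta*beta)
    return (s - alpha)**nu/(beta**nu*s)
```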
[**3. Eulerian correlations**]{} The limit $\tau_S\to\infty$, corresponding to the case of a particle with infinite inertia, leads, from Eqn. (4.2), to a particle velocity which, in the absence of gravity, is identically zero. Hence $\u^\smalP(\x,t)=\u(\x,t)$ and the time statistics for the fluid velocity seen by the particle coincides with the Eulerian turbulent statistics. In this regime, the dimensionless units introduced for Eqn. (4.13) are no longer appropriate. Redefining $\bar t=\gamma_{k_0}t$, Eqn. (4.13) takes the form, after writing $\exp(-t/\tau_S)\simeq 1-t/\tau_S$: $$\langle u_1(0,0)u_1(0,t)\rangle
=\frac{u_T^2}{6}
\int_1^\infty\d x\ x^{-\frac{4}{3}}
\exp\Big[-\bar t x^\frac{1}{3}
-\frac{(x-1)}{2\rho^2}\Big(\frac{3\bar t}{2}-1
+\int_1^\infty\d y\ y^{-2}\exp(-\bar ty^\frac{1}{3})\Big)\Big]
\eqno(4.29)$$ We start by calculating the Eulerian correlation time $$\tau_E=
u_T^{-2}
\int\d t\langle\u(\x,t)\cdot\u(\x,0)\rangle
\eqno(4.30)$$ Contrary to Eqn. (4.13), it is the linear in $t$, $O(\rho^{-2})$ term in Eqn. (4.29), which, at fixed long enough $t$, dominates for $x\to\infty$. The same reasons leading to expand Eqn. (4.21), suggest that we must now expand: $$\exp\Big[\frac{(x-1)}{2\rho^2}\Big(1-
\int_1^\infty\d y\ y^{-2}\exp(-\bar ty^\frac{1}{3})\Big)\Big]
\eqno(4.31)$$ Instead of Eqn. (4.22), we find: $$\int_0^\infty\d t\langle u_1(0,0)u_1(0,t)\rangle
=\frac{u_T^2}{6}
\int_1^\infty\d x\ x^{-\frac{4}{3}}
\Big[\Big(1+\frac{(x-1)^2}{2\rho^2}\Big)\Big(x^\frac{1}{3}+\frac{3(x-1)}{4\rho^2}\Big)^{-1}$$ $$-\frac{(x-1)}{2\rho^2}
\int_1^\infty\d y\ y^{-2}\Big(x^\frac{1}{3}+y^\frac{2}{3}+\frac{3(x-1)}{4\rho^2}\Big)^{-1}
\Big]
\eqno(4.32)$$ All the terms involving factors $\rho^{-2}$ lead, after integration, to an $\O(\rho^{-2})$ result, except one which leads to a $\O(\rho^{-2}\log\rho)$ term; the integral in Eqn. (4.32) will read, to leading order in $\rho$: $$\int_1^\infty\d x\ \Big[x^{-\frac{5}{3}}
-\rho^{-2}x^{-1}\Big(1+\rho^{-2}x^\frac{2}{3}\Big)^{-1}\Big]+\O(\rho^{-2})
\simeq \frac{3}{2}-\frac{3\log\rho}{\rho^2}
\eqno(4.33)$$ We obtain then the result for the Eulerian correlation time: $$\frac{\tau_E}{\tau_L}=1-\frac{2\log\rho}{\rho^2}
\eqno(4.34)$$ which is shorter than $\tau_L$, as expected from the fact that the velocity field statistics is defined along fluid trajectories, and sampling at a fixed position in space should lead to an increase in the rate of decorrelation. Comparing Eqns. (4.24) and (4.34), we see therefore that there is a transition from a correlation time longer than $\tau_L$ for light particles to a shorter one for heavy particles. The origin of this lies in the opposite orderings $\tau_S\lesssim t$ and $\tau_S\gg t$, on which the Taylor expansions of Eqns. (4.21) and (4.31) are based. $[$More precisely, for $\tau_S>\rho\tau_L$, we have $k_0S_l>1$ and the saddle point in Eqn. (4.22) disappears$]$.
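The estimate in Eqn. (4.33) can be verified numerically: with $x=\ex^{3v}$ the integrand becomes $3[\ex^{-2v}-(\rho^2+\ex^{2v})^{-1}]$, whose integral approaches $3/2-3\log\rho/\rho^2$ for large $\rho$. A stdlib-only sketch (function names are ours):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) panels."""
    h = (b - a) / n
    s = f(a) + f(b) + 4.0*sum(f(a + (2*i - 1)*h) for i in range(1, n//2 + 1)) \
        + 2.0*sum(f(a + 2*i*h) for i in range(1, n//2))
    return s*h/3.0

def eulerian_integral(rho, vmax=12.0):
    """Integral of Eqn (4.33) after the substitution x = exp(3v):
    the integrand becomes 3 [exp(-2v) - 1/(rho^2 + exp(2v))], smooth and decaying."""
    return simpson(lambda v: 3.0*(math.exp(-2.0*v) - 1.0/(rho*rho + math.exp(2.0*v))),
                   0.0, vmax)
```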
As a last exercise, it is possible to calculate the sweep produced decay in an Eulerian two-point two-time structure function in the form: $$S_{rr}(r,t)=\langle [u_r(\r,t)-u_r(0,t)] [u_r(\r,0)-u_r(0,0)]\rangle
\eqno(4.35)$$ From the discussion leading from Eqn. (3.14) to Eqn. (3.17), one finds that the structure function in Eqn. (4.35) is obtained by inserting a factor $2[1-\J_0(rx)-\J_2(rx)]$ in the integrand of Eqn. (4.29). If one considers shorter time and space scales $k_0r\ll 1$, $t\ll\tau_L$, the leading cause of correlation decay is sweep, and the $\bar tx^\frac{1}{3}$ in the integrand of Eqn. (4.29) can be disregarded. Again because $t/\tau_L$ is small, one can Taylor expand $\exp(-\bar ty^\frac{1}{3})$ in the same equation and the final result is: $$S_{rr}(r,t)
\simeq 2\Ckol\flux^\frac{2}{3}r^\frac{2}{3}\int_0^\infty\d x\
x^{-\frac{5}{3}}[1-\J_0(x)-\J_2(x)]
\exp\Big[-\frac{u_T^2t^2x^2}{6r^2}\Big]
\eqno(4.36)$$ The term in the exponent is $\O(t/T_{r^{-1}})^2$, with $T_{r^{-1}}$ the sweep time at scale $r$. Hence, if $t\gg T_{r^{-1}}$, it is possible to Taylor expand the Bessel functions and the result is $$S_{rr}(r,t)\sim
S_{rr}(r,0) \int_0^\infty \d x\ x^\frac{1}{3}\exp\Big(-(t/T_{r^{-1}})^2x^2\Big)
\sim S_{rr}(r,0)\Big(\frac{T_{r^{-1}}}{t}\Big)^\frac{4}{3}
\eqno(4.37)$$ i.e. a power law decay of the structure function for times longer than the sweep time at that space separation.
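The power law of Eqn. (4.37) follows from $\int_0^\infty x^{1/3}\ex^{-ax^2}\d x=\Gamma(2/3)/(2a^{2/3})$ with $a=(t/T_{r^{-1}})^2$, so the Gaussian cut-off indeed yields the $(T_{r^{-1}}/t)^{4/3}$ decay. A numerical spot-check (function names are ours):

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule on [a, b] with n (even) panels."""
    h = (b - a) / n
    s = f(a) + f(b) + 4.0*sum(f(a + (2*i - 1)*h) for i in range(1, n//2 + 1)) \
        + 2.0*sum(f(a + 2*i*h) for i in range(1, n//2))
    return s*h/3.0

def sweep_integral(a):
    """int_0^inf x^(1/3) exp(-a x^2) dx, truncated where the Gaussian is negligible."""
    xmax = 8.0/math.sqrt(a)
    return simpson(lambda x: x**(1.0/3.0)*math.exp(-a*x*x), 0.0, xmax)
```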
**V. Solid tracers: concentration fluctuations**
Because of inertia, the particle velocity field $\v(\x,t)$, contrary to $\u(\x,t)$, does not preserve volume. Physical intuition suggests that particles which are denser than the fluid will tend to concentrate near the instantaneous hyperbolic points of the flow, and to escape from the elliptic ones [@wang92; @paradisi01]. For this reason, a distribution $\theta(\x,t)$ of solid particles, in the absence of external sources, will be characterized by finite-amplitude fluctuations superimposed on a uniform mean concentration field $\bar\theta$. These fluctuations are expected to have a correlation time of the order of $\tau_S$ and a correlation length determined in consequence. We are going to neglect any effect of gravity and set from the start $\u_G=0$. We will also limit our analysis to the case in which $\tau_S$ is in the turbulent inertial range, i.e. we consider $\tau_S\ll\tau_L$ (more precisely, $\tau_S<\rho^{-2}\tau_L$). In this way, all non-universal effects associated with the large scales of the flow are eliminated from the problem.
The length $S_i$ is crucial to the two-particle statistics, in that it gives the scale below which solid particles move ballistically relative to one another. In fact, $S_c$ fixes the cross-over scale to ballistic behavior only for the relative motion of solid and fluid particles; the resulting picture is given by pairs of particles, separated by $S_i$, moving ballistically over scale $S_c$. It is easy to see this: if $\Delta_r v$ is the typical relative velocity between two solid particles at separation $r$ and $\Delta_ru\sim\Ckol^\frac{1}{2}(\flux r)^\frac{1}{3}$ is the corresponding value for the fluid velocity, one will have for $r\ll S$, from Eqn. (4.2): $\Delta_r v\sim
(\tau_S\gamma_{r^{-1}})^{-\frac{1}{2}}\Delta_r u$; exploiting the fact that the characteristic time of variation for $v$ is $\tau_S$, the condition $\tau_S\Delta_r v\sim r$, gives then $r\sim S_i$.
The concentration correlation $\Theta(\r)=\langle\theta(\r,t)\theta(0,t)\rangle$ is proportional to the equilibrium PDF $P(\r)$ for the separation of a pair of solid particles advected by $\u(\x,t)$. The separation $\r(t)$ obeys an equation in the form $\dot\r(t)=\v^\smalP(\x+\r,t)-\v^\smalP(\x,t)$ $[$we use from now on the shorthand $\r(t) \equiv\delta\z^\smalP(t|\x+\r,0)]$, and, for $r\gg S_i$, the separation process takes a diffusive nature: $$\frac{\d}{\d t}\langle[r_\alpha(t)-r_\alpha(0)][r_\beta(t)-r_\beta(0)]\rangle=
2D_{\alpha\beta}(\r)
\eqno(5.1)$$ A finite level of concentration fluctuations, in the absence of external sources, is associated with a finite divergence of the diffusivity tensor: $\partial_\alpha D_{\alpha\beta}\ne 0$. If this component of the diffusivity tensor is small, it is possible to proceed perturbatively: $D_{\alpha\beta}=D_{\alpha\beta}^\smalze+D_{\alpha\beta}^\smalun$, $P=P^\smalze+P^\smalun$, with $\partial_\alpha D_{\alpha\beta}^\smalze=0$, $P^\smalze$ uniform and $P^\smalun(\r)\propto \langle [\theta(\r,t)-\theta(0,t)]^2\rangle$; the equation for the fluctuation amplitude $P^\smalun(\r)$ therefore reads: $$D_{\alpha\beta}^\smalze\partial_\alpha\partial_\beta P^\smalun=
-P^\smalze\partial_\alpha\partial_\beta D_{\alpha\beta}^\smalun
\eqno(5.2)$$ The procedure to determine $D_{\alpha\beta}$ is similar to the one leading to Eqn. (3.17). From Eqn. (4.4) and the relation $\dot\r(t)=\v^\smalP(\x+\r,t)-\v^\smalP(\x,t)$, we obtain: $$D_{\alpha\beta}(\r)=\lim_{T\to\infty}\frac{1}{T}\int_0^T\d t_1\int_0^T\d t_2
\int_{-\infty}^{t_1}\frac{\d\tau_1}{\tau_S}\int_{-\infty}^{t_2}\frac{\d\tau_2}{\tau_S}
\exp\Big(-\frac{t_1+t_2-\tau_1-\tau_2}{\tau_S}\Big)S_{\alpha\beta}^\smalP(\r,\tau_1,\tau_2)
\eqno(5.3)$$ with $S_{\alpha\beta}^\smalP$ the time correlation of velocity differences along solid particle trajectories: $$S_{\alpha\beta}^\smalP(\r,t_1,t_2)=
\langle [u_\alpha^\smalP(\r,t_1)-u_\alpha^\smalP(0,t_1)]
[u_\beta^\smalP(\r,t_2)-u_\beta^\smalP(0,t_2)]\rangle$$ $$=2[\langle u_\alpha^\smalP(\r,t_1)u_\beta^\smalP(\r,t_2)\rangle
-\langle u_\alpha^\smalP(\r,t_1)u_\beta^\smalP(0,t_2)\rangle]
\eqno(5.4)$$ We notice that, if we approximated $S_{\alpha\beta}^\smalP(\r,t_1,t_2)
=S_{\alpha\beta}^\smalL(\r,t_1-t_2)$, since $\partial_\alpha S_{\alpha\beta}^\smalL(\r,t_1-t_2)=0$, we would obtain from Eqn. (5.3) a divergenceless $D_{\alpha\beta}(\r)$. We have to take into account therefore the effect of trajectory separation described in Eqn. (4.7). Proceeding as in the case of the 1-particle statistics, we arrive at the following modification of Eqn. (4.10): $$\langle u_\alpha^\smalP(0,t_1)u_\beta^\smalP(\r,t_2)\rangle
=-\int\frac{\d^2k}{(2\pi)^2}\frac{\d^2p}{(2\pi)^2}
\frac{\delta^2Z[{\bf J}]}{\delta J_{\k\alpha}(t_1)\delta J_{\p\beta}(t_2)}
\Big|_{{\bf J}=\p\bar J_{\p,t_1t_2}}
\eqno(5.5)$$ where $$\bar J_{\p,t_1t_2}(\tau)=\bar J_{t_1}(\tau)-\ex^{\i\p\cdot\r}\bar J_{t_2}(\tau)
\eqno(5.6)$$ and $Z[{\bf J}]$ and $\bar J_t$ are given in Eqns. (4.11-12). Carrying out the wavevector and time integrations in the definition of $Z[{\bf J}]$ and using Eqns. (5.6) and (3.1) leads, after some algebra, to the following expression for the velocity correlation: $$\langle u_\alpha^\smalP(0,t_1)u_\beta^\smalP(\r,t_2)\rangle=
\frac{\Ckol\flux^\frac{2}{3}}{\pi}
\int_0^\infty
\frac{k\d k}
{(k^2+k_0^2)^\frac{4}{3}}
\exp(-\gamma_k|t_1-t_2|)$$ $$\times\int_0^{2\pi}\d\phi
\Big[\frac{r_\alpha r_\beta}{r^2}\cos^2\phi+
\Big(\delta_{\alpha\beta}-\frac{r_\alpha r_\beta}{r^2}\Big)\sin^2\phi\Big]
\exp(\i kr\cos\phi)$$ $$\times
\exp\Big\{-\Ckol\flux^\frac{2}{3}\tau_S^2k^2
\int_0^\infty \frac{[F(s,t_1,t_2)+G(s,t_1,t_2)(\J_0(sr)+\J_2(sr)\cos 2\phi)]s\d s}
{(k_0^2+s^2)^\frac{4}{3}(1+\gamma_s\tau_S)}
\Big\}
\eqno(5.7)$$ where $\phi$ is the angle between $\k$ and $\r$, $$F(s,t_1,t_2)=f(s,t_1)+f(s,t_2),
\qquad
G(s,t_1,t_2)=f(s,t_1-t_2)-F(s,t_1,t_2)
\eqno(5.8)$$ and $$f(s,t)=1-\ex^{-|t|/\tau_S}-\frac{\ex^{-\gamma_s|t|}-\ex^{-|t|/\tau_S}}
{1-\gamma_s\tau_S}
\eqno(5.9)$$ The effect of trajectory separation is contained in the last line of Eqn. (5.7). We see that the contribution, which leads to finite divergence of the correlation $\langle u_\alpha^\smalP(0,t_1)u_\beta^\smalP(\r,t_2)\rangle$, is the $\phi$ dependence of this factor. The remaining $\phi$ dependence, contained in the second line of this equation, is simply the factor $\k_\perp\k_\perp\exp(\i\k\cdot\r)$ arising in the Fourier transform of Eqn. (3.1), and would give by itself zero divergence.
The argument of the exponential in the last line of Eqn. (5.7), for fixed $\tau_S/\tau_L$, is $O(\rho^{-2})$, so that we may try a Taylor expansion. However, as happened with Eqns. (4.21) and (4.31), the resulting integrals in $k$ diverge. We therefore keep in the exponential the leading contribution in $k$, which is the time-independent piece of its argument, and expand the remnant, which, to leading order in $\rho$, gives the following expression: $$1-\Ckol\flux^\frac{2}{3}\tau_S^2k^2\cos 2\phi
\exp\Big\{-\int_0^\infty\frac{\Ckol\flux^\frac{2}{3}\tau_S^2k^2s\d s}
{(k_0^2+s^2)^\frac{4}{3}(1+\gamma_s\tau_S)}\Big\}
\int_0^\infty \frac{G(s,t_1,t_2)\J_2(sr)s\d s}
{(k_0^2+s^2)^\frac{4}{3}(1+\gamma_s\tau_S)}$$ $$=1-\Ckol\flux^\frac{2}{3}\tau_S^2k^2\cos 2\phi
\exp\Big\{-\frac{3\Ckol\flux^\frac{2}{3}\tau_S^2k^2}{2k_0^\frac{2}{3}}
\Big\}
\int_0^\infty \frac{G(s,t_1,t_2)\J_2(sr)s\d s}
{(k_0^2+s^2)^\frac{4}{3}(1+\gamma_s\tau_S)}
\eqno(5.10)$$ plus terms which would lead to a divergence-free contribution to $\langle u_\alpha^\smalP(0,t_1)u_\beta^\smalP(\r,t_2)\rangle$ and would disappear from Eqn. (5.2). Substituting into Eqn. (5.7) and then back into Eqns. (5.4) and (5.3), we find, after carrying out the time integrals and the integral in $\phi$: $$D^\smalun_{\alpha\beta}=\frac{8\Ckol^\frac{3}{2}\flux\tau_S^2}{\rho}
\int_0^\infty x^{-\frac{1}{3}}\d x
\exp\Big\{-\frac{3S_l^2x^2}{2r^2}\Big\}
\int_0^\infty\d y\ y^{-\frac{5}{3}}\J_2(y)\Big[1-\frac{1}{2(1+(x/y)^\frac{2}{3})}\Big]$$ $$\times\Big[\delta_{\alpha\beta}(\frac{1}{2}\J_0(x)-\J_2(x)+\frac{1}{2}\J_4(x))
-\frac{r_\alpha r_\beta}{r^2}\J_4(x)\Big]
\eqno(5.11)$$ and it is possible to see that $D^\smalze_{\alpha\beta}$ is given by the same expression valid for a fluid parcel, i.e. by Eqn. (3.17): $$D^\smalze_{\alpha\beta}(\r)=
\frac{4\alpha_\smalsev\Ckol^\frac{1}{2}\flux^\frac{1}{3}}{\rho}r^\frac{4}{3}
\Big[\frac{r_\alpha r_\beta}{r^2}+\frac{7}{3}\Big(\delta_{\alpha\beta}
-\frac{r_\alpha r_\beta}{r^2}\Big)\Big]
\eqno(5.12)$$ The physical content of the expansion leading to Eqns. (5.11-12) can be clarified by noticing that, in a way perfectly analogous to Eqns. (4.10-11), the generating functional $Z[{\bf J}]$ entering Eqn. (5.5) can be written as: $$\Big\langle\exp\Big(\i{\bf k}\cdot\int\d t(\u^\smalL(0,t)\bar J_{t_1}(t)
-\u^\smalL(\r,t)\bar J_{t_2}(t))\Big)\Big\rangle$$ $$\sim\exp\Big\{-\frac{{\bf k}{\bf k}}{2}:\int\d t\d t'\,[{\bf U}(0,t-t')
(\bar J_{t_2}(t)\bar J_{t_2}(t')+\bar J_{t_1}(t)\bar J_{t_1}(t'))
+2{\bf U}(\r,t-t')\bar J_{t_1}(t)\bar J_{t_2}(t')]\Big\}$$ The argument in the exponential is in the form $k^2U(\r,0)\tau_S^2\sim k^2(S_l^2+
\Ckol(\flux r)^\frac{2}{3}\tau_S^2)$, where the term involving $S_l$ gives the one-particle contribution to trajectory separation, while the remainder is the two-particle correction coming from $r>0$. Substituting into the definition of $D_{\alpha\beta}$ then gives, using $r\ll S_l$: $$D\sim\int k\d k U_k\gamma_k^{-1}\ex^{-k^2S_l^2}
\Big[1-\exp\Big(-k^2\Ckol(\flux r)^\frac{2}{3}\tau_S^2
-\i{\bf k}\cdot\r\Big)\Big]$$ $$\sim \int k\d k U_k\gamma_k^{-1}(1-\ex^{-\i{\bf k}\cdot\r})
+\int k^3\d k U_k\gamma_k^{-1}\Ckol(\flux r)^\frac{2}{3}\tau_S^2\ex^{-S_l^2k^2}$$ which, using Eqn. (4.9), is $\sim \frac{\Ckol^\frac{1}{2}\flux^\frac{1}{3}}{\rho}[r^\frac{4}{3}
+\rho^2S_i^\frac{4}{3}(\frac{r}{S_l})^\frac{2}{3}]$. Thus, the origin of $D^\smalun$ in the two-particle contribution to trajectory separation is confirmed, together with the fact that it is generated at the $k_0$-dependent scale $S_l$. This is also seen by direct analysis of Eqn. (5.11): the integral is dominated by $k=x/r\sim S_l^{-1}$ and we obtain, for $r>S_i$: $$\frac{D^\smalun}{D^\smalze}\sim\frac{\rho^2 S_i^\frac{4}{3}}{(rS_l)^\frac{2}{3}}
<\rho^\frac{2}{3}\Big(\frac{\tau_S}{\tau_L}\Big)^\frac{1}{3}
\eqno(5.13)$$ Thus, for $\tau_S<\rho^{-2}\tau_L$, $D^\smalun<D^\smalze$ and Eqn. (5.3) applies.
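The reduction of the $\phi$ integrals above to the Bessel functions $\J_0$, $\J_2$, $\J_4$ appearing in Eqn. (5.11) presumably rests on the Jacobi-Anger expansion. A small numerical check (a sketch assuming Python with NumPy/SciPy, which are not part of the original text):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def angular_integral(n, x):
    """(1/2pi) * int_0^{2pi} cos(n*phi) exp(i*x*cos(phi)) dphi, done numerically."""
    re, _ = quad(lambda p: np.cos(n * p) * np.cos(x * np.cos(p)), 0.0, 2.0 * np.pi)
    im, _ = quad(lambda p: np.cos(n * p) * np.sin(x * np.cos(p)), 0.0, 2.0 * np.pi)
    return (re + 1j * im) / (2.0 * np.pi)

# Jacobi-Anger identity: the angular integral equals i^n J_n(x)
for n in (0, 2, 4):
    assert abs(angular_integral(n, 1.7) - (1j ** n) * jv(n, 1.7)) < 1e-7
```

This is the mechanism by which the factors $\cos 2\phi$ in Eqn. (5.10) turn into the combinations of $\J_0$, $\J_2$ and $\J_4$ in Eqn. (5.11).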
Using the relation: $\int_0^\infty\d x x^{-\frac{1}{3}}\ex^{-\alpha x^2}\J_4(x)=
\frac{\Gamma(7/3)}{2^5\alpha^{7/3}\Gamma(5)}{\rm M}(7/3,5,-1/(4\alpha))$, with ${\rm M}(a,b,x)$ the confluent hypergeometric function [@abramowitz], we obtain in general, from Eqn. (5.11): $$D^\smalun_{\alpha\beta}=
4\rho\alpha_\frac{7}{3}\Ckol^\frac{1}{2}\flux^\frac{1}{3}
S_i^\frac{4}{3}
\Big(\frac{r_\alpha r_\beta}{r^2} \tilde D(r/S_l)+\delta_{\alpha\beta}\hat D(r/S_l)\Big),
\eqno(5.14)$$ where we can write: $$\tilde D(r/S_l)=-\frac{c\beta\, 2^\frac{2}{3}\Gamma(7/3)}{\alpha_\frac{7}{3}\Gamma(5)}
\Big(\frac{r^2}{6S_l^2}\Big)^\frac{7}{3}
{\rm M}(\frac{7}{3},5,-\frac{r^2}{6S_l^2});
\qquad
\beta=\int_0^\infty
\d y\ y^{-\frac{5}{3}}\J_2(y),
\eqno(5.15)$$ with $c\simeq 1$ for $r\gg S_l$ and $c\simeq\frac{1}{2}$ for $r\ll S_l$; $\hat D$ will be shown not to contribute to the concentration correlations.
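The Bessel-Gaussian integral relation quoted above can be verified numerically; a sketch (assuming Python with SciPy, not part of the original text; `hyp1f1` is SciPy's implementation of ${\rm M}(a,b,x)$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, hyp1f1, gamma

def lhs(a):
    # int_0^inf x^(-1/3) exp(-a x^2) J_4(x) dx, evaluated numerically
    val, _ = quad(lambda x: x ** (-1.0 / 3.0) * np.exp(-a * x * x) * jv(4, x),
                  0.0, np.inf, limit=200)
    return val

def rhs(a):
    # Gamma(7/3) / (2^5 a^(7/3) Gamma(5)) * M(7/3, 5, -1/(4a))
    pref = gamma(7.0 / 3.0) / (2 ** 5 * a ** (7.0 / 3.0) * gamma(5.0))
    return pref * hyp1f1(7.0 / 3.0, 5.0, -1.0 / (4.0 * a))

assert abs(lhs(0.5) - rhs(0.5)) < 1e-6
```

The two sides agree to quadrature accuracy for moderate $\alpha$.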
We can now calculate the probability $P^\smalun(r)$. Substituting Eqns. (5.13-15) into Eqn. (5.2), after a few manipulations, leads to: $$\partial_{\bar r}\bar r^\frac{7}{3}\partial_{\bar r}P^\smalun=-
\rho^2
\bar rP^\smalze
\Big(\frac{S_i}{S_l}\Big)^\frac{4}{3}
\Big(\partial_{\bar r}^2(\tilde D(\bar r)+\hat D(\bar r))
+\frac{1}{\bar r}\partial_{\bar r}(2\tilde D(\bar r)+\hat D(\bar r))\Big)
\eqno(5.16)$$ where $\bar r=r/S_l$. Hence, for $S_i\ll r\ll S_l$: $$P^\smalun(r)= \rho^2 P^\smalze
\Big(\frac{S_i}{S_l}\Big)^\frac{4}{3}
\int_{r/S_l}^\infty\d y y^{-\frac{7}{3}}
\Big[\tilde D(\infty)-y\partial_y(\tilde D(y)+\hat D(y))-\tilde D(y)\Big]\simeq$$ $$\simeq\frac{3}{4}
\rho^2P^\smalze\tilde D(\infty)\Big(\frac{S_i}{r}\Big)^\frac{4}{3}
\eqno(5.17)$$ Using Eqn. (5.15) and the limiting form for the confluent hypergeometric function ${\rm M}(a,b,-z)=\frac{\Gamma(b)}{\Gamma(b-a)}z^{-a}(1+O(z^{-1}))$ [@abramowitz], we get the final result: $$\Theta(r)=\bar\theta^2\Big(1+
\bar\beta\rho^2\Big(\frac{S_i}{r}\Big)^\frac{4}{3}\Big)
\eqno(5.18)$$ where $$\bar\beta=\frac{3\beta\Gamma(7/3)}{2^\frac{2}{3}\alpha_\frac{7}{3}\Gamma(8/3)}\simeq 2.14
\eqno(5.19)$$ In conclusion, there is a range of separations $S_i\ll r\ll S_l$ in which the fluctuation correlation grows as $r^{-\frac{4}{3}}$, reaching an amplitude $\sim \rho^2$ at $r\sim S_i$.
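Both the limiting form of ${\rm M}(a,b,-z)$ invoked above and the constant $\beta$ entering Eqn. (5.19) can be checked numerically. The closed form used below for $\beta$, namely $\beta=2^{-\frac{5}{3}}\Gamma(\frac{2}{3})/\Gamma(\frac{7}{3})$, follows from the standard integral $\int_0^\infty t^{\mu-1}\J_\nu(t)\d t=2^{\mu-1}\Gamma(\frac{\nu+\mu}{2})/\Gamma(\frac{\nu-\mu}{2}+1)$ and is an addition of ours, not from the original text (a sketch assuming SciPy):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, hyp1f1, gamma

# Large-z limit: M(a, b, -z) ~ Gamma(b)/Gamma(b-a) z^(-a), here a = 7/3, b = 5
a, b, z = 7.0 / 3.0, 5.0, 200.0
asym = gamma(b) / gamma(b - a) * z ** (-a)
assert abs(hyp1f1(a, b, -z) / asym - 1.0) < 0.05  # agreement up to O(1/z)

# beta = int_0^inf y^(-5/3) J_2(y) dy; truncating at y = 200 leaves a
# negligible oscillatory tail (envelope ~ y^(-13/6))
num, _ = quad(lambda y: y ** (-5.0 / 3.0) * jv(2, y), 0.0, 200.0, limit=400)
closed = 2 ** (-5.0 / 3.0) * gamma(2.0 / 3.0) / gamma(7.0 / 3.0)
assert abs(num - closed) < 1e-3
```

Note that $\Gamma(b-a)=\Gamma(8/3)$ here, which is the same $\Gamma(8/3)$ appearing in the expression for $\bar\beta$ in Eqn. (5.19).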
The picture which arises is one of concentration fluctuations produced at scale $S_l$, by compressibility of the solid particle flow, and then transported to small scales and amplified by the incompressible part of the flow. The process is different from that of a passive scalar forced at large scale, due to the derivatives in the source term $[$and in fact the scaling exponent is different; compare with Eqn. (3.36)$]$. This source term is basically $\nabla^2D^\smalun(r)$, with $D^\smalun(r)$ saturating at a constant for $r\gg S_l$ and going to zero in the opposite limit. From here, the $r^{-\frac{4}{3}}$ scaling of Eqn. (5.18) arises by dimensional analysis.
What happens when $r\ll S_i$? At such short distances, the separation process is ballistic and we cannot use a diffusive approximation anymore. In [@balkovsky01], it is suggested that the correlation build-up should stop only because of discreteness effects or because of the Brownian motion of the solid particle. Actually, extrapolating the results of the present paper to the real turbulence regime $\rho=O(1)$, there is good reason to think that, for $\tau_S>\tau_\eta$, this build-up could stop much earlier, and precisely at $r\sim S_i$, which, for $\rho=O(1)$, coincides with the size of vortices with eddy turnover time $\tau_S$.
At separations below $S_i$, Eqn. (5.18) ceases to be valid, and full analysis of the distribution $P(\r,\Delta_r\v)$ is needed. A singularity of $P(r)$ at $r=0$ would require focusing of $\Delta_r\v$ along $\r$ for $r\ll S_i$; the mechanism is sketched in Fig. 1.
This means that $P(\r,\Delta_r\v)$ itself should develop, as $r\to 0$, a singularity at $\theta=0$, where $\theta$ is the angle between $\Delta_r\v$ and $\r$. The necessary trajectory focusing can be produced only by the compressible part of $\v$. However, for $r<S_i$, the production term for the compressible part of $\v$ can be estimated directly from the second term in Eqn. (5.10) to be $O(\rho^{-2})$ relative to the rest, and it is able to act only for a time $\tau_S$ in the ballistic region. Hence, for a singularity to appear, $P(\r,\bar\v)$ would already have to be singular before this region is reached. On the other hand, for $r>S_i$, where the diffusive approximation works, the asymmetry of $P(\r,\bar\v)$, associated with the compressibility of the flow, can be estimated from $\frac{\langle\bar v_\alpha\bar v_\beta\rangle^\smalun}
{\langle\bar v_\alpha\bar v_\beta\rangle^\smalze}
\sim D^\smalun/D^\smalze
$ so that, if $\tau_S<\rho^{-2}\tau_L$, singularities at $\theta=0$ should not be expected in $P(\r,\bar\v)$ for $r\ge S_i$ either. The conclusion is that $\Theta(r)$ should exhibit a plateau at $r<S_i$.
**VI. Solid tracers: ergodic properties**
One of the consequences of the compressibility of the velocity field $\v(\x,t)$ is that the ergodic property is no longer satisfied: velocity moments calculated along solid particle trajectories differ from those obtained from spatial averages. As mentioned before, physical intuition suggests that solid particles should favor certain regions of the fluid over others in their motion (namely, hyperbolic over elliptic regions). It is difficult, however, to translate this into a statement on the form of the PDF for the velocity $\u^\smalP$.
We have at our disposal the equations satisfied by the velocity field $\u^\smalP$. It is therefore possible to calculate its moments and to reconstruct its PDF. We consider the case of zero gravity $\u_G=0$ and $\tau_S/\tau_L$ small. As in the analysis of the concentration fluctuations, all non-universal effects associated with the large scales of the flow are thus eliminated from the problem. From the definition of $\u^\smalP$ and Eqns. (4.1-2), we obtain the following set of equations, valid to lowest order in $\rho^{-1}$: $$\begin{cases}
(\partial_t+\tilde\u(\x,t)\cdot\nabla)\u^\smalP(\x,t)+
\int\d^2y\,\gamma(\x-\y)\u^\smalP(\y,t)
=\int\d^2yh(\x-\y){\boldsymbol\xi}(\y,t)
\\
\tilde\u(\x,t)\equiv\u^\smalP(\x,t)-\v^\smalP(\x,t)=\int_{-\infty}^t\d\tau\exp(-\frac{t-\tau}{\tau_S})
\dot\u^\smalP(\x,\tau)
\end{cases}
\eqno(6.1)$$ which differs from the analogous equation for $\u^\smalL$ because of the non-volume-preserving advection term $\tilde\u\cdot\nabla\u^\smalP$. From here we can carry out standard field-theoretical perturbation theory, either via the Martin-Siggia-Rose formalism [@martin73], or working directly with Eqn. (6.1). The building blocks of the diagrammatic expansion are shown in Fig. 2, and are the propagator $G_{\k\alpha\beta}$: $$G_{\k\alpha\beta}u^\smalP_{\k\beta}(t)
=\int_{-\infty}^t\d\tau\ \exp(-\gamma_k(t-\tau))\frac{k_\alpha k_\beta}{k^2}
u^\smalP_{\k\beta}(\tau)
\eqno(6.2)$$ the correlator $U^\smalP_{\k \alpha\beta}(t)$: $$U^\smalP_{\k \alpha\beta}(t)=
\frac{k^\perp_\alpha k^\perp_\beta}{k^2}\frac{4\pi\Ckol\flux^\frac{2}{3}}{(k^2+k_0^2)^\frac{4}{3}}
\exp(-\gamma_k|t|)
\eqno(6.3)$$ and the vertex $\Gamma_{\k\alpha\beta\gamma}$: $$\Gamma_{\k\alpha\beta\gamma}u^\smalP_{\p\beta}u^\smalP_{\s\gamma}(t)=
\i\lambda s_\beta\delta_{\alpha\gamma}\delta(\k+\p+\s) \int_{-\infty}^t\d\tau\
\exp(-\frac{t-\tau}{\tau_S})
u^\smalP_{\s\gamma}(\tau)\partial_\tau u^\smalP_{\p\beta}(\tau)
\eqno(6.4)$$
where the coefficient $\lambda=1$ is introduced, as in Eqn. (2.15), only for the purpose of book-keeping. To lowest order in $\lambda$, the correlations for the fields $\u^\smalP(\x,t)$ and $\u^\smalL(\x,t)$ are trivially equal. To higher orders, differences arise, which would not lead, if $\nabla\cdot\tilde\u=0$, to differences between the one-point PDF’s for $\u^\smalP$ and $\u^\smalL$ (see also [@tennekes]). In our case, this is not so, and the difference between the moments of the two PDF’s can be calculated in perturbation theory; to $\O(\lambda^n)$: $$\langle (u^\smalP)^m\rangle^\smaln=
\int\frac{\d^2k_1 }{(2\pi)^2}...\frac{\d^2k_m }{(2\pi)^2}
\langle u_{\k_1}(t)...u_{\k_m}(t)\rangle^\smaln
\eqno(6.5)$$ where $\langle u_{\k_1}(t)...u_{\k_m}(t)\rangle^\smaln$ is the sum of the Feynman diagrams with $m$ outgoing velocity lines and $n$ vertices. Because of symmetry under space reflection, the lowest order contributions are $\O(\lambda^2)$; the corresponding diagrams are shown in Fig. 3 and lead to corrections to the velocity second and fourth moments.
In order to check for the presence of divergences in loop diagrams, we carry out power counting on Eqn. (6.1). Rescaling space and time as in Eqn. (2.16), we find $[\lambda]=0$, implying the possibility of logarithmic divergences. Now, a perturbation expansion in $\lambda$ of Eqn. (6.1) ceases to be sensible at scales below the length $S_i$ defined in Eqn. (4.9). From Eqn. (4.13), the effective decay rate for the field $\u^\smalP$ appears to be: $$\gamma^\smalP_k=
\begin{cases}
\gamma_k \quad & kS_i\ll 1
\\
u_Sk\quad & kS_i\gg 1
\end{cases}
\eqno(6.6)$$ and this expression should be substituted for $\gamma$ in Eqns. (6.1-3). For $kS_i\gg 1$, $\gammaP_k$ is just the inverse of the crossing time of an eddy of size $k^{-1}$. In this large-$k$ range, the appropriate scaling for the frequency should be, instead of the one provided by Eqn. (2.17), which led to $[\lambda]=0$, the following one: $$[t]=1,
\qquad
[\lambda]=-[u]=-\frac{1}{2}
\eqno(6.7)$$ The change of scaling in $\gammaP$ is therefore sufficient to regularize the divergent diagrams, providing an effective ultraviolet cutoff at $k=S_i^{-1}$.
We calculate explicitly the loop diagram in Fig. 3, and the corresponding correction to $\langle (u^\smalP)^2\rangle$: $$\langle (u^\smalP)^2\rangle^\smaldu=
\int\frac{\d^2p}{(2\pi)^2}\frac{\d^2s}{(2\pi)^2}
\int_{-\infty}^0\d t_1\int_{-\infty}^0\d t_2
\int_{-\infty}^{t_1}\d\tau_1\int_{-\infty}^{t_2}\d\tau_2$$ $$\times\exp\Big(-\gammaP_k(t_1+t_2)
-\frac{t_1+t_2-\tau_1-\tau_2}{\tau_S}\Big)
\delta_{\alpha\beta}
\partial_{\tau_1}\partial_{\tau_2}s_\gamma s_\delta
[U^\smalP_{\p\gamma\delta}(\tau_1-\tau_2)U^\smalP_{\s\alpha\beta}(t_1-t_2)$$ $$+U^\smalP_{\p\gamma\beta}(\tau_1-t_2)U^\smalP_{\s\alpha\delta}(t_1-\tau_2)
+U^\smalP_{\p\alpha\gamma}(t_1-\tau_2)U^\smalP_{\s\alpha\beta}(\tau_1-t_2)
+U^\smalP_{\p\alpha\beta}(t_1-t_2)U^\smalP_{\s\gamma\delta}(\tau_1-\tau_2)]$$ where $\k=-\p-\s$. The time integrations can be carried out at once and, after some algebra, we reach the following result: $$\langle (u^\smalP)^2\rangle^\smaldu=
2\int\frac{\d^2p}{(2\pi)^2}\frac{\d^2s}{(2\pi)^2}
\frac{\UP_p\UP_s\gammaP_p(\p_\perp\cdot\s)^2}
{(\gammaP_k+\gammaP_p+\gammaP_s)(\gammaP_k+\gammaP_s+\tau_S^{-1})(\gammaP_p+\tau_S^{-1})
\gammaP_kp^2}$$ $$\times\Big[-\gammaP_p\tau_S(\gammaP_k+\gammaP_p+\gammaP_s+\tau_S^{-1})
+\frac{(\p\cdot\s)}{s^2}\frac{\gammaP_s}{\gammaP_s+\tau_S^{-1}}
(\gammaP_k+\gammaP_s-\tau_S^{-1})\Big]
\eqno(6.8)$$ where $\UP_k=U^\smalP_{\k \alpha\alpha}(0)$. As predicted in the discussion leading to Eqns. (6.6-7), substituting $\gammaP\to\gamma$ would lead to a logarithmically divergent integral. Comparing with Eqn. (4.9), we see that this integral receives contributions from wavevectors in the range $[S_l^{-1},S_i^{-1}]$, i.e. from those eddies fast enough for the particles to be unable to respond to their velocity field, but still sufficiently slow for trajectory separation to be considered a perturbation. To find the leading behavior in $S_i^{-1}$, the integral can be rewritten, after the change of variables $y=(\gamma_{(ps)^\frac{1}{2}}\tau_S)^{-1}$, $z=p/s$, in the form: $$\langle (u^\smalP)^2\rangle^\smaldu=
\frac{3u_S^2}{16\pi^3\rho^2}\int_0^{2\pi}\d\phi\int_0^\infty\frac{\d z}{z}\int_{\rho^{-2}}^\infty
\frac{\d y}{y}\frac{\sin^2\phi}
{(\bar p^\frac{2}{3}+\bar s^\frac{2}{3}+\bar k^\frac{2}{3})
(\bar k^\frac{2}{3}+\bar s^\frac{2}{3}+y)
(\bar p^\frac{2}{3}+y)\bar k^\frac{2}{3}\bar p^\frac{2}{3}}$$ $$\times\Big[-(\bar p^\frac{2}{3}+\bar s^\frac{2}{3}+\bar k^\frac{2}{3}+y)
+\frac{y\cos\phi}{\bar s^\frac{2}{3}+y}(\bar s^\frac{2}{3}+\bar k^\frac{2}{3}-y)\Big]
+\O(\rho^{-2})
\eqno(6.9)$$ where $\bar\k=(ps)^{-\frac{1}{2}}\k$, $\bar\p=(ps)^{-\frac{1}{2}}\p$, $\bar\s=(ps)^{-\frac{1}{2}}\s$, $\cos\phi=\bar\p\cdot\bar\s$. We obtain then the final result: $$\langle (u^\smalP)^2\rangle^\smaldu=
\frac{\bar\eta\log\rho}{\rho^2}u_S^2
\eqno(6.10)$$ where: $$\bar\eta=\frac{3}{8\pi^3}\int_0^{2\pi}\d\phi\int_0^\infty\frac{\d z}{z}
\frac{z^\frac{1}{3}\sin^2\phi}{(z+z^{-1}+2\cos\phi)^\frac{1}{3}}
\Big[-\frac{1}{(z+z^{-1}+2\cos\phi)^\frac{1}{3}+z^\frac{1}{3}}$$ $$+\frac{\cos\phi}{z^\frac{1}{3}+z^{-\frac{1}{3}}+(z+z^{-1}+2\cos\phi)^\frac{1}{3}}\Big]
\simeq -0.32
\eqno(6.11)$$ is evaluated by numerical integration. The correction to the velocity amplitude is negative: in the presence of inertia, solid tracers therefore prefer to lie in regions of the flow where the turbulent velocity is smaller.
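The numerical integration in Eqn. (6.11) can be reproduced with standard quadrature. The sketch below (assuming Python with NumPy/SciPy, not part of the paper) substitutes $z=e^u$ so that the scale-invariant measure $\d z/z$ becomes $\d u$; the sign of the result checks the statement that the correction to the velocity amplitude is negative.

```python
import numpy as np
from scipy.integrate import quad

def inner(phi):
    """z-integral of the bracketed expression in Eqn. (6.11), with z = e^u."""
    c = np.cos(phi)
    def f(u):
        z3 = np.exp(u / 3.0)                               # z^(1/3)
        q = (2.0 * np.cosh(u) + 2.0 * c) ** (1.0 / 3.0)    # (z + 1/z + 2 cos phi)^(1/3)
        if q == 0.0:
            return 0.0  # integrable corner at u = 0, phi = pi (sin^2 phi vanishes there)
        bracket = -1.0 / (q + z3) + c / (z3 + 1.0 / z3 + q)
        return z3 * np.sin(phi) ** 2 / q * bracket
    val, _ = quad(f, -40.0, 40.0, limit=400)  # integrand decays like e^(-|u|/3)
    return val

val, _ = quad(inner, 0.0, 2.0 * np.pi, limit=400)
eta_bar = 3.0 / (8.0 * np.pi ** 3) * val
assert eta_bar < 0.0
```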
Extrapolating Eqn. (6.10) to $\rho=O(1)$ suggests that $\langle (u^\smalP)^2\rangle-
u_T^2\sim u_S^2$. We can have some idea of what we should expect for dominant gravity $u_S<u_G$ from dimensional analysis of Eqn. (6.8). In this case $\gamma_k\to u_Gk$, the inverse sweep time due to the particle fall, and we would find $\langle (u^\smalP)^2\rangle^\smaldu\sim \frac{k^4 U_k^2}{u_G^2}$ with $k^{-1}\sim u_G\tau_S$ giving the transition to the small scales for which the sweep time is shorter than $\tau_S$, and to which the particles are unable to respond. From here we find $\langle (u^\smalP)^2\rangle-
u_T^2\sim (u_S/u_G)^\frac{2}{3}u_S^2$ and we see that gravity reduces the amount of non-ergodicity of the solid particle flow.
**VII. Conclusions**
Consideration of a finite correlation time in the transport by a random velocity field has allowed analysis of a series of issues. We summarize the main results:
The self-diffusion of a fluid parcel obeys linear scaling in the inertial range (as it should) with a universal constant $C_0=\Ckol^\frac{3}{2}
\frac{\hat\rho\rho}{\hat\rho-\rho}\log(\hat\rho/\rho)$ $[$see Eqns. (3.7-3.10)$]$, which is sensitive both to the ratio of the eddy turn-over and life times, and to the rate of eddy velocity decorrelation at times much shorter than the eddy lifetime. An (at least) quadratic maximum at zero time separation is necessary for $C_0$ to remain finite (an exponential time correlation, for instance, would not satisfy this condition). This sensitivity to the short-time behavior of the time correlation was not observed in any of the other transport processes considered in the present paper.
The relative diffusion of a pair of fluid parcels exhibits (again as it should) Richardson and normal diffusion behavior, respectively, for coordinates and velocities. The PDF for the relative separation is a stretched exponential with exponent $\frac{2}{3}$ $[$see Eqn. (3.21)$]$, and it is possible to express the universal constants $c$ and $\tilde c$, entering the coordinate and velocity dispersion respectively, in terms of the parameter $\rho$. Precisely: $c\simeq\frac{0.748\Ckol^\frac{3}{2}}{\rho^3}$ and $\tilde c\simeq\frac{3.037\Ckol^\frac{3}{2}}{\rho}$ $[$see Eqns. (3.23) and (3.26)$]$. For the Batchelor constant, we obtain instead $[$see Eqn. (3.39)$]$: $B\simeq 11.32\rho$.
The correlation time $\tau_P$ for the fluid velocity sampled by a solid particle has a behavior consistent with previous analyses neglecting the structure of the turbulent inertial range [@csanady63; @reeks77]. Values of $\tau_P/\tau_L$ above unity are found for dominant inertia and $\tau_S\lesssim\tau_L$, with $\tau_P/\tau_L-1=O((\frac{\tau_S}{\rho\tau_L})^\frac{4}{3})$ $[$see Eqn. (4.25)$]$. On the contrary, in the case of dominant gravity, $\tau_P/\tau_L<1$ irrespective of the value of the ratio $u_T/u_G$ between the turbulent and the fall velocity; specifically $[$see Eqn. (4.28)$]$, we find $\tau_P/\tau_L-1=O((u_Gk_0\tau_L)^2)$ for $u_G\ll u_L$, and $\tau_P/\tau_L=O((u_Gk_0\tau_L)^{-1})$ in the opposite case. For short times, the expected sub-linear behavior for the fluid velocity along a solid particle trajectory is found: $\langle |u^\smalP(x,t)-u^\smalP(x,0)|^2\rangle\sim(\flux u_At)^\frac{2}{3}$, with $A=G,S$ depending on whether gravity or inertia dominates $[$see Eqns. (4.16) and (4.19)$]$.
The Eulerian correlation time $\tau_E$ (and by continuity, therefore, also $\tau_P$, in the regime $\tau_S\gg\tau_L$) is shorter than its Lagrangian counterpart, with $\tau_E/\tau_L=1-2\rho^{-2}\log\rho$ $[$see Eqn. (4.34)$]$. Sweep produces a power law decay of correlations between velocity increments in the form $S_{rr}(r,t)=\langle [u_r(\r,t)-u_r(0,t)] [u_r(\r,0)-u_r(0,0)]\rangle$. More precisely, for time separations longer than the sweep time $T_{r^{-1}}$: $S_{rr}(r,t)\sim S_{rr}(r,0)(T_{r^{-1}}/t)^\frac{4}{3}$ $[$see Eqn. (4.37)$]$.
In the absence of gravity, and for $\rho^2\tau_\eta\ll\tau_S\ll\rho^{-2}\tau_L$, the spectrum of concentration correlations induced by turbulence in a solid particle suspension is universal and has power-law behavior for separations above the size $S_i$ of an eddy which is crossed by a typical solid particle in a time equal to its lifetime. More precisely: $\bar\theta^{-2}\langle\theta(r)\theta(0)\rangle-1\simeq\rho^2
(S_i/r)^\frac{4}{3}$ $[$see Eqn. (5.18)$]$.
The solid particle flow is non-ergodic, with a difference between the fluid velocity sampled along a solid trajectory and the corresponding Eulerian average: $\langle (u^\smalP)^2\rangle-u_T^2=-\frac{0.32\log\rho}{\rho^2}u_S^2$ $[$see Eqns. (6.10) and (6.11)$]$. Dimensional reasoning for $\rho=O(1)$ suggests that gravity should reduce this effect from $\langle (u^\smalP)^2\rangle-u_T^2\sim u_S^2$ to $\langle (u^\smalP)^2\rangle-u_T^2\sim (u_S/u_G)^\frac{2}{3}u_S^2$.
Analysis of some of these problems actually did not exploit the finite correlation time of the velocity field produced through Eqn. (2.7). In particular, the process of fluid parcel relative dispersion was considered to the same order in $\rho$ as in the Kraichnan model and, to this level, no information on the Lagrangian statistics was necessary. Finiteness of the correlation time served only to allow a meaningful definition of quantities such as $\Ckol$ and $\flux$.
In the case of the self-diffusion properties of fluid and solid particles, a finite correlation time and inclusion of the Lagrangian nature of the time correlation were necessary from the start. Nonetheless, the only point at which analysis of the random velocity field could not be avoided was in determining the dimensionless constant $C_0$ [@thomson87; @sawford91]; the diffusion exponents in the various cases were already available by dimensional reasoning.
Evaluation of the correlation time $\tau_P$ and analysis of concentration fluctuations and non-ergodicity of particle trajectories (points iv-vi), instead, rested heavily on the fact that the correlation time was finite and on knowledge of the actual form of the random velocity field time correlation. The analysis confirmed the role of eddies with lifetime $\tau_S$, already pointed out in [@olla01].
Some comments are due on these last issues. As regards correlation times, they depend in general on non-universal aspects of the velocity statistics and, in the present case, on the assumption that the large-scale statistics is also defined along Lagrangian trajectories. As a consequence of this, the Eulerian correlation time of the flow turned out to be shorter than the Lagrangian one. (Following [@kraichnan64a], the Eulerian correlation feels, at the same time, the decorrelation from relative motion of the fluid and the effect of eddy decay.) For $\tau_S\ll\tau_L$, however, the standard picture of inertia and gravity leading, respectively, to an increase and a decrease of the correlation time was confirmed.
As regards concentration fluctuations, previous treatments of this problem either were limited to the case of particles with Stokes time shorter than the Kolmogorov time of the flow [@balkovsky01], or neglected turbulent small-scale structures altogether [@elperin00]. This was due to the difficulty of analyzing trajectory crossing effects on inertial range scales, associated with the need for a proper treatment of the Lagrangian time statistics. The fully kinetic treatment adopted here, in which the relative motion of individual solid particles is fully taken into account (in contrast with the fluid-equation approach used in [@balkovsky01]), together with the large-$\rho$ limit, is what makes the problem tractable.
It should be mentioned that solid particle concentration fluctuations may be important in the process of rain formation. It is known that the settling rate of a suspension is enhanced in the presence of clumping of the heavy particles [@wang93a], and turbulence-induced concentration fluctuations appear to be one of the important actors in the process [@aliseda00]. Inclusion of the effect of gravity, along the same lines as the analysis carried out in Section IV, would therefore be necessary.
As regards non-ergodicity of the solid particle flow, it should be mentioned that this is a problem one has to deal with before trying to extend standard Lagrangian transport models (in particular the well-mixedness hypothesis on which they are based [@thomson87]) to the case of solid particles.
An important aspect that must be stressed, in the calculation of both $\tau_P$ and the concentration correlation spectrum, is the role played by the localization length $S_l$. This length ceases to have a physical meaning for finite $\rho$; nonetheless, it fixes, in perturbation theory, the scale at which both the fluctuations and the difference $\tau_P-\tau_L$ are generated. Notice that, in the case of concentration fluctuations, this occurs in spite of the fact that the concentration correlations are peaked at the inertial scale $S_i$.
Another peculiarity of the large-$\rho$ expansion is the multiplicity of space scales associated with eddies having time or velocity scales related to $\tau_S$ and $u_S$ $[$see Eqns. (4.6) and (4.9)$]$. All of them collapse, for $\rho=O(1)$, onto the size of a vortex with turnover time equal to $\tau_S$. In real high Reynolds number turbulence, this is the saturation length expected for concentration fluctuation build-up, when $\tau_S$ is an inertial range quantity.
The parameters $\rho$ and $\hat\rho$ are central to the extension of the Kraichnan model to finite correlation times. The situation of reference in real flows is the inverse cascade range of two-dimensional turbulence. An estimate of these parameters could be obtained using the leading $\rho$ expressions provided by Eqns. (3.10), (3.23), (3.26) and (3.39), with the values of the constants $C_0$, $c$, $\tilde c$ and $B$ obtained from DNS. For instance, assuming $\hat\rho=\infty$, comparison with the results presented in [@boffetta00] would give $\rho\simeq 2$.
The results of the present paper have been obtained to leading order in $\rho$. To this order, no perturbative effects in the structure of the random velocity fields are present, and the correlations for the Lagrangian velocity $u^\smalL$ obey Eqn. (2.6). The parameters entering these correlations must nonetheless be considered as renormalized quantities in a renormalized statistical field theory. No claim on the nature of these renormalizations is made, apart from the observation that, to lowest order, marginality of the interactions suggests that corrections to scaling are only logarithmic.
To this order in $\rho$, extension of the results to three dimensions presents no conceptual difficulties. In particular, the mechanism of production of concentration fluctuations, and of correlation time and PDF corrections, is not expected to suffer modifications. Whether a random velocity field model like the present one could be appropriate to describe transport by three-dimensional turbulence, laden with coherent structures and intermittency, is a different matter.
The present extension of the Kraichnan model to finite correlation times is perturbative in nature. Imposing the time statistics along Lagrangian trajectories resulted in a non-Gaussian velocity field. This led to a field-theoretical perturbation theory, with expansion parameter $\rho^{-1}$, which is somewhat different from other field theories arising from closure analysis of the Navier-Stokes equation. It would be interesting to understand the relation with such theories, in particular with the quasi-Lagrangian approach described in [@belinicher97] and the papers based on this work (see [@l'vov01] and references therein).
There are situations in which the higher orders in $\rho^{-1}$ become necessary. A relevant example could be the derivation of a turbulent closure: in this case, extension of the theory to realistic values of $\rho$ could not be avoided. Related to this issue is the calculation of the anomalous scaling exponents for a passive scalar advected by a random velocity field with finite correlation time. The analysis of pair diffusion carried out in Section III proceeded, in the end, as if the velocity field had zero correlation time. To lowest order in $\rho^{-1}$, the same zero-mode structure as in the Kraichnan model is therefore expected [@gawedzki95]. To proceed in a consistent way, one should go to higher order, at the same time, in the passive tracer part of the problem and in the field theory for the velocity field. Such issues, concerning the nature of the field-theoretical perturbation expansion, will be analyzed in a separate publication.
[**Acknowledgements:**]{} I wish to thank Paolo Muratore-Ginanneschi and Antti Kupiainen for interesting and helpful conversations. Part of this research was carried out at the Mathematics Department of the University of Helsinki. I acknowledge support by a CNR Short Term Mobility fellowship.
[99]{}
R.H. Kraichnan, "Kolmogorov hypothesis and Eulerian turbulence theory," Phys. Fluids [**7**]{}, 1723 (1964)
R.H. Kraichnan, "Lagrangian-history closure approximation for turbulence," Phys. Fluids [**8**]{}, 575 (1965)
R.H. Kraichnan, "Inertial range transfer in two- and three-dimensional turbulence," J. Fluid Mech. [**47**]{}, 525 (1971)
S. Orszag, in [*Fluid Dynamics*]{}, ed. R. Balian and J.-L. Peube (Gordon and Breach, New York, 1977), pp. 235-374
V. Yakhot and S. Orszag, "Renormalization group analysis of turbulence. I. Basic theory," J. Sci. Comp. [**1**]{}, 3 (1986)
R.H. Kraichnan, "Anomalous scaling of a randomly advected passive scalar," Phys. Rev. Lett. [**72**]{}, 1016 (1994)
M. Chertkov, G. Falkovich, I. Kolokolov and V. Lebedev, "Normal and anomalous scaling of the fourth-order correlation of a randomly advected passive scalar," Phys. Rev. E [**52**]{}, 4924 (1995)
K. Gaw[ȩ]{}dzki and A. Kupiainen, "Anomalous scaling for the passive scalar," Phys. Rev. Lett. [**75**]{}, 3834 (1995)
U. Frisch, A. Mazzino and M. Vergassola, "Intermittency in passive scalar advection," Phys. Rev. Lett. [**80**]{}, 5532 (1998)
O. Gat, I. Procaccia and R. Zeitak, "Anomalous scaling in passive scalar advection: Monte Carlo Lagrangian trajectories," Phys. Rev. Lett. [**80**]{}, 5536 (1998)
O. Gat and R. Zeitak, "Multiscaling in passive scalar advection as stochastic shape dynamics," Phys. Rev. E [**57**]{}, 5331 (1998)
A. Pumir, B.I. Shraiman and M. Chertkov, "Geometry of Lagrangian dispersion in turbulence," Phys. Rev. Lett. [**85**]{}, 5324 (2000)
A. Celani and M. Vergassola, "Statistical geometry in scalar turbulence," Phys. Rev. Lett. [**86**]{}, 424 (2001)
D.J. Thomson, "Criteria for the selection of stochastic models of particle trajectories in turbulent flows," J. Fluid Mech. [**180**]{}, 529 (1987)
D.J. Thomson, "A stochastic model for the motion of particle pairs in isotropic high-Reynolds-number turbulence, and its application to the problem of concentration variance," J. Fluid Mech. [**210**]{}, 113 (1990)
M.S. Borgas and B.L. Sawford, "A family of stochastic models for two-particle dispersion in isotropic homogeneous stationary turbulence," J. Fluid Mech. [**279**]{}, 69 (1994)
G.T. Csanady, "Turbulent diffusion of heavy particles in the atmosphere," J. Atmos. Sci. [**20**]{}, 201 (1963)
B.L. Sawford and F.M. Guest, "Lagrangian statistical simulations of the turbulent motion of heavy particles," Boundary-Layer Meteorol. [**54**]{}, 147 (1991)
L.-P. Wang, M.R. Maxey, T.D. Burton and D.E. Stock, "Chaotic dynamics of particle dispersion in fluids," Phys. Fluids A [**4**]{}, 1789 (1992)
S. Elghobashi and G.C. Truesdell, "Direct simulation of particle dispersion in a decaying isotropic turbulence," J. Fluid Mech. [**242**]{}, 655 (1992)
L.-P. Wang and M.R. Maxey, "The motion of microbubbles in a forced isotropic and homogeneous turbulence," Appl. Sci. Res. (Advances in Turbulence IV) [**51**]{}, 291 (1993)
P. Olla, "Transport properties of heavy particles in high Reynolds number flows," nlin.CD/0103018
T. Elperin, N. Kleeorin, I. Rogachevskii and D. Sokoloff, "Turbulent transport of atmospheric aerosols and formation of large scale structures," Phys. Chem. Earth A [**25**]{}, 797 (2000)
E. Balkovsky, G. Falkovich and A. Fouxon, "Intermittent distribution of inertial particles in turbulent flows," Phys. Rev. Lett. [**86**]{}, 2790 (2001)
P. Olla, "Renormalization group analysis of two-dimensional turbulence," Phys. Rev. Lett. [**67**]{}, 2465 (1991)
Going beyond lowest order would be necessary to evaluate the nonlinear energy flux produced dynamically by Eqn. (2.7) and to see if there is an inverse cascade, as in a real turbulent flow with similar characteristics.
P.C. Martin, E.D. Siggia and H.A. Rose, "Statistical dynamics of classical systems," Phys. Rev. A [**8**]{}, 423 (1973)
J. Zinn-Justin, [*Quantum field theory and critical phenomena*]{} (Clarendon Press, Oxford, 1989)
B.L. Sawford, "Reynolds number effects in Lagrangian stochastic models of turbulent dispersion," Phys. Fluids A [**3**]{}, 1577 (1991)
C.W. Gardiner, [*Handbook of stochastic methods*]{} (Springer, Berlin, 1990)
I.S. Gradshteyn and I.M. Ryzhik, [*Table of integrals, series and products*]{} (Academic Press, New York, 1980)
G. Boffetta and A. Celani, "Pair dispersion in turbulence," Physica A [**280**]{}, 1 (2000)
M.R. Maxey and J.J. Riley, "Equation of motion for a small rigid particle in a non-uniform flow," Phys. Fluids [**26**]{}, 883 (1983)
J. Weinstock, "Lagrangian-Eulerian relation and the independence approximation," Phys. Fluids [**19**]{}, 1702 (1976)
L.M. Pismen and A. Nir, "On the motion of suspended particle in stationary homogeneous turbulence," J. Fluid Mech. [**84**]{}, 193 (1978)
A. Nir and L.M. Pismen, "The effect of a steady drift on the dispersion of a particle in turbulent fluid," J. Fluid Mech. [**94**]{}, 369 (1979)
R.H. Kraichnan, "Relation between Lagrangian and Eulerian correlation times of a turbulent velocity field," Phys. Fluids [**7**]{}, 142 (1964)
M.W. Reeks, "On the dispersion of small particles suspended in an isotropic turbulent fluid," J. Fluid Mech. [**83**]{}, 529 (1977)
P. Paradisi and F. Tampieri, "Stability analysis of solid particle motion in rotational flows," Nuovo Cimento C [**24**]{}, 407 (2001)
M. Abramowitz and I.A. Stegun, [*Handbook of mathematical functions*]{} (Dover, New York, 1973)
H. Tennekes and J.L. Lumley, [*A first course in turbulence*]{} (MIT Press, Cambridge, 1972)
L.-P. Wang and M.R. Maxey, "Settling velocity and concentration distribution of heavy particles in homogeneous isotropic turbulence," J. Fluid Mech. [**256**]{}, 27 (1993)
F. Hainaux, A. Aliseda, A. Cartellier and J. Lasheras, "Settling velocity and clustering of particles in a homogeneous and isotropic turbulent air flow," in [*Advances in Turbulence VIII*]{}, ed. C. Dopazo (Proc. 8th European Turbulence Conference EUROMECH, Barcelona, Spain, 2000), pp. 553-556
V.I. Belinicher and V.S. L'vov, "A scale invariant theory of fully developed hydrodynamic turbulence," Sov. Phys. JETP [**66**]{}, 302 (1987)
V.S. L'vov and I. Procaccia, "Analytic calculation of the anomalous exponents in turbulence: using the fusion rules to flush out a small parameter," Phys. Rev. E [**62**]{}, 8037 (2001)
|
---
abstract: 'Thermoelectric coefficients of an ultra-thin topological insulator are presented here. The hybridization between top and bottom surface states of a topological insulator plays a significant role. In absence of magnetic field, thermopower increases and thermal conductivity decreases with increase of the hybridization energy. In presence of magnetic field perpendicular to the ultra-thin topological insulator, thermoelectric coefficients exhibit quantum oscillations with inverse magnetic field, whose frequency is strongly modified by the Zeeman energy and phase factor is governed by the product of the Lande $g$-factor and the hybridization energy. In addition to the numerical results, the low-temperature approximate analytical results of the thermoelectric coefficients are also provided. It is also observed that for a given magnetic field these transport coefficients oscillate with hybridization energy, whose frequency depends on the Lande $g$-factor.'
author:
- SK Firoz Islam and Tarun Kanti Ghosh
title: 'Thermoelectric properties of an ultra-thin topological insulator'
---
Introduction
============
Recently a new class of materials, called topological insulators, has attracted much attention from condensed matter physicists [@kane; @bernevig; @konig; @moore; @fu; @zhang]. A topological insulator (TI) conducts electrons on the surface of a 3D material while behaving as an insulator in the bulk. This is due to the time-reversal symmetry possessed by materials such as Bi$_{2}$Se$_{3}$, Sb$_2$Te$_3$ and Bi$_2$Te$_3$ [@zhang]. The conducting surface states of these materials show a single Dirac cone, in which the spin is always locked perpendicular to the momentum. Angle-resolved photoemission spectroscopy [@hsieh1; @hsieh2; @xia] and scanning tunneling microscopy [@roushan] have been used to observe the single Dirac cone in TIs. In two-dimensional electron systems in a perpendicular magnetic field, electrons conduct along the boundary because the circular orbits bounce off the edges, leading to skipping orbits. In 3D TIs, by contrast, surface conduction takes place even in the absence of a magnetic field; here, strong Rashba spin-orbit coupling (RSOC) plays the role of the magnetic field. The RSOC originates from the lack of structural inversion symmetry of the sample [@rashba; @rashba1].\
Although there have been several experimental works on the surface states of TIs, one of the major obstacles in studying the transport properties of the surface is the unavoidable contribution of the bulk. One of the best methods to minimize this problem is to grow TI samples in the form of ultra-thin films, in which the bulk contribution becomes very small in comparison to the surface contribution [@guo; @lai; @kehe]. The transition from 3D to 2D TIs leads to several effects which have been studied for different thicknesses [@kehe; @liu]. An ultra-thin TI not only reduces the bulk contribution, but also exhibits new phenomena such as possible excitonic superfluidity [@moore1], a unique magneto-optical response [@wk1; @wk2; @jm] and better thermoelectric performance [@mong]. Moreover, the small thickness leads to an overlap of the wave functions of the top and bottom surfaces, which introduces a new degree of freedom: hybridization [@yu; @lu]. This occurs below a thickness of five to ten quintuple layers [@kehe; @cao], i.e., of the order of 10 nm. An oscillating exponential decay of the hybridization-induced band gap with decreasing thickness in Bi$_2$Te$_3$ has also been predicted theoretically [@yokoyama]. The formation of Landau levels in thin TIs has been confirmed by several experiments [@cheng; @jiang]. Moreover, low-temperature transport properties have been studied theoretically in a series of works [@cao; @zyuzin; @wang; @tahir1; @tahir2].\
Thermoelectric properties of materials [@nolas] have always been an interesting topic, since they provide an additional way of exploring the details of an electronic system. When a temperature gradient is applied across the two ends of an electronic system, the migration of electrons from the hotter to the cooler side leads to the development of a voltage gradient across these ends. This voltage difference per unit temperature gradient is known as the longitudinal thermopower. If, in addition to the temperature gradient, a perpendicular magnetic field is applied to the system, a transverse electric field is also established due to the Lorentz force, giving rise to a transverse thermopower. In conventional 2D electronic systems, Landau-level-induced quantum oscillations (Shubnikov-de Haas oscillations) in the thermoelectric coefficients have been reported both theoretically and experimentally in a series of works [@prb86; @prb95; @topical; @maximov; @arindam]. In 3D TIs, an improvement of the thermoelectric performance without magnetic field has been predicted theoretically in a series of papers [@sinova1; @sinova2; @sinova3; @murakami]. Thermoelectric effects have also been studied in graphene, the newly emerged relativistic-like 2D electron system [@yuri; @wei; @das; @aviskar].
In this paper, we study the effect of hybridization on the thermopower and the thermal conductivity of ultra-thin TIs in the absence and presence of a magnetic field. We find that the thermopower increases and the thermal conductivity decreases with increasing hybridization energy when the magnetic field is absent. In the presence of a perpendicular magnetic field, the thermoelectric coefficients oscillate with the inverse magnetic field. The frequency of the quantum oscillations is strongly modified by the Zeeman energy, and the phase factor is determined by the product of the Lande $g$-factor and the hybridization energy. Analytical expressions for the thermoelectric coefficients are also obtained. It is also shown that, for a given magnetic field, these transport coefficients oscillate with the hybridization energy, with a frequency that depends on the Lande $g$-factor.
This paper is organized as follows. Section II briefly discusses the energy spectrum and the density of states of an ultra-thin TI in the absence and presence of a magnetic field. In section III, we study how the hybridization affects the thermoelectric coefficients at zero magnetic field. In section IV, a complete analysis of the thermoelectric coefficients in the presence of a magnetic field is provided, with numerical and analytical results. We provide a summary and conclusion of our work in section V.
ENERGY SPECTRUM AND DENSITY OF STATES
======================================
Zero magnetic field case
------------------------
Let us consider the surface of an ultra-thin TI in the $xy$-plane with dimensions $L_x\times L_y$, the carriers being Dirac fermions occupying the top and bottom surfaces of the TI. The quantum tunneling between the top and bottom surfaces gives rise to the hybridization, and consequently the Hamiltonian can be written in terms of the symmetric and anti-symmetric combinations of the two surface states as [@yu]
$$H = \left[ \begin{array}{cc}
h(k) & 0 \\
0 & h^{*}(k) \\
\end{array} \right],$$
with $h(k)=\Delta_h\sigma_z+v_{F}(p_y\sigma_x-p_x\sigma_y)$. Here ${\bf p}$ is the two-dimensional momentum operator, $v_{F}$ is the Fermi velocity of the Dirac fermion, ${\mbox{\boldmath $\sigma$} }=(\sigma_x,\sigma_y,\sigma_z)$ are the Pauli spin matrices and $\Delta_h$ is the hybridization matrix element between the states of the top and bottom surfaces of the TI. Typical values of $ \Delta_h$ range from $0$ to $10^2$ meV depending on the thickness of the 3D TI [@kehe]. Because of the block-diagonal nature, the above Hamiltonian can be written as $$H = v_{F}(\sigma_x p_y-\tau_z\sigma_y p_x)+\Delta_h\sigma_z,$$ where $\tau_z=\pm$ denotes the symmetric and anti-symmetric surface states, respectively. The energy spectrum of the Dirac electron is given by $$E =\lambda\sqrt{(\hbar v_F k)^2+\Delta_h^2}.$$ Here $\lambda=\pm$ stands for the electron and hole bands. The density of states is given by $$\label{dos1}
D_{0}(E) = \frac{2E}{\pi \hbar^2 v_{F}^2}.$$
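As a quick numerical illustration (not part of the original derivation; the parameter values, e.g. $v_F=4\times10^{5}$ m/s and $\Delta_h$ of order 10 meV, follow those used later in the paper), the gapped Dirac dispersion and the density of states of Eq. (\[dos1\]) can be evaluated as:

```python
import numpy as np

HBAR = 1.0545718e-34   # reduced Planck constant, J s
MEV = 1.602176634e-22  # one meV in joules

def dirac_energy(k, delta_h=10.0, v_f=4.0e5, lam=+1):
    """Gapped Dirac dispersion E = lam*sqrt((hbar v_F k)^2 + Delta_h^2).
    k in 1/m, delta_h and the result in meV; lam = +1 (electron) or -1 (hole)."""
    hvk = HBAR * v_f * np.asarray(k) / MEV   # hbar v_F k, in meV
    return lam * np.sqrt(hvk**2 + delta_h**2)

def dos_zero_field(e, v_f=4.0e5):
    """Zero-field density of states D0(E) = 2E/(pi hbar^2 v_F^2),
    returned in states per m^2 per meV for E in meV."""
    return 2.0 * (np.asarray(e) * MEV) / (np.pi * HBAR**2 * v_f**2) * MEV
```

At $k=0$ the spectrum is gapped by $2\Delta_h$, and the density of states is linear in $E$, as in the ungapped Dirac case.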
Non-zero magnetic field case
----------------------------
In the presence of a magnetic field perpendicular to the surface, the Hamiltonian for the Dirac electron with hybridization is $$H = v_F(\sigma_x\Pi_y-\tau_z\sigma_y\Pi_x)+(\tau_z\Delta_z+\Delta_h)\sigma_z,$$ where ${\bf \Pi}={\bf p}+e{\bf A}$ is the two-dimensional canonical momentum operator. Using the Landau gauge ${\bf A}=(0,Bx,0)$, the exact Landau levels can be obtained easily [@zyuzin; @tahir2]. For $n=0$, there is only one energy level, given by $E_0^{\tau_z}=-(\Delta_z+\tau_z\Delta_h)$. For integer $n \geq 1$, there are two energy bands, denoted by $+$ for the electron and $-$ for the hole, with energy $$E^{\tau_z}_{n,\lambda} = \lambda\sqrt{2n(\hbar\omega_c)^2+(\Delta_z+\tau_z\Delta_h)^2},$$ where $\omega_c=v_{F}/l$ is the cyclotron frequency, $l=\sqrt{\hbar/(eB)}$ is the magnetic length, and $\Delta_z=g\mu_{B}B/2 $ with $g$ the Lande $g$-factor.
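The Landau spectrum above can be tabulated directly; the following sketch (illustrative, with the paper's parameter values $g=60$ and $v_F=4\times10^5$ m/s assumed as defaults) lists the levels of both surface branches $\tau_z=\pm$:

```python
import numpy as np

HBAR = 1.0545718e-34        # reduced Planck constant, J s
E_CH = 1.602176634e-19      # elementary charge, C
MEV = 1.602176634e-22       # one meV in joules
MU_B = 9.2740100783e-24     # Bohr magneton, J/T

def landau_levels(b, n_max=5, delta_h=10.0, g=60.0, v_f=4.0e5):
    """Landau levels (in meV) of the hybridized thin-TI surface states:
    a single n = 0 level at -(Delta_z + tau_z*Delta_h), and for n >= 1
    E = lam*sqrt(2 n (hbar omega_c)^2 + (Delta_z + tau_z*Delta_h)^2)."""
    l_b = np.sqrt(HBAR / (E_CH * b))        # magnetic length, m
    hwc = HBAR * v_f / l_b / MEV            # hbar omega_c, meV
    dz = 0.5 * g * MU_B * b / MEV           # Zeeman energy Delta_z, meV
    levels = {}
    for tau in (+1, -1):
        gap = dz + tau * delta_h
        branch = [-gap]                     # the single n = 0 level
        for n in range(1, n_max + 1):
            e = np.sqrt(2.0 * n * hwc**2 + gap**2)
            branch += [e, -e]               # electron and hole branches
        levels[tau] = sorted(branch)
    return levels
```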
The corresponding eigenstates for symmetric surface state are $$\Psi^{+} _{n}({\bf r}) = \frac{e^{i k_y y}}{\sqrt{L_y }}\left(
\begin{array}{r}
c_1 \phi_{n-1}(x + x_0)
\\
c_2\phi_n (x + x_0)
\end{array}
\right) \text{,}$$ $$\Psi^{-} _{n}({\bf r}) = \frac{e^{i k_y y}}{\sqrt{L_y }}\left(
\begin{array}{r}
c_2 \phi_{n-1}(x + x_0)
\\
-c_1\phi_n (x + x_0)
\end{array}
\right) \text{,}$$ where $\phi_n(x) = (1/\sqrt{\sqrt{\pi }2^n n!l})
e^{-x^2/2l^2} H_n(x/l) $ is the normalized harmonic oscillator wave function, $x_0=-k_yl^2 $, $c_1=\cos(\theta_{\tau_z}/2)$ and $c_2=\sin(\theta_{\tau_z}/2)$ with $\theta_{\tau_z}=\tan^{-1}[\sqrt{n}\hbar\omega_c/(\Delta_z+\tau_z\Delta_h)]$. The anti-symmetric surface state can be obtained by exchanging $n$ and $n-1$.
We have derived approximate analytical form of density of states for $n>1$, by using the Green’s function technique which is given as (see Appendix A) $$\begin{aligned}
\label{dos_B}
D_{\tau_z}(E) & \backsimeq & \frac{D_{0}(E)}{2}
\Big[1+2\sum_{s=1}^{\infty} \exp\Big\{-s \Big(2\pi\frac{\Gamma_0 E }
{\hbar^2\omega_c^2}\Big)^2\Big\}
\nonumber\\
&\times&\cos\Big\{\pi s \Big(E^2-\Delta_{\tau_z}^2\Big)/(\hbar\omega_c)^2\Big\}\Big],\end{aligned}$$ where $\Delta_{\tau_z}=\Delta_z+\tau_z\Delta_h$ and $\Gamma_0$ is the impurity induced Landau level broadening.
Thermoelectric coefficients
===========================
In this section, we shall calculate thermoelectric coefficients of an ultra-thin TI in zero and non-zero magnetic fields.
Zero-magnetic field case
------------------------
In this sub-section, the effect of hybridization on the thermopower and the thermal conductivity is presented. We follow the conventional approach in the low-temperature regime. The electrical current density ${\bf J} $ and the thermal current density ${\bf J}^{q}$ for Dirac electrons can be expressed in the linear response regime as $${\bf J} = Q^{11} {\bf E} + Q^{12} (-{\nabla} T)$$ and $${\bf J}^{q} = Q^{21} {\bf E} + Q^{22}(-{\nabla} T),$$ where ${\bf E} $ is the electric field, $\nabla T$ is the temperature gradient and $ Q^{ij} $ ($ i,j=1,2$) are the phenomenological transport coefficients. The above equations describe the response of the electronic system under the combined effects of thermal and potential gradients. Moreover, $Q^{ij}$ can be expressed in terms of an integral $I^{(r)}$: $Q^{11}=I^{(0)}, Q^{21} = TQ^{12} = -I^{(1)}/e$, $Q^{22}=I^{(2)}/(e^2T)$ with $$I^{(r)} = \int dE \Big[-\frac{\partial f(E)}{\partial E}\Big]
(E-\eta)^{r} \sigma(E),$$ where $r=0,1,2$ and $f(E)=1/[1+\exp\{(E-\eta)\beta\}]$ is the Fermi-Dirac distribution function, $\eta$ is the chemical potential and $\beta=(k_{_B}T)^{-1}$. Here, $\sigma(E)$ is the energy-dependent electrical conductivity. When the circuit is open, i.e., for ${\bf J}=0$, the thermopower is defined as $S=Q^{12}/Q^{11}$. By using the Sommerfeld expansion in the low-temperature regime, the diffusion thermopower $S$ and the thermal conductivity $\kappa$ can be obtained from Mott's relation and the Wiedemann-Franz law as $$\label{tp}
S = - L_0 e T \Big[ \frac{d}{dE}\ln\sigma(E) \Big]_{_{E=E_F}}$$ and $$\label{ther}
\kappa = L_0 T \sigma(E_{_F}).$$ Here, $ L_0 = (\pi^2 k_{_B}^2)/(3e^2) =2.44\times10^{-8}$ W$\Omega$K$^{-2}$ is the Lorenz number and $\sigma(E_{_F})$ is the electrical conductivity at the Fermi energy.
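Mott's relation and the Wiedemann-Franz law apply to any energy-dependent conductivity; a minimal numerical sketch (an illustration, not the paper's code) reads:

```python
import numpy as np

K_B = 1.380649e-23        # Boltzmann constant, J/K
E_CH = 1.602176634e-19    # elementary charge, C
MEV = 1.602176634e-22     # one meV in joules
L0 = (np.pi**2 / 3.0) * (K_B / E_CH)**2   # Lorenz number, ~2.44e-8 W Ohm K^-2

def mott_thermopower(sigma, e_f, temp, de=1e-4):
    """S = -L0 e T [d ln(sigma)/dE]_{E_F} (Mott relation).
    sigma: callable of E in meV; temp in K; returns S in V/K."""
    dlns = (np.log(sigma(e_f + de)) - np.log(sigma(e_f - de))) / (2.0 * de)
    return -L0 * E_CH * temp * dlns / MEV   # convert the 1/meV derivative to 1/J

def wf_thermal_conductivity(sigma, e_f, temp):
    """kappa = L0 T sigma(E_F) (Wiedemann-Franz law)."""
    return L0 * temp * sigma(e_f)
```

For a power-law conductivity $\sigma\propto E^{p}$ this reproduces the standard result $S=-L_0 e T\, p/E_F$.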
The zero-magnetic-field electrical conductivity can be calculated from the classical Boltzmann transport equation, which gives [@mermin] $$\label{boltz}
\sigma_{ij}(E)= e^2\tau(E)\int\frac{d^2k}{(2\pi)^2}\delta[E-E(k)]v^{i}(k)v^{j}(k),$$ where $i,j=x,y$. For an isotropic system $v_x^2=v_y^2=(1/2)(v_x^2+v_y^2)=(1/2)v^2$. In our case, $$v^2=\frac{v_{F}^2}{2}\Big[1-\Big(\frac{\Delta_h}{E}\Big)^2\Big].$$ Using this in Eq. (\[boltz\]), the energy-dependent conductivity becomes $$\label{con}
\sigma(E)= e^2\tau(E)\frac{E}{\pi\hbar^2}
\Big[1- \Big(\frac{\Delta_h}{E}\Big)^2 \Big].$$
We assume the energy-dependent scattering time to be $\tau=\tau_0 (E/E_{_F})^{m}$, where $m$ is a constant depending on the scattering mechanism and $E_{F}=\sqrt{E_{F0}^2+\Delta_h^2}$ is the Fermi energy, with $E_{F0}=\hbar v_{F}k_{F}^{0}$ and the Fermi wave vector $k_{F}^{0}=\sqrt{2\pi n_e}$. Substituting Eq. (\[con\]) into Eq. (\[tp\]), the diffusion thermopower is obtained as $$S= -L_0 \frac{e T}{E_{F0}} \Big[(m+1) +2\Big(\frac{\Delta_h}{E_{F0}}\Big)^2\Big]
/\sqrt{1+\Big(\frac{\Delta_h}{E_{F0}}\Big)^2}.$$
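As a consistency check of this closed-form thermopower, one can compare it with a direct numerical application of Mott's relation to $\sigma(E)$ of Eq. (\[con\]); the sketch below (illustrative, energies in meV) does this:

```python
import numpy as np

K_B = 1.380649e-23
E_CH = 1.602176634e-19
MEV = 1.602176634e-22
L0 = (np.pi**2 / 3.0) * (K_B / E_CH)**2   # Lorenz number, W Ohm K^-2

def thermopower_closed_form(delta_h, e_f0, temp, m=0):
    """Closed-form diffusion thermopower (V/K); delta_h, e_f0 in meV."""
    r = delta_h / e_f0
    return (-L0 * E_CH * temp / (e_f0 * MEV)
            * ((m + 1) + 2.0 * r**2) / np.sqrt(1.0 + r**2))

def thermopower_mott(delta_h, e_f0, temp, m=0, de=1e-4):
    """Same quantity from the Mott relation applied to
    sigma(E) ~ (E/E_F)^m * E * [1 - (Delta_h/E)^2]."""
    e_f = np.sqrt(e_f0**2 + delta_h**2)
    sigma = lambda e: (e / e_f)**m * e * (1.0 - (delta_h / e)**2)
    dlns = (np.log(sigma(e_f + de)) - np.log(sigma(e_f - de))) / (2.0 * de)
    return -L0 * E_CH * temp * dlns / MEV
```

The two agree, and the magnitude of $S$ grows with $\Delta_h$ at fixed $E_{F0}$, in line with the upper panel of Fig. \[1\].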
We plot the thermopower versus the hybridization energy for different carrier densities in the upper panel of Fig. \[1\]. The thermopower increases with increasing hybridization for a given carrier density, but for higher carrier densities this rate of enhancement becomes very slow.
The thermal conductivity can be obtained directly from the Wiedemann-Franz law given in Eq. (\[ther\]), where the electrical conductivity $\sigma (E_{_F})$ is given by $$\sigma= \sigma_0/\sqrt{1+\Big(\frac{\Delta_h}{E_{F0}}\Big)^2}.$$ Here, $ \sigma_0 =e^2\tau_{0}E_{F0}/(\pi\hbar^2)$ is the Drude conductivity of the Dirac system without hybridization. The thermal conductivity is plotted in the lower panel of Fig. \[1\]. We note that the thermal conductivity is diminished with increasing hybridization. However, unlike the thermopower, the thermal conductivity increases with the carrier density.
![Thermopower (upper panel) and thermal conductivity (lower panel) versus the hybridization energy for different carrier densities.[]{data-label="1"}]{width="90mm"}
Non-zero magnetic field case
----------------------------
In the presence of a magnetic field, the classical approach cannot explain phenomena that depend on the quantization of the energy. In this sub-section we follow a quantum mechanical approach, based on linear response theory, to study the thermal transport coefficients. Thermoelectric coefficients for a two-dimensional electron system in the presence of a magnetic field were derived by modifying the Kubo formula in Refs. [@streda; @oji]. These phenomenological transport coefficients are $$\sigma_{\mu\nu} = {\cal L}_{\mu\nu}^{(0)}$$ $$S_{\mu\nu} = \frac{1}{eT}[({\cal L}^{(0)})^{-1}{\cal L}^{(1)}]_{\mu\nu}$$ $$\label{cond1}
\kappa_{\mu\nu} = \frac{1}{e^2T}[{\cal L}_{\mu\nu}^{(2)}-
eT({\cal L}^{(1)}S)_{\mu\nu}],$$ where $$\label{Lmunu}
{\cal L}_{\mu\nu}^{(r)}=\int dE \Big[-\frac{\partial f(E)}{\partial E}\Big]
(E-\eta)^{r} \sigma_{\mu\nu}(E).$$ Here, $\mu,\nu=x,y$. Also, $\sigma_{\mu \nu}(E)$, $S_{\mu \nu} $ and $\kappa_{\mu \nu}$ are the zero-temperature energy-dependent conductivity, thermopower and thermal conductivity tensors, respectively.
Generally, diffusive and collisional mechanisms play the major role in electron conduction. The quantized energy spectrum of the electrons manifests itself through the Shubnikov-de Haas oscillations of the collisional contribution. In our case, electron transport is mainly collisional rather than diffusive: the zero drift velocity of the electrons does not allow a diffusive contribution. In the presence of a temperature gradient, the thermal transport coefficients can be expressed as $ {\cal L}_{xx}^{(r)}={\cal L}_{xx}^{(r){\rm col}} =
{\cal L}_{yy}^{(r){\rm col}}$ and $ {\cal L}_{yy}^{(r)}={\cal L}_{yy}^{(r){\rm dif}} +
{\cal L}_{yy}^{(r){\rm col}}={\cal L}_{yy}^{(r){\rm col}}$. In Ref. [@tahir2], the exact form of the finite-temperature collisional conductivity was calculated for the screened impurity potential $ U({\bf k}) = 2 \pi e^2/(\epsilon \sqrt{k^2 + k_s^2})\simeq 2\pi e^2/(\epsilon k_s) = U_0$ in the limit $|{\bf k}| \ll k_s $, with $k_s$ and $ \epsilon $ the inverse screening length and the dielectric constant of the material, respectively. In this limit, one can use $\tau_0^2 \approx \pi l^2\hbar^2/(N_IU_0^2)$, where $\tau_0$ is the relaxation time, $U_0$ is the strength of the screened impurity potential and $N_I$ is the two-dimensional impurity density. The exact form of the finite-temperature conductivity [@tahir2] can be reduced to the zero-temperature energy-dependent electrical conductivity $$\label{exact}
\sigma_{xx}(E) = \frac{e^2}{h}
\frac{ N_I U_0^2}{\pi \Gamma_0 l^2}\sum_{\tau_z}I^{\tau_z}_{n},$$ where $I^{\tau_z}_{n} = [n\{1+\cos^2(\theta_{\tau_z})\}-\cos(\theta_{\tau_z})] $. Here we have used $-\partial f/\partial E=\delta [E-E_n^{\tau_z}]$. Using Eq. (\[Lmunu\]), the finite temperature diagonal (${\cal L}_{xx}^{(r)} $) and off-diagonal coefficients ($ {\cal L}_{yx}^{(r)}$) can be written as $${\cal L}_{xx}^{(r)} =
\frac{e^2}{h}\frac{ N_I U_0^2}{\pi \Gamma_0 l^2}
\sum_{n,\tau_z}I^{\tau_z}_{n}\Big[(E-\eta)^{r}
\Big(-\frac{\partial f(E)}{\partial E}\Big)\Big]_{E=E^{\tau_z}_n}$$ and $$\begin{aligned}
{\cal L}_{yx}^{(r)} & = & \frac{e^2}{h}\frac{1}{2}
\sum_{n,\tau_z}\frac{\sin ^2\theta_{\tau_z}}{\Delta_n^2}
\int_{E_n,\tau_z}^{E_{n+1},\tau_z}(E-\eta)^{r}
\Big(-\frac{\partial f(E)}{\partial E}\Big)dE.\nonumber\\\end{aligned}$$ Here, $\Delta_n=\sqrt{2n+(\frac{\Delta_{\tau_z}}{\hbar\omega_c})^2}-\sqrt{2(n+1)
+(\frac{\Delta_{\tau_z}}{\hbar\omega_c})^2}$.
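The Landau-level sums above can be evaluated numerically; the sketch below (an illustration in units of the common prefactor $(e^2/h)N_IU_0^2/(\pi\Gamma_0 l^2)$, keeping only the electron branch, which is the relevant one for a positive chemical potential) computes ${\cal L}_{xx}^{(r)}$:

```python
import numpy as np

HBAR = 1.0545718e-34
E_CH = 1.602176634e-19
K_B = 1.380649e-23
MU_B = 9.2740100783e-24
MEV = 1.602176634e-22

def l_xx(r, b, eta, temp, delta_h=20.0, g=60.0, v_f=4.0e5, n_max=300):
    """Landau-level sum for L_xx^{(r)}, in units of the common prefactor
    (e^2/h) N_I U0^2 / (pi Gamma0 l^2); eta and delta_h in meV."""
    l_b = np.sqrt(HBAR / (E_CH * b))
    hwc = HBAR * v_f / l_b / MEV                 # hbar omega_c, meV
    dz = 0.5 * g * MU_B * b / MEV                # Zeeman energy, meV
    beta = MEV / (K_B * temp)                    # 1/(k_B T), 1/meV
    total = 0.0
    for tau in (+1, -1):
        gap = dz + tau * delta_h
        for n in range(1, n_max + 1):
            theta = np.arctan2(np.sqrt(n) * hwc, gap)
            i_n = n * (1.0 + np.cos(theta)**2) - np.cos(theta)
            e_n = np.sqrt(2.0 * n * hwc**2 + gap**2)   # electron branch
            x = beta * (e_n - eta)
            if abs(x) > 60.0:
                continue                         # -df/dE is negligible here
            mdf = beta * 0.25 / np.cosh(0.5 * x)**2    # -df/dE at E_n
            total += i_n * (e_n - eta)**r * mdf
    return total
```

Because of the thermal factor $-\partial f/\partial E$, only the few levels within $\sim k_BT$ of $\eta$ contribute, so the sum converges quickly in $n_{\max}$.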
Numerical results and discussion
================================
In our numerical calculations, the following parameters are used: carrier concentration $n_e=2 \times 10^{15}$ m${}^{-2}$, $g = 60$, $v_F=4\times10^5$ m s${}^{-1}$ and $T=0.7$ K. These parameter values are consistent with Refs. [@kehe; @tahir1; @tahir2].
In Fig. \[2\], the off-diagonal thermopower, $S_{xy}$, as a function of the inverse magnetic field for different values of the hybridization constant is shown. Similarly, the thermal conductivity, $ \kappa_{xx} $, versus the inverse magnetic field for different values of the hybridization constant is shown in Fig. \[3\]. Careful observation of these two figures clearly shows that both $S_{xy}$ and $ \kappa_{xx} $ oscillate with the same frequency, which does not depend on the hybridization energy; the hybridization energy only influences the phase of the oscillations.
![Plots of the thermopower in units of $k_B/e$ versus inverse magnetic field for different values of the hybridization constant.[]{data-label="Fig1"}](power_new.eps){width="98mm"}
![Plots of the thermal conductivity in units of $k_B^2/h$ versus inverse magnetic field.[]{data-label="Fig1"}](ther_con.eps){width="94mm" height="60mm"}
To determine the frequency and the phase of the quantum oscillations in the thermoelectric coefficients, we shall derive analytical expressions of these coefficients. The components of the thermopower are given by $$\label{Syy}
S_{xx} = S_{yy} = \frac{1}{eT}
\Big[ \frac{\sigma_{xx}}{S_0} {\cal L}_{xx}^{(1)}+
\frac{{\cal L}_{yx}^{(1)}}{\sigma_{yx}}\Big]$$ and $$\label{Sxy}
S_{xy}= - S_{yx} = \frac{1}{eT}
\Big[\frac{\sigma_{xx}}{S_0} (-{\cal L}_{xy}^{(1)})+
\frac{{\cal L}_{xx}^{(1)}}{\sigma_{yx}} \Big].$$ Here, $S_0=\sigma_{xx}\sigma_{yy}-\sigma_{xy}\sigma_{yx}$. The dominating term in each of the above two equations is the last one. The analytical forms of $\kappa_{xx}$ and $S_{xy}$ can be obtained directly by deriving analytical forms of the phenomenological transport coefficients. The analytical form of the density of states given in Eq. (\[dos\_B\]) allows us to obtain asymptotic expressions for $ S_{xy} $ and $ \kappa_{xx}$ by replacing the summation over the discrete quantum number $n$ by an integration, i.e., $ \sum_n \rightarrow 2\pi l^2 \int D_{\tau_z}(E)dE$. We then get $${\cal L}_{xx}^{(1)} \simeq \frac{4\pi}{\beta}\frac{e^2}{h}\frac{\Gamma_0 E_F }
{(\hbar\omega_c)^2}\Omega_{D}G^{\prime}(x) \sum_{\tau_z}U_{\tau_z}
F_{\tau_z}\sin(2\pi F_{\tau_z})$$ and $$\label{cond2}
{\cal L}_{xx}^{(2)} \simeq \frac{L_0 e^2T^2\sigma_0}{(\omega_c\tau_0)^2}
\sum_{\tau_z}U_{\tau_z}F_{\tau_z}[1 -3\Omega_{D} G^{\prime \prime}(x)
\cos(2\pi F_{\tau_z})],$$ where $F_{\tau_z}=(E_F^2-\Delta_{\tau_z}^2)/(\sqrt{2}\hbar\omega_c)^2$, $U_{\tau_z}=[1+\cos^2(\bar{\theta}_{\tau_z})]$, the impurity-induced damping factor is $$\Omega_{D}=\exp\Big\{-\Big(2\pi\frac{\Gamma_0 E_F }{\hbar^2\omega_c^2}\Big)^2\Big\}$$ and the temperature-dependent damping factor is the derivative of the function $G(x)=x/\sinh(x)$. Here, $x=T/T_c$, where $T_c=(\hbar\omega_c)^2/(2\pi^2k_{_B}E_{F})$ is the critical temperature, which depends on the strength of the hybridization through the Fermi energy. Note that $G(x)$ itself is the temperature-dependent damping factor of the electrical conductivity tensor.
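The damping factors and the critical temperature can be evaluated numerically; the sketch below (illustrative; the default $E_{F0}\approx 29.5$ meV corresponds to the carrier density used in the numerical calculations) shows, in particular, that $T_c$ decreases with increasing $\Delta_h$:

```python
import numpy as np

HBAR = 1.0545718e-34
E_CH = 1.602176634e-19
K_B = 1.380649e-23
MEV = 1.602176634e-22

def damping_factors(b, temp, e_f0=29.5, delta_h=20.0, gamma0=0.01, v_f=4.0e5):
    """Impurity damping Omega_D, critical temperature T_c (K) and thermal
    factor G(x) = x/sinh(x) at x = T/T_c; e_f0, delta_h, gamma0 in meV."""
    e_f = np.sqrt(e_f0**2 + delta_h**2) * MEV      # Fermi energy, J
    hwc2 = HBAR * v_f**2 * E_CH * b                # (hbar omega_c)^2, J^2
    omega_d = np.exp(-(2.0 * np.pi * gamma0 * MEV * e_f / hwc2)**2)
    t_c = hwc2 / (2.0 * np.pi**2 * K_B * e_f)
    x = temp / t_c
    return omega_d, t_c, x / np.sinh(x)
```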
The off-diagonal thermopower $S_{xy} $ and the diagonal thermal conductivity $\kappa_{xx}$ are obtained as given by $$\begin{aligned}
\label{ana_s}
S_{xy} & \backsimeq & \frac{ k_{_B}}{e} \frac{1}{k^0_Fl}\frac{16\pi}{\omega_c\tau_0}
\Big[1+\Big(\frac{\Delta_h}{E_{F0}}\Big)^2\Big]^{1/2}\Omega_{D} G^{\prime}(x)\nonumber\\
& \times & \sum_{\tau_z}\frac{U_{\tau_z}}{\sin^2(\bar{\theta}_{\tau_z})}
F_{\tau_z}\sin\Big(2\pi f/B - \tau_z \phi \Big)\end{aligned}$$ and $$\begin{aligned}
\label{diagok}
\kappa_{xx} &\simeq& L_0\frac{\sigma_0T}{(\omega_c\tau_0)^2}\sum_{\tau_z}
U_{\tau_z}F_{\tau_z} \nonumber \\
& \times & \Big[1 - 6\Omega_D G^{\prime \prime}(x)
\cos\Big(2\pi f/B - \tau_z \phi \Big) \Big],\end{aligned}$$ where the frequency $f$ is given by $$f=\frac{1}{2e\hbar v_{F}^2}(E_{F0}^2-\Delta_z^2)$$ and the phase factor is $\phi = \pi g \mu_B \Delta_h/(e \hbar v_{F}^2)$. Therefore, the thermopower and the thermal conductivity oscillate with the same frequency $f$, which is independent of $\Delta_h$, as is also seen in the numerical results. The oscillation frequency is strongly reduced by the Zeeman energy $\Delta_z$. On the other hand, the phase factor $ \phi $ is given by the product of the Lande $g$-factor and $\Delta_h$, and it vanishes if either of them is zero. Although the frequency and the phase of $ S_{xy} $ and $ \kappa_{xx}$ are the same, the damping factors and amplitudes are different.
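For the parameter values used in the paper, the frequency and the phase take simple numerical values; in the sketch below (an illustration only; the $B$-dependent Zeeman contribution $\Delta_z^2$ to $f$ is dropped for simplicity) $f$ comes out at a few tesla:

```python
import numpy as np

HBAR = 1.0545718e-34
E_CH = 1.602176634e-19
MU_B = 9.2740100783e-24
MEV = 1.602176634e-22

def sdh_frequency_phase(n_e=2e15, delta_h=20.0, g=60.0, v_f=4.0e5):
    """Oscillation frequency f (tesla) and phase phi (rad) of the
    thermoelectric quantum oscillations; Delta_z is neglected in f
    (it is itself B-dependent)."""
    e_f0 = HBAR * v_f * np.sqrt(2.0 * np.pi * n_e)       # E_F0, J
    f = e_f0**2 / (2.0 * E_CH * HBAR * v_f**2)           # SdH frequency, T
    phi = np.pi * g * MU_B * delta_h * MEV / (E_CH * HBAR * v_f**2)
    return f, phi
```

Note that $\phi$ is linear in $\Delta_h$ and vanishes for $\Delta_h=0$ or $g=0$, as stated above.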
Now we compare the numerical and analytical results for $S_{xy} $ and $\kappa_{xx} $ in Fig. \[4\]. For better visualization, we have taken a weak Landau-level broadening $\Gamma_0=0.01$ meV for $S_{xy}$ and $\kappa_{xx}$. The analytical results, in particular the frequency $f$, match the numerical results very well. We mention that for different values of $\Gamma_0$ the analytical results may differ from the numerical ones in amplitude, but the frequency and phase are always in good agreement.
![Plots of the numerical and analytical results of the thermoelectric coefficients versus inverse magnetic field for $\Delta_h=20$ meV. Solid and dashed lines stand for numerical and analytical results, respectively.[]{data-label="Fig1"}](compare.eps){width="98mm"}
It is interesting to see from the analytical expressions of the thermopower and the thermal conductivity that these transport coefficients possess a weak periodic oscillation with the hybridization energy for a given magnetic field. These oscillations are shown in Fig. \[5\]. The frequency and phase factor of these oscillations for a fixed $B$ are $ \nu = \phi/(2\pi\Delta_h) = g \mu_B /(2e \hbar v_{F}^2)$ and $ \Phi = 2\pi f/B$, respectively.
![Plots of the thermopower and thermal conductivity versus the hybridization constant $(\Delta_h)$. Here, $B=0.2$ T is taken.[]{data-label="Fig1"}](sdh_both.eps){width="90mm"}
In the presence of the magnetic field, these approximate analytical expressions for the thermopower and the thermal conductivity can also be used for monolayer graphene by setting $\Delta_h=0$. There are several experimental results [@yuri; @lau] on the thermoelectric properties of monolayer graphene, but such analytical expressions are not available in the literature.
Conclusion
==========
We have presented a theoretical study of the thermoelectric coefficients of ultra-thin topological insulators in the presence and absence of a magnetic field. In the absence of the magnetic field, the thermopower and the thermal conductivity are modified due to the hybridization between the top and bottom surface states: the thermopower is enhanced and the thermal conductivity is diminished by the hybridization. The quantum oscillations in the thermopower and the thermal conductivity for different values of the hybridization constant were also studied numerically. In addition to the numerical results, we obtained analytical expressions for the thermopower $(S_{xy})$ and the thermal conductivity ($\kappa_{xx}$). The analytical results match the numerical results very well. We have also provided analytical expressions for the oscillation frequency and phase. The oscillation frequency is the same for both the thermopower and the thermal conductivity; it is independent of the hybridization constant but strongly suppressed by the Zeeman energy. On the other hand, the hybridization constant plays a very significant role in the phase as well as in the amplitude of the oscillations. From the analytical results, the critical temperature $(T_c)$ is found to decrease with increasing hybridization constant. The thermoelectric coefficients also show a very low-frequency oscillation with the hybridization constant for a given magnetic field. Moreover, our analytical expressions for the thermopower and the thermal conductivity are also applicable to a graphene monolayer by setting $ \Delta_h=0$.
Acknowledgement
===============
This work is financially supported by the CSIR, Govt. of India under the grant CSIR-SRF-09/092(0687) 2009/EMR F-O746.
Appendix A: Density of states in a magnetic field
=================================================
The asymptotic analytical expression for the density of states of a two-dimensional electronic system in the presence of impurities can be derived by calculating the self-energy [@ando; @gerhartz], which is given by $$\label{sum}
\Sigma^-(E)=\Gamma_0^2\sum_n \frac{1}{E-E_{n}^{\tau_z}-\Sigma^-(E)}.$$ The imaginary part of the self-energy is related to the density of states as $ D(E)= {\mathrm{Im}\,}\left[\frac{\Sigma^-(E)}{\pi^2 l^2 \Gamma_0^2}\right] $.
By using the residue theorem, we evaluate the summation in Eq. (\[sum\]), which gives $ \Sigma^-(E)\simeq\frac{2\pi\Gamma_0^2 E}{(\hbar\omega_c)^2}\cot(\pi n_0)$, where the pole $n_0$ is given by $$n_0 = \frac{1}{2(\hbar\omega_c)^2}\Big[\{E-\Sigma^{-}(E)\}^2-(\Delta_z+\tau_z\Delta_h)^2\Big].$$
By writing the self-energy as the sum of its real and imaginary parts, this can be further simplified to $$\Delta + i \frac{\Gamma}{2} \simeq \frac{2\pi\Gamma_0^2 E}{(\hbar\omega_c)^2}
\cot\Big[\frac{u-iv}{2} \Big].$$ Here, $$u = \frac{\pi}{(\hbar\omega_c)^2}[E^2-(\Delta_z+\tau_z\Delta_h)^2]$$ and $ v = \pi\Gamma E/(\hbar\omega_c)^2 $. The imaginary part is $ \frac{\Gamma}{2} = \frac{2\pi\Gamma_0^2 E}{(\hbar\omega_c)^2}
\frac{\sinh v}{\cosh v-\cos u} $, which can be re-written using the standard relation $$\frac{\sinh v}{\cosh v-\cos u} = 1 + 2 \sum_{s=1}^{\infty} e^{-s v} \cos (su).$$ The most dominant term in the sum is the $s=1 $ term. We can write $$\frac{\Gamma}{2} = \frac{2\pi\Gamma_0^2 E}{(\hbar\omega_c)^2}
\Big [1 + 2 \sum_{s=1}^{\infty} e^{-s\pi \Gamma E/(\hbar \omega_c)^2} \cos(su) \Big].$$ In the limit $\pi\Gamma \gg \hbar\omega_c $, after the first iteration we have $ \Gamma/2=2 \pi\Gamma_0^2 E/(\hbar\omega_c)^2$. Substituting this into the earlier expression, we get $$\begin{aligned}
\frac{\Gamma}{2} & = & \frac{2\pi\Gamma_0^2 E}{(\hbar\omega_c)^2}
\Big [1+2 \sum_{s=1}^{\infty}\exp\Big\{-s\Big(\frac{2\pi\Gamma_0 E}{\hbar^2\omega_c^2}\Big)^2\Big\}
\nonumber\\
& &
\cos\Big\{s\pi\Big(E^2-\Delta_{\tau_z}^2\Big)/(\hbar\omega_c)^2\Big\} \Big].\end{aligned}$$ Here, $\Delta_{\tau_z}=\Delta_z+\tau_z\Delta_h$.
Finally, the density of states for the two branches is obtained as $$\begin{aligned}
D_{\tau_z}(E)&=&\frac{D_0(E)}{2}\Big [1+2 \sum_{s=1}^{\infty}\exp\Big\{-s\Big(\frac{2\pi\Gamma_0 E}
{\hbar^2\omega_c^2}\Big)^2\Big\}
\nonumber\\
& &
\cos\Big\{s\pi\Big(E^2-\Delta_{\tau_z}^2\Big)/(\hbar\omega_c)^2\Big\} \Big].\end{aligned}$$
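The standard relation used above can be checked numerically; a minimal sketch:

```python
import numpy as np

def lhs(u, v):
    """sinh(v) / (cosh(v) - cos(u)), valid for v > 0."""
    return np.sinh(v) / (np.cosh(v) - np.cos(u))

def rhs(u, v, s_max=300):
    """Partial sum of 1 + 2 * sum_s exp(-s v) cos(s u)."""
    s = np.arange(1, s_max + 1)
    return 1.0 + 2.0 * np.sum(np.exp(-s * v) * np.cos(s * u))
```

For large $v$ both sides tend to 1, which is why only the lowest harmonics survive strong broadening.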
[55]{} C. L. Kane and E. J. Mele, Phys. Rev. Lett. [**95**]{}, 146802 (2005)
B. A. Bernevig, T. L. Hughes, and S. C. Zhang, Science [**314**]{}, 1757 (2006)
M. Konig, S. Wiedmann, C. Brune, A. Roth, H. Buhmann, L. W. Molenkamp, X. L. Qi, and S. C. Zhang, Science [**318**]{}, 766 (2007)
J. E. Moore and L. Balents, Phys. Rev. B [**75**]{}, 121306 (2007)
L. Fu, C. L. Kane, and E. J. Mele, Phys. Rev. Lett. [**98**]{}, 106803 (2007)
H. Zhang, C. X. Liu, X. L. Qi, X. Dai, Z. Fang, and S. C. Zhang, Nat. Phys. [**5**]{}, 438 (2009)
D. Hsieh, D. Qian, L. Wray, Y. Xia, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Nature [**452**]{}, 970 (2008)
D. Hsieh, Y. Xia, L. Wray, D. Qian, A. Pal, J. H. Dil, J. Osterwalder, F. Meier, G. Bihlmayer, C. L. Kane, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Science [**323**]{}, 919 (2009)
Y. Xia, D. Qian, D. Hsieh, L. Wray, A. Pal, H. Lin, A. Bansil, D. Grauer, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Nat. Phys. [**5**]{}, 398 (2009)
P. Roushan, J. Seo, C. V. Parker, Y. S. Hor, D. Hsieh, D. Qian, A. Richardella, M. Z. Hasan, R. J. Cava, and A. Yazdani, Nature [**460**]{}, 1106 (2009)
E. I. Rashba and V. I. Sheka, Fizika Tverdogo Tela; Collected Papers vol 2 (Moscow and Leningrad: Academy of Sciences of the USSR) 162 (1959); E. I. Rashba, Sov. Phys.-Solid State 2, 1109 (1960)
Y. A. Bychkov and E. I. Rashba, J. Phys. C: Solid State, [**17**]{}, 6039 (1984)
G. Zhang, H. Qin, J. Teng, J. Guo, Q. Guo, X. Dai, Z. Fang, and K. Wu, Appl. Phys. Lett. [**95**]{}, 053114 (2009)
H. Peng, K. Lai, D. Kong, S. Meister, Y. Chen, X. L. Qi, S. C. Zhang, Z. X. Shen, and Y. Cui, Nat. Mater. [**9**]{}, 225 (2010)
Y. Zhang, K. He, C. Z. Chang, C. L. Song, L. L. Wang, X. Chen, J. F. Jia, Z. Fang, X. Dai, W. Y. Shan, S. Q. Shen, Q. Niu, X. L. Qi, S. C. Zhang, X. C. Ma, and Q. K. Xue, Nat. Phys. [**6**]{}, 584 (2010)
C. X. Liu, H. Zhang, B. Yan, X. L. Qi, T. Frauenheim, X. Dai, Z. Fang, and S. C. Zhang, Phys. Rev. B [**81**]{}, 041307 (2010)
B. Seradjeh, J. E. Moore, and M. Franz, Phys. Rev. Lett. [**103**]{}, 066402 (2010)
W. K. Tse and A. H. MacDonald, Phys. Rev. Lett. [**105**]{}, 057401 (2010)
W. K. Tse and A. H. MacDonald, Phys. Rev. B [**82**]{}, 161104 (2010)
J. Maciejko, X. L. Qi, and S. C. Zhang, Phys. Rev. Lett. [**104**]{}, 166803 (2010)
P. Ghaemi, R. S. K. Mong, and J. E. Moore, Phys. Rev. Lett. [**105**]{}, 166603 (2010)
R. Yu, W. Zhang, H. J. Zhang, S. C. Zhang, X. Dai, and Z. Fang, Science [**329**]{}, 61 (2010)
H. Z. Lu, W. Y. Shan, W. Yao, Q. Niu, S. Q. Shen, Phys. Rev. B [**81**]{}, 115407 (2010)
H. Cao, J. Tian, I. Miotkowski, T. Shen, J. Hu, S. Qiao, and Y. P. Chen, Phys. Rev. Lett. [**108**]{}, 216803 (2012)
J. Linder, T. Yokoyama, and A. Sudbo, Phys. Rev. B [**80**]{}, 205401 (2009)
P. Cheng, C. Song, T. Zhang, Y. Zhang, Y. Wang, J. F. Jia, J. Wang, Y. Wang, B. F. Zhu, X. Chen, X. Ma, K. He, L. Wang, X. Dai, Z. Fang, X. Xie, X. L. Qi, C. X. Liu, S. C. Zhang, and Q. K. Xue, Phys. Rev. Lett [**105**]{}, 076801 (2010)
Y. Jiang, Y. Wang, M. Chen, Z. Li, C. Song, K. He, L. Wang, X. Chen, X. Ma, and Q. K. Xue, Phys. Rev. Lett. [**108**]{}, 016401 (2012)
A. A. Zyuzin and A. A. Burkov, Phys. Rev. B [**83**]{}, 195413 (2011)
J. Wang, A. M. DaSilva, C. Z. Chang, K. He, J. K. Jain, N. Samarth, X. C. Ma, Q. K. Xue, and M. H. W. Chan, Phys. Rev. B [**83**]{}, 245438 (2011)
M. Tahir, K. Sabeeh and U. Schwingenschlogl, Scientific Reports [**3**]{}, 1261 (2013)
M. Tahir, K. Sabeeh, and U. Schwingenschlogl, J. Appl. Phys. [**113**]{}, 043720 (2013)
G. S. Nolas, J. Sharp, and H. J. Goldsmid, [*Thermoelectrics*]{} (Springer-Verlag, Berlin, 2001)
R. Fletcher, J. C. Maan, K. Ploog, and G. Weimann, Phys. Rev. B [**33**]{}, 7122 (1986)
R. Fletcher, P. T. Coleridge, and Y. Feng, Phys. Rev. B [**52**]{}, 2823 (1995)
R. Fletcher, Semicond. Sci. Technol. [**14**]{}, R1 (1999)
S. Maximov, M. Gbordzoe, H. Buhmann, L. W. Molenkamp, and D. Reuter, Phys. Rev. B [**70**]{}, 121308 (R) (2004)
S. Goswami, C. Siegert, M. Pepper, I. Farrer, D. A. Ritchie, and A. Ghosh, Phys. Rev. B [**83**]{}, 073302 (2011)
O. A. Tretiakov, Ar. Abanov, S. Murakami, and J. Sinova, Appl. Phys. Lett. [**97**]{}, 073108 (2010)
O. A. Tretiakov, Ar. Abanov, and J. Sinova, Appl. Phys. Lett. [**99**]{}, 113110 (2011)
O. A. Tretiakov, Ar. Abanov, and J. Sinova, J. Appl. Phys. [**111**]{}, 07E319 (2012)
R. Takahashi and S. Murakami, Semicond. Sci. Technol. [**27**]{}, 124005 (2012)
Y. M. Zuev, W. Chang, and P. Kim, Phys. Rev. Lett. [**102**]{}, 096807 (2009)
P. Wei, W. Bao, Y. Pu, C. N. Lau, and J. Shi, Phys. Rev. Lett. [**102**]{}, 166808 (2009)
E. H. Hwang, E. Rossi, and S. Das Sarma, Phys. Rev. B [**80**]{}, 235415 (2009)
A. A. Patel and S. Mukerjee, Phys. Rev. B [**86**]{}, 075411 (2012)
L. Smrcka and P. Streda, J. Phys. C: Solid State Phys. [**10**]{}, 2153 (1977)
H. Oji, J. Phys. C: Solid State Phys., [**17**]{}, 3059 (1984)
P. Wei, W. Bao, Y. Pu, C. N. Lau, and J. Shi, Phys. Rev. Lett. [**102**]{}, 166808 (2009)
N. W. Ashcroft and N. D. Mermin, [*Solid State Physics*]{} (Cengage Learning, 2010)
T. Ando, A. B. Fowler, and F. Stern, Rev. Mod. Phys. [**54**]{}, 437 (1982)
C. Zhang and R. R. Gerhardts, Phys. Rev. B [**41**]{}, 12850 (1990)
---
abstract: 'Here we provide a general methodology to directly measure the topological currents emerging in the optical lattice implementation of the Haldane model. Alongside the edge currents supported by gapless edge states, transverse currents can emerge in the bulk of the system whenever the local potential is varied in space, even if it does not cause a phase transition. In optical lattice implementations the overall harmonic potential that traps the atoms provides both the boundaries of the topological phase that supports the edge currents and the potential gradient across that phase that gives rise to the bulk current. Both the edge and bulk currents are resilient to variations in experimental parameters such as the trapping potential, temperature and disorder. We propose to investigate the properties of these currents directly from time-of-flight images with both short-time and long-time expansions.'
author:
- 'Alvaro Rubio-García'
- 'Chris N. Self'
- 'Juan Jose García-Ripoll'
- 'Jiannis K. Pachos'
bibliography:
- 'references.bib'
title: 'Seeing topological edge and bulk currents in time-of-flight images'
---
[**Introduction:–**]{} Two-dimensional topological insulators are described by a non-zero integer topological index, typically given by the Chern number, $\nu$ [@kane2005quantum; @kane2005z2topological; @bernevig2006quantum; @hasan2010colloquium; @qi2011topological]. At physical boundaries of the system the Chern number changes from a non-zero value inside the material to zero outside it. This change can be interpreted as a topological phase transition, which is manifested at the boundary of the topological system as a one-dimensional gapless edge state [@kane2005quantum; @kane2005z2topological; @bernevig2006quantum; @hasan2010colloquium; @qi2011topological]. Because the Chern number can change only in integer steps, the edge states are robust to small perturbations and finite temperature. In the presence of a constant chemical potential $V(\boldsymbol{r})=V$, the particle-hole imbalance in the population of the edge states gives rise to the edge current $$\label{eq:current1}
I_\text{edge} = {\nu V \over 2\pi},$$ flowing along the boundary of the system [@hatsugai1993chern; @hao2008topological; @colomes2018antichiral; @self2019topological]. The edge currents provide a powerful tool to experimentally probe the topological properties of the system.
Alongside the edge currents, the bulk of the topological insulator can support currents whenever the local potential has a non-zero gradient [@self2019topological; @lensky2015topological; @geller1994currents]. In contrast to the edge currents, these bulk currents can appear even in the absence of gapless modes, with the system remaining gapped at all times. In our previous paper [@self2019topological] we showed that the bulk of a topological insulator on the lattice is formed by hidden many-body entangled edge states. The individual currents of these states cancel each other when the bulk is homogeneous, giving zero flux. However, we showed that an inhomogeneous potential in the system can disentangle these edge states and form a net current perpendicular to the local gradient of that potential, with an intensity given by $$\label{eq:current2}
I_\text{bulk} = \frac{\nu}{2\pi}\,a_0|\boldsymbol{\nabla} V(\boldsymbol{r})|,$$ with $a_0$ the lattice constant. Similar to the edge currents, the bulk currents are robust against temperature and local disorder. Unlike the edge currents, the position and direction of the topological bulk currents can be controlled at will by external potentials, thus offering a unique platform for developing new technologies.
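As a quick sanity check of the scales involved, Eqs. (\[eq:current1\]) and (\[eq:current2\]) can be evaluated directly. The numbers below (a chemical potential of $0.5\,t_1$ and a gradient of $0.05\,t_1$ per lattice constant) are illustrative choices of ours, not values taken from the text:

```python
import numpy as np

def edge_current(nu, V):
    """Edge current I = nu * V / (2*pi), Eq. (1)."""
    return nu * V / (2 * np.pi)

def bulk_current(nu, grad_V, a0=1.0):
    """Bulk current I = (nu / 2*pi) * a0 * |grad V|, Eq. (2)."""
    return nu / (2 * np.pi) * a0 * np.linalg.norm(grad_V)

# Illustrative numbers (not from the paper): nu = 1, V = 0.5 t1,
# and a potential gradient of 0.05 t1 per lattice constant.
I_edge = edge_current(1, 0.5)
I_bulk = bulk_current(1, [0.05, 0.0])
```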
A physical system where edge and bulk currents can naturally emerge is the optical lattice implementation of the Haldane model [@tarruell2012creating; @jotzu2014experimental; @goldman2016topological; @flaschner2016experimental; @asteria2019measuring]. Time-of-flight images enable the measurement of the Chern number of this system, allowing its topological nature to be confirmed [@alba2011seeing]. In these experiments harmonic potentials are used to keep the atoms localised in a certain region, in addition to the Hamiltonian of the Haldane model. This trapping causes a spatial variation in the local potential. Here we demonstrate that, together with the edge currents at the boundary of the topological phase, bulk currents also emerge due to the variation of the trapping potential across the topological phase of the system. The interplay between edge and bulk currents can be probed with time-of-flight (TOF) images, making their investigation directly accessible to current experiments. We find signatures of the edge and bulk currents in both the short-time and long-time expansion images, thus offering novel and versatile means of probing topological current physics.
[**Realising topological edge and bulk currents in optical lattices:–**]{} To investigate the properties of the topological edge and bulk currents we consider the implementation of the Haldane model with ultra-cold atoms in optical lattices. The Haldane model is defined on a hexagonal lattice with a fermionic mode $\lbrace c_i,\ c^\dagger_i\rbrace$ living at each vertex of the lattice. The Hamiltonian is $$\begin{aligned}
H &= \sum_{\langle ij\rangle} t_1 \, c^\dagger_i c_j + \sum_{\langle\langle ij\rangle\rangle} t_2 \, \textrm{e}^{i\nu_{ij}\phi} c^\dagger_i c_j + \sum_i V(\boldsymbol{r}_i) \, c^\dagger_i c_i,
\label{eqn:haldane-model-ham}\end{aligned}$$ where $t_1$ and $t_2$ are the nearest and next-nearest neighbour hopping strengths, respectively, $\phi$ is the phase of the complex next-nearest-neighbour hopping, $\nu_{ij}$ is a sign, $\pm 1$, depending on the direction of the hopping $i\rightarrow j$, as shown in Fig. \[fig:real\_space\_densities\], and $V(\boldsymbol{r})$ is a local potential. For certain choices of parameters the Haldane model has a topological phase with Chern number $\nu = \pm 1$, as well as a trivial phase where $\nu = 0$ [@haldane1988model]. All results here are produced for the parameters $t_1 = 1$, $t_2 = 0.1$, $\phi = \pi/2$ that, at half filling and for $V(\boldsymbol{r})=0$, give $\nu=1$.
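The quoted value $\nu=1$ for these parameters can be checked numerically. The sketch below (our own minimal code, not the authors') builds the $2\times2$ Bloch Hamiltonian of Eq. (\[eqn:haldane-model-ham\]) with $V=0$ in a periodic gauge and evaluates the lower-band Chern number with the standard Fukui-Hatsugai lattice method; the geometry conventions (choice of lattice vectors and phase signs) are our assumptions and only affect the sign of $\nu$:

```python
import numpy as np

# Bravais vectors of the honeycomb lattice (lattice constant a0 = 1)
a1 = np.array([np.sqrt(3), 0.0])
a2 = np.array([np.sqrt(3) / 2, 1.5])
bs = [a1, a2 - a1, -a2]          # next-nearest-neighbour (Bravais) vectors

def haldane_bloch(k, t1=1.0, t2=0.1, phi=np.pi / 2):
    """2x2 Bloch Hamiltonian of Eq. (3) with V = 0, in a periodic gauge."""
    f = t1 * (1 + np.exp(-1j * k @ a1) + np.exp(-1j * k @ a2))
    gA = 2 * t2 * sum(np.cos(k @ b + phi) for b in bs)   # A-sublattice NNN
    gB = 2 * t2 * sum(np.cos(k @ b - phi) for b in bs)   # B-sublattice NNN
    return np.array([[gA, f], [np.conj(f), gB]])

def chern_number(N=24):
    """Lattice Chern number of the lower band (Fukui-Hatsugai method)."""
    G1 = np.array([2 * np.pi / np.sqrt(3), -2 * np.pi / 3])  # reciprocal
    G2 = np.array([0.0, 4 * np.pi / 3])                      # vectors
    u = np.empty((N, N, 2), dtype=complex)
    for i in range(N):
        for j in range(N):
            _, v = np.linalg.eigh(haldane_bloch(i / N * G1 + j / N * G2))
            u[i, j] = v[:, 0]                # lower-band eigenvector
    F = 0.0                                  # sum of plaquette Berry phases
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            U = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                 * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            F += np.angle(U)
    return F / (2 * np.pi)
```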
![Real space occupations of the Haldane model inside a harmonic trap. There are three clearly visible phases with densities $n=1$ (red), $n=1/2$ (white), and $n = 0$ (blue). We are interested in the flux of the currents crossing the radial dashed line. (Upper detail) Hexagonal plaquette of the Haldane lattice model. Arrows denote hopping directions for which $\nu_{ij} = +1$ in Eq. (\[eqn:haldane-model-ham\]). (Lower detail) Real space currents (arrows) between lattice sites. Intensity of the arrow colour denotes the intensity of the current. Parallel edge currents are visible at the boundary of the topological phase, while the current in the bulk has the opposite orientation. Data shown is for $T=0$ and a lattice with $1,200$ sites. The harmonic potential (\[harmonic\]) has $V_0 = -2.1$, $k=5.6$ and $r_\text{max} = 13\sqrt{3}$.[]{data-label="fig:real_space_densities"}](FIG1.pdf){width="0.9\columnwidth"}
Motivated by initial theoretical work [@alba2011seeing] several experimental realisations of the Haldane model in optical lattices have been achieved [@tarruell2012creating; @jotzu2014experimental; @goldman2016topological; @flaschner2016experimental; @asteria2019measuring]. In these experiments the Haldane lattice model given by (\[eqn:haldane-model-ham\]) is engineered inside a harmonic potential trap that keeps the atoms localised in a certain region. We describe the harmonic trap as $$\label{harmonic}
V(r) = V_0 + k\left({r\over r_\text{max}}\right)^2,$$ where $V_0$ is an overall chemical potential, while $k$ and $r_\text{max}$ are suitable parameters of the trap. Depending on the choice of the constants $V_0$ and $k$ this potential can give rise to a ‘wedding cake’ structure, with three phases emerging at different radii from the centre of the harmonic trap, as shown in Fig. \[fig:real\_space\_densities\]. Near the centre, $r=0$, there is a trivial phase with lattice filling fraction $n=1$ and Chern number $\nu=0$, as all bands of the system are populated. Next, an annulus emerges at half filling, $n=1/2$, corresponding to the Haldane phase with Chern number $\nu=\pm1$. Finally, far from the centre, where the harmonic potential is too large, a trivial $\nu=0$ phase is present due to zero band population, $n=0$. The topologically non-trivial annulus configuration has two boundaries neighbouring the trivial $n=1$ and $n=0$ phases. Moreover, the trapping potential varies radially across the topological phase, as seen in Fig. \[fig:optical-lattices\]. Hence, due to (\[eq:current1\]) and (\[eq:current2\]), we expect edge and bulk currents to emerge naturally without additional engineering of the system.
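This density structure can be reproduced qualitatively by exact diagonalisation of a small trapped flake. The sketch below is our own code, not the authors': the flake size, trap values and the chemical potential $\mu=0$ are illustrative assumptions, and the lattice is far smaller than those of Fig. \[fig:real\_space\_densities\]. Filling all negative-energy states gives a density that falls from near unity at the centre of the trap to zero far outside it:

```python
import numpy as np

# Haldane model, Eq. (3), on a finite honeycomb flake inside the
# harmonic trap of Eq. (4); all parameter values here are illustrative.
t1, t2, phi = 1.0, 0.1, np.pi / 2
V0, k, rmax = -2.1, 5.6, 4 * np.sqrt(3)

a1 = np.array([np.sqrt(3), 0.0])
a2 = np.array([np.sqrt(3) / 2, 1.5])
L = 10                                   # L x L unit cells, 2 sites each
sites, index = [], {}
for m in range(L):
    for n in range(L):
        R = (m - L // 2) * a1 + (n - L // 2) * a2
        for s, off in enumerate([np.zeros(2), np.array([0.0, 1.0])]):
            index[(m, n, s)] = len(sites)
            sites.append(R + off)
sites = np.array(sites)

H = np.zeros((len(sites), len(sites)), dtype=complex)
nnn = [(1, 0), (-1, 1), (0, -1)]         # the three nu_ij = +1 directions
for (m, n, s), i in index.items():
    if s == 0:                           # t1 bonds from each A site
        for key in [(m, n, 1), (m + 1, n - 1, 1), (m, n - 1, 1)]:
            if key in index:
                H[i, index[key]] += t1
    sgn = 1 if s == 0 else -1            # opposite winding on B sites
    for dm, dn in nnn:
        key = (m + dm, n + dn, s)
        if key in index:
            H[i, index[key]] += t2 * np.exp(1j * sgn * phi)
H = H + H.conj().T
r = np.linalg.norm(sites, axis=1)
H += np.diag(V0 + k * (r / rmax) ** 2)   # harmonic trap, Eq. (4)

E, U = np.linalg.eigh(H)
density = (np.abs(U[:, E < 0]) ** 2).sum(axis=1)   # fill states below mu = 0
```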
![Occupation of lattice sites (circles) along a radial line – as shown in Fig. \[fig:real\_space\_densities\] – inside a harmonic potential trap (blue line), and local currents across small bins (bars) in the same radial line, with $r$ the distance from the centre of the trap. Three phases are clearly visible from the occupations: $n=1$ at the centre of the trap, $n=1/2$ in the middle corresponding to an annulus of the Haldane phase and $n=0$ at the far end of the system. The total edge and bulk currents resulting from the sum of the small bins are indicated with the three large bars. The total edge currents are equal in magnitude, propagating in the same direction (negative value), while the bulk current through the $n=1/2$ region is opposite (positive value) with double magnitude. Data presented is for $T=0$ and a lattice size of roughly $28,800$ sites. The harmonic potential (\[harmonic\]) has $V_0 = -3$, $k=9$ and $r_\textrm{max} = 60\sqrt{3}$. (Inset) The width $N$ of the topological phase increases linearly with $1/\Delta V$, where $\Delta V$ is the potential difference at the two boundaries of the phase.[]{data-label="fig:optical-lattices"}](FIG2){width="\columnwidth"}
The density current, $J_{ij}$, flowing between two sites $i$ and $j$ of the lattice is dictated by the continuity equation for the single-site occupation [@self2019topological]. From this we can determine the distribution of the current flowing across a cross section of the lattice that runs radially from the centre of the trap, as shown in Fig. \[fig:real\_space\_densities\]. The local profile of the currents supported in the system is shown as a histogram in Fig. \[fig:optical-lattices\]. Because the harmonic potential has a smooth profile, the edge states spread over a large region of the system. Moreover, the smooth change of the local potential in the bulk causes the bulk currents to disperse over the whole topological phase.
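For a quadratic Hamiltonian the continuity equation gives the bond current as $J_{i\to j} = -2\,\mathrm{Im}\big(H_{ij}\langle c^\dagger_i c_j\rangle\big)$ with $\hbar=1$ (the overall sign is a convention). A minimal sketch of this, tested on a flux-threaded six-site ring where the resulting persistent current must be uniform around the ring and antisymmetric, $J_{ij}=-J_{ji}$:

```python
import numpy as np

def bond_currents(H, filled):
    """Bond currents J[i, j] from the continuity equation,
    J_{i->j} = -2 Im( H_ij <c_i^dag c_j> )  (hbar = 1, up to sign convention),
    for the ground state with the lowest `filled` modes occupied."""
    _, U = np.linalg.eigh(H)
    C = U[:, :filled].conj() @ U[:, :filled].T   # C[i, j] = <c_i^dag c_j>
    return -2 * np.imag(H * C)                   # elementwise product

# Check on a 6-site ring threaded by a flux (hopping phase theta):
# a persistent current that is uniform and antisymmetric.
N, theta = 6, 0.3
H = np.zeros((N, N), dtype=complex)
for i in range(N):
    H[i, (i + 1) % N] = -np.exp(1j * theta)
H = H + H.conj().T
J = bond_currents(H, filled=3)                   # half filling
```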
In Fig. \[fig:optical-lattices\] we also plot the total edge and bulk currents as the sum of the local ones; the two edge currents are equal and flow in the same direction, while the bulk current is twice their value and flows in the opposite direction. This behaviour is expected if the topological phase is stable in the middle of the bulk, where the local potential favours half filling, $n=1/2$, while the potential increases towards the outer boundary and decreases towards the centre of the trap. In this case two opposite potentials are formed at the inner and outer boundaries of the system, causing the parallel edge currents due to (\[eq:current1\]). At the same time the linear interpolation of the potential between the two boundaries gives rise to a bulk current, as dictated by (\[eq:current2\]). Finally, we observe in Fig. \[fig:optical-lattices\] (Inset) that the width of the Haldane phase is inversely proportional to the potential difference, $\Delta V$, between the two boundaries. This control over the macroscopic properties of the system can be used to manipulate the density and spatial extent of the currents under investigation in order to improve their visibility.
![(a) Densities of a sample of sites (inside dashed line) at a stationary state inside a harmonic trap, isolated by a mask. Densities outside the sample are taken to be 0. (b) Density difference $\Delta_y (x, y; t) = n(x, y; t) - n(x, -y; t)$ after a short-time expansion ($t = \hbar/2t_1$) of the sites within the sample when the trap is turned off. A clear pattern of density imbalances is already observed, corresponding to one up-going current inside the Haldane phase and two down-going currents at its boundaries. Data shown is for $T=0$ and a lattice with $9800$ sites. The employed harmonic potential (\[harmonic\]) has $V_0 = -2.1$, $k=5.8$ and $r_\text{max} = 43\sqrt{3}$.[]{data-label="fig:ste_density_differences"}](FIG3_3){width="\columnwidth"}
[**Detecting density currents with time-of-flight images:–**]{} The density currents of the atoms in the $n=1/2$ phase of the optical lattice, as well as at its boundaries, can be directly measured with TOF images. To distinguish between edge and bulk currents we want to selectively obtain information about particular regions of the optical lattice. We can achieve that with TOF measurements performed by switching off the Hamiltonian terms within particular regions of the system. As a result, the atoms in these regions freely expand and can be detected. This allows information, such as the atom density and velocity distribution, to be obtained. Releasing a small sample of lattice sites around the Haldane phase can therefore tell us about the physics of the edge and bulk currents.
We would like to investigate both the short-time and long-time TOF images. A short-time expansion of a sample of sites can already reveal the existence of the bulk and edge currents. In this case we take a horizontal sample of sites, as shown in Fig. \[fig:ste\_density\_differences\] (a), when the system is in a stationary state inside a harmonic trap. Then, we remove the trap and let the atoms inside that sample freely expand. After sufficiently short times a distinctive population pattern is formed, as shown in Fig. \[fig:ste\_density\_differences\] (b). We observe that there are more particles at the top of the sample in the bulk of the Haldane phase region, while at its boundaries the particles accumulate at the bottom of the sample. These particle density differences point to the existence of a particle density current propagating perpendicular to the local gradient of the trap inside the topological phase in accordance with (\[eq:current2\]) and two counter propagating currents at the topological phase boundaries as dictated by (\[eq:current1\]).
![Density differences in the TOF images taken at different phases of the system. The dotted lines mark the Brillouin zone of the hexagonal lattice and the crosses show the location of the Dirac points. (Insets) Sketch of the rectangular region where particles were sampled for the TOF image. The circular coronas show the three topological phases in the system. (a-c) Difference $\Delta_y n(k_x,k_y) = n(k_x,k_y)-n(k_x,-k_y)$ in the $y$-direction of the momentum for samples taken along the $x$-axis of the lattice. Clearly there are currents moving along the vertical direction, with the bulk currents (b) traveling opposite to the edge ones (a, c). (d-f) Difference $\Delta_x n(k_x,k_y) = n(k_x,k_y)-n(-k_x,k_y)$ in the $x$-direction of the momentum for samples taken along the $y$ axis of the lattice. There are currents moving along the horizontal direction, with the bulk currents traveling opposite to the edge ones. Data shown is for $T=0$ and a lattice with $9800$ sites. The employed harmonic potential (\[harmonic\]) has $V_0 = -2.0$, $k=5.5$ and $r_\text{max} = 38\sqrt{3}$.[]{data-label="fig:tof_density_differences"}](FIG4){width="\columnwidth"}
To obtain the signature of density currents from the long-time TOF images we select smaller regions of the lattice that isolate the bulk from the edge of the topological phase. Differences in the velocity distribution over the Brillouin zone for different regions of the trapped sample can signal a net flux of particles traveling in a certain direction. This is shown in Fig. \[fig:tof\_density\_differences\] (a-c), where the colour map shows the difference $\Delta_y (k_x, k_y) = n(k_x, k_y) - n(k_x, -k_y)$ in the TOF sampling of rectangular patches along the $x$-axis of the optical lattice \[see Inset\]. A clear difference in the momentum population with opposite $k_y$ is seen, indicating the existence of currents moving along the $y$-direction of the lattice. Moreover, the direction of movement in (b) is opposite to that of (a, c), in agreement with Fig. \[fig:optical-lattices\]. Figs. \[fig:tof\_density\_differences\] (d-f) show the density difference $\Delta_x (k_x, k_y) = n(k_x, k_y) - n(-k_x, k_y)$ for samples along the $y$-axis of the lattice. In this case, currents flow along the $x$-axis, with the direction of the bulk currents (e) opposite to the direction of the edge ones (d, f).
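The momentum distribution entering these images can be computed from the sample's single-particle correlations as $n(\boldsymbol{k}) \propto \sum_{ij} e^{i\boldsymbol{k}\cdot(\boldsymbol{r}_i-\boldsymbol{r}_j)}\langle c^\dagger_i c_j\rangle$ (up to the sign convention of the Fourier transform). A minimal sketch of ours, checked on a 1D chain carrying a plane wave of momentum $q$, whose $n(k)$ must peak at $k=q$:

```python
import numpy as np

def momentum_density(C, pos, ks):
    """TOF momentum distribution from the correlation matrix
    C[i, j] = <c_i^dag c_j> of a sample with site positions pos
    (shape (N, d)) at the momenta ks (shape (n_k, d))."""
    phases = np.exp(1j * ks @ pos.T)             # phases[k, i] = e^{i k.r_i}
    nk = np.einsum('ki,ij,kj->k', phases, C, phases.conj())
    return np.real(nk) / len(pos)

# Check: one particle in a plane-wave state on a 20-site chain.
x = np.arange(20, dtype=float)[:, None]          # positions, shape (20, 1)
q = 0.5
psi = np.exp(1j * q * x[:, 0]) / np.sqrt(len(x))
C = np.outer(psi.conj(), psi)                    # <c_i^dag c_j> = psi_i* psi_j
ks = np.linspace(-np.pi, np.pi, 201)[:, None]
nk = momentum_density(C, x, ks)
k_peak = ks[np.argmax(nk), 0]
```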
![(a) Stability of the edge and bulk TOF currents against the temperature of the optical lattice. Currents decay when the temperature reaches the energy gap of the Haldane model. (b) TOF currents when the trap strength $k/t_1$ is widened. For $k/t_1 > 2.5$ all three phases are present in the system and the edge and bulk currents are stable. (c, d) Mean TOF currents over 15 samples with local (c) and coupling (d) disorder in the trap. The mean currents remain stable even when the scale of the disorder is comparable to the energy gap. Data shown is for $T=0$ and a lattice with $9800$ sites. The harmonic potential (\[harmonic\]) has $V_0 = -2.0$, $k=5.5$ and $r_\text{max} = 38\sqrt{3}$.[]{data-label="fig:optical-resilience"}](FIG5.pdf){width="\columnwidth"}
To determine the behaviour of the currents observed in TOF images under the realistic conditions met in experiments, we study their stability against temperature, local-potential and coupling disorder, as well as variations in the strength of the harmonic potential. These effects are commonly present in optical lattice implementations. As the edge and bulk currents have a topological origin, we expect both to be largely resilient against these perturbations [@self2019topological]. To proceed we define the net TOF current as $$\boldsymbol{I}_\text{TOF} = \sum_{\boldsymbol{k}\in\textrm{BZ}} \boldsymbol{k}\, n\left(\boldsymbol{k}\right)
\label{eqn:tof_currents}$$ where we weight all the momenta inside the Brillouin zone with the density distribution $n\left(\boldsymbol{k}\right)$ measured by the TOF images. We sample regions of the lattice as in Fig. \[fig:tof\_density\_differences\] to distinguish between bulk and edge density currents. Fig. \[fig:optical-resilience\](a) shows the behaviour of the TOF currents (\[eqn:tof\_currents\]) against the temperature of the system. We observe that the topological currents remain intact until the temperature is comparable to the energy gap of the Haldane model. In Fig. \[fig:optical-resilience\](b) we test the stability of the currents against the strength of the trap potential (\[harmonic\]) parametrised by $k/t_1$. For wide enough traps capable of supporting the three phases of the system, we observe stable TOF currents with intensity independent of the trap potential. In Fig. \[fig:optical-resilience\](c) we add a local disorder potential to the Hamiltonian with values randomly sampled from the uniform distribution $[-w_{local}/2t_1, w_{local}/2t_1]$, and in Fig. \[fig:optical-resilience\](d) we multiply each coupling of the model with a value drawn from the uniform distribution $[1-w_J/2t_1, 1+w_J/2t_1]$, modelling imperfections in the designed Hamiltonian. In both cases we observe that the mean edge and bulk currents are resilient to all these forms of disorder.
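A sketch of Eq. (\[eqn:tof\_currents\]) on synthetic momentum distributions (Gaussian clouds standing in for measured TOF images, on a square grid that replaces the hexagonal Brillouin zone for simplicity): a symmetric $n(\boldsymbol{k})$ yields zero net current, while a cloud displaced along $+k_y$ yields a current in that direction.

```python
import numpy as np

def tof_current(ks, nk):
    """Net TOF current of Eq. (5): momenta weighted by n(k) and summed."""
    return (ks * nk[:, None]).sum(axis=0)

# Momenta on a small square grid standing in for the Brillouin zone.
kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, 41),
                     np.linspace(-np.pi, np.pi, 41))
ks = np.stack([kx.ravel(), ky.ravel()], axis=1)

n_sym = np.exp(-(ks ** 2).sum(axis=1))        # symmetric cloud: no current
n_tilt = np.exp(-((ks - np.array([0.0, 0.2])) ** 2).sum(axis=1))  # shifted

I_sym = tof_current(ks, n_sym)
I_tilt = tof_current(ks, n_tilt)
```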
[**Conclusions:–**]{} To summarise, we have presented a general and versatile way to investigate the currents of a topological insulator with TOF images. In particular, we focused on the behaviour of the Haldane model simulated by ultra-cold atoms in optical lattices. In the presence of inhomogeneous potentials, arising e.g. from the harmonic trapping of the atoms, topological bulk currents emerge alongside the edge currents. The bulk currents are proportional to the gradient of the local potential that naturally arises in the optical lattice experiment due to the harmonic trapping. The topological origin of both the edge and bulk currents makes them robust against perturbations, such as inhomogeneous potentials superimposed on top of the lattice, errors in the exact values of the couplings of the Haldane model, variations in the trapping potential of the atomic cloud, as well as finite temperature. As bulk currents do not require phase transitions to be generated, they are more versatile than the edge currents and easier to engineer and manipulate. Hence, their implementation with optical lattices would open the way to employ them for quantum technologies.
We would like to thank Monika Aidelsburger and Sofyan Iblisdir for inspiring conversations. This work was supported by the EPSRC grant EP/R020612/1, Spanish Projects PGC2018-094792-B-I00 (MCIU/AEI/FEDER, EU), PGC2018-094180-B-I00 (MCIU/AEI/FEDER, EU), FIS2015-63770-P (MINECO/FEDER, EU), CAM/FEDER Project No. S2018/TCS-4342 (QUITEMAD-CM) and CSIC Research Platform PTI-001. Statement of compliance with EPSRC policy framework on research data: This publication is theoretical work that does not require supporting research data.